9.7 Transforming Information: How We Reach Conclusions

Reasoning is a mental activity that consists of organizing information or beliefs into a series of steps in order to reach conclusions. Not surprisingly, sometimes our reasoning seems sensible and straightforward, and other times it seems a little off. Consider some of the reasons offered by people who filed actual insurance accident claims (www.swapmeetdave.com).

When people like these hapless drivers argue with you in a way that seems inconsistent or poorly thought out, you may accuse them of being “illogical.” Logic is a system of rules that specifies which conclusions follow from a set of statements. To put it another way, if you know that a given set of statements is true, logic will tell you which other statements must also be true. If the statement “Jack and Jill went up the hill” is true, then according to the rules of logic, the statement “Jill went up the hill” must also be true. To accept the truth of the first statement while denying the truth of the second statement would be a contradiction. Logic is a tool for evaluating reasoning, but it should not be confused with the process of reasoning itself. Equating logic and reasoning would be like equating carpenter’s tools (logic) with building a house (reasoning).
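Entailment of this kind can even be checked mechanically. The short Python sketch below is added here purely as an illustration (it is not part of any study discussed in this chapter): it treats each simple statement as true or false and confirms that one statement follows from another when no assignment of truth values makes the premise true and the conclusion false.

    from itertools import product

    # A premise entails a conclusion if no assignment of truth values
    # makes the premise true while the conclusion is false.
    def entails(premise, conclusion):
        assignments = product([False, True], repeat=2)
        return all(conclusion(p, q) for p, q in assignments if premise(p, q))

    # p = "Jack went up the hill", q = "Jill went up the hill"
    print(entails(lambda p, q: p and q, lambda p, q: q))  # True: "and" guarantees q
    print(entails(lambda p, q: p or q, lambda p, q: q))   # False: "or" does not

Accepting “Jack and Jill went up the hill” while denying “Jill went up the hill” would require an assignment that this exhaustive search never finds, which is exactly what the text means by a contradiction.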

Practical, Theoretical, and Syllogistic Reasoning

Earlier in the chapter, we discussed decision making, which often depends on reasoning with probabilities. Practical reasoning and theoretical reasoning also allow us to make decisions (Walton, 1990). Practical reasoning is figuring out what to do, or reasoning directed toward action. Means–ends analysis is one kind of practical reasoning. An example is figuring out how to get to a concert across town if you don’t have a car. In contrast, theoretical reasoning (also called discursive reasoning) is reasoning directed toward arriving at a belief. We use theoretical reasoning when we try to determine which beliefs follow logically from other beliefs.

Suppose you asked your friend Bruce to take you to a concert, and he said that his car wasn’t working. You’d undoubtedly find another way to get to the concert. If you then spied him driving into the concert parking lot, you might reason: “Bruce told me his car wasn’t working. He just drove into the parking lot. If his car wasn’t working, he couldn’t drive it here. So, either he suddenly fixed it, or he was lying to me. If he was lying to me, he’s not much of a friend.” Notice the absence of an action-oriented goal. Theoretical reasoning is just a series of inferences concluding in a belief–in this case, about your so-called friend’s unfriendliness!

If you concluded from these examples that we are equally adept at both types of reasoning, experimental evidence suggests you’re wrong. People generally find figuring out what to do easier than deciding which beliefs follow logically from other beliefs. In cross-cultural studies, this tendency to respond practically when theoretical reasoning is sought has been demonstrated in individuals without schooling. Consider, for example, this dialogue between a Liberian rice farmer, a member of the preliterate Kpelle people, and an American researcher (Scribner, 1975, p. 155):

Experimenter: All Kpelle men are rice farmers. Mr. Smith (this is a Western name) is not a rice farmer. Is he a Kpelle man?
Farmer: I don’t know the man in person. I have not laid eyes on the man himself.
Experimenter: Just think about the statement.
Farmer: If I know him in person, I can answer that question, but since I do not know him in person, I cannot answer that question.
Experimenter: Try and answer from your Kpelle sense.
Farmer: If you know a person, if a question comes up about him, you are able to answer. But if you do not know a person, if a question comes up about him, it’s hard for you to answer it.


As this excerpt shows, the farmer does not seem to understand that the problem can be resolved with theoretical reasoning. Instead, he is concerned with retrieving and verifying facts, a strategy that does not work for this type of task.

A very different picture emerges when members of preliterate cultures are given tasks that require practical reasoning. One well-known study of rural Kenyans illustrates a typical result (Harkness, Edwards, & Super, 1981). The problem describes a dilemma in which a boy must decide whether to obey his father and give the family some of the money he has earned, even though his father previously promised that the boy could keep it all. After hearing the dilemma, the participants were asked what the boy should do. Here is a typical response from a villager:

A child has to give you what you ask for just in the same way as when he asks for anything you give it to him. Why then should he be selfish with what he has? A parent loves his child and maybe the son refused without knowing the need of helping his father…. By showing respect to one another, friendship between us is assured, and as a result this will increase the prosperity of our family.

What have cross-cultural studies shown us about reasoning tests?

This preliterate individual had little difficulty understanding this practical problem. His response is intelligent, insightful, and well reasoned. A principal finding from this kind of cross-cultural research is that the appearance of competency on reasoning tests depends more on whether the task makes sense to participants than on their problem-solving ability.


Educated individuals in industrial societies are prone to similar failures in reasoning, as illustrated by belief bias: People’s judgments about whether to accept conclusions depend more on how believable the conclusions are than on whether the arguments are logically valid (Evans, Barston, & Pollard, 1983; see also The Real World Box). For example, syllogistic reasoning assesses whether a conclusion follows from two statements that are assumed to be true. Consider the following two syllogisms, evaluate each argument, and ask yourself whether the conclusions must be true if the statements are true:

Syllogism 1


Syllogism 2

THE REAL WORLD: From Zippers to Political Extremism: An Illusion of Understanding

Zippers are extremely helpful objects and we have all used them more times than we could possibly recall. Most of us also think that we have a pretty good understanding of how a zipper works–at least until we are asked to provide a step-by-step explanation. In experiments by Rozenblit and Keil (2002), participants initially rated the depth of their understanding of various everyday objects (e.g., zippers, flush toilets, sewing machines) or procedures (e.g., how to make chocolate cookies from scratch), tried to provide detailed, step-by-step explanations, viewed expert descriptions and diagrams, and then re-rated their depth of understanding. The second set of ratings was significantly lower than the first. Attempting to explain the workings of the objects and procedures, and then seeing the much more detailed expert description, led participants to realize that they had greatly overestimated the depth of their understanding, which Rozenblit and Keil referred to as the illusion of explanatory depth. Additional experiments revealed that the illusion of explanatory depth can occur as a consequence of attempting to generate detailed explanations even when expert descriptions are not subsequently provided.


Recent research suggests that the illusion of explanatory depth applies to a very different domain of everyday life: political extremism. Many pressing issues of our times, such as climate change and health care, share two features: They involve complex policies and tend to generate extreme views at either end of the political spectrum. Fernbach et al. (2013) asked whether polarized views occur because people think that they understand the relevant policies in greater depth than they actually do. To investigate this hypothesis, the researchers asked participants to rate their positions regarding six contemporary political policies (sanctions on Iran for its nuclear program, raising the retirement age for social security, single-payer health care system, cap-and-trade system for carbon emissions, national flat tax, and merit-based pay for teachers) on a 7-point scale ranging from strongly against to strongly in favor. Next, the participants rated their understanding of each of the six policies on the 7-point scale used previously by Rozenblit and Keil (2002). Participants were then asked to generate detailed explanations of two of the six policies, followed in each case by a second set of ratings concerning their positions and their level of understanding of all the policies.

Fernbach et al. (2013) found that after attempting to generate detailed explanations, participants provided lower ratings of understanding and less extreme positions concerning all six policies than they had previously. Furthermore, those participants who exhibited the largest decreases in their pre- versus post-explanation understanding ratings also exhibited the greatest moderation of their positions as a result of explaining them. Were these changes simply a result of thinking more deeply about the policies, or were they specifically attributable to generating explanations? To address this issue, the researchers conducted an additional experiment in which some participants provided explanations of policies, whereas others listed reasons why they held their positions. Once again, generating explanations led to lower understanding ratings and less extreme positions. However, no such changes were observed in participants who listed reasons why they held their views. A final experiment showed that after generating explanations, participants indicated that they would be less likely to make donations to relevant advocate groups, reflecting moderation of their positions.
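To get a rough sense of how “moderation” can be quantified from such ratings, consider the sketch below. Treating extremity as the distance of a rating from the scale’s neutral midpoint is an assumption made here for illustration (the exact scoring used by Fernbach et al. may differ), and the participant values are hypothetical.

    # Position ratings on a 7-point scale: 1 = strongly against,
    # 7 = strongly in favor; 4 is the neutral midpoint (an assumed scoring).
    def extremity(rating, midpoint=4):
        return abs(rating - midpoint)

    # Hypothetical participant, before and after explaining a policy.
    pre_position, post_position = 7, 5            # moved toward neutral
    pre_understanding, post_understanding = 6, 3  # rated understanding dropped

    moderation = extremity(pre_position) - extremity(post_position)  # 3 - 1 = 2
    understanding_drop = pre_understanding - post_understanding      # 6 - 3 = 3
    print(moderation, understanding_drop)

On this scoring, the study’s key correlation is that participants with larger values of understanding_drop tended to show larger values of moderation.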

The overall pattern of results supports the idea that extreme political views are enabled, at least in part, by an illusion of explanatory depth: Once people realize that they don’t understand the relevant policy issues in as much depth as they had thought, their views moderate. There probably aren’t too many psychological phenomena that are well-illustrated by both political policies and the workings of zippers, but the illusion of explanatory depth is one of them.

If you’re like most people, you probably concluded that the reasoning is valid in Syllogism 1 but flawed in Syllogism 2. Indeed, researchers found that nearly 100% of participants accepted the first conclusion as valid, but fewer than half accepted the second (Evans, Barston, & Pollard, 1983). But notice that the syllogisms are in exactly the same form. This form of syllogism is valid, so both conclusions are valid. Evidently, the believability of the conclusions influences people’s judgments.
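The validity of a syllogistic form can likewise be checked mechanically, which underscores that validity is independent of content. The Python sketch below tests the form “No A are B; some C are B; therefore, some C are not A” by searching every small model for a counterexample; the specific form is an assumption chosen for illustration, since the syllogisms themselves appear in the accompanying figures, and for categorical syllogisms checking small universes suffices.

    from itertools import product

    # A form is valid if no model makes all premises true while the
    # conclusion is false. A "model" assigns each item in a small universe
    # membership (or not) in each of the sets A, B, and C.
    def is_valid(premises, conclusion, universe_size=3):
        item_types = list(product([False, True], repeat=3))  # (in A, in B, in C)
        for model in product(item_types, repeat=universe_size):
            if all(p(model) for p in premises) and not conclusion(model):
                return False  # countermodel: premises true, conclusion false
        return True

    no_a_are_b = lambda m: all(not (a and b) for a, b, c in m)  # "No A are B"
    some_c_are_b = lambda m: any(b and c for a, b, c in m)      # "Some C are B"
    some_c_not_a = lambda m: any(c and not a for a, b, c in m)  # "Some C are not A"

    print(is_valid([no_a_are_b, some_c_are_b], some_c_not_a))  # True

Because the check never consults what A, B, and C stand for, it cannot be swayed by how believable a conclusion sounds, which is precisely where human judges go astray.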

Reasoning and the Brain

Research using fMRI provides novel insights into the influence of belief bias on reasoning tasks. In belief-laden trials, participants were scanned while they reasoned about syllogisms whose conclusions could be judged more or less believable in light of prior knowledge. In belief-neutral trials, syllogisms contained obscure terms whose meanings were unknown to participants, as in the following example:

Syllogism 3

Belief-neutral reasoning activated different brain regions than did belief-laden reasoning (as shown in FIGURE 9.22). Activity in a part of the left temporal lobe involved in retrieving and selecting facts from long-term memory increased during belief-laden reasoning. During belief-neutral reasoning, in contrast, that temporal region showed little activity, whereas parts of the parietal lobe involved in mathematical reasoning and spatial representation showed greater activity (Goel & Dolan, 2003). This evidence suggests that participants took different approaches to the two types of reasoning tasks, relying on previously encoded memories in belief-laden reasoning and on more abstract thought processes in belief-neutral reasoning. These findings fit with other results from neuroimaging studies indicating that there is no single reasoning center in the brain; different types of reasoning tasks call on distinct processes that are associated with different brain regions (Goel, 2007).

Figure 9.22: Active Brain Regions in Reasoning These images from an fMRI study show that different types of reasoning activate different brain regions. Areas within the parietal lobe (a) were especially active during logical reasoning that was not influenced by prior beliefs (belief-neutral reasoning), whereas an area within the left temporal lobe (b) showed enhanced activity during reasoning that was influenced by prior beliefs (belief-laden reasoning). This suggests that people approach each type of reasoning problem in a different way.
Courtesy Vinod Goel


  • The success of human reasoning depends on the content of the argument or scenario under consideration. People seem to excel at practical reasoning but stumble when theoretical reasoning requires them to evaluate whether conclusions follow logically from a set of statements.
  • Belief bias distorts judgments about the conclusions of arguments: people focus on how believable the conclusions are rather than on whether they follow logically from the premises.
  • Neuroimaging provides evidence that different brain regions are associated with different types of reasoning.
  • We can see here and elsewhere in the chapter that some of the same strategies that earlier helped us to understand perception, memory, and learning–carefully examining errors and trying to integrate information about the brain into our psychological analyses–are equally helpful in understanding thought and language.