27.2 Problem Solving: Strategies and Obstacles

27-2 What cognitive strategies assist our problem solving, and what obstacles hinder it?

One tribute to our rationality is our problem-solving skill. What’s the best route around this traffic jam? How shall we handle a friend’s criticism? How can we get in the house without our keys?

Some problems we solve through trial and error. Thomas Edison tried thousands of light bulb filaments before stumbling upon one that worked. For other problems, we use algorithms, step-by-step procedures that guarantee a solution. But step-by-step algorithms can be laborious and exasperating. To find a word using the 10 letters in SPLOYOCHYG, for example, you could try each letter in each of the 10 positions—907,200 permutations in all. Rather than give you a computing brain the size of a beach ball, nature resorts to heuristics, simpler thinking strategies. Thus, you might reduce the number of options in the SPLOYOCHYG example by grouping letters that often appear together (CH and GY) and excluding rare letter combinations (such as two Y’s together). By using heuristics and then applying trial and error, you may hit on the answer. Have you guessed it?1
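The 907,200 figure comes from counting the distinct orderings of the 10 letters, dividing out the repeated O's and Y's. A short calculation (a sketch for illustration; the function name is our own) confirms it:

```python
from math import factorial

def distinct_arrangements(word: str) -> int:
    """Count distinct orderings of the letters in `word`,
    dividing 10! by the factorial of each repeated letter's count
    (O and Y each appear twice in SPLOYOCHYG)."""
    counts = {}
    for ch in word:
        counts[ch] = counts.get(ch, 0) + 1
    total = factorial(len(word))
    for c in counts.values():
        total //= factorial(c)
    return total

print(distinct_arrangements("SPLOYOCHYG"))  # 10!/(2! * 2!) = 907200
```

An exhaustive algorithm would have to examine every one of those arrangements; the heuristics in the text prune most of them before any trial and error begins.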

Sometimes we puzzle over a problem and the pieces suddenly fall together in a flash of insight—an abrupt, true-seeming, and often satisfying solution (Topolinski & Reber, 2010). Ten-year-old Johnny Appleton’s insight solved a problem that had stumped construction workers: how to rescue a young robin from a narrow 30-inch-deep hole in a cement-block wall. Johnny’s solution: Slowly pour in sand, giving the bird enough time to keep its feet on top of the constantly rising pile (Ruchlis, 1990).

Teams of researchers have identified brain activity associated with sudden flashes of insight (Kounios & Beeman, 2009; Sandkühler & Bhattacharya, 2008). They gave people a problem: Think of a word that will form a compound word or phrase with each of three other words in a set (such as pine, crab, and sauce), and press a button to sound a bell when you know the answer. (If you need a hint: The word is a fruit.2) EEGs or fMRIs (functional MRIs) revealed the problem solver’s brain activity. In the first experiment, about half the solutions were by a sudden Aha! insight. Before the Aha! moment, the problem solvers’ frontal lobes (which are involved in focusing attention) were active, and there was a burst of activity in the right temporal lobe, just above the ear (FIGURE 27.3). In another experiment, researchers used electrical stimulation to decrease left hemisphere activity and increase right hemisphere activity. The result was improved insight, less restrained by the assumptions created by past experience (Chi & Snyder, 2011).

Figure 27.3
The Aha! moment A burst of right temporal lobe activity accompanied insight solutions to word problems (Jung-Beeman et al., 2004). The red dots designate EEG electrodes. The light gray lines show the distribution of high-frequency activity accompanying insight. The insight-related activity is centered in the right temporal lobe (yellow area).

Insight strikes suddenly, with no prior sense of “getting warmer” or feeling close to a solution (Knoblich & Oellinger, 2006; Metcalfe, 1986). When the answer pops into mind (apple!), we feel a happy sense of satisfaction. The joy of a joke may similarly lie in our sudden comprehension of an unexpected ending or a double meaning: “You don’t need a parachute to skydive. You only need a parachute to skydive twice.” Comedian Groucho Marx was a master at this: “I once shot an elephant in my pajamas. How he got in my pajamas I’ll never know.”

Heuristic searching To find guava juice, you could search every supermarket aisle (an algorithm), or check the bottled beverage, natural foods, and produce sections (heuristics). The heuristics approach is often speedier, but an algorithmic search guarantees you will find it eventually.

Inventive as we are, other cognitive tendencies may lead us astray. For example, we more eagerly seek out and favor evidence that supports our ideas than evidence that refutes them (Klayman & Ha, 1987; Skov & Sherman, 1986). Peter Wason (1960) demonstrated this tendency, known as confirmation bias, by giving British university students the three-number sequence 2-4-6 and asking them to guess the rule he had used to devise the series. (The rule was simple: any three ascending numbers.) Before submitting answers, students generated their own three-number sets and Wason told them whether their sets conformed to his rule. Once certain they had the rule, they could announce it. The result? Seldom right but never in doubt. Most students formed a wrong idea (“Maybe it’s counting by twos”) and then searched only for confirming evidence (by testing 6-8-10, 100-102-104, and so forth).
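The logic of Wason's demonstration can be sketched in a few lines (an illustrative reconstruction, not Wason's materials): every set chosen to confirm the narrow "counting by twos" hypothesis also fits the true "any ascending numbers" rule, so confirming tests can never expose the error; only a set that violates the hypothesis is informative.

```python
def true_rule(a, b, c):
    """Wason's actual rule: any three ascending numbers."""
    return a < b < c

def hypothesis(a, b, c):
    """A typical wrong guess: counting by twos."""
    return b - a == 2 and c - b == 2

# Sets chosen to confirm the hypothesis also satisfy the true rule,
# so feedback on them can never reveal the mistake...
confirming = [(6, 8, 10), (100, 102, 104)]
print(all(true_rule(*t) and hypothesis(*t) for t in confirming))  # True

# ...but a set that violates the hypothesis still fits the rule.
# That disconfirming probe is exactly what the students rarely tried.
probe = (1, 2, 3)
print(hypothesis(*probe), true_rule(*probe))  # False True
```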


“Ordinary people,” said Wason (1981), “evade facts, become inconsistent, or systematically defend themselves against the threat of new information relevant to the issue.” Thus, once people form a belief—that vaccines cause (or do not cause) autism spectrum disorder, that people can (or cannot) change their sexual orientation, that gun control does (or does not) save lives—they prefer belief-confirming information. The results can be momentous. The U.S. war against Iraq was launched on the belief that dictator Saddam Hussein possessed weapons of mass destruction (WMD) that posed an immediate threat. When that assumption turned out to be false, the bipartisan U.S. Senate Select Committee on Intelligence (2004) laid blame on confirmation bias: Administration analysts “had a tendency to accept information which supported [their presumptions] … more readily than information which contradicted” them. Sources denying such weapons were deemed “either lying or not knowledgeable about Iraq’s problems,” while those sources who reported ongoing WMD activities were seen as “having provided valuable information.”

“The human understanding, when any proposition has been once laid down … forces everything else to add fresh support and confirmation.”

Francis Bacon, Novum Organum, 1620

Once we incorrectly represent a problem, it’s hard to restructure how we approach it. If the solution to the matchstick problem in FIGURE 27.4 eludes you, you may be experiencing fixation—an inability to see a problem from a fresh perspective. (For the solution, see FIGURE 27.5.)

Figure 27.4
The matchstick problem How would you arrange six matches to form four equilateral triangles?
Figure 27.5
Solution to the matchstick problem To solve this problem, you must view it from a new perspective, breaking the fixation of limiting solutions to two dimensions.

A prime example of fixation is mental set, our tendency to approach a problem with the mind-set of what has worked for us previously. Indeed, solutions that worked in the past often do work on new problems. Consider:

Given the sequence O-T-T-F-?-?-?, what are the final three letters?

Most people have difficulty recognizing that the three final letters are F(ive), S(ix), and S(even). But solving this problem may make the next one easier:

Given the sequence J-F-M-A-?-?-?, what are the final three letters? (If you don’t get this one, ask yourself what month it is.)

Just as a perceptual set predisposes what we perceive, a mental set predisposes how we think. Sometimes this becomes an obstacle to problem solving, as when our past experience with matchsticks predisposes us to arrange them in only two dimensions.
