14.4 To Cooperate or Not: Prosocial Behavior and the Dilemma of Social Life

Evolutionary theorists have always had difficulty explaining prosocial behavior, voluntary behavior intended to benefit other people, introduced in Chapter 12. They’ve had an easier time explaining aggression or other behaviors that serve to benefit the individual, often at the expense of others. It may not be nice, but stealing a neighbor’s property or spouse can have positive consequences for the thief and potentially for his or her genes. Yet, as Michael Tomasello (2009) argued in his book Why We Cooperate, prosocial behavior and cooperation are every bit as much a part of evolved human nature as are aggression and competition. Tomasello proposed that prosocial behavior has two components: altruism, behavior in which an actor tries to help another individual achieve some goal at some expense (and no obvious benefit) to the actor, and mutualism, or cooperation, in which two or more individuals coordinate their actions to produce a mutually beneficial outcome, especially one that neither could achieve working alone. Evidence that both altruism and mutualism are part of humans’ evolved nature is that they develop relatively early in childhood, are mediated by empathy, and do not increase with rewards; moreover, hints of such behavior are seen in humans’ closest living relatives, chimpanzees (Warneken & Melis, 2012).

© The New Yorker Collection, 2003 Alex Gregory from cartoonbank.com. All Rights Reserved.

With respect to chimpanzees, although they seem to cooperate with one another in the wild, forming hunting and raiding parties, for example (Boesch & Boesch, 1989), most laboratory studies of cooperative behavior reveal that they display little of it. For example, in one study (Povinelli & O’Neill, 2000), chimpanzees were trained to cooperatively pull ropes to retrieve a food reward. When the trained chimpanzees were later paired with naïve chimps, cooperative behavior (that is, both chimpanzees working together to retrieve the reward) occurred in only 2 of 10 pairings, and in one of those pairings the naïve animal seemingly discovered on her own how to pull the rope to retrieve the reward. Yet more recent research has shown that, under some conditions, adult chimpanzees will indeed cooperate with one another to achieve a goal. For example, using an apparatus in which two individuals must cooperate by each pulling a rope to retrieve a treat, Alicia Melis and her colleagues (2006) showed that pairs of chimpanzees that were tolerant of one another outside the testing context (those likely to share food with one another) were able to pull the ropes simultaneously to obtain and share a food reward, whereas less tolerant pairs were not. In another task, chimpanzees were able to cooperate with a skilled (human) partner to achieve a food reward, although less consistently and readily than human children did (Warneken & Tomasello, 2006). However, unlike children, who protested when the adult stopped responding during a social game (one with no reward at stake), none of the chimpanzees protested. These findings suggest that humans, from a very early age, not only have the social-cognitive abilities to engage in cooperative behaviors but, unlike their ape relatives, see cooperation in social games as normative.


However, as we’ll see in this section, there is always conflict between cooperating and looking out for Number One. At the same time that they are teammates working for common ends, the members of a group are also individuals with self-interests that can run counter to those of the group as a whole. The tension between acting for the good of the group (cooperation) and acting for one’s own selfish good at the expense of the others (defection) is epitomized in social dilemmas. A social dilemma exists whenever a particular course of action or inaction will benefit the individual but harm the others in the group and cause more harm than good to everyone if everyone takes that course.

The Tragedy of the Commons: A Social-Dilemma Allegory

25. How does “the tragedy of the commons” illustrate the critical importance of social dilemmas to human survival? What are some examples of real-world social dilemmas?

The significance of social dilemmas for human survival was dramatically illustrated by the ecologist Garrett Hardin (1968) with an allegory that he called the tragedy of the commons. Hardin compared our whole planet with the common grazing land that used to lie at the center of New England towns. When the number of cattle grazing the pasture began to reach the limit that the pasture could support, each farmer was faced with a dilemma: “Should I add another cow to my herd? One more cow will only slightly hurt the pasture and my neighbors, and it will significantly increase my profits. But if everyone adds a cow, the pasture will fail and all the cattle will die.” The dilemma becomes a tragedy if all the farmers reason: “It is not my increase in cattle, but the combined increase by everyone, that will determine the fate of the commons. I will lose if others as well as I increase their herd, but I will lose even more if others increase their herd and I do not.” So they all add a cow, the pasture gives out, the cattle die, and the townspeople all suffer the loss.

A modern tragedy of the commons The oceans are common fishing grounds where thousands of people make their living. When too many fish are caught, the supply diminishes, and valued species may even become extinct. Each fisherman can reason, logically, that his catch contributes very little to the problem; the diminished supply of fish is caused by the thousands of other fishermen.
© Jim West/Alamy

We are all constantly involved in social dilemmas, some so grand in scale as to encompass all members of our species as a single group and others much smaller in scale. Here’s a grand one: Sound logic tells me that the pollution I personally add to the Earth’s atmosphere by driving a gasoline-burning automobile does not seriously damage the air. It is the pollution of the millions of cars driven by others that causes the serious damage. So I keep driving, everyone else does too, and the pollution keeps getting worse.

Here’s a social dilemma of smaller scale, more typical of the kind that most of us actually experience as a dilemma: If you are part of a team of students working on a project to which everyone is supposed to contribute and for which everyone will receive the same grade, you might benefit by slacking off and letting others do the work. That way you could spend your time on other courses, where your grade depends only on your own effort. But if everyone in your group reasoned that way, the group project would not get done and you, along with the others in your group, would fail.

Another example, intermediate in scale, is that of public radio, which depends on voluntary contributions from its listeners. Any individual listener might reason that his or her contribution or lack of it won’t make or break the station and might decide, therefore, to leave the contributing to others. If everyone reasoned that way, the station would disappear.

Every project that depends on group effort or voluntary contributions poses a social dilemma. In each case, social working, or contributing, is the individual’s cooperative solution, and social loafing, or free riding, is the noncooperative solution. We are all involved in social dilemmas every day. What are the factors that lead us to cooperate or not in any given instance? Let’s first look more closely at the logic of social dilemmas, as exemplified in laboratory games, and then at some conditions that promote cooperation in real-life social dilemmas.


The Logic of Social Dilemmas Exemplified in Games

To study the logic of social dilemmas, stripped of some of their real-world complexity, social scientists have invented a variety of social-dilemma games. Going further—stripping away also the emotions, values, and norms that human subjects carry with them—some social scientists use computer programs rather than humans as the players. Such studies help researchers understand the underlying logic of the dilemmas and develop hypotheses about real-life conditions that would tip the balance toward cooperation or away from it.

The One-Trial Prisoner’s Dilemma Game

26. What are the defining features of prisoner’s dilemma games, and how do they put each player into a social dilemma?

The most common social-dilemma games used by researchers are two-person games called prisoner’s dilemma games. They are so named because of their historical origin in a hypothetical dilemma in which each of two prisoners must choose whether to remain silent (to “cooperate” with one’s partner in crime by not snitching) or to confess, admitting that both are guilty (to “defect,” throwing one’s partner “under the bus”). If both remain silent (cooperate), both will get a short prison sentence based on other charges. If both confess (defect), they will both get a moderately long sentence. If only one confesses, that one will be granted immunity and get no sentence, but the partner will get a very long sentence. The prisoners can neither communicate nor learn each other’s choice until both have chosen.

In prisoner’s dilemma games played in the psychology laboratory, the consequence for one choice or the other is not a reduced or increased prison sentence but a lower or greater monetary reward. Figure 14.11 shows a typical payoff matrix for a two-person game. On each trial, each player can choose either to cooperate for the common good or to defect. Neither learns of the other’s choice until both have responded, and the payoff to each player depends on the combination of their two responses. As in all prisoner’s dilemma games, the payoff matrix has the following characteristics:

Figure 14.11: Sample payoff matrix for a prisoner’s dilemma game On each trial, each player must decide whether to cooperate or defect, without knowing in advance what the other player will do. The payoff to each player depends on the combination of the two players’ decisions. In this example, the possible payoffs to player 1 are shown in the blue portions of the matrix, and the possible payoffs to player 2 are shown in the green portions.

  • Both players are informed of the payoff matrix and have adequate time to think about their choices.
  • The highest individual payoff to either player comes from defecting, but the highest total payoff to the two players combined comes from cooperating; this is what makes the game a social dilemma.
  • If the other player defects, you will get more for defecting ($1) than for cooperating ($0); if the other cooperates, you will still get more for defecting ($5) than for cooperating ($3); but if you both defect, you will each get less ($1) than you would have received if you had both cooperated ($3). The sketch below works through this logic.
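To make this logic concrete, here is a minimal sketch in Python (our own illustration; the dollar values are those of the sample matrix above, and the function name is a hypothetical stand-in) showing that defecting is the best response to either choice the other player might make, even though mutual cooperation yields the larger combined payoff.

```python
# Payoffs from the sample matrix: (my payoff, other's payoff), in dollars.
# "C" = cooperate, "D" = defect; keys are (my_choice, other_choice).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def best_response(other_choice):
    """Return the choice that maximizes my own payoff, given the other's choice."""
    return max(["C", "D"], key=lambda mine: PAYOFF[(mine, other_choice)][0])

# Defecting is the better choice whether the other player cooperates or defects...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...yet mutual cooperation ($3 + $3) beats mutual defection ($1 + $1) for the pair.
assert sum(PAYOFF[("C", "C")]) > sum(PAYOFF[("D", "D")])
```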


In the one-trial version of the game, each player plays only once with a given other player. When the players are anonymous to each other, are not allowed to discuss their choices, and see their task as winning as much for themselves as they can, they usually defect. A person who always defects on one-trial prisoner’s dilemma games will necessarily win more money than a person who sometimes or always cooperates. Logic tells both players to defect, so they both do and get $1 each. What a pity they are so logical! If they were illogical, they might have both cooperated and received $3 each.

The Iterative Prisoner’s Dilemma Game: The Power of Reciprocity

27. Why are players more likely to cooperate in an iterative (repeated) prisoner’s dilemma game than in a one-trial game?

If two players play the same prisoner’s dilemma game repeatedly (iteratively) with each other for a series of trials, rather than just once, the logic changes. Cooperation becomes a reasonable choice even when the only goal is to maximize one’s own profits. Each player might now reason: “If I cooperate on this trial, maybe that will convince the other player to cooperate on the next. We’ll both do a lot better over time if we cooperate and get $3 each per trial than if we defect and get $1 each per trial.” In other words, logic and selfishness, which lead players to defect in the one-trial game, can lead them to cooperate in the iterative game.

In order to identify the best strategy in an iterative prisoner’s dilemma game (the strategy that maximizes one’s own earnings), Robert Axelrod (1984) held two round-robin tournaments in which the players were computer programs rather than human subjects. The winner in both cases was a program called Tit-for-Tat (or TFT). It consisted of just two rules: (1) The first time you meet a new program, cooperate with it. (2) After that, do on each trial what the other program did on its most recent previous trial with you.

Notice that TFT is incapable of “beating” any other program in head-to-head competition. It never cooperates less often than the other program does, and therefore it can never win more points in play with another program than that program wins in play with it. The TFT program won the tournament not by beating other programs in individual encounters but by getting them to cooperate with it. Other programs earned at least as many points as TFT in their games with TFT, but they did not do so well in their games with each other, and that is why TFT earned the most points in the end.

Why is TFT so effective in eliciting cooperation? According to Axelrod’s analysis, there are four reasons:

  1. TFT is nice. By cooperating from the start, it encourages the other player to cooperate.
  2. TFT is not exploitable. By reciprocating every failure to cooperate with its own failure to cooperate on the next turn, it discourages the other player from defecting.
  3. TFT is forgiving. It resumes cooperating with any program as soon as that program begins to cooperate with it.
  4. TFT is transparent. It is so simple that other programs quickly figure out its strategy and learn that they are best off cooperating with it.

Studies since Axelrod’s have shown that TFT and similar “nice” programs are highly effective in eliciting cooperation not just from other computer programs but also from human subjects in the laboratory (Imhof et al., 2007; Nowak, 2006). People, like computer programs, figure out rather quickly that their best strategy when playing with TFT is to cooperate.
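To illustrate why reciprocity changes the calculus, here is a rough sketch, under the same sample payoffs used above, of an iterated game pitting Tit-for-Tat against an always-defect strategy and against a copy of itself. The strategy functions are our own illustrative stand-ins, not Axelrod’s tournament code.

```python
# Same sample payoffs as above: (my payoff, other's payoff), in dollars.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_moves, other_moves):
    # Rule 1: cooperate the first time you meet a player.
    # Rule 2: after that, copy the other player's most recent move.
    return "C" if not other_moves else other_moves[-1]

def always_defect(my_moves, other_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play an iterated prisoner's dilemma; return each player's total earnings."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(moves_a, moves_b)
        b = strategy_b(moves_b, moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a += pay_a
        score_b += pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): sustained mutual cooperation
print(play(tit_for_tat, always_defect))    # (9, 14): TFT never "beats" its partner...
print(play(always_defect, always_defect))  # (10, 10): ...but two defectors together earn far less than two TFTs
```

Note that, as in Axelrod’s analysis, TFT earns fewer points than the defector in their head-to-head pairing yet accumulates the most overall, because pairings of cooperators are so much more profitable than pairings of defectors.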

Decline in Cooperation as the Number of Players Increases

28. Why might we logically expect small groups to cooperate more than large ones? What evidence indicates that indeed they do?

Prisoner’s dilemma games involve just two players, but other social-dilemma games—including those called public-goods games—can involve any number. In a typical public-goods game, each player is given a sum of money and then, under conditions of anonymity, must choose whether to keep the money or contribute it to a common pool (the public good). Then, if and only if at least a certain percentage of players (say, 75 percent) have contributed, all players, including those who haven’t contributed, receive a reward that is substantially greater than the amount each person was asked to contribute.
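A minimal sketch of one round of such a threshold public-goods game may help; the endowment, threshold, and reward values below are illustrative placeholders rather than parameters from any particular study.

```python
def public_goods_round(contributions, endowment=10, threshold=0.75, reward=20):
    """One round of a threshold public-goods game.

    contributions: list of booleans, one per player (True = contributed).
    Each contributor gives up the endowment; if at least `threshold` of the
    players contribute, every player (contributor or not) receives `reward`.
    Returns each player's payoff for the round.
    """
    provided = sum(contributions) / len(contributions) >= threshold
    return [(0 if gave else endowment) + (reward if provided else 0)
            for gave in contributions]

# With 4 players and 3 contributors (75%), the public good is provided;
# the lone free rider does best of all, which is exactly the temptation.
print(public_goods_round([True, True, True, False]))   # [20, 20, 20, 30]
# If a second player also free rides, the threshold is missed and everyone loses.
print(public_goods_round([True, True, False, False]))  # [0, 0, 10, 10]
```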

Should I drive or cycle to work? For some people it is easy to resolve this dilemma in the cooperative direction. Bicycle commuting not only helps preserve the earth’s atmosphere but is also good exercise and more fun than driving.
Lorne Resnick/Stone/Getty Images


In such games, each individual player’s choice to contribute or not has a bigger impact on the whole group effort if the number of players is small than if it is large, so the temptation to refrain from contributing becomes greater as the number of players increases. With few players, each one might logically reason: “I’d better contribute or we won’t reach the needed percentage to win the reward.” But with many players, each might logically reason: “My contribution will have little effect on the total pool of contributions, so my best strategy is to keep my money and hope enough others will contribute to produce the reward.”

The result is that far fewer rewards are won in such games when group size is large than when it is small (Alencar et al., 2008; Glance & Huberman, 1994). The same thing happens in experiments in which groups of people are asked to exert effort for a common goal. People work harder in small groups than in large ones (Karau & Williams, 1995, 2001). You are more likely to contribute to a course project for a common grade if you are one member on a team of 3 students than if you are one on a team of 30. The larger the group, the greater is the diffusion of responsibility, and the less is the likelihood that a given individual will contribute.

Conditions That Promote Cooperation

In real life, and even in laboratory games, people cooperate more in social dilemmas than would be expected if their choices were based solely on immediate self-interest (Fehr & Fischbacher, 2003, 2004). Many people work hard on group projects even in large groups. Many people contribute to public television. Some people (though not nearly enough) even choose to ride a bicycle or use public transportation, rather than drive a car, to reduce pollution and help save the planet. What are the forces that lead us to cooperate in such “illogical” ways?

Evolution, cultural history, and our own individual experiences have combined to produce in us decision-making mechanisms that are not confined to an immediate cost-benefit analysis. Consciously or unconsciously, thoughtfully or automatically, we take into account factors that have to do with not just our short-term interests but also our long-term interests, which often reside in maintaining good relationships with other people. Many aspects of our social nature can be thought of as adaptations for cooperating in social dilemmas.

Accountability, Reputation, and Reciprocity as Forces for Cooperation

In the iterative prisoner’s dilemma game, people and computer programs using the TFT strategy do well by cooperating because that encourages other players to cooperate in return. By cooperating, players greatly increase their long-term earnings at the expense of slightly reduced short-term earnings on any given play. The strategy works only because the other players can identify who did or did not cooperate, can remember that information from one trial to the next, and are inclined to respond to each player in accordance with that player’s previous action toward them. In human terms, we would say that TFT is successful because each player is accountable for his or her actions. Through that accountability, TFT establishes a reputation as one who helps others and reciprocates help given by others but who won’t be exploited by those who fail to reciprocate.

29. Why is TFT especially successful in situations where players can choose their partners? How do real-life social dilemmas differ from laboratory games with respect to accountability, reputation, and reciprocity?

When players of laboratory social-dilemma games believe that others, who can identify them, will learn about their choices, they behave more generously, or more cooperatively, than they do in anonymous conditions (Piazza & Bering, 2008). When the players are free to choose the partners with whom they play, they favor those who have already developed a reputation for cooperation (Sheldon et al., 2000). The TFT strategy does especially well in this situation because it attracts partners who seek to cooperate and repels potential partners who seek to compete (Van Lange & Visser, 1999).


In laboratory games, the factors that foster cooperation—accountability, reputation, and reciprocity—are neatly confined to the specific actions allowed by the game. But in real life they are not confined; they spill out everywhere. If I help a person today, that person and others who hear of my help may be disposed to help me in the future, in ways that I cannot even guess at today. Stories of long-range, unanticipated reciprocity are easy to find in everyone’s autobiography. Research studies, in various cultures, suggest that people everywhere tend to keep track of the degree to which others are helpful and to offer the greatest help to those who have themselves been most helpful in the past (Fehr, 2004; Gurven, 2004; Price, 2003).

Norms of Fairness and Punishment of Cheaters as Forces for Cooperation

People everywhere seem to have a strong sense of fairness, which goes beyond immediate self-interest. In many situations, people would rather gain nothing than enter into an unfair agreement by which they gain a little and another person gains a lot (Fehr & Fischbacher, 2003, 2004; Tabibnia et al., 2008). This sense of fairness has been illustrated using laboratory games called ultimatum games.

30. How have laboratory games demonstrated the human sense of justice and willingness to punish even at a personal cost? How does such behavior promote long-term cooperation?

In a typical ultimatum game, two players are told that a certain amount of money, say $100, will be divided between them. One player, the proposer, is allowed to propose how to divide the money, and the other player, the responder, has the choice of accepting or rejecting the offer. If the offer is accepted, the two players keep the amounts of money proposed by the proposer; if the offer is rejected, nobody gets anything. When this game is played in a one-shot manner (meaning that there will be no future games) under conditions of complete anonymity, it is irrational, from an economic point of view, for the responder to reject any offer above $0. Yet, the repeated result, wherever this game is played, is that responders typically reject any offer that is considerably less than half of the total money. In one experiment, this was true even when the amount of money involved was so much that the smaller (rejected) portion would pay the player’s living expenses for several weeks (Cameron, 1999).
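The payoff rule itself is simple to state, which is what makes the behavioral result striking. Here is a minimal sketch (the amounts are illustrative).

```python
def ultimatum(total, offer, accepted):
    """Return (proposer's payoff, responder's payoff) for one ultimatum game."""
    return (total - offer, offer) if accepted else (0, 0)

# A purely self-interested responder should accept even a tiny offer...
print(ultimatum(100, 1, accepted=True))    # (99, 1): better than nothing
# ...yet people routinely reject offers far below half, leaving both with nothing.
print(ultimatum(100, 20, accepted=False))  # (0, 0)
```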

Figure 14.12: Altruistic punishment increases cooperation In each trial of this social-dilemma game, each player was given 20 money units and could contribute any portion of them to a common pool. The money contributed was then multiplied by 1.4 and redistributed evenly among the four players. In one condition, players could punish low contributors by giving up one of their own money units to have three units taken away from the punished player. With the punishment option, cooperation increased from trial to trial; without it, cooperation decreased from trial to trial. At the end of all trials, the players could exchange the money units they had accumulated for real money.
(Based on data from Fehr & Gächter, 2002.)

Other experiments show that in public-goods games with more than two players, people are willing to give up some of their own earnings in order to punish a player who has contributed substantially less than his or her share to the public good (called altruistic punishment; Fehr & Fischbacher, 2004; Gächter et al., 2008). The punishment, in such cases, involves removing some of the winnings that the “cheater” has garnered. As Figure 14.12 shows, when the opportunity to punish cheaters is introduced into the rules of such a game, most people stop cheating, so the total amount of cooperation, and thus the total amount earned by all players combined, increases.
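To see how the arithmetic behind Figure 14.12 plays out, here is a sketch of one four-player round with and without punishment, using the rules given in the caption (20 units per player, contributions multiplied by 1.4 and split evenly, and punishment that costs the punisher 1 unit per 3 units removed from the target). The particular contribution and punishment choices shown are our own illustrations.

```python
def round_payoffs(contributions, multiplier=1.4, punishments=None):
    """Payoffs for one round of a Fehr & Gachter-style public-goods game.

    contributions: units (0-20) each player puts into the common pool.
    punishments[i][j]: punishment points player i assigns to player j;
    each point costs the punisher 1 unit and removes 3 from the target.
    """
    n = len(contributions)
    share = sum(contributions) * multiplier / n
    payoffs = [20 - c + share for c in contributions]
    if punishments:
        for i in range(n):
            for j in range(n):
                p = punishments[i][j]
                payoffs[i] -= p          # cost to the punisher
                payoffs[j] -= 3 * p      # loss to the punished player
    return payoffs

# Three players contribute fully; one free rider contributes nothing.
print(round_payoffs([20, 20, 20, 0]))
# [21.0, 21.0, 21.0, 41.0]  -> free riding pays, so cooperation erodes
# Now each cooperator spends 2 units punishing the free rider (6 points in all).
pun = [[0, 0, 0, 2], [0, 0, 0, 2], [0, 0, 0, 2], [0, 0, 0, 0]]
print(round_payoffs([20, 20, 20, 0], punishments=pun))
# [19.0, 19.0, 19.0, 23.0] -> cheating no longer pays nearly as well
```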

People’s willingness to go out of their way and pay a cost to punish cheaters may be a feature of human nature, and/or a norm resulting from cultural training, that helps maintain cooperation in human societies. Such behavior is mediated by emotions. Neuroimaging studies of people in ultimatum and public-goods games have shown that when people are cheated, brain areas associated with anger become active (Sanfey et al., 2003), and when people punish cheaters, brain areas associated with pleasure become active (de Quervain et al., 2004). Apparently, we are emotionally wired to get mad at cheaters and to enjoy revenge, even when that revenge entails an economic cost to ourselves. Similarly, when people donate money to a charity, the same brain areas associated with reward (the mesolimbic reward system) are activated as when they receive a monetary reward (Moll et al., 2006). In fact, neuroimaging studies have shown that different types of social decision-making are associated with activation in specific brain regions (Rilling & Sanfey, 2011) (see Figure 14.13).

Figure 14.13: Using neuroimaging techniques Researchers have documented that most forms of social decision-making are associated with activation in specific areas of the brain. Depicted are (a) medial and (b) lateral views of the brain showing the neural systems that mediate nine different types of social decisions. Solid lines indicate surface structures; dashed lines, deep structures; −, inhibitory influences; +, stimulatory influences; arrows, white matter connections; DMPFC, dorsomedial prefrontal cortex; TPJ, temporo-parietal junction; VMPFC, ventromedial prefrontal cortex; dACC, dorsal anterior cingulate cortex; DLPFC, dorsolateral prefrontal cortex; VLPFC, ventrolateral prefrontal cortex; LOFC, lateral orbitofrontal cortex; STS, superior temporal sulcus; 5-HT, serotonin; OT, oxytocin; T, testosterone.
(With permission from Rilling, J. K., & Sanfey, A. G., 2011. The neuroscience of social decision-making. In Susan T. Fiske (Ed.), Annual Review of Psychology, 62, 23-48. Permission of Annual Reviews, Inc., conveyed through Copyright Clearance Center, Inc.)


Social Identity Promotes Cooperation Within Groups and Competition Across Groups

As discussed in Chapter 13, people everywhere have two different ways of thinking about themselves, which serve different functions. One is personal identity, which entails thought of oneself as an independent person with self-interests distinct from those of other people. The other is social identity, which entails thought of oneself as a more or less interchangeable member of a larger entity, the group, whose interests are shared by all members. Evolutionarily, the two modes of self-thought may have arisen from our need to survive both as individuals and as groups (Guisinger & Blatt, 1994). If I save myself but destroy the group on which I depend, I will, in the long run, destroy myself. We don’t logically think the issue through each time but, rather, tend automatically to cooperate more when we think of the others as members of our group than when we don’t.

31. What is some evidence that social identity can lead to helping group-mates and hurting those who are not group-mates?

Many experiments have shown that people in all types of social-dilemma games cooperate much more when they think of the other players as group-mates than when they think of them as separate individuals or as members of other groups (Dawes & Messick, 2000; Van Vugt & Hart, 2004; Yamagishi & Mifune, 2008). In one study, for example, players differing in age cooperated much more if they were introduced to one another as citizens of the same town than if they were introduced as representatives of different age groups (Kramer & Brewer, 1984). In other studies, simply referring to a set of unacquainted players as a “group,” or allowing them to shake hands and introduce themselves before starting the game, increased their cooperation (Boone et al., 2008; Wit & Wilke, 1992).

© The New Yorker Collection, 2001 Barbara Smaller from cartoonbank.com. All Rights Reserved.

People are also more likely to feel empathy for in-group than for out-group members. In fact, when groups are in competition with one another, people may experience schadenfreude, pleasure at another’s pain (Cikara et al., 2011). For example, males, but not females, show activation in reward-related areas of the brain (left ventral striatum) when a competitor receives a painful electric shock (Singer et al., 2006), and both males and females display activation in reward-related areas of the brain (bilateral ventral striatum) when a social competitor has rumors spread about him or her (Takahashi et al., 2009).

Identification with a group increases people’s willingness to help members of their own group but decreases their willingness to help members of another group. Groups playing social-dilemma games with other groups are far more likely to defect than are individuals playing with other individuals, even when the payoffs to individuals for cooperating or defecting are identical in the two conditions (Wildschut et al., 2007). People are much less likely to trust others and more likely to cheat others when they view those others as part of another group than when they view them as individuals. In real life, as in the laboratory, interactions between groups are typically more hostile than interactions between individuals (Hoyle et al., 1989). All too often the hostility becomes extreme. Such in-group favoritism and out-group discrimination is evident as early as the preschool years on the basis of sex and race (Aboud, 2003; Patterson & Bigler, 2006).

Group Against Group: Lessons from Robbers Cave

The most vicious and tragic side of our nature seems to emerge when we see ourselves as part of a group united against some other group. Perhaps because of an evolutionary history of intertribal warfare, we can be easily provoked into thinking of other groups as enemies and as inferior beings, unworthy of respectful treatment. The history of humankind can be read as a sad, continuing tale of intergroup strife that often becomes intergroup atrocity.


There are ethical limits to the degree to which researchers can create the conditions that bring out intergroup hostility for the purpose of study. Muzafer Sherif and his colleagues (1961; Sherif, 1966) approached those limits in a now-famous study, conducted in the 1950s, with 11- and 12-year-old boys at a 3-week camping program in Oklahoma’s Robbers Cave Park (so named because it was once used as a hideout by the famous outlaw Jesse James). The researchers were interested in understanding how hostility between groups develops and how it can be resolved.

The Escalation of Conflict

To establish two groups of campers, Sherif and his colleagues divided the boys into two separate cabins and assigned separate tasks to each group, such as setting up camping equipment and improving the swimming area. Within a few days, with little adult intervention, each cabin of boys acquired the characteristics of a distinct social group. Each group established its own leaders, its own rules and norms of behavior, and its own name—the Eagles and the Rattlers.

When the groups were well established, the researchers proposed a series of competitions, and the boys eagerly accepted the suggestion. They would compete for valued prizes in such games as baseball, touch football, and tug-of-war. As Sherif had predicted from previous research, the competitions promoted three changes in the relationships among the boys within and between groups:

32. What changes occurred within and between two groups of boys as a result of intergroup competitions at a summer camp?

  1. Within-group solidarity. As the boys worked on plans to defeat the other group, they set aside their internal squabbles and differences, and their loyalty to their own group became even stronger than it was before.
  2. Negative stereotyping of the other group. Even though the boys had all come from the same background (white, Protestant, and middle class) and had been assigned to the groups on a purely random basis, they began to see members of the other group as very different from themselves and as very similar to one another in negative ways. For example, the Eagles began to see the Rattlers as dirty and rough, and in order to distinguish themselves from that group they adopted a “goodness” norm and a “holier-than-thou” attitude.
  3. Hostile between-group interactions. Initial good sportsmanship collapsed. The boys began to call their rivals names, accuse them of cheating, and cheat in retaliation. After being defeated in one game, the Eagles burned one of the Rattlers’ banners, which led to an escalating series of raids and other hostilities. What at first was a peaceful camping experience turned gradually into something verging on intertribal warfare.

Resolving the Conflict by Creating Common Goals

Rattlers versus Eagles What at first were friendly competitions, such as the tug-of-war shown here, degenerated into serious hostility and aggression between the two groups of boys in Sherif’s field study at Robbers Cave.
The Carolyn and Muzafer Sherif papers, Archives of the History of American Psychology, The Center for the History of Psychology, The University of Akron.

In the final phase of their study, Sherif and his colleagues tried to reduce hostility between the two groups, a more difficult task than provoking it had been. In two previous studies similar to the one at Robbers Cave, Sherif had tried a number of procedures to reduce hostility, all of which had failed. Peace meetings between leaders failed because those who agreed to meet lost status within their own groups for conceding to the enemy. Individual competitions (similar to the Olympic Games) failed because the boys turned them into group competitions by tallying the total victories for each group. Sermons on brotherly love and forgiveness failed because, while claiming to agree with the messages, the boys simply did not apply them to their own actions.

At Robbers Cave, the researchers tried two new strategies. The first involved joint participation in pleasant activities. Hoping that mutual enjoyment of noncompetitive activities would lead the boys to forget their hostility, the researchers brought the two groups together for such activities as meals, movies, and shooting firecrackers. This didn’t work either. It merely provided opportunities for further hostilities. Meals were transformed into what the boys called “garbage wars.”


33. How did Sherif and his colleagues succeed in promoting peace between the two groups of boys?

The second new strategy, however, was successful. This involved the establishment of superordinate goals, defined as goals that were desired by both groups and could be achieved best through cooperation between the groups. The researchers created one such goal by staging a breakdown in the camp’s water supply. In response to this crisis, boys in both groups volunteered to explore the mile-long water line to find the break, and together they worked out a strategy to divide their efforts in doing so. Two other staged events similarly elicited cooperation. By the end of this series of cooperative adventures, hostilities had nearly ceased, and the two groups were arranging friendly encounters on their own initiative, including a campfire meeting at which they took turns presenting skits and singing songs. On their way home, one group treated the other to milkshakes with money left from its prizes.

A superordinate task brings people together When the Red River overflowed in Fargo, North Dakota, the local people worked together to create and stack sandbags to save homes from flooding. None of them could succeed at this task alone; they needed one another. Social-psychological theory predicts that this experience will create positive bonds among the volunteers and promote their future cooperation.
REUTERS/Eric Miller/Landov

Research since Sherif’s suggests that the intergroup harmony brought on by superordinate goals involves the fading of group boundaries (Bodenhausen, 1991; Gaertner et al., 1990). The groups merge into one, and each person’s social identity expands to encompass those who were formerly excluded. The boys at Robbers Cave might say, “I am an Eagle [or a Rattler], but I am also a member of the larger camp group of Eagles plus Rattlers—the group that found the leak in the camp’s water supply.”

If there is hope for a better human future, one not fraught with wars, it may lie in an increased understanding of the common needs of people everywhere and the establishment of superordinate goals. Such goals might include those of stopping the pollution of our shared atmosphere and oceans, the international drug trade, diseases such as AIDS that spread worldwide, and the famines that strike periodically and disrupt the world. Is it possible that we can conceive of all humanity as one group, spinning together on a small and fragile planet, dependent on the cooperation of all to find and stop the leaks?

SECTION REVIEW

Social dilemmas require individuals to decide whether or not to act for the common good.

The Tragedy of the Commons

  • In a social dilemma, the choice to behave in a certain way produces personal benefit at the expense of the group and leads to harm for all if everyone chooses that option.
  • In the tragedy of the commons, each individual puts one extra cow on the common pasture, thinking his or her cow will make little difference; but the collective effect is disastrous to all.

Social-Dilemma Games

  • Social-dilemma games, including prisoner’s dilemma games, are used to study the conditions that promote cooperation.
  • Players tend to defect in one-shot prisoner’s dilemma games. When players can play each other repeatedly, however, a tit-for-tat strategy can promote cooperation.
  • In social-dilemma games with multiple players, cooperation generally declines as the number of players increases.

Roles of Accountability and Social Identity

  • Cooperation increases when players are accountable for their actions and can develop reputations as cooperators or cheaters.
  • The tendency for people to reject unfair offers and to punish cheaters, even at their own expense, is a force for cooperation.
  • Shared social identity among group members increases cooperation within the group but decreases cooperation with other groups.

Group Against Group

  • In the Robbers Cave experiment, competition between the two groups of boys led to solidarity within groups, negative stereotyping of the other group, and hostile interactions between groups.
  • Hostility was greatly reduced by superordinate goals that required the two groups to cooperate.
