9.1 How Others Influence Our Behavior

Social influence research examines how other people and the social forces they create influence an individual’s behavior. There are many types of social influence, including conformity, compliance, obedience, and group influences. In this section, we will discuss all of these types of social influence. We start with conformity.

Why We Conform

conformity A change in behavior, belief, or both to conform to a group norm as a result of real or imagined group pressure.

Conformity is usually defined as a change in behavior, belief, or both to conform to a group norm as a result of real or imagined group pressure. The word conformity has a negative connotation. We don’t like to be thought of as conformists. However, conformity research indicates that humans have a strong tendency to conform. To understand the two major types of social influence leading to conformity (informational social influence and normative social influence), we’ll consider two classic studies on conformity: Muzafer Sherif’s study using the autokinetic effect and Solomon Asch’s study using a line-length judgment task.

The Sherif study and informational social influence. In Sherif’s study, the participants thought that they were part of a visual perception experiment (Sherif, 1937). Participants in completely dark rooms were exposed to a stationary point of light and asked to estimate the distance the light moved. Thanks to an illusion called the autokinetic effect, a stationary point of light appears to move in a dark room because there is no frame of reference and our eyes spontaneously move. How far and in what direction the light appears to move varies widely among different people. During the first session in Sherif’s study, each participant was alone in the dark room when making his judgments. Then, during the next three sessions, he was in the room with two other participants and could hear the others’ estimates of the illusory light movement. What happened?

The participants’ individual estimates varied greatly during the first session. Over the next three group sessions, however, the individual estimates converged on a common group norm (see Figure 9.1). All of the participants in the group ended up making the same estimates. What do you think would happen if you brought the participants back a year later and had them make estimates again while alone? Would their estimates regress back to their earlier individual estimates or stay at the group norm? Surprisingly, they stayed at the group norm (Rohrer, Baron, Hoffman, & Swander, 1954).

image
Figure 9.1 | Results of Sherif’s Study of Conformity | When the three participants in Sherif’s study were alone in the laboratory room on the first day, their estimates of the apparent movement of the stationary point of light varied greatly. However, once they were all together in the laboratory room from the second through the fourth day and could hear one another, their estimates converged. By the fourth day, they were all making the same judgments.
(Data from Sherif, M. & Sherif, C.W. (1969) Social Psychology. New York: Harper and Row, p. 209.)

informational social influence Influence stemming from the need for information in situations in which the correct action or judgment is uncertain.

To understand why conformity was observed in Sherif’s study, we need to consider informational social influence. This effect stems from our desire to be right in situations in which the correct action or judgment is not obvious and we need information. In Sherif’s study, participants needed information because of the illusory nature of the judgment; thus, their conformity was due to informational social influence. When a task is ambiguous or difficult and we want to be correct, we look to others for information. But what about conformity when information about the correct way to proceed is not needed? To understand this type of conformity, we need to consider Asch’s study on line-length judgment and normative social influence.

image
Solomon Asch
Solomon Asch Center for Study of Ethnopolitical Conflict

The Asch study and normative social influence. Student participants in Asch’s study made line-length judgments similar to the one in Figure 9.2 (Asch, 1955, 1956). These line-length judgments are easy. If participants are alone when making these judgments, they don’t make mistakes. But in Asch’s study, they were not alone. The others in the room were Asch’s student confederates playing the role of participants. Across various conditions in the study, the number of confederates varied from 1 to 15. On each trial, judgments were made verbally, and Asch structured the situation so that most of the confederates responded before the actual participant; seating was arranged so that the actual participant was the next to last to respond. The confederates purposely made judgmental errors on certain trials. Asch wondered what the actual participant would do on these critical trials when confronted with other people unanimously voicing an obviously incorrect judgment. For example, if, before it was your turn, all of the other participants had said that “1” was the answer to the sample judgment in Figure 9.2, what would you say?

image
Figure 9.2 | An Example of Asch’s Line-Length Judgment Task | The task is to judge which one of the comparison lines (1, 2, or 3) is the same length as the standard line on the left.

Surprisingly, a large number of the actual participants conformed to the obviously incorrect judgments offered by the confederates. About 75% of the participants conformed at least once, and overall, participants conformed 37% of the time. Asch’s results have been replicated many times and in many different countries (Bond & Smith, 1996). The correct answers are clear in the Asch judgment task, and there are no obvious reasons to conform. The students in the experimental room didn’t know one another, and they were all of the same status. The judgment task was incredibly easy. Why, then, was any conformity observed?

image
Asch’s Conformity Study | This is a photograph taken from one of Asch’s conformity experiments. The student in the middle (#6) is the actual participant. The other participants are confederates of the experimenter. As you can see, the actual participant is perplexed by the obviously incorrect responses of the confederate participants on this trial.
Reproduced with permission. Copyright © 2013 Scientific American, Inc. All rights reserved

normative social influence Influence stemming from our desire to gain the approval and to avoid the disapproval of others.

The reason for the conformity observed in Asch’s study is normative social influence, an effect stemming from our desire to gain the approval and to avoid the disapproval of others. We change our behavior to meet the expectations of others and to gain their acceptance. We go along with the crowd. But Asch, who died in 1996, always wondered whether the subjects who conformed did so only out of normative needs, knowing that their answers were wrong, or whether the social pressure of the situation also actually changed their perceptions to agree with the group—whether what the confederates said changed what the participants saw (Blakeslee, 2005). Berns et al. (2005) found some evidence that the latter may be the case. Using fMRI (see Chapter 2), they scanned subjects’ brain activity while the subjects participated in an Asch-type conformity experiment employing a mental rotation task instead of a line-length judgment task. As in Asch’s studies, conformity was observed. Subjects gave the group’s incorrect answers (conformed) 41% of the time. Surprisingly, the brain activity for conforming responses was in the cortical areas dedicated to visual and spatial awareness, regions devoted to perception. The brain activity for independent responses, however, was in brain areas associated with emotion, such as the amygdala, indicating that nonconformity carries an emotional cost. These findings led the study’s lead author, Gregory Berns, to conclude, “We like to think that seeing is believing, but the study’s findings show that seeing is believing what the group tells you to believe” (Blakeslee, 2005). This conclusion seems much too strong, given how difficult it is to determine the exact nature of the visual processing that produced the activity in those brain areas. Although it can be concluded that different brain areas were active during conforming versus nonconforming responding, more research is needed before it can be concluded that the conformers actually changed their perceptions.

The Berns et al. (2005) finding that the brain activity for independent responses indicated an emotional cost to nonconformity, however, fits with Asch’s belief that a shared reality is the foundation of social behavior and that people become very upset when it is violated, as it was on the critical trials in Asch’s experiment (Gilchrist, 2015). According to Gilchrist, we feel the greatest distress when we find ourselves isolated on highly unambiguous properties of our shared reality, such as the line-length judgments in Asch’s study. Interestingly, Gilchrist relates the situation on critical trials to the intense debate that arose over The Dress color controversy (white and gold perceivers vs. blue and black perceivers) that we discussed in Chapter 3. Remember, those dramatically different perceptions of the dress’s color are thought to have stemmed from subjective differences among people in color constancy perception, created by the brain’s determination of the illumination source for the dress. We take color to be a highly unambiguous property of reality. Hence, when we were confronted with perceptions of the dress’s colors different from our own, our shared view of reality was violated, leading to the ensuing heated, emotional debate about the colors of the dress. Similarly, participants in Asch’s experiment were confronted with a violation of shared reality created by the other participants’ obviously incorrect judgments of an unambiguous property of reality (line length), which led them to become emotionally upset.

Although Asch’s study found some evidence for conformity stemming from normative social influence, it is important to remember that most of the time, most of the subjects in Asch’s study responded independently (gave correct answers) on the critical trials. For example, whereas 37% of the responses on critical trials were conforming (incorrect), 63% were independent (correct) responses (Asch, 1956). In addition, Asch reported that 25% of the participants never conformed, and only 5% conformed on all critical trials. Thus, 5 times as many participants were consistently independent as were consistently conforming, and although 75% of the subjects conformed at least once, 95% of the participants gave independent responses at least once. Overall, then, the subjects in Asch’s study responded independently more often than they conformed. Even Asch was struck by the rate at which subjects responded independently (Friend, Rafferty, & Bramel, 1990). Friend et al. maintain that when Asch’s work first appeared, it was taken as evidence of the power of independence, but with the passing years, it has been represented as a study of conformity. In sum, Asch’s study led to two complementary findings. Normative social influence leads some people to conform even in performing a nonambiguous task, but the majority of people, although emotionally affected, are not swayed in this situation by such influence. Next we will consider some factors that impact the amount of conformity observed.
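
For readers who want to check how these complementary percentages fit together, here is a minimal arithmetic sketch in Python. The percentages are those reported above from Asch (1956); the code itself is only an illustration, not part of Asch’s analysis:

    # Response-level summary: independent responses on critical trials
    # are the complement of conforming responses.
    conforming_responses = 0.37
    independent_responses = 1 - conforming_responses
    print(f"Independent responses: {independent_responses:.0%}")    # 63%

    # Participant-level summary.
    never_conformed = 0.25    # consistently independent participants
    always_conformed = 0.05   # consistently conforming participants
    print(f"Consistently independent to consistently conforming: "
          f"{never_conformed / always_conformed:.0f} to 1")         # 5 to 1

    # "Conformed at least once" is the complement of "never conformed";
    # "independent at least once" is the complement of "always conformed".
    print(f"Conformed at least once: {1 - never_conformed:.0%}")    # 75%
    print(f"Independent at least once: {1 - always_conformed:.0%}") # 95%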

Situational, cultural, and gender factors that impact conformity. Asch and other conformity researchers have found many situational factors that affect whether we conform. Let’s consider three. (1) Unanimity of the group is important. It’s difficult to be a minority of one, but not two. For example, Asch found that the amount of conformity drops considerably if just one of the confederate participants gives an answer—right or wrong—that differs from the rest of the group. (2) The mode of responding (voting aloud versus secret ballot) is also important. In Asch’s study, if the actual participant did not have to respond aloud after hearing the group’s responses, but rather could write his response, the amount of conformity dropped dramatically. So, in the various groups to which you belong, be sure to use secret ballots when voting on issues if you want the true opinions of your group members. (3) Finally, more conformity is observed from a person who is of lesser status than the other group members or who is attracted to the group and wants to be a part of it. These situational factors are especially effective in driving conformity when there is a probationary period for attaining group membership.

Cultural factors also seem to impact the amount of conformity that is observed. As mentioned earlier, Bond and Smith (1996) conducted a meta-analysis of 133 conformity studies drawn from 17 countries using an Asch-type line-length judgment task to investigate whether the level of conformity had changed over time and whether it is related cross-culturally to individualism–collectivism. Broadly defined, individualism emphasizes individual needs and achievement. Collectivism, in contrast, emphasizes group needs, thereby encouraging conformity and discouraging dissent with the group. Analyzing just the studies conducted in the United States, they found that the amount of conformity had declined since the 1950s, paralleling the change in our culture toward more individualism. Similarly, they found that collectivist countries (e.g., Hong Kong) tended to show higher levels of conformity than individualist countries (e.g., the United States). Thus, whereas Asch’s basic conformity findings have been replicated in many different countries, cultural factors do play a role in determining the amount of conformity that is observed.

In their meta-analysis, Bond and Smith (1996) also found evidence for gender differences in conformity. They observed a higher level of conformity for female participants, which is consistent with earlier reviews of conformity studies (e.g., Eagly & Carli, 1981). They also found that this gender difference in conformity had not narrowed over time. Mori and Arai (2010) recently replicated this gender difference finding, using a very clever presentation technique, called the fMORI technique, that eliminates the need for confederates. This technique allowed the researchers to present different stimuli to the minority participants and the majority participants on critical trials without their awareness. The top part of the standard line appeared in either green or magenta, so that participants wearing different types of polarizing sunglasses, which filtered out either the green or the magenta, would see the line as longer or shorter than the other participants did. Mori and Arai’s study, incorporating a new presentation technique, testifies to the continuing interest in Asch’s conformity research, which is more than a half-century old.

Why We Comply

compliance Acting in accordance with a direct request from another person or group.

Conformity is a form of social influence in which people change their behavior or attitudes in order to adhere to a group norm. Compliance is acting in accordance with a direct request from another person or group. Think about how often others—your parents, roommates, friends, salespeople, and so on—make requests of you. Social psychologists have identified many different techniques that help others to achieve compliance with such requests. Salespeople, fundraisers, politicians, and anyone else who wants to get people to say “yes” use these compliance techniques. After reading this section, you should be much more aware of how other people—especially salespeople—attempt to get you to comply with their requests. Of course, you’ll also be better equipped to get other people to comply with your requests. As we discuss these compliance techniques, note how each one involves two requests, and how it is the second request with which the person wants compliance. We’ll start our discussion with a compliance technique that you have probably encountered—the foot-in-the-door technique.

foot-in-the-door technique Compliance to a large request is gained by preceding it with a very small request.

The foot-in-the-door technique. In the foot-in-the-door technique, compliance to a large request is gained by preceding it with a very small request. The tendency is for people who have complied with the small request to comply with the next, larger request. The principle is simply to start small and build. One classic experimental demonstration of this technique involved a large, ugly sign about driving carefully (Freedman & Fraser, 1966). People were asked directly if this ugly sign could be put in their front yards, and the vast majority of them refused. However, the majority of the people who had complied with a much smaller request two weeks earlier (for example, to sign a safe-driving petition) agreed to have the large, ugly sign put in their yard. The smaller request had served as the “foot in the door.”

In another study, people who were first asked to wear a pin publicizing a cancer fund-raising drive and then later asked to donate to a cancer charity were far more likely to donate than were people who were asked only to contribute to the charity (Pliner, Hart, Kohl, & Saari, 1974). Why does the foot-in-the-door technique work? Its success seems to be partially due to our behavior (complying with the initial request) affecting our attitudes: we become more positive about helping and come to view ourselves as generally charitable people. In addition, once we have made a commitment (such as signing a safe-driving petition), we feel pressure to remain consistent with this earlier commitment (and so agree to put up the large, ugly sign).

This technique was used by the Chinese Communists during the Korean War to help brainwash prisoners of war about Communism (Ornstein, 1991). Many prisoners returning after the war had praise for the Chinese Communists. This attitude had been cultivated by first having the prisoners do small things, such as writing out questions and the pro-Communist answers to them (which they might just copy from a notebook), and then later having them write essays, in the guise of students, summarizing the Communist position on various issues such as poverty. Just as the participants’ attitudes changed in the Freedman and Fraser study and they later agreed to put the big, ugly sign in their yard, the POWs became more sympathetic to the Communist cause. The foot-in-the-door technique is a very powerful one. Watch out for compliance requests of increasing size. Say no before it is too late to do so.

door-in-the-face technique Compliance is gained by starting with a large, unreasonable request that is turned down and following it with a more reasonable, smaller request.

The door-in-the-face technique. The door-in-the-face technique is the opposite of the foot-in-the-door technique (Cialdini, Vincent, Lewis, Catalan, Wheeler, & Darby, 1975). In the door-in-the-face technique, compliance is gained by starting with a large, unreasonable request that is turned down and following it with a more reasonable, smaller request. The person who is asked to comply appears to be slamming the door in the face of the person making the large request. It is the smaller request, however, that the person making the two requests wants all along. For example, imagine that one of your friends asked you to watch his pet for a month while he is out of town. You refuse. Then your friend asks for what he really wanted, which was for you to watch the pet over the following weekend. You agree. What has happened? You’ve succumbed to the door-in-the-face technique.

The success of the door-in-the-face technique is probably due to our tendency toward reciprocity, making mutual concessions. The person making the requests appears to have made a concession by moving to the much smaller request. Shouldn’t we reciprocate and comply with this smaller request? Fear that others won’t view us as fair, helpful, and concerned for others likely also plays a role in this compliance technique’s success. The door-in-the-face technique even seems to have been involved in G. Gordon Liddy getting the Watergate burglary approved by the Committee for the Re-election of the President, abbreviated CRP, but often mocked by the acronym CREEP (Cialdini, 1993). The committee approved Liddy’s plan with a bare-bones $250,000 budget, after it had rejected plans with proposed budgets of $1 million and $500,000. The only committee member who opposed acceptance had not been present for the two previous, more costly proposal meetings. Thus, he was able to see the irrationality of the plan and was not subject to the door-in-the-face reciprocity influence felt by the other committee members.

The low-ball technique. Consider the following scenario (Cialdini, 1993). You go to buy a new car. The salesperson gives you a great price, much better than you ever imagined. You go into the salesperson’s office and start filling out the sales forms and arranging for financing. The salesperson then says that before completing the forms, she forgot that she has to get approval from her sales manager. She leaves for a few minutes and returns looking rather glum. She says, regretfully, that the sales manager won’t let her give you that great price you thought you had; the price has to be higher. What do most people do in this situation? You are probably thinking that you wouldn’t buy the car. However, research on this compliance tactic, called the low-ball technique, indicates that it does work—people buy the car at the higher price (Cialdini, Cacioppo, Bassett, & Miller, 1978).

low-ball technique Compliance to a costly request is gained by first getting compliance to an attractive, less costly request but then reneging on it.

In the low-ball technique, compliance to a costly request is achieved by first getting compliance to an attractive, less costly request and then reneging on it. This is similar to the foot-in-the-door technique in that the second, more costly request is the one desired. In the low-ball technique, however, the first request is one that is very attractive to you. You are not granting a small favor (as in the foot-in-the-door technique), but rather getting a good deal. However, the “good” part of the deal is then taken away. Why does the low-ball technique work? The answer is that many of us feel obligated to go through with the deal after we have agreed to the earlier deal (request), even if the deal has changed for the worse. This is similar to the pressure to remain consistent with an earlier commitment that helps drive the foot-in-the-door technique. So remember, if somebody tries to use the low-ball technique on you, walk away. You are not obligated to comply with the new request.

that’s-not-all technique Compliance to a planned second request with additional benefits is gained by presenting this request before a response can be made to a first request.

The that’s-not-all technique. There’s another compliance technique, which is often used in television infomercials. Just after the price for the product is given and before you can decide yes or no about it, the announcer says, “But wait, that’s not all, there’s more,” and the price is lowered or more merchandise is included, or both, in order to sweeten the deal. Sometimes an initial price is not even given. Rather, the announcer says something like, “How much would you pay for this incredible product?” and then goes on to sweeten the deal before you can answer. As in the low-ball technique, the final offer is the one that was planned from the start. However, you are more likely to comply and take the deal after all of the buildup than if this “better” deal were offered directly (Burger, 1986). This technique is called the that’s-not-all technique—to gain compliance, a planned second request with additional benefits is made before a response to a first request can be made. Like the door-in-the-face technique, this technique is also used by salespeople. For example, before you can answer yes or no to a price offered by a car salesperson, he throws in some “bonus options” for the car. As in the door-in-the-face technique, reciprocity is at work here. The seller has made a concession (the bonus options), so shouldn’t you reciprocate by taking the offer, complying? We often do.

In summary, each of these compliance techniques involves two requests (see Table 9.1). In the foot-in-the-door technique, a small request is followed by a larger request. In the door-in-the-face technique, a large request is followed by a smaller request. In the low-ball technique, an attractive first request is taken back and followed by a less-attractive request. In the that’s-not-all technique, a more attractive request is made before a response can be given to an initial request. In all cases, the person making the requests is attempting to manipulate you with the first request. It is the second request for which compliance is desired. The foot-in-the-door and low-ball techniques both lead to commitment to the first request, with the hope that the person will feel pressure to remain true to his initial commitment and accede to the second request. The other two techniques involve reciprocity. Once the other person has made a concession (moving to the smaller request in the door-in-the-face technique) or done us a favor (offering an even better deal in the that’s-not-all technique), we think we should reciprocate and accede to the second request.

Table 9.1 | Four Compliance Techniques
Technique 1st Request 2nd Request Major Reason for Compliance
Foot-in-the-door Small Larger Consistency
Door-in-the-face Large Smaller Reciprocity
Low-ball Attractive Less attractive Consistency
That’s-not-all Attractive More attractive Reciprocity

Why We Obey

obedience Following the commands of a person in authority.

Compliance is agreeing to a request from a person. Obedience is following the commands of a person in authority. Obedience is sometimes constructive and beneficial to us. It would be difficult for a society to exist, for instance, without obedience to its laws. Young children need to obey their caretakers for their own well-being. Obedience can also be destructive, though. There are many real-world examples of its destructive nature. Consider Nazi Germany, in which millions of innocent people were murdered, or the My Lai massacre in Vietnam, in which American soldiers killed hundreds of innocent children, women, and old people. In the My Lai massacre, the soldiers were ordered to shoot the innocent villagers, and they did. The phrase “I was only following orders” usually surfaces in such cases. When confronted with these atrocities, we wonder what type of person could do such horrible things. At times, however, it may be the situational social forces, and not the person, that should bear more of the responsibility for the actions. Just as we found that situational factors can lead us to conform and comply, can we find that such social forces can sometimes lead us to commit acts of destructive obedience?

image
Stanley Milgram
The Graduate Center, CUNY

Milgram’s basic experimental procedure. The largest empirical study examining the possibility of social forces as causes of destructive obedience was Stanley Milgram’s obedience study done at Yale University in the early 1960s (Milgram, 1963, 1965, 1974). It is arguably the most famous and most controversial study in psychology. After over 50 years, the debate about the ethical, methodological, and theoretical issues of the study shows no signs of abating. The fascination with Milgram’s study outside of psychology has also continued, as evidenced by the recent release of Experimenter, a 2015 theatrical film about Milgram and the obedience study that starred Peter Sarsgaard and Winona Ryder. This is not the first film made about this study. The Tenth Level, a 1976 CBS television movie starring William Shatner, was also about Milgram and the obedience experiments. Because of the study’s notoriety both within and outside psychology, we will describe Milgram’s experimental procedure and findings in more detail than usual. We start with a description of Milgram’s basic experimental procedure.

Let’s consider Milgram’s basic experimental procedure from the perspective of a participant in the study. Imagine that you have volunteered to be in an experiment on learning and memory. You show up at the assigned time and place, where you encounter two men, the experimenter and another participant, a middle-aged man. The experimenter explains that the study is examining the effects of punishment by electric shock on learning, specifically learning a list of word pairs (for example, blue–box). The teacher will read the list of word pairs to the learner, and then the testing will begin. The learner will have to indicate, for each of the first words in the word pairs, which of four words had originally been paired with it on the list. The learner will press one of four switches, which will light up one of the four answer boxes located on top of the shock generator. The teacher will then inform the learner if he is correct. If not correct, the teacher will tell the learner the correct answer, shock the learner via the shock generator, and go on to the next test pair. The experimenter further explains that the two participants will draw slips for the roles of teacher and learner. The other participant draws “learner,” making you the teacher. You accompany the learner to an adjoining room where he is strapped into a chair with one arm hooked up to the shock generator in the other room. The shock levels in the experiment will range from 15 to 450 volts. The experimenter explains that high levels of shock need to be used in order for the study to be a valid test of the effectiveness of punishment. The experimenter gives you a sample shock of 45 volts so that you have some idea of the intensity of the various shock levels.

You return to the other room with the experimenter and sit down at the shock generator. It is big—3 feet long with a height and depth of 16 inches. It has an array of buttons, dials, and switches. There is a switch for each level of shock, starting at 15 volts and going up to 450 volts in 15-volt increments. There are also some labels below the switches—“Slight Shock,” “Very Strong Shock,” “Danger: Severe Shock,” and, under the last two switches, “XXX” in red. The experimenter reminds you that when the learner makes a mistake on the word-pair task, you are to administer the shock by pushing the appropriate switch. You are to start with 15 volts for the first wrong answer and increase the shock level by 15 volts for each one after that.

After some preliminary practice trials to ensure that you and the learner understand the task, the experiment begins. The learner makes some errors, and you administer the appropriate shock each time he does. Nothing else happens except for a few groans from the learner until, at 120 volts, the learner cries out that the shocks really hurt. As the shock level increases, you hear him cry out, and his screams escalate with the increasing voltage. At higher levels, he protests and says that he no longer wants to participate and that he isn’t going to respond anymore. After 330 volts, he doesn’t respond. You turn to the experimenter to see what to do. The experimenter says to treat a nonresponse as a wrong answer and to continue with the experiment. The learner never responds again.

This is a summary of the situation that Milgram’s participants confronted. What would you do? If you are like most people, you say you would stop at a rather low level of shock. Milgram asked various types of people (college students, nonstudent adults, and psychiatrists) what they thought they and other people would do. Inevitably, the response was that they would stop at a fairly low level of shock, less than 150 volts, that other people would also do so, and that virtually no one would go to the end of the shock generator. The psychiatrists said that maybe one person in a thousand would do so.

image
These photos from Milgram’s obedience studies show the shock generator that the teacher used and the learner being strapped into a chair and fitted with electrodes.
From the film Obedience © 1968 by Stanley Milgram, © renewed 1993 by Alexandra Milgram; and distributed by Alexander Street Press

Milgram’s findings. As you have probably guessed, Milgram (1963) didn’t find what these people predicted. For the experimental conditions just described, almost two out of every three participants (62.5%) continued to obey the experimenter and administered the maximum possible shock (450 volts). Milgram also found that the experimental situation generated considerable tension and emotional strain in the participants. Many participants showed signs of such tension—sweating, trembling, stuttering, biting their lips, and so on. Milgram attributed this tension to the conflict between our ingrained disposition not to harm others and the equally compelling tendency to obey authority. Surprisingly, Milgram had observed a similar 65% obedience rate in an experiment he conducted earlier in which the learner made no vocal protests but rather just pounded the laboratory wall in protest at 300 volts and stopped pounding and responding after 315 volts. Although a 65% obedience rate seems startlingly high for such limited learner feedback, Milgram found in a pilot study that without any auditory input (no vocal protests or pounding the wall), virtually every participant continued to obey the experimenter and administered the maximum possible shock (Milgram, 1965).

Because he had to change the laboratory in which he was conducting the obedience study after the first four experiments, Milgram felt it necessary to conduct a replication of the conditions we initially described, with different participants, to determine whether the new laboratory, which was much more modest and located in the basement of the building, had any impact on his findings (Milgram, 1974). He added one small change to the script—the learner mentions a slight heart condition before the experiment and again during his protests (see Table 9.2). Milgram thought that this heart problem might provide additional justification for disobeying. It did not. He replicated his earlier 62.5% finding—this time 65% administered the maximum shock. The mention of the heart condition had no significant impact on the obedience rate. There is a sad irony here in that the learner actually did have a heart problem and died from it about three years later (Blass, 2009). Before changing laboratories, Milgram used the experiment with no mention of a heart condition (what he termed the Voice-Feedback experiment) as the baseline for two more experiments; thereafter, he used the replication with the heart condition conducted in the new laboratory (what he termed the New Baseline experiment) as the baseline for the remaining 18 experiments. Because the obedience rates for the two baseline experiments were not significantly different and the New Baseline rate (65%) served as the baseline for almost all of Milgram’s other experiments, we will refer to it as the baseline obedience rate in the remainder of our discussion of Milgram’s study.

Table 9.2 | Examples of the Learner’s Protests in Milgram’s New Baseline Experiment in Which the Learner Has a Heart Condition
120 volts Ugh! Hey, this really hurts.
150 volts Ugh!!! Experimenter! That’s all. Get me out of here. I told you I had heart trouble. My heart’s starting to bother me now. Get me out of here, please. My heart’s starting to bother me. I refuse to go on. Let me out.
210 volts Ugh!! Experimenter! Get me out of here. I’ve had enough. I won’t be in the experiment any more.
270 volts (Agonized scream.) Let me out of here. Let me out of here. Let me out of here. Let me out. Do you hear? Let me out of here.
300 volts (Agonized scream.) I absolutely refuse to answer any more. Get me out of here. You can’t hold me here. Get me out. Get me out of here.
315 volts (Intensely agonized scream.) I told you I refuse to answer. I’m no longer part of this experiment.
330 volts (Intense and prolonged agonized scream.) Let me out of here. Let me out of here. My heart’s bothering me. Let me out, I tell you. (Hysterically) Let me out of here. Let me out of here. You have no right to hold me here. Let me out! Let me out! Let me out! Let me out of here! Let me out! Let me out!
For the complete schedule of protests, see Milgram (1974), pp. 56–57.
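
A quick arithmetic aside shows just how close the two baseline results actually are. The sketch below converts the two rates into participant counts, assuming 40 participants per condition (stated here as an illustrative assumption about condition size):

    # Convert the two baseline obedience rates into participant counts,
    # assuming 40 participants per condition (illustrative assumption).
    n = 40
    voice_feedback_rate = 0.625   # original baseline, no heart condition
    new_baseline_rate = 0.65      # replication with heart condition

    print(round(voice_feedback_rate * n))   # 25 fully obedient participants
    print(round(new_baseline_rate * n))     # 26 fully obedient participants

On this assumption, the two baseline experiments differ by a single fully obedient participant, which makes clear why the difference between them was not statistically significant.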

Do you realize how Milgram used the foot-in-the-door technique in achieving a high rate of obedience (Gilbert, 1981)? He had the participant start by administering very small shocks (beginning at 15 volts) and increased the level slowly (in 15-volt increments). The learner didn’t protest these mild, early shocks. The teacher had already obeyed several times by the time the learner started his protests (at 120 volts), and by the time the shock level was high, the teacher had obeyed numerous times. Milgram’s results might have been very different if he had had participants start at a high level of shock or if the learner had protested the first few small shocks.

There are some additional procedural aspects of Milgram’s experiments that you should know before we discuss some of Milgram’s other findings. All of the experiments were conducted at Yale University except for two conducted in Bridgeport, Connecticut, and, except for one experiment, the participants were men, from age 20 to 50, who were volunteers solicited from the New Haven or Bridgeport communities and paid $4, plus 50¢ bus fare, for their participation. The drawing was rigged so that the true participants always drew the role of teacher. The teacher only thought he was administering the shocks. The learner was never actually shocked. The only real shock administered was the sample shock given to the teacher before the experiment began. The learner and the experimenter were local men whom Milgram had hired to play those roles. In actuality, the experimenter was a 31-year-old high school biology teacher, and the learner was a 47-year-old accountant. Milgram personally trained them for weeks prior to the study to make sure that they were as convincing as possible in their roles. For standardization purposes, however, the learner’s responses to the shocks were prerecorded on tape, with each protest coordinated to a particular voltage level on the shock generator, and the learner’s responses to the test items followed a preset pattern of right and wrong answers.

In addition, if at any time the teacher protested or expressed doubt about continuing, the experimenter was supposed to use a series of four standardized prods to encourage the participant to continue. These prods were to be delivered in the following sequence: “Please continue,” or “Please go on”; “The experiment requires that you continue”; “It is absolutely essential that you continue”; and “You have no other choice, you must go on.” If after the fourth prod, the participant refused to continue, the participant would be classified as disobedient, the experimental session terminated, and the voltage at which the participant stopped noted. There were also two special prods to be used in special situations. First, if the participant asked if the learner was liable to suffer permanent physical injury, the experimenter was to say: “Although the shocks may be painful, there is no permanent tissue damage, so please go on.” If necessary, this would be followed by prods 2, 3, and 4. Second, if the participant said that the learner did not want to go on, the experimenter was to reply: “Whether the learner likes it or not, you must go on until he has learned all the word pairs correctly. So please go on.” Again, if necessary, this would be followed by prods 2, 3, and 4.
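
Because the prod procedure amounts to a fixed decision rule, it can be summarized in a short sketch. The Python code below is only our illustration of the protocol as described above; the function and variable names are ours, not Milgram’s:

    # Illustrative sketch of the standardized prod protocol as described above.
    PRODS = [
        "Please continue.",  # or "Please go on."
        "The experiment requires that you continue.",
        "It is absolutely essential that you continue.",
        "You have no other choice, you must go on.",
    ]

    def handle_protest(teacher_continues_after, current_voltage):
        """Deliver the prods in sequence when the teacher balks. If the
        teacher still refuses after the fourth prod, the session ends,
        the teacher is classified as disobedient, and the voltage at
        which he stopped is recorded."""
        for prod in PRODS:
            if teacher_continues_after(prod):
                return {"disobedient": False, "stopped_at": None}
        return {"disobedient": True, "stopped_at": current_voltage}

    # Example: a teacher who resumes after hearing the second prod.
    answers = iter([False, True])
    print(handle_protest(lambda prod: next(answers), 150))
    # {'disobedient': False, 'stopped_at': None}

Milgram reported that the prod sequence was begun anew each time the teacher balked, so in an actual session this decision rule would be applied at every protest (the two special prods noted above are omitted here for simplicity).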

Overall, Milgram conducted 23 experiments with a total of 780 participants. Perry (2013) provides a chronological list with descriptions of all of Milgram’s experiments. The other 20 experiments that we haven’t discussed examined variants of the baseline condition to determine the impact of various situational factors (social forces) on the obedience rate observed. Remember, the baseline obedience rate was 65%. Thus, the impact of a situational factor is assessed by how much it increases or decreases this 65% baseline obedience rate.

According to Milgram, an important situational factor is the physical presence of the experimenter (the person with authority). He found that if the experimenter left the laboratory and gave his commands over the telephone, the obedience rate dropped to 20.5%. The closeness of the teacher and the learner is also important (as the findings on auditory input indicated). Remember, virtually every participant administered the maximum shock when the learner did not vocally protest or pound the wall, but only about two of every three participants did so when the teacher could hear the learner’s protests or pounding. In another experiment, Milgram made the teacher and learner even closer by putting them in the same room instead of different rooms, and the obedience rate dropped to 40%. It dropped even further, to 30%, when the teacher had to administer the shock directly by forcing the learner’s hand onto a shock plate. In short, obedience decreases as the teacher and learner get physically closer to each other. Interestingly, though, the maximum obedience rate doesn’t drop to 0% even when they touch; it is still 30%.

image
The teacher is administering the shock to the learner by directly forcing the learner’s arm on the shock plate. Even in this situation, maximal obedience was still 30%.
From the film Obedience © 1968 by Stanley Milgram, © renewed 1993 by Alexandra Milgram; and distributed by Alexander Street Press

To check whether the location of the experiments, prestigious Yale University, contributed to the high obedience rate in the baseline condition, Milgram ran the baseline condition in a rundown office building in nearby Bridgeport, Connecticut, completely dissociated from Yale. The obedience rate went down, to 47.5%. Hence, Milgram concluded that the prestige and authority of the university setting did contribute some to the baseline obedience rate, but not nearly as much as the presence of the experimenter or the closeness of the teacher and the learner. None of these factors, however, lowered the obedience rate below 20%. It dropped to 10%, though, in an experiment with two confederate teachers who modeled disobedience for the participant teacher. To get the obedience rate to 0%, Milgram had to set up a situation with two experimenters who at some point during the experiment disagreed. One said to stop the experiment, while the other said to continue. In this case, when one of the people in authority said to stop, all of the participant teachers stopped.

What about getting the obedience rate to increase from that observed in the baseline condition? Milgram tried to do that by taking the direct responsibility for shocking away from the teacher. Instead, the teacher only pushed the switch on the shock generator to indicate to another teacher (another confederate) in the room with the learner how much shock to administer. With this direct responsibility for shocking the learner lifted off of their shoulders, almost all of the participants (92.5%) obeyed the experimenter and administered the maximum shock level. This finding and all of the others that we have discussed are summarized in Table 9.3, so that you can compare them.

Table 9.3 | Results for Some of Milgram’s Experimental Conditions
Experimental Conditions Percent of Maximum Obedience Observed
Teacher and learner in different rooms, no auditory input (pilot study) 100
Teacher does not have direct responsibility for administering shock 92.5
Teacher and learner in different rooms and learner pounds wall at 300 volts and stops pounding and responding after 315 volts 65
Baseline condition—teacher and learner in different rooms, escalated vocal protests, and nonresponding after 330 volts 62.5; 65 in replication with learner mentioning heart condition before and during the experiment
Baseline condition but female participants 65
In office building in Bridgeport, Connecticut 47.5
Teacher and learner in same room 40
Teacher and learner in same room and teacher has to force learner’s hand onto shock plate 30
Experimenter not present 20.5
Two models of disobedience 10
Two experimenters who disagree 0
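
To make the comparisons in Table 9.3 concrete, here is a brief Python sketch that restates each condition’s obedience rate as a change from the 65% baseline. The rates are taken directly from the table; the condition labels are our abbreviations of its row descriptions:

    # Obedience rates from Table 9.3, compared against the 65% baseline.
    BASELINE = 65.0

    conditions = {
        "No auditory input (pilot)": 100.0,
        "No direct responsibility for shock": 92.5,
        "Wall pounding only": 65.0,
        "Female participants": 65.0,
        "Bridgeport office building": 47.5,
        "Teacher and learner in same room": 40.0,
        "Hand forced onto shock plate": 30.0,
        "Experimenter not present": 20.5,
        "Two models of disobedience": 10.0,
        "Two disagreeing experimenters": 0.0,
    }

    for label, rate in conditions.items():
        delta = rate - BASELINE
        print(f"{label}: {rate:.1f}% ({delta:+.1f} points vs. baseline)")

For example, the condition in which the teacher bore no direct responsibility prints as 92.5% (+27.5 points vs. baseline), and the experimenter-absent condition prints as 20.5% (-44.5 points vs. baseline).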

Following his first publication about the obedience experiments, in which he described the findings for the baseline condition with no mention of a heart condition (Milgram, 1963), Milgram was surprised by the first published response from his peers (Baumrind, 1964) because it focused on his treatment of his participants and his methodology, not on his results. Baumrind severely criticized Milgram for his unethical treatment of the participants. She argued that he inflicted emotional harm, possibly irreparable, on the participants by placing them in an extremely traumatic situation in which they believed they were harming another human being, and that the gain in knowledge from Milgram’s study did not outweigh the distress his participants had endured. In his rebuttal to Baumrind (1964), Milgram (1964) reported that when the experimental session was over, the participants were immediately debriefed and told that the learner was not actually shocked. He also reported the findings of a questionnaire sent to all experimental participants about a year later along with a study report, asking them to reflect on their experience during the experiment. Milgram reported that most participants in the obedience experiments had positive feelings about their participation. This positive reaction of most participants toward the experiments probably seems odd to you, given the aversive nature of their experiences in the study. Why did they later feel so positive about such a negative experience? We will give an explanation of these seemingly contradictory questionnaire data later when we discuss the engaged-followership reinterpretation of Milgram’s findings. Milgram also reported that 9 months after the experiments, he had a psychiatrist interview 40 participants with the aim of identifying any who might have been harmed by the experiment, and that the psychiatrist found no evidence of harm in any of the participants he interviewed. Milgram’s reply, however, didn’t satisfy Baumrind and other critics of his methods and treatment of the participants, and this ethical controversy continued into the 1970s and beyond.

Baumrind (1964) also argued that Milgram’s findings did not have external validity (the extent to which the results of a study can be generalized to other situations and to other people) and thus, contrary to Milgram’s claims, could not be used to explain real-world atrocities, such as the Holocaust. Orne and Holland (1968; cf. Milgram’s rebuttal, 1972) agreed with Baumrind about the lack of external validity. They also argued that, given the experimental methodology used by Milgram, it was likely that many of Milgram’s participants did not believe that they were really administering shocks to the learner, resulting in the high rate of obedience that was observed. This would indicate that there was also a lack of internal validity in that Milgram wasn’t studying what he thought he was studying. In their opinion, Milgram was too concerned with participants’ behavior and not enough with their perception of the situation. Participants in an experiment are concerned with being good subjects and acting in a manner that they perceive is expected of them. Orne and Holland cited data collected by Orne and Evans (1965) indicating that the vast majority of participants (84%) would comply with an experimenter’s instructions to perform dangerous tasks, such as retrieving a coin from what appeared to be nitric acid, if they thought that they were participating in an experiment, because they assumed that things were not at all as they appeared to be. However, participants who were not told they were participating in an experiment declined to perform these tasks. Hence, the participants in Milgram’s study knew they were participating in a scientific experiment and likely acted as they did because they too assumed that things were not as they appeared and that the learner was not actually being shocked. This is a trust issue—the participants trusted that the experimenter and Yale University would not let the learner be seriously harmed. Laurent (1987) pointed out a congruent finding in a Milgram replication at the prestigious Max Planck Institute in Germany—“the subjects . . . seem to have felt that the Max Planck Institute would not let anything dreadful happen” (Mantell & Panzarella, 1976, p. 244). Brannigan, Nicholson, and Cherry (2015) recently reiterated this criticism and echoed Orne and Holland’s argument: “The key point is that participants knew they were participating in a psychology experiment—a social space akin to a magician’s stage, one that licenses all sorts of atypical behavior and unexpected occurrences, but which also brings with it the strong expectation that nobody will be harmed. Many participants indicated that this awareness influenced their actions . . .” (p. 556, italics in original). Like the ethical criticisms, criticisms concerned with the lack of internal or external validity continued for decades (e.g., Fenigstein, 2015; Laurent, 1987; Lutsky, 1995; Mixon, 1976; Parker, 2000; Patten, 1977). Now, over 50 years later, new ethical and methodological criticisms have surfaced along with some new discoveries about Milgram’s experiments and his reporting of them. We discuss these next.

Recent revelations about Milgram’s obedience experiments. Recent analyses of the materials in the Milgram archives at Yale’s Sterling Memorial Library related to the obedience experiments have resulted in serious criticisms of both Milgram’s experiments and his reporting of them. These archival materials include audio recordings of the actual experiments; transcripts of participants’ conversations with the psychiatrist; participants’ questionnaire responses; and the notes, documentation, and correspondence accumulated during the obedience experiments. Gina Perry spent more than 4 years not only analyzing these archival materials but also conducting personal interviews with former participants, experts familiar with the research, and relatives of the men who served as the experimenter and learner in the experiments. Her research resulted in a book, Behind the Shock Machine, in which she summarizes her findings and criticisms of both Milgram and the obedience experiments (Perry, 2013). We will discuss Perry’s main criticisms and incorporate some of the criticisms of other archival researchers within this discussion. We’ll start with her revelation about the debriefing procedure that Milgram reported that he used.

Perry found that the archival materials reveal a somewhat different account of the debriefing than was reported by Milgram. Perry discovered that the majority of participants were not appropriately debriefed in a timely manner as Milgram claimed in his reply to Baumrind (Milgram, 1964) or later in his 1974 book that summarized many of the obedience experiments. According to Perry, the majority of participants did not learn that they did not actually shock the learner until almost a year later. Based on the participants’ comments on the questionnaire that Milgram had sent to them 9 months after the experiments, about three-fourths of the participants were not told the full story until they received the study report that Milgram sent to them with the questionnaire, but, as Perry points out, some may not have read or even received the report and thus would have never known that the learner was not actually shocked. Perry is not the only researcher criticizing Milgram’s inadequate debriefing of participants. Nicholson (2011), for example, analyzed the Milgram archival materials and, like Perry, concluded that Milgram misrepresented the extent and efficacy of his debriefing procedures, the risk posed by the experiment, and the harm done to his participants. According to Nicholson, Milgram most likely deliberately misrepresented the post-experimental debriefing in his published work to protect his credibility as a responsible researcher and the ethical integrity and possible future of the obedience experiments. Although we will never know with certainty why Milgram misrepresented the debriefing process, he clearly did so.

Perry’s analysis also uncovered another significant discrepancy between what Milgram reported in his publications and what the archival materials reveal actually happened. Remember, according to Milgram, if at any time the teacher protested or expressed doubt about continuing, the experimenter was supposed to use a series of four standardized prods to encourage the participant to continue, and if after the fourth prod the participant refused to continue, the session was terminated and the participant classified as disobedient. Perry analyzed the archival audiotapes for two experiments, an early one in the experimental sequence (the third experiment, in which the teacher and learner were in the same room) and a later one (the twentieth experiment, in which the participants were women). The audiotapes tell a very different story about the experimenter’s use of prods than Milgram told. The experimenter definitely did not follow the controlled script for using the four prods and took on a much more active role in getting participants to continue in the experiment. Perry found that in the early experiment, the experimenter followed the script for the first part of the experiment, but toward the end he was straying far from the script and was urging the teacher time and time again to continue, saying that they must go on. This off-script prodding was even more prevalent in the later experiment with women as teachers. By this time in the experimental sequence, the experimenter was adept at applying pressure and coercing the teachers. He would parry the teachers’ protests, escalate the pressure by inventing more coercive prods, and engage in arguments with the teachers about continuing the experiment. For example, instead of ending the session after the fourth standard prod was given, the experimenter insisted that one woman continue 26 times, and he insisted that other women continue 14 times, 11 times, and 9 times. Perry concluded that there was definitely a mismatch between Milgram’s description of the experimenter’s use of prods and the audiotape evidence of what actually transpired.

experimenter bias A process in which the person performing the research influences the results in order to portray a certain outcome.

Gibson (2013) also conducted an analysis of the audiotapes of two of Milgram’s experiments (the second experiment, which was the original baseline condition, and, like Perry, the later experiment with women as participants), and his conclusions agree with Perry’s. Gibson draws attention to the verbal exchanges between the experimenter and the teacher, especially the experimenter’s creative arguments designed to convince and persuade the participants to continue the experiment, leading to radical departures from the supposedly standardized use of the four prods. Likewise, Russell (2009, p. 182), who studied the audiotapes from the two experiments conducted in Bridgeport, points out that the experimenter often invented his own, far more stressful prods and described the experimenter’s invented prods as “great feats of bottom-up innovation . . . to bring about what he sensed his boss desired.” Thus, the experimenter’s deviations from the script may have been driven by experimenter bias (a process in which the person performing the research influences the results in order to portray a certain outcome). Perry also noted that Milgram appears to have tacitly given the experimenter license to improvise, because Milgram watched a number of the experimental conditions through a one-way mirror. In a similar vein, Russell pointed out that, given the regularity with which the experimenter strayed from the script and the fact that Milgram seems never to have attempted to correct him, Milgram probably approved of the experimenter’s actions. In sum, the experimenter’s prodding of the participants was clearly not standardized as Milgram claimed, and his deviations from the standardized prodding protocol reported by Milgram may have been driven by experimenter bias and tacitly approved by Milgram. Thus, we need to add a caveat to the findings given in Table 9.3—these findings were likely influenced to varying extents by experimenter bias.


Perry (2013) also found that Milgram did not report the findings of all 23 of the experiments that he conducted and thus may have selectively chosen to report only findings that supported his interpretation of the results. Of particular interest is the unreported Relationship experiment, which Russell (2014) has called possibly the most controversial experiment that Milgram conducted. It was the second experiment Milgram conducted in Bridgeport, Connecticut, and he never reported it. The participants were 20 pairs of men (one serving as teacher and the other as learner) who were related in some way or knew each other well (a relative, a close acquaintance, or a neighbor). After the learner was strapped in and the teacher and experimenter left the room, Milgram came in, explained the experimental ruse to the learner, and coached him on how to vocalize in response to the supposed shocks as the confederate learner had done in the baseline condition. There was one other important difference between this experiment and the others that Milgram conducted (Rochat & Blass, 2014): the learner’s protests were aimed at the teacher and not the experimenter. What happened? Did teachers inflict pain on a relative, friend, or neighbor? No, they did not. Milgram found a high rate of disobedience, 85%, demonstrating that when participants believed someone close to them was being hurt, they disobeyed. Perry argues that Milgram probably decided against publishing this result not only because it contradicted his overall emphasis on obedience in his publications, but also because it would be difficult to defend ethically, especially given the ethical firestorm that had arisen over the research that Milgram had reported in 1963.

Perry (2013; also see Parker, 2000) discovered evidence in the archives that a large number of participants had expressed doubts about the experimental setup and cover story in their responses to Milgram’s questionnaire. This agrees with Orne and Holland’s (1968) trust argument that we discussed earlier, which predicted that the obedience rate would be a function of the participants’ belief that the learner was actually being shocked. Perry (2013) discovered in the archival materials that Milgram had his research assistant, Taketo Murata, compile an analysis that examined this prediction, but chose not to publish it. The analysis was an experiment-by-experiment breakdown of the degree of shock given by participants who were certain that the learner was being shocked versus that given by participants who had doubts. In 18 of the 23 experiments, the participants who fully believed that the learner was being shocked gave lower levels of shock than the participants who expressed doubts. In addition, Murata found that in all 23 experiments, the participants most likely to disobey were those who said that they believed the learner was being shocked. The analysis also revealed that believability varied across conditions; interestingly, roughly two-thirds of the participants in the New Baseline experiment doubted that the shocks were real, which maps neatly onto the 65% obedience rate observed for this experiment. Thus, it is no surprise that Milgram decided not to publish Murata’s analysis, again seemingly reporting only findings that supported his interpretation of the results. Instead, using the belief-response data from the follow-up questionnaire that he sent out almost a year after the experiments were over, Milgram (1972, 1974) argued that three-quarters of his participants acted under the belief that they were administering painful shocks. However, as Perry and others before her (e.g., Parker, 2000; Patten, 1977) have pointed out, Milgram’s questionable numerical conclusion stems from his inclusion of the 24% of participants who had expressed some doubt about whether the learner was getting shocked. Perry argued that it was more truthful to say that half of the participants believed the shocks were real and that about two-thirds of those believers disobeyed the experimenter.
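For readers who like to see the arithmetic, the dispute can be sketched in a few lines of Python. This is our illustration, not an archival analysis: the 24% figure comes from the text above, while the “roughly half fully believed” figure is treated here as a round assumption.

```python
# A toy tally (illustrative assumptions, not the exact archival figures)
# showing how including the doubters changes the headline "believers" rate.
fully_believed = 0.50  # assumption: roughly half fully believed (Perry's reading)
some_doubt = 0.24      # proportion who expressed some doubt (cited above)

milgram_tally = fully_believed + some_doubt  # Milgram's inclusive count
perry_tally = fully_believed                 # Perry's stricter count

print(f"Inclusive tally: {milgram_tally:.0%}")  # 74%, i.e., about three-quarters
print(f"Strict tally:    {perry_tally:.0%}")    # 50%, i.e., half
```

The point is not the exact numbers but the bookkeeping choice: whether the doubters belong in the numerator determines whether the headline reads “three-quarters believed” or “half believed.”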


You may be wondering how Milgram responded to these recent criticisms. Sadly, we will never know what his response might have been; he died of a heart attack in 1984 at the age of 51. However, Diana Baumrind, Milgram’s first critic, has reviewed Perry’s book and concluded that, given Perry’s findings, Milgram’s version of the obedience experiments “can never again be accepted as the whole truth” (2015, p. 695). In addition to these recent criticisms, which challenge both the validity of Milgram’s obedience experiments and the ethics of his misrepresenting the experiments in his publications, a partial replication of Milgram’s New Baseline experiment has recently been conducted, and its findings have led to the conclusion that Milgram’s results reflected not participants’ obedience to the orders of an authority figure but rather their disobedience of such orders. Related to this conclusion, a new explanation of Milgram’s findings has been proposed. We discuss the replication and the new explanation next.

A recent replication and a new explanation of Milgram’s findings. The American Psychological Association tightened its ethical standards for research with the publication of its comprehensive “Ethical Principles in the Conduct of Research with Human Participants” in 1973. Under these standards, researchers now had to obtain informed consent from potential participants, which meant that participants had to be informed of the purpose of the experiment and what was involved so that they could weigh the risks before deciding whether to participate. Participants who gave their consent also retained the right to withdraw from an experiment after it had started. In addition, in 1975 the U.S. Department of Health, Education, and Welfare began requiring that all research with human participants be reviewed and approved by institutional review boards before being conducted. An institutional review board at a university or college is a committee of faculty and staff members that screens all research proposals involving human participants in order to ensure the well-being of the participants. Thus, Milgram-type obedience research, with its potential to cause harm to participants and its use of deception that prevented informed consent, was curtailed. There have been a half dozen or so replication studies of Milgram’s baseline condition conducted in other countries, such as Jordan and West Germany (for a summary, see Blass, 2004, Table C.1). Interestingly, Milgram’s findings were not replicated in the study conducted in the country most like the United States, Australia (Kilham & Mann, 1974). The overall obedience rate was only 28%, and a gender difference was observed: the obedience rate for men (40%) was significantly greater than that for women (16%). Recently, some researchers began to wonder whether Milgram’s findings could be replicated in the United States today. Aren’t we more aware now of the dangers of blindly following authority than people were in the early 1960s? If so, wouldn’t participants now disobey more often than Milgram’s participants did?


To answer such questions, Burger (2009) conducted a partial replication of Milgram’s New Baseline experiment. His participants were men and women who responded to newspaper advertisements and flyers distributed locally; their ages ranged from 20 to 81 years, with a mean age of 42.9 years. Obviously, some changes to ensure the welfare of the participants were necessary in order to obtain permission from the Santa Clara University Institutional Review Board to run the study (Burger, 2007). For example, potential participants whom a clinical psychologist deemed particularly vulnerable to stress were screened out. The main procedural change was that once a participant pressed the 150-volt switch and started to read the next test item, the experiment was stopped. The 150-volt point was chosen because in Milgram’s study, once participants went past 150 volts, the vast majority continued to obey up to the highest shock level. In a meta-analysis of data from eight of Milgram’s obedience experiments, Packer (2008) also found that the 150-volt point was the critical juncture for disobedience (the voltage level at which participants were most likely to disobey the experimenter), likely because it was at the 150-volt point that the learner began to verbally complain. Hence, in Burger’s study, it was reasonable to assume that the percentage of participants who went past 150 volts was a good estimate of the percentage that would have gone to the end of the shock generator. Of course, the experimenter also ended the experiment when a participant refused to continue after hearing all four of the experimenter’s prods. What do you think Burger found? Almost 67% of the men pressed the 150-volt switch, and about 73% of the women did so. Although these percentages have to be adjusted down slightly because not every participant in Milgram’s study who went past 150 volts obeyed maximally, these results are very close to Milgram’s finding of 65% obedience for both men and women in the baseline condition. Even with such adjustments, Burger’s findings indicate that people reacted in this laboratory obedience situation much as they did almost 50 years earlier in Milgram’s original study. Surprisingly, however, Burger did not conclude that his participants were displaying obedience, but rather disobedience (Burger, Girgis, & Manning, 2011). Why did he conclude this?
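The adjustment logic is a simple product of two proportions, and it can be sketched in a few lines of Python. This is our illustration, not Burger’s actual analysis; the conditional completion rate below is a placeholder assumption, not a figure reported by Burger.

```python
# A minimal sketch of the adjustment (not Burger's analysis): estimated full
# obedience = proportion passing 150 volts x assumed proportion of those
# passers who would have continued to the maximum shock level.

def estimated_full_obedience(passed_150: float, completion_rate: float) -> float:
    return passed_150 * completion_rate

men, women = 0.667, 0.727   # Burger's observed rates at the 150-volt point
assumed_completion = 0.90   # placeholder assumption, not a reported figure

print(f"men:   {estimated_full_obedience(men, assumed_completion):.0%}")    # 60%
print(f"women: {estimated_full_obedience(women, assumed_completion):.0%}")  # 65%
```

Under this placeholder assumption, the adjusted estimates land near Milgram’s 65%, which is the comparison the paragraph above is making.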

Burger (2009) pointed out that only the fourth prod, “You have no other choice, you must go on,” truly constitutes an order, and an analysis of the participants’ comments and reactions in his partial replication study (reported in Burger et al., 2011) found that this prod did not elicit any obedience: not a single participant continued after receiving it. This finding raises the question of whether the same thing occurred in Milgram’s experiments. Fortunately, Gibson (2013) did a rhetorical analysis of the archival recorded interactions between the experimenter and participants in two of Milgram’s experiments, providing us with an answer. Consistent with Burger’s conclusion, Gibson’s analysis revealed that the experimenter’s most order-like prods were overwhelmingly resisted by the participants. Thus, rather than showing that Milgram’s participants were obeying the orders of those in authority, Milgram’s experiments seem to provide evidence of the opposite: orders from an authority led to disobedience, and the obedience that Milgram observed was due to other factors.


Related to Burger’s conclusion, Alex Haslam, Stephen Reicher, and their colleagues have proposed that Milgram’s “obedient” participants were motivated not by orders but by appeals to science and that their behavior should be reconceptualized as “engaged followership” with the experimenter and the scientific community rather than as a product of blind obedience to authority (Haslam, Reicher, & Birney, 2014; Haslam, Reicher, Millard, & McDonald, 2015). On this view, the level of obedience in each of Milgram’s experiments depends on the extent to which participants accepted the experimenter’s scientific goals and on the leadership the experimenter exhibited in pursuing those goals, leading participants to identify with the experimenter and become engaged in helping him achieve them. Haslam and Reicher further propose that participants in Milgram’s experiments could instead identify with the learner rather than the experimenter, leading to “disobedient” behavior. The wrenching process of deciding which identification to make produces the anxiety and upset witnessed in Milgram’s participants, and which identification participants tend to make is determined mainly by which identification a particular experimental setting favors. In line with this analysis, Haslam and Reicher note that the variance observed in the obedience rates across Milgram’s experiments (from 0% to 100%) can be explained by examining how the situational factors in each experiment favor each type of identification (that is, the relative extent of identification with the experimenter versus identification with the learner). In fact, Reicher, Haslam, and Smith (2012) have shown that when both expert social psychologists and nonexpert college students read Milgram’s descriptions of 15 of his experiments and estimated the levels of identification with the experimenter and with the learner in each, those estimates were strong, significant predictors of the level of obedience found in each experiment. In agreement with the engaged-followership explanation, identification with the experimenter was a strong positive predictor of observed obedience, and identification with the learner was a strong negative predictor.
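The Reicher, Haslam, and Smith analysis is, at heart, a regression of obedience rates on two identification ratings. The sketch below shows that kind of analysis in Python with made-up numbers; the ratings, obedience rates, and resulting coefficients are hypothetical illustrations, not the published data.

```python
# Hypothetical data (not Reicher et al.'s ratings) illustrating the analysis:
# regress each experiment's obedience rate on raters' identification-with-
# experimenter and identification-with-learner scores.
import numpy as np

exp_id     = np.array([6, 5, 4, 4, 3, 2], dtype=float)  # made-up 1-7 ratings
learner_id = np.array([2, 3, 3, 5, 5, 6], dtype=float)
obedience  = np.array([0.90, 0.70, 0.55, 0.45, 0.30, 0.10])

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones_like(obedience), exp_id, learner_id])
(intercept, b_exp, b_learner), *_ = np.linalg.lstsq(A, obedience, rcond=None)

# Engaged followership predicts b_exp > 0 and b_learner < 0.
print(f"identification with experimenter: {b_exp:+.2f}")   # +0.15 here
print(f"identification with learner:      {b_learner:+.2f}")  # -0.05 here
```

With the fabricated numbers above, the experimenter coefficient comes out positive and the learner coefficient negative, which is the sign pattern the engaged-followership account predicts for the real data.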

Haslam, Reicher, and Birney (2014) also point out that in both Milgram’s study and Burger’s partial replication, the content of the prods was confounded with the order in which they were presented: it is unclear whether the observed resistance to the fourth prod was a consequence of its being an order or of its coming fourth, after the other three prods had already been resisted. Possibly the participants were simply tired of being prodded or were already committed to resisting by the time the fourth prod was given. Haslam, Reicher, and Birney argue that the second prod, “The experiment requires that you continue,” is the one that relates most closely to their proposed engaged-followership explanation because it indicates that continuing is essential to the success of the experiment and, hence, science. In a cleverly designed analogue of Milgram’s basic procedure with 30 steps, each requiring a progressively more objectionable response, Haslam, Reicher, and Birney demonstrated that continuation and completion of an objectionable task was positively predicted by the extent to which the prods appealed to scientific goals but not by the extent to which the prods were seen as orders. In agreement with Burger’s finding, the participants were far more inclined to disobey an order than to follow it.


As a further test of their engaged-followership explanation, Haslam, Reicher, and Millard (2015) used immersive digital realism (IDR) to restage and reexamine Milgram’s New Baseline experiment and four more of his obedience experiments. The IDR methodology circumvents the ethical barriers to conducting obedience research using Milgram’s original procedure with volunteer participants. In brief, this application of IDR used professional actors who deeply immerse themselves in portraying fictional characters in a film. A film director was hired to work with the actors in developing their characters before filming. In this case, their characters were participants in the Milgram experiments being restaged and filmed in a faithful reproduction of the original laboratory environment, including the ominous shock generator. The film director, however, told the actors only that their characters would be participants in a social psychology experiment in the film; they were given no information about the nature or design of the Milgram experiments. Because the actors could differentiate their characters’ behavior in the experiment from their own, the ethical issues were avoided. Following the digital filming of each restaged experiment, the actors were thoroughly debriefed and given complete information about the study and its aims. Post-experimental interviews were also conducted and used to assess the actors’ relative identification with the experimenter and the learner in each experiment. Validating the use of IDR, a strong correlation was found between the maximum level of shock administered in these restaged experiments and the mean maximum shock administered in the corresponding original experiments. Consistent with the engaged-followership explanation, relative identification with the experimenter versus the learner, as assessed in the interviews, was a good predictor of the maximum shock that participants administered in each experiment. In addition, as Burger found, there was near universal refusal by participants to continue after being given Milgram’s fourth prod (“You have no other choice, you must go on”).

Interestingly, Haslam and Reicher’s engaged-followership proposal not only explains both Milgram’s results and Burger’s replication findings, it also explains the discrepancy that we described earlier between the extremely stressful, aversive experience of participating in the experiments and the positive feelings toward them that participants expressed in their questionnaire responses. In an analysis of the Yale archival questionnaire data from Milgram’s study, Haslam, Reicher, Millard, and McDonald (2015) showed that the participants were engaged with the science of the experiments and saw science, especially science at prestigious Yale University, as a “social good”; being associated with this social good made them feel good. It is critical, as Haslam et al. point out, to realize that the participants’ questionnaire responses were made quite some time after their participation, so the stressful experimental situation they had experienced almost a year earlier was in the past, and the debriefing report that accompanied the questionnaire reminded them of the scientific goals of the study. In sum, led by this reminder of the study’s scientific goals, the participants felt as if they had contributed to scientific progress, and this gave meaning to their participation, transforming the unpleasant, stressful experimental experience into something to feel good about when they completed the questionnaires.


It should now be clear that Milgram’s obedience study was and still is very controversial. Recent criticisms based on analyses of the Yale archival materials have brought to light important new methodological and ethical concerns, calling into question both the validity of the experiments and Milgram’s ethics in reporting them. Another interesting recent development is that Milgram’s interpretation of his findings, as people following the orders of an authority and committing acts of destructive obedience, now seems misguided. There are social forces that have an impact on the participants, but they are not the ones Milgram proposed. The new engaged-followership interpretation, which posits that participants’ “obedient” behavior was motivated by their active identification with the experimenter and science or with the learner, does a better job of explaining Milgram’s findings. In sum, a half-century later, questions about the ethical, methodological, and theoretical aspects of Milgram’s obedience experiments and his findings still linger. Nonetheless, one point seems certain: Milgram wasn’t really studying destructive obedience. Such obedience, however, has been studied in a real-world, nonlaboratory setting involving doctors and nurses. Let’s see how this was done and what was found.

The “Astroten” study. A fascinating aspect of the “Astroten” study on destructive obedience is that the participants did not even know they were in the study (the ultimate experimental deception). The participants in this study were nurses at work, on duty alone in a hospital ward (Hofling, Brotzman, Dalrymple, Graves, & Pierce, 1966). Each nurse received a call from a person using the name of a staff doctor not personally known to the nurse. The doctor ordered the nurse to give a dose exceeding the maximum daily dosage of an unauthorized medication (“Astroten”) to an actual patient in the ward. This order violated many hospital rules: medication orders must be given in person, not over the phone; the dose was a clear overdose; and the medication was not even authorized. Twenty-two nurses, in different hospitals and at different times, were confronted with such an order. What do you think they did? Remember that this was not a laboratory study like Milgram’s. These were real nurses who were on duty in hospitals doing their jobs. Twenty-one of the twenty-two nurses did not question the order and went to give the medication, though they were intercepted before reaching the patient. Now what do you think the researchers found when they asked other nurses and nursing students what they would do in such a situation? Of course, they said the opposite of what the nurses actually did. Nearly all of them (31 out of 33) said that they would have refused to give the medication, demonstrating the power of situational forces on obedience.


However, as with Milgram’s studies, there are problems with Hofling et al.’s (1966) Astroten study. In addition to the obvious ethical problems, such as people participating in a study without even knowing they are in one, Rank and Jacobson (1977) pointed out other problematic aspects: the situation created in the Astroten study was far from a normal hospital situation, the nurses had no knowledge of the medication involved, and they had no opportunity to seek advice from another nurse or doctor. So Rank and Jacobson replicated the Astroten study but modified the situation to be more akin to a normal hospital scenario. An actual, known doctor on the hospital staff telephoned in an instruction to administer Valium at three times the recommended dosage, and other nurses were working so that the nurse who took the call could consult with someone if she chose to. The findings were dramatically different from those of the Astroten study. All 18 nurses in the study accepted the order, and 12 of them procured the drug, but only 2 prepared to give the drug as ordered. Rank and Jacobson concluded that if nurses are aware of the toxic effects of a drug and are allowed to interact normally with other nurses and staff, most will not administer a drug overdose merely because a doctor orders it. More recently, though, Krackow and Blass (1995) conducted a survey of registered nurses on the subject of carrying out a physician’s order that could have harmful consequences for the patient. Nearly half (46%) of the nurses who completed the survey reported that they had carried out such orders, but they assigned most of the responsibility to the physicians and little to themselves. These findings are truly worrisome, because these were real patients who, through blind obedience, were placed in potentially life-threatening situations. Thus, across all of these studies, some nurses (ranging from roughly 11% to almost 100%) would have carried out or did carry out doctors’ orders that would harm their patients (destructive obedience). Next we will discuss a real-world case of destructive obedience in which the participants inflicted harm not on others but rather on themselves.

The Jonestown massacre. Given the findings of destructive obedience in the research on the authority relationship between nurses and doctors, the Jonestown mass suicide orchestrated by the charismatic Reverend Jim Jones should be a little easier to understand. Jones was not only an authority figure to his followers, he was also their leader. Using various compliance techniques, Jones fostered unquestioned faith in himself as the cult leader and discouraged individualism. For example, using the foot-in-the-door technique, he slowly increased the financial support required of Peoples Temple members until they had turned over essentially everything they had (Levine, 2003). He even used the door-in-the-face and foot-in-the-door techniques to recruit members (Ornstein, 1991). His recruiters would ask people walking by to help the poor; when they refused, the recruiters asked them just to donate five minutes of their time to put letters in envelopes (door-in-the-face). Once people agreed to this small task, they were given information about future, related charitable work, and, through the consistency aspect of the foot-in-the-door technique, they returned later to do more work. As they contributed more and more of their time, they became more involved in the Peoples Temple and were then more easily persuaded to join.


image
The Jonestown Massacre | These are the dead bodies of some of the over 900 members of Reverend Jim Jones’s religious cult in Jonestown, Guyana, who committed mass suicide by drinking cyanide-laced Kool-Aid.
AP Photo

There is one situational factor leading to the Jonestown tragedy that is not so obvious, though: the importance of Jones’s moving his almost 1,000 followers from San Francisco to the rain forests of Guyana, an alien environment in the jungle of an unfamiliar country (Cialdini, 1993). In such an uncertain environment, the followers would look to the actions of others (informational social influence) to guide their own actions. The Jonestown followers looked to the settlement leaders and to other followers, which helped Jones manage such a large group. With this in mind, think about the day of the commanded suicide. Why was the mass suicide so orderly, and why did the people seem so willing to commit suicide? The most fanatical followers stepped forward immediately and drank the poison. Because people looked to others to define the correct response, they followed the lead of those who promptly injected the poison into their children’s throats with syringes and then willingly drank the poisoned Kool-Aid. In other words, drinking the poison seemed to be the correct thing to do. This situation reflects a “herd mentality”: getting some members moving in the desired direction so that others will follow, like cattle being led to the slaughterhouse. The phrase “drinking the Kool-Aid” later became synonymous with such blind allegiance. Over 900 people were found dead, one-third of them children. There were roughly 30 survivors, who either managed to escape without being shot or were away from the compound when the mass suicide occurred. Did Reverend Jones drink the Kool-Aid? No, he was found dead from a single gunshot to the head, probably self-inflicted.

How Groups Influence Us


Usually when we think of groups, we think of formalized groups such as committees, sororities and fraternities, classes, or trial juries. Social psychologists, however, have studied the influences of all sorts of groups, from less formal ones, such as an audience at an event, to these more formal ones. Our discussion of group influences begins with one of the earliest effects studied, social facilitation.

social facilitation Facilitation of a dominant response on a task due to social arousal, leading to improved performance on simple or well-learned tasks and worse performance on complex or unlearned tasks when other people are present.

Social facilitation. How would your behavior be affected by the presence of other people, such as an audience? Do you think the audience would help or hinder? One of the earliest findings for such situations was social facilitation, improvement in performance in the presence of others. This social facilitative effect is limited, however, to familiar tasks for which the person’s response is automatic (such as doing simple arithmetic problems). When people are faced with difficult unfamiliar tasks that they have not mastered (such as solving a complex maze), performance is hindered by the presence of others. Why? One explanation proposes that the presence of others increases a person’s drive and arousal, and research studies have found that under increased arousal, people tend to give whatever response is dominant (most likely) in that situation (Bond & Titus, 1983; Zajonc, 1965). This means that when the task is very familiar or simple, the dominant response tends to be the correct one; thus performance improves. When the task is unfamiliar or complex, however, the dominant response is likely not the correct one; thus performance is hindered. This means that people who are very skilled at what they do will usually do better in front of an audience than by themselves, and those who are novices will tend to do worse. This is why it is more accurate to define social facilitation as facilitation of the dominant response on a task due to social arousal, leading to improved performance on simple or well-learned tasks and worse performance on complex or unlearned tasks when other people are present.

social loafing The tendency to exert less effort when working in a group toward a common goal than when individually working toward the goal.

diffusion of responsibility The lessening of individual responsibility for a task when responsibility for the task is spread across the members of a group.

Social loafing and the diffusion of responsibility. Social facilitation occurs for people on tasks for which they can be evaluated individually. Social loafing occurs when people are pooling their efforts to achieve a common goal (Karau & Williams, 1993). Social loafing is the tendency for people to exert less effort when working toward a common goal in a group than when individually accountable. Social loafing is doing as little as you can get away with. Think about the various group projects that you have participated in, both in school and outside of school. Didn’t some members contribute very little to the group effort? Why? A major reason is the diffusion of responsibility—the responsibility for the task is diffused across all members of the group; therefore, individual accountability is lessened.


Behavior often changes when individual responsibility is lifted. Remember that in Milgram’s study the maximum obedience rate increased to almost 100% when the direct responsibility for administering the shock was lifted from the teacher’s shoulders. This diffusion of responsibility can also explain why social loafing tends to increase as the size of the group increases (Latané, Williams, & Harkins, 1979). The larger the group, the less likely it is that a social loafer will be detected, and the more the responsibility for the task gets diffused. Think about students working together on a group project for a shared grade. Social loafing will be greater when the group size is seven or eight than when it is only two or three. However, for group tasks in which individual contributions are identifiable and evaluated, social loafing decreases (Williams, Harkins, & Latané, 1981; Harkins & Jackson, 1985). Thus, in a group project for a shared grade, social loafing decreases if each group member is assigned and responsible for a specific part of the project.

bystander effect The probability of a person’s helping in an emergency is greater when there are no other bystanders than when there are other bystanders.

The bystander effect and the Kitty Genovese case. Now let’s think about the Kitty Genovese case described at the beginning of this chapter. Given the New York Times account of 38 witnesses to the murder, none of whom intervened until it was too late, subsequent media coverage described the case as a sad consequence of big-city apathy. Experiments by John Darley, Bibb Latané, and other social psychologists, however, indicate that it wasn’t apathy; rather, the diffusion of responsibility, as in social loafing, played a major role in this failure to help (Latané & Darley, 1970; Latané & Nida, 1981). Conducting experiments in which people were faced with emergency situations, Darley and Latané found what they termed the bystander effect: the probability of an individual helping in an emergency is greater when there is only one bystander than when there are many bystanders.

To understand this effect, Darley and Latané developed a model of the intervention process in emergencies. According to this model, for a person to intervene in an emergency, he must make not just one, but a series of decisions, and only one set of choices will lead him to take action. In addition, these decisions are typically made under conditions of stress, urgency, and threat of possible harm. The decisions to be made are (1) noticing the relevant event or not, (2) defining the event as an emergency or not, (3) feeling personal responsibility for helping or not, and (4) having accepted responsibility for helping, deciding what form of assistance he should give (direct or indirect intervention). If the event is not noticed or not defined as an emergency or if the bystander does not take responsibility for helping, he will not intervene. Darley and Latané’s research (e.g., Latané & Darley, 1968, and Latané & Rodin, 1969) demonstrated that the presence of other bystanders negatively influenced all of these decisions, leading to the bystander effect.
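To make the chain structure of the model concrete, here is a minimal sketch in Python. It is our illustration of the logic only, not a formal model from Darley and Latané’s papers.

```python
# A minimal sketch (our illustration, not a formal model from the original
# papers) of the intervention decision chain: a bystander helps only when
# every decision in the sequence is resolved in favor of acting.

def will_intervene(notices_event: bool,
                   defines_as_emergency: bool,
                   takes_responsibility: bool) -> str:
    if not notices_event:
        return "no intervention: event not noticed"
    if not defines_as_emergency:
        return "no intervention: event not defined as an emergency"
    if not takes_responsibility:
        return "no intervention: responsibility not accepted"
    # Only after accepting responsibility is a form of help chosen.
    return "intervene: choose direct or indirect assistance"

# Other bystanders can tip any of the three decisions toward inaction.
print(will_intervene(True, True, False))  # -> no intervention
print(will_intervene(True, True, True))   # -> intervene
```

The conjunctive structure is the point: a “no” at any step short-circuits the chain, which is why the presence of other bystanders, by nudging even one decision toward inaction, can prevent helping entirely.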


Let’s take a closer look at one of these experiments (Darley & Latané, 1968) to help you better understand the bystander effect. Imagine that you are asked to participate in a study examining the adjustments you’ve experienced in attending college. You show up for the experiment, are led to a booth, and are told that you are going to participate in a round-robin discussion of adjustment problems over the laboratory intercom. You put on earphones so that you can hear the other participants, but you cannot see them. The experimenter explains that this is to guarantee each student’s anonymity. The experimenter tells you that when a light in the booth goes on, it is your turn to talk. She also says that she wants the discussion not to be inhibited by her presence, so she is not going to listen to the discussion. The study begins. The first student talks about how anxious he has been since coming to college and that sometimes the anxiety is so overwhelming, he has epileptic seizures. Another student talks about the difficulty she’s had in deciding on a major and how she misses her boyfriend who stayed at home to go to college. It’s your turn, and you talk about your adjustment problems. The discussion then returns to the first student, and as he is talking, he seems to be getting very anxious. Suddenly, he starts having a seizure and cries out for help. What would you do?

Like most people, you would likely say that you would go to help him. However, this is not what was found. Whether a participant went for help depended upon how many other students the participant thought were available to help the student having the seizure (the bystander effect). Darley and Latané manipulated this number so that there were presumed to be zero, one, or four others. In actuality, there was only one participant and no other students were present; the dialogue was all tape-recorded. The percentage of participants who attempted to help before the victim’s cries for help ended decreased dramatically as the presumed number of bystanders increased, from 85% when alone to only 31% when four other bystanders who could help were assumed to be present. The probability of helping decreased as the responsibility for helping was diffused across more presumed bystanders. Those participants who did not go for help were not apathetic, however. They were very upset and seemed to be in a state of conflict, even though they did not leave the booth to help. They appeared to want to help, but the situational forces (the presumed presence of other bystanders and the resulting diffusion of responsibility) led them not to do so. The bystander effect has been replicated many times for many different types of emergencies. Latané and Nida (1981) analyzed almost 50 bystander intervention studies with thousands of participants and found that bystanders were more likely to help when alone than with others about 90% of the time.
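One way to see how diffusion of responsibility could produce numbers like these is a toy model in which each additional presumed bystander dilutes felt responsibility. The functional form and the constant below are our assumptions for illustration, not Darley and Latané’s analysis.

```python
# A toy diffusion model (our illustration, not Darley and Latané's analysis):
# assume felt responsibility, and hence the probability of helping, shrinks
# as responsibility is spread across more presumed bystanders.

def p_help(p_alone: float, n_others: int, k: float = 0.45) -> float:
    """Helping probability with n_others presumed bystanders; k is an
    assumed dilution constant chosen only to illustrate the trend."""
    return p_alone / (1 + k * n_others)

p_alone = 0.85  # observed helping rate when participants believed they were alone
for n in (0, 1, 4):
    print(f"{n} other presumed bystanders -> helping ~ {p_help(p_alone, n):.0%}")
# Prints roughly 85%, 59%, and 30%; compare the 85% and 31% figures above.
```

The specific curve is arbitrary; the qualitative message is that any function in which responsibility is split across more presumed helpers will drive individual helping down as the group grows.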

Now let’s see how Darley and Latané’s bystander effect can be applied to the Kitty Genovese case as reported in the New York Times. The responsibility for helping was diffused across the supposed witnesses to the attack. Because the bystanders did not communicate with one another, each likely assumed that someone else had called the police and so didn’t call themselves. However, after much time had elapsed and the police had not arrived, one of these bystanders likely decided that no one had intervened and called the police, but by then it was too late. But how would Darley and Latané explain the bystander who did call the police early in the attack? According to the Darley and Latané bystander intervention model that we described earlier, a bystander has to make a series of decisions that determine whether or not he will intervene. Unlike the other bystanders, the bystander who called the police must have decided to take responsibility for helping and then decided to intervene indirectly by calling the police. Sadly, the police did not come. Similarly, the bystander who shouted at the attacker during the first attack must have decided to intervene indirectly.


It is important to understand that bystanders to an emergency have to make decisions under conditions of stress and urgency and thus sometimes make decisions that they later regret. Even when a bystander decides to take responsibility and act, deciding whether to intervene directly or indirectly is extremely difficult because, in some situations, he too could be harmed by intervening directly. The neighbor who actually left her apartment and went to the crime scene did decide to intervene directly, risking harm to herself, but the murderer had already fled. These bystanders’ interventions do not invalidate the bystander effect. The bystander effect describes what typically happens as the number of bystanders to an emergency increases, not what invariably happens. Remember, the remaining bystanders to the murder did nothing. For example, the only witness to the second, fatal attack did not call the police but rather called a friend to ask what to do. The friend told him to get out of there, and he did, by sneaking out a back window in his apartment (Lemann, 2014). Why the differences in bystander reactions? A bystander’s actions are not entirely controlled by situational forces; dispositional factors (personality traits) also affect the bystander’s behavior. In the discussion of trait theories of personality in the last chapter, we pointed out that there is a situational influence on whether a person’s behavior reflects a particular trait. Similarly, there is a dispositional influence on a person’s behavior when he is subject to strong situational forces. The critical thing to remember as a bystander, especially in situations in which you do not know what other bystanders are doing, is that you should not assume that someone else is going to help. The bystander effect tells us that this assumption will likely lead to no one helping.

deindividuation The loss of self-awareness and self-restraint in a group situation that fosters arousal and anonymity.

Deindividuation. Diffusion of responsibility also seems to play a role in deindividuation, the loss of self-awareness and self-restraint in a group situation that fosters arousal and anonymity. The responsibility for the group’s actions is diffused across all the members of the group. Deindividuation can be thought of as combining the increased arousal of social facilitation with the diminished sense of responsibility of social loafing. Deindividuated people feel less restrained and therefore may forget their moral values and act spontaneously without thinking. The result can be damaging, as seen in mob violence, riots, and vandalism. In one experiment on deindividuation, college women wearing Ku Klux Klan–type white hoods and coats delivered twice as much shock to helpless victims as did similar women who were not wearing Klan clothing and were identifiable by name tags (Zimbardo, 1970). Once people lose their sense of individual responsibility, feel anonymous, and are aroused, they are capable of terrible things.


image
Deindividuation and the Ku Klux Klan | The uniform of Ku Klux Klan members, especially the hood, fosters deindividuation, the loss of self-awareness and self-restraint in a group situation that fosters arousal and anonymity. Deindividuation increases the likelihood that members will forget their moral values and act without thinking.
Pat Sullivan/AP Photo

group polarization The strengthening of a group’s prevailing opinion about a topic following group discussion about the topic.

Group polarization and groupthink. Two other group influences, group polarization and groupthink, apply to more structured, task-oriented group situations (such as committees and panels). Group polarization is the strengthening of a group’s prevailing opinion about a topic following group discussion of the topic. The group members already share the same opinion on an issue, and when they discuss it among themselves, this opinion is further strengthened as members gain additional information from other members in support of the opinion. This means that the initially held view becomes even more polarized following group discussion.

In addition to this informational influence of the group discussion, there is a type of normative influence. Because we want others to like us, we may express stronger views on a topic after learning that other group members share our opinion. Both informational and normative influences lead members to stronger, more extreme opinions. A real-life example is the accentuation phenomenon in college students: initial differences among college students become more accentuated over time (Myers, 2002). For example, students who do not belong to fraternities and sororities tend to be more liberal politically, and this difference grows during college, at least partially because group members reinforce and polarize one another’s views. For some groups, polarization may lead to destructive behavior, encouraging members to go further out on a limb through mutual reinforcement. For example, group polarization within community gangs tends to increase the rate of their criminal behavior, and within terrorist organizations it leads to more extreme acts of violence. Alternatively, association with a quiet, nonviolent group, such as a quilters’ guild, strengthens a person’s tendency toward quieter, more peaceful behavior. In summary, group polarization may exaggerate prevailing attitudes among group members, leading to more extreme behavior.


groupthink A mode of group thinking that impairs decision making, because the desire for group harmony overrides a realistic appraisal of the possible decision alternatives.

Groupthink is a mode of group thinking that impairs decision making; the desire for group harmony overrides a realistic appraisal of the possible decision alternatives. The primary concern is to maintain group consensus. Pressure is put on group members to go along with the group’s view, and external information that disagrees with that view is suppressed, which leads to an illusion of unanimity. Groupthink also leads to an illusion of infallibility, the belief that the group cannot make mistakes.

image
Henry Martin/The New Yorker Collection/The Cartoon Bank

Given such illusory thinking, it is not surprising that groupthink often leads to very bad decisions and poor solutions to problems (Janis, 1972, 1983). The failure to anticipate Pearl Harbor, the disastrous Bay of Pigs invasion of Cuba, and the space shuttle Challenger disaster are a few examples of real-world bad decisions that have been linked to groupthink. In the case of the Challenger disaster, for example, the engineers who made the shuttle’s rocket boosters opposed the launch because of dangers posed by the cold temperatures to the seals between the rocket segments (Esser & Lindoerfer, 1989). However, the engineers were unsuccessful in arguing their case with the group of NASA officials, who were suffering from an illusion of infallibility. To maintain an illusion of unanimity, these officials didn’t bother to make the top NASA executive who made the final launch decision aware of the engineers’ concerns. The result was tragedy.

Sadly, the NASA groupthink mentality reared its head again with the space shuttle Columbia disaster. It appears that NASA management again ignored safety warnings from engineers about probable technical problems. The Columbia accident investigation board strongly recommended that NASA change its “culture of invincibility.” To prevent groupthink from impacting your group decisions, make your group aware of groupthink and its dangers, and then take explicit steps to ensure that minority opinions, critiques of proposed group actions, and alternative courses of action are openly presented and fairly evaluated.

image
The Challenger Explosion | Groupthink has been linked to the Challenger disaster. NASA officials, who were suffering from an illusion of infallibility, ignored the warnings of engineers who opposed the launch because of dangers posed by the cold temperatures to the seals between the rocket segments.
Bruce Weaver/AP Photo

Section Summary


In this section, we discussed many types of social influence, how people and the social forces they create influence a person’s thinking and behavior. Conformity—a change in behavior, belief, or both to conform to a group norm as a result of real or imagined group pressure—is usually due to either normative social influence or informational social influence. Normative social influence leads people to conform to gain the approval and avoid the disapproval of others. Informational social influence leads people to conform to gain information from others in an uncertain situation. Several situational factors impact the amount of conformity that is observed. For example, nonconsensus among group members reduces the amount of conformity, and responding aloud versus anonymously increases conformity. In addition, culture and gender impact the amount of conformity observed. Collectivist cultures tend to lead to more conformity than individualistic cultures, and women conform more than men.

In conformity, people change their behavior or attitudes to adhere to a group norm, but in compliance, people act in accordance with a direct request from another person or group. We discussed four techniques used to obtain compliance. Each technique involves two requests, and it is always the second request for which compliance is desired. In the foot-in-the-door technique, a small request is followed by the desired larger request. In the door-in-the-face technique, a large first request is followed by the desired second smaller request. In the low-ball technique, an attractive first request is followed by the desired and less attractive second request. In the that’s-not-all technique, the desired and more attractive second request is made before a response can be made to an initial request. The foot-in-the-door and low-ball techniques work mainly because the person has committed to the first request and complies with the second in order to remain consistent. The door-in-the-face and that’s-not-all techniques work mainly because of reciprocity. Because the other person has made a concession on the first request, we comply with the second in order to reciprocate.

Stanley Milgram conducted 23 experiments at Yale University in the early 1960s in an attempt to study destructive obedience in an empirical setting. Milgram argued that his findings indicated our willingness to commit acts of destructive obedience, bringing harm to others through our obedient behavior. He also identified numerous situational factors that influenced the amount of obedience observed. For example, a very high rate of obedience is observed when the direct responsibility for one’s acts is removed, but less obedience is observed when we view models of disobedience. Milgram’s study, however, has become a contentious classic in that the validity of the study, Milgram’s explanation of his findings, and the accuracy of Milgram’s publications about the study have all been challenged by critics. Many of these criticisms stem from analyses of the experimental materials, such as audiotapes of the experiments, available in the Milgram archives at Yale. In addition, analyses of the results of the recent Milgram partial replication, the original experiments, and some recent related empirical work indicate that the obedience experiments were probably not about destructive obedience but rather about engaged followership (participants’ behavior is motivated by appeals to science, leading participants to become engaged in helping the experimenter achieve his scientific goals). However, obedience studies in the real world, specifically the doctor–nurse relationship, have found varying amounts of destructive obedience, and the overall conclusion is that nurses’ obedience to doctors’ orders can lead to patient harm.


Even the mere presence of other people can influence our behavior. This is demonstrated in social facilitation, an improvement in simple or well-learned tasks but worse performance on complex or unlearned tasks when other people are observing us. Some group influences occur when the responsibility for a task is diffused across all members of the group. For example, social loafing is the tendency for people to exert less effort when working in a group toward a common goal than when individually accountable. Social loafing increases as the size of the group increases and decreases when each group member feels more responsible for his contribution to the group effort. Diffusion of responsibility also contributes to the bystander effect, the greater probability of an individual helping in an emergency when there is only one bystander versus when there are many bystanders. Diffusion of responsibility also contributes to deindividuation, the loss of self-awareness and self-restraint in a group situation that promotes arousal and anonymity. The results of deindividuation can be tragic, such as mob violence and rioting.

Two other group influences, group polarization and groupthink, apply to more structured, task-oriented situations, and refer to effects on the group’s decision making. Group polarization is the strengthening of a group’s prevailing opinion following group discussion of the topic. Like-minded group members reinforce their shared beliefs, which leads to more extreme attitudes and behavior among all group members. Groupthink is a mode of group thinking that impairs decision making. It stems from the group’s illusion of infallibility and its desire for group harmony, which override a realistic appraisal of decision alternatives, often leading to bad decisions.


Question 9.1


Explain the difference between normative social influence and informational social influence.

The main difference between normative social influence and informational social influence concerns the need for information. When normative social influence is operating, information is not necessary for the judgment task. The correct answer or action is clear. People are conforming to gain the approval of others in the group and avoid their disapproval. When informational social influence is operating, however, people conform because they need information as to what the correct answer or action is. Conformity in this case is due to the need for information, which we use to guide our behavior.

Question 9.2


Explain how both the door-in-the-face technique and the that’s-not-all technique involve reciprocity.

In the door-in-the-face technique, the other person accepts your refusal to the first request, so you reciprocate by agreeing to her second smaller request, the one she wanted you to comply with. In the that’s-not-all technique, you think that the other person has done you a favor by giving you an even better deal with the second request, so you reciprocate and do her a favor and agree to the second request.

Question 9.3


Milgram found that 0% of the participants continued in the experiment when one of two experimenters said to stop. Based on this finding, predict what he found when he had two experimenters disagree, but the one who said to stop was substituting for a participant and serving as the learner. Explain the rationale for your prediction.

If you predicted that the result was the same (0% maximum obedience), you are wrong. The result was the same as in Milgram’s New Baseline experiment, 65% maximum obedience. An explanation involves how we view persons of authority who lose their authority. In a sense, by agreeing to serve as the learner, the experimenter gave up his authority, and the teachers no longer viewed him as an authority figure. He had been demoted.

Question 9.4


According to the bystander effect, explain why you would be more likely to be helped if your car broke down on a little-traveled country road than on an interstate highway.

According to the bystander effect, you would be more likely to receive help on the little-traveled country road, because any passing bystander would feel the responsibility for helping you. She would realize that there was no one else available to help you, so she would do so. On a busy interstate highway, however, the responsibility for stopping to help is diffused across hundreds of people passing by, each thinking that someone else would help you.