3.4 Experimentation

3-6 What are the characteristics of experimentation that make it possible to isolate cause and effect?

Happy are they, remarked the Roman poet Virgil, “who have been able to perceive the causes of things.” How might psychologists perceive causes in correlational studies, such as the correlation between breast feeding and intelligence?

Researchers have found that the intelligence scores of children who were breast-fed as infants are somewhat higher than the scores of children who were bottle-fed (Angelsen et al., 2001; Mortensen et al., 2002; Quinn et al., 2001). Moreover, the longer infants were breast-fed, the higher their later intelligence scores (Jedrychowski et al., 2012).

What do such findings mean? Do smarter mothers have smarter children? (Breast-fed children tend to be healthier and higher achieving than other children. But their bottle-fed siblings, born and raised in the same families, tend to be similarly healthy and higher achieving [Colen & Ramey, 2014].) Or, as some researchers believe, do the nutrients of mother’s milk also contribute to brain development?

To find answers to such questions—to isolate cause and effect—researchers can experiment. Experiments enable researchers to isolate the effects of one or more factors by (1) manipulating the factors of interest and (2) holding constant (“controlling”) other factors. To do so, they often create an experimental group, in which people receive the treatment, and a contrasting control group that does not receive the treatment. To minimize any preexisting differences between the two groups, researchers randomly assign people to the two conditions. Random assignment—whether with a random numbers table or flip of the coin—effectively equalizes the two groups. If one-third of the volunteers for an experiment can wiggle their ears, then about one-third of the people in each group will be ear wigglers. So, too, with ages, attitudes, and other characteristics, which will be similar in the experimental and control groups. Thus, if the groups differ at the experiment’s end, we can surmise that the treatment had an effect.
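The logic of random assignment can be illustrated with a short simulation. The Python sketch below is hypothetical (it extends the ear-wiggling example with made-up volunteers, not any real data set): it randomly splits a pool of simulated volunteers into two groups and checks that a preexisting trait ends up at about the same rate in each.

```python
# Hypothetical simulation of random assignment: a preexisting trait (ear
# wiggling) ends up at roughly the same rate in both randomly formed groups.
import random

random.seed(42)

# A made-up pool of 900 volunteers; about one-third can wiggle their ears.
volunteers = [{"ear_wiggler": random.random() < 1 / 3} for _ in range(900)]

# Randomly assign half to the experimental group, half to the control group.
random.shuffle(volunteers)
experimental, control = volunteers[:450], volunteers[450:]

def wiggler_rate(group):
    """Proportion of a group's members who can wiggle their ears."""
    return sum(person["ear_wiggler"] for person in group) / len(group)

print(f"Experimental group ear wigglers: {wiggler_rate(experimental):.0%}")
print(f"Control group ear wigglers:      {wiggler_rate(control):.0%}")
# Ages, attitudes, and other characteristics would balance out the same way,
# so a difference at the experiment's end points to the treatment.
```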


Recall that in a well-done survey, random sampling is important. In an experiment, random assignment is equally important.

To experiment with breast feeding, one research team randomly assigned some 17,000 Belarus newborns and their mothers either to a control group given normal pediatric care or to an experimental group that promoted breast feeding, thus increasing expectant mothers’ breast-feeding intentions (Kramer et al., 2008). At three months of age, 43 percent of the infants in the experimental group were being exclusively breast-fed, as were 6 percent of those in the control group. At age 6, when nearly 14,000 of the children were restudied, those who had been in the breast-feeding promotion group had intelligence test scores averaging six points higher than those of their control-condition counterparts.

With parental permission, one British research team directly experimented with breast milk. They randomly assigned 424 hospitalized premature infants either to formula feedings or to breast-milk feedings (Lucas et al., 1992). Their finding: For premature infants’ developing intelligence, breast was best. On intelligence tests taken at age 8, those nourished with breast milk scored significantly higher than those who were formula-fed.

No single experiment is conclusive, of course. But randomly assigning participants to one feeding group or the other effectively equalized other factors, leaving nutrition as the key difference between the groups. This supported the conclusion that, for developing intelligence, breast is indeed best: if test performance changes when we vary infant nutrition, we can infer that nutrition matters.

The point to remember: Unlike correlational studies, which uncover naturally occurring relationships, an experiment manipulates a factor to determine its effect.

Consider, then, how we might assess therapeutic interventions. Our tendency to seek new remedies when we are ill or emotionally down can produce misleading testimonies. If three days into a cold we start taking vitamin C tablets and find our cold symptoms lessening, we may credit the pills rather than the cold naturally subsiding. In the 1700s, bloodletting seemed effective. People sometimes improved after the treatment; when they didn’t, the practitioner inferred the disease was too advanced to be reversed. So, whether or not a remedy is truly effective, enthusiastic users will probably endorse it. To determine its effect, we must control for other factors.

And that is precisely how investigators evaluate new drug treatments and new methods of psychological therapy. They randomly assign participants in these studies to research groups. One group receives a treatment (such as a medication). The other group receives a pseudotreatment—an inert placebo (perhaps a pill with no drug in it). The participants are often blind (uninformed) about what treatment, if any, they are receiving. If the study is using a double-blind procedure, neither the participants nor those who administer the drug and collect the data will know which group is receiving the treatment.
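As a rough illustration, consider how the bookkeeping for a double-blind trial might look. The short Python sketch below is hypothetical (it does not describe any actual study’s procedure): participants are randomly assigned to drug or placebo, but the assignment key is kept apart from the people who hand out the pills and record the data.

```python
# Hypothetical sketch of double-blind bookkeeping. Participants are randomly
# assigned to drug or placebo; staff and participants see only neutral IDs,
# while the assignment key is held separately until data collection ends.
import random

random.seed(7)

participants = [f"P{i:03d}" for i in range(1, 21)]          # made-up IDs
conditions = ["drug", "placebo"] * (len(participants) // 2)  # equal group sizes
random.shuffle(conditions)

# The key linking each ID to its condition stays with a third party during the study.
assignment_key = dict(zip(participants, conditions))

# Staff dispense identical-looking pills labeled only with the participant ID,
# so neither they nor the participants know who is in which group.

# After all data are collected, the key is opened to compare the groups.
drug_group = [p for p, c in assignment_key.items() if c == "drug"]
placebo_group = [p for p, c in assignment_key.items() if c == "placebo"]
print(f"{len(drug_group)} drug participants; {len(placebo_group)} placebo participants")
```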

In double-blind studies, researchers check a treatment’s actual effects apart from the participants’ and the staff’s belief in its healing powers. Just thinking you are getting a treatment can boost your spirits, relax your body, and relieve your symptoms. This placebo effect is well documented in reducing pain, depression, and anxiety (Kirsch, 2010). Athletes have run faster when given a supposed performance-enhancing drug (McClung & Collins, 2007). Drinking decaf coffee has boosted vigor and alertness—for those who thought it had caffeine in it (Dawkins et al., 2011). People have felt better after receiving a phony mood-enhancing drug (Michael et al., 2012). And the more expensive the placebo, the more “real” it seems to us—a fake pill that costs $2.50 works better than one costing 10 cents (Waber et al., 2008). To know how effective a therapy really is, researchers must control for a possible placebo effect.


RETRIEVAL PRACTICE

  • What measures do researchers use to prevent the placebo effect from confusing their results?

Research designed to prevent the placebo effect randomly assigns participants to an experimental group (which receives the real treatment) or to a control group (which receives a placebo). A comparison of the results will demonstrate whether the real treatment produces better results than belief in that treatment.

Independent and Dependent Variables

Here is an even more potent example: The drug Viagra was approved for use after 21 clinical experiments. One trial was an experiment in which researchers randomly assigned 329 men with erectile disorder to either an experimental group (Viagra takers) or a control group (placebo takers given an identical-looking pill). The procedure was double-blind—neither the men nor the person giving them the pills knew what they were receiving. The result: At peak doses, 69 percent of Viagra-assisted attempts at intercourse were successful, compared with 22 percent for men receiving the placebo (Goldstein et al., 1998). For many, Viagra worked.

This simple experiment manipulated just one factor: the drug dosage (none versus peak dose). We call this experimental factor the independent variable because we can vary it independently of other factors, such as the men’s age, weight, and personality. Other factors, which can potentially influence the results of the experiment, are called confounding variables. Random assignment controls for possible confounding variables.

Experiments examine the effect of one or more independent variables on some measurable behavior, called the dependent variable because it can vary depending on what takes place during the experiment. Both variables are given precise operational definitions, which specify the procedures that manipulate the independent variable (the precise drug dosage and timing in this study) or measure the dependent variable (the questions that assessed the men’s responses). These definitions provide a level of precision that enables others to repeat the study. (See FIGURE 3.6 for the British breast milk experiment’s design.)

Figure 3.6
Experimentation To discern causation, psychologists may randomly assign some participants to an experimental group, others to a control group. Measuring the dependent variable (intelligence score in later childhood) will determine the effect of the independent variable (type of milk).
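To make these terms concrete, here is a minimal, hypothetical Python sketch (the scores are invented, not data from the studies described above). The condition label plays the role of the independent variable, and the measured intelligence score is the dependent variable.

```python
# Hypothetical sketch: summarizing a dependent variable (an intelligence score)
# at each level of an independent variable (feeding condition). Numbers are invented.
from statistics import mean

# Each record pairs an independent-variable level with a dependent-variable measurement.
results = [
    ("breast_milk", 105), ("breast_milk", 99), ("breast_milk", 110),
    ("formula", 96), ("formula", 101), ("formula", 94),
]

for condition in ("breast_milk", "formula"):
    scores = [score for level, score in results if level == condition]
    print(f"{condition}: mean score = {mean(scores):.1f} (n = {len(scores)})")

# A difference in group means suggests the manipulated variable affected the
# measured outcome; statistical tests would then assess how reliable it is.
```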

To review and test your understanding of experimental methods and concepts, visit LaunchPad’s Concept Practice: The Language of Experiments, and the interactive PsychSim 6: Understanding Psychological Research.

Let’s pause to check your understanding using a simple psychology experiment: To test the effect of perceived ethnicity on the availability of rental housing, Adrian Carpusor and William Loges (2006) sent identically worded e-mail inquiries to 1,115 Los Angeles–area landlords. The researchers varied the ethnic connotation of the sender’s name and tracked the percentage of positive replies (invitations to view the apartment in person). “Patrick McDougall,” “Said Al-Rahman,” and “Tyrell Jackson” received positive replies to 89 percent, 66 percent, and 56 percent of their inquiries, respectively.

Experiments can also help us evaluate social programs. Do early childhood education programs boost impoverished children’s chances for success? What are the effects of different antismoking campaigns? Do school sex-education programs reduce teen pregnancies? To answer such questions, we can experiment: If an intervention is welcomed but resources are scarce, we could use a lottery to randomly assign some people (or regions) to experience the new program and others to a control condition. If later the two groups differ, the intervention’s effect will be supported (Passell, 1993).


Let’s recap. A variable is anything that can vary (infant nutrition, intelligence, TV exposure—anything within the bounds of what is feasible and ethical). Experiments aim to manipulate an independent variable, measure a dependent variable, and control confounding variables. An experiment has at least two different conditions: an experimental condition and a comparison or control condition. Random assignment works to minimize preexisting differences between the groups before any treatment effects occur. In this way, an experiment tests the effect of at least one independent variable (what we manipulate) on at least one dependent variable (the outcome we measure). TABLE 3.3 compares the features of psychology’s research methods.

Table 3.3
Comparing Research Methods

RETRIEVAL PRACTICE

  • In the rental housing experiment, what was the independent variable? The dependent variable?

The independent variable, which the researchers manipulated, was the set of ethnically distinct names. The dependent variable, which they measured, was the positive response rate.

  • By using random assignment, researchers are able to control for ______________ ___________, which are other factors besides the independent variable(s) that may influence research results.

confounding variables

  • Match the term on the left with the description on the right.
1. double-blind procedure
2. random sampling
3. random assignment

a. helps researchers generalize from a small set of survey responses to a larger population
b. helps minimize preexisting differences between experimental and control groups
c. controls for the placebo effect; neither researchers nor participants know who receives the real treatment

1. c, 2. a, 3. b

  • Why, when testing a new drug to control blood pressure, would we learn more about its effectiveness from giving it to half of the participants in a group of 1000 than to all 1000 participants?

We learn more about the drug’s effectiveness when we can compare the results of those who took the drug (the experimental group) with the results of those who did not (the control group). If we gave the drug to all 1000 participants, we would have no way of knowing whether any improvement reflected the drug’s actual effects or merely a placebo effect.


Predicting Real Behavior

3-7 Can laboratory experiments illuminate everyday life?

When you see or hear about psychological research, do you ever wonder whether people’s behavior in the lab will predict their behavior in real life? Does detecting the blink of a faint red light in a dark room say anything useful about flying a plane at night? After viewing a violent, sexually explicit film, does an aroused man’s increased willingness to push buttons that he thinks will electrically shock a woman really say anything about whether violent pornography makes a man more likely to abuse a woman?

Before you answer, consider: The experimenter intends the laboratory environment to be a simplified reality—one that simulates and controls important features of everyday life. Just as a wind tunnel lets airplane designers re-create airflow forces under controlled conditions, a laboratory experiment lets psychologists re-create psychological forces under controlled conditions.

An experiment’s purpose is not to re-create the exact behaviors of everyday life but to test theoretical principles (Mook, 1983). In aggression studies, deciding whether to push a button that delivers a shock may not be the same as slapping someone in the face, but the principle is the same. It is the resulting principles—not the specific findings—that help explain everyday behaviors.

When psychologists apply laboratory research on aggression to actual violence, they are applying theoretical principles of aggressive behavior, principles they have refined through many experiments. Similarly, it is the principles of the visual system, developed from experiments in artificial settings (such as looking at red lights in the dark), that researchers apply to more complex behaviors such as night flying. And many investigations show that principles derived in the laboratory do typically generalize to the everyday world (Anderson et al., 1999).

The point to remember: Psychological science focuses less on particular behaviors than on seeking general principles that help explain many behaviors.