2.4 The Ethics of Science: First, Do No Harm

Somewhere along the way, someone probably told you that it isn’t nice to treat people like objects. And yet, it may seem that psychologists do just that by creating situations that cause people to feel fearful or sad, to do things that are embarrassing or immoral, and to learn things about themselves and others that they might not really want to know. Don’t be fooled by appearances. The fact is that psychologists go to great lengths to protect the well-being of their research participants, and they are bound by a code of ethics that is as detailed and demanding as the professional codes that bind physicians, lawyers, and accountants. That code requires that psychologists show respect for people, for animals, and for the truth. Let’s examine each of these obligations in turn.

Respecting People

During World War II, Nazi doctors performed truly barbaric experiments on human subjects, such as removing subjects’ organs or submerging subjects in ice water just to see how long it would take them to die. When the war ended, the international community developed the Nuremberg Code of 1947 and then the Declaration of Helsinki in 1964, which spelled out rules for the ethical treatment of human subjects. Unfortunately, not everyone obeyed them. For example, from 1932 until 1972, the U.S. Public Health Service conducted the infamous Tuskegee experiment in which 399 African American men with syphilis were denied treatment so that researchers could observe the progression of the disease. As one journalist noted, the government “used human beings as laboratory animals in a long and inefficient study of how long it takes syphilis to kill someone” (Coontz, 2008).

What are three features of ethical research?


In 1974, the U.S. Congress created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. In 1979, the U.S. Department of Health, Education and Welfare released what came to be known as the Belmont Report, which described three basic principles that all research involving human subjects should follow. First, research should show respect for persons and their right to make decisions for and about themselves without undue influence or coercion. Second, research should be beneficent, which means that it should attempt to maximize benefits and reduce risks to the participant. Third, research should be just, which means that it should distribute benefits and risks equally to participants without prejudice toward particular individuals or groups.

The specific ethical code that psychologists follow incorporates these basic principles and expands them. (You can find the American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct (2002) at http://www.apa.org/ethics/code/index.aspx.) Here are a few of the most important rules that govern the conduct of psychological research:

  • Informed consent: Participants may take part in a study only after being told what the study will involve and agreeing to participate.
  • Freedom from coercion: Participation must be voluntary; psychologists may not pressure people into taking part.
  • Protection from harm: Participants must be shielded from physical and psychological harm.
  • Risk-benefit analysis: The potential benefits of the research must be weighed against the risks to participants.
  • Avoiding deception: Participants may be deceived only when it is essential to the research and justified by its value.
  • Confidentiality: Information about participants must be kept private.

These are just some of the rules that psychologists must follow. But how are those rules enforced? Almost all psychology studies are done by psychologists who work at colleges and universities. These institutions have institutional review boards (IRBs) that are composed of instructors and researchers, university staff, and laypeople from the community (e.g., business leaders or members of the clergy). If the research is federally funded (as much research is), then the law requires that the IRB include at least one nonscientist and one person who is not affiliated with the institution. A psychologist may conduct a study only after the IRB has reviewed and approved it.

As you can imagine, the code of ethics and the procedure for approval are so strict that many studies simply cannot be performed anywhere, by anyone, at any time. For example, psychologists would love to know how growing up without exposure to language affects a person’s subsequent ability to speak and think, but they cannot ethically manipulate that variable in an experiment. They can only study the natural correlations between language exposure and speaking ability, and so may never be able to firmly establish the causal relationships between these variables. Indeed, there are many questions that psychologists will never be able to answer definitively because doing so would require unethical experiments that violate basic human rights.

Respecting Animals

Not all research participants have human rights because not all research participants are human. Some are chimpanzees, rats, pigeons, or other nonhuman animals. The American Psychological Association’s code specifically describes the special rights of these nonhuman participants, and some of the more important ones are these:

That’s good—but is it good enough? Some people don’t think so. For example, philosopher Peter Singer (1975) argued that all creatures capable of feeling pain have the same fundamental rights, and that treating nonhumans differently than humans is a form of speciesism that is every bit as abhorrent as racism or sexism. Singer’s philosophy has inspired groups such as People for the Ethical Treatment of Animals to call for an end to all research involving nonhuman animals. Unfortunately, it has also inspired some groups to attack psychologists who legally conduct such research. As two researchers (Ringach & Jentsch, 2009, p. 11417) recently reported:


We have seen our cars and homes firebombed or flooded, and we have received letters packed with poisoned razors and death threats via e-mail and voicemail. Our families and neighbors have been terrorized by angry mobs of masked protesters who throw rocks, break windows, and chant that “you should stop or be stopped” and that they “know where you sleep at night.” Some of the attacks have been cataloged as attempted murder. Adding insult to injury, misguided animal-rights militants openly incite others to violence on the Internet, brag about the resulting crimes, and go as far as to call plots for our assassination “morally justifiable.”

Some people consider it unethical to use animals for clothing or research. Others see an important distinction between these two purposes.

Where do most people stand on this issue? The vast majority of Americans consider it morally acceptable to use nonhuman animals in research and say they would reject a governmental ban on such research (Kiefer, 2004; Moore, 2003). Indeed, most Americans eat meat, wear leather, and support the rights of hunters, which is to say that most Americans see a sharp distinction between animal and human rights. Science is not in the business of resolving moral controversies, and every individual must draw his or her own conclusions about this issue. But whatever position you take, it is important to note that only a small percentage of psychological studies involve animals, and only a small percentage of those studies cause animals pain or harm. Psychologists mainly study people, and when they do study animals, they mainly study their behavior.

Respecting Truth

Institutional review boards ensure that data are collected ethically. But once the data are collected, who ensures that they are ethically analyzed and reported? No one does. Psychology, like all sciences, works on the honor system. No authority is charged with monitoring what psychologists do with the data they’ve collected, and no authority is charged with checking to see if the claims they make are true. You may find that a bit odd. After all, we don’t use the honor system in stores (“Take the television set home and pay us next time you’re in the neighborhood”), banks (“I don’t need to look up your account, just tell me how much money you want to withdraw”), or courtrooms (“If you say you’re innocent, well then, that’s good enough for me”), so why would we expect it to work in science? Are scientists more honest than everyone else?


In 2012, psychologist Diederik Stapel was found to have committed massive fraud over the course of many years. He was fired from his job as a professor and dozens of published scientific articles were retracted.

Definitely! Okay, we just made that up. The honor system doesn’t depend on scientists being especially honest; it depends on the fact that science is a community enterprise. When scientists claim to have discovered something important, other scientists don’t just applaud, they start studying it too. When physicist Jan Hendrik Schön announced in 2001 that he had produced a molecular-scale transistor, other physicists were deeply impressed—that is, until they tried to replicate his work and discovered that Schön had fabricated his data (Agin, 2007). Schön lost his job and his doctoral degree was revoked, but the important point is that such frauds can’t last long because one scientist’s conclusion is the next scientist’s research question. This doesn’t mean that all frauds are uncovered swiftly: psychologist Diederik Stapel lied, cheated, and made up his data for decades before people became suspicious enough to investigate (Levelt Committee, Noort Committee, Drenth Committee, 2012). But it does mean that the important frauds are uncovered eventually. The psychologist who fraudulently claims to have shown that chimps are smarter than goldfish may never get caught because no one is likely to follow up on such an obvious finding, but the psychologist who fraudulently claims to have shown the opposite will soon have a lot of explaining to do.

What are psychologists expected to do when they report the results of their research?

What exactly are psychologists on their honor to do? At least three things. First, when they write reports of their studies and publish them in scientific journals, psychologists are obligated to report truthfully on what they did and what they found. They can’t fabricate results (e.g., claiming to have performed studies that they never really performed) or fudge results (e.g., changing records of data that were actually collected), and they can’t mislead by omission (e.g., by reporting only the results that confirm their hypothesis and saying nothing about the results that don’t). Second, psychologists are obligated to share credit fairly by including as co-authors of their reports the other people who contributed to the work, and by mentioning in their reports the other scientists who have done related work. And third, psychologists are obligated to share their data. The American Psychological Association’s code of conduct states that ethical psychologists “do not withhold the data on which their conclusions are based from other competent professionals who seek to verify the substantive claims through reanalysis.” The fact that anyone can check up on anyone else is part of why the honor system works as well as it does.

  • Institutional review boards ensure that the rights of human beings who participate in scientific research are protected, in accordance with the principles of respect for persons, beneficence, and justice.
  • Psychologists are obligated to uphold these principles by getting informed consent from participants, not coercing participation, protecting participants from harm, weighing benefits against risks, avoiding deception, and keeping information confidential.
  • Psychologists are obligated to respect the rights of animals and treat them humanely. Most people are in favor of using animals in scientific research.
  • Psychologists are obligated to tell the truth about their studies, to share credit appropriately, and to grant others access to their data.


OTHER VOICES: Can We Afford Science?

David Brooks is a columnist for the New York Times, a commentator on CNN, and the author of several popular books on behavioral science.

Who pays for all the research described in textbooks like this one? The answer is you. By and large, scientific research is funded by governmental agencies, such as the National Science Foundation, which give scientists grants (also known as money) to do particular research projects that the scientists proposed. Of course, this money could be spent on other things, for example, feeding the poor, housing the homeless, caring for the ill and elderly, and so on. Does it make sense to spend taxpayer dollars on psychological science when some of our fellow citizens are cold and hungry?

Journalist and author David Brooks (2011) argued that research in the behavioral sciences is not an expenditure—it is an investment that pays for itself, and more. Here’s what he had to say:

Over the past 50 years, we’ve seen a number of gigantic policies produce disappointing results—policies to reduce poverty, homelessness, dropout rates, single-parenting and drug addiction. Many of these policies failed because they were based on an overly simplistic view of human nature. They assumed that people responded in straightforward ways to incentives. Often, they assumed that money could cure behavior problems.

Fortunately, today we are in the middle of a golden age of behavioral research. Thousands of researchers are studying the way actual behavior differs from the way we assume people behave. They are coming up with more accurate theories of who we are, and scores of real-world applications. Here’s one simple example:

When you renew your driver’s license, you have a chance to enroll in an organ donation program. In countries like Germany and the U.S., you have to check a box if you want to opt in. Roughly 14 percent of people do. But behavioral scientists have discovered that how you set the defaults is really important. So in other countries, like Poland or France, you have to check a box if you want to opt out. In these countries, more than 90 percent of people participate.

This is a gigantic behavior difference cued by one tiny and costless change in procedure.

Yet in the middle of this golden age of behavioral research, there is a bill working through Congress that would eliminate the National Science Foundation’s Directorate for Social, Behavioral and Economic Sciences. This is exactly how budgets should not be balanced—by cutting cheap things that produce enormous future benefits.

Let’s say you want to reduce poverty. We have two traditional understandings of poverty. The first presumes people are rational. They are pursuing their goals effectively and don’t need much help in changing their behavior. The second presumes that the poor are afflicted by cultural or psychological dysfunctions that sometimes lead them to behave in shortsighted ways. Neither of these theories has produced much in the way of effective policies.

Eldar Shafir of Princeton and Sendhil Mullainathan of Harvard have recently, with federal help, been exploring a third theory, that scarcity produces its own cognitive traits.

A quick question: What is the starting taxi fare in your city? If you are like most upper-middle-class people, you don’t know. If you are like many struggling people, you do know. Poorer people have to think hard about a million things that affluent people don’t. They have to make complicated trade-offs when buying a carton of milk: If I buy milk, I can’t afford orange juice. They have to decide which utility not to pay.

These questions impose enormous cognitive demands. The brain has limited capacities. If you increase demands on one sort of question, it performs less well on other sorts of questions.

Shafir and Mullainathan gave batteries of tests to Indian sugar farmers. After they sell their harvest, they live in relative prosperity. During this season, the farmers do well on the I.Q. and other tests. But before the harvest, they live amid scarcity and have to think hard about a thousand daily decisions. During these seasons, these same farmers do much worse on the tests. They appear to have lower I.Q.’s. They have more trouble controlling their attention. They are more shortsighted. Scarcity creates its own psychology.

Princeton students don’t usually face extreme financial scarcity, but they do face time scarcity. In one game, they had to answer questions in a series of timed rounds, but they could borrow time from future rounds. When they were scrambling amid time scarcity, they were quick to borrow time, and they were nearly oblivious to the usurious interest rates the game organizers were charging. These brilliant Princeton kids were rushing to the equivalent of payday lenders, to their own long-term detriment.

Shafir and Mullainathan have a book coming out next year, exploring how scarcity—whether of time, money or calories (while dieting)—affects your psychology. They are also studying how poor people’s self-perceptions shape behavior. Many people don’t sign up for the welfare benefits because they are intimidated by the forms. Shafir and Mullainathan asked some people at a Trenton soup kitchen to relive a moment when they felt competent and others to recount a neutral experience. Nearly half of the self-affirming group picked up an available benefits package afterward. Only 16 percent of the neutral group did.

People are complicated. We each have multiple selves, which emerge or don’t depending on context. If we’re going to address problems, we need to understand the contexts and how these tendencies emerge or don’t emerge. We need to design policies around that knowledge. Cutting off financing for this sort of research now is like cutting off navigation financing just as Christopher Columbus hit the shoreline of the New World.

What do you think? Is Brooks right? Is psychological science a wise use of public funds, or is it a luxury that we simply can’t afford?

From the New York Times, July 7, 2011 © 2011 The New York Times. All rights reserved. Used by permission and protected by the Copyright Laws of the United States. The printing, copying, redistribution, or retransmission of this Content without express written permission is prohibited. http://www.nytimes.com/2011/07/08/opinion/08brooks.html?_r=0
