Conducting Media Effects Research

Media research generally comes from the private or public sector—each type with distinguishing features. Private research, sometimes called proprietary research, is generally conducted for a business, a corporation, or even a political campaign. It is usually applied research in the sense that the information it uncovers typically addresses some real-life problem or need. Public research, in contrast, usually takes place in academic and government settings. It involves information that is often more theoretical than applied; it tries to clarify, explain, or predict the effects of mass media rather than to address a consumer problem.

Most media research today focuses on the effects of the media in such areas as learning, attitudes, aggression, and voting habits. This research employs the scientific method, a blueprint long used by scientists and scholars to study phenomena in systematic stages. The steps in the scientific method include:

  1. identifying the research problem
  2. reviewing existing research and theories related to the problem
  3. developing working hypotheses or predictions about what the study might find
  4. determining an appropriate method or research design
  5. collecting information or relevant data
  6. analyzing results to see if the hypotheses have been verified
  7. interpreting the implications of the study to determine whether they explain or predict the problem

The scientific method relies on objectivity (eliminating bias and judgments on the part of researchers); reliability (getting the same answers or outcomes from a study or measure during repeated testing); and validity (demonstrating that a study actually measures what it claims to measure).

In scientific studies, researchers pose one or more hypotheses: tentative general statements that predict the influence of an independent variable on a dependent variable. For example, a researcher might hypothesize that frequent TV viewing among adolescents (independent variable) causes poor academic performance (dependent variable). Or, another researcher might hypothesize that playing first-person-shooter video games (independent variable) is associated with aggression in children (dependent variable).

Broadly speaking, the methods for studying media effects on audiences have taken two forms—experiments and survey research. To supplement these approaches, researchers also use content analysis to count and document specific messages that circulate in mass media.

Experiments

“Theories abound, examples multiply, but convincing facts that specific media content is reliably associated with particular effects have proved quite elusive.”

GUY CUMBERBATCH, A MEASURE OF UNCERTAINTY, 1989

Like all studies that use the scientific method, experiments in media research isolate some aspect of content, suggest a hypothesis, and manipulate variables to discover a particular medium’s impact on attitude, emotion, or behavior. To test whether a hypothesis is true, researchers expose an experimental group—the group under study—to a selected media program or text. To ensure valid results, researchers also use a control group, which serves as a basis for comparison; this group is not exposed to the selected media content. Subjects are picked for each group through random assignment, which simply means that each subject has an equal chance of being placed in either group. Random assignment ensures that variables other than the media exposure itself (age, temperament, prior media habits, and the like) are distributed to both groups in the same way.
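
The mechanics of random assignment are simple enough to sketch in a few lines of code. The Python sketch below is purely illustrative; the subject pool and group sizes are hypothetical and not drawn from any actual study.

```python
import random

# Hypothetical pool of twenty subjects; a real study would recruit and
# screen participants far more carefully.
subjects = [f"subject_{i}" for i in range(1, 21)]

random.shuffle(subjects)                  # randomize the order of the pool
midpoint = len(subjects) // 2
experimental_group = subjects[:midpoint]  # will view the selected media content
control_group = subjects[midpoint:]       # will not view the content

print(len(experimental_group), len(control_group))  # -> 10 10
```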

For instance, to test the effects of violent films on pre-adolescent boys, a research study might take a group of ten-year-olds and randomly assign them to two groups. Researchers expose the experimental group to a violent action movie that the control group does not see. Later, both groups are exposed to a staged fight between two other boys so that the researchers can observe how each group responds to an actual physical confrontation. Researchers then determine whether there is a statistically measurable difference between the two groups’ responses to the fight. For example, perhaps the control subjects tried to break up the fight but the experimental subjects did not. Because the subjects were randomly assigned and the only systematic difference between the groups was the viewing of the movie, researchers may conclude that under these conditions the violent film caused the difference in behavior. (See the “Bobo doll” experiment photos.)
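
A sketch of how researchers might check for a statistically measurable difference appears below. The intervention counts are invented for illustration, and with groups this small an actual study would use an exact test; the simple two-proportion z-statistic here is just meant to show the underlying logic.

```python
import math

# Invented counts: how many subjects in each group tried to stop the fight.
n_exp, stopped_exp = 10, 2  # experimental group (saw the violent film)
n_ctl, stopped_ctl = 10, 7  # control group (did not see the film)

p_exp = stopped_exp / n_exp
p_ctl = stopped_ctl / n_ctl
p_pool = (stopped_exp + stopped_ctl) / (n_exp + n_ctl)

# Two-proportion z-statistic: the difference in intervention rates
# scaled by its standard error under the assumption of no difference.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_exp + 1 / n_ctl))
z = (p_exp - p_ctl) / se
print(f"z = {z:.2f}")  # about -2.25; |z| >= 1.96 is conventionally "significant"
```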

“Writing survey questions and gathering data are easy; writing good questions and collecting useful data are not.”

MICHAEL SINGLETARY, MASS COMMUNICATION RESEARCH, 1994

When experiments carefully account for potentially confounding variables through random assignment, they generally work well to substantiate direct cause-effect hypotheses. Such research takes place both in laboratory settings and in field settings, where people can be observed using the media in their everyday environments. In field experiments, however, it is more difficult for researchers to control variables. In lab settings, researchers have more control, but other problems may occur. For example, when subjects are removed from the environments in which they regularly use the media, they may act differently—often with fewer inhibitions—than they would in their everyday surroundings.

Experiments have other limitations as well. First, they are not generalizable to a larger population; they cannot tell us whether cause-effect results would hold outside the laboratory. Second, most academic experiments today are performed on college students, who are convenient subjects for research but are not representative of the general public. Finally, while most experiments are fairly good at predicting short-term media effects under controlled conditions, they do not predict how subjects will behave months or years later in the real world.

Survey Research

In the simplest terms, survey research is the collecting and measuring of data taken from a group of respondents. Using random sampling techniques that give each potential subject an equal chance to be included in the survey, this research method draws on much larger populations than those used in experimental studies. Surveys may be conducted through direct mail, personal interviews, telephone calls, e-mail, and Web sites, enabling survey researchers to accumulate large amounts of information from diverse cross sections of people. Alongside questions directly related to the survey topic, surveys typically gather demographic data such as educational background, income level, race, ethnicity, gender, age, sexual orientation, and political affiliation.
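
Random sampling works much like the random assignment sketched earlier, but on a far larger scale. The sketch below is hypothetical: it draws 500 respondents from an imagined frame of 10,000 people so that every person has an equal chance of being surveyed.

```python
import random

# Hypothetical population frame; real surveys build frames from sources
# such as voter rolls, phone exchanges, or address lists.
population_frame = [f"respondent_{i}" for i in range(1, 10_001)]

sample = random.sample(population_frame, k=500)  # each person equally likely

print(len(sample), sample[:3])
```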

Two other benefits of surveys are that they are usually generalizable to the larger society and that they enable researchers to investigate populations in long-term studies. For example, survey research can measure subjects when they are ten, twenty, and thirty years old to track changes in how frequently they watch television and what kinds of programs they prefer at different ages. In addition, large government and academic survey databases are now widely available and contribute to the development of more long-range or longitudinal studies, which make it possible for social scientists to compare new studies with those conducted years earlier.

Like experiments, surveys have several drawbacks. First, survey investigators cannot account for all the variables that might affect media use; therefore, they cannot show cause-effect relationships. Survey research can, however, reveal correlations—or associations—between two variables. For example, a random questionnaire survey of ten-year-old boys might demonstrate that a correlation exists between aggressive behavior and watching violent TV programs. Such a correlation, however, does not explain what is the cause and what is the effect—that is, do violent TV programs cause aggression, or are more aggressive ten-year-old boys simply drawn to violent television? Second, the validity of survey questions is a chronic problem for survey practitioners. Surveys are only as good as the wording of their questions and the answer choices they present. For example, as NPR reported, “[I]f you ask people whether they support or oppose the death penalty for murderers, about two-thirds of Americans say they support it. If you ask whether people prefer that murderers get the death penalty or life in prison without parole, then you get a 50-50 split.”16
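
The distinction between correlation and causation can be made concrete with a small calculation. The numbers below are fabricated for illustration; the point is that a strong correlation coefficient, by itself, says nothing about which variable (if either) is the cause.

```python
# Pearson correlation coefficient, computed from scratch for clarity.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hours_violent_tv = [1, 3, 2, 5, 4, 6, 2, 7]  # hypothetical survey responses
aggression_score = [2, 4, 3, 6, 5, 7, 2, 8]  # hypothetical behavior ratings

# A value near 1.0 shows a strong association only; it cannot tell us
# whether viewing causes aggression or aggressive boys seek out violent TV.
print(f"r = {pearson_r(hours_violent_tv, aggression_score):.2f}")
```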

Content Analysis

Over the years, researchers recognized that experiments and surveys tended to focus on general topics (violence) while overlooking the specific kinds of messages that circulate in mass media (gun violence, fistfights, etc.). As a corrective, researchers developed a method known as content analysis to document these messages. Such analysis is a systematic method of coding and measuring media content.
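
In computational terms, the counting stage of content analysis can be sketched very simply. The categories and coded scenes below are hypothetical; a real coding scheme would define each category precisely and train multiple coders to apply it consistently.

```python
from collections import Counter

# Each entry represents one scene a coder has already classified.
coded_scenes = [
    "cartoon_slapstick", "gun_violence", "fistfight", "gun_violence",
    "verbal_threat", "fistfight", "gun_violence", "cartoon_slapstick",
]

counts = Counter(coded_scenes)
total = len(coded_scenes)
for category, n in counts.most_common():
    print(f"{category}: {n} ({n / total:.0%})")
```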

Although content analysis was first used during World War II to study radio broadcasts, more recent studies have focused on television, film, and the Internet. Probably the most influential content analysis studies were conducted by George Gerbner and his colleagues at the University of Pennsylvania. Beginning in the late 1960s, they coded and counted acts of violence on network television. Combined with surveys, their annual “violence profiles” showed that heavy watchers of television, ranging from children to retired Americans, tend to overestimate the amount of violence that exists in the actual world.17

The limits of content analysis, however, have been well documented. First, this technique does not measure the effects of the messages on audiences, nor does it explain how those messages are presented. For example, a content analysis sponsored by the Kaiser Family Foundation that examined more than eleven hundred television shows found that 70 percent featured sexual content.18 But the study didn’t explain how viewers interpreted the content or the context of the messages.

Second, problems of definition occur in content analysis. For instance, in the case of coding and counting acts of violence, how do researchers distinguish slapstick cartoon aggression from the violent murders or rapes in an evening police drama? Critics point out that such varied depictions may have diverse and subtle effects on viewers that are not differentiated by content analysis. Finally, critics point out that as content analysis grew to be a primary tool in media research, it sometimes pushed other ways of thinking about television and media content to the sidelines. Broad questions concerning the media as a popular art form, as a measure of culture, as a democratic influence, or as a force for social control are difficult to address through strict measurement techniques. Critics of content analysis, in fact, have objected to the kind of social science that reduces culture to acts of counting. Such criticism has addressed the tendency of some researchers to favor measurement accuracy over intellectual discipline and inquiry.19