49.3 Classifying Disorders—and Labeling People

49-3 How and why do clinicians classify psychological disorders, and why do some psychologists criticize the use of diagnostic labels?

In biology, classification creates order. To classify an animal as a “mammal” says a great deal—that it is warm-blooded, has hair or fur, and produces milk to nourish its young. In psychiatry and psychology, too, classification orders and describes symptoms. To classify a person’s disorder as “schizophrenia” suggests that the person talks incoherently, has bizarre beliefs, shows either little emotion or inappropriate emotion, or is socially withdrawn. “Schizophrenia” is a quick way to describe a complex disorder.

But diagnostic classification gives more than a thumbnail sketch of a person’s disordered behavior, thoughts, or feelings. In psychiatry and psychology, classification also aims to predict the disorder’s future course, suggest appropriate treatment, and prompt research into its causes.

To study a disorder, we must first name and describe it.

A book of case illustrations accompanying the previous DSM edition provided several examples for Modules 49 through 53.

The most common tool for describing disorders and estimating how often they occur is the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, now in its fifth edition (DSM-5, 2013). Physicians and mental health workers use the manual’s detailed “diagnostic criteria and codes” to guide medical diagnoses and treatment. For example, a person may be diagnosed with and treated for “insomnia disorder” if he or she meets all of the criteria in TABLE 49.1.

Table 49.1
Insomnia Disorder


In the DSM-5, some diagnostic labels have changed. The conditions formerly called “autism” and “Asperger’s syndrome” have now been combined under the label autism spectrum disorder. “Mental retardation” has become intellectual disability. New categories, such as hoarding disorder and binge-eating disorder, have been added.

Some of the new or altered diagnoses are controversial. Disruptive mood dysregulation disorder is a new DSM-5 diagnosis for children “who exhibit persistent irritability and frequent episodes of behavior outbursts three or more times a week for more than a year.” Will this diagnosis assist parents who struggle with unstable children, or will it “turn temper tantrums into a mental disorder” and lead to overmedication, as the chair of the previous DSM edition’s task force has warned (Frances, 2012)?

Real-world tests (field trials) have assessed clinician agreement when using the new DSM-5 categories (Freedman et al., 2013). Some diagnoses, such as adult posttraumatic stress disorder and childhood autism spectrum disorder, fared well—with agreement near 70 percent. (If one psychiatrist or psychologist diagnosed someone with one of these disorders, there was a 70 percent chance that another mental health worker would independently give the same diagnosis.) Others, such as antisocial personality disorder and generalized anxiety disorder, fared poorly.

Critics have long faulted the DSM for casting too wide a net and bringing “almost any kind of behavior within the compass of psychiatry” (Eysenck et al., 1983). Some now worry that the DSM-5’s even wider net will extend the pathologizing of everyday life—for example, by turning childish rambunctiousness into ADHD, and bereavement grief into a depressive disorder (Frances, 2013). (See Thinking Critically About: ADHD.) Others respond that hyperactivity and depression, though needing careful definition, are genuine disorders—even when the depression is triggered by a major life stress, such as a death, and the grief does not subside (Kendler, 2011; Kupfer, 2012).


THINKING CRITICALLY ABOUT: ADHD—Normal High Energy or Disordered Behavior?

49-4 Why is there controversy over attention-deficit/hyperactivity disorder?

Eight-year-old Todd has always been energetic. At home, he chatters away and darts from one activity to the next, rarely settling down to read a book or focus on a game. At play, he is reckless and overreacts when playmates bump into him or take one of his toys. At school, Todd fidgets, and his exasperated teacher complains that he doesn’t listen, follow instructions, or stay in his seat and do his lessons. As Todd matures to adulthood, his hyperactivity likely will subside, but his inattentiveness may persist (Kessler et al., 2010).

If taken for a psychological evaluation, Todd may be diagnosed with attention-deficit/hyperactivity disorder (ADHD). Some 11 percent of American 4- to 17-year-olds receive the diagnosis after displaying its key symptoms (extreme inattention, hyperactivity, and impulsivity) (Schwarz & Cohen, 2013). Studies also find that 2.5 percent of adults—though the number diminishes with age—exhibit ADHD symptoms (Simon et al., 2009). The looser criteria for adult ADHD in the DSM-5 have led critics to fear increased diagnosis and overuse of prescription drugs (Frances, 2012).

To skeptics, being distractible, fidgety, and impulsive sounds like a “disorder” caused by a single genetic variation: a Y chromosome (the male sex chromosome). And sure enough, ADHD is diagnosed three times more often in boys than in girls. Children who are “a persistent pain in the neck in school” are often diagnosed with ADHD and given powerful prescription drugs (Gray, 2010). Minority youth less often receive an ADHD diagnosis than do Caucasian youth, but this difference has shrunk as minority ADHD diagnoses have increased (Getahun et al., 2013).

The problem may reside less in the child than in today’s abnormal environment that forces children to do what evolution has not prepared them to do—to sit for long hours in chairs. In more natural outdoor environments, these children might seem perfectly healthy.

Rates of medication for presumed ADHD vary by age, sex, and location. Prescription drugs are more often given to teens than to younger children. Boys are nearly three times more likely to receive them than are girls. And location matters. Among 4- to 17-year-olds, prescription rates have varied from 1 percent in Nevada to 9 percent in North Carolina (CDC, 2013). Some students seek out the stimulant drugs—calling them the “good-grade pills.” They hope to increase their focus and achievement, but the risks include the development of addiction, depressive disorders, or bipolar disorder (Schwarz, 2012).

Not everyone agrees that ADHD is being overdiagnosed. Some argue that today’s more frequent diagnoses reflect increased awareness of the disorder, especially in those areas where rates are highest. They also note that diagnoses can be inconsistent—ADHD is not as clearly defined as a broken arm. Nevertheless, declared the World Federation for Mental Health (2005), “there is strong agreement among the international scientific community that ADHD is a real neurobiological disorder whose existence should no longer be debated.” A consensus statement by 75 neuroimaging researchers noted that abnormal brain activity often accompanies ADHD (Barkley et al., 2002).

What, then, is known about ADHD’s causes? It is not caused by too much sugar or poor schools. Evidence is mixed on whether extensive TV watching and video gaming are associated with reduced cognitive self-regulation and ADHD (Bailey et al., 2011; Courage & Setliff, 2010; Ferguson et al., 2011). ADHD often coexists with a learning disorder or with defiant and temper-prone behavior. ADHD is heritable, and research teams are sleuthing the culprit genes and abnormal neural pathways (Lionel et al., 2014; Poelmans et al., 2011; Volkow et al., 2009; Williams et al., 2010). It is treatable with medications such as Ritalin and Adderall, which, though stimulants, help calm hyperactivity and increase the ability to sit and focus on a task—and to progress normally in school (Barbaresi et al., 2007). Psychological therapies, such as those focused on shaping classroom and at-home behaviors, also help address the distress of ADHD (Fabiano et al., 2008).

The bottom line: Extreme inattention, hyperactivity, and impulsivity can derail social, academic, and vocational achievements, and these symptoms can be treated with medication and other therapies. But the debate continues over whether normal high energy is too often diagnosed as a psychiatric disorder, and whether there is a cost to the long-term use of stimulant drugs in treating ADHD.

Other critics register a more basic complaint—that these labels are at best subjective and at worst value judgments masquerading as science. Once we label a person, we view that person differently (Bathje & Pryor, 2011; Farina, 1982; Sadler et al., 2012). Labels can change reality by putting us on alert for evidence that confirms our view. When teachers were told certain students were “gifted,” they acted in ways that elicited the behaviors they expected (Snyder, 1984). Someone who was led to think you are nasty may treat you coldly, leading you to respond as a mean-spirited person would. Labels can be self-fulfilling. They create expectations that guide how we perceive and interpret people.

Struggles and recovery Boston Mayor Martin Walsh spoke openly about his struggles with alcohol. His story of recovery helped him win the closest Boston mayoral election in decades.

The biasing power of labels was clear in a now-classic study. David Rosenhan (1973) and seven others went to hospital admissions offices, complaining (falsely) of “hearing voices” saying empty, hollow, and thud. Apart from this complaint and giving false names and occupations, they answered questions truthfully. All eight healthy people were misdiagnosed with disorders.

Should we be surprised? As one psychiatrist noted, if someone swallows blood, goes to an emergency room, and spits it up, should we fault the doctor for diagnosing a bleeding ulcer? Surely not. But what followed the Rosenhan study diagnoses was startling. Until being released an average of 19 days later, those eight “patients” showed no other symptoms. Yet after analyzing their (quite normal) life histories, clinicians were able to “discover” the causes of their disorders, such as having mixed emotions about a parent. Even routine note-taking behavior was misinterpreted as a symptom.

Labels matter. In another study, people watched videotaped interviews. If told the interviewees were job applicants, the viewers perceived them as normal (Langer et al., 1974, 1980). Other viewers who were told they were watching psychiatric or cancer patients perceived the same interviewees as “different from most people.” Therapists who thought they were watching an interview of a psychiatric patient perceived him as “frightened of his own aggressive impulses,” a “passive, dependent type,” and so forth. A label can, as Rosenhan discovered, have “a life and an influence of its own.”


“My sister suffers from a bipolar disorder and my nephew from schizoaffective disorder. There has, in fact, been a lot of depression and alcoholism in my family and, traditionally, no one ever spoke about it. It just wasn’t done. The stigma is toxic.”

Actress Glenn Close, “Mental Illness: The Stigma of Silence,” 2009

Labels also have power outside the laboratory. Getting a job or finding a place to rent can be a challenge for people recently released from a mental hospital. Label someone as “mentally ill” and people may fear them as potentially violent (see Thinking Critically About: Are People With Psychological Disorders Dangerous?). Such negative reactions may fade as people better understand that many psychological disorders involve diseases of the brain, not failures of character (Solomon, 1996). Public figures have helped foster this new understanding by speaking openly about their own struggles with disorders such as depression and substance abuse. The more contact we have with people with disorders, the more accepting our attitudes become (Kolodziej & Johnson, 1996).



THINKING CRITICALLY ABOUT: Are People With Psychological Disorders Dangerous?

49-5 Do psychological disorders predict violent behavior?

September 16, 2013, started like any other Monday at Washington, DC’s Navy Yard, with people arriving early to begin work. Then government contractor Aaron Alexis parked his car, entered the building, and began shooting people. An hour later, 13 people were dead, including Alexis. Reports later confirmed that Alexis had a history of mental illness. Before the shooting, he had stated that an “ultra low frequency attack is what I’ve been subject to for the last three months. And to be perfectly honest, that is what has driven me to this.” This devastating mass shooting, like the one in a Connecticut elementary school in 2012 and many others since then, reinforced public perceptions that people with psychological disorders pose a threat (Jorm et al., 2012). After the 2012 slaughter, New York’s governor declared, “People who have mental issues should not have guns” (Kaplan & Hakim, 2013).

Does scientific evidence support the governor’s statement? If disorders actually increase the risk of violence, then denying people with psychological disorders the right to bear arms might reduce violent crimes. But real life tells a different story. The vast majority of violent crimes are committed by people with no diagnosed disorder (Fazel & Grann, 2006; Walkup & Rubin, 2013).

People with disorders are more likely to be victims than perpetrators of violence (Marley & Buila, 2001). According to the U.S. Surgeon General’s Office (1999, p. 7), “There is very little risk of violence or harm to a stranger from casual contact with an individual who has a mental disorder.” People with mental illness commit proportionately little gun violence. The bottom line: Focusing gun restrictions only on mentally ill people will likely not reduce gun violence (Friedman, 2012).

If mental illness is not a good predictor of violence, what is? Better predictors are a history of violence, use of alcohol or drugs, and access to a gun. The mass-killing shooters have one more thing in common: They tend to be young males. “We could avoid two-thirds of all crime simply by putting all able-bodied young men in cryogenic sleep from the age of 12 through 28,” said one psychologist (Lykken, 1995).

Mental disorders seldom lead to violence, and clinical prediction of violence is unreliable. What, then, are the triggers for the few people with psychological disorders who do commit violent acts? For some, the trigger is substance abuse. For others, like the Navy Yard shooter, it’s threatening delusions and hallucinated voices that command them to act (Douglas et al., 2009; Elbogen & Johnson, 2009; Fazel et al., 2009, 2010). Whether people with mental disorders who turn violent should be held responsible for their behavior remains controversial. U.S. President Ronald Reagan’s near-assassin, John Hinckley, was sent to a hospital rather than to prison. The public was outraged. “Hinckley insane. Public mad,” declared one headline. They were outraged again in 2011, when Jared Lee Loughner killed six people and injured several others, including U.S. Representative Gabrielle Giffords. Loughner was diagnosed with schizophrenia and twice found incompetent to stand trial. He was later judged competent, pled guilty to 19 charges of murder and attempted murder, and was sentenced to life in prison without parole.

How to prevent mass shootings? Following the Newtown, Connecticut, slaughter of 20 young children and 6 adults, people wondered: Could those at risk for violence be identified in advance by mental health workers and reported to police? Would laws that require such reporting discourage disturbed gun owners from seeking mental health treatment?

Which decision was correct? The first two, which blamed Loughner’s “madness” for clouding his judgment? Or the final one, which decided that he should be held responsible for the acts he committed? As we come to better understand the biological and environmental bases for all human behavior, from generosity to vandalism, when should we—and should we not—hold people accountable for their actions?

“What’s the use of their having names,” the Gnat said, “if they won’t answer to them?”

“No use to them,” said Alice; “but it’s useful to the people that name them, I suppose.”

Lewis Carroll, Through the Looking-Glass, 1871


Better portrayals Old stereotypes are slowly being replaced in media portrayals of psychological disorders. Recent films offer fairly realistic depictions. Iron Man 3 (2013) portrayed a main character, shown here, with posttraumatic stress disorder. Black Swan (2010) dramatized a lead character suffering a delusional disorder. A Single Man (2009) depicted depression.

Despite their risks, diagnostic labels have benefits. Mental health professionals use labels to communicate about their cases, to comprehend the underlying causes, and to discern effective treatment programs. Researchers use labels when discussing work that explores the causes and treatments of disorders. Clients are often relieved to learn that the nature of their suffering has a name, and that they are not alone in experiencing this collection of symptoms.

To test your ability to form diagnoses, visit LaunchPad’s PsychSim 6: Classifying Disorders.

RETRIEVAL PRACTICE

  • What is the value, and what are the dangers, of labeling individuals with disorders?

Therapists and others use disorder labels to communicate with one another using a common language, and to share concepts during research. Clients may benefit from knowing that they are not the only ones with these symptoms. The dangers of labeling people are that (1) people may begin to act as they have been labeled, and (2) the labels can trigger assumptions that will change our behavior toward those we label.