21.1 What Is Intelligence?

For most of the twentieth century, scientists and the general public alike assumed that there was such a thing as “intelligence”; that is, they assumed that some people are smarter than others because they have more of it. That assumption matters throughout life: Scores on IQ tests can change from childhood to adulthood, sometimes markedly, but they generally predict education, income, and longevity (Calvin et al., 2011), all of which vary from person to person.


As one scholar begins a book on intelligence:

Homer and Shakespeare lived in very different times, more than two thousand years apart, but they both captured the same idea: we are not all equally intelligent. I suspect that anyone who has failed to notice this is somewhat out of touch with the species.

[Hunt, 2011, p. 1]

general intelligence (g) The idea of g assumes that intelligence is one basic trait, underlying all cognitive abilities. According to this concept, people have varying levels of this general ability.

One leading theoretician, Charles Spearman (1927), proposed that there is a single entity, general intelligence, which he called g. Spearman contended that, although g cannot be measured directly, it can be inferred from various abilities, such as vocabulary, memory, and reasoning.

IQ tests had already been developed to identify children who needed special instruction, but Spearman promoted the idea that everyone’s overall intelligence could be measured by combining test scores on a diverse mix of items. A summary IQ number could reveal whether a person is a superior, typical, or slow learner, or, in the now-abandoned labels of a century ago, a genius, imbecile, or idiot.

The belief that there is a g continues to influence thinking on intelligence (Nisbett et al., 2012). Many neuroscientists search for genetic underpinnings of intellectual capacity, although they have not yet succeeded in finding g (Deary et al., 2010; Haier et al., 2009). Some aspects of brain function, particularly in the prefrontal cortex, hold promise (Barbey et al., 2013; Roca et al., 2010). Many other scientists also seek one common factor that undergirds IQ—perhaps prenatal brain development, experiences in infancy, or physical health.

Research on Age and Intelligence

Although psychometricians throughout the twentieth century believed that intelligence could be measured and quantified via IQ tests, they disagreed about interpreting the data—especially about whether g rises or falls after age 20 or so (Hertzog, 2011). Methodology was one reason for that disagreement. Consider the implications of the three methods used for studying human development mentioned in Chapter 1: cross-sectional, longitudinal, and cross-sequential.

Observation Quiz Beyond the test itself, what conditions of the testing favored the younger men? (see answer, page 606)

Answer to Observation Quiz: Sitting on the floor with no back support, with a test paper at a distance on your lap, and with someone standing over you holding a stopwatch—all are enough to rattle anyone, especially people over 18.

Smart Enough for the Trenches? These young men were drafted to fight in World War I. Younger men (about age 17 or 18) did better on the military’s intelligence tests than slightly older ones did.

Cross-Sectional Declines

For the first half of the twentieth century, psychologists believed that intelligence increases in childhood, peaks in adolescence, and then gradually declines. Younger people were considered smarter than older ones. This belief was based on the best evidence then available.

For instance, the U.S. Army tested the aptitude of all draftees in World War I. When the scores of men of various ages were compared, it seemed apparent that intellectual ability reached its peak at about age 18, stayed at that level until the mid-20s, and then declined (Yerkes, 1923).

Hundreds of other cross-sectional studies of IQ in many nations confirmed that younger adults outscored older adults. The case for an age-related decline in IQ was considered proven. The two classic IQ tests, the Stanford-Binet and the WISC/WAIS, are still normed for scores to peak in late adolescence. [Lifespan Link: IQ tests are discussed in Chapter 11.]


Longitudinal Improvements

Shortly after the middle of the twentieth century, Nancy Bayley and Melita Oden (1955) analyzed the intelligence of the adults who had been originally selected as child geniuses by Lewis Terman decades earlier. Bayley was an expert in intelligence testing. She knew that “invariable findings had indicated that most intellectual functions decrease after about 21 years of age” (Bayley, 1966, p. 117). Instead she found that the IQ scores of these gifted individuals increased between ages 20 and 50.

Bayley wondered if their high intelligence in childhood somehow protected them from the expected age-related declines. To find out, she retested adults who had been selected and tested in infancy as representative (mostly average, but also including some higher or lower in IQ) of the population of Berkeley, California. Their IQ scores also improved after age 21.

Why did these new data contradict previous conclusions? As you remember from Chapter 1, cross-sectional research can be misleading because each cohort has unique life experiences. The quality and extent of adult education, cultural opportunities (travel, movies), and sources of information (newspapers, radio, and later, television and the Internet) change every decade. No wonder adults studied longitudinally showed intellectual growth.

Earlier cross-sectional research did not take into account that most of the older adults had left school before eighth grade. It was unfair to compare their IQ at age 70 to that of 20-year-olds, almost all of whom had attended high school. In retrospect, it is not surprising that the older draftees scored lower, not because their minds were declining but because their education was inferior. Cross-sectional comparisons would show younger generations scoring better than older ones, but longitudinal data might nonetheless find that most individuals increase in IQ from age 20 to age 60.

Powerful evidence that younger adults score higher because of education and health, not because of youth, comes from longitudinal research in many nations. Recent cohorts always outscore previous ones. As you remember from Chapter 11, this is the Flynn effect.

It is unfair—and scientifically invalid—to compare the IQ scores of a cross section of adults to learn about age-related changes. Older adults will score lower, but that does not mean that they have lost intellectual power. Longitudinal research finds that most gain, not lose.

Longitudinal studies are more accurate than cross-sectional ones in measuring development over the decades. However, longitudinal research on IQ has three drawbacks:

  1. Repeated testing provides practice, and that itself may improve scores.
  2. Some participants move without forwarding addresses, or refuse to be retested, or die. They tend to be those whose IQ is declining, which skews the results of longitudinal research.
  3. Unusual events (e.g., a major war or a breakthrough in public health) affect each cohort, and more mundane changes—such as widespread use of the Internet or less secondhand smoke—make it hard to predict the future based on the history of the past.

Will babies born today be smarter adults than babies born in 1990? Probably. But that is not guaranteed: Cohort effects might make a new generation score lower, not higher, than their elders. New research on the Flynn effect finds that generational increases have slowed in developed nations, albeit not in developing ones (Meisenberg & Woodley, 2013). Thus health and educational benefits may be less dramatic for the next generation than they were 50 years ago.


Cross-Sequential Research

The best method to understand the effects of aging without the complications of historical change is to combine cross-sectional and longitudinal research, a combination called cross-sequential research.

The Seattle Longitudinal Study

At the University of Washington in 1956, K. Warner Schaie tested a cross section of 500 adults, aged 20 to 50, on five standard primary mental abilities considered to be the foundation of intelligence: (1) verbal meaning (vocabulary), (2) spatial orientation, (3) inductive reasoning, (4) number ability, and (5) word fluency (rapid verbal associations). His cross-sectional results showed age-related decline in all five abilities, as others had found before him. He planned to replicate Bayley’s research by retesting his population seven years later.

Schaie then had a brilliant idea: to use both longitudinal and cross-sectional methods. He not only retested his initial participants but also tested a new group who were the same age that his earlier sample had been at first testing. He could then compare people not only with their own earlier scores but also with people currently as old as his original group had been when first tested.

Seattle Longitudinal Study The first cross-sequential study of adult intelligence. This study began in 1956 and is repeated every 7 years.

By retesting and adding a new group every seven years, Schaie obtained a more accurate view of development than was possible from either longitudinal or cross-sectional research alone. Known as the Seattle Longitudinal Study, this was the first cross-sequential study of adult intelligence.
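The logic of this design can be sketched schematically. The following is only an illustrative sketch, not Schaie’s data or analysis; the cohorts, test years, and scores are invented. It shows how retesting the original group while adding a new same-aged group yields both a longitudinal comparison (the same people at two ages) and a time-lag comparison (two cohorts tested at the same age), which is what allows age effects to be disentangled from cohort effects.

```python
# A minimal sketch of cross-sequential logic (hypothetical values, not Schaie's data).
# Wave 1 (1956): an original group is tested at age 25.
# Wave 2 (1963): the same people are retested at age 32, and a NEW group aged 25 is added.
from collections import defaultdict

# (cohort label, test year, age at testing, mean score) -- all numbers invented
records = [
    ("born 1931", 1956, 25, 50),   # original group, first testing
    ("born 1931", 1963, 32, 53),   # same people, retested seven years later
    ("born 1938", 1963, 25, 52),   # new group, the age the originals were in 1956
]

by_cohort = defaultdict(dict)   # cohort -> {test year: score}
by_age = defaultdict(dict)      # age    -> {cohort: score}
for cohort, year, age, score in records:
    by_cohort[cohort][year] = score
    by_age[age][cohort] = score

# Longitudinal comparison: the same cohort at two ages (includes any retest practice effect).
longitudinal_change = by_cohort["born 1931"][1963] - by_cohort["born 1931"][1956]

# Time-lag comparison: two cohorts tested at the same age (a cohort/era effect, not an age effect).
cohort_difference = by_age[25]["born 1938"] - by_age[25]["born 1931"]

print(f"Change within the 1931 cohort from age 25 to 32: {longitudinal_change:+}")
print(f"Difference between cohorts, both tested at age 25: {cohort_difference:+}")
```

The actual analyses are far more elaborate, but these two comparisons capture why the cross-sequential combination is more informative than either the cross-sectional or the longitudinal method alone.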

With cross-sequential research, researchers can analyze the impact of retesting, cohort, and experience. Schaie confirmed and extended what others had found: People improve in most mental abilities during adulthood (Schaie, 2005, 2013). As Figure 21.1 shows, each particular ability at each age and for each gender has a distinct pattern. Note the gradual rise and the eventual decline of all abilities. Men are initially better at number skills and women at verbal skills, but the two sexes grow closer over time. Schaie found that everyone declined by age 60 in at least one of the basic abilities, but not until age 88 did everyone decline in all five skills.

FIGURE 21.1 Age Differences in Intellectual Abilities Cross-sectional data on intellectual abilities at various ages would show much steeper declines. Longitudinal research, in contrast, would show more notable rises. Because Schaie’s research is cross-sequential, the trajectories it depicts are more revealing: None of the average scores for the five abilities at any age are above 55 or below 35. Because the methodology takes into account the cohort and historical effects, the purely age-related differences from ages 25 to 60 are very small.


Other researchers from many nations find similar trends, although the specific abilities and trajectories differ (Hunt, 2011). Adulthood is typically a time of increasing, or at least maintaining, IQ, with dramatic individual differences: Some people and some abilities show declines at age 40; others, not until decades later (Johnson et al., 2014; Kremen et al., 2014).

Especially for Older Brothers and Sisters If your younger siblings mock your ignorance of current TV shows and beat you at the latest video games, does that mean your intellect is fading?

Response for Older Brothers and Sisters: No. While it is true that each new cohort might be smarter than the previous one in some ways, cross-sequential research suggests that you are smarter than you used to be. Knowing that might help you respond wisely—smiling quietly rather than insisting that you are superior.

Schaie discovered more detailed cohort changes than the Flynn effect. Each successive cohort (born at seven-year intervals from 1889 to 1973) scored higher in adulthood than previous generations did in verbal memory and inductive reasoning, but number ability (math) peaked for those born in 1924 and then declined slowly in later cohorts until about 1970, when it no longer fell with each generation (Schaie, 2013). School curricula may explain these differences: By the mid-twentieth century, reading, writing, and self-expression were emphasized more than they had been early in the twentieth century or than they are in the twenty-first century.

Another cohort effect is that the age-related declines, though still evident, now appear about a decade later than they used to (Schaie, 2013). The likely explanation is that the later-born population, on average, has more education and better health.

One correlate of higher ability for every cohort is having work or a personal life that challenges the mind. Schaie found that recent cohorts of adults more often have intellectually challenging jobs and thus higher intellectual ability. That has had a marked effect on female IQ, since women once tended to stay home or to hold routine jobs. Now that more women are employed in challenging work, women score higher than did women their age 50 years ago.

Other research also finds that challenging work fosters high intelligence. One team found that retiring from difficult jobs often reduced intellectual power, but leaving dull jobs increased it (Finkel et al., 2009). This depends on activities after retirement: Intellectually demanding tasks, paid or not, keep the mind working (Schooler, 2009; Schaie, 2013).

Many studies using sophisticated designs and statistics have supplanted early cross-sectional and longitudinal studies. None are perfect because “no design can fully sanitize a study so as to solve the age-cohort-period identification problem” (Herzog, 2010, p. 5). Cultures, eras, and individuals vary substantially regarding which cognitive abilities are nurtured and tested. From about age 20 to 70, national values, specific genes, and education are all more influential on IQ scores than chronological age (Johnson et al., 2014).

It is hard to predict intelligence for any particular adult, even if genes and age are known. For instance, a study of Swedish twins aged 41 to 84 found differences in verbal ability among monozygotic twins with equal education. If genes and education alone determined ability, their scores should have been identical, but they were not. As expected, however, age had an effect: Memory and spatial ability declined over time (Finkel et al., 2009).

Considering all the research, adult intellectual abilities measured on IQ tests sometimes rise, fall, zigzag, or stay the same as age increases. Specific patterns are affected by each person’s experiences, with “virtually every possible permutation of individual profiles” (Schaie, 2013, p. 497). This illustrates the life-span perspective: Intelligence is multidirectional, multicultural, multicontextual, and plastic. Although scores on several subtests decline, especially on timed tests, overall ability is usually maintained until late adulthood.


SUMMING UP

Intelligence as a concept is controversial, with some experts believing that there is one general intelligence, of which individuals have more or less, and others contending that each ability rises or falls separately. Psychometricians once believed that intelligence decreased beginning at about age 20: that is what cross-sectional data revealed. Then longitudinal testing demonstrated that many adults advance in intelligence with age. Cross-sequential research provides a more nuanced picture, finding that some abilities decrease and others increase throughout adulthood. Many factors—including challenging work, a stimulating personal life, past education, and good health—protect intelligence and postpone decline. Individual variations are dramatic, with some people showing decrements by age 40 and others not until decades later.