10-4 Anatomy of Language and Music

This chapter began with the evolutionary implications of discovering a flute made by Neanderthals (see Focus 10-1). That Neanderthals made flutes implies not only that they processed musical sound wave patterns but also that they made music. In the human brain, musical ability is generally a right-hemisphere specialization, complementary to language ability, which is lateralized to the left hemisphere in most people.

Section 7-4 surveys functional brain imaging methods; Section 7-2 reviews methods for measuring the brain’s electrical activity.

No one knows whether these complementary systems evolved together in the hominid brain, but it seems highly likely that they did. Language and music abilities are highly developed in the modern human brain. Although little is known about how each is processed at the cellular level, electrical stimulation, electrical recording, and blood flow imaging studies yield important insights into the cortical regions that process them. We investigate such studies next, focusing first on how the brain processes language.

Processing Language

An estimated 5000 to 7000 human languages are spoken in the world today, and probably many more have gone extinct in past millennia. Researchers have wondered whether the brain has a single system for understanding and producing any language, regardless of its structure, or whether disparate languages, such as English and Japanese, are processed differently. To answer this question, it helps to analyze languages to determine just how fundamentally similar they are, despite their obvious differences.

Uniformity of Language Structure

Foreign languages often seem impossibly complex to those who do not speak them. Their sounds alone may seem odd and difficult to make. If you are a native English speaker, for instance, Asian languages, such as Japanese, probably sound especially melodic and almost without obvious consonants to you, whereas European languages, such as German or Dutch, may sound heavily guttural.

Even within such related languages as Spanish, Italian, and French, marked differences can make learning one of them challenging, even if the student already knows another. Yet as real as all these linguistic differences may be, they are superficial. The similarities among human languages, although not immediately apparent, are actually far more fundamental than their differences.

Noam Chomsky (1965) is usually credited as the first linguist to stress similarities over differences in human language structure. In a series of books and papers written over the past half-century, Chomsky has made a sweeping claim, as have researchers such as Steven Pinker (1997) more recently. They argue that all languages have common structural characteristics stemming from a genetically determined constraint, and these common characteristics form the basis of universal grammar theory. Humans, apparently, have a built-in capacity for learning and using language, just as we have for walking upright.

Chomsky was greeted with deep skepticism when he first proposed this idea in the 1960s, but it has since become clear that the capacity for human language is indeed genetic. An obvious piece of evidence: language is universal in human populations. All people everywhere use language.

A language’s complexity is unrelated to its culture’s technological complexity. The languages of technologically unsophisticated peoples are every bit as complex and elegant as the languages of postindustrial cultures. Nor is the English of Shakespeare’s time inferior or superior to today’s English; it is just different.

A 1-year-old’s 5- to 10-word vocabulary doubles in the next 6 months and by 36 months mushrooms to 1000 words; see Section 8-3.

Another piece of evidence that Chomsky adherents cite for the genetic basis of human language is that humans learn language early in life and seemingly without effort. By about 12 months of age, children everywhere have started to speak words. By 18 months, they are combining words, and by age 3 years, they have a rich language capability.

Perhaps the most amazing thing about language development is that children are not formally taught the structure of their language, just as they are not taught to crawl or walk. They just do it. As toddlers, they are not painstakingly instructed in the rules of grammar. In fact, their early errors—sentences such as “I goed to the zoo”—are seldom even corrected by adults. Yet children master language rapidly. They also acquire language through a series of stages that are remarkably similar across cultures. Indeed, the process of language acquisition plays an important role in Chomsky’s theory of its innateness—which is not to say that language development is not influenced by experience.

Focus 8-3 describes how cortical activation differs for second languages learned later in life; Section 15-6 reviews research on bilingualism and intelligence.

At the most basic level, children learn the language or languages that they hear spoken. In an English household, they learn English; in a Japanese home, Japanese. They also pick up the language structure—the vocabulary and grammar—of the people around them, even though that structure can vary from one speaker to another. Children go through a sensitive period for language acquisition, probably from about 1 to 6 years of age. If they are not exposed to language during this period, their language skills are severely compromised. If children learn two languages simultaneously, the two languages share the same part of Broca’s area; indeed, their neural representations overlap (Kim et al., 1997).

Both its universality and its natural acquisition favor the theory of a genetic basis for human language. A third piece of evidence is the many basic structural elements common to all languages. Granted, every language has its own particular grammatical rules specifying exactly how various parts of speech are positioned in a sentence (syntax), how words are inflected to convey different meanings, and so forth. But an overarching set of rules also applies to all human languages, and the first rule is that there are rules.

For instance, all languages employ parts of speech that we call subjects, verbs, and direct objects. Consider the sentence Jane ate the apple. Jane is the subject, ate is the verb, and apple is the direct object. Syntax is not specified by any universal rule but rather is a characteristic of the particular language. In English, syntactical order (usually) is subject, verb, object; in Japanese, the order is subject, object, verb; in Gaelic, the order is verb, subject, object. Nonetheless, all have both syntax and grammar.

The existence of these two structural pillars in all human languages is seen in the phenomenon of creolization—the development of a new language from what was formerly a rudimentary language, or pidgin. Creolization took place in the seventeenth century in the Americas when slave traders and colonial plantation owners brought together, from various parts of West Africa, people who lacked a common language. The newly enslaved needed to communicate, and they quickly created a pidgin based on whatever language the plantation owners spoke—English, French, Spanish, or Portuguese.

The pidgin had a crude syntax (word order) but lacked a real grammatical structure. The children of the slaves who invented this pidgin grew up with caretakers who spoke only pidgin to them. Yet within a generation, these children had developed their own creole, a language complete with a genuine syntax and grammar.

Clearly, the pidgin that adults had invented of necessity was not an adequate language for children to learn. Instead, their innate biology shaped a new language similar in basic structure to all other human languages. All creolized languages seem to evolve in a similar way, even though the base languages are unrelated. This phenomenon can happen only because there is an innate biological component to language development.

Localizing Language in the Brain

Finding a universal basic language structure set researchers on the search for an innate brain system that underlies language use. By the late 1800s, it had become clear that language functions were at least partly localized—not just within the left hemisphere but to specific areas there. Clues that led to this conclusion began to emerge early in the nineteenth century, when neurologists observed patients with frontal lobe injuries who had language difficulties.

Section 7-1 links Broca’s observations to his contributions to neuropsychology.

Then, in 1861, the French physician Paul Broca confirmed that certain language functions are localized in the left hemisphere. Broca concluded, on the basis of several postmortem examinations, that language is localized in the left frontal lobe, in a region just anterior to the central fissure. A person with damage in this area is unable to speak despite both an intact vocal apparatus and normal language comprehension. The identification of what came to be called Broca’s area was significant because it triggered the idea that the left and right hemispheres might have different functions.

Other neurologists of the time believed that Broca’s area might be only one of several left-hemisphere regions that control language. In particular, they suspected a relation between hearing and speech. Proving this suspicion correct, Karl Wernicke later described patients who had difficulty comprehending language after injury to the posterior region of the left temporal lobe, identified as Wernicke’s area in Figure 10-18.

image
Figure 10-18: FIGURE 10-18 Neurology of Language (A) In Wernicke’s model of speech recognition, stored sound images are matched to spoken words in the left posterior temporal cortex, shown in yellow. (B) Speech is produced through the connection that the arcuate fasciculus makes between Wernicke’s area and Broca’s area.

In Section 10-2 we identified Wernicke’s area as a speech zone (see Figure 10-12A). Damage to any speech area produces some form of aphasia, the general term for an inability to comprehend or produce language despite otherwise intact cognition and vocal mechanisms. At one extreme, people who suffer Wernicke’s aphasia can speak fluently, but their language is confused and makes little sense, as if they have no idea what they are saying. At the other extreme, a person with Broca’s aphasia cannot speak despite normal comprehension and intact physiology.

Wernicke went on to propose a model, diagrammed in Figure 10-18A, for how the two language areas of the left hemisphere interact to produce speech. He theorized that images of words are encoded by their sounds and stored in the left posterior temporal cortex. When we hear a word that matches one of those sound images, we recognize it, which is how Wernicke’s area contributes to speech comprehension.

To speak words, Broca’s area in the left frontal lobe must come into play, because the motor program to produce each word is stored in this area. Messages travel to Broca’s area from Wernicke’s area through the arcuate fasciculus, a fiber pathway that connects the two regions. Broca’s area in turn controls articulation of words by the vocal apparatus, as diagrammed in Figure 10-18B.

Wernicke’s model provided a simple explanation both for the existence of two major language areas in the brain and for the contribution each area makes to the control of language. But the model was based on postmortem examinations of patients with brain lesions that were often extensive. Not until neurosurgeon Wilder Penfield’s pioneering studies, begun in the 1930s, were the left hemisphere language areas clearly and accurately mapped.

Auditory and Speech Zones Mapped by Brain Stimulation

It turns out, among Penfield’s discoveries, that Broca’s area is not the independent site of speech production, nor is Wernicke’s area the independent site of language comprehension: electrical stimulation of either region disrupts both processes.

Penfield took advantage of the chance to map the brain’s auditory and language areas when he operated on patients undergoing elective surgery to treat epilepsy unresponsive to antiseizure medication. The goal of this surgery is to remove tissues where the abnormal discharges are initiated without damaging the areas responsible for linguistic ability or vital sensory or motor functioning. To determine the locations of these critical regions, Penfield used a weak electrical current to stimulate the brain surface. By monitoring the patient’s responses during stimulation in different locations, Penfield could map brain functions along the cortex.

Typically, two neurosurgeons perform the operation under local anesthesia applied to the skin, skull, and dura mater (Penfield is shown operating in Figure 10-19A) as a neurologist analyzes the electroencephalogram in an adjacent room. Patients, who are awake, are asked to contribute during the procedure, and the effects of brain stimulation in specific regions can be determined in detail and mapped. Penfield placed tiny numbered tickets on different parts of the brain’s surface where the patient noted that stimulation had produced some noticeable sensation or effect, producing the cortical map shown in Figure 10-19B.

image
Figure 10-19: FIGURE 10-19 Mapping Cortical Functions (A) Neurosurgery for eligible epilepsy patients who failed to respond to antiseizure medications. The patient is fully conscious, lying on his right side, and kept comfortable with local anesthesia. Wilder Penfield stimulates discrete cortical areas in the patient’s exposed left hemisphere. In the background, a neurologist monitors an EEG recorded from each stimulated area to help identify the epileptogenic focus. The anesthetist (seated) observes the patient’s responses to the cortical stimulation. (B) A drawing overlies a photograph of the patient’s exposed brain. The numbered tickets identify points Penfield stimulated to map the cortex in this patient’s brain. At points 26, 27, and 28, a stimulating electrode disrupted speech. Point 26 presumably is in Broca’s area, 27 is the motor cortex facial control area, and 28 is in Wernicke’s area.
Courtesy Penfield Archive, Montreal Neurological Institute, McGill University

When Penfield stimulated the auditory cortex, patients often reported hearing such sounds as a ringing that sounded like a doorbell, a buzzing noise, or a sound like birds chirping. This result is consistent with later single-cell recordings from the auditory cortex of nonhuman primates, which showed that the auditory cortex participates in pattern recognition.

Penfield also found that stimulation in area A1 seemed to produce simple tones—ringing sounds, and so forth—whereas stimulation in the adjacent auditory cortex (Wernicke’s area) was more apt to cause some interpretation of a sound—ascription of a buzzing sound to a familiar source such as a cricket, for instance. There was no difference in the effects of stimulation of the left or right auditory cortex, and the patients heard no words when the brain was stimulated.

Sometimes, however, stimulation of the auditory cortex produced effects other than sound perceptions. Stimulation of one area, for example, might cause a patient to feel deaf, whereas stimulation of another area might produce a distortion of sounds actually being heard. As one patient exclaimed after a certain region had been stimulated, “Everything you said was mixed up!”

Penfield was most interested in the effects of brain stimulation not on simple sound wave processing but on language. He and later researchers used electrical stimulation to identify four important cortical regions that control language. The two classic regions—Broca’s area and Wernicke’s area—are left-hemisphere regions. Located on both sides of the brain are the other two major language use regions: the dorsal area of the frontal lobes and the areas of the motor and somatosensory cortex that control facial, tongue, and throat muscles and sensations. Although the effects on speech vary depending on the region, stimulating any of them disrupts speech in some way.

Clearly, much of the left hemisphere takes part in processing language. Figure 10-20 shows the areas that Penfield found engaged in some way in language processing. In fact, Penfield mapped cortical language areas in two ways, first by disrupting speech, then by eliciting speech. Not surprisingly, damage to any speech area produces some form of aphasia.

image
Figure 10-20: FIGURE 10-20 Cortical Regions That Control Language This map, based on Penfield’s extensive study, summarizes the left-hemisphere areas where direct stimulation may disrupt speech or elicit vocalization. Information from W. Penfield & L. Roberts (1956). Speech and brain mechanisms (p. 201). London: Oxford University Press.

DISRUPTING SPEECH Penfield expected that electrical current might disrupt ongoing speech by effectively short-circuiting the brain. To test his hypothesis, he stimulated different cortical regions while the patient was speaking. In fact, the speech disruptions took several forms, including slurring, word confusion, and difficulty in finding the right word. Such aphasias are detailed in Clinical Focus 10-4, Left-Hemisphere Dysfunction.

Speech was disrupted not only by stimulation of the classic speech zones but also by stimulation of the supplementary speech area on the dorsal surface of the frontal lobe (shown in Figure 10-20) and of the motor and somatosensory areas that control facial movements. This last exception makes sense because talking requires movement of facial, tongue, and throat muscles.

CLINICAL FOCUS 10-4

Left-Hemisphere Dysfunction

Susan S., a 25-year-old college graduate and mother of two, had epilepsy. When she had a seizure, which was almost every day, she lost consciousness for a short period during which she often engaged in repetitive behaviors, such as rocking back and forth.

Medication can usually control such seizures, but the drugs were ineffective for Susan. The attacks disrupted her life: they prevented her from driving and restricted the types of jobs she could hold. So Susan decided to undergo neurosurgery to remove the region of abnormal brain tissue that was causing the seizures.

The procedure has a high success rate. Susan’s surgery entailed removal of a part of the left temporal lobe, including most of the cortex in front of the auditory areas. Although it may seem a substantial amount of the brain to cut away, the excised tissue is usually abnormal, so any negative consequences typically are minor.

After the surgery, Susan did well for a few days; then she started to have unexpected and unusual complications. As a result, she lost the remainder of her left temporal lobe, including the auditory cortex and Wernicke’s area. The extent of lost brain tissue resembles that shown in the accompanying MRI.

Susan no longer understood language; she responded only to the sound of her name and could speak just one phrase: I love you. She was also unable to read, showing no sign that she could even recognize her own name in writing.

To find ways to communicate with Susan, one of us (Bryan Kolb) tried humming nursery rhymes to her. She immediately recognized them and could say the words. We also discovered that her singing skill was well within the normal range and that she had a considerable repertoire of songs.

Susan did not seem able to learn new songs, however, and she did not understand messages that were sung to her. Apparently, Susan’s musical repertoire was stored and controlled independently of her language system.

image
Postoperative MRI of a patient who has lost most of the left hemisphere.
Courtesy of George Jallo/Johns Hopkins Hospital

ELICITING SPEECH The second way Penfield mapped language areas was to stimulate the cortex when a patient was not speaking. Here the goal was to see if stimulation caused the person to utter a speech sound. Penfield did not expect to trigger coherent speech; cortical electrical stimulation is not physiologically normal and so probably would not produce actual words or word combinations. His expectation was borne out.

Stimulation of regions on both sides of the brain—for example, the supplementary speech areas—produces a sustained vowel cry, such as Oooh or Eee. Stimulation of the facial areas in the motor and somatosensory cortices produces some vocalization related to mouth and tongue movements. Stimulation outside these speech-related zones produces no such effects.

Auditory Cortex Mapped by Positron Emission Tomography

Section 7-4 details procedures used to obtain a PET scan.

To study the metabolic activity of brain cells engaged in tasks such as processing language, researchers use PET, a brain-imaging technique that detects changes in brain blood flow. Among the many PET studies of auditory stimulation, a series conducted by Robert Zatorre and his colleagues (1992, 1996) serves as a good example. These researchers hypothesized that simple auditory stimuli, such as bursts of noise, are analyzed by area A1, whereas more complex auditory stimuli, such as speech syllables, are analyzed in adjacent secondary auditory areas.

The researchers also hypothesized that performing a discrimination task for speech sounds would selectively activate left-hemisphere regions. This selective activation is exactly what they found. Figure 10-21A shows increased activity in the primary auditory cortex in response to bursts of noise, whereas secondary auditory areas are activated by speech syllables (Figure 10-21B and C).

image
Figure 10-21: FIGURE 10-21 Cortical Activation in Language-Related Tasks (A) Passively listening to noise bursts activates the primary auditory cortex. (B) Listening to words activates the posterior speech zone, including Wernicke’s area. (C) Making a phonetic discrimination activates the frontal region, including Broca’s area.

Both types of stimuli produced responses in both hemispheres but with greater activation in the left hemisphere for the speech syllables. These results imply that area A1 analyzes all incoming auditory signals, speech and nonspeech, whereas the secondary auditory areas are responsible for some higher-order signal processing required for analyzing language sound patterns.

As Figure 10-21C shows, the speech sound discrimination task yielded an intriguing additional result: Broca’s area in the left hemisphere was also activated. This frontal lobe region’s involvement during auditory analysis may seem surprising. In Wernicke’s model, Broca’s area is considered the storage area for motor programs needed to produce words. It is not usually a region thought of as a site of speech sound discrimination.

A possible explanation is that to determine that the g in bag and the one in pig are the same speech sound, the auditory stimulus must be related to how the sound is actually articulated. That is, speech sound perception requires a match with the motor behaviors associated with making the sound.

This role for Broca’s area in speech analysis is confirmed further when investigators ask people to determine whether a stimulus is a word or a nonword (e.g., tid versus tin or gan versus tan). In this type of study, information about how the words are articulated is irrelevant, and Broca’s area need not be recruited. Imaging reveals that it is not.

Processing Music

Although Penfield did not study the effect of brain stimulation on musical analysis, many researchers have studied musical processing in brain-damaged patients. Clinical Focus 10-5, Cerebral Aneurysms, describes one such case. Collectively, the results of these studies confirm that musical processing is in fact largely a right-hemisphere specialization, just as language processing is largely a left-hemisphere one.

Localizing Music in the Brain

A famous patient, the French composer Maurice Ravel (1875–1937), provides an excellent example of right-hemisphere predominance for music processing. Boléro is perhaps Ravel’s best-known work. At the peak of his career, Ravel had a left-hemisphere stroke and developed aphasia. Yet many of his musical skills remained intact post-stroke because they were localized to the right hemisphere. He could still recognize melodies, pick up tiny mistakes in music he heard, and even judge the tuning of pianos. His music perception was largely intact.

Skills that had to do with producing music, however, were among those destroyed. Ravel could no longer recognize written music, play the piano, or compose. This dissociation of music perception and music production may parallel the dissociation of speech comprehension and speech production in language. Apparently, the left hemisphere plays at least some role in certain aspects of music processing, especially those that have to do with making music.

CLINICAL FOCUS 10-5

Cerebral Aneurysms

C. N. was a 35-year-old nurse described by Isabelle Peretz and her colleagues (1994). In December 1986, C. N. suddenly developed severe neck pain and headache. A neurological examination revealed an aneurysm in the middle cerebral artery on the right side of her brain.

image

An aneurysm is a bulge in a blood vessel wall caused by weakening of the tissue, much like the bulge that appears in a bicycle tire at a weakened spot. Aneurysms in a cerebral artery are dangerous: if they burst, severe bleeding and consequent brain damage result.

In February 1987, C. N.’s aneurysm was surgically repaired, and she appeared to have few adverse effects. Postoperative brain imaging revealed, however, that a new aneurysm had formed in the same location but in the middle cerebral artery on the opposite side of the brain. This second aneurysm was repaired 2 weeks later.

After her surgery, C. N. had temporary difficulty finding the right word when she spoke, but more important, her perception of music was deranged. She could no longer sing, nor could she recognize familiar tunes. In fact, singers sounded to her as if they were talking instead of singing. But C. N. could still dance to music.

A brain scan revealed damage along the lateral fissure in both temporal lobes. The damage did not include the primary auditory cortex, nor did it include any part of the posterior speech zone. For these reasons, C. N. could still recognize nonmusical sound patterns and showed no evidence of language disturbance. This finding reinforces the hypothesis that nonmusical sounds and speech sounds are analyzed in parts of the brain separate from those that process music.

To find out more about how the brain carries out the perceptual side of music processing, Zatorre and his colleagues (1994) conducted PET studies. When participants listened simply to bursts of noise, Heschl’s gyrus became activated (Figure 10-22A), but perception of a melody triggered major activation in the right-hemisphere auditory cortex lying in front of Heschl’s gyrus (Figure 10-22B), as well as minor activation in the same left-hemisphere region (not shown).

image
Figure 10-22: FIGURE 10-22 Cortical Activation in Music-Related Tasks (A) Passively listening to bursts of noise activates Heschl’s gyrus. (B) Listening to a melody activates the secondary auditory cortex. (C) Making relative pitch judgments about two notes in each melody activates a right frontal lobe area.

In another test, participants listened to the same melodies. The investigators asked them to indicate whether the pitch of the second note was higher or lower than that of the first note. During this task, which necessitates short-term memory of what was just heard, blood flow in the right frontal lobe increased (Figure 10-22C). As with language, then, the frontal lobe plays a role in auditory analysis when short-term memory is required. People with enhanced or impaired musical abilities show differences in frontal lobe organization, as demonstrated in Research Focus 10-6, The Brain’s Music System.

The brain may be tuned prenatally to the language it will hear at birth; see Focus 7-1.

As noted earlier, the capacity for language is innate. Sandra Trehub and her colleagues (1999) showed that music may be innate as well, as we hypothesized at the beginning of the chapter. Trehub found that infants show learning preferences for musical scales versus random notes. Like adults, children are sensitive to musical errors, presumably because they are biased for perceiving regularity in rhythms. Thus, it appears that the brain is prepared at birth for hearing both music and language, and presumably it selectively attends to these auditory signals.

RESEARCH FOCUS 10-6

The Brain’s Music System

Nonmusicians enjoy music and have musical ability. Musicians show an enormous range of ability: some have perfect pitch and some do not, for example. About 4 percent of the population, however, is tone deaf; these people’s difficulties, characterized as amusia—an inability to distinguish between musical notes—are lifelong.

Robert Zatorre and his colleagues (Bermudez et al., 2009; Hyde et al., 2007) have used MRI to look at differences among the brains of musicians, nonmusicians, and amusics. MRIs of the left and right hemispheres show that compared to nonmusicians, musicians’ cortical thickness is greater in dorsolateral frontal and superior temporal regions. Curiously, musicians with perfect pitch have thinner cortex in the posterior part of the dorsolateral frontal lobe. Thinner appears to be better for some music skills.

Compared with nonmusicians, then, musicians with thicker than normal cortex must have enhanced neural networks in the right-hemisphere frontal–temporal system linked to performing musical tasks. But thicker than normal cortex can bestow both advantage and impairment.

Analysis of amusic participants’ brains showed thicker cortex in the right frontal area and in the right auditory cortex regions. Some abnormality in neuronal migration during brain development is likely to have led to an excess of neurons in the right frontal–temporal music pathway of the amusics. Their impaired music cognition is the result.

image
Compared to nonmusicians and amusics, musicians' thicker cortex, shown in the green, yellow, and red areas, contributes to performance. Focus 14-5 describes how playing music can affect sensorimotor maps in the cortex.
Research from P. Bermudez, J. P. Lerch, A. C. Evans, & R. J. Zatorre (2009). Neuroanatomical correlates of musicianship as revealed by cortical thickness and voxel-based morphometry. Cerebral Cortex, 19, 1583–1596.

Music as Therapy

Watch Parkinson patients step to the beat of music to improve their gait length and walking speed at https://www.youtube.com/watch?v=eNpoVeLfMKg.

The power of music to engage the brain has led to its use as a therapeutic tool for brain dysfunctions. The best evidence of its effectiveness lies in studies of motor disorders such as stroke and Parkinson disease (Johansson, 2012). Listening to rhythm activates the motor and premotor cortex and can improve gait and arm training after stroke. Musical experience reportedly also enhances the ability to discriminate speech sounds and to distinguish speech from background noise in patients with aphasia.

More on music as therapy in Focus 5-2 and the dance class for Parkinson patients pictured on page 160. Sections 16-2 and 16-3 revisit music therapy.

Music therapy also appears to be a useful complement to more traditional therapies, especially when mood problems such as depression accompany the condition. This may prove important in treating stroke and traumatic brain injury, in which depression is a common complication during recovery. Music therapy also has positive effects following major surgery in both adults and children, reducing both their pain perception and the amount of pain medication they use (Sunitha Suresh et al., 2015). With all these applications, perhaps researchers will decide to use noninvasive imaging to determine which brain areas music therapy recruits.

10-4 REVIEW

Anatomy of Language and Music

Before you continue, check your understanding.

Question 1

The human auditory system has complementary specialization for the perception of sounds: left for ____________ and right for ____________.

Question 2

The three frontal lobe regions that participate in producing language are ____________, ____________, and ____________.

Question 3

____________ area identifies speech syllables and words and stores their representations in that location.

Question 4

____________ area matches speech sounds to the motor programs necessary to articulate them.

Question 5

At one end of the spectrum for musical ability are people with ____________ and at the other are people who are ____________.

Question 6

What evidence supports the idea that language is innate?

Answers appear in the Self Test section of the book.