Walter Murch is one of the most highly respected sound and picture editors in modern film history. While doing graduate studies at what was then known as the University of Southern California’s Department of Cinema, Murch became a pioneering sound editor and rerecording mixer, earning his first feature film credit on Francis Ford Coppola’s The Rain People (1969). He went on to earn acclaim for his sound work on former USC classmate George Lucas’s THX 1138 (1971) and American Graffiti (1973), as well as Coppola’s The Godfather (1972) and The Godfather: Part II (1974). Murch received the first of his nine Academy Award nominations for his sound work, in partnership with Art Rochester, on Coppola’s The Conversation (1974); he also won an unprecedented double British Academy Film Award (BAFTA) for his picture editing and sound mixing on that film. He then won his first Oscar (shared with three others) as sound designer and lead rerecording mixer on Coppola’s Apocalypse Now (1979).
Murch went on to be a sound designer, rerecording mixer, and sound editor on dozens of projects, and quickly became an authoritative voice on the discipline of sound editing and mixing. Simultaneously, he pursued the art of picture editing: he earned his first feature editing credit on The Conversation, then served as picture editor on Fred Zinnemann’s Julia (1977), for which he received an Academy Award nomination, and on Coppola’s Apocalypse Now, which earned him his third and fourth Oscar nominations. He was also the first editor to receive an Academy Award for work done on a digital nonlinear editing system (Avid Media Composer), winning an unprecedented double Oscar for sound mixing and picture editing on The English Patient (1996); in 2002, he switched platforms to Apple’s Final Cut Pro for Cold Mountain, which earned him another Oscar nomination. These achievements put him at the forefront of the industry’s transition to modern digital, nonlinear tools. He has written books and lectured frequently on both picture and sound editing, and generously contributed his time and insights to the chapters on sound (Ch. 10) and editing (Chs. 11–12) in Filmmaking in Action, part of his long-standing commitment to enhancing the knowledge of the next generation of filmmakers.
Murch’s pedigree as an expert who understands both the importance of sound in motion pictures and the importance of addressing sound and picture jointly—as coequals in telling a cinematic story—is second to none. During his conversations with coauthor Michael Goldman in the research phase for Filmmaking in Action, he shared great insights about the discipline of sound in particular and the state of the filmmaking industry in general, beyond what appears in the text version of Chapters 10, 11, and 12. We offer some of his observations in the following excerpts from those original conversations:
Michael Goldman: Is it true that you were the first person to officially receive the credit “sound designer” on a motion picture, for your work on Apocalypse Now in the late 1970s? And do you think it is useful for today’s students to understand this kind of history about their chosen disciplines—in this case, the discipline of sound design?
Walter Murch: Yes, that’s accurate—that is one of the credits I got on Apocalypse Now, and I think that it was also on the one-sheet, which was the first time that had happened. It was amazing that a sound person was getting credit at all on a one-sheet, and second, that I was being called a sound designer. That was a term that had been used in theater in the early 1970s, I believe, but it was the first time it had been used on a film.
Should students know that history? Well, it can’t hurt. You can certainly be a good musician even if you don’t know all the ins and outs of musical history. Some people can even be talented musicians without knowing how to read music. If you have enough talent, then it is just a question of applying yourself. But you can always go deeper still, and if you do, you will be better for it. So even if you have that innate musical talent, you won’t be hurt by learning to read music and, likewise, by learning the history of music. The same would be true of filmmaking. [Fellow award-winning sound designer and colleague] Ben Burtt is fascinated by the history of sound design. He even knows the path of certain sound effects as they worked their way through film history.
MG: Speaking of the history of cinematic sound: on Apocalypse Now, the sound design work for which you and your colleagues were honored led, among other things, to the development of 5.1 surround sound for theatrical presentations. Because you designed the film’s soundtrack for stereo surround, you had to approach the sound from a three-dimensional perspective—which, I gather, led to the “sound designer” designation. How did that come about?
WM: It was a revelation to [Francis Ford Coppola] when he listened to an eight-track tape by Isao Tomita called The Planets during a time in the mid ’70s when there was a brief surge of interest in home system quadraphonic music. Francis said that this was what he wanted the film to sound like; he wanted the audience surrounded by the sound, to have it come from all four directions in the room. He tossed that ball at me and said, “Figure out a way to make this work.”
The other thing he said he wanted was for explosions to be felt as well as heard—he wanted them to go down into the infrasonic range [below the limit of human audibility], about 15 cycles a second. Film exhibition could not do that at the time; it bottomed out around 60 cycles back then, which was about two octaves too high. As it turned out, the system we designed and used on Apocalypse Now has come to be known as 5.1 sound because that sixth, low-frequency channel only reproduces a tenth (0.1) of the audio spectrum. At the time, we just called it six-track, since four-track quadraphonic—Francis’s original idea—had no location for dialogue. For psychoacoustic reasons, dialogue needs a specific point of origin in the center of the screen—that’s just where the audience best perceives it—so that is how we went from four channels to five. Then, since we wanted infrasonic sound, we needed an additional channel for those super low frequencies, so that meant a sixth track and special speakers in each theater. As the years went by, that six-track became known as 5.1, and it is now the standard for film exhibition.
MG: Creatively speaking, from the point of view of someone who edits both sound and images at a very high level, do you have more options with sound or more options with images when you are trying to achieve your goals? And how do you see those options changing or evolving with the rise of new kinds of digital technologies in both disciplines?
WM: More options with sound than with images, absolutely. There is currently an ongoing revolution in imaging as we speak, of course, with digital cameras and projectors. Where exactly that is going, with developments like 3D and 48 frames per second, is still an open question, but it is certainly going toward higher and higher resolutions, from 2K to 4K. It used to be, years ago, that very few films could be shot in 70mm. But I can see it coming that soon every movie could be shot in the digital equivalent of 70mm, which would be 8K resolution.
But sound did get ahead of picture to begin with because sound is simpler to digitize and easier to work with. That is why sound “went digital” a decade or more before picture did. But also, sound, just by its nature, is not quite as presentational as images are. The image is literally in front of you. And humans are very visual—a third of the brain is dedicated to processing visual information—so we look forward and at things. We know we are being addressed by images. But that is not the case with sound. Sound surrounds you—you can’t turn away from it. You can’t close or blink your ears, even if you put your hands over them—well, maybe a little, but not completely. So sound is able to infuse the mind of the audience in much more complex and let us say benignly insidious ways than pictures can.
MG: Yet despite all the revolutionary capabilities of how we can use sound in motion pictures, the industry is cutting budgets and crew sizes today, and professionals are being asked to do more with less in many cases. What are your views on this trend?
WM: That is true. At the moment, a lot of people are hurting because of this disruptive technology trend. It is especially true in terms of sound design and mixing—there is a huge downward pressure on sound budgets. Maybe 10 years ago, sound for an independent feature motion picture would have been budgeted at around $150,000. Now, for an ambitious independent film, it might be a fifth of that.
Yet the expectations remain very high. [Producers] say they want the same excellent results for less money because we now have all this amazing equipment, right? And then [industry professionals] manage to get it done by hook or by crook because they are very talented and resourceful, and that becomes a benchmark, proof it can be done. And so the pressure ratchets up another notch.
That is part of what we were talking about before: one of the strong points of sound is that you often can fix it in the mix, or keep changing it until the last minute—and more economically than you can make picture changes. But that also creates this downward pressure. When sound started in the 1920s and 1930s, there was no mixing. You had to get it right as you recorded it, so the sound recordist was the person who had absolute control. If the sound was not right, the recordist would say “Cut, do it again,” overriding the director or anybody else. But now there are so many ways to do an end run around that particular bottleneck—use of ADR, incredible filters, and so on. It was a narrow bottleneck at the start of sound, and now it is almost completely wide open.
At the same time, that flexibility is a preeminently strong point of sound. And that is why young people interested in sound absolutely must learn all of its aspects—the use of boom mics, radio mics, special filters, baffles, everything.
MG: What basic advice would you give young filmmakers about planning sound design when evaluating a screenplay? What are the most important things they should remember?
WM: Sound is fundamentally engaged in setting mood and atmosphere and helping to tell the story, so an ideal goal would be a script that is interested in using sound to tell the story. Many good films are not interested in sound, but that means they offer few opportunities for the creative use of sound. You might add backgrounds and car crash sounds, but that is the interior decorating use of sound, rather than sound as a fundamental part of the architecture. If you look at the screenplay for [Orson Welles’s 1958 film] Touch of Evil [which Murch reedited in 1998, when the movie was restored from the original studio version according to specifications outlined by Welles in a famous 58-page memo he wrote to Universal Studios officials after they changed the movie against his wishes], you find that the whole climax of the film is based on sound—specifically, on whether a certain sound is reverberant or not. It’s amazing. The fact that his voice [that of Police Captain Hank Quinlan, the character played by Welles] is reverberant gives away the fact that he is being surreptitiously recorded; Quinlan gets angry, shots are exchanged, and he and his lieutenant, Menzies [played by Joseph Calleia], die as a result. [Murch wrote an article about the Touch of Evil reedit project for the New York Times in 1998.]
So, as a sound person, you can have the maximum impact on a film when the importance of sound is recognized in the screenplay.
Therefore, the first thing to do is read the screenplay and figure out whether the film concerns itself with sound at all. If not—and many don’t—you adjust your approach accordingly. But if the film is adventurous in sound, one helpful technique is to remember that the screenplay is broken into scenes and to think more about the transitions from one scene to another. Imagine sound as a kind of fabric, like tweed or silk; mentally run your hands across that fabric and ask yourself what it feels like when you go from a scene inside a steel mill to a scene of someone skiing, with white, icy, empty, cold landscapes. That elicits the texture of one sound transitioning into another, and that transition has a feel to it: from tweed to silk, as it were. The advantage of this “transitional” approach is that creative ideas for sound will come to you more quickly than if you just thought about each scene in an isolated, self-contained way. Doing that leads, more often than not, to a kind of “chasing your tail” circularity, which yields up clichés. Also, if the audience is ever aware of sound, it is at transition points, so whatever you do there will be appreciated more fully.
So, suppose your screenplay has 126 scenes. Think about every one of those transitions and how you can optimize them with sound. That will at least start the ball rolling, and you will begin to develop great ideas as a result.