Sync Tanks: The Art and Technique of Postproduction Sound
by Elisabeth Weis (Cineaste, 1995)
The credits for John Ford's My Darling Clementine (1946) include Wyatt Earp as technical consultant[1] but only one person responsible for all of postproduction sound (the composer). The credits for Lawrence Kasdan's Wyatt Earp (1994) list the names of thirty-nine people who worked on postproduction sound. The difference is not simply a matter of expanding egos or credits.
"An older film like Casablanca has an empty soundtrack compared with what we do today. Tracks are fuller and more of a selling point," says Michael Kirchberger (What's Eating Gilbert Grape?, Sleepless in Seattle). "Of course a good track without good characters and storyline won't be heard by anyone."[2]
With soundtracks much denser than in the past, the present generation of moviemakers has seen an exponential growth in the number of people who work on the sound after the film has been shot. What do all those people add both technically and esthetically? "When I started out, there was one sound editor and an assistant," says picture editor Evan Lottman (The Exorcist, Sophie's Choice, Presumed Innocent). "As editor for a big studio picture in the early Seventies I usually cut the ADR [dialog replaced in postproduction--EW] and the music as well." Today an editor on a major feature would either hire a supervising sound editor who gathers a team of sound specialists, or go to a company like C5, Inc., Sound One, or Skywalker that can supply the staff and/or state-of-the-art facilities.
Sound is traditionally divided into three elements: dialog, music, and effects (any auditory information that isn't speech or music). Although much of the dialog can be recorded during principal photography, it needs fine tuning later. And almost all other sound is added during postproduction.
How does sound get on pictures? The following is a rough sketch of the procedure for a major Hollywood feature production. But it is not a blueprint; exact procedures vary tremendously with the budget and shooting schedule of the film. Blockbuster action films, for instance, often devote much more time and money to sound effects than is described below. The process certainly does not describe how the average film is made abroad; few other cultures have such a fetish for perfect lip-synching as ours--so even dialog is recorded after the shoot in many countries.
This article can only begin to suggest how digital technologies are affecting post-production sound. For one thing, there is wide variation in types of systems; for another, digital sound techniques are evolving faster than alien creatures in a science fiction movie.
PRODUCTION
Even the sound recorded live during principal photography is not wedded physically to the image and has to be precisely relinked during postproduction. It is usually recorded on 1/4" magnetic tape (though there are alternatives) and marked so that it can be ultimately rejoined with the picture in perfect synchronization.
On the set the location recordist (listed as production mixer) tries to record dialog as cleanly and crisply as possible, with little background noise (a high signal-to-noise ratio). A boom operator, usually suspending the microphone above and in front of the person speaking, tries to get it as close as possible without letting the microphone or its shadow enter the frame.
An alternative to a mike suspended from an overhead boom is a hidden lavalier mike on the actor's chest, which is either connected to the tape recorder via cables or wired to a small radio transmitter also hidden on the actor. But dialog recorded from below the mouth must be adjusted later to match the better sound quality of the boom mike. And radio mikes can pick up stray signals, such as the dispatch radio of a passing gypsy cab.
While on the set, the sound recordist may also ask for a moment of silence to pick up some "room tone" (the sound of the location when no one is talking), which must be combined with any dialog that is added during postproduction (with reconstructed room reverberation) so that it matches what is shot on the set. (We don't usually notice the sound of the breeze or a motor hum, but their absence in a Hollywood product would be quite conspicuous.) The set recordist may also capture sounds distinctive to a particular location to give the postproduction crew some sense of local color.
POSTPRODUCTION
Theoretically, the first stage of sound editing is "spotting," where the editor(s) and possibly the director go through each second of the film with the supervising sound editor in order to generate a list of every sound that needs to be added, augmented, or replaced. This practice has fallen prey to demands for early previews, which have wreaked havoc on postproduction schedules.
Dialog
Dialog editing is mostly a matter of cleaning up production sound. The work can be as detailed as reusing a final consonant of one word to complete another where it had been obscured, or removing an actor's denture clicks.
Some of the dialog heard in the completed film was not recorded on location. Shooting silent (MOS) is much easier than having to achieve perfect quiet from the crew, the crowd watching the filming, or airplanes and birds passing overhead. Even with the compliance of onlookers, nature, and ubiquitous car alarms, however, miked dialog may be unusable because it picked up extraneous noises such as a squeaky camera dolly or clothing rustle.
Despite these difficulties, directors almost always prefer production dialog, which is an integral part of the actors' performances, to looping (rerecording speech in postproduction). Although there is a trend in looping sessions toward using booms and the original microphones to mimic the situation on the set, it is nearly impossible to duplicate all the conditions of the shoot. Orson Welles found that out after shooting the festive horseless carriage ride in The Magnificent Ambersons. Because the scene was photographed in an ice plant with tremendous reverberation (which wouldn't be heard outdoors), the dialog of all six characters had to be looped. When Welles heard the original looping, he rejected it because the voices were much too static; they didn't sound as though they were spoken by people in an automobile. The sound man's low-tech solution was to redo all the lines with the performers and himself seated on a twelve-inch plank suspended between sawhorses. For a week, says James G. Stewart, "As we watched the picture I simulated the movement of the car by bouncing the performer and myself up and down on the plank."
It is tough, however, for actors to match later the emotional level they achieved on the set. Ron Bochar, who supervised the sound on Philadelphia, describes the powerful scene where Tom Hanks is responding to an opera recording as a case in point. Ideally the aria and the dialog would be on separately manipulable tracks so that the dialog could be kept intelligible. But Hanks wanted both the freedom to move around and the ability to hear and react to the singing of Maria Callas. As a result, both his dialog and her aria are recorded on the same track and the dialog is less than ideal. But everyone involved agreed that the live performance was preferable to looping the scene. "That's one of those things about 'mistakes' that get put in because you are forced to or they just happen," says Bochar. "They turn out to be things that you could never re-create. You'd ruin the scene by making it cleaner."
Today, one of the first jobs of dialog editors is to split spoken lines (usually from different camera--hence microphone--angles) onto separate tracks. Doing so, says Kirchberger, "makes them as independently controllable as possible, so that we can later 'massage' them in such a way that they fit together seamlessly." This is not to say that filmmakers can't do creative things with dialog.
Robert Altman, most notably, developed with rerecording mixer Richard Portman a technique for creating his unique multilayered dialog style. During the shoot Altman, who allows a lot of improvisation, mikes each of his simultaneous speakers on separate tracks (sixteen for Pret-a-Porter). Later the rerecording mixer can raise and lower the relative volume of each track to create a weaving effect among the various actors' lines.
Dialog can also be edited to affect characterization. Suppose the director wants to make an arch-villain more domineering. A mixer could raise the volume of his voice and adjust the tonal qualities to make him sound larger than life. It's the aural equivalent of someone invading our space by standing too close to us. The picture editor could enhance the villain's sense of menace by regularly cutting to his voice before we see him. Because he seems to lurk just beyond the edges of the frame, the viewer will feel uneasy about his potential reappearance whenever he is not present.
ADR
Dialog that cannot be salvaged from production tracks must be rerecorded in a process called looping or ADR (which is variously said to stand for "automated" or "automatic" dialog replacement). Looping originally involved recording an actor who spoke lines in sync to "loops" of the image which were played over and over along with matching lengths of recording tape. ADR, though faster, is still painstaking work. An actor watches the image repeatedly while listening to the original production track on headphones as a guide. The actor then reperforms each line to match the wording and lip movements. Actors vary in their ability to achieve sync and to recapture the emotional tone of their performance. Some prefer it. Marlon Brando, for instance, likes to loop because he doesn't like to freeze a performance until he knows its final context. People have said that one reason he mumbles is to make the production sound unusable so that he can make adjustments in looping.
ADR is usually considered a necessary evil but Bochar has found there are moments when looping can be used not just for technical reasons but to add new character or interpretation to a shot. "Just by altering a few key words or phrases an actor can change the emotional bent on a scene."
Sound Effects
Dialog editors are usually considered problem solvers rather than creative contributors, but there's considerable room for artistic input in choosing and editing sound effects. For one thing, sound effects tracks are normally built from scratch. We would not want to hear everything that really could be heard in a given space. Even if it were possible to record only the appropriate noise on the set while the film is being shot, it wouldn't sound right psychologically. Sound is very subjective and dependent upon the visual context and the mood set up in the image. The soundtrack of real life is too dense for film. In the real world, our minds select certain noises and filter out others. For instance, we mentally foreground the person speaking to us even if the background is louder. On film, the sound effects editors and rerecording mixers have to focus for us.
Focusing on selected sounds can create tension, atmosphere, and emotion. It can also impart personality to film characters. Walter Murch (the doyen of sound designers) once described the character sounds (in a film he directed) as "coronas" which can magnify each character's screen space. A figure who is associated with a particular sound (often suggested by his or her clothing) has "a real presence that is pervasive even when the scene is about something else or the character is off-screen."
Indeed, sound is a major means to lend solidity and depth to the two-dimensional screen image. Furthermore, new digital release formats allow filmmakers to literally "place" sounds at various locations throughout the theater. Thus sound can expand space, add depth, and locate us within the scene.
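To give a rough sense of what "placing" a sound means in practice, here is a minimal sketch in Python of constant-power panning between two channels. It illustrates only the principle, not the processing chain of any release format mentioned in this article; the test tone, sample rate, and pan position are invented for the example.

```python
import numpy as np

def constant_power_pan(mono, position):
    """Place a mono signal in a stereo field.

    position ranges from -1.0 (hard left) to +1.0 (hard right).
    Constant-power panning keeps perceived loudness roughly even
    as the sound moves across the field.
    """
    angle = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = np.cos(angle) * mono
    right = np.sin(angle) * mono
    return np.stack([left, right], axis=-1)

# Example: a one-second 440 Hz tone placed two-thirds of the way to the right.
sr = 48000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
tone = 0.2 * np.sin(2 * np.pi * 440 * t)
stereo = constant_power_pan(tone, position=0.66)
```

Multichannel theater formats extend the same idea to more than two loudspeakers.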
A crucial difference between visual and aural manipulation of the audience is that even sophisticated audiences rarely notice the soundtrack. Therefore it can speak to us emotionally and almost subconsciously put us in touch with a screen character. In a film like Hitchcock's The Birds, for example, any time we see a bird we know we are being titillated. But by merely adding a single "caw" to the soundtrack on occasion, Hitch was able to increase the tension without our being aware of his manipulation.
To understand the manipulability of effects it is useful to know how effects tracks are created. A regular source of effects is a stock library, where sounds are stored on CD. The rest have to be recorded or combined from several sources. Foleying is the "looping" of sound effects by a specialized department in a studio designed for watching the picture and creating the sounds at the same time. The process is named after its developer, legendary sound man Jack Foley of Universal. Because virtually all footsteps are replaced, a foley stage usually includes several pits with different-sounding surfaces on which the foley artist will walk in time to the one or more characters he or she is watching. Clothing rustle (another sound we never notice until it's missing) and the movement of props such as dishes are likely to be recorded here as well. Even kisses are foleyed. A steamy sex scene was probably created by a foley artist making dispassionate love to his or her own wrist. The foley crew will include the artist or "walker," who makes the sound, and a technician or two to record and mix it.
Foleying needn't be a slavish duplication of the original object. The sound crew can characterize actors by the quality of the sounds they attribute to them--say, what type of shoes they wear. To attribute some subtle sleaziness to Nicolas Cage's lawyer in It Could Happen to You, Michael Kirchberger's foley crew added a squeaky shoe and rattling pocket change as Red Buttons walks around the courtroom. It's the opposite shoe of the one that squeaked in Jerry Lewis movies, says Kirchberger.
Usually the more exotic--less literal--sounds are created by the effects staff. According to Murch, "That's part of the art of sound effects. You try to abstract the essential quality of a sound and figure out the best way to record that, which may not be to use the thing itself but something else." Thus, some sounds have nothing to do with the original source--the real thing would be unconvincing. Mimi Arsham, who worked on Ben-Hur, reports that the sound of a whip cracking was actually a hefty steak being slapped on a thigh.
Most sounds need processing (fiddling with). The most common strategy is to start with a sound made by a source that is the same as or similar to what was photographed and then to distort it. One simple method is to slow it down or speed it up. Two other common processing tricks are to choose just part of the frequency spectrum or to run a sound backwards. As far back as 1933 the original sound man at RKO created King Kong's voice by playing backwards the roar of a lion he recorded at the San Diego Zoo. Today digital editing techniques have vastly expanded the possibilities: a sound editor feeds a sample of a sound into a computer, which can then manipulate it and provide a whole range of sounds from the original. One powerful tool is the Synclavier, which combines a computer sampler and a keyboard that can play a sound (or sounds) assigned to any of seventy-three keys with the stroke of a finger.
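The simplest of those manipulations are easy to picture in code. The Python sketch below, with a hypothetical mono file name standing in for a real recording, reverses a sample and then slows it to half speed by interpolation, which also lowers its pitch; it illustrates the general idea rather than any studio's actual tools.

```python
import numpy as np
from scipy.io import wavfile

# "lion_roar.wav" is a hypothetical mono recording used for illustration.
rate, roar = wavfile.read("lion_roar.wav")
roar = roar.astype(np.float32)

# Run it backwards: simply reverse the order of the samples.
backwards = roar[::-1]

# Slow it down to half speed by generating twice as many samples through
# interpolation; played back at the original rate, it is also an octave lower.
n = len(backwards)
slowed = np.interp(np.linspace(0, n - 1, 2 * n), np.arange(n), backwards)

wavfile.write("processed_roar.wav", rate, slowed.astype(np.int16))
```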
New sounds can also be created by mixing disparate sources. In order to accentuate the idea that the pen is mightier than the sword, the final close-up of the typewriter keys pounding out the Watergate expose in All the President's Men combines gunfire with the sound of clacking typewriter keys.
Many of today's sound effects are "stacked"; they are layers of combined sounds from different sources that often begin organically but are processed digitally. Kirchberger reports that he created the roar of the Komodo Dragon in The Freshman by starting with tapes of vultures recorded for Ishtar. The sound was processed, added to other sounds including a pig, and then vocalized through digital sampling. "I knew we had something that was vaguely reptilian. What made it 'talk' was altering the pitch as we played back the stacked sample. That gave it the vocalization we needed, as opposed to its being just a screech or a caw."
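As a purely illustrative sketch of "stacking" (not Kirchberger's actual session chain, and with random-noise arrays standing in for the source recordings), the layers can be summed at different gains and the combined sample then repitched by crude resampling:

```python
import numpy as np

def stack_layers(layers, gains):
    """Sum several aligned source recordings into one combined effect."""
    length = min(len(layer) for layer in layers)
    mix = sum(g * layer[:length] for g, layer in zip(gains, layers))
    return mix / max(1e-9, float(np.max(np.abs(mix))))  # normalize to avoid clipping

def repitch(signal, factor):
    """Alter pitch by resampling: factor > 1 raises the pitch (and shortens the sound)."""
    positions = np.arange(0, len(signal), factor)
    return np.interp(positions, np.arange(len(signal)), signal)

# Placeholder arrays stand in for the vulture, pig, and other source tapes.
vulture, pig, hiss = (np.random.randn(48000) for _ in range(3))
creature = repitch(stack_layers([vulture, pig, hiss], [1.0, 0.6, 0.3]), factor=0.8)
```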
Much of the freedom in sound design comes when making horror or science fiction films, where stylization is the norm. Most sonic sources are hard to identify unless we see them--and films of the fantastic have sources we have never heard in real life. So there is great latitude in deciding how something should sound.
However technically sophisticated the equipment that processes sound, the original source can be quite mundane. Gary Rydstrom, the lead sound designer at Skywalker, likes to challenge listeners to a game of "name that sound," that is, to guess the sources of his sounds--exotic noises he created from prosaic origins. One favorite tool, he says, is air compressed in a can. The source of the "sliming" noise in Ghostbusters, for example, is Dust-Off sprayed into Silly Putty. He is also proud of the sound of the mercury-like character (T-1000) passing through steel bars in Terminator II. Seeking a sound that was part liquid, part solid, Rydstrom came up with the sound of dog food being extruded from a can.
The majority of the sound crew are not brought onto a picture until it is "locked," that is, the image is finalized. On films where sound is considered a major creative element, directors may hire a sound designer like Walter Murch (Apocalypse Now, The Conversation, The Godfather) or Skip Lievsay (who creates sound for the Coen brothers, Martin Scorsese, and David Lynch). "Sound designer" is an elusive term which can refer to a person brought on to create just one kind of effect (for example, Bochar was hired late in the postproduction of Wolf just to create the effects that accompanied Nicholson turning into a beast). In some cases, however, sound designers are thought of as artists who are brought on staff during the planning stages of a film, along with the set and costume designers, and who do their own mixing. In these instances, the sound designer works with the director to shape an overall, consistent soundtrack that exploits the expressive possibilities of the sound medium, is organically related to the narrative and thematic needs of the film, and has an integrity not possible if sound is divided among an entire bureaucracy. A case in point would be Jurassic Park, where Gary Rydstrom first designed the sounds of the dinosaurs and then models were built to match those roars.
On the average A-picture the first postproduction sound person brought onto the film is the supervising sound editor, who not only directs and coordinates the creative contributions of the postproduction sound staff but also must handle all the related administrative duties like scheduling mixes.
Although the supervising sound editors are usually not consulted during shooting, in the best of all possible worlds they are in touch with the location sound recordist during and after the shoot so that their work can be coordinated. Bochar feels strongly that his work should start early on: "To me the whole adage is that postproduction begins the first day of production."
Like most filmmakers, sound personnel work under extreme time constraints. One way for them to get a head start is to work on a picture one reel at a time. Thus, if a director and editor are satisfied with reels two and three, they can send them on to the sound editors while they are still solving picture problems on other reels.
Scratch Mixes/Temp Tracks
Today the tendency is to bring the supervising editor on earlier and earlier. The main reason is the changing demands for sound in early screenings. According to Kirchberger, this practice has engendered the greatest changes in the logistics of postproduction sound in the last two decades.
As Lottman describes it, "On most A-pictures a sound editor will come on some time before the picture gets locked. You can't put them on too soon; that's too expensive. But you put them on, say, before the first screening. Now there's this big trend towards scratch mixes at screenings. Most directors don't want to screen a picture for anybody unless it has a complete and full soundtrack--a temp track with temporary sounds, temporary music and dialog to give the audience a preview of what the final, polished soundtrack will be like. They'll try to iron out a dialog scene where the sound shifts dramatically from cut to cut. They didn't use to do this at all. Now they do it on any mid- to high-budget film. You try to keep it simple: you have just one sound editor and an assistant, perhaps."
Because of demands for scratch mixes the sound editors are under greater time constraints than ever. By the first scratch mix, the editors must have cleaned up noticeable sound-image problems and supplied the major effects. Yet this is also the best time to introduce their most inventive ideas, while directors and producers are still open to experimentation.
One result of scratch mixes is that they become weeding-out processes. During this stage sound editors, given the time, have a certain amount of latitude to present creative options to the director. One downside, says Kirchberger, is that if the director likes parts of the scratch mix, those sounds may never be refined even though they were just presented as a sketch.
Music
Like the foley crew, the music personnel are a discrete department. The composer may be brought in as early as the first cut to discuss with the director and editor the general character of the music and its placement in the film.
The person who spends the longest time on the scoring is the supervising music editor. It is the job of the music editor to spot every cue, that is, to make a precise list of timings--to the split second--for each appearance and "hit" (point of musical emphasis) of the music. In addition, the editor will log all the concomitant action and dialog during each cue. The composer then has about six weeks to come up with a score. The supervising music editor will set up recording sessions, which for, say, thirty minutes of music, take four to five days. Each set of instruments has its own microphone and track so that scoring mixers can balance them.
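In spirit, a spotting list is just structured data: each cue gets a start time, its hits, and notes on what is happening on screen. A minimal sketch follows; the field names, cue labels, and timecodes are invented for illustration and do not represent an industry-standard format.

```python
from dataclasses import dataclass

@dataclass
class Cue:
    name: str        # e.g., "1m2" -- second music cue in reel one (illustrative label)
    start: str       # timecode where the music enters
    hits: list       # timecodes of points of musical emphasis
    notes: str       # concurrent action and dialog logged for the composer

cue_sheet = [
    Cue("1m1", "00:01:12:10", ["00:01:30:02"], "Main title; crane shot of the town"),
    Cue("1m2", "00:04:55:00", ["00:05:10:14", "00:05:22:08"],
        "Hero enters saloon; dialog underneath from 00:05:02"),
]
```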
Apart from esthetic issues, film music composers must deal with particular technical requirements. For the sake of clarity, a film composer must orchestrate with instruments that do not overlap much with the frequency of the human voice or any dominant sound effects to be heard at the same time. In theory, composers keep in mind any anticipated noises for a sequence so that the music and effects aren't working at cross purposes. In practice, music editors often serve as master tacticians caught between the work of the sound editors and the composer who says: "Dump those goddamn sound effects!"
The scoring is also affected by the need for scratch mixes, for which the music editor has had to select temporary music. This may be a counter-productive trend. The editor will probably use music that was composed for an earlier film. As the producers and directors get used to their temporary track they often want something similar, so the composer is inadvertently rewarded for not straying far from what has already proved successful.
One of the more positive changes in scoring practices has been made possible through computer programs and synthesizers for musicians. Instead of presenting their ideas to the director at a piano, composers can now present them in a form "orchestrated" with simulations of different instruments.
Rerecording (The Mix)
The climactic moment of postproduction sound is called the "mix" in New York and the "dub" in L.A. On the screen the credit goes to a rerecording mixer, but that term is rarely heard in daily parlance, says Lottman: "If we said we were going to a rerecording mix, they'd laugh."
At the mix all the tracks--singly called elements--are adjusted in volume and tonal quality relative to each other and the image. (At some mixes the music editor and effects editors may be sitting at the "pots" controlling their subsets of tracks.) During the mix the director and/or picture editor will decide with the mixer which sounds should be emphasized. A composer can find that a particularly inspired fugue has been dropped in one scene in favor of sound effects or dialog. However much effort the composer and effects editors may have put into their creations, their efforts are subservient to the ultimate dramatic impact of the overall sound plus picture. Asked what makes a good mixer, Bochar says, "The best mixers, like Richard Portman, Lee Dichter, and Tom Fleischman, have the ability to leave their egos at the door. No one has to lay claim on the track. Mixing becomes an experience, as opposed to a job and drudgery. When those moments hit, it just soars."
The top mixers are orchestrators who create a sonic texture. You can't have wall-to-wall noise, says Rydstrom; like music, the sound effects have pitch, rhythm, and pace which must be varied to create interest and may be manipulated to raise and lower dramatic tensions.
The mixer also has to equalize, blend, and balance the tracks for the seamless, invisible style that characterizes Hollywood-style cutting. Thus, at a minimum, the mixer must match sounds created by dozens of technicians in different times and places. The engine roar of a 1954 Chevy may combine sound obtained from a stock library, sound recorded on the set, and new recordings made during postproduction. It may have been "sweetened" with synthesized sound. But it has to sound like one car.
Mixers have a number of tools. Equalizers and filters, for example, can boost or decrease the intensity of low, middle, or high frequencies in order to make dialog or sound effects match those that came from microphones and sources with different characteristics. Filters are also used to eliminate unwanted steady frequencies, such as the buzz of an air conditioner. In dealing with image size, the mixer adjusts perspective (determined mainly by the ratio of direct to indirect or reflected sound), which can be manipulated through the addition of artificial reverberation.
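As a concrete, deliberately simplified example of the kind of cleanup a filter performs, the Python sketch below notches out an assumed 60 Hz air-conditioner hum from a dialog track; the placeholder noise array simply stands in for a real recording.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

sr = 48000          # sample rate
hum_freq = 60.0     # assumed steady frequency to remove (e.g., an AC hum)

# Build a narrow notch filter at 60 Hz and apply it forwards and backwards
# so the dialog is not smeared in time.
b, a = iirnotch(w0=hum_freq, Q=30.0, fs=sr)

# Placeholder: five seconds of noise standing in for a recorded dialog track.
dialog = np.random.randn(sr * 5)
cleaned = filtfilt(b, a, dialog)
```

Real consoles and digital workstations offer far finer control, but the principle is the same: carve out the offending frequency and leave the voice alone.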
Great rerecording mixers are artists as much as technicians. The mixers' console is their palette: they have an infinite number of choices for blending. Their tools can be used in expressive ways. For example, an annoying voice can be adjusted to sound more screechy, or the roar of an approaching truck can be made more ominous. At the mix some of the many sound effects are heightened and others are lowered or eliminated. Sounds can be emotionally effective even when they are reduced to near inaudibility. (See, for example, the sidebars on Silence of the Lambs and Philadelphia.) And the most eloquent "sound" of all may be silence. In our age of dense soundtracks, the sudden absence of noise can have a stunning impact.
A mix on an average A-picture combines at least forty to sixty tracks, and perhaps hundreds. Therefore, for manageability of dense soundtracks there may be any number of premixes, wherein groups of tracks are combined and equalized in relation to each other. For example, twenty-four tracks of foleys may be boiled down to one or two six-track elements. A typical final mix might begin with seven six-tracks: two six-tracks each for effects and foley, and one each for backgrounds, dialog, and ADR. Dialog is usually mixed first. In Murch's words, "Dialog becomes the backbone of the sound and everything else is fit into place around that."
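The arithmetic of premixing is easy to sketch. In the toy Python example below, random-noise arrays stand in for premixed stems, and the invented gain values only illustrate the idea that the final balance is struck among a handful of elements, with dialog as the backbone.

```python
import numpy as np

n = 48000  # one second of audio at 48 kHz; arrays stand in for premixed stems

# Each premix ("stem") is itself the sum of many individual tracks,
# e.g., twenty-four foley tracks boiled down to one element.
stems = {
    "dialog":      np.random.randn(n) * 0.1,
    "adr":         np.random.randn(n) * 0.1,
    "backgrounds": np.random.randn(n) * 0.1,
    "effects":     np.random.randn(n) * 0.1,
    "foley":       np.random.randn(n) * 0.1,
}

# Illustrative balance: dialog first, everything else fit in around it.
gains = {"dialog": 1.0, "adr": 1.0, "backgrounds": 0.5, "effects": 0.7, "foley": 0.6}

final_mix = sum(gains[name] * stem for name, stem in stems.items())
```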
Given that a mix costs from $400 to $800 or more an hour, sound editors do as much in advance as possible so that the mixer can worry about the bigger balance rather than hundreds of small adjustments. With track separation, the premixed tracks need not be permanently wed to one another. If, at the final mix of a car crash, the director chooses to emphasize one sound of shattering glass, that specific element can still be manipulated if necessary. Often the director or editor is given a choice among several types of sound for a given effect.
Technology has inevitably affected the esthetics of the mix. A few decades ago, merely pausing to make a correction would create an audible click, so an entire reel had to be mixed in one pass or started over. Then, with the advent of "rock 'n' roll" systems, mixers were able to move back and forth inch by inch. Once consoles became computerized to "remember" all the mixer's adjustments, says Murch, he was able to think in larger units. "You take a sweep through the reel, knowing that there are certain things you're doing that are not perfect. You get the sense of the flow of a ten minute or longer section of film, rather than doing it bit by bit. So you go through in ten minute segments until you've got the basic groundwork for what you want, knowing that there are things wrong in there that you can fix later. It's like a live performance: sometimes there's something that happens spontaneously that way, that you can never get when you're trying to do it inch by inch. Thus automated mixing allows you to work in large sections but it also encourages you to be very finicky about small things and it doesn't penalize you for that."
The end product of the final mix is not just one printmaster from which domestic exhibition prints are struck; effects, dialog, and music are kept discrete to allow for release in different formats ranging from monaural optical 16mm tracks, to multi-channel digital systems, to foreign versions minus the dialog.
Directors and Sound
The soundtrack is perhaps the most collaborative component of filmmaking. It is created by all the personnel mentioned above plus their assistants. Nevertheless, the editor and ultimately the director do call the shots. How do sound personnel communicate with directors?
There have always been a few directors particularly attuned to the expressive potential of sound; these include Robert Wise, Orson Welles, Robert Altman, and Alfred Hitchcock. Hitchcock, for one, usually prepared a detailed list of sounds and was actively involved in choosing them. (For the sound of the knife entering the body in Psycho's shower scene, Hitchcock did a blind sound test among different types of melon, finally settling on a casaba.) These sound-sensitive directors often incorporate sound as part of the basic conception of their films. For example, Hitch experimented with expressionistic sound (Blackmail), interior monologues (Murder), subliminal sound (Secret Agent), and electronic sound (in The Birds, which orchestrates electronically generated noises and has no underscoring).
Other directors do not think creatively about sound but choose personnel who do. These directors may have unerring instincts for the best sound when presented with several specific options. Most directors, however, do not use the expressive potential of the soundtrack and leave sonic decisions up to their staff.
In general, the younger generation of filmmakers are more savvy than their elders. For one thing, they were part of the revolution in music technologies. For another, they were probably exposed to sound courses in film school. According to Murch the very raison d'etre for Coppola's team in creating Zoetrope was to have their own sound facility. And a few of today's directors consider sound an equal partner with image. (But even these directors still may have to figure out how to convey their sonic ideas--Jonathan Demme has been known to ask his sound editors for "something blue.")
The best way to appreciate the expressive possibilities in an American soundtrack is to study in great detail virtually any movie by the sound-sensitive directors, such as Altman, the Coen brothers (try Barton Fink) or David Lynch, among independents. To find the most interesting soundtracks in other Hollywood productions, check the sound credits. The most respected sound designers and supervisors may be called technicians, but their artistry can be heard in all the films they touch.
>> SIDEBAR: CREATING SOUND FOR DEMME
Sound Designer Ron Bochar discusses the creation of sound effects in scenes from Jonathan Demme's Silence of the Lambs and Philadelphia.
Footnotes:
1 Ford had met Earp in the Twenties.
2 The sources of the information and citations in this article are interviews conducted by the author in 1994 (Michael Kirchberger, Ron Bochar, Evan Lottman), 1986 (Walter Murch), and 1975 (Dede Allen, Mimi Arsham, James G. Stewart). The comments by Gary Rydstrom were made at a lecture at the Walter Reade Theatre in February 1992.
Recommended Books and Periodicals on the Art and Technique of Film Sound:
Altman, Rick, ed., Sound Theory/Sound Practice. New York, NY: Routledge, 1992.
Chion, Michel, Audio-Vision: Sound on Screen, ed. by Claudia Gorbman. New York, NY: Columbia University Press, 1994.
Moviesound Newsletter: The State of Film Soundtracks in Theaters and at Home, P.O. Box 7304, Suite 269, No. Hollywood, CA 91603.
LoBrutto, Vincent, Sound-on-Film: Interviews with Creators of Film Sound. Westport, CT: Praeger Publishers, 1994.
Weis, Elisabeth and John Belton, eds., Film Sound: Theory and Practice. New York, NY: Columbia University Press, 1985.
Forlenza, Jeff and Terri Stone, eds., Sound for Picture: An Inside Look at Audio Production for Film and Television. Emeryville, CA: MixBooks, 1993.
Schneider, Arthur, Electronic Po