Sound effect

Sound effects or audio effects are artificially created or enhanced sounds, or sound processes used to emphasize artistic or other content of movies, music, or other media.

In motion picture and television production, a sound effect is a sound recorded and presented to make a specific storytelling or creative point without the use of dialogue or music. The term often refers to a process applied to a recording, without necessarily referring to the recording itself. In professional motion picture and television production, the divisions between dialogue, music, and sound effects recordings are strict, and in such contexts dialogue and music recordings are never referred to as sound effects, though the processes applied to them, such as reverberation or flanging, often are.


History

The use of sound effects originated in theater; by some accounts sound effects were already in use in Classical Antiquity. Various simple devices were used to simulate such sounds as thunder or approaching horse hooves off stage. The repertory of early theatrical sound effects became more elaborate in the early modern era, and various mechanical devices were constructed to produce more and better sounds. Large urban theaters often had large collections of such devices. Samples of such vintage sound effects can occasionally be heard in early audio recordings of Vaudeville acts, although by contemporary accounts the effects in the primitive early recording studios were less elaborate than those in theaters.

The field of sound effects advanced considerably in the 1920s, first with the impetus of radio. Most early radio was live, and featured many live theatrical productions which made much use of sound effects. The better radio studios often employed several sound effects men working at the same time on productions. In the mid-1920s, advances in recording technology and improved electronic microphones made it practical to keep pre-recorded repertoires of sound effects on 78 rpm records. Actual recordings of motorcars, airplanes, large crowds laughing or shouting, etc. could then be added to radio dramas via the discs. In the late 1920s motion picture studios switched from silent film to sound, opening up another venue for sound effects.

Motion Picture Sound Effects

In the context of motion pictures and television, sound effects refers to an entire hierarchy of sound elements whose production encompasses many different disciplines, including:

  • "Hard" sound effects (common sounds that appear on screen, such as door slams, weapons firing, and cars driving by.)
  • "Background" or "BG" sound effects (sounds that do not explicitly synchronize with picture, but indicate setting to the audience, such as forest sounds, the buzzing of fluorescent lights, and car interiors. The sound of people talking in the background is also considered a "BG," but only if the speaking is unintelligible and the language is unrecognizable.) These background noises are often called "ambience".
  • "Foley" sound effects (sounds that synchronize on screen, and require the expertise of a foley artist to properly record. Footsteps, the movement of hand props, and the sound of cloth are common foley units.)
  • "Design" sound effects (sounds that do not normally occur in nature, or are impossible to record in nature. These sounds are used to suggest futuristic technology, or are used in a musical fashion to create an emotional mood.)
Each of these sound "food groups" is specialized, and a sound editor will often be known as a good "car cutter" or "guns cutter." Foley artists, Foley editors, and Foley supervisors are highly specialized and are essential for producing a soundtrack suitable for international distribution and exhibition.

The process of creating sound effects can be separated into two steps: the recording of the effects, and the processing. Large libraries of commercial sound effects are available to content producers, but on large projects sound effects may be custom-recorded for the purpose.

Recording Sound Effects

The best sound effects originate from original sources; the best sounds of machine-gun fire are original recordings of actual machine guns, as opposed to a synthesized or sampled/sequenced effect of a machine gun. When the producer or content creator demands high-fidelity sound effects, the sound editor usually must augment his available library with new sound effects recorded in the field.

When the required sound effect is of a small subject, such as scissors cutting, cloth ripping, or footsteps, the sound effect is best recorded in a studio, under controlled conditions. Such small sounds are often delegated to a foley artist and foley editor. Many sound effects cannot be recorded in a studio, such as explosions, gunfire, and automobile or aircraft maneuvers. These effects must be recorded by a sound effects editor or a professional sound effects recordist.

When such "big" sounds are required, the recordist will begin contacting professionals or technicians in the same way a producer arranges a crew: if the recordist needs an explosion, he may contact a demolition company to see if any buildings are scheduled to be demolished with explosives in the near future. If he requires a volley of cannon fire, he may contact historical re-enactors or gun enthusiasts. People are often excited to participate in something that will be used in a motion picture, and are usually glad to help.

Depending on the effect, recordists may use several DAT, hard disk, or Nagra recorders and large numbers of microphones. During a cannon- and musket-fire recording session for the 2003 film The Alamo, conducted by Jon Johnson and Charles Maynes, two to three DAT machines were used. One machine was stationed near the cannon itself, so it could record the actual firing. Another was stationed several hundred yards away, below the trajectory of the ball, to record the sound of the cannonball passing by. When the crew recorded musket-fire, a set of microphones were arrayed close to the target (in this case a swine carcass) to record the musket-ball impacts.

A counter-example is the common technique for recording an automobile. For recording "onboard" car sounds (which include the car interiors), a three-microphone technique is common. Two microphones record the engine directly: one is taped to the underside of the hood, near the engine block. The second microphone is covered in a wind screen and tightly attached to the rear bumper, within an inch or so of the tail pipe. The third microphone, which is often a stereo microphone, is stationed inside the car to capture the interior. Having all of these tracks at once gives a sound designer or mixer a great deal of control over how he wants the car to sound. To make the car more ominous or low, he can mix in more of the tailpipe recording; if he wants the car to sound as if it is running pedal-to-the-metal, he can mix in more of the engine recording and back off the interior perspective.
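The balance between those three tracks can be sketched in a few lines of Python. This is a toy illustration only: the track names, sample values, and gain settings are invented for the example, not taken from any real production.

```python
# Sketch of mixing three "onboard" car tracks with per-track gains.
# Track names, sample data, and gain values are illustrative only.

def mix_tracks(tracks, gains):
    """Sum equal-length mono sample lists, scaling each track by its gain."""
    length = len(next(iter(tracks.values())))
    out = [0.0] * length
    for name, samples in tracks.items():
        g = gains.get(name, 0.0)
        for i, s in enumerate(samples):
            out[i] += g * s
    return out

# Three mono tracks (toy data standing in for real recordings).
tracks = {
    "engine":   [0.5, 0.4, 0.3, 0.2],
    "tailpipe": [0.1, 0.2, 0.3, 0.4],
    "interior": [0.05, 0.05, 0.05, 0.05],
}

# "Ominous/low" balance: favor the tailpipe recording.
ominous = mix_tracks(tracks, {"engine": 0.3, "tailpipe": 1.0, "interior": 0.2})

# "Pedal-to-the-metal" balance: favor the engine, back off the interior.
flat_out = mix_tracks(tracks, {"engine": 1.0, "tailpipe": 0.5, "interior": 0.1})
```

The point of keeping the tracks separate until this stage is exactly the one made above: the same raw recordings yield very different characters depending on the gain balance chosen in the mix.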

Processing Sound Effects

As the car example demonstrates, the ability to make multiple simultaneous recordings of the same subject, through the use of several DAT or multitrack recorders, has made sound recording into a sophisticated craft, and allows the sound effect to be shaped by the sound editor or sound designer, not just for realism, but for emotional effect.

Once the sound effects are recorded or captured, they are usually loaded into a computer integrated with an audio non-linear editing system. This allows a sound editor or sound designer to heavily manipulate a sound to meet his needs.

The most common sound design tool is the use of layering to create a new, interesting sound out of two or three old, average sounds. For example, the sound of a bullet impact into a pig (from the above example) may be mixed with the sound of a melon being gouged to add to the "stickiness" or "gore" of the effect. If the effect is featured in a close-up, the designer may also add an "impact sweetener" from his library. The sweetener may simply be the sound of a hammer pounding hardwood, equalized so that only the low end can be heard. The low end gives the three sounds together added weight, so that the audience actually "feels" the weight of the bullet hit the victim. If the victim is the bad guy, and his death is climactic, the sound designer may add reverb to the impact, in order to enhance the dramatic beat. And then, as the victim falls over in slow motion, the sound editor may add the sound of a broom whooshing by a microphone, pitch-shifted down and time-expanded to further emphasize the death. If the movie is a science-fiction film, the designer may apply a phaser to the whoosh to give it a more sci-fi feel. (For many of the sound effect processes available to a sound designer, see the bottom of this article.)
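The layering idea can be sketched in Python with invented sample values: several aligned layers are summed, with the "sweetener" first run through a simple low-pass filter so that only its low end contributes weight. The layer contents and filter below are illustrative, not real recordings or a real equalizer.

```python
# Toy sketch of layering: sum several aligned effect layers, with the
# "sweetener" low-passed so only its low end contributes weight.
# All sample values are invented for the illustration.

def lowpass(samples, alpha=0.2):
    """One-pole low-pass: keeps the low end, smooths away the highs."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

bullet_impact = [0.9, -0.6, 0.3, -0.1]   # stand-in for the pig-impact recording
melon_gouge   = [0.4, 0.5, -0.2, 0.1]    # adds "stickiness"
hammer_hit    = [1.0, -1.0, 0.8, -0.6]   # sweetener, before equalization

sweetener = lowpass(hammer_hit)           # keep only the low end
layered = [b + m + s for b, m, s in zip(bullet_impact, melon_gouge, sweetener)]
```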

Aesthetics of Motion Picture Sound Effects

When creating sound effects for films, sound recordists and editors do not generally concern themselves with the verisimilitude, or true-to-lifeness, of the sounds they present. The sound of a bullet entering a person from a close distance may sound nothing like the sound designed in the above example, but since very few people are aware of how such a thing actually sounds, the job of designing the effect is mainly an issue of creating a conjectural sound which feeds the audience's expectations while still suspending disbelief.

In the previous example, the phased 'whoosh' of the victim's fall has no analogue in real-life experience, but it is emotionally immediate. If a sound editor uses such sounds in the context of emotional climax or a character's subjective experience, they can add to the drama of a situation in a way visuals simply cannot. If a visual effects artist were to do something similar to the 'whooshing fall' example, it would probably look ridiculous or at least excessively melodramatic.

The "Conjectural Sound" principle applies even to happenstance sounds, like tires squealing or doorknobs turning or people walking. If the sound editor wants to communicate that a driver is in a hurry to leave, he will cut the sound of tires squealing when the car accelerates from a stop; even if the car is on a dirt road, the effect will work if the audience is dramatically engaged. If a character is afraid of someone on the other side of a door, the turning of the doorknob can take a second or more, and the mechanism of the knob can possess dozens of clicking parts. A skillful Foley artist can make someone walking calmly across the screen seem terrified simply by giving the actor a different gait.

In music and film/television production, typical effects used in recording and amplified performances are:

  • echo - one or several delayed signals are added to the original signal. To be perceived as a distinct echo, the delay has to be on the order of 50 ms or more. When large numbers of delayed signals are mixed over several seconds, the resulting sound has the effect of being presented in a large room, and it is more commonly called reverberation, or reverb for short.
  • flanger - a delayed signal is added to the original signal with a continuously-variable delay (usually smaller than 10 ms). This effect is now done electronically using DSP, but originally the effect was created by playing the same recording on two synchronized tape players, and then mixing the signals together. As long as the machines were synchronized, the mix would sound more-or-less normal, but if the operator placed his finger on the flange of one of the players (hence "flanger"), that machine would slow down and its signal would fall out of phase with its partner, producing a phasing effect. Once the operator took his finger off, the player would speed up until its tachometer was back in phase with the master, and as this happened, the phasing effect would appear to slide up the frequency spectrum. This phasing up and down the register can be performed rhythmically.
  • phaser - the signal is split, a portion is filtered with an all-pass filter to produce a phase-shift, and then the unfiltered and filtered signals are mixed. The phaser effect was originally a simpler implementation of the flanger effect since delays were difficult to implement with analog equipment. Phasers are often used to give a "synthesized" or electronic effect to natural sounds, such as human speech. The voice of C-3PO from Star Wars was created by taking the actor's voice and treating it with a phaser.
  • chorus - a delayed signal is added to the original signal with a constant delay. The delay has to be short in order not to be perceived as echo, but above 5 ms to be audible. If the delay is too short, it will interfere with the un-delayed signal and create a comb-filtering (flanging-like) effect. Often, the delayed chorus signals will be pitch shifted to create a harmony with the original signal.
  • equalization - different frequency bands are attenuated or amplified to produce desired spectral characteristics.
  • overdrive effects, such as the use of a fuzz box, can be used to produce distorted sounds, for example to imitate robotic voices or radiotelephone traffic.
  • band-pass or band-stop - band-pass filtering keeps only a specified frequency band, while band-stop filtering removes it. Band-pass filters can be used, for example, to simulate the sound of a telephone.
  • tuning - a musical signal can be tuned to the correct pitch using digital signal processing techniques. This effect is often used to correct pop vocals that drift off pitch.
  • pitch shift - similar to tuning, but with larger steps. For example, a signal may be shifted an octave up or down.
  • time stretching - the complement of pitch shifting: changing the speed (duration) of an audio signal without affecting its pitch.
  • resonators - emphasize harmonic frequency content at specified frequencies.
  • synthesizer - generates almost any sound artificially, either by imitating natural sounds or by creating entirely new sounds.
  • modulation - to change the frequency or amplitude of a carrier signal in relation to a predefined signal.
  • compression - the reduction of the dynamic range of a sound to avoid unintentional fluctuations in the dynamics. Not to be confused with audio data compression, in which the amount of data used to store the sound is reduced.
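Two of the effects above, echo and compression, can be sketched in a few lines of Python on toy sample data. The delay, gain, threshold, and ratio values are illustrative only, not recommendations.

```python
# Minimal sketches of two effects from the list above:
# a single-tap echo (one delayed, attenuated copy added back) and a
# simple compressor (gain reduction on the part of a sample above a threshold).
# Parameter values are illustrative only.

def echo(samples, delay, gain=0.5):
    """Add one delayed, attenuated copy of the signal to itself."""
    out = list(samples) + [0.0] * delay
    for i, s in enumerate(samples):
        out[i + delay] += gain * s
    return out

def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce dynamic range: scale down the portion of each sample above the threshold."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

dry = [1.0, 0.0, 0.0, 0.0]
wet = echo(dry, delay=2)            # the impulse reappears 2 samples later at half level
tamed = compress([0.9, -1.0, 0.3])  # peaks above 0.5 are pulled in; 0.3 passes unchanged
```

A real reverb would mix many such delayed copies with decaying gains, as described in the echo entry above; a real compressor would also smooth its gain changes over time (attack and release) rather than acting per sample.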

See also: Audio signal processing, Digital signal processing, Acoustics, Foley artist, Ambient music, dub music

All Wikipedia text is available under the terms of the GNU Free Documentation License
