17 Tips & Tricks For Modern Sound Design 2021


Sound design is an ever-growing area of today's media. There are countless reasons why a project may require a professional in the role of sound designer. So, if you're aiming for that spot, it's essential to gather as many tips and tricks as possible to up your game in the field.

Audio has a significant role. It can carry a message, and it can evoke a particular emotion; it all depends on how and why you use it.

As of 2021, we can certainly notice the vital role audio plays across media. For movies, video games, short films, trailers, etc., there's always a need for a sound designer: a person who processes recorded sounds (Foley art) or digitally creates sound effects (with synthesizers and samplers) for a purposeful impact on the project.

Usually, this professional keeps a large library of sounds, such as footsteps, crowds, fire, and gunshots. For video games, the list can extend to ability sounds, jumping sounds, and menu-button sounds; it's almost infinite. Pretty much anything you can perceive as a noise emitter needs original audio assigned by the project's sound designer.

Now, let's dive deeper into some tips and tricks you can use in this field!


Related Readings:

29 Best Sound Design VST Plugins In 2021

11 Best Granulizer Plugins 2021 (VST, AU, AAX) for a Future Sound Design


17 Best Tips For Modern Sound Design 2021

1. Try Inventing Your Own Devices (And Showcase Them)

The best way to stand out in this niche is to develop and showcase a creative, original process.

If you want to dive into this field with originality and creativity, try inventing a homemade device that will work as an instrument to produce a unique sound.

I'm not only talking about doing regular Foley art – recording different sounds and layering them together is already good and original. Instead, build a homemade apparatus from scratch: a total makeshift device that you play like an instrument, producing unexpected and valuable audio samples.

This is a tremendous stand-out opportunity for Sound Designers, especially if you plan to showcase the sound creation process you did for a project.

What is a homemade instrument?

For example, in the production of the sound library for the game "It Takes Two," the Sound Designers at Hazelight Studios came up with a different approach to emulating the buzzing hum of a bee: they built their own "kazoos" using rubber bands.

They recorded the vocalization it emitted when blown, as in the image below, taken from the video where they explain the whole sound design process for the game.

Another great example that needs to be mentioned is the Nightmare Machine, by film composer Mark Korven. The Nightmare Machine is a unique instrument made of rulers, a reverb tank, a small spinning wooden wheel, and a one-of-a-kind complexity created to play scary and spooky sounds.

Here is a video showing its features.

Sounds of the Nightmare Machine

So have at it and create your own original instruments. Remember, it doesn't need to be the most complex creation, and always keep your devices somewhere safe. You'll probably need them later!

2. Remove Instead Of Add

As a Sound Designer, your work is to support the overall impact of a scene, and sometimes you can overdo it when steering the audience toward how they should feel. So, a good practice is to plan what you can add before a crucial part of a scene and then remove the unwanted elements when the time comes.

Think of it as surprising the scene with subtleness. I’ll give an example.

Let’s say there is a scene in a movie for which you’re doing the audio. In this scene, the main character is in an office with the sounds of computers, printers, and people chatting in the background. As he is sitting there in this distracting environment, his phone rings, and he picks it up only to get some devastating news.

To approach this scene, you could apply a low-pass filter to the background ambiance. You could also add some reverb to the caller's voice to make it seem like it takes up more space. But instead, you can do something more effective by removing elements rather than adding them.

What if, instead of adding reverb to the voice behind the phone, you add a light reverb to the whole mix that’s playing before the phone rings?

This way, when the main character picks up the phone, you’ll have the opportunity to gradually make the background dry while leaving the caller’s voice on reverb.

In doing that, you are removing reverb. By not adding more reverb to the voice, you won't be pointing out how your audience should feel, which results in a smoother delivery.

Also, rather than only gradually cutting the volume of the office noises, you could make the background appear less busy by playing it double layered at the start and removing one layer as the character is listening to the call. This leads to a nice scene with a subtle focal transition.

3. Voice Design

When doing sound design, you'll need to get comfortable with techniques for designing actors' voices at some point. If you know the best tools to use, how to work them, and how to achieve the most commonly requested effects, then you're on the right track.

You can create many exciting voice designs with the extensive array of plugins available, like pitch shifters and dehumanizers. Of course, a lot of the work is already in the actor's performance. But if you're handy with the other plugins we're going to discuss here, then you're set to achieve good results in different situations.

For pitch-shifting, there are plugins like Alter Boy, Morphoder, Enigma, and more. But for human voices, formant-shifting plugins are worth checking out; they shift the voice's formants (the vocal-tract resonances) while preserving the fundamental frequency, which sounds more natural.

In addition, if you pitch-shift a voice and pair it with a formant-shifting plugin, you'll restore a bit of genuineness to the voice. You can also use them in opposing ways for creative effects: make a voice higher with a pitch-shifting plugin while applying a formant shifter to make the same voice sound lower. It's an experimental way of reaching interesting, less artificial results for a wide range of voices. Here is a video clearing up some things about this.

Tremolo and Ring Modulator

Tremolo plugins modulate a signal's amplitude, and they can work quite well on voices. At fast modulation rates, they can give you that liquid voice you'll hear on the character "Venom," for example. At slower rates, tremolo can creatively chop up a voice; it's a good fit for alien voices.

If you need a robust electronic texture in a voice, you'll find ring modulators quite helpful. A ring modulator multiplies two signal sources together: for each pair of input frequencies, it outputs one frequency that is their sum and another that is their difference. You can hear this effect on the voices of the Daleks in "Doctor Who." You can find a thorough explanation of how they work here.
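As a sketch of what a ring modulator actually does, here is a minimal numpy example. Everything in it is illustrative (a pure 440 Hz sine stands in for a voice, and a 44.1 kHz sample rate is assumed): multiplying the input by a sine carrier yields the sum and difference frequencies described above.

```python
import numpy as np

SR = 44100  # assumed sample rate

def ring_modulate(voice, carrier_hz, sr=SR):
    """Multiply the input by a sine carrier: every input frequency f comes
    out as the pair (f - carrier_hz, f + carrier_hz)."""
    t = np.arange(len(voice)) / sr
    return voice * np.sin(2 * np.pi * carrier_hz * t)

# Demo: a 440 Hz tone standing in for a voice, modulated by a 30 Hz carrier
t = np.arange(SR) / SR
voice = np.sin(2 * np.pi * 440 * t)
out = ring_modulate(voice, 30.0)

# With a 1-second buffer, FFT bin index equals frequency in Hz
spectrum = np.abs(np.fft.rfft(out))
peaks = sorted(int(i) for i in np.argsort(spectrum)[-2:])
print(peaks)  # → [410, 470]: the difference and sum frequencies
```

Note that the original 440 Hz component vanishes entirely, which is what gives ring-modulated voices their metallic, inhuman quality.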

4. Opportunities On Fitting Non-Organic Sounds

Movies, games, and theater each often require a different approach to assigning sounds to objects. If you can see a reason to deviate entirely from an entity's natural sound, you should consider pointing it out to the team. Try building your idea and showing it. One thing to watch for here is context.
If there is an opportunity to beneficially use a completely inorganic sound in a scene, portrayed in such a way that it reads as the authentic sound, then you can get creative. You can usually find these opportunities in scenes that visually attribute a quality to something: how an object should be perceived for what it represents.
To illustrate this: in an episode of The Simpsons called "El Viaje Misterioso de Nuestro Jomer (The Mysterious Voyage of Homer)," Chief Wiggum offers the character Homer a taste of a unique pepper. When the pepper is presented to Homer, it shakes naturally because of how Chief Wiggum is holding it, and as it does, it emits a rattlesnake sound, using audio to highlight that this is a scorching pepper with hazardous potential. The sound designers used this technique throughout a big part of the episode, and you can watch a full analysis of it here.

5. Personality Traits Delineating Sounds

You've probably already heard of representing the size and mass of an emitter with the pitch of its sound. For example, a big animal will produce a low-pitched sound, and the contrary happens with a small animal.

Taking it literally can sometimes feel like too much of a rule when designing characters. And if it does, you may find yourself having a creative block.

As discussed in this article, realism is different from immersion, and the ideal path for a sound designer is the immersive one. So, with this in mind:

How much more can you add to a sound for it to really stand out?

Besides its appearance and backstory, part of a character's design is what it sounds like. I'm not restricting this to the voice and tone of the character, but to how we can sonically represent its unique personality traits. In this sense, you can start by gathering information about the character's sense of humor, temperament, and the VO actor's voice. Try your best to profile the character.

For example, in the Pixar movie "A Bug's Life," notice how the ladybug's wings sound nothing like the other characters' wings. You can hardly correlate the sound to the action, but even so, it fits perfectly. Taking a note from this example will help you lose the fear of how far you can go in breaking realism to embrace a character's sonic identity.

6. Approaches For Slow-Motion

When we're met with a change in motion, meaning a scene going slower (slow motion) or faster, we'll need to assign the corresponding audio sensation to everything that's playing out. You can raise the quality of your work by knowing some practices for this scenario.

You won’t reach the best result by simply slowing down or speeding up the sounds. For the scene to work cinematically, you’ll need to put some extra colors in the mix. When an original audio source goes through the process of slow-motion, it loses some of its frequency range. A good tip is to replace the sounds with a bit of exaggeration instead of doing them realistically.

To replace the sounds for a slow-motion scene, try choosing new audio samples from your library for everything that needs a source. Put those sounds through a reverb plugin and favor the wet signal; it should be more prevalent than the dry signal. The key is rendering the wet signal, stretching it by lowering its playback rate, and matching it to the scene.
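The stretching step can be sketched in code. This is a deliberately naive illustration, not any specific plugin's algorithm: it resamples a stand-in "wet" signal by linear interpolation, which lowers the pitch as it lengthens the sound, which is exactly why exaggerated, frequency-rich source material holds up better in slow motion.

```python
import numpy as np

SR = 48000  # assumed sample rate

def stretch(signal, factor):
    """Naive slow-motion stretch: resample the rendered wet signal at a lower
    rate via linear interpolation. factor=2.0 doubles the length and drops
    the pitch an octave."""
    n_out = int(len(signal) * factor)
    src_idx = np.linspace(0, len(signal) - 1, n_out)  # where each output sample reads from
    return np.interp(src_idx, np.arange(len(signal)), signal)

t = np.arange(SR) / SR
wet = np.sin(2 * np.pi * 220 * t)   # stand-in for a rendered, reverb-heavy sound
slow = stretch(wet, 2.0)
print(len(slow) / len(wet))  # → 2.0
```

DAWs and dedicated stretch tools use far more sophisticated algorithms, but the "render wet, then slow down" workflow is the same.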

 

You can also add a completely wet reverberated signal as a sound to be playing through the whole scene—kind of like a “bed” to be constantly present in the mix at a lower amplitude.

I recommend this interview with Emily Halberstadt, conducted by the YouTube channel "The REAPER Blog." In the video, she shares some of her process for making slow-motion sounds.

7. Record Something, Pitch It, And Layer The Fundamental Note

When scoring Christopher Nolan's movie "Tenet," composer and sound designer Ludwig Göransson used an ingenious method. Using the recording of a fire truck, he made a bass instrument for a scene's soundtrack. In the scene, a character is using a fire truck to get somewhere.

Aligning an instrument of the score with the visual objects of a scene is an exciting creative decision.

But, how do you track a truck noise’s pitch?

When sharing his process, Göransson said that after finding the fundamental note of the truck's sound, he pitched the truck noise down and layered it with that note, clarifying the pitch. You can do this by reading the sound's fundamental frequency on a frequency analyzer and matching it to the nearest musical note.

For example, as the YouTube channel "Wickiemedia" explains in this video, the fundamental frequency of a concert A note is 440 Hertz. That means 440 Hz will be the fundamental frequency of any instrument playing the same note; the only thing that changes is the timbre. Remember, every sound has a vibration that can outline its pitch.
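To illustrate the analyzer step, here is a rough numpy sketch. The "truck" is just a noisy 440 Hz test tone, an assumption for the demo: it estimates the fundamental as the strongest spectral bin, then snaps that frequency to the nearest equal-tempered note. Real recordings usually need a proper pitch tracker, but the principle is the same.

```python
import numpy as np

SR = 44100
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def nearest_note(freq_hz):
    """Snap a frequency to the nearest equal-tempered note name (A4 = 440 Hz)."""
    semitones = int(round(12 * np.log2(freq_hz / 440.0)))
    return NOTE_NAMES[semitones % 12]

def fundamental(signal, sr=SR):
    """Crude estimate: the strongest bin of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    return np.argmax(spectrum) * sr / len(signal)

# Stand-in for the truck recording: a noisy 440 Hz tone
t = np.arange(SR) / SR
rng = np.random.default_rng(0)
truck = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(SR)

f0 = fundamental(truck)
print(f0, nearest_note(f0))  # → 440.0 A
```

Once you know the note, you can layer a synth or bass playing that same note under the pitched-down recording, exactly the clarifying trick described above.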

Where to use this?

The most important detail is how he applied it. It is a very creative decision to use this virtual bass instrument made from a truck sound in a track that plays over a scene featuring a fire truck. This track, called "Trucks In Place," fills the whole mix when it plays for a while, and the sound design used for its bass definitely adds character and texture to it all. You can watch an interview with Göransson explaining his work on "Tenet" and other projects here.

So take this idea and do the same in your project: if there is an essential object in the scene that you can record, try to create an instrument out of it. Göransson chose to harness the textures of the truck's sound and make it part of the score because the truck was an element of the scene. It's fascinating that he used it only as an instrument, while the actual fire truck wasn't assigned any sound to emulate its noises.

8. Shepard Tone

Having some knowledge of this can undoubtedly enable you to experiment and create sounds with a different approach.

Various auditory illusions are present today, especially with the tools available in the digital audio field. One of these that is reasonably helpful for sound design is called the “Shepard Tone.”

To create this illusion, you superimpose copies of a tone separated by octaves.

With the sounds organized, a simple method to achieve the illusion is to play them upward or downward chromatically, with specific volume variations for each pitch. Simply put, you set the highest-pitched tone to gradually lose amplitude after the start of the sound, while keeping the middle pitch's loudness steady and raising the lowest pitch's volume.
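A minimal Shepard glissando can be synthesized directly from this description. In this sketch, the numbers (a 6-octave stack, a 55 Hz base, a sine-squared loudness bell) are illustrative choices rather than a canonical recipe: each octave-spaced partial glides upward while fading in at the bottom of the stack and out at the top, so the ensemble seems to rise forever.

```python
import numpy as np

SR = 22050   # sample rate (illustrative)
DUR = 4.0    # seconds for one full octave of glide

def shepard(sr=SR, dur=DUR, n_octaves=6, base_hz=55.0):
    """Rising Shepard glissando: octave-spaced sine partials whose loudness
    follows a fixed bell over position in the stack, so each partial fades
    in at the bottom and out at the top while all of them glide upward."""
    t = np.arange(int(sr * dur)) / sr
    cycle = t / dur                            # 0 -> 1 over the clip
    out = np.zeros_like(t)
    for k in range(n_octaves):
        pos = (k + cycle) / n_octaves          # 0..1 position in the stack
        freq = base_hz * 2.0 ** (k + cycle)    # exponential (musical) glide
        phase = 2 * np.pi * np.cumsum(freq) / sr
        out += np.sin(np.pi * pos) ** 2 * np.sin(phase)
    return out / n_octaves                     # keep the peak level under 1.0

tone = shepard()  # loop this buffer for an endlessly rising sound
```

Looping the rendered buffer end to end produces the seamless, ever-rising effect; writing it to a WAV file with any audio library lets you hear it immediately.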

This technique, when applied, gives the listener the impression that the sound's pitch is infinitely rising. You can build it from any source (as you can see here) and implement it if you think your project would benefit. Hans Zimmer, for example, used it to create tension in his score for the movie "Dunkirk." The "never-ending" strings of the piece sound as if they are constantly rising in a very unsettling, tense way.

Here is a video explaining the Shepard tone and how it was used in Dunkirk.

The sound illusion that makes Dunkirk so intense

In games, we can find this illusion in the sound design of the "D.I.E. Machine" from Call Of Duty: Black Ops – Cold War Zombies. The weapon has an ability that, when activated, draws in the souls of zombies to restock ammo. This ability uses the Shepard Tone, as it sounds like an energy force that never stops rising. You can see more about it here.

9. Tail Recordings And Spatialization

We already know that there are all kinds of reverbs, like spring, algorithmic, and convolution reverbs.

It's also well known what role each of these plays in immersion and in creating a sense of space. In that light, especially for realism in video games, putting a sound through an Impulse Response should give us the best result, right?

Well, it can give us a good or even great result, but that’s not always the best way if your goal is realism. The Impulse Response does produce a tail of its own if you put a sound through it. But even so, it’s essential to be familiar with the role of the natural tail of the recordings and how it applies here.

Take the sound of weapons, for example. Putting the gunshot sound through an IR will provide us with some good quality shooting sounds with realistic reflections and natural textures of the environment. It does the job, but you can’t rely purely on that.

So, how to improve it further?

What you can do in this situation is use the Impulse Response for the main body of the sound but null its effect on the tail. The reason is that the IR will probably have a different decay (usually faster) than what a gunshot would really sound like in that particular environment. For natural and convincing sounds, use the actual tails of recordings made in appropriate places.
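As a rough sketch of this idea (the arrays below are random stand-ins, not real recordings, and the split and fade times are arbitrary assumptions), you can convolve the dry shot with the IR for the early part, then crossfade into a tail recorded in the actual environment:

```python
import numpy as np

SR = 48000  # assumed sample rate

def gun_with_real_tail(dry, ir, recorded_tail, split_s=0.08, fade_s=0.02, sr=SR):
    """Keep the IR convolution for the early reflections, then crossfade
    into the tail that was actually recorded in the target environment.
    The wet signal only needs to cover the first split_s + fade_s seconds."""
    wet = np.convolve(dry, ir)
    split, fade = int(split_s * sr), int(fade_s * sr)
    ramp = np.linspace(1.0, 0.0, fade)
    out = np.zeros(len(recorded_tail))
    out[:split] = wet[:split]                                   # IR early part
    out[split:split + fade] = (wet[split:split + fade] * ramp
                               + recorded_tail[split:split + fade] * (1 - ramp))
    out[split + fade:] = recorded_tail[split + fade:]           # recorded tail
    return out

rng = np.random.default_rng(1)
dry = rng.standard_normal(2000)                                      # dummy gunshot burst
ir = np.exp(-np.arange(8000) / 2000.0) * rng.standard_normal(8000)   # dummy room IR
tail = np.exp(-np.arange(SR) / 8000.0) * rng.standard_normal(SR)     # dummy field-recorded tail
out = gun_with_real_tail(dry, ir, tail)
```

Game audio engines do this blending with far more sophistication (per-environment tail banks, distance-dependent mixes), but the principle is the same: early reflections from the IR, decay from the real recording.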

Rockstar used this strategy in Red Dead Redemption 2, for instance. They recorded samples of the weapons being fired in all of the game's scenarios, such as jungles, forests, beaches, urban areas, and hills. Then they used the raw tails of these recordings more prominently than the IR tail for the sound effects.

So, when you fire a gun in Red Dead Redemption 2, you listen to the IR in the early reflections of the sound, but at the tail end, you hear how it actually sounded when recorded in the appropriate setting.

You can see a thorough explanation of the gun tail system of Red Dead Redemption 2 in this video.

So, especially for games, consider giving preference to the genuine tail of the sound!

10. Automate Bands, Shelves, And Filters For Atmosphere

One good exercise is to spend time automating frequencies on a windy sound sample!

This is especially useful if your goal is to set realistic atmospheres when you don't have the most appropriate recording, or if you want to make one more unique. The automation I often use is an EQ on a wind sound I recorded, with automation applied to a band or shelving filter. When I record the automation, the frequency range and/or gain keeps changing, meaning the attenuations or boosts travel within the 800 Hz to 3,000 Hz range, for example.

These changes in the sound characteristics will add texture and life to the noise, especially in a windy environment.

Another EQ automation for this same situation targets high-pass and low-pass filters, using the same method applied to a single band. Be it on a filter or a band, automating the EQ (we made a list of the best free equalizers!) will give you frequency motion, which adds character to the sound.
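A crude stand-in for this kind of automation, assuming nothing about any particular EQ plugin, is a one-pole low-pass filter whose cutoff sweeps between 800 Hz and 3,000 Hz, with white noise standing in for a wind recording:

```python
import numpy as np

SR = 44100

def swept_lowpass(x, lo_hz=800.0, hi_hz=3000.0, lfo_hz=0.2, sr=SR):
    """One-pole low-pass whose cutoff sweeps between lo_hz and hi_hz,
    standing in for recording automation on an EQ band or filter."""
    t = np.arange(len(x)) / sr
    # the "automation lane": a slow sine between the two cutoff extremes
    cutoff = lo_hz + (hi_hz - lo_hz) * 0.5 * (1 + np.sin(2 * np.pi * lfo_hz * t))
    alpha = 1 - np.exp(-2 * np.pi * cutoff / sr)   # per-sample smoothing coefficient
    y = np.zeros_like(x)
    state = 0.0
    for n in range(len(x)):
        state += alpha[n] * (x[n] - state)
        y[n] = state
    return y

rng = np.random.default_rng(2)
wind = rng.standard_normal(SR)        # white noise standing in for a wind bed
textured = swept_lowpass(wind)        # slow dull-to-bright motion adds "life"
```

The slowly moving cutoff is exactly the frequency motion described above: the static noise gains a breathing, gusting quality instead of sounding like a constant hiss.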

11. Avoid Confusion On Sound Sources

If you've ever heard of The Law of Two and a Half by Walter Murch, then you're aware that piling too many sounds onto a scene becomes detrimental and confusing.

Murch explains that we can only be aware of two people's footsteps simultaneously; the moment a third person joins the group, our brain stops tracking each individual's steps and starts hearing them as a single sound coming from a group. The same happens with every other noise, and at that point precise sound synchronization is no longer that relevant.

So, when making sounds for a large group of emitters, it's fine to skip meticulous detail: not every emitter needs an individually assigned sound, because they will blend into a single group sound anyway.

12. Use Parent And Child Channels In Your Mixes

You can do some things to benefit your computer and your experimentation freedom when assigning plugins and trying effects.

Sound design mixes can look like a mess sometimes. And to add to that, there are times where we could have discovered good sounds if only we had built an organized effect chain for a sound effect.

One very resourceful piece of advice is to set up a parent track with nested tracks assigned beneath it. This lets you run the entire effect chain on as many source materials as there are in the child tracks. This way, you avoid duplicating tracks in the standard process of layering sounds to produce an effect, and you still have the freedom to add plugins on the child tracks.

In the picture below, “player hit” is the parent track. The image shows the hierarchy accordingly, meaning tracks below “player hit” are under the same plugins that the parent track uses.

This setup is especially suited to sound designers experimenting with sounds because it enables shuffling the tracks below the parent track. It also saves processing power, even more so if you create a resample track beneath the nested tracks to record the output of the tracks above.

13. Organically Deviating From Realism

To deliver perfectly for a project, you need to give what you see the same importance as what you hear. Sound design does not revolve around emulating realistic sounds; it depends on immersion.

For example, in "Terminator 2" there's a scene where the Terminator uses a shotgun to fend off the T-1000, which is trying to capture John Connor. In this particular scene, the sound designer decided to give the shotgun a huge, exaggerated sound that could still appear organic. The interesting detail is that he did this without using any actual shotgun sound. Even with the resources available to record the real object, he creatively decided not to.

So instead, he layered four samples consisting of a .38 pistol gunshot, the echo of a rifle gunshot in a canyon, the recording of a cannon firing, and a sped-up cannon recording.

Gary Rydstrom, the sound designer who made the sound, commented on it. He said the plan for the movie was to give things a "sense of hyper-reality" while still sounding believable. He designed the sounds with the emotional weight of the visual scene in mind. In his own words: "Sound effects with a lot of testosterone."

Be attentive

Try to observe whether your project has these opportunities. Do some freestyling and consider the emotional spice you can pour into a sound. With practice, you'll learn to recognize the moments where you can highlight your creative and unique approach to sonification.

You can find a good breakdown of the shotgun sound design from Terminator 2, as well as some of Gary Rydstrom’s comments in this video:

Shotgun sound design deconstructed from Terminator 2

14. Organize Your Library (And Foley Organizer Recommendations)

This is critical. Save all the foley recordings you do in organized folders.

I've seen many sound designers (myself included) keep sound effects and audio samples all together in a single folder. But organizing them into specifically named folders and keeping every recording you make will serve you well in the future.

It’s a safe practice to have many folders for different projects and name them something identifiable like “Sound effects for 2021 project”. Inside this folder, you can have a lot of other folders with classifications for the specific sounds, like “Abilities sounds,” “Menu sounds,” “Motor sounds,” and keep assigning a new folder for every sound emitter you work with; it’s also a good idea to separate Foley productions from Synthesis productions.
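As a toy illustration of that folder scheme (this is a few lines of stdlib Python, not a replacement for the dedicated organizer tools mentioned further down), you can index a library by its category folders:

```python
import tempfile
from pathlib import Path

AUDIO_EXTS = {".wav", ".aif", ".aiff", ".flac", ".mp3"}

def index_library(root):
    """Walk a sample library and map each category folder to its audio files,
    a tiny stand-in for a dedicated organizer's search index."""
    index = {}
    for path in sorted(Path(root).rglob("*")):
        if path.suffix.lower() in AUDIO_EXTS:
            index.setdefault(path.parent.name, []).append(path.name)
    return index

# Demo with a throwaway library laid out like the folders described above
root = Path(tempfile.mkdtemp())
(root / "Abilities sounds").mkdir()
(root / "Abilities sounds" / "fireball.wav").touch()
(root / "Menu sounds").mkdir()
(root / "Menu sounds" / "click.wav").touch()

idx = index_library(root)
print(idx)  # → {'Abilities sounds': ['fireball.wav'], 'Menu sounds': ['click.wav']}
```

Even a trivial index like this makes it obvious when samples are sitting loose outside any category, which is usually the first sign a library needs tidying.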

This will save time and keep your files ready for any backup you might need to do in the future.

Another benefit of this practice is being able to reuse sounds that didn't make it into the current project. Some recorded audio won't be used right away, but keeping it saved means it will probably find a use in a future project.

There have been many times when I recorded a sound and trashed it because I had other ideas, only to find myself remembering it during a future project and having to recreate it.

Foley and sound effects organizers

To make it easier, you can find many platforms designed to help you keep your sound effects library organized. These platforms usually have their own Search Engine, which you can use on your local files.

Platforms such as BaseHead have streaming engines that allow you to use their Cloud services to purchase, deliver, and access audio files. SoundMiner and Soundly are also pretty popular platforms that offer excellent service.

15. Diegetic, Non-Diegetic, Meta-Diegetic And Trans-Diegetic sounds

Especially for games that work with non-linear situations and interactive gameplay, your role as a sound designer will require you to share some creative ideas with the team. So it’s good to have some theoretical knowledge about the concept of diegetic sounds to develop your ideas and decision-making.

But what do they mean?

There is a thorough explanation you can find here, but in short:

  • Diegetic sounds are noises that belong in the universe with the characters; people in the scene's environment can hear them. Examples: thunder, cars, crowds.
  • Non-Diegetic sounds are not represented in the universe and can't be heard by any of the scene's characters; they're meant only for the audience. Examples: soundtracks, scary stingers, "Game Over," "Round start."
  • Meta-Diegetic sounds are ones that only a single character can hear and no one else in the same universe. Example: thoughts.
  • Trans-Diegetic sounds are ones that start as Non-Diegetic and become Diegetic, or vice versa. Example: music playing in the soundtrack (Non-Diegetic) turns out to be the same music a band plays on stage in the next scene (Diegetic).

And here is an illustrative video about Diegetic and Non-Diegetic sounds.

The Sounds of Springfield - How The Simpsons Uses Diegetic and Non-Diegetic Sound

Use Non-Diegetic sounds in unique situations in games

Having these concepts in the back of your mind is a good idea if you want to improve your creative insights. Beyond knowing their meanings, applying them in games takes some care, especially Non-Diegetic sounds, since they shape the player's interactive experience.

On this note, besides music, you can also use non-Diegetic sounds to affect the gameplay. Grand Theft Auto: San Andreas makes a slight sound when a new mission is highlighted on the map — the very same found in the menu.

Non-Diegetic sounds can help the sound designer let players know something happened and how they should feel about it. For example, an interesting tip here is to create a sound that plays when the player discovers a secret passage or activates an "easter egg" somewhere on the map. A quick effect hints to the player that this is not a regular passage but a special one, or that there is something special on the map.

16. Save Almost Every Idea For Revision

Save the samples you thought were good. Sometimes a sound we made wears on us or convinces us to delete it because it doesn't seem that great. Save that sample for later instead; you might be surprised by how it lands when you come back after some time. Our brain naturally attenuates some characteristics of a sound that repeats often in our ears.

The sound that was annoying you before may actually be fitting, so it’s a good idea to save those drafts and compare them in the end. The sound you replaced can turn out to be more pleasing than the new one.

Even so, it's common to be surprised when the team picks the first sound you made out of many. So rather than settling on a sound and discarding the first samples right away, try saving them.

17. Do Sound Re-design And Showcase Difficult Sounds Recreations

More specifically, do sound re-design without knowing what the original sounds like.

This is a practice I’ve taken to. Choose something to do a sound re-design and try for yourself!

But first,

What is sound re-design?

It’s when you pick a scene that’s already produced and available to the public, mute it, and put your own sounds replacing the original.

Within this exercise, I find that if you pick a scene, never listen to the original audio, and go straight to the sound re-design, you educate yourself differently, and the result is excellent. At the very end, you can compare your version with the original made by a professional.

Maybe you added sounds the professional did not, and perhaps they made sounds you didn't; either way, you are exercising your creativity and getting feedback from the comparison. You'll be able to note details you missed for next time and grow as a Sound Designer!

Try recreating sounds

If you ever find unique and curious sounds while listening to something, try to recreate them. Use the opportunity to test the different plugins you have on as many layers as you want. And if you're struggling to recreate something, there are fantastic sound designers in communities such as r/sounddesign on Reddit who offer feedback and advice on how to reach a particular sound effect.

While you're at it, do showcase your process of recreating the sound as well as your re-design process. If you're in the field, showcasing your approach illustrates your passion and creativity for others to see; these traits are not overlooked in sound design, and they prompt people to notice your work.

Bonus Tip: Know Implementation

I know many sound designers don't have time to implement the sounds and solely create them. But, especially for video games, if you're in the running to join a team and be responsible for audio, it's a big plus to have some idea of audio implementation.

It doesn't even need to be complex; you can focus just on the audio side, though that can involve some coding, specifically when designing sound for video games. So try to learn at least the fundamentals of implementation, both to better discuss audio ideas with the team and to know the limitations of audio middleware.

Important tools to know the fundamentals of, and some engines you'll often use:

You'll probably have to work with many engines not listed here; these are just the most popular ones.

Wwise and FMOD are important middleware tools often used for implementing game audio; Wwise has an excellent free course on Audiokinetic's website (with a certification fee). Both serve as great tools to facilitate the sound designer's work!
