This article will discuss how loud you should record vocals and the tips & tricks to follow to get the best results!
When you record vocals, everything matters: the noise floor, the choice of interface, the sample rate, bit depth, DAW settings, and, most importantly, the mic gain you set. So first, let’s understand the difference between gain and level. Both terms refer to volume and are often used interchangeably, but the contexts in which they are used differ.
At any stage of audio production, the gain is the volume of the audio at the input stage, and the level is the volume of the audio at the output stage. Hence, the gain is the mic’s volume you set during recording from the interface, mixer, or microphone itself. Once the audio reaches the DAW, the level is the volume of the waveform.
The meaning of the same terms changes during mixing. So essentially, the question you are asking is: what should the mic gain be when you record vocals, or at what level should your recorded vocals sit? Now that we have a common vocabulary, let’s answer the main question.
How loud should you record vocals?
The best practice is to record vocals at a level between -12 dB and -20 dB, with -16 dB being the sweet spot. This range keeps the signal well above the noise floor and well below the distortion ceiling, leaves room for a dynamic, powerful vocal performance, and preserves optimal headroom for mixing and production.
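To put those dBFS figures in perspective, here is a minimal sketch (the function name is mine) converting dBFS to linear amplitude, where 1.0 represents digital full scale:

```python
def dbfs_to_amplitude(dbfs: float) -> float:
    """Convert a dBFS value to linear amplitude (1.0 = digital full scale)."""
    return 10 ** (dbfs / 20)

# The recommended recording range expressed as fractions of full scale:
for db in (-12, -16, -20):
    print(f"{db} dBFS -> {dbfs_to_amplitude(db):.3f} of full scale")
# -12 dBFS is about 0.251, -16 dBFS about 0.158, -20 dBFS exactly 0.100
```

Notice how much headroom even -12 dBFS leaves: the peaks use only about a quarter of the converter's range, yet sit far above any reasonable noise floor.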
Some engineers and vocal producers record at -12 dB, some at -18 dB, and so on; there’s no hard-and-fast rule. However, there are plenty of other factors to consider. So let’s discuss those factors and the best tips and tricks for recording vocals, and see why -12 to -20 dB is the appropriate range for recorded vocals.
Bit Depth

Bit depth is the amount of information, measured in bits, in each audio sample (not to be confused with bit rate, which measures data per second). When you record vocals, the audio interface’s analog-to-digital converter (ADC) converts the analog vocal signal from the microphone into binary data the computer can read and understand.
During that process, there’s a step called quantization, in which each point of the analog waveform is assigned a digital value; the precision of those values is determined by the bit depth. In simple words, a higher bit depth means more resolution and more dynamic range for the audio.
A bit depth of 24-bit is considered optimal and high resolution, while 16-bit is the standard for CD audio. The higher the bit depth, the higher the audio quality, but more processing power and storage are required.
Higher bit depth allows for better post-processing of the vocals by providing more headroom and a lower quantization noise floor. Hence, with a higher bit depth you don’t need to record as hot: a more conservatively recorded take retains its quality when boosted or processed at any step of audio production.
However, recording too loud may clip your audio. So, -12 to -20 decibels is a good range for recording vocals at a bit depth of 24-bit.
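The headroom argument can be quantified. An ideal converter yields roughly 6.02 dB of dynamic range per bit, plus a 1.76 dB constant; a quick sketch using that standard ideal-ADC formula:

```python
def dynamic_range_db(bit_depth: int) -> float:
    """Theoretical dynamic range of an ideal converter: ~6.02 dB per bit + 1.76 dB."""
    return 6.02 * bit_depth + 1.76

print(f"16-bit: {dynamic_range_db(16):.1f} dB")  # ~98 dB (CD standard)
print(f"24-bit: {dynamic_range_db(24):.1f} dB")  # ~146 dB
```

With ~146 dB of theoretical range at 24-bit, peaking at -20 dBFS still leaves the vocal far above the quantization floor, which is why the -12 to -20 dB target costs nothing in quality.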
Sample Rate

The sample rate is the number of audio samples, or snapshots, captured in one second of recording or processing. For example, consider a camera capturing an event by taking many pictures every second; the pictures are then stitched together into a video.
That is how the interface’s ADC takes audio snapshots and converts the analog signal to a digital one; it’s the audio equivalent of frame rate. A higher sample rate means the analog waveform is captured more often each second.
That leads to more precise information, so the higher the sample rate, the better. However, there’s a limit called the Nyquist frequency: half the sample rate, above which frequencies cannot be captured accurately and will alias. Hence, the sample rate should be set at a minimum of 44.1 kHz to cover the full audible range.
The human hearing range tops out around 20 kHz, so a sample rate above 40 kHz is considered sufficient. However, just because a frequency can’t be heard doesn’t mean it doesn’t exist; higher sample rates ensure that harmonic content is also processed well.
You must have seen the option to upsample audio in distortion and saturation plugins. Distortion occurs at almost every step of the audio production process, however subtle or inaudible. Especially when using harmonic or dynamics processing plugins, distortion can alter the audio quality.
Higher sample rates reduce the chances of that quality being altered. So while a sample rate of 48 kHz is optimal, recording at 96 kHz or 192 kHz is also a good idea if your system allows it.
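The folding behavior around the Nyquist frequency can be sketched with a small helper (a hypothetical function implementing the standard aliasing arithmetic):

```python
def alias_frequency(f_signal: float, sample_rate: float) -> float:
    """Frequency at which f_signal appears after sampling at sample_rate."""
    nyquist = sample_rate / 2
    f = f_signal % sample_rate
    # Content above Nyquist folds (mirrors) back into the audible band.
    return sample_rate - f if f > nyquist else f

# A 30 kHz harmonic sampled at 44.1 kHz folds back to an audible 14.1 kHz:
print(alias_frequency(30_000, 44_100))  # 14100
# At 96 kHz, the same content is captured where it belongs:
print(alias_frequency(30_000, 96_000))  # 30000
```

This is exactly why saturation plugins upsample internally: harmonics generated above Nyquist would otherwise fold back as inharmonic artifacts.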
Noise Floor

The noise floor is the decibel level of the recording when the sound source is silent. Essentially, it tells you how loud the room is. A high noise floor (above roughly 40 dB SPL of ambient noise) may ruin your recordings. It could come from your room, traffic in your area, the surroundings, computer fans, or anything else.
It’s recommended to record at a lower gain (a “cold” recording) when the noise floor is high, because the mic captures less background noise at softer levels. It also helps to decrease the distance between the vocalist and the mic, so the voice is louder relative to the room.
However, cold recording can effectively raise the noise floor later: when you boost the level in post or apply compressors or other dynamics processors, the background noise comes up with the vocal. A cold recording also makes that noise harder to detect and monitor while tracking.
So, the noise floor is important to manage. Temporarily raise the gain to hear and assess the noise floor, then record at a lower gain and closer proximity if it’s a problem. Lastly, note that every extra 6 dB of noise floor reduces the effective bit depth of the converters (ADC and DAC) by roughly 1 bit. Hence, the noise floor also affects the audio resolution.
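That rule of thumb can be expressed directly. The helper below is hypothetical and simply encodes the ~6-dB-per-bit relationship stated above:

```python
def effective_bits(bit_depth: int, excess_noise_db: float) -> float:
    """Effective resolution after noise: each ~6 dB of extra noise floor
    above the converter's minimum costs roughly 1 bit."""
    return bit_depth - excess_noise_db / 6.0

# A 24-bit converter in a room with 12 dB of excess noise behaves
# roughly like a 22-bit one:
print(effective_bits(24, 12))  # 22.0
```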
Secondly, it’s important to consider that while recording audio, two types of noise floors can affect the quality of the recording: internal and external.
Internal noise floor, also known as self-generated noise, refers to the noise generated by the recording equipment, such as the preamp, cables, and other components in the signal chain. Internal noise is usually constant and can be minimized using high-quality equipment and properly setting the gain levels.
To minimize self-generated noise floor and maximize the dynamic range of the recording, it is important to use high-quality equipment with low self-noise, properly set gain levels through gain staging, and choose an appropriate bit depth for the recording. A good rule of thumb is to set the gain level as high as possible without introducing noticeable noise or distortion and to use the highest bit depth possible for the given recording format.
External noise floor, on the other hand, refers to the ambient noise in the recording environment, such as traffic, HVAC noise, and other sounds that are not part of the desired recording. This type of noise varies greatly with the recording location and time of day, and can be minimized by choosing a quiet location, using soundproofing materials, and using directional microphones to reject unwanted sounds.
In general, it is important to minimize internal and external noise floors as much as possible to achieve a high-quality recording.
Headroom & Dynamics
Dynamic range is the difference between the loudest and softest parts of the audio signal. Headroom is the distance between the digital clipping ceiling (0 dB) and the loudest part of the signal. Leaving 12 to 20 dB of headroom is important for vocals, which is why recording them with peaks between -12 and -20 dB is recommended.
Hot recordings (recordings made at higher gain) may distort or deteriorate in quality during processing, while overly soft recordings may let the noise floor go unnoticed during tracking. A peak ceiling of -12 dB also allows the vocalist to improvise and change dynamics mid-performance without clipping the audio.
I recommend recording vocals at about -18 dB for neutral singing at a fairly low note, since singers increase their volume greatly on higher notes, sometimes peaking as high as -6 dB. Alternatively, monitor the loudest parts of the performance before recording and gain-stage the mic so those peaks stay at or below -12 dB.
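Gain staging against a -12 dB ceiling is easy to check numerically. A minimal sketch (the function names are mine) that measures the peak of a block of float samples in dBFS:

```python
import math

def peak_dbfs(samples) -> float:
    """Peak level of a block of float samples (full scale = 1.0), in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def within_ceiling(samples, ceiling_dbfs: float = -12.0) -> bool:
    """True if the block's peak stays at or below the chosen ceiling."""
    return peak_dbfs(samples) <= ceiling_dbfs

take = [0.05, -0.12, 0.20, -0.18]   # loudest sample is 0.20 of full scale
print(round(peak_dbfs(take), 1))    # -14.0  (dBFS)
print(within_ceiling(take))         # True: peaks stay under -12 dBFS
```

Most DAWs show exactly this figure on the channel meter; the point of the sketch is that the check happens on sample peaks, not on average loudness.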
Audio Interface

The audio interface, or sound card, is the device that acts as an intermediary between the microphone and the computer, handling amplification, analog-to-digital conversion, and digital-to-analog conversion. From the interface and its software, you set the sample rate, bit depth, MIDI I/O, and input gain.
Simply put, the audio interface determines the quality and resolution of your audio as much as your mic does. Some of the best audio interfaces are SSL 2+, Audient iD4 MkII, Focusrite Scarlett, UAD Apollo Twin, etc.
Microphone & Recording Environment

The choice of microphone greatly determines how good your recording is. Recording with a sturdily built condenser microphone with a cardioid polar pattern will improve your output to a huge extent. It also makes a big difference if your room is acoustically treated or if you use acoustic panels while recording.
The Rode NTK, Shure SM7B, and AKG C414 are among the great microphones for recording vocals. Secondly, vocal positioning is also important. A rule of thumb is to keep a distance of about six inches between the mic and the mouth; get closer for more bass (the proximity effect) or back off for louder deliveries.
Also, keep the headphone volume loud enough for the vocalist to perform comfortably, but not so loud that the track or metronome bleeds into the vocal mic. Lastly, a positive ambiance and a friendly environment affect how well the performance of the vocalist (singer, rapper, performer, etc.) is captured.
How loud should vocals be in a mix?
Vocals are one of the loudest elements in a mix and are usually kept between -20 and -12 LUFS. In a post-mix, pre-master track, a typical pop or rap mix has the vocal at about -20 LUFS, a rock mix at about -22 LUFS, an EDM song at roughly -14 LUFS, and a singer-songwriter track at about -12 LUFS.
| Vocal Loudness (Integrated) | True Peak of the vocal track |
| --- | --- |
| -14 LUFS | -2.5 dB TP |
| -20 LUFS | -7 dB TP |
| -12 LUFS | 0 dB TP |
| -24 LUFS | -10 dB TP |
| -16.5 LUFS | -1 dB TP |
Note that the difference between the true peak and LUFS ratings sits between roughly 11.5 and 15.5 dB in these examples, around 13 dB on average, which means the dynamic range is broadly similar across tracks. Secondly, for the vocal to hit a true peak of 0 dB, its loudness needs to be around -12 to -14 LUFS. So that’s how loud the vocals are in the mix.
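You can verify that peak-to-loudness spread directly from the loudness and true-peak values listed above (treated here as illustrative examples):

```python
# (integrated LUFS, true peak dBTP) pairs from the table above
pairs = [(-14.0, -2.5), (-20.0, -7.0), (-12.0, 0.0), (-24.0, -10.0), (-16.5, -1.0)]

# Margin between the true peak and the integrated loudness of each vocal
margins = [tp - lufs for lufs, tp in pairs]
print(margins)                       # [11.5, 13.0, 12.0, 14.0, 15.5]
print(sum(margins) / len(margins))   # average margin: 13.2 dB
```

The fairly narrow spread of these margins is what the text means by "the dynamic range is broadly similar" across genres.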
Remember, there are no rules in music, and the metrics mentioned above are just observations from existing tracks. You can read the entire breakdown here. Now let’s talk about LUFS. LUFS stands for Loudness Units relative to Full Scale, a loudness measurement that complements plain decibels by modeling perceived loudness.
When I analyzed my own tracks (pop, hip-hop, EDM, R&B, etc.), the vocals sat at around -18 dB. However, loudness is a relative term. In a typical pop and hip-hop arrangement of 2022, the vocals and drums are the loudest elements in the mix.
Vocals should be loud enough to cut through the mix and be clear & audible in every single sound system but not so loud that it masks the rest of the track and sounds separate from the mix. Any other choice of balance is a creative choice.
We have discussed the gain settings and level balancing of vocals for recording and mixing, along with the other factors, best practices, and everything you need to be mindful of while recording vocals. Of course, similar guidelines apply to any other instrument, but this post kept only vocals in mind.
One thing I want to address is USB mics. Many options are available today, and a USB mic is one of them. They have an analog-to-digital converter built into the mic’s own circuitry, which is typically less capable than a dedicated interface’s preamp and converter. That is why they usually can’t deliver the same resolution as a condenser microphone connected to an audio interface.
An audio interface gives you more control and flexibility and has better resolution. However, many USB Mics available today are of decent quality and can be used to record vocals or podcasts. Lastly, when it comes to different genres and vocalists, you may also need to be careful about your gain settings.
For example, some voices project far more softly than others, so the range, pitch, and dynamics of the vocalist also determine how you set the microphone gain. Lastly, a well-captured performance matters most, even if the technical targets aren’t quite met. Vocalists like Bono of U2 and John Lennon preferred recording on an SM57 and SM58, respectively.
All that matters, in the end, is that the recording is clean and the performance is captured. I hope the article was of help. Thank you for reading.
Shaurya Bhatia is an Indian music producer, composer, rapper, and performer who goes by the stage name MC SNUB and is one half of the Indian pop duo “babyface”. A certified audio engineer and music producer, and a practicing musician and rapper for more than 6 years, Shaurya has worked on projects across various genres and has also been a teaching faculty member at Spin Gurus DJ Academy.