If you listen to music on a standard MP3 player or stream it on iTunes with a flat mix, it will probably sound quite different from the same tracks played on the radio. Unlike music players, radio stations process their music for an enhanced listening experience.
So what do radio stations do to make the music sound better? Radio stations process the sound of their music specifically for radio play. The process involves minimizing the peak-to-average ratio of the audio, eliminating clipping distortion, and applying appropriate equalization settings.
Editing sound for radio play is an art and a process of its own. You need to use the right audio format for radio play, understand how sound compression works, and apply the correct equalization settings for broadcast.
- 1 Things Radio Stations Do to Make Music Sound Better
- 2 Which Audio File Format Is Best For Broadcast Radio?
- 3 How Is Music Compressed To Play On Radio Without Losing Quality?
- 4 What Are Standard Equalization Settings For FM Broadcasting?
- 5 Final Word
Things Radio Stations Do to Make Music Sound Better
One of the things radio stations do to make music sound better is sound compression. This involves compressing and amplifying audio tracks so they sound louder than they actually are.
During compression and amplification, the quieter parts of an audio track are punched up while the louder sections are made even louder. The entire process is done without introducing distortion that would cost the track the sound quality and clarity of the original recording. Once the process is complete, listeners can enjoy audio tracks through their radio speakers without having to adjust the volume significantly.
Editing sound for radio play aims to achieve full, clear sound quality. Many people listen to the radio at work, while driving, or while working at home. Without sound editing, it would be difficult to hear the quiet parts of an audio track without constantly adjusting the volume. For instance, without processing, the quiet parts of a track can easily be lost under the noise of your car while driving.
Engineering sound for broadcast radio is not just a science but also an art. The science involves exploiting the available audio bandwidth of the transmission channel while eliminating distortions in sound quality.
As an art, sound engineering for broadcast radio involves modifying sound quality without affecting the original quality of the audio track. Some sound engineers change audio sound quality altogether to create a distinct audio signature for the radio station.
Sound engineers at radio stations aim to strike a balance between the artistic and scientific elements of the job. The overall goal, though, is to increase the perceived volume of audio tracks without degrading the original track quality.
One of the best things about the entire process is that there is no right or wrong way to do it, provided the broadcast signal meets regulatory requirements. Modifying sound quality for broadcasting is almost entirely subjective. In the end, the success of the audio processing is determined by the audience.
Engineers increase the perceived volume of sound for broadcasting by reducing the peak-to-average ratio of the music. This works best when the compression and limiting involved introduce as little distortion as possible.
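To make the peak-to-average idea concrete, here is a minimal sketch in plain Python, not a broadcast-grade processor: the threshold, ratio, and test tone are all illustrative. It compresses the loud stretch of a signal, applies make-up gain, and shows that the crest factor (the peak-to-average ratio in dB) drops, which is exactly the "louder at the same peak level" effect described above.

```python
import math

def crest_factor_db(samples):
    """Peak-to-average (crest) ratio in dB: peak level over RMS level."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

def compress(samples, threshold=0.5, ratio=4.0):
    """Static compressor: reduce the gain of any sample above the threshold."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(math.copysign(level, s))
    return out

# Test tone: a loud stretch followed by a quiet stretch of the same note.
signal = [math.sin(2 * math.pi * 440 * n / 44100) * (1.0 if n < 2000 else 0.2)
          for n in range(4000)]

compressed = compress(signal)
peak = max(abs(s) for s in compressed)
loudened = [s / peak for s in compressed]  # make-up gain back to full scale

# The processed version has a lower crest factor: a louder average
# level at the same peak level.
print(round(crest_factor_db(signal), 2), round(crest_factor_db(loudened), 2))
```

A real broadcast compressor works on a running level estimate with attack and release times rather than sample by sample, but the arithmetic behind the loudness gain is the same.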
Which Audio File Format Is Best For Broadcast Radio?
The best audio file format for broadcasting on the radio is MP2. Audio quality standards are essential factors in radio production. Therefore, broadcast sound engineers need to comply with the standards to reach a broader audience. There are several tactics for modifying audio file formats for radio broadcasting.
It is advisable to start with WAV file formats when you begin producing for broadcast. WAV file formats are uncompressed and often render the best sound quality. In addition, it is much easier to edit WAV audio files than other types of files. However, you will need to convert the WAV audio files if you want to air them in a radio station.
Understanding the MP2 File Format
MP2 is not the most popular audio file format. In fact, many people outside broadcast media do not know much about this file format. Many sound engineers start working with the MP2 file format when editing sound for broadcast media.
A considerable advantage of the MP2 format is that files are reduced to smaller sizes while maintaining high sound fidelity. If you convert a WAV file to MP3, the MP3 will be about a tenth of the size of the original WAV file. Convert the same WAV to MP2, however, and the resulting file will be about half the size of the WAV file.
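The size arithmetic is easy to check. The sketch below is plain Python; the 128 kbps MP3 and 384 kbps MP2 bitrates are illustrative choices, and the actual size ratio depends entirely on the bitrate you encode at. It compares a CD-quality WAV against compressed versions of the same three-minute track.

```python
def wav_bitrate_kbps(sample_rate=44100, bit_depth=16, channels=2):
    """Uncompressed PCM bitrate: every sample of every channel is stored."""
    return sample_rate * bit_depth * channels / 1000  # 1411.2 kbps for CD audio

def file_size_mb(bitrate_kbps, seconds):
    """Size in megabytes of a constant-bitrate stream of a given length."""
    return bitrate_kbps * seconds / 8 / 1000

track = 180  # a three-minute track
print(round(file_size_mb(wav_bitrate_kbps(), track), 2))  # WAV: 31.75 MB
print(round(file_size_mb(128, track), 2))                 # MP3 at 128 kbps: 2.88 MB
print(round(file_size_mb(384, track), 2))                 # MP2 at 384 kbps: 8.64 MB
```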
Mono vs. Stereo Files
When working with audio files for broadcasting, consider using two distinct channels of sound. Once you have converted your WAV files to MP2, make sure the converted files are stereo files before broadcasting them. Many radio stations prefer to work with an MP2 file format known as intensity encoded joint stereo.
Like standard stereo, intensity encoded joint stereo gives you two distinct sound channels. However, joint stereo saves disk space because the encoder exploits similarities between the two channels during encoding.
How Is Music Compressed To Play On Radio Without Losing Quality?
Sound compression helps to create a balance between the loud and quiet sounds in an audio file. Compression increases the perceived loudness of an audio track for broadcast media by making quiet sounds seem louder.
Compression helps to improve sound density, which is the extent to which loud and quiet audio signals are made uniform. The two main sound compression techniques include peak limiting and clipping.
Peak limiting is a form of compression with a very fast attack and release time and a significantly high compression ratio. Modern peak limiters act on the overall level of the audio rather than on individual waveform peaks. Clipping, by contrast, allows sound engineers to control the individual peaks of an audio file.
Unlike compression, which reduces the dynamic range of the sound, peak limiting aims to prevent overload in a sound channel. Clipping, on the other hand, removes any part of a sound wave that exceeds a specific level. This technique should be used sparingly because it causes audible distortion when overused.
Limiting aims to increase audio density, which makes loud sounds seem louder. However, this technique should also be used sparingly because it can make the audio sound flattened and unappealing. It is essential to understand how to use clipping and peak limiting, because both introduce distortion when overused.
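The difference between the two techniques is easy to see in code. Below is a minimal plain-Python sketch; the 0.8 ceiling and the release constant are illustrative, and a real limiter adds look-ahead and smoother gain curves. The clipper simply flattens everything above the ceiling, while the limiter ducks the overall gain the instant a peak would overshoot and then lets it recover slowly.

```python
def hard_clip(samples, ceiling=0.8):
    """Clipping: flatten any part of the wave beyond the ceiling.
    Distorts audibly when overused."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def peak_limit(samples, ceiling=0.8, release=0.999):
    """Peak limiting: fast attack (instant gain cut on overshoot),
    slow release (gradual recovery toward unity gain)."""
    gain, out = 1.0, []
    for s in samples:
        gain = min(1.0, gain / release)   # slow release
        if abs(s) * gain > ceiling:
            gain = ceiling / abs(s)       # fast attack
        out.append(s * gain)
    return out

# A steady tone with a sudden spike: the clipper flattens only the spike,
# while the limiter also holds the overall gain down briefly afterwards.
spiky = [0.3] * 50 + [1.0, -1.0] + [0.3] * 50
clipped = hard_clip(spiky)
limited = peak_limit(spiky)
```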
Selective Limiting and Multi-Band Compression
Sound engineers use these techniques to improve sound density without causing distortion. Selective limiting and multi-band compression divide the audio spectrum into individual bands and allow sound engineers to compress or limit each band separately. This is necessary to prevent either the voice or the instrument frequencies from dominating the other.
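As a rough illustration of the idea (not production DSP: a real multi-band processor uses proper crossover filters, and the one-pole band split, thresholds, and ratios below are all illustrative), this plain-Python sketch splits a signal into two bands, compresses each with its own settings, and sums them back together.

```python
import math

def split_bands(samples, alpha=0.1):
    """Crude two-band split: a one-pole low-pass keeps the lows,
    and the remainder becomes the highs."""
    low, lows, highs = 0.0, [], []
    for s in samples:
        low += alpha * (s - low)
        lows.append(low)
        highs.append(s - low)
    return lows, highs

def compress_band(samples, threshold, ratio):
    """Static compression applied to a single band."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(math.copysign(level, s))
    return out

def multiband_compress(samples):
    lows, highs = split_bands(samples)
    # Separate settings per band, so a loud bass line cannot force
    # the vocal range down, and vice versa.
    lows = compress_band(lows, threshold=0.4, ratio=6.0)
    highs = compress_band(highs, threshold=0.6, ratio=2.0)
    return [lo + hi for lo, hi in zip(lows, highs)]
```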
What Are Standard Equalization Settings For FM Broadcasting?
Radio stations use equalizers to modify the spectral balance of audio signals. Equalizers are filters placed in the path of an audio signal; their primary function is to apply peaking (boost or cut) curves to audio signals. Some radio stations use equalizers in the broadcast chain to create a unique sonic signature for their output.
An equalizer isolates certain frequencies and boosts them, attenuates them, or leaves them unchanged. The best equalizer setting for broadcasting is a subjective matter; you need to decide on the desired sound and the best way to achieve it.
Equalizers control different frequency bands, including:
- Super lows
- Lower mids
- Mids
- Upper mids
- Super highs
Super low frequencies fall within the 20 Hz to 60 Hz range. In most mixes, the super lows can be heard in the:
- Low-pitched drums
Low frequencies can be heard from a distance and often cause objects near the sound source to shake. This range should be boosted only moderately; otherwise, the audio track may sound muddy and undefined. The human ear cannot pick out individual notes at low frequencies, so avoid amplifying the lows significantly.
The lower mids fall within the 60 Hz to 250 Hz range. This range is usually pleasing to the human ear, and many sound engineers boost it to make a track pop. Instruments that produce frequencies in this range include:
- The cello
- Low note guitar
The mids fall within the 250 Hz to 1500 Hz range. The human ear hears this range most clearly, so boosting it raises the perceived overall volume of an audio track for broadcasting.
The upper mids fall within the 1500 Hz to 6600 Hz range. Boost this range sparingly: the ear is very sensitive here, and excessive boosting quickly becomes harsh and fatiguing. Boosted to appropriate levels, the upper mids produce a pleasant, chime-like sound.
The super highs fall within the 6600 Hz to 20,000 Hz range, among the highest frequencies the ear can perceive. Many sound engineers boost the super highs to create a sense of air and atmosphere.
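For readers who want to see what a single equalizer band looks like in code, here is a peaking-filter sketch in plain Python following the widely used "audio EQ cookbook" biquad formulas; the 1 kHz center frequency, 6 dB gain, and Q of 1.0 below are illustrative example values, not broadcast presets.

```python
import math

def peaking_eq_coeffs(f0, gain_db, q=1.0, fs=44100):
    """Biquad peaking-EQ coefficients (audio EQ cookbook form).
    Boosts (gain_db > 0) or cuts (gain_db < 0) a band around f0 Hz."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin
    a0, a1, a2 = 1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def biquad(samples, b, a):
    """Direct-form I filter:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# Example: boost the mids around 1 kHz by 6 dB, as an engineer might do
# to lift the most clearly heard range.
b, a = peaking_eq_coeffs(1000, 6.0)
```

Chaining one such filter per band (super lows, lower mids, and so on) gives the multi-band equalizer described above.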
Final Word
Audio tracks played on a radio station sound better than the same tracks played on an MP3 player or another device. Radio stations use sound editing techniques to boost the sound quality and loudness of the audio tracks they play. These techniques include compression, file conversion, and equalization.
The author started out as a rapper and songwriter in 2015, then gradually developed his skills to become a beatmaker, music producer, sound designer, and audio engineer.