
What is the best loudness for online streaming services?


Why do online streaming services use loudness normalization?

Most leading streaming services have implemented some form of loudness normalization.

 

The goal behind loudness normalization was never to force or encourage artists or music producers to aim at a specific level. Loudness normalization in streaming services is purely for the benefit of the end user: it is there so that when listeners play music from a variety of sources (e.g. a playlist), they don’t have to constantly reach for the volume control. That’s it.

 

How does this work?

The loudness of the music is measured, compared to the reference level of the streaming service, and a precise gain offset is applied to match the measured loudness to the reference level.
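
As a rough illustration, here’s a minimal Python sketch of that idea. It assumes the track’s integrated loudness has already been measured (e.g. with an ITU-R BS.1770-style meter); the -14 LUFS reference and the sample values are illustrative placeholders, not any particular service’s spec.

```python
# Minimal sketch of streaming-style loudness normalization.
# Assumes the integrated loudness has already been measured;
# the -14 LUFS reference below is an illustrative placeholder.

def normalization_gain_db(measured_lufs: float, reference_lufs: float = -14.0) -> float:
    """Gain offset (dB) that moves the measured loudness onto the reference."""
    return reference_lufs - measured_lufs

def apply_gain(samples: list[float], gain_db: float) -> list[float]:
    """Apply a static linear gain to float samples in the -1.0..1.0 range."""
    gain_linear = 10 ** (gain_db / 20.0)
    return [s * gain_linear for s in samples]

# A track mastered to -9 LUFS is turned down by 5 dB to sit at -14 LUFS.
offset = normalization_gain_db(-9.0)
print(offset)                           # -5.0
print(apply_gain([0.5, -0.8], offset))  # samples scaled by ~0.562
```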

 

Do all online streaming services use the same reference level and normalization methods?

Different streaming services use different normalization methods.

Some use integrated LUFS, others use Replay Gain, while others have developed their own normalization methods, like Apple’s Sound Check for example.

 

There’s nothing to prevent any of the streaming services from changing their reference level, their normalization method, or both. In fact, Spotify has already done this in the past and has stated that it plans to change its normalization method and reference level again in the future. How? When? Nobody knows.

 

Major online streaming services’ reference levels and normalization methods chart:

[Chart image: per-service reference levels and normalization methods]

Tools like Loudness Penalty can show you precisely what each service will do:

https://www.loudnesspenalty.com/

 

Out of Spotify, Apple Music, YouTube, Pandora, and Tidal, only YouTube and Pandora use track normalization exclusively. For platforms like these, where users predominantly listen to singles or radio-type streams, this makes some sense.

 

Spotify and Apple Music, on the other hand, both have an album mode.

 

The technique employed for album normalization is to use either the level of the loudest song on an album (or EP), or the average level of the entire album, and set that equal to the platform reference level. 

Then the same gain offset is applied to all other songs on the album. 
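
Here’s a hedged sketch of how such an album offset could be computed. The per-track loudness values and the -14 LUFS reference are made up for illustration, and real services may anchor the album differently.

```python
# Sketch of album-mode normalization as described above: one shared
# gain offset, anchored either to the loudest track or to the album
# average, so the relative levels between songs are preserved.

def album_gain_db(track_lufs: list[float], reference_lufs: float = -14.0,
                  use_average: bool = False) -> float:
    """One gain offset (dB) applied identically to every song on the album."""
    anchor = sum(track_lufs) / len(track_lufs) if use_average else max(track_lufs)
    return reference_lufs - anchor

tracks = [-9.5, -11.0, -13.2]                  # integrated loudness per song
offset = album_gain_db(tracks)                 # -4.5 dB, set by the loudest track
print([round(t + offset, 1) for t in tracks])  # [-14.0, -15.5, -17.7]
```

Note that the quieter songs land below the reference level; that’s the point of album mode, since the spacing between songs is preserved.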

 

For Spotify and Apple Music this kicks in when two or more songs from an album are played consecutively.

 

Is mastering to an average target loudness of the major streaming services a good thing for my music? 

Once we let go of the idea of a target that we’re responsible for hitting and accept that each platform will adjust the gain appropriately, it frees us to master to the level that best suits the music.

 

If you want to really crush something, feel free to do that! It will just get turned down more in volume.

If you want to leave a higher dynamic range (peak-to-average ratio), you’re free to do that too.

 

The extra punch of a wider peak-to-average ratio may help the impact of the music; on the other hand, the extra density of a narrower peak-to-average ratio may give your music the right amount of intensity.

 

We shouldn’t let normalization reference levels dictate how we level the songs, but rather let the artistic intent and natural flow of the album be our guide.

 

The one caveat is that if you’re below the reference level of a particular platform, your song may get turned up (depending on the service). Spotify will apply limiting of its own design to do this, while Pandora will allow clipping.
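
A toy sketch of why turning up is the risky direction: positive gain can push peaks past full scale, and the service then has to catch them somehow. The hard clipper below is just a stand-in for whatever each platform actually does, and the sample values are illustrative.

```python
# Positive normalization gain can push peaks past 0 dBFS (1.0 in float).
# This hard clipper stands in for Pandora-style clipping; Spotify would
# instead apply a limiter of its own design.

def turn_up(samples: list[float], gain_db: float) -> list[float]:
    g = 10 ** (gain_db / 20.0)
    return [s * g for s in samples]

def hard_clip(samples: list[float], ceiling: float = 1.0) -> list[float]:
    return [max(-ceiling, min(ceiling, s)) for s in samples]

quiet = [0.1, 0.5, -0.7, 0.9]   # peaks just under full scale
boosted = turn_up(quiet, 6.0)   # +6 dB sends the 0.9 peak to ~1.8
print(hard_clip(boosted))       # that peak is flattened at 1.0: distortion
```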

 

Why is my song not as loud as other songs on a streaming service if all songs are being normalized?

Here’s the interesting part: one of the two most popular loudness normalization algorithms is Replay Gain, so it makes a good example to explain how loudness normalization works.

 

Replay Gain is calculated in three steps:

  1. A loudness filter that emulates the sensitivity of the human ear is applied, rolling off below 150 Hz and accentuating frequencies around 3-4 kHz (an inverted approximation of the Fletcher-Munson curves).

  2. The audio file is sliced into 50 ms blocks, and the RMS level of each block is calculated and stored.

  3. The RMS levels are sorted from softest to loudest on a scale from 1-100%, and the value at the 95% point is chosen as the representative loudness of the whole file (!!)

If this seems a bit confusing to you, don’t worry, you’re not alone.

In practice, the upshot is that it only takes the loudest 5% of a song to offset the entire song’s level.
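
Here’s a minimal Python sketch of steps 2 and 3 above. Step 1, the equal-loudness filter, is omitted for brevity, so this is a rough illustration of the idea rather than the exact Replay Gain specification; the block size and percentile simply follow the description above.

```python
# Sketch of the Replay Gain measurement from steps 2 and 3 above
# (step 1, the equal-loudness filter, is omitted for brevity).

import math

def replaygain_style_loudness_db(samples: list[float], sample_rate: int = 44100,
                                 block_ms: int = 50, percentile: float = 0.95) -> float:
    """RMS level (dB) of the block sitting at the given percentile."""
    block_len = int(sample_rate * block_ms / 1000)   # 50 ms -> 2205 samples
    rms_values = []
    for i in range(0, len(samples) - block_len + 1, block_len):
        block = samples[i:i + block_len]
        rms_values.append(math.sqrt(sum(s * s for s in block) / block_len))
    rms_values.sort()                                # softest to loudest
    rep = rms_values[int(percentile * (len(rms_values) - 1))]
    return 20 * math.log10(max(rep, 1e-12))          # guard against silence

# A file that is quiet for 90% of its length still measures at the
# level of its loud section, because the 95th-percentile block is loud.
quiet_part = [0.05] * (44100 * 18)   # 18 s of low-level signal
loud_part  = [0.8]  * (44100 * 2)    # 2 s loud section
print(round(replaygain_style_loudness_db(quiet_part + loud_part), 1))  # ~-1.9
```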

 

For example, a song with soft verses, or one that is relatively soft throughout with the exception of one big chorus, may initially sound quiet compared to a song that maintains a consistent level throughout. Because Replay Gain only reads the loudest 5% of your song, only that loudest 5% affects the streaming service’s normalization, so a song with substantial differences between soft and loud passages may end up sounding quieter than desired.
