It should all be so simple. 'The higher the number, the better the sound' is an easy message to communicate. So, a 24-bit/192kHz recording must sound better than a 16-bit/44.1kHz CD rip, right? Not quite, unfortunately – things aren’t as straightforward as that.

Even before you start listening there are a number of factors to consider.

1) What are the recording’s origins? We’ve come across so-called ‘high-resolution’ recordings that are touted as 24-bit/96kHz or even 24-bit/192kHz, but are little more than up-sampled CD masters sold at rip-off prices. These are a con, pure and simple.

2) A high-quality original master recording is a must. If that is engineered poorly, it doesn’t matter how high the resolution is – the recording just won’t sound good.

3) Much depends on the playback equipment used. If that isn’t transparent enough to reveal the differences, you’ve got no chance of hearing them.

4) An open mind is useful, too.

Standard resolution files are pegged at 16-bit/44.1kHz. This is the level of CD. Anything higher than this in terms of bits or kHz is considered a high-resolution recording.

What isn’t made clear from the ‘high-resolution’ tag is whether the music file is exactly the same as the original. This is why some companies prefer to use the label ‘Studio Master’ instead (where it applies, of course).

Making fair comparisons between high resolution/Studio Master files and CD quality alternatives isn’t as easy as you might think. I've talked to a number of people in the recording industry, and it looks like the two types of files aren’t necessarily treated the same.

It’s likely that the studios take more care over high-resolution files as they will tend to be heard and bought by more discerning users. The CD-spec file will usually be a down-sampled version of that file.

Not only are there losses involved in stepping down the resolution, but it may well be engineered for less discriminating uses such as commercial broadcast or car use, and so sound different too.

If we get past these issues (somehow), surely there’s a technical case for high-resolution recordings being better, right? Once again the answer isn’t as obvious as we’d like.

A few bits and pieces 

It’s best to split bit depth and sampling rate (the kHz part), and talk about them individually.

The more bits you have, the more accurately the original waveform is measured, so 24-bit looks like a good thing compared with 16-bit. Consider that 16-bit gives you 65,536 steps to measure a waveform, while 24-bit takes that to more than 16.7 million. Impressive.

What extra bits buy you is dynamic range – the difference between the quietest and loudest sounds on the recording: 24-bit gives a 144dB range, 16-bit 96dB.
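As a quick sketch (the helper names here are mine, purely for illustration), both figures follow directly from the bit depth: the number of steps is two raised to the number of bits, and the theoretical dynamic range is 20 times the base-10 logarithm of that ratio – roughly 6dB per bit.

```python
import math

def quantisation_steps(bits):
    """Number of discrete levels a given bit depth can represent."""
    return 2 ** bits

def dynamic_range_db(bits):
    """Theoretical dynamic range: full scale versus one step, in dB."""
    return 20 * math.log10(2 ** bits)

print(quantisation_steps(16), round(dynamic_range_db(16)))  # 65536, 96
print(quantisation_steps(24), round(dynamic_range_db(24)))  # 16777216, 144
```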

It should be noted that these are theoretical figures, compromised to a degree by the noise generated inside the hardware used and by any other signal processing the file goes through. It’s possible to lose as much as 10-20dB of dynamic range this way.

The very best classical recordings have a dynamic range of around 60dB, while it’s not unusual to have pop recordings hovering around the 15dB mark.

That means, for playback purposes, old-fashioned 16-bit has enough capacity to more than cope with any recording we’re likely to play.

Any issues with the greater measurement errors (technically referred to as quantisation errors) suffered by 16-bit compared with 24-bit are reduced by using dither (intentionally added random noise) during the digital processing. Yes, adding the right kind of noise is a good thing.
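The classic demonstration of why dither helps (this is an illustrative sketch, not any particular studio’s process): a signal quieter than half a quantisation step vanishes entirely without dither, but survives – buried in benign noise – with it.

```python
import math
import random

random.seed(1)

def quantise(x, bits=16, dither=False):
    """Round a sample in [-1, 1] to the nearest step, optionally adding
    triangular (TPDF) dither of up to one step either way beforehand."""
    step = 2 / (2 ** bits)
    if dither:
        x += (random.random() - random.random()) * step
    return round(x / step) * step

step = 2 / (2 ** 16)
# A sine wave whose peak is less than half a quantisation step
tiny = [0.4 * step * math.sin(2 * math.pi * i / 100) for i in range(1000)]

silenced = [quantise(s) for s in tiny]                # every sample rounds to zero
preserved = [quantise(s, dither=True) for s in tiny]  # some samples survive

print(all(s == 0 for s in silenced))   # True - the signal is erased
print(any(s != 0 for s in preserved))  # True - dither keeps it audible
```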


How can high-res make a difference?

The argument for 24-bit makes much more sense in the recording process as there are so many 'lossy' processes involved.

While a single track of 24-bit recording has a large dynamic range, it reduces notably when multiple tracks are used. A 48-track recording could lose as much as 36dB of dynamic range – that’s around 5-6 bits of information – even before losses involved in other signal manipulation come into play.
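One common back-of-envelope behind a figure like that (a sketch, and not the only way to count it): if the peaks of N equal-level tracks can align in the worst case, the headroom that must be reserved grows by 20·log10(N) dB – about 34dB for 48 tracks, or between 5 and 6 bits at roughly 6dB per bit.

```python
import math

def headroom_loss_db(tracks, coherent=True):
    """Level rise when summing equal-level tracks: 20*log10(N) if peaks
    can align (worst case), 10*log10(N) for uncorrelated noise floors."""
    factor = 20 if coherent else 10
    return factor * math.log10(tracks)

loss = headroom_loss_db(48)
print(round(loss, 1))        # worst-case peak summing: ~33.6dB
print(round(loss / 6.02, 1)) # expressed in bits: ~5.6
```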

System noise and other factors such as the need to prevent overload eat away at the dynamic range of a recording significantly. Using 24 bits gives the excess capacity to allow for this while maintaining decent sound quality.

The case for increased sampling rates is stronger. 44.1kHz was chosen for CD because it allowed an upper frequency limit of just over 20kHz – the upper limit of what humans can hear. You’ve got to be pretty young and have pristine ears to do it though.

The way digital works means that there are an awful lot of unwanted signals generated above that upper frequency limit. These have to be filtered aggressively; otherwise they’ll result in more distortion in the audible range.
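A small sketch of why those unwanted signals matter: at a 44.1kHz sampling rate, a 25kHz tone – above the 22.05kHz Nyquist limit – produces exactly the same samples as a 19.1kHz tone, so it 'folds back' into the audible range unless it is filtered out first.

```python
import math

fs = 44100             # CD sampling rate; Nyquist limit is fs/2 = 22050Hz
f_high = 25000         # a tone above the Nyquist limit
f_alias = fs - f_high  # 19100Hz - where it folds back to

above = [math.sin(2 * math.pi * f_high * n / fs) for n in range(200)]
# The folded-back tone (phase-inverted, which the ear cannot detect)
alias = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(200)]

# Sample for sample, the two tones are indistinguishable once digitised
print(max(abs(a - b) for a, b in zip(above, alias)) < 1e-9)  # True
```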

That filtering introduces its own distortion, which folds back into the audible range. Raising the sampling rate means that the filters can be set to work at far higher frequencies, taking them and their unwanted effects further away from the audible band.
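To put rough numbers on that (a sketch assuming a 20kHz audible limit): the anti-alias filter has to fit between the top of the audible band and the Nyquist frequency, and that gap is dramatically wider at higher sampling rates.

```python
import math

def transition_octaves(fs, audible_limit=20000):
    """Width, in octaves, between the audible limit and the Nyquist
    frequency (fs/2) - the space available for the anti-alias filter."""
    return math.log2((fs / 2) / audible_limit)

for fs in (44100, 96000, 192000):
    print(fs, round(transition_octaves(fs), 2))
# CD gets a sliver of an octave; 96kHz and 192kHz leave far gentler slopes
```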

The raised upper-frequency limit also means that the upper harmonics of instruments can be represented better, even if science strongly suggests we can’t actually hear them.

Hearing is believing

I’ve noticed the odd thread on our forums suggesting high-resolution recordings are little more than a con. But, having heard a fair few examples, I can’t agree.

The case for higher sampling rates certainly looks stronger on a technical level than the argument for 24 bits (remember I’m talking about playback rather than the recording process).

Many high-resolution files I’ve heard sound gorgeous, making conventional CD-spec versions of the same music sound crude in comparison.

Whether that’s due to the increased bit depth, higher sampling rate or some outside factor such as the care taken in the mastering I’m not sure. It’ll be fun trying to find out though.


by Ketan Bharadia
