What’s the weakest link for HD music?
With 2014 being the year of ‘HD’ tracks and 24-bit music becoming more prevalent, it occurred to me: what is in charge of my music quality? I have an iMac, a network streamer and a NAS drive for storage. I’ve been playing my music through iTunes using AirPlay; I know my streamer actually receives the music through a cable for more reliable transport, but is my computer’s sound card influencing the audio prior to my stereo’s DAC and therefore dumbing it down?
Is the dock/connected stereo/Zeppelin in control of the DAC side of things, or is the cheap sound card in my tablet or computer taking the first bite of the cherry?
Steve Bush

The limiting factor is actually your ears. There is a lot of nonsense regurgitated on the web about the supposed fidelity benefits of 24-bit music with 96kHz sampling rates. The truth is that these benefits are entirely illusory. Human hearing spans at most 20Hz to 20kHz, and most adults hear well below that upper limit. 44.1kHz sampling is already capable of capturing this range with perfect fidelity: by the Nyquist theorem, it preserves every frequency up to 22.05kHz. Similarly, 16-bit encoding has a dynamic range of about 96dB, which modern noise-shaped dithering can stretch to a perceptual 120dB. That volume range runs from a sound as quiet as a mosquito to a pneumatic drill right next to your face. Double-blind trials published in reputable, peer-reviewed journals consistently show that no one can distinguish 16-bit/44.1kHz audio from 24-bit/96kHz.
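Those two limits come straight from a little arithmetic: the Nyquist theorem says a sampled system captures frequencies up to half the sample rate, and the dynamic range of linear PCM works out to roughly 6.02dB per bit plus 1.76dB. A quick sketch (the figures, not the code, are the point):

```python
# Nyquist limit: a sampled system captures frequencies up to
# half the sample rate.
def nyquist_hz(sample_rate_hz):
    return sample_rate_hz / 2

# Dynamic range of linear PCM: roughly 6.02 dB per bit plus 1.76 dB.
def dynamic_range_db(bits):
    return 6.02 * bits + 1.76

print(nyquist_hz(44_100))    # 22050.0 Hz - already above the 20 kHz limit of hearing
print(nyquist_hz(96_000))    # 48000.0 Hz - more than double anything audible
print(dynamic_range_db(16))  # ~98 dB for 16-bit
print(dynamic_range_db(24))  # ~146 dB - far beyond any real playback chain
```

The 24-bit figure of roughly 146dB exceeds the gap between the threshold of hearing and instant hearing damage, which is why the extra bits buy nothing at the listening end.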
The reason that higher sampling frequencies and bit-depths are used in the recording studio is that engineers are mixing lots of different tracks with different gains and frequency ranges. Working at 24/96 gives the engineer some headroom, so that distortions aren’t introduced along the way. But whether the final mix is output as 24/96 or 16/44.1 should make no difference to you. With 24/96 audio, your ears simply cannot detect any difference, better or worse. It’d be like paying extra for a monitor that outputs light in the infrared or X-ray portion of the electromagnetic spectrum.
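The headroom argument can be put in numbers too: each extra bit halves the size of a quantisation step, so every rounding error made while processing at 24-bit is 256 times smaller than at 16-bit. A minimal sketch, assuming signed PCM samples normalised to the range -1.0 to +1.0:

```python
# Size of one quantisation step for signed linear PCM normalised
# to +/-1.0: 2^(bits-1) levels cover the positive half of the range.
def quant_step(bits):
    return 1.0 / (2 ** (bits - 1))

step_16 = quant_step(16)          # ~3.05e-05
step_24 = quant_step(24)          # ~1.19e-07
print(step_16 / step_24)          # 256.0 - each extra bit halves the error
```

Mixing dozens of tracks means dozens of gain changes and re-quantisations, so those errors accumulate; keeping the extra bits during production keeps the accumulated error inaudibly small, after which the final truncation to 16/44.1 loses nothing your ears could detect.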