What is Hi-Res Audio and does it make a difference?

Adam Molina / Android Authority

You’ve probably been told that bigger is better, more is more, and that 24-bit audio is better than 16-bit audio. But what exactly is Hi-Res Audio, and is it better than other formats? We’ll go over what, if anything, changes as you move up in bit depth and sample rate.

What our ears can hear

Almost any discussion about audio has to start with the human ear. The frequency range we can hear spans roughly 20 Hz to 20 kHz, and it narrows with age. Anything above or below that range is inaudible, but other factors are at play too. There is also the concept of auditory masking: loud sounds mask quiet ones. You can’t hear someone whispering at a loud party, for example, even if they’re standing right next to you. Even the fanciest pair of headphones will not change these basic facts.

There is therefore a limit to what we can hear in any given recording. We could theoretically capture and record a wider range of frequencies, but it wouldn’t do us much good. Sound recording technology uses these facts to create high-quality reproductions of virtually anything we can hear.

Some basics of digital audio

When recording audio on a computer, mobile phone, or any other device, the device must convert it from an analog signal to a digital one. This is done with an analog-to-digital converter (ADC), which takes incoming sound waves and represents them as a series of numbers. That’s an oversimplification, but it’s the basic idea we need here.

For digital audio to work, we need a method to perform this conversion. If we take the value of a sound wave at one instant and write it down, a process called “sampling”, we can use those values to recreate the wave later. But how many times should we do this? What time intervals should we use to make sure we capture all the detail? This is where some math known as the Nyquist-Shannon sampling theorem comes in.

The calculations and algorithms behind Nyquist-Shannon are complex, but the key result is simple: to perfectly reproduce a signal limited to a given frequency range, it suffices to sample it at twice its highest frequency. Let’s go slightly above the range of human hearing, just to be safe, and use 22 kHz. Some basic arithmetic gives us 2 x 22,000 Hz = 44,000 Hz. Keep this number in mind; it comes back later.
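That arithmetic is simple enough to sketch in a few lines of Python. Note that the 22 kHz cushion above human hearing is this article’s safety margin, not part of the theorem itself:

```python
# Nyquist-Shannon: to capture every frequency up to f_max,
# you must sample at a rate of at least 2 * f_max.
f_max_hz = 22_000                 # human hearing tops out near 20 kHz; pad a little
nyquist_rate_hz = 2 * f_max_hz
print(nyquist_rate_hz)            # 44000 -- just under the CD standard of 44,100 Hz
```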

But remember, all the digital magic in the world can’t change what the human ear is capable of.

The other piece of this puzzle is bit depth. These are the other digits you see floating around, with 16-bit and 24-bit being the most common. Most of the trumpeting around Hi-Res Audio tends to emphasize this number. But what is a bit? In its most basic form, a bit (short for binary digit) is what computers use to represent one of two possible values: 0 or 1. The more bits you have, the more data you can encode.
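To make the “more bits, more data” idea concrete, here is a quick sketch of how many distinct amplitude levels each common bit depth can encode; every extra bit doubles the count:

```python
# Each bit doubles the number of distinct amplitude levels per sample.
for bits in (8, 16, 24):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:,} levels per sample")
# 16-bit gives 65,536 levels; 24-bit gives 16,777,216.
```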

When it comes to audio, 16 bits is more than enough to capture the detail needed to reconstruct the sound waves later. Thanks to some historical technologies, including early digital audio tape and the compact disc, we ended up with a 16-bit, 44.1 kHz standard. You’ll notice that’s slightly higher than the 44 kHz we need, so we have everything required to recreate high-quality sound. If you don’t believe us, consider how much the recording industry itself freaked out about digital audio of this quality. If consumers could record audio to such high standards, the industry feared, piracy would run rampant.

It never really happened, but it is true that 16-bit, 44.1 kHz audio is of exceptional quality. So why did Hi-Res Audio come about, is it any better, and how does it work?

Hi-res audio says hello

When the CD took off in the late 1980s and 1990s, it represented cutting-edge technology. It seems trivial today, but the components and computing power needed to perform the digital-to-analog conversion (DAC) in home audio products were once expensive and complex. That didn’t matter for long, however, as prices quickly dropped, and the audio quality on offer was indeed exceptional.

The 1990s also saw the rise of the MP3 audio file format, the 2000s brought iPods and iTunes, and later still, streaming services like Spotify caused a stir. As computing power and bandwidth became cheaper and more accessible, Hi-Res Audio tiers appeared on services such as Deezer, Qobuz, and Tidal. That’s the short version of how we ended up with the media landscape we find ourselves in today.

Hi-Res Audio is also referred to as “Ultra-HD” audio, with 16-bit, 44.1 kHz sound labeled “HD” or “CD quality.” High-resolution files can reach 24-bit depth and sampling rates of 192 kHz or more. What, if anything, does this mean for your listening experience?

Higher frequency ranges don’t matter much

Let’s start with frequency. We know human hearing tops out around 20 kHz, and Nyquist-Shannon tells us a sampling rate of 44 kHz captures everything up to that limit, so 16-bit, 44.1 kHz audio already covers the entire audible range.

A 192 kHz sampling rate, for example, can capture frequencies up to 96 kHz! But our ears can’t hear anything that high, so it doesn’t mean much in the end. Even if those frequencies are reproduced perfectly, you will never come close to hearing them.

But proponents of Hi-Res Audio will argue that taking more samples means more data points, and therefore smoother, less “jagged” curves. The short answer is that Nyquist-Shannon already produces perfectly smooth audio curves, but let’s take a closer look at that claim.

Nyquist-Shannon already allows us to recreate flowing sound waves.

Yes, an ADC produces a series of numbers with finite values, not the gently sloping grooves you find on a vinyl record. However, the corresponding DAC does not output anything like a bar graph or a staircase. If you want to see for yourself, connect the output of one to an oscilloscope. The result: smooth, beautiful sound waves. Only the crudest DAC would map discrete data points one-to-one onto output voltages. Any consumer DAC instead performs the Nyquist-Shannon reconstruction needed to reproduce the sound wave exactly as it was recorded. You can add more data points, but they will land on the same reconstructed curve anyway. It won’t change anything at the end of the day.
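The “same reconstructed curve” claim can be checked numerically. The sketch below uses the textbook Whittaker-Shannon (sinc) interpolation formula, which is the ideal reconstruction a real DAC approximates; the 1 kHz test tone and sample counts are arbitrary choices for illustration:

```python
import math

def sinc(x):
    """Normalized sinc, the building block of ideal reconstruction."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

fs = 44_100                  # CD-style sampling rate
f = 1_000                    # 1 kHz test tone, well inside the audible band
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(2_000)]

def reconstruct(t_sec):
    """Whittaker-Shannon interpolation: a sum of shifted sinc pulses."""
    return sum(s * sinc(t_sec * fs - n) for n, s in enumerate(samples))

# Evaluate halfway between two samples, far from the signal's edges.
t = 1000.5 / fs
error = abs(reconstruct(t) - math.sin(2 * math.pi * f * t))
print(error)                 # a tiny truncation error, far below audibility
```

The interpolated value between samples lands right back on the original sine wave; sampling at a higher rate would simply place more points on the same curve.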

So why do so many hi-res audio services show us graphs of supposedly “better” digital representations? For one, it’s great marketing. It took us a long time to explain all the nuances of digital audio above; it’s much faster to sell the idea that more is more than to explain the reality behind it all.

More bits don’t mean bigger sound

Fair enough: higher frequency ranges don’t really make a difference, but surely having more bits available must be good, right? Well, yes and no. It’s technically true that capturing more data per sample yields a wider dynamic range, the difference between the quietest and loudest sounds in a given track, but once again we have to consider the pesky human ear.

As mentioned, the loudest sounds drown out the quieter ones, so even if a file contains data for sounds across a range of volumes, only the loudest are perceptible to our ears. This is all the more true when they are close in frequency.

More bits don’t mean much to your ears, even if you turn up the volume.

What about turning up the volume? Surely this is where more bits shine, right? Amplify the quiet sounds and they’ll reach our ears. Except the loud sounds get louder too, and there’s a limit to how loud we can listen to music before it literally causes permanent hearing damage. Even with the knob turned up to 11, you would somehow have to endure music that loud and have perfect, microphone-like hearing to benefit. Your brain also does a lot of the masking, so even if your ears pick up the beat of a mosquito’s wings at the same instant a gunshot goes off in the room, you’ll never actually notice it. And now your hearing is ruined.

So yes, in theory a 24-bit hi-res file can have a greater dynamic range than a 16-bit file, but you’ll never hear the difference.
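Each bit contributes about 6.02 dB of theoretical dynamic range (20 × log10 of 2), so the headroom of each format is easy to compute. This simple formula ignores refinements like dither and noise shaping, but it shows the scale of the numbers involved:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of ideal quantization: 20 * log10(2^bits)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB -- already beyond safe listening levels
print(round(dynamic_range_db(24), 1))  # 144.5 dB
```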

Is the high resolution worth the high cost?

Given all the mathematical, physiological, and psychological limits showing that Hi-Res Audio doesn’t buy us much, why are more and more streaming services offering it? In short, because they can charge more for it.

A slightly fairer answer might be that streaming services legitimately sat below CD quality for a while. That makes sense: the large, lossless file formats required to send CD-quality audio over the internet eat up a lot of bandwidth and storage space. But as both get cheaper, the task becomes easier. So why not claim to be better than CD, which is old technology anyway?

However, as we explored above, CD quality is very good indeed, and the recording industry knows it (having freaked out about it before). All in all, Hi-Res Audio won’t hurt anything, except maybe your wallet, but it doesn’t exactly help either.
