The Battle of the Bitrates
How much is enough? Or, to put it another way, how much is too much?
As I discussed in a previous entry (see What the Market Will Bear), I firmly believe that there is a law of diminishing returns when it comes to audio equipment, and that many self-proclaimed audiophiles out there simply buy expensive equipment to somehow prove their “audiophileness.”
Well, the same can be said for digitally encoded music. At what point do bitrates yield diminishing, or even completely insignificant, benefits?
For the uninitiated, most of the music formats you hear of these days, such as MP3 and WMA, are encoded using “lossy” compression. Essentially, this means that the music is taken from a CD, and frequencies and sounds deemed unnecessary for the average human ear are stripped out in order to reduce the file size. For the average listener, much of the fidelity of a CD-quality track is imperceptible anyway, so it makes some sense to shrink it down to a more manageable size.
To put this in perspective, a CD-quality audio track is encoded at a “bitrate” of 1,411 kilobits per second. This means that for every second of music that you hear, 1,411 kilobits of data are required to produce that sound. Simple math turns this into file size: since there are eight bits in a byte, each second of music requires about 176KB of data. One minute of music (60 seconds) therefore needs roughly 10MB of data, and a full-length 70-minute audio CD works out to a bit over 700MB. This, in fact, is why the playing time of an audio CD and the roughly 700MB that a computer CD can hold line up so closely.
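To make that arithmetic concrete, here is a quick back-of-the-envelope calculation in Python. It is only a sketch, and it treats “kilo” and “mega” loosely (dividing by 1,000), the same way the prose above does:

    def approx_size_mb(bitrate_kbps, seconds):
        """Approximate audio file size in MB for a given bitrate and duration."""
        kilobytes = bitrate_kbps * seconds / 8.0   # eight bits in a byte
        return kilobytes / 1000.0                  # treat 1,000KB as 1MB

    print(approx_size_mb(1411, 1))        # ~0.18MB, i.e. about 176KB per second
    print(approx_size_mb(1411, 60))       # ~10.6MB per minute
    print(approx_size_mb(1411, 70 * 60))  # ~740MB for a 70-minute disc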
Needless to say, at these bitrates you really can’t carry around the amount of music most of us have become accustomed to, and downloading tracks at these bitrates, even over modern high-speed links, could take a long time. Hence the magic of MP3 and the other related “lossy” compression formats. A typical MP3 file is about 10% of the size of its CD-quality source, because the most common bitrate is 128kbps, meaning that each second of music takes up 128 kilobits (as opposed to 1,411). I won’t bore you with much more math, but this essentially means dramatically smaller files.
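For anyone who does want the extra math, the same sketch extends naturally to a comparison of a few common bitrates (the figures are approximate):

    for kbps in (1411, 320, 192, 128):
        mb_per_minute = kbps * 60 / 8.0 / 1000.0
        print("%4d kbps: %5.1fMB per minute (about %.0f%% of CD size)"
              % (kbps, mb_per_minute, 100.0 * kbps / 1411))

Run it and the last line shows where the “10%” rule of thumb comes from: 128kbps works out to roughly 9% of the CD bitrate, or about 1MB per minute of music.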
Of course, to shrink a file to such a small fraction of its original size, something has to go. This is where the various encoders do their magic: they analyze the song file, try to figure out which frequencies and other sounds will not be audible to the human ear, and then simply throw that information away. Since fewer sounds mean fewer bits of data, the result is a smaller file.
Now, because this is a lossy compression format (i.e., you’re losing sound information), there is no argument whatsoever that an MP3 file is of lower theoretical quality than the original CD. The question, however, comes down to whether the human ear can hear the difference.
Further, although 128kbps is arguably the most common bitrate, higher bitrates can obviously be used when encoding the music. A 320kbps MP3 file contains more sound data, and is therefore of theoretically higher quality.
Most consumer software that produces MP3 files defaults to 128kbps (although this is configurable), and many of the online music services only sell music at 128kbps. This seems to come down to what most people expect, and what they will tolerate. There have been arguments that lower bitrates are practical with more advanced formats such as Windows Media Audio (WMA) or Advanced Audio Coding (AAC, the MPEG-4 format commonly used by iTunes), but as a rule the industry seems to accept 128kbps as the best trade-off between storage and quality.
However, there are many who prefer to encode their music at higher bitrates because they don’t want to sacrifice audio quality. A quick trip over to the Digital Audio Formats forum on iLounge will reveal plenty of debate over which bitrate is best, and even over the differences between formats and encoders.
Don’t get me wrong: bitrates DO make a difference, even on a purely theoretical level. The mathematics don’t lie: if a song has a lower bitrate, there is less audio data available for each second of playback, and therefore the quality is going to be lower. This is true in the same way that converting a 24-bit colour image to a 16-bit colour image produces an image with fewer colours.
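If the colour analogy helps, here is a rough sketch of what dropping bit depth looks like in code: a single 24-bit RGB pixel (8 bits per channel) squeezed down to 16-bit (5-6-5) and expanded back. The detail lost in the rounding is simply gone, just as the discarded audio data is:

    def to_16bit_and_back(r, g, b):
        """Quantise a 24-bit RGB pixel to 16-bit (RGB565), then expand it again."""
        r5, g6, b5 = r >> 3, g >> 2, b >> 3   # keep only 5, 6 and 5 bits per channel
        return (r5 << 3, g6 << 2, b5 << 3)    # rebuild 8-bit values; the low bits are lost

    print(to_16bit_and_back(200, 117, 43))    # (200, 116, 40): close, but not identical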
The question, however, comes down to the ability to notice the difference.
I recently decided to take the plunge and pick up a pair of Shure e2c earphones, just to see what the fuss was about and to do a bit of testing with regard to bitrates and general audio quality.
Firstly, I do need to admit that the Shure e2cs are a nice pair of earphones, and I am impressed by the sound quality. Whether that quality is worth $120 to me, I haven’t decided, since, to be fair, I’m not sure they’re that much better than the $35 Koss earbuds I was using previously.
However, my logic is that if bitrates are going to make a difference, it will take a set of earbuds like these to hear it. So, I set out to re-encode a few of my tracks at three different bitrates just to see what I was dealing with.
The first two sample tracks I used were Rush’s 2112 and Tchaikovsky’s Piano Concerto in B-flat minor, both from digitally remastered CDs. I encoded them at AAC 128kbps, AAC 320kbps, and Apple Lossless (a lossless compression format that should retain the original CD quality). I then placed them on my iPod with identical information in the ID3 tags, and built playlists of the three versions so I could do a blind comparison.
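Incidentally, if you want to run a similar blind test on your computer rather than an iPod, a few lines of Python can hide which file is which. This is only a sketch, and the filenames are hypothetical stand-ins for your own encodes:

    import random, shutil

    # Hypothetical source files: the same track encoded three different ways.
    sources = ["2112_128.m4a", "2112_320.m4a", "2112_lossless.m4a"]

    random.shuffle(sources)
    key = {}
    for i, src in enumerate(sources, start=1):
        blind_name = "sample_%d.m4a" % i   # anonymous copy to listen to
        shutil.copyfile(src, blind_name)
        key[blind_name] = src              # keep the answer key for afterwards

    print("Rate sample_1 through sample_3, then check:", key)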
On went the earbuds, and I listened to all three versions of each track several times over, using the ratings option on the iPod to mark which one I thought sounded best. There may have been a few subtle differences, but when the dust settled, I couldn’t reliably tell them apart. In fact, in one case I rated the 128kbps AAC file higher than the Apple Lossless version.
Now, to be fair, these are my ears I’m talking about, and moving into my mid-thirties I imagine I don’t have the hearing range I once did (in fact, a recent article on The Register highlights a ringtone that adults aren’t supposed to be able to hear, including a sample audio file; as expected, I couldn’t hear it, although my wife, who is in her late twenties, still could). Clearly, then, my ears have become sub-standard with age, so it’s only to be expected that I’m not going to notice the difference.
However, this highlights an interesting point, which is really what I’m getting at with this article. A quick search on the web will reveal a large number of people jumping on the high-bitrate bandwagon simply because everybody else tells them the sound quality is much better. Again, while this is theoretically true, it may have little bearing on reality if your ears can’t tell the difference.
This also brings up the question of older CDs. Just because a CD uses 1,411kbps to encode music doesn’t mean that the recorded audio is actually making use of all of those bits. To put it another way, if you were to make a CD-quality recording of nothing, inside a soundproof room, you would end up with 1,411 kilobits per second of nothing. Reducing that to a 128kbps MP3 file isn’t likely to result in any quality loss, since there was nothing there to begin with (10% of nothing is still nothing).
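You can see a loose version of this principle with any general-purpose compressor. The sketch below uses lossless compression rather than a lossy audio codec, so it isn’t a perfect parallel, but it makes the point: content with nothing in it needs almost no bits, while dense, busy content needs nearly all of them:

    import os, zlib

    silence = b"\x00" * 1000000        # a million bytes of "nothing"
    noise = os.urandom(1000000)        # a million bytes of dense, random data

    print(len(zlib.compress(silence))) # on the order of a kilobyte: almost nothing to preserve
    print(len(zlib.compress(noise)))   # roughly a million bytes: every bit matters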
Older CDs, even the so-called “digitally remastered” ones, may not have been recorded or transferred with high enough quality in the first place for lossy compression to 128kbps to make much of an audible difference (admittedly, this could also be part of the situation in my testing above: while the Tchaikovsky recording was fairly recent and on a higher-quality CD, the Rush album was an older, mid-’90s recording).
Finally, the question must also be asked as to what equipment is being used to play this music. Chances are that subtle differences between bitrates will only be noticeable on a high-quality digital music player with a decent pair of earphones. I would suggest that no matter how good your stereo system is, ambient noise and room acoustics are going to render any subtle differences inaudible.
So, should you use a higher bitrate? Well, that’s an extremely subjective question, and as I’ve suggested to many on iLounge and elsewhere, it’s something everybody should decide for themselves. The best thing to do is to encode some of your favourite music at a few different bitrates (a quick sketch of one way to do this follows below), and then listen to it using the equipment and conditions you normally would (i.e., if you only listen to your music in traffic or in louder environments such as a busy urban street, you’re probably not going to notice a difference there either). Once you’ve done this, you can make an informed decision about the best bitrate for your needs. Remember as well that if you’re ripping from original CDs, you’ll always have the original uncompressed music available. Ripping at a higher bitrate just because sound reproduction technology might improve in five years is still a waste of storage space, and chances are the compression technology will have changed by then anyway, so you’d want to re-rip your files from CD regardless.
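For what it’s worth, here is one way you might batch up such a test from a ripped track. This is only a sketch: it assumes you have the free ffmpeg encoder installed and on your PATH, and the filename is a hypothetical stand-in for your own rip:

    import subprocess

    source = "favourite_track.wav"   # hypothetical ripped track
    for kbps in (128, 192, 256, 320):
        out = "favourite_track_%dkbps.mp3" % kbps
        # -b:a sets the audio bitrate; everything else is left at ffmpeg's defaults
        subprocess.check_call(["ffmpeg", "-i", source, "-b:a", "%dk" % kbps, out])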
The bottom line, though, is that one shouldn’t jump on the higher-bitrate bandwagon just because others say it’s better. As with high-end audio equipment, I’m willing to bet that the majority of people really can’t hear enough of a difference to make it worthwhile.