
Creative Degradation

[Photo: “Leon's Brother Help Thyself Benefit,” by spike55151]

So I took the first six seconds of “So Emotional” and looped it eleven times, each time at a lower bit rate. I call it “The Degradation of Whitney in Eleven Stages,” and you can listen to it on SoundCloud. (My apologies to Whitney Houston. This is just intended to demonstrate how lowering the bit rate affects sound quality.) I can’t embed the clip without bringing the whole thing down to a bit rate of 128 kbps, which would kind of defeat the purpose.

Here are some technical notes: The initial sample was extracted from an Apple Lossless MPEG-4 audio file with a bit rate of 787 kbps. The subsequent downsamples were generated with Audacity, then joined together in GarageBand and exported to MPEG-4. Since the export maxed out at 320 kbps, the high rate of the original sample is not preserved, and the first two samples should be indistinguishable. The bit rates in sequence are: 787, 320, 250, 200, 180, 128, 80, 40, 24, 16, and 8 kbps.
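
(For the curious: here’s a rough sketch of how the same ladder could be scripted with ffmpeg instead of Audacity and GarageBand. The filename “so_emotional.m4a” is just a placeholder, and ffmpeg’s built-in AAC encoder may warn about or clamp the very lowest rates, so your results could differ from a GarageBand export.)

    # Sketch: re-encode the first six seconds at each target bit rate.
    # The 787 kbps original serves as stage one unmodified; this loop
    # produces the other ten stages.
    import subprocess

    BITRATES_KBPS = [320, 250, 200, 180, 128, 80, 40, 24, 16, 8]

    for kbps in BITRATES_KBPS:
        subprocess.run(
            [
                "ffmpeg", "-y",
                "-i", "so_emotional.m4a",  # placeholder source file
                "-t", "6",                 # first six seconds only
                "-c:a", "aac",             # ffmpeg's built-in AAC encoder
                "-b:a", f"{kbps}k",        # target bit rate
                # adding "-ar", "22050" here would match the resample
                # step described later in the post
                f"stage_{kbps:03d}k.m4a",
            ],
            check=True,
        )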

The degradation doesn’t register to my ears until the bit rate hits 80 kbps. What’s truly bizarre is that at the lowest bit rate I find I actually like the sound better than the original.

And so here’s the full “So Emotional” downsampled to eight kilobits per second. Again, apologies to Whitney. This is merely intended to demonstrate the audio effects of an extremely low bit rate. I think it sounds cool. Your mileage may vary.

I realized in retrospect that the extreme effect of this process isn’t solely a result of exporting at a low bit rate. It also has to do with sample rate. The lower bit rates don’t support the 44.1 kHz sample rate of the original, so I had to resample the music. I chose a rate of 22.05 kHz and got the results posted above. But when I did it again and chose the lowest sample rate of 8 kHz, the results weren’t nearly so dramatic. It was definitely low-fidelity, but the song was much more recognizable. That’s right, the lower sample rate produced a higher-fidelity result. So clearly there’s some sort of cross-effect between sample rate and bit rate that I’m not understanding.

To understand the difference between sample and bit rate, I found the following explanation on Helium:

It is easiest to think of sample rate as how often the audio signal is sampled and bit rate as the amount of information recorded for a unit of time.
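
For uncompressed PCM the two quantities are tied together exactly: bit rate equals sample rate times bit depth times channels. A quick sanity check with standard CD parameters (my numbers, not from the quoted explanation):

    # Uncompressed PCM: bit rate = sample rate x bit depth x channels.
    sample_rate_hz = 44_100  # CD sample rate
    bit_depth = 16           # bits per sample
    channels = 2             # stereo
    print(sample_rate_hz * bit_depth * channels)  # 1411200 bps, about 1411 kbps

Lossy codecs like AAC break that direct link, which is why a 320 kbps file can still carry a 44.1 kHz sample rate: the encoder decides psychoacoustically where to spend its bit budget.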

So perhaps interference patterns can emerge between the two — kind of like audio moiré? I’m just guessing and probably wrong.
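
A more mundane possibility (back-of-the-envelope arithmetic, not anything verified against the codec internals): at a fixed bit rate the encoder has a fixed budget of bits per second, and a higher sample rate spreads that budget over more audio bandwidth to describe, leaving fewer bits per sample. Dropping to 8 kHz throws away the high frequencies up front, which leaves the codec enough bits to render what remains cleanly.

    # Rough arithmetic: bits available per sample at an 8 kbps export.
    bit_rate_bps = 8_000
    for sample_rate_hz in (44_100, 22_050, 8_000):
        bits_per_sample = bit_rate_bps / sample_rate_hz
        print(f"{sample_rate_hz:>6} Hz -> {bits_per_sample:.2f} bits per sample")
    #  44100 Hz -> 0.18 bits per sample
    #  22050 Hz -> 0.36 bits per sample
    #   8000 Hz -> 1.00 bits per sample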

Published in Geeky, Music & Audio

3 Comments

  1. Ian Zamboni

    Maybe the higher sampling rate at that low a bit rate just gives more room for those transcode compression artifacts to live?

    I kind of wish there were some way to standardize lossless audio and images on the web now; there’s no need to still be using JPGs and MP3s anymore, outside of mobile versions of sites. Media could probably just be transcoded for the mobile version, right? Or is that just out of our league for dedicated processing right now?

  2. Jack Schick

    Must be nice to be so Schmartt!
    Reminds me of the 70’s battle of the Stereo Decks–super-tweeters and sub-woof and audio fidelity math to the point that one commentator wrote: “You’d have to be a fucking Bat to tell the difference” in quality.
    But my experimental music class discussed the recording-studio and stage-system board manipulation of inaudible frequencies to Color the overall wall-of-sound–sympathetic resonances and, as you noted, interference patterns–superheterodynes–I’m just too much old fart to like the ear-buds.
    I want body-shakin’ viber-ations!
