The A-Zed of Audio
2300 years ago, a Greek mathematician named Euclid wrote what would become one of the most successful and influential books of all time. It didn't contain characters, a plot, or even morals, but was filled with mathematical descriptions of the way our world is composed. Although his phrasing was far more technical and arcane, many are familiar with the parallel postulate of his Elements, which tells us that two parallel lines will never intersect. Hurtling down the highway or aboard high-speed trains, we at least hope he was right.
Well, well, well, Euclid. As you were figuring out other earth-shattering truths such as the roundness of circles and the pointiness of triangles, who would have thought that your famous unintersecting lines would, millennia later, also lend their name to one of the most successful and influential studio techniques?
If you recall, back in the early alphabet we discussed compression. Just as the name makes it sound, it's in the business of squeezing. Squeezing what, you ask? The dynamic range: that is, the 'distance' between the loudest and the quietest notes. If your quietest note is 2 points of volume and your loudest note is 9 points of volume, there's a dynamic range of 7 points (actual sound is measured in decibels, which came right after compressor in our audio alphabet, but let’s keep going with this point system).
Let's say our compressor's threshold is set at 5 points with a 2:1 ratio: everything above the threshold gets reduced by half, so our 9-point peak comes down to 7 points while our 2-point whisper passes through untouched. If you've followed this wild math, you know we've compressed our dynamic range from 7 points to 5 points. If we had a higher compression ratio (4:1, for instance) then we'd compress our dynamic range even more.
You may also realize that we've not only compressed our dynamic range, but decreased our volume too. The final step is to add two points to everything so our loudest sound is back at 9 points; our quietest sound is now at 4 points, and we maintain our compressed dynamic range of 5 points.
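To make the arithmetic concrete, here's a minimal sketch in Python of the same point-system example. The threshold of 5 points and the 2:1 ratio are illustrative values chosen to reproduce the numbers above, not settings prescribed anywhere in particular:

```python
def compress(level, threshold=5, ratio=2):
    """Downward compression on our 'points of volume' scale:
    anything above the threshold is reduced by the ratio."""
    if level <= threshold:
        return level
    return threshold + (level - threshold) / ratio

quiet, loud = 2, 9                                  # dynamic range: 7 points
c_quiet, c_loud = compress(quiet), compress(loud)   # 2 and 7: range is now 5
makeup = 2                                          # make-up gain
print(c_quiet + makeup, c_loud + makeup)            # 4 and 9: loudest is back
                                                    # where it started, range
                                                    # is still 5
```

Note that the quiet note never touches the threshold, so only the make-up gain moves it: that's exactly why the quiet parts end up louder while the peak stays put.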
The reason we do a thing like this is that when the quieter sounds get louder (which they do here), we perceive the whole thing as being louder. And perception is really what we're going for – it's like the winter temperature versus the winter temperature with the wind chill. We dress for the temperature with the wind chill because, as far as our experience goes, that's the only real temperature. Same with compressed audio: our experience tells us the compressed version is louder than the uncompressed version, even if a meter insists that both versions peak at the same 9 points of volume.
When we hear sounds, we can most easily tell what they are by their loudest part. Think of a piano, for example. Inside that great hulking piece of wood, a felt-covered hammer strikes a metal string when you press a key on the keyboard. At the moment of impact there’s a distinct sound that tells you, "this is not a flute. It's a piano," and a lot of the distinction comes from what we hear in that impact. It lasts for only a split second, and then the rest of the sound is the string vibrating in a way that's much more similar to other instruments than different.
When we take away or mess around with that initial sound (called a transient), we start messing around with the character of the instrument. Because compressors generally work on only those loudest parts of a sound, they can have a noticeable impact; happily, the way certain analogue compressors colour the sound is considered pleasant. In some cases, very pleasant. This is why certain pieces of gear are so desirable, like the Fairchild 670, which goes for a paltry $30 000 on the very rare occasions that one actually goes up for sale. Yet even in the case of the almighty Fairchild 670, there's no doubt: compressors colour sound.
Most simply, the reason we use compression is to make things sound louder. We saw above that when we apply compression, the quiet points become louder than they were, and the result is an overall perception of an increase in volume.
In that example, all the most important sound information gets messed up by the compressor – that is, everything above the threshold. In the 1960s and '70s, Dolby built a design into its noise reduction technology that would later be seized upon as the template for parallel compression. In its Dolby A noise reduction circuits, the incoming signal was split, and part of it was sent to a compressor with an extremely low threshold and an extremely high ratio. The result was a sound squashed beyond all recognition – on its own, completely unacceptable to listen to. But that signal was then recombined with the original 'dry' signal, and thus was parallel compression born.
What happens is that the original signal preserves all the information-rich transients, while the hyper-compressed signal raises the level of the quieter parts, just as an ordinary compressor would. To get a clear idea of this, imagine a mountainous landscape. Suppose the distance from the tops of the highest mountains to the bottoms of the valleys is 2500 metres. Now let's fill those valleys with water, so we've got some lakes 500 metres deep. The distance from the mountaintops to the lowest level (which is now the water level) is only 2000 metres. If we sonify this landscape, the dry landscape is our dry signal, and the water of the lakes is the hyper-compressed signal. The two of them together make up parallel compression. The nice part is that we've preserved the majestic peaks of these mountains, whereas with traditional (or downward) compression we would have reduced the distance between the high and low points by squashing the tops of the mountains, thereby changing the key features of the entire landscape.
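Here's a toy sketch of that split-squash-recombine idea, still on the point scale. The extreme threshold and ratio are illustrative stand-ins for the Dolby-style "low threshold, high ratio" settings described above, not actual Dolby A values:

```python
def hard_compress(level, threshold=1, ratio=20):
    """Extremely low threshold, extremely high ratio:
    on its own, this copy is squashed beyond recognition."""
    if level <= threshold:
        return level
    return threshold + (level - threshold) / ratio

def parallel_compress(signal):
    """Split the signal, hyper-compress one copy, and sum it
    back with the untouched dry copy."""
    return [dry + hard_compress(dry) for dry in signal]

# One loud transient followed by a quieter tail.
drums = [9, 3, 2, 2]
print(parallel_compress(drums))  # [10.4, 4.1, 3.05, 3.05]
```

Proportionally, the 9-point transient grows by about 15 per cent while the 2-point tail grows by over 50 per cent, so the gap between loud and quiet narrows without the peak ever being squashed – the mountains keep their tops while the water level rises.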
The parallel compression trick works primarily to make things sound punchier and fuller while still preserving the distinct feel of the original sounds. You might also hear it called New York compression, because many of the mixing and mastering engineers who popularized the technique worked in the Big Apple. Although it's used on all sorts of instruments, its most famous application is on drums.
In a studio using analogue compressors, a parallel compression setup can quickly become taxing. Compressors are physical units, and although they don't all cost as much as the Fairchild 670, decent ones start at about $1000 and take up about the space of two shoeboxes. If you want a really extreme parallel compression effect with analogue gear, you might end up running your drum signal (or whatever it is) through multiple compressors – three, four, maybe even five – in addition to the dry signal.
Although the tools aren't as rich or subtle, a digital workflow really shines here: you can keep adding compressors until the cows come home, and you don't need to sell your car to afford them or ask your neighbour to store your bed and sofa while you find space for all this audio gear of yours. The concept behind the parallel compression setup is exactly the same, but it's a hell of a lot easier with a Mac and a mouse than with tubes, transformers, and steel.
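In software, stacking several of those parallel buses – the setup that gets expensive and bulky in hardware – is just a loop. A hypothetical sketch, with made-up settings for each bus, still on our point scale:

```python
def compress(level, threshold, ratio):
    """Downward compression on the point scale."""
    if level <= threshold:
        return level
    return threshold + (level - threshold) / ratio

# Each parallel 'bus': (threshold, ratio, level of the wet copy in the mix).
# These three settings are invented for illustration.
buses = [(1, 20, 1.0), (3, 8, 0.5), (5, 4, 0.25)]

def multi_parallel(signal):
    """Sum the dry signal with every compressed copy of itself."""
    return [dry + sum(g * compress(dry, t, r) for t, r, g in buses)
            for dry in signal]

print(multi_parallel([9, 2]))  # the 2-point tail is lifted far more,
                               # proportionally, than the 9-point peak
```

Adding a fourth or fifth bus means adding one tuple to the list, not finding another $1000 and two shoeboxes of rack space.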
A few paragraphs back I called parallel compression a trick, and it's important to remember that that's exactly what it is. Parallel compression is a great example of something that would never be possible in live performance – and that's because recordings aren't live performances. A great recording is a great bunch of illusions working together to give you the impression that nothing strange at all is going on. Putting it that way makes it sound a lot like modern life.
Jordan Mandel is a Digital Media Lab Instructor at the UW Stratford Campus, and writes for this blog regularly. His hobbies include sorting change, dishtowel restoration, and discus throwing. More of his work can be found at jordanmandel.com/blog, which is home to the award-winning satire rag, The Outa Times.