The A-Zed of Audio
Every space has a sound, like every person has a fingerprint.
We've talked a lot about the way that sound happens physically, and by now you probably can recite in your sleep the way that anything vibrating causes all sorts of disturbances amongst the air molecules surrounding it, and those air molecules then bump into their neighbours, and those into theirs, until the collisions eventually reach your ear.
At its most basic level, that's where the conversation ends – but we haven't discussed what happens to the 99.999% of those air molecules that don't move in a direct line from the sound source to your eardrum. What happens is reverb.
Short for reverberation, this distinct phenomenon describes the sound you hear that comes to your ears indirectly. If you're indoors, the energy from the sound source bounces off the walls and ceiling and floor, and then arrives at your ear slightly later and more quietly than the direct sound. Remember that sound moves at a constant speed, and if you're standing 3 metres away from the sound source and 2 metres away from the nearest flat surface (most likely the floor), the energy that has to bounce in order to reach your ear is going to get there later, and those are only the first reflections.
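To make that timing concrete, here's a quick back-of-the-envelope sketch in Python. It uses the standard mirror-image trick (treating the floor reflection as if it came from a second source mirrored below the floor) and assumes, purely for illustration, that both the source and your ear sit 2 metres above the floor and 3 metres apart:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20 °C


def reflection_delay_ms(horizontal_dist, source_height, listener_height):
    """How many milliseconds the floor reflection lags the direct sound."""
    # Direct path: straight line from source to listener.
    direct = math.hypot(horizontal_dist, source_height - listener_height)
    # Mirror-image method: the reflected path has the same length as a
    # straight line to a phantom source mirrored below the floor.
    reflected = math.hypot(horizontal_dist, source_height + listener_height)
    return (reflected - direct) / SPEED_OF_SOUND * 1000.0


# Source and listener both 2 m above the floor, 3 m apart:
# direct path is 3 m, bounced path is 5 m, so the reflection
# arrives about 5.8 ms late.
print(round(reflection_delay_ms(3.0, 2.0, 2.0), 1))  # → 5.8
```

A few milliseconds doesn't sound like much, but as we'll see below, it's exactly these tiny, stacked-up delays that give a room its sound.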
But there's still a lot more wall. So slightly after the first reflections arrive, you have the vibrations that bounce off the side wall and then the back wall and then the other side wall and arrive at your ears. Or the ceiling and then the back wall and then the floor. You get the idea.
It's like a giant car rally from Stratford to Toronto, where every car must take the same route and travel at the speed limit. Five of these cars need to make one stop to pick up some snacks, five more need to pick up snacks, fill up on gas, and also pick up a hitchhiker, and some of the remaining cars have even longer lists of errands to run. If you're sitting beside the 401 in the west end of Toronto, you'll see the first group of cars arrive all together – these are the ones that travelled directly. They'll be followed by the cars that ran one errand along the way, then by those that made more stops, and finally you'll have people trickling in who decided to do a whole laundry list of errands on their trip. If you can imagine that the cars are sound vibrations, you've got an idea of what reverb's all about.
Everyone knows reverb instinctively, even if we're not aware of it. This is why a recording in a stairwell will sound completely different from a recording in a soundproofed studio or the trunk of a car. In each case, the direct signal is the same – it's the first group of cars making the trip directly. Yet the stragglers are the ones that give a space its unique sound. If you're singing or speaking, Notre-Dame Cathedral's stone ceilings 35 metres above your head will have a very different effect on the sound bouncing back at you than the absorbent furnishings of your carpeted living room, only 2 or 3 metres away from your ears.
If you're thinking this description sounds a lot like echoes, you're right on the money. Reverberation and echo are cousins, the only difference being how quickly the reflections arrive. If they arrive within about 50–100 ms of the direct sound, we don't hear them as distinct from the original event, and we call this reverb. If the reflections are spaced out further and we can hear each one as a distinct event, we call this an echo.
Though I've never tried it, I don't imagine renting out Notre-Dame Cathedral for a weeklong recording session is the easiest thing to do. But what if you could still get your recording to sound like it happened in Notre-Dame? With where our computers are today, all you need is about one quiet minute in the cathedral, a cap gun, and a good microphone.
In 1947 Bill Putnam – a creative engineer who invented a number of the tools we use today in the sonic trickery that we call recording – set up a loudspeaker and microphone in a studio washroom, and played a version of the song Peg O' My Heart by Jerry Murad's Harmonicats. His microphone captured the reflections from the porcelain and tile, and this 'wet' recording was then mixed with the original 'dry' one to create the impression of having been recorded in this strange space. Thus was artificial reverb born.
Numerous examples quickly emerged, including Duane Eddy's legendary guitar tone from the song Rebel-Rouser, which was achieved with the same speaker-and-microphone technique, but this time the pair were placed in a 7500-litre water tank.
While this technique of creating artificial reverb was significantly easier than the alternative of recording in a giant cathedral or fitting a guitarist into a large water tank, it did require access to a silent, reflective, and often large space. Mechanical alternatives were developed, where the original signal would be sent through a spring or a sheet of metal, and the ensuing vibrations would generate a reverb-esque effect which was recorded and mixed with the original.
As you can probably easily guess from this trajectory that started in cathedrals (or caves, more likely), moved into studio washrooms, then saw the development of small mechanical inventions, the final (or at least current) stage is in the domain of computers.
Digital reverbs come in two types: algorithmic and convolution. In the case of algorithmic reverbs, rapid calculations are performed on the sound to make it act the way sound acts in the physical world. Convolution reverb is a newer approach made possible by computers' increasing power: you take a recording of a sudden sound (a cap gun, a clap, a real estate bubble bursting, etc.), and the software analyzes the way that sound bounces around the space, extracting its precise reflection pattern as a template – called an impulse response – which can then be applied to any sound your heart desires. This is how we can get an hour-long recording to sound like it happened in Notre-Dame after spending only about a minute in there. That is, assuming nobody minds you shooting off rounds of your cap gun in one of the world's most famous cathedrals.
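The convolution itself is simple enough to sketch in a few lines of Python. This is a toy illustration with made-up numbers, not production audio code: every sample of the dry recording triggers a scaled copy of the space's reflection pattern, and all those overlapping copies add up to the wet signal.

```python
def convolve(dry, impulse_response):
    """Discrete convolution: each sample of the dry signal fires off a
    scaled copy of the impulse response, and the copies overlap."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += d * h
    return out


# A made-up impulse response: the direct sound at full volume,
# followed by quieter and quieter reflections.
ir = [1.0, 0.5, 0.25, 0.1]

# The dry signal here is a single click (our cap-gun stand-in).
dry = [1.0, 0.0, 0.0, 0.0]

print(convolve(dry, ir))  # → [1.0, 0.5, 0.25, 0.1, 0.0, 0.0, 0.0]
```

A real impulse response captured in a cathedral would run to several seconds of audio – hundreds of thousands of samples – which is why this technique had to wait for fast computers, even though the math is this straightforward.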
Our digital reverbs act a lot like our other digital tools: we've programmed certain features of the world into our computers so they can generate some Wizard of Oz circus show that reminds us of the world that exists behind, above, and around the screens by which we’re transfixed.
Simulating the world is one approach to using our digital tools, but I think this is selling ourselves short. I see an opportunity, as in so many other digital applications: instead of attempting to use our computers to make things sound like they do in the real world, why not use our computers to do things that we can't do in the real world? We've already programmed the rules of the way things are supposed to be; now let’s let them run wild.
Jordan Mandel is a Digital Media Lab Instructor at the UW Stratford Campus, and writes for this blog regularly. His hobbies include marble counting, pen trading, and chess. More of his work can be found at jordanmandel.com/blog, which is home to the award-winning satire rag, The Outa Times.