Pink Floyd: The Making Of Money & Directionality
On the surface this might seem a wee bit off topic, but bear with me as I will get there eventually. My thanks to Crunch Gear’s John Biggs for pointing out this YouTube clip documenting the making of the song “Money” by Pink Floyd. I’m a fan of the band and have also followed their individual solo careers in more recent years.
Further, I’m a big fan of Alan Parsons, who engineered a number of the band’s recordings and appears in the clip as well. He’s a master recording engineer, which is something I especially admire, having spent a portion of my youth in such endeavors myself.
In describing how they recorded the tracks and mixed the song, this clip points out something that may eventually be valuable to those of us who think about IP telephony, and conference calling in particular: the nature of stereo and its relationship to how we perceive directionality. I have a number of thoughts on this that will come out over a short series of posts.
Now’s a good time to watch the clip….
It may be true that most of the equipment commonly found in a recording studio of the 1980s or 1990s can now be implemented in software on a computer. Even so, there’s just nothing like open reel recorders and half an acre of knobs to make me feel like we’re going to have some fun! Now back to the point I was hoping to make.
In playing some of the tracks in isolation, the good Mr. Parsons points out something about modern pop recordings. He illustrates very clearly that they don’t capture any of the directional or localization information that was present at the time the music was played.
For example, the singer sings into a microphone in the business end of a recording studio. Great effort is made to ensure that such spaces are acoustically inert. All the instruments are deliberately “close-mic’d” to minimize the sound of the room itself. Keyboards or bass guitars may even be taken directly into the recording console. Thus there is no inherent directionality in the recorded sounds. In fact, they are often mono sources, not stereo at all.
Later on, in the mixing process, they use the “pan pots” on the mixing console to selectively control the relative level of each signal in the two channels that comprise the stereo mixdown. “Panning” a signal from left to center or right is a simple manipulation of amplitude, as the sketch below shows.
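If you want to see just how simple that manipulation is, here’s a minimal sketch in Python (my own illustration, not anything from the clip) of a constant-power pan law, which is one common way consoles and DAWs implement a pan pot:

```python
import numpy as np

def pan(mono, position):
    """Pan a mono signal into stereo with a constant-power pan law.

    position runs from -1.0 (hard left) through 0.0 (center)
    to +1.0 (hard right). Note that this changes only the relative
    amplitude in each channel, exactly like a console pan pot.
    """
    theta = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return np.stack([mono * np.cos(theta),   # left channel
                     mono * np.sin(theta)],  # right channel
                    axis=-1)

# A one-second 440 Hz tone "placed" halfway to the right.
sr = 44100
t = np.arange(sr) / sr
stereo = pan(0.5 * np.sin(2 * np.pi * 440 * t), 0.5)
```

Note that nothing about timing or tone differs between the two channels; only the levels do.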
All this flies in the face of what we know about the human physiological mechanism for determining the direction of sound, which involves a complex mix of factors. The relative level at the right vs. left ear has a role to play, but so do temporal factors: a sound originating on your left side reaches your right ear later than your left ear. Further, the shape of your head and pinnae alter the tone of the sound so that it sounds slightly different at each ear. The common pan pot does nothing to address these mechanisms.
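To put a number on one of those temporal factors, here’s a rough back-of-the-envelope calculation of the interaural time difference using the classic Woodworth approximation. The head radius and speed of sound are typical textbook values, not measurements:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference via the Woodworth model.

    azimuth_deg: 0 is straight ahead, 90 is directly to one side.
    Both default values are illustrative textbook figures.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly to one side arrives roughly 0.66 ms later
# at the far ear.
print(f"{itd_seconds(90) * 1000:.2f} ms")
```

Well under a millisecond, yet our hearing relies on it constantly. A pan pot discards it entirely.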
A close-mic’d instrument or voice usually sounds unsatisfactory on its own. Lacking the reflections and reverberation of a performance space (a room), it tends to sound very thin. The music industry’s approach to solving this, while keeping everything under control, is to use artificially created electronic delay and reverberation, selectively adding a sense of spaciousness to each voice or instrument.
The use of synthesized reverberation is profoundly unnatural, most especially when different sources have wildly different reverb effects applied. It’s not uncommon to apply very large room reverb to drums, making them sound big and full. The same reverb setting would be considered inappropriate for voice or rhythm guitar, since it would muddy the sound with too many late reflections.
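For the curious, the heart of those classic electronic reverbs is surprisingly simple. Here’s a sketch of a single feedback comb filter, the basic building block of Schroeder-style reverberators. Real units chain many of these together with allpass filters; the parameter values here are purely illustrative:

```python
import numpy as np

def comb_reverb(signal, sr, delay_ms=50.0, feedback=0.5):
    """One feedback comb filter: delayed copies of the output are
    fed back into the signal, crudely mimicking room reflections."""
    delay = int(sr * delay_ms / 1000.0)
    out = signal.astype(float)          # astype returns a copy
    for n in range(delay, len(out)):
        out[n] += feedback * out[n - delay]
    return out

# Feed it an impulse and you get a train of decaying echoes,
# one every delay_ms, fading at a rate set by the feedback.
sr = 44100
impulse = np.zeros(sr)
impulse[0] = 1.0
tail = comb_reverb(impulse, sr, delay_ms=80.0, feedback=0.6)
```

Longer delays and higher feedback suggest a bigger, livelier space, which is exactly the sort of knob an engineer turns to make drums sound huge while keeping a vocal comparatively dry.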
Sidebar story: When I was a teenager, a band I used to like (Triumph) was known for recording their drum tracks in a large empty warehouse space beside their studio. Once they actually placed the drums inside an empty swimming pool. The large spaces with hard reflective surfaces resulted in naturally occurring big reverberation with long decay times.
All of what I’ve been describing is dead common in recording pop music, even today. The major point to be made here is that there’s often no attempt to capture the acoustic reality of the performance of a song. It’s not about how the music sounds when it’s played live. It’s about isolating each sound and then bringing them all together in a controlled way. It’s completely synthetic, and our ability to determine the directionality of sound is not even considered.
In contrast, some recordings of live performances are done very simply, taking care to accurately record the acoustic event as it occurs. These projects don’t involve many microphones, sophisticated mixing, or effects added after the fact. This is a more purist, minimalist approach to recording music.
If you care to hear a good example of such a recording, I suggest you listen to a copy of “The Trinity Session” by the Canadian band Cowboy Junkies. It’s a hauntingly beautiful album recorded in an old church in downtown Toronto using one very special microphone capable of capturing all the directional detail of the performance.
If your tastes run to classical music, then I suggest you try anything from the catalog of Nimbus Records. This English company was a proponent of a form of surround sound recording (Ambisonics) that evolved from the commercial failure of 1970s quadraphonic technology. Their recordings of various English string orchestras are simply outstanding.
Reflecting all this back into IP telephony, there’s one simple idea to grasp for the moment. There are two possible approaches to dealing with the directional localization of sound:
- You may be concerned about the accurate recording and reproduction of an original acoustic event or performance
- You may wish to use various technologies to synthesize directionality as an effect, with no relationship to the original acoustic event
That’s a good start for now. I’ll consider this subject further and take it deeper into VoIP-space in another post.