Korgscrew
Group: Super Admins
Posts: 3511
Joined: Dec. 1999
Posted: Jan. 05 2006, 11:49
Did someone call?
Let me first put this whole thing in my own words. I think you've already covered a few bits of it, but perhaps I can fill in a few gaps.
We get directional information from four basic things: difference in volume, time delay, phase difference and frequency difference.
If a person stands next to your right ear and shouts, it's obviously going to be far louder in your right ear than your left. We can simulate that electronically using the panpot on a stereo mixing desk: pan a sound towards the right and it becomes increasingly louder in the right channel, until it's completely in the right channel and there's none of it at all in the left.

If you now consider that being listened to on a pair of headphones, you'll probably notice what's wrong with just doing that. Get someone to stand next to your right ear and shout, and plug your right ear. You'll still be able to hear them quite well. That could partly be because you've not plugged your right ear properly, but even if you had, you'd still hear the person shouting: some of the sound is reaching the left ear too. In fact, with someone shouting, this really isn't a surprise, and it brings us neatly to the second thing. When we listen on headphones to our recording with the shouting person panned right, the left ear is getting none of it. That clearly isn't what happens in the real world, and it's one of the reasons why the recording won't sound at all like the person is really there shouting. We can of course pan our person further towards the centre, and so mix some of the shouting into the left channel, but that just seems to move the sound into the centre of your head, rather than bringing it outside.

So, the second consideration is time delay. The sound of our shouting person could actually arrive at the same level at both ears. You might think that would automatically make it seem like the person is in front, but it won't necessarily. If the sound arrives at both ears at the same level and at the same time, it will seem central (though still not necessarily in front); if it arrives at one ear earlier than the other, we will assume it has come from that direction. The kind of delay involved is far smaller than anything noticeable as an echo: as short as 0.6 milliseconds, which is the time it takes sound arriving at one side of the head to reach the other. It can be as long as 35 milliseconds, at which point we start to be able to detect it as an echo.
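To make those two cues concrete, here's a minimal Python sketch. It's my own illustration rather than anything from a real desk - the function names, the sample rate and the constant-power pan law are all assumptions for the example; only the 0.6 ms figure comes from the description above.

[code]
# Sketch of the two cues described above: a constant-power pan law for
# level difference, and a channel delay for interaural time difference.
import numpy as np

SAMPLE_RATE = 44100  # Hz, assumed for the example

def pan_constant_power(mono, position):
    """Pan a mono signal. position: -1.0 = hard left, +1.0 = hard right."""
    angle = (position + 1.0) * np.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left = mono * np.cos(angle)              # level falls as we pan right
    right = mono * np.sin(angle)             # level rises as we pan right
    return np.stack([left, right])

def itd_delay(mono, delay_ms=0.6):
    """Delay the right channel to mimic the ~0.6 ms interaural time difference."""
    delay_samples = int(round(delay_ms * SAMPLE_RATE / 1000.0))  # ~26 samples
    left = mono
    right = np.concatenate([np.zeros(delay_samples), mono])[:len(mono)]
    return np.stack([left, right])

# Example: a 440 Hz tone, delayed 0.6 ms in the right ear
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
stereo = itd_delay(tone)
[/code]

Play the result back on headphones and the tone seems to come from the left, even though both channels are at identical levels - the ear that gets the sound first wins.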
Now, phase... that's actually a variation on time difference. If an identical sound arrives at both the right and left ears, the brain can compare the two and decide how far through its cycle each sound wave is. That is, if you imagine the sound waves wiggling up and down (which isn't really how they arrive at the ears at all - they're differences in air pressure), the brain analyses what's arriving at each ear and decides whether both are up and down at the same time, or whether one is up while the other is down. From the frequency of the sound we can find the wavelength (all sound waves travel at the same speed, so the higher the frequency, the shorter the wavelength), and comparing the waves then gives another clue to where the sound came from (it's mostly of use with continuous sounds).
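As a rough worked example (my numbers, not the post's): taking sound at about 343 m/s and an around-the-head path of roughly 0.21 m, you can see why this cue works well at low frequencies but turns ambiguous once the wavelength shrinks to about the size of the head.

[code]
# Given a frequency, find the wavelength and the largest phase difference
# the ~0.21 m path around the head can produce. Both constants are
# approximations assumed for this illustration.
SPEED_OF_SOUND = 343.0   # m/s at room temperature
HEAD_PATH = 0.21         # m, approximate path between the ears

def interaural_phase(freq_hz):
    wavelength = SPEED_OF_SOUND / freq_hz        # all sounds travel at the same speed
    phase_deg = 360.0 * HEAD_PATH / wavelength   # fraction of a cycle, in degrees
    return wavelength, phase_deg

for f in (200, 800, 1600):
    wl, ph = interaural_phase(f)
    print(f"{f} Hz: wavelength {wl:.2f} m, max phase difference {ph:.0f} degrees")

# 200 Hz:  wavelength 1.72 m, ~44 degrees  -> clear, unambiguous cue
# 1600 Hz: wavelength 0.21 m, ~353 degrees -> nearly a full cycle, ambiguous
[/code]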
You'll note that so far all of the kinds of information have relied on a two-point system which, by rights, should only be able to tell whether something is to the right or to the left. After all, a sound coming from above is going to arrive at both ears at the same volume at exactly the same time, just as it will if it's coming from directly in front or behind. A whole lot of extra clues come from changes in frequency. As things move around our heads, the outer ears and the head itself alter the frequency content of the sound that reaches the eardrums. For example, as something moves from in front of you to above you, its sound will get stronger at around 8kHz. If you happen to have a pair of headphones with a peak at 8kHz, you might just think that a lot of the sounds played through them are coming from above...
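That 8kHz idea can be faked crudely with an ordinary peaking EQ. Below is a toy sketch using the well-known "Audio EQ Cookbook" biquad formulas, with scipy doing the filtering. A real pinna does something far messier than a single peak, so treat the gain and Q values as placeholders of my own choosing.

[code]
# Boost a band around 8 kHz with a standard peaking-EQ biquad, as a crude
# stand-in for the spectral "it's above you" cue described in the text.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0=8000.0, gain_db=6.0, q=2.0):
    """Return biquad coefficients (b, a) for a peaking boost at f0 Hz."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
noise = np.random.randn(fs)        # one second of white noise
b, a = peaking_eq(fs)
elevated = lfilter(b, a, noise)    # noise with an 8 kHz bump, per the cue above
[/code]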
Some things are based on previous knowledge - planes, for example, tend not to fly underneath us when we're stood on the ground, so we'll usually interpret their sound as coming from above us.
I think I'll leave that there, so this doesn't get too long.
It would be true to say that most music is mixed predominantly with speakers in mind. Headphones fulfil a very useful role, and a lot of engineers use them while mixing, but very few mix exclusively on headphones. They're generally not regarded as a terribly accurate indicator of the final balance: even the very best headphones give a different picture from what speakers give. It's not necessarily that one is better than the other, but I would say that music almost always has to sound good on speakers (there might be a few instances where mixes are being done that will be heard on headphones only, but I really can't think of what those would be right now). A good mix should work on both headphones and speakers, really.

So, most engineers will be aware of how their mixes sound on headphones, but they'll have been focused on getting something which works on your home stereo speakers. They'll certainly have paid attention to perspective, but placing things above, or behind, or anywhere other than somewhere between left and right when you listen on headphones, is not on the minds of most engineers. It's not even close to being an exact science yet.

There are processes like binaural recording (putting two microphones in a dummy head, or even in someone's ears) which can give the impression of sounds coming from outside the headphones, but they don't work for everyone (some people hear everything as coming from behind when they listen to a binaural recording), and binaural isn't really compatible with speakers (it not only loses the enveloping effect, but also gives a rather weak stereo image). There are some electronic processes too, but so far none have been very successful, and again the mixes can run into problems when played on speakers. Some more adventurous engineers may experiment with techniques to aid localisation of sounds in their mixes, but I don't think there are any experienced engineers out there who assume all listeners hear the same way. They'll just be aiming to create something that works for everybody, no matter what they listen on and how their brains are wired.