
Mono from Stereo

Black Hole

As a matter of curiosity I have been contemplating the process of taking a stereophonic audio signal and down-converting it to monophonic.

The obvious strategy is to sum the stereo channels (L+R)/2, but it seems to me that if the signals on L and R happened to be anti-phase, the result would be silence when the actual result of presenting the L and R signals to the L and R ears would not be silence.
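
To put numbers on that, here is a minimal numpy sketch (the sample rate and test tone are arbitrary illustrative choices) showing the naive (L+R)/2 downmix cancelling an anti-phase pair completely:

    import numpy as np

    fs = 48000                            # sample rate in Hz (arbitrary choice)
    t = np.arange(fs) / fs                # one second of samples
    left = np.sin(2 * np.pi * 1000 * t)   # 1 kHz tone on the left channel
    right = -left                         # same tone, phase-inverted, on the right

    mono = (left + right) / 2             # the naive (L+R)/2 downmix
    print(np.max(np.abs(mono)))           # prints 0.0 - the downmix is silence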

FM stereo radio uses a system where a mono signal is sent on the main carrier, and a difference signal is sent on a sub-carrier. This means a mono-only receiver (or a stereo receiver switched to mono) picks up a mono signal by ignoring the sub-carrier. If I call the main signal S and the difference signal D, then if S = (L+R)/2 and D = (L-R)/2, L = S+D and R = S-D. Something similar is used on records: the sum signal is the left-right wiggle of the needle, and the difference is the up-down wiggle (or something like that) so that a mono player can easily obtain a mono signal by only taking the left-right wiggle.
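
In code the sum/difference matrix and its inverse look like this - a minimal sketch using the S and D names from above (the filtering and sub-carrier modulation that broadcast FM also applies are omitted):

    import numpy as np

    def encode(left, right):
        # sum ("mono") and difference signals
        s = (left + right) / 2
        d = (left - right) / 2
        return s, d

    def decode(s, d):
        # inverse of the matrix: L = S + D, R = S - D
        return s + d, s - d

    # round-trip check on some arbitrary "audio"
    rng = np.random.default_rng(0)
    left, right = rng.standard_normal(1000), rng.standard_normal(1000)
    s, d = encode(left, right)
    l2, r2 = decode(s, d)
    assert np.allclose(left, l2) and np.allclose(right, r2)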

The consequence of this is that in FM radio broadcasting (and record production) the sound engineers have to pay attention to how the sum and difference signals are created from the left and right signals. It's no good transmitting a signal which a mono receiver would pick up as silence.

I don't know how this is done, or even if it is a real problem at all - maybe these circumstances never arise in practice. Anybody know? T'web is unhelpful on this point.
 
The obvious strategy is to sum the stereo channels (L+R)/2, but it seems to me that if the signals on L and R happened to be anti-phase, the result would be silence when the actual result of presenting the L and R signals to the L and R ears would not be silence.
That's what's done and silence is what happens if it's recorded wrongly.
If I call the main signal S and the difference signal D
M and S are the industry standard terms.
if S = (L+R)/2 and D = (L-R)/2, L = S+D and R = S-D.
Yep, it's just a linear matrix for code and decode.
The consequence of this is that in FM radio broadcasting (and record production) the sound engineers have to pay attention to how the sum and difference signals are created from the left and right signals. It's no good transmitting a signal which a mono receiver would pick up as silence.

I don't know how this is done, or even if it is a real problem at all - maybe these circumstances never arise in practice. Anybody know? T'web is unhelpful on this point.
You use some form of metering, e.g. L, R, M, S, or a phase meter or detector. All pro kit will have some way of doing it. Then there are the flappy things on the side of your head. Anyone involved in pro audio can spot a phase problem a mile off.
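
For what it's worth, the simplest software equivalent of such a phase meter is the zero-lag correlation coefficient between the two channels - a sketch (the function name is mine):

    import numpy as np

    def phase_correlation(left, right):
        # +1: channels in phase (mono-safe), 0: unrelated, -1: fully anti-phase
        num = np.sum(left * right)
        den = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
        return num / den if den > 0 else 0.0

    t = np.arange(48000) / 48000
    tone = np.sin(2 * np.pi * 1000 * t)
    print(phase_correlation(tone, tone))    # ~ +1.0
    print(phase_correlation(tone, -tone))   # ~ -1.0: this pair cancels in a mono downmix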
 
My guess is that there is no universal mathematical solution.

(Deleted silly solution!)

Is there even a unique way of converting a sound source to stereo? You can stick two microphones an average head-width apart, but where do you stick them? There are infinitely many solutions.
 
You can stick two microphones at head-width apart, but where do you stick them?
That's a technique called binaural, and they use a dummy head with the mics in the ears so that the sound field is modified in the same way that a real person listening to the live performance would hear it. The only way to listen to a binaural recording is through headphones, and obviously the dummy head was stationary...

The microphones for a stereo recording shouldn't be a head-width apart: the reproduction will be through speakers that are (say) 6' apart, so the sound field for each microphone (feeding each speaker) needs to be the equivalent. In practice many unidirectional microphones are used to pick up individual performers, and these are then mixed down into right and left channels to place each performer wherever the sound engineer wants in the stereo field. This process is not going to result in anti-phase signals, and that may be one reason it is done like this.

Just having two microphones to capture the whole stereo field "live" could easily produce anti-phase cancellation in the L+R signal, at specific frequencies and specific placements of the sound source relative to the microphones. This again is a good reason for minimising the bleed of other sound sources into a particular performer's mic by using high-quality unidirectional microphones.
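
That "placing the performer in the stereo field" step is just a pair of positive gains per source, which is why it can't create anti-phase content. Here is a sketch of one common constant-power pan law (the exact law varies between desks, so treat the sin/cos choice as an assumption):

    import numpy as np

    def pan(mono, position):
        # position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right
        # constant-power (sin/cos) pan law - one common choice, not the only one
        angle = (position + 1) * np.pi / 4
        return mono * np.cos(angle), mono * np.sin(angle)

    t = np.arange(48000) / 48000
    voice = np.sin(2 * np.pi * 220 * t)       # an arbitrary mono source
    left, right = pan(voice, -0.5)            # place it half-way to the left
    # both channels carry the same waveform scaled by non-negative gains,
    # so the (L+R)/2 fold-down can never cancel to silence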

I think I'm beginning to understand this!

M and S are the industry standard terms.
Standing for what - Mono, and Stereo difference? I was using Sum and Difference.
 
If it's of any use, I have generated some Out Of Phase stereo files you can experiment with; if you get ffmpeg to convert this file to mono you do indeed get no sound.

[Attached image: OOF.jpg]

1 kHz, -6 dB, 180 deg out of phase, 3 second duration file here :- http://ge.tt/9joYkIo1/v/0?c
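
If anyone wants to roll their own test file instead of downloading, something like this should do it (plain numpy plus the standard-library wave module; the file name and sample rate are arbitrary choices):

    import wave
    import numpy as np

    fs, dur, freq = 48000, 3, 1000
    t = np.arange(int(fs * dur)) / fs
    tone = 0.5 * np.sin(2 * np.pi * freq * t)   # roughly -6 dBFS
    stereo = np.column_stack((tone, -tone))     # right channel inverted (180 deg)
    pcm = (stereo * 32767).astype(np.int16)     # 16-bit PCM, interleaved L/R

    with wave.open("oof_test.wav", "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)                       # 2 bytes = 16-bit samples
        w.setframerate(fs)
        w.writeframes(pcm.tobytes())

    # a mono fold-down, e.g. "ffmpeg -i oof_test.wav -ac 1 out.wav",
    # sums to silence apart from rounding error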
 
Do you know if any processing is generally done in an amp, to convert a stereo signal designed for speakers to one suitable for the headphone jack? I had always assumed the L and R signals were passed directly to the headphones but now I am beginning to doubt it. :confused:

BH: I think you meant binaural, not monaural?
 
If it's of any use, I have generated some Out Of Phase stereo files you can experiment with
Thanks, very kind, but I am fluent in Audacity. I have a sound file containing sine waves of continuously rising pitch passed through a stationary frequency envelope (a band-pass filter with long slopes), just to see what happens... if you are interested!
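
Something along these lines will generate that kind of test signal programmatically - a rising sweep through a fixed band-pass filter (a scipy sketch; the corner frequencies, sweep range and filter order are arbitrary assumptions):

    import numpy as np
    from scipy.signal import chirp, butter, sosfilt

    fs = 48000
    t = np.arange(10 * fs) / fs                               # 10-second sweep
    sweep = chirp(t, f0=50, f1=10000, t1=t[-1], method="logarithmic")

    # stationary "frequency envelope": a gentle band-pass around 1 kHz
    sos = butter(2, [500, 2000], btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, sweep)
    # as the sweep passes through the pass band the amplitude rises and falls again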
 
Ta. The task of finding relevant info on t'web is made much easier by knowing the right search terms.
 
That's a technique called monaural, and they use a dummy head with the mics in the ears so that the sound field is modified in the same way that a real person listening to the live performance would hear it
Wasn't that the method favoured by Deutsche Grammophon when making all those excellent classical recordings from the 50s and 60s?
 
Do you know if any processing is generally done in an amp, to convert a stereo signal designed for speakers to one suitable for the headphone jack? I had always assumed the L and R signals were passed directly to the headphones but now I am beginning to doubt it. :confused:
This is what my old Sony AV amp has to say :-

[Attached images: headphones.jpg, headphone2.jpg]
 
the waveform over 3 seconds looks like this

Yeah, A = 1000*2πt, B = 1001*2πt,

(sin A + sin B)/2 = sin ((A+B)/2) cos ((A-B)/2)

The first factor gives the rapid waveform, sin((A+B)/2) = sin(2001πt), and the second gives the slowly varying amplitude, cos((A-B)/2) = cos(πt).
It's trigonometry.

The cancelling one is just

sin A + sin(-A) = sin A - sin A = 0.
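
A quick numerical check of that sum-to-product identity over the same 3 seconds (the sample count is arbitrary):

    import numpy as np

    t = np.linspace(0, 3, 144000)              # 3 seconds
    a = 1000 * 2 * np.pi * t                   # argument of the 1000 Hz sine
    b = 1001 * 2 * np.pi * t                   # argument of the 1001 Hz sine

    lhs = (np.sin(a) + np.sin(b)) / 2
    rhs = np.sin((a + b) / 2) * np.cos((a - b) / 2)
    print(np.max(np.abs(lhs - rhs)))           # ~ 1e-16: the identity holds
    # cos((a - b) / 2) = cos(pi * t) is the slowly varying envelope - a 1 Hz beat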
 
As a matter of curiosity I have been contemplating the process of taking a stereophonic audio signal and down-converting it to monophonic.


Many years ago I wanted mono from stereo for reasons I don't recall exactly (but I think I just had a single extension speaker in another room).
I just connected the L & R channels together through some resistors (this was at the pre-amp level). The sound came out the other end without apparently losing anything, so I don't think any out of phaseness occurred.
 