I was wondering if I could capture the response of my stereo monitors in such a way that the crosstalk is included.
The purpose is to create an impulse response that would give me the same sound and stereo relationship when listened to in headphones.
Basically: a transfer of the sound I hear through my monitors into my headphones.
I know how to record a single speaker's impulse, but not how to record the interactions between the left and right channels.
When using monitors, the left and right signals are obviously not heard in strict separation, but rather by both ears at the same time.
When using headphones, the left and right signals are separated much more fully (not talking about inter-cranial interference here, although that may also play some part if we get really scientific about it).
So, is it possible to create a simulation/emulation (in the headphone environment) of the sound as it is experienced through monitor speakers, including the left/right crosstalk that speakers create?
I don't believe a mono sweep tone will suffice, since it doesn't really measure how the left and right channels interfere with their counterpart channel (the left speaker is also heard by the right ear, and vice versa).
Should I create an impulse of each speaker (measured from my usual listening position, or perhaps even from where my ears would generally be when listening through speakers), and then combine the 2 impulses into 1 (mixing the 2 impulses together)?
And if so, how do I achieve exact sample precision between the 2 impulses? (How can I know they are overlaid exactly right, so there is no time delay between them?)
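One common way to check the relative delay between two captured impulses is to locate the peak of their cross-correlation; its offset from zero tells you by how many samples one lags the other. A minimal NumPy sketch (function and signal names here are just illustrative, not from any specific tool):

```python
import numpy as np

def align_offset(ir_a, ir_b):
    """Estimate the sample offset between two impulse responses by
    locating the peak of their cross-correlation. A positive result
    means ir_a is delayed relative to ir_b."""
    corr = np.correlate(ir_a, ir_b, mode="full")
    # re-center the peak index so 0 means "already aligned"
    return int(np.argmax(np.abs(corr))) - (len(ir_b) - 1)

# toy example: ir_b is ir_a delayed by 3 samples
ir_a = np.array([0.0, 1.0, 0.5, 0.25, 0.0, 0.0, 0.0, 0.0])
ir_b = np.roll(ir_a, 3)
offset = align_offset(ir_b, ir_a)  # -> 3
```

Once the offset is known, one impulse can be shifted (or trimmed) by that many samples before mixing, so the two line up sample-exactly.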
I hope my question makes sense (and sorry for it being so lengthy) :)
The approach you need is called "true stereo". For this you need to record each monitor using a stereo pair, e.g. in an X/Y arrangement. A mono sweep will suffice. But then you'll need to process the impulse responses in a "true stereo"-capable convolver.
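As a rough sketch of what a "true stereo" convolver does internally (illustrative NumPy only, not any plugin's actual implementation): each input channel is convolved with its own pair of IRs, one per ear, and the four results are summed per ear.

```python
import numpy as np

def true_stereo(in_l, in_r, ir_ll, ir_lr, ir_rl, ir_rr):
    """4-path 'true stereo' convolution.
    ir_xy = impulse response from speaker x to ear y."""
    out_l = np.convolve(in_l, ir_ll) + np.convolve(in_r, ir_rl)
    out_r = np.convolve(in_l, ir_lr) + np.convolve(in_r, ir_rr)
    return out_l, out_r

# Toy IRs: direct paths are unity; crosstalk paths are quieter and
# arrive one sample later (a crude stand-in for the head/room).
ir_ll = np.array([1.0, 0.0]); ir_lr = np.array([0.0, 0.3])
ir_rr = np.array([1.0, 0.0]); ir_rl = np.array([0.0, 0.3])

in_l = np.array([1.0, 0.0, 0.0])  # a click in the left channel only
in_r = np.zeros(3)
out_l, out_r = true_stereo(in_l, in_r, ir_ll, ir_lr, ir_rl, ir_rr)
# the left-only click now also bleeds into the right ear,
# delayed and attenuated - exactly the speaker crosstalk effect
```

This is why a mono sweep per speaker is enough: each sweep, captured with a stereo pair, yields the two IRs for that speaker (to the left and right ears), and the convolver combines all four paths.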
An easier approach is to use the PHA-979 plugin, which offers phase rotation; that is crucial for replicating the monitor stereo field.
Using 4 channels in Pristine Space (effectively 2 impulse responses on the same stereo input) would be 'true stereo'?
So I would create a stereo IR for the left speaker and another for the right speaker, load IR-left into the first stereo slot in Pristine Space and IR-right into slot 2, set both slots to act on the left/right input of the audio track, and send their reverb output to the same left/right track. Similar to when I overlay any other pair of IRs (basically like having 2 instances of Pristine Space running at the same time on the same audio, blending the reverb effects together into 1 combined reverb effect).
The reason I would prefer the IR method is that I could also capture the EQ response from my listening position (and because I have no idea how much I would need to rotate the phase to match what I get from the speakers).
Ok, SUPER! :)
Just a follow-up for potentially interested readers:
I managed to capture the EQ of the speaker monitors, and the reduced stereo field, when using 2 stereo impulse responses in Pristine Space.
I had a stereo recorder placed where my head generally is when listening to the monitors.
I recorded 2 stereo files of the sweep, played first through one monitor, then the other.
I deconvolved them and then had 2 IR files (2 stereo files, as usual).
In Pristine Space I loaded the IR file for the left monitor into slot 1, and the other into slot 2.
Settings:
  Slot/chn:  1L     1R     2L     2R
  Aud in:    in L   in L   in R   in R
  Aud out:   out L  out R  out L  out R
I muted the direct out and only listened to wet out.
Technically this worked as I expected.
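The sweep-to-IR step above can be sketched as a spectral deconvolution: divide the spectrum of the recorded sweep by the spectrum of the test sweep. A simplified NumPy version (the function name and the small regularization constant are illustrative, not what any particular deconvolver tool uses):

```python
import numpy as np

def deconvolve_ir(recorded, sweep, eps=1e-8):
    """Recover an impulse response from a recorded sweep by spectral
    division, lightly regularized to avoid dividing by near-zero bins."""
    n = len(recorded)
    R = np.fft.rfft(recorded, n)
    S = np.fft.rfft(sweep, n)
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n)

# Sanity check with a known IR: "record" a noise sweep through it,
# then deconvolve and recover the IR.
rng = np.random.default_rng(0)
sweep = rng.standard_normal(1024)
ir_true = np.array([0.0, 1.0, 0.5])
recorded = np.convolve(sweep, ir_true)
ir_est = deconvolve_ir(recorded, sweep)
```

Doing this once per monitor on each stereo recording yields the two stereo IR files loaded into the convolver's slots.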
However, the sound in the headphones is still very different. It's much closer in EQ (which is certainly helpful in its own right), but it still doesn't 'feel' anything like listening to the speaker monitors.
Apparently, achieving a realistic-sounding simulation of speaker sound inside headphones is much more involved.
Anyway, as an experiment it was good to try :)
What you are trying to do is impossible! Why? On headphones you have two very well separated channels (the left ear is isolated from the right ear, and vice versa, by more than 80 dB). With loudspeaker reproduction you have very poor channel separation (at most -15 dB), because the transmitting medium is air, which couples each ear to both speakers. That's why. (You might be lucky enough to get some kind of crosstalk cancellation for one point / one plane in the room, but it will collapse immediately when you leave that point.)
Reversed polarity on one speaker is a kind of crosstalk cancellation for M ;-)
So that's why I didn't get it to work :)
Well, it was worth a try. The EQ is actually quite close, but yeah, you're right: it doesn't capture the positional feel of how it sounds on speakers.
This can be done, except you can't model the so-called HRTF this way. With headphones, the HRTF is effectively bypassed compared to speaker monitoring.
You may also try A/B stereo recording to better model the separation between the ears.
The "true stereo" convolution technique does exactly the required thing: it models the decreased channel separation/bleed of speakers as the sound reaches the ears. But to be exact, you also have to model the isolation effect of the head, so for best results you'll have to use A/B stereo recording with a dummy head.
Yes, I read about that: microphones you can actually put in your ears like earplugs, which record the sound as it is reflected by the physical shape of the ears of the person wearing them.
Very specialized and individual, though, since the sound depends greatly on the exact shape of a person's ears.
What I read was that if a person made such a recording, they would hear the correct location (or close to it) when listening back in headphones; the location would sound very realistic (3-dimensional). But if a different person heard the same recording, they would actually perceive a different 3D location, since their ears are shaped differently, which taught them to recognize 3D locations differently. Apparently, placing a sound in 3D space is something we learn as we grow up, and ear shape makes it a very individual, personal experience that can't easily be captured with a single dummy-head ear shape.
Wikipedia has more detail on this: http://en.wikipedia.org/wiki/Head-related_transfer_function
It's very fascinating, but for complete precision it requires modeling one's own ears (and then creating a dummy head that matches that model). A bit beyond what I had in mind originally :)
However, I discovered that true-stereo convolution can be useful for creating more advanced reverbs, so that's something I'll be experimenting with more (not to replicate speakers, but to create more special reverbs).