The purpose of the autopan:
For a mono audio input using the true-stereo send setup in PS, instead of L and R IRs, imagine you have two variations of the same-position IR: generated in IM with different random seed values, or captured in a real space from the same position but perhaps with a slightly different speaker angle.
All you need is to mix them with an autopan placed before PS. The result is VERY similar to real reverb, with only a slight movement of the frequency response.
The problem is that all the VST autopanners I know use a -6 dB stereo pan law, which makes them unusable for this purpose, because you end up -3 dB down at the central pan position...
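To illustrate why the pan law matters here, a minimal numpy sketch (the function name and the sine LFO are my own, not from any plugin): with a gain-sum law the two strongly correlated IR sends always add back to unity, while a constant-power law boosts correlated material at the center.

```python
import numpy as np

def autopan_mix(a, b, rate_hz, sr, law="linear"):
    """Crossfade two mono IR-send signals with an LFO-driven pan.

    law="linear": g_a + g_b = 1, so the combined level stays flat for
                  strongly correlated signals (two variations of the
                  same-position IR).
    law="power":  g_a^2 + g_b^2 = 1 (constant power, -3 dB per side at
                  center) -- boosts correlated material by up to ~3 dB.
    """
    n = np.arange(len(a))
    pan = 0.5 + 0.5 * np.sin(2 * np.pi * rate_hz * n / sr)  # 0..1
    if law == "linear":
        ga, gb = 1.0 - pan, pan
    else:  # constant power
        ga, gb = np.cos(pan * np.pi / 2), np.sin(pan * np.pi / 2)
    return ga * a + gb * b
```

Feeding the same signal into both inputs shows the difference: the linear law returns it unchanged, the power law does not.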
I must stress, to be perfectly clear: applying modulation as a process on either side of an IR is not the same thing.
Modulation only sounds good when it’s inside of an algorithmic reverb, applied to various taps within the signal path, not the whole thing.
There is something non-static about real space. I don't know why; that's just how it sounds compared to an IR.
What if we were to play an identical impulse into a real room: would repeated recordings of the reverb be absolutely identical? Would the reverb waveforms look the same, and if we phase-reversed one of two simultaneous playbacks, would they null each other out?
I haven't tried this, but I would expect you would never get a null.
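The null test described above is easy to quantify in code. A small sketch (assuming numpy and two equal-length recordings as arrays; the function name is my own): invert one take, sum, and report the residual level relative to the original. Identical takes null to -inf dB; any time variance in the room leaves measurable energy.

```python
import numpy as np

def null_residual_db(take1, take2):
    """Phase-invert one take, sum with the other, and report the
    RMS residual in dB relative to the first take.
    A perfect null returns -inf; a real room should never null."""
    n = min(len(take1), len(take2))
    diff = take1[:n] - take2[:n]          # sum with polarity flipped
    ref = np.sqrt(np.mean(take1[:n] ** 2))
    res = np.sqrt(np.mean(diff ** 2))
    return 20 * np.log10(res / ref) if res > 0 else float("-inf")
```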
At what rate does this change occur?
If you use static IRs in a mix, they blend in and are hard to hear until they are so loud that it's too much.
The fact is, no impulse response comes close to sounding as good as my live recording room. I am unable to get the sound of the room into my drums by applying reverb to dry drums; not even close.
Here is an example of a long Lex chamber IR, where you can hear the 'EQ' change in the tail. While this helps, it is clear that alone it does not produce the desired result, indicating that tonal change alone is not enough. If this tonal change were created by moving comb filtering, the real reverb would sound a lot different from just the EQ result the IR captured.
And even when I combine four IRs, the increased complexity doesn't help at all.
Lex hardware (not the same settings) with lots of 'spin', etc.:
This may not be apparent when soloing like this, but in real music (with realistic proportions of modulation) these differences make the reverb 'work'.
Here is the Lex concert hall, real hardware:
An ir of the same setting;
This is not to show what real space sounds like, but to point out the information IRs cannot recreate.
Importantly, this is the information excluded from real-space IRs.
Continuing to experiment with LiquidSonics, my conclusion is that the chorus and EQ sections do not work. They just sound like... a 'chorus' and an LFO EQ.
Not sure about the two-IR thing yet; I think I need to create two IRs for this purpose.
'QuickQuack RaySpace' just uses simple modulation, but it sounds more effective to me. Here is an excessive example to demonstrate:
I feel it is phase changes that are the effect we want to hear, and they can be very subtle and still be effective.
The purpose of these samples is to show that EQ and gain change are part of the sound but, in my experience, do not provide the missing factor.
I am using artificial reverb examples not because they are relevant to real space, but because they are relevant in application to PS, and it's a lot easier to do than real space.
I would like to create a test in a very large room to clarify this: recordings made with the same mics used for the impulse capture, so we can directly compare a room recording to an IR, all things being equal.
To me, I imagine the best sound would come from varying the IR itself.
What about Nebula's 'vector' approach? What if the impulse were captured in a variety of ways, or in multiple ways?
Aleksey, what about your idea for "'time-varying' convolution"?
...no impulse comes close to sounding as good as my live recording room. I am unable to get the sound of the room in my drums by applying reverb on dry drums, not even close...
Did you record these drums from the same position in an anechoic room, and then use IRs of your room? If not, it can NEVER sound close to reality.
As for me, I'm interested in organ music, so I use IRs of big churches. No Lexicon can come close to a good set of IRs, because it CANNOT perform the exact high-frequency rolloff, and so it always sounds either harsh for organ pleno, or dull, but never exactly as needed. Maybe it's not possible for digital filters to achieve the same steepness with fast filters without artifacts, I don't know. Voxengo IM cannot nail it either, because it performs frequency damping instead of high-frequency rolloff, which is incorrect for big churches.
But for drums, Lexicons are maybe better suited than for pipe organ, because drums need shorter tails and smaller spaces; there, high-frequency rolloff with the same steepness as in churches is not needed.
Finally, I must agree with Ian that IRs are lacking something: used normally, they are not audible until they are used too much; some movement is NEEDED. I also agree that chorus and modulated EQ didn't help much, but a combination of two or more IRs, if used correctly (mostly in the high-frequency region; resonant modes in the low frequencies must not move, the same as standing waves in a real church), sounds much better to my ears. Much better than Nebula with Lexicon cathedral IRs, and it's not the CPU hog that Nebula is.
P.S. I'm not sure whether it's possible to modulate IRs during convolution to get some movement, as Ian asked. Can Aleksey confirm this?
...Modulation only sounds good when it’s inside of an algorithmic reverb, applied to various taps within the signal path, not the whole thing...
No matter whether modulation is applied to the whole audio or to various single taps, it still more or less obviously does the same thing: it detunes the sound. Which is usually not what musicians need.
And I'm not talking about the aliasing problem associated with all VST choruses I know of, which is a second, less obvious issue.
If you want to apply modulation to various taps as in a Lexicon, that is easily possible in PS too. You only need IM as well.
IRs cannot be modulated in real time unless the processor is some sort of Volterra kernel. Any modulation you apply is applied pre or post.
So, what you really want is not some modulation, but a real space. The difference between recording in a real space and convolution is that the sound source is constantly moving: a vocalist can't stand perfectly still, drums vibrate and move around a bit, etc. This can't be realistically captured by IRs. But if we are talking about constant performer/mic positions in space, convolution captures it all.
I understand your praise of various modulation approaches, but they are nothing more than modulations, they are not realistic.
The only approach I'm interested in myself is EQ/phase modulation. Well, if this is not what the market needs, then I'm simply losing customers. If you want a "better" sound, simply use an algorithmic reverb, or some modulation that makes convolution "bigger than reality". But I'm not interested in implementing that.
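The pre/post point above can be made concrete. Below is a minimal numpy sketch of a naive "time-varying" convolution that crossfades two IRs per output sample (the function name and LFO shape are my own, O(N·L) and far too slow for real use). Because convolution is linear, a crossfaded kernel collapses exactly into a crossfade of two static convolutions, which is one way to see why such "IR modulation" reduces to processing applied around ordinary convolutions.

```python
import numpy as np

def time_varying_conv(x, ir_a, ir_b, lfo):
    """For each output sample n, convolve with the crossfaded kernel
    (1 - lfo[n]) * ir_a + lfo[n] * ir_b.  Demonstration only: O(N*L)."""
    L = len(ir_a)
    y = np.zeros(len(x))
    xp = np.concatenate([np.zeros(L - 1), x])  # zero-padded input history
    for n in range(len(x)):
        k = (1.0 - lfo[n]) * ir_a + lfo[n] * ir_b
        # dot of reversed recent history with the per-sample kernel
        y[n] = np.dot(xp[n:n + L][::-1], k)
    return y
```

With a constant LFO of 0 this is an ordinary convolution with `ir_a`; with a moving LFO the output equals `(1 - lfo) * conv(x, ir_a) + lfo * conv(x, ir_b)` sample for sample.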
No, I cannot, because the frequency-rolloff filters in PS have a small, fixed steepness (12 dB/oct), while real churches typically have around 24 dB/oct or more, and this steepness in a real space is NOT constant over the whole IR!! But after two years of research I have found a solution to this problem when creating artificial IRs with IM, so it's not a problem now.
Using some frequency modulation, if needed, is not a big problem either (but does anybody know of a VST chorus with zero aliasing that doesn't sound dull? I personally do NOT).
Using a combination of two variations of same-position IRs was the problem until now, as I wrote earlier, because no usable autopanner existed. This will be solved soon too: I have asked a clever software developer from my country to make a plugin with a variable stereo pan law. And it will be much better than the one implemented in Reverberate, because it allows freehand modulation curves.
I don't know why; I'm only speculating, and I am no expert in this area and don't understand the physics involved. But my premise is valid, because this difference is more than subjective opinion; I would say the difference is fact.
Second to that is a convolution reverb being your only reverb for recreations of all reverbs, real or algorithmic, giving you more variety than you could practically maintain with multiple hardware boxes. This is not so important to me.
Why can’t we do that?
Is this a computational practicality, or is it impossible within the PS model?
Note that in 'RaySpace', an IR is constructed in response to your virtual space (somewhat like IM), and you can automate a 'walkthrough' of your space in real time as the IR is effectively altered in real time.
It's clearly not enough to vary IRs from sample to sample. Reality is much more complex than that: when the performer moves even by one inch, this produces a considerably different impulse response from the microphone's perspective, with the early reflections shifted most obviously. It's a lot more complex than just modulating between two IRs, because the performer moves while he performs; he does not move in a random or sinusoidal manner.
Lexicon reverbs are in no way "real" - they are a bad example to compare to convolution reverbs. Convolution should only be compared to real-world reverb recording.
Maybe you simply have not tried convolution enough to find your best IR set.