I've been playing with the Pristine Space demo lately and tried to simulate a situation where a person enters a cathedral, walks to a small room inside it, and closes the door behind him. To do this I rendered four impulse responses with Impulse Modeler (outside the cathedral, in the big main room, in the small room with the door open, and with the door closed), applied them to different slots, and inversely coupled the gain knobs of the wet signals via automation in Bidule. (It would be nice to be able to do such inverse couplings natively in Pristine Space in order to "blend by hand", as it's difficult to turn two knobs in different directions at the same time with one mouse :) )

The result doesn't sound bad so far, but the bottom line is that I'm just mixing the signals at different levels. So couldn't you somehow integrate Impulse Modeler directly into Pristine Space, so that one could move the mic around in real time? Perhaps there could be a "live" mode that works with a reduced number of rays, and another mode where one defines the path of the mic first and Pristine Space precalculates a set of intermediate impulses with greater precision than is possible in live mode. It could then either morph between these impulses or simply crossfade between two convolutions; if the mic only moves a small distance per step there shouldn't be much audible difference, though I don't know which is more expensive CPU-wise.
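For what it's worth, the inverse gain coupling I rigged up in Bidule amounts to something like the following numpy sketch (the function name and signal shapes are made up for illustration; this is not a Pristine Space feature). An equal-power law keeps the overall reverb level more constant during the "walk" than a simple linear fade would:

```python
import numpy as np

def blend_wet(wet_a: np.ndarray, wet_b: np.ndarray, position: float) -> np.ndarray:
    """Equal-power crossfade between two wet (convolved) signals.

    position = 0.0 -> all wet_a, 1.0 -> all wet_b; the two gains are
    inversely coupled so turning one "knob" turns the other down.
    """
    theta = position * np.pi / 2.0
    return np.cos(theta) * wet_a + np.sin(theta) * wet_b
```

At position 0.5 both signals sit at about -3 dB (gain sqrt(0.5)), which is what you want for roughly uncorrelated reverb tails.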
This approach is not practical, I think. I've heard that somebody does such calculations in real time on 3D graphics cards (but I haven't heard the actual results).
For a real-time example you may check out the RaySpace VST plug-in, which is similar to Impulse Modeler in concept. Of course, its sound is far from the quality and density of Impulse Modeler, but you can reposition and adjust it in real time.
Just check out the scene in The Incredibles where Mr. Incredible visits Edna Mode... the 30 seconds from the front door to the big marble space, oooohhh!
I don't think you need morphing to do this, but it would be interesting to know if they actually used IRs.
I played with this a bit too. I found that simulating movement by morphing convolutions works fine for the reverberant wash, but the early reflections get "damaged" during the crossfade, unless the two impulses being blended are very close together. If the two impulses are distant, the crossfade can create an almost nauseating, disorienting moment, especially on percussive material. It works, but you need lots and lots of impulses for this to sound convincing.
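A side note on the morph-vs-blend question from the first post: for a fixed blend position, crossfading the two convolution outputs and convolving once with a crossfaded (morphed) IR are mathematically identical, by linearity of convolution. The difference only appears when the gains move over time, and in CPU cost (a morphed IR needs one convolution instead of two). A quick numpy check of the static case, with purely illustrative signals:

```python
import numpy as np

rng = np.random.default_rng(0)
dry = rng.standard_normal(512)   # dry input signal
ir_a = rng.standard_normal(64)   # impulse response at position A
ir_b = rng.standard_normal(64)   # impulse response at position B
g = 0.3                          # fixed blend position between A and B

# Blending the two convolution outputs...
out_blend = (1 - g) * np.convolve(dry, ir_a) + g * np.convolve(dry, ir_b)

# ...gives the same result as convolving with the linearly morphed IR:
out_morph = np.convolve(dry, (1 - g) * ir_a + g * ir_b)

assert np.allclose(out_blend, out_morph)
```

So the audible damage to early reflections comes from the gains changing *during* the fade (two differently delayed reflection patterns sounding at once), not from blending per se.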