Synthesized Convolution Reverb by Raytracing #43549

Closed
opened 2015-02-03 17:13:20 +01:00 by Sebastian Schwank · 8 comments

The program I’ve implemented as a prototype in Python is meant to generate a synthesized reverb for an artificial room modeled in Blender3D. It calculates a reverb characteristic for a given signal (e.g. an impulse) and a given set of data (e.g. the size of the room, its topology, the position of the signal source, the position of the receiver … ).

Changed status to: 'Open'

Sebastian Schwank self-assigned this 2015-02-03 17:13:21 +01:00

Added subscriber: @schwenk

Here’s my first attempt to extend Blender’s functionality with a new feature I would call:

Synthesized Convolution Reverb by Raytracing

For everyone who isn’t familiar with convolution reverb in audio technology, I recommend reading the Wikipedia article for a brief introduction.
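
As a quick illustration of the idea (this is not the prototype’s code, and the signals below are made-up toy data): convolution reverb obtains the “wet” signal as the discrete convolution of the dry signal with the room’s impulse response (IR). A minimal pure-Python sketch:

```python
# Minimal sketch of the convolution at the heart of convolution reverb.
# The signals are made-up toy data, not output of the prototype.

def convolve(signal, ir):
    """Discrete convolution: out[n] = sum over k of signal[k] * ir[n - k]."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

dry = [1.0, 0.0, 0.0, 0.0]   # a unit impulse as the dry signal
ir = [1.0, 0.0, 0.5, 0.25]   # toy IR: direct sound plus two decaying echoes
wet = convolve(dry, ir)      # convolving a unit impulse just reproduces the IR
```

For realistically long impulse responses one would use an FFT-based convolution (e.g. `scipy.signal.fftconvolve`) instead of this O(n·m) loop.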

So let’s start:

What does it do?

The program I’ve implemented as a prototype in Python is meant to generate a synthesized reverb for an artificial room modeled in Blender3D. It calculates a reverb characteristic for a given signal (e.g. an impulse) and a given set of data (e.g. the size of the room, its topology, the position of the signal source, the position of the receiver … ).

How does it work?

To answer this question I want to show some snippets of my code and explain what they do. But first I want to sketch the idea of the whole process:

The whole program could be seen as a hybrid of a path tracer and a delay effect. At the moment it basically does nothing else than:

  1. Tracing rays in the scene FROM the receiver (which is a mesh) TO the source (which is a point object, in my case a lamp object).

  2. Calculating the length of the rays reaching the receiver object.

  3. Delaying the signal for each ray according to its length.
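
The three steps above could be sketched roughly like this. This is a simplified assumption of mine, not the actual prototype code: the function name, the 1/length attenuation, and the constants are illustrative only, and the ray lengths would come from Blender’s ray casting in practice:

```python
# Hedged sketch of steps 1-3: given the lengths of the rays that reach the
# receiver (found by tracing from receiver to source), build an impulse
# response by summing one delayed, attenuated impulse per ray.
# The 1/length attenuation is an illustrative assumption.

SPEED_OF_SOUND = 343.0  # m/s, speed of sound in air
SAMPLE_RATE = 44100     # Hz, audio sample rate

def impulse_response(ray_lengths):
    """One delayed, attenuated impulse per ray, summed into a single IR."""
    # Step 3: convert each ray length into a delay in samples.
    delays = [int(round(l / SPEED_OF_SOUND * SAMPLE_RATE)) for l in ray_lengths]
    ir = [0.0] * (max(delays) + 1)
    for length, d in zip(ray_lengths, delays):
        ir[d] += 1.0 / length   # assumed distance attenuation
    return ir

# e.g. a direct path of 3.43 m and one reflected path of 6.86 m
ir = impulse_response([3.43, 6.86])
```

The resulting IR could then be convolved with any dry signal to produce the reverberated output.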

For the moment, that’s all it does. But let’s have a look at the Blender scene, the source code, and the results:

![Bas5cScene.png](https://archive.blender.org/developer/F140699/Bas5cScene.png)

The image above shows a basic scene made with Blender3D. The selected objects in the scene are the audio source (highlighted) and the receiver (darkly highlighted). The non-selected object is the geometry in which the rays are reflected. For my model I assume that each sound ray is reflected by the geometry.

The next image shows a result for a given (highlighted) signal, in my case a rect impulse, which is the darkly highlighted graph.

Notice that it’s delayed and noisier, and fades out slowly after some time. (Y-axis: deflection; X-axis: time)

![exampleresult.png](https://archive.blender.org/developer/F140701/exampleresult.png)

Finally, I want to show you an example of a set of paths, reflected by the geometry, which reach the audio-source mesh.

![-aths.png](https://archive.blender.org/developer/F140703/-aths.png)

And here is the current .blend:
[SoundRayAlpha1.blend](https://archive.blender.org/developer/F140706/SoundRayAlpha1.blend)

First result: [VirtualReverb(SoundRayProject).wav](https://archive.blender.org/developer/F140731/VirtualReverb_SoundRayProject_.wav) IR: [IR_ReverbStereo.wav](https://archive.blender.org/developer/F140732/IR_ReverbStereo.wav)

Added subscriber: @AdamKalisz

Changed status from 'Open' to: 'Archived'

Added subscriber: @ideasman42

These kinds of projects are interesting, but not really something we can include in our releases in its current state (blendfile & script).

Maybe some 3D/Audio simulation addon could be included with Blender eventually but there would need to be some good practical use-case.

Archiving; however, you might like to post this on user forums and see if others are interested in using it or collaborating.

Reference: blender/blender-addons#43549