- User Since
- Feb 4 2009, 8:52 AM
Mon, Apr 30
I'm not sure the fixes are necessary for the 2.7x series. They certainly make sense for 2.8, and there might be even more changes we should make here, since a lot of the ffmpeg functions we use are already marked deprecated. The internal audaspace is not used in Blender 2.8 anymore, but we can certainly apply these fixes if we want to support ffmpeg 4 with Blender 2.7x. For 2.8 I'll fix audaspace together with the deprecated ffmpeg function calls as soon as my distribution releases the ffmpeg 4 update (currently in staging).
Apr 21 2018
- You have to know what you want to do... If you can do the audio editing in Blender, do it there; if not, use some other program.
Apr 17 2018
Looking at T54115, it sounds like the system settings, not the workspace settings, are where the current user settings should go, since these settings are hardware specific.
Apr 16 2018
Apr 15 2018
If your source audio file isn't using the full dynamic range of the amplitude, it is basically wasting quality. Most audio files have a dynamic range of 16 bits. If you have to double the amplitude (volume in Blender = 2), you are already wasting one of these bits; at 10, you are already wasting more than 3 bits. So for any recording engineer the goal is to utilize the whole dynamic range, to not lose any quality during recording. When you have a file that doesn't use the full dynamic range, increasing the volume in Audacity instead of Blender doesn't make any difference in terms of outcome, so you could just do it in Blender directly. The problem is of course that you don't know by how much, so using Audacity makes sense after all. As soon as you have normalized the volume in Audacity, you are using the full dynamic range, and going higher in Blender then leads to clipping.
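The bit counts above are just base-2 logarithms of the gain. A quick back-of-the-envelope sketch (`bits_lost` is my own helper name, not part of any real API):

```python
import math

def bits_lost(gain: float) -> float:
    """Bits of the source's 16-bit dynamic range left unused when the
    signal has to be amplified by `gain` to reach full scale."""
    if gain < 1.0:
        raise ValueError("gain < 1 means no headroom is wasted")
    return math.log2(gain)

print(bits_lost(2.0))   # 1.0 bit wasted at volume = 2
print(bits_lost(10.0))  # ~3.32 bits wasted at volume = 10
```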
Apr 10 2018
Ok, then I'll close this. There is no difference in the processing of the strips between mixdown and rendering. Not even compared to playback; the only difference there is that the user preferences determine the channel count and sample rate.
Ok, changing the number of channels is another story. Blender supports surround sound (= more than two channels), in contrast to Audacity.
Apr 9 2018
Ok, there is a lot going on here and a lot of the information written here is just plain wrong. Let me try to summarize: you're setting the volume of an audio strip to 10 and get an output that is clipped. Well, that's expected. I tried this and whatever I do, it always sounds the same, whether I play back in Blender (SDL or OpenAL), mix down (wav or flac) or render to a video (Matroska, PCM). In that sense, I cannot reproduce the bug report.
Mar 29 2018
Your distinction between the three types of settings is pretty clear. Regarding the user preferences there is no discussion, they are stored system wide and not blend file wide. Playback will always have to happen within the bounds of the user settings. For example, if your user settings are stereo, you simply can't playback 7.1, it will be mixed down to stereo, as in @Christopher Anderssarian (Christopher_Anderssarian)'s laptop example.
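The mixdown mentioned above (7.1 folded to stereo) can be illustrated with a toy 5.1 example. The 1/sqrt(2) gains are common textbook downmix coefficients, not necessarily what audaspace uses, and the LFE channel is simply dropped here:

```python
import math

def downmix_to_stereo(fl, fr, c, lfe, sl, sr, center_gain=math.sqrt(0.5)):
    """Fold one 5.1 sample frame down to stereo (illustrative
    coefficients; real mixers may weight channels differently).
    The LFE channel is ignored in this simple downmix."""
    left = fl + center_gain * c + center_gain * sl
    right = fr + center_gain * c + center_gain * sr
    return left, right

# A signal only on the front-left channel stays on the left:
print(downmix_to_stereo(1.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # (1.0, 0.0)
```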
Jan 31 2018
Hi, thanks for the report.
Hi, thanks for the report. This is a known problem - audaspace doesn't work with negative timings. The reason for this is that blender itself wasn't really designed to work with audio - to overcome some of these problems, I had to make some design decisions that ended in not supporting negative timings. The alternative would have been to not support animation (of volume, pitch, etc.), which is definitely the worse alternative. We could at least document somewhere that negative timings don't work with audio?
Dec 8 2017
Sep 19 2017
Sep 8 2017
Hey @Sergey Sharybin (sergey), I just checked the build bots: while the 64-bit Linux builds still fail, both the 32- and 64-bit builds now have WITH_CODEC_SNDFILE set to ON. So can we consider this resolved?
Sep 5 2017
Thanks @Sergey Sharybin (sergey) ! With libsndfile disabled I can reproduce the error with ffmpeg.
I just saw that the buildbots build without libsndfile, so the cause could/should be ffmpeg. Do you build with libsndfile @Carlo Andreacchio (candreacchio) ?
Ok, I can reproduce the bug with a buildbot build, but it's not a debug build, so it's hard to track down the error. @Sergey Sharybin (sergey), can you help? Which Linux is the buildbot running, and which ffmpeg/libsndfile version? Is it self-built? Can I get it somehow?
Sep 1 2017
This is because audio volume changes are always faded and the initial volume is 1.0. The bugfix sets this initial volume to 0.0 so that it's more of a fade-in. Meanwhile you can try to set the mixdown "Accuracy" to values lower than 1024. This will make the export take longer (1 would take VERY long; I'd start with 128 and keep halving that), but if the first few samples of your sound file are not too loud, this can help.
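A toy model of the chunk-wise fading described above (the function name and linear interpolation are illustrative, not audaspace's actual code): a volume change is smoothed over one chunk of "accuracy" samples, so with the old internal start volume of 1.0 the beginning of a strip animated to silence still leaks audio, while the fix starts at 0.0. A smaller accuracy value also makes any such leak correspondingly shorter:

```python
def faded_volumes(start, target, accuracy):
    """Linear fade from `start` to `target` over one chunk of
    `accuracy` samples (toy model of audaspace's volume smoothing)."""
    return [start + (target - start) * i / accuracy for i in range(accuracy)]

# Old behaviour: internal volume starts at 1.0, so a strip that should
# begin silent fades *down* and leaks its first samples:
print(faded_volumes(1.0, 0.0, 4))  # [1.0, 0.75, 0.5, 0.25]
# Fixed behaviour: starting at 0.0 keeps the beginning silent:
print(faded_volumes(0.0, 0.0, 4))  # [0.0, 0.0, 0.0, 0.0]
```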
I can't reproduce this bug. Can you tell me which distribution and version you are on, and how you got Blender: official builds, distribution builds or self-built? I guess a faulty ffmpeg version causes this; probably not much we can do.
Aug 20 2017
Aug 19 2017
Aug 18 2017
Aug 17 2017
The latest changes were required for the numpy paths to work on Linux and Mac! All build bots build this patch fine now, so I guess it's ready to push. I will do that tomorrow morning.
Aug 16 2017
Since I got the OK from @LazyDodo (LazyDodo) on IRC, I'm ready to push the commit. I will do so on Friday morning (CEST) so that I'll be around for a few hours afterwards and I can handle any problems people might have after the push. That is, if there is nothing else coming here.
Aug 15 2017
I incorporated all the latest comments. As I understand it, Mac works now. So if everything is ok now, I'll push the patch as soon as I get the ok from @LazyDodo (LazyDodo).
Aug 13 2017
Fixed some more bugs, removed unnecessary files and added cmake config files. Now hopefully everyone is happy?
@LazyDodo (LazyDodo) Can you check if it works this time please? (I guess not, maybe you can find the errors, I can't reproduce)
Disabled FFTW3 for blender's audaspace (it wasn't used anyway) and fixed cmake stuff (no more find_package calls, no cmake modules added, no dlls added, ...).
Aug 11 2017
This is wrong: the sound file in the blend file is an mp3, and libsndfile can't read mp3s. Therefore, those functions shouldn't be called that often. Maybe there is something wrong with the library.
Aug 10 2017
Can you put a breakpoint in all four methods that start with "vio_" and tell me how often they get called until the crash happens?
Aug 9 2017
But that exception is caught in AUD_FileFactory::createReader(). Wait, you can somehow see that the exception happened, even though it's caught and the program continues to run? This is expected behaviour in this case, as libsndfile can't read the sound file packed in the blend file. FFmpeg is then able to read it in my case. Does the file open properly, showing the waveform on the audio strip? If so, everything is working fine for you @Fable Fox (fablefox), and this debug information has nothing to do with the bug.
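The fallback described above can be sketched as a reader chain: each backend is tried in turn, and an exception from one backend is caught before moving on to the next, which is why a caught exception is harmless here. This is a hypothetical Python model, not the real C++ code; only the name AUD_FileFactory::createReader() comes from the actual source:

```python
class UnsupportedFormat(Exception):
    pass

def sndfile_reader(path):
    # Stand-in for the libsndfile-based reader: it cannot read mp3s.
    if path.endswith(".mp3"):
        raise UnsupportedFormat("libsndfile: unknown format")
    return ("sndfile", path)

def ffmpeg_reader(path):
    # Stand-in for the ffmpeg-based reader, which does handle mp3.
    return ("ffmpeg", path)

def create_reader(path, backends=(sndfile_reader, ffmpeg_reader)):
    """Try each backend in turn, catching per-backend failures,
    roughly like AUD_FileFactory::createReader()."""
    errors = []
    for backend in backends:
        try:
            return backend(path)
        except UnsupportedFormat as exc:
            errors.append(exc)  # swallowed; the next backend is tried
    raise UnsupportedFormat(f"no backend could open {path}: {errors}")

print(create_reader("song.mp3"))  # ('ffmpeg', 'song.mp3')
```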
Me neither. @Fable Fox (fablefox) could you find out where the exception(s) are thrown?
Jun 19 2017
Jun 18 2017
Thanks, I'll have a look at that. Can you show me the errors that you get with Python? As for fftw3: is it not enough to enable it with cmake (WITH_FFTW3)?
May 22 2017
As long as the sound works, I'm fine :)
Did you also open a sound file and try to play it? For example in the video sequence editor? If it works, you can just ignore those messages...
Of course Windows can also be configured: you have to edit the file %AppData%\alsoft.ini.
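For reference, a minimal alsoft.ini might look like the fragment below. The option names are standard OpenAL Soft configuration keys, but the values shown are only examples; check the alsoft.ini documentation shipped with OpenAL Soft for the full list:

```ini
; %AppData%\alsoft.ini — OpenAL Soft configuration (example values)
[general]
; force a specific output backend, e.g. dsound or winmm
drivers = dsound
; output sample rate in Hz
frequency = 48000
; output channel configuration, e.g. stereo or surround51
channels = stereo
```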
Does this actually affect your audio output? The messages don't look like they would prevent audio from working.
Okay, then I'm closing this.
Apr 16 2017
With respect to enable/disable, we could make the following change: add a choice between enabled, disabled and automatic, where automatic determines whether the scene has sound and enables audio encoding in that case. The choice of codec is a different problem, but a default codec for each file format shouldn't be difficult to choose.
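The proposed logic could be sketched like this. All names here are hypothetical (the mode strings, the defaults table and its codec choices are mine for illustration, not Blender's actual settings):

```python
# Hypothetical per-container defaults; Blender's real choices may differ.
DEFAULT_CODEC = {"mkv": "vorbis", "mp4": "aac", "avi": "mp3"}

def audio_codec(mode, container, scene_has_sound):
    """Resolve the effective audio codec for a render.
    mode is 'enabled', 'disabled' or 'auto'; None means no audio track."""
    if mode == "disabled":
        return None
    if mode == "auto" and not scene_has_sound:
        return None  # automatic mode skips audio for silent scenes
    return DEFAULT_CODEC[container]

print(audio_codec("auto", "mkv", scene_has_sound=True))   # 'vorbis'
print(audio_codec("auto", "mp4", scene_has_sound=False))  # None
```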
Mar 8 2017
Mar 6 2017
Well yeah, it's not so easy... While writing this, I actually had to figure out again why it's so problematic in detail; that's one reason why it took so long...
Mar 5 2017
Mar 3 2017
Feb 10 2017
Hmm, sounds like a problem with OpenAL/pulseaudio, so I guess there's nothing I can do here. It works fine for me, but then again I don't even have pulseaudio installed. ;-)
Feb 9 2017
As there hasn't been a response for a week, I'm closing this. Feel free to reopen if the bug in Blender still persists.
Jan 6 2017
Jan 4 2017
Hi, can you try whether D2447 fixes your problem as well? I'd prefer that the spec-changing code runs as seldom as possible/necessary, for example to avoid resetting the resamplers every time you pause/play.
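The idea can be sketched as a guard that compares the requested specs against the current ones and only rebuilds state on an actual change. This is a toy model with made-up names (Device, set_specs, resampler_resets), not audaspace's real classes:

```python
class Device:
    """Toy audio device: resamplers are only reset when the
    requested specs actually differ from the current ones."""

    def __init__(self, rate=48000, channels=2):
        self.specs = (rate, channels)
        self.resampler_resets = 0

    def set_specs(self, rate, channels):
        if (rate, channels) == self.specs:
            return  # pause/play with unchanged specs: nothing to reset
        self.specs = (rate, channels)
        self.resampler_resets += 1  # expensive reconfiguration

d = Device()
d.set_specs(48000, 2)      # no-op, specs unchanged
d.set_specs(44100, 2)      # real change: resamplers reset once
print(d.resampler_resets)  # 1
```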
Dec 27 2016
Nice report @Tobiasz Karoń (unfa). I see two things here: the first is the off-by-one difference, which I could reproduce and which is a bug in Ardour; the other one I couldn't reproduce, and it would be caused by Blender.
Fixing this with a workaround: disabling the JACK Transport functionality during rendering. Thanks @Bastien Montagne (mont29) for the help on IRC. A proper solution would be Blender supporting animation playback during rendering, but I guess we are far away from that. ;-)