- User Since
- Feb 4 2009, 8:52 AM (532 w, 6 d)
Sun, Apr 14
Nice find on where the problem was introduced! I'd be fine with the temporary fix. Are you sure, though, that this doesn't prevent both of the two threads' calls during rendering? One of them should still go through. I'm also not sure what moving to the dependency graph will change here: does the dependency graph get updated from two threads as well? Or is it two different instances, so that, as you said, viewport and renderer each need their own sound_scene instance?
Tue, Apr 9
I tried to reproduce the crash, but it didn't happen on my Linux machine either. Given the stack traces though, it looks like the issue is BKE_sound_update_scene being called at the same time from different threads. If so, a mutex in this function should solve the problem.
Wed, Apr 3
Mar 10 2019
I fixed both issues.
Feb 23 2019
Feb 12 2019
@Hossein Shah (hk1ll3r), you're right that the strip length stays the same, but that doesn't mean the speed doesn't change. If you have a strip of full length (not cut on either side) and you change the pitch to 2, you end up with double pitch and double speed, but the strip stays the same length, which means it's silent in the second half. We intentionally don't change the strip length when the pitch is changed, since doing so can cause many problems, especially when the pitch is animated. I hope this clarifies the issue. If not, please ask further questions on devtalk.blender.org as @Sergey Sharybin (sergey) suggested.
Feb 1 2019
Jan 22 2019
Just tried setting it in the C/CXX FLAGS, but I get the same result as with WITH_COMPILER_ASAN. Please let me know what you get with gcc9. If the error is still there, I fear you'll have to help me debug this. ;)
Jan 21 2019
I tried reproducing this: I enabled WITH_COMPILER_ASAN (and disabled Cycles and libmv in order to get it to compile on my machine), but then I only get memory leaks when I run Blender, not the error you reported. Do you have any hints on how to reproduce it?
Jan 17 2019
Should be fixed in d3e856cd but I can't really test it. Please give it a try with a build bot build newer than this comment and check if it works there. If not, feel free to reopen this bug report!
Jan 12 2019
With your last two points you mean the playback device in the Windows audio settings, right? I guess you can recover if you go to the Blender user settings and switch the audio device to None and back?
Dec 31 2018
So my suspicion seems to be correct. The difference in playback is caused by the changes in OpenAL Soft, which now uses ambisonics-based surround sound (https://en.wikipedia.org/wiki/Ambisonics). Originally, I implemented Blender's speaker mapping based on OpenAL Soft, which is why they have been the same so far. Ambisonics is actually better, and ideally Blender would also go in this direction for 3D sound. Unfortunately, I don't have the resources for this development right now; it's more of a long-term goal. If you need a short-term solution, please use the configuration files as I mentioned. There are also ambdec files for different speaker setups, including 7.1, here: https://github.com/kcat/openal-soft/tree/master/presets.
Dec 30 2018
Finally, if you can confirm that this change in OpenAL Soft is the cause, there's one thing you can do to get the same output in playback as in rendering: create an OpenAL Soft configuration file named $HOME/.alsoftrc with the following content:
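The exact file content posted in the original comment did not survive here. Purely as an illustration of the alsoftrc format, a configuration that routes a surround layout through an ambdec preset might look like the sketch below; the keys shown are taken from OpenAL Soft's sample config, but the specific values the author intended are unknown, and the ambdec path is a placeholder:

```
[general]
channels = surround51

[decoder]
surround51 = /path/to/your.ambdec
```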
Please also try a 2.79 version from the build bot here: https://builder.blender.org/download/
Hmm, SDL should work though; can you play around with the settings a bit? The sample rate of 48 kHz should work fine, but you can try changing the number of channels in the user preferences. Also please try setting the environment variable SDL_AUDIODRIVER to dsound or waveout and check if that helps get SDL to work. See https://sdl.beuc.net/sdl.wiki/SDL_envvars#head-9ae11b2daf93dc3706eccf15cbf26eb6235ac634
Ok @Brecht Van Lommel (brecht). Since I'm not really a NLE guy, but rather a real-time guy, I'm also lacking some knowledge about that @Troy Sobotka (sobotka). If you have any resources on what a proper NLE would be (doesn't have to be audio specific), that would help. Also it might be interesting to talk to the actual Spring team in terms of what they would like or how it should work. Where/how should we continue this discussion?
Hey, what audio backend/device are you using? OpenAL? If so, does it work with SDL? What operating system are you on? Do you have an audio driver installed? If so, which one and which sound card do you have if any?
Dec 25 2018
Ok, let me add my 2 - or more like 1000 - cents here. TL;DR: I list 7 major problems with Blender in relation to audio.
Dec 24 2018
Sorry for the late reply, it's been a busy month, but now there are holidays :) The noise problem sounds weird, especially since you're the only Windows user reporting it. Have you tried updating your drivers? Have you also tried SDL instead of OpenAL, or None? Does saving the device settings work in the 2.80 beta?
Dec 5 2018
I guess this change needs more evaluation. If the change targets video encoding, I don't see why the changes in audaspace are necessary. Does audio en-/decoding actually benefit from them? It also seems a bit arbitrary to have those changes for playing audio files and mixing down audio, but not for decoding videos (within the movie clip editor or VSE).
Oct 21 2018
Hey! Thanks for reporting the warnings! I'm not sure what to do about the macro redefinition. That's of course mainly a problem in the SDL source code; did you report it there or submit a fix? Your solution is certainly too broad a workaround, since it silences all those warnings even when there are legitimate ones in audaspace's code.
Sep 12 2018
I have no idea what to do about that either. Sounds like a weird Mac problem. I'd like to know whether it worked before on this system. If yes, what changed: a macOS update, a Blender update? Is there an older Blender version that works?
Aug 13 2018
Well, ideally you would use the offsets as they were used before. But if not, you don't have to convert before storing: we're not forward compatible, but we do have to be backward compatible, and that's what do_versions() is for.
Aug 12 2018
I tried the patch. Animated strips work now, but I still have trouble with strips loaded from a file that was created with vanilla blender.
Aug 7 2018
Hey, sorry for the slow responses, I'm quite busy at work at the moment. Can you please update the patch? Currently it doesn't apply.
Aug 3 2018
Jul 11 2018
Jul 8 2018
Hey! Have you tried this with pitch animated sound? It still doesn't work. I've also found a few other bugs and might even find more if I keep digging.
Jul 7 2018
Jul 6 2018
Hey, thanks for the update! I tried it and it works really nicely for strips with constant pitch. However, it doesn't work for animated pitch at all. I think we should take a middle way and not touch the strip length at all when the pitch is changed; otherwise we get serious problems with pitch-animated strips. The code for cutting and drawing would work fine, but we would have to remove the other changes, I guess. Can you try strips with animated pitch? The goal is that the strip start and end don't change during playback.
Jun 26 2018
Ok, so the first problem is that the display of the pitch-scaled waveform is wrong in D3496 when the start is not 0. Changing the start on a pitch change is also wrong, since the start of the playback of that strip doesn't depend on the pitch. So to keep the sound start at the same position when changing the pitch, you shouldn't do anything; that's what happens right now.
Jun 24 2018
Jun 8 2018
I finally got around to fixing everything in extern/audaspace in blender2.8 and committed it as dd2e1873446e2019a3020e9d62c6efc29b43d930. Everything above has apparently been fixed with ffmpeg_compat.h in master and blender2.8.
Apr 30 2018
I'm not sure the fixes are necessary for the 2.7x series. They certainly make sense for 2.8, and there might be even more changes to make here, since a lot of the ffmpeg functions we use are already marked deprecated. Internal audaspace is no longer used in Blender 2.8, but we can certainly apply these fixes if we want to support ffmpeg 4 with Blender 2.7x. For 2.8 I'll fix audaspace together with the deprecated ffmpeg function calls as soon as my distribution releases the ffmpeg 4 update (currently in staging).
Apr 21 2018
- You have to know what you want to do... If you can do the audio editing in Blender, do it there, if not use some other program.
Apr 17 2018
Looking at T54115, it sounds like the system settings, not the workspace settings, are where the current user settings should go, since these settings are hardware specific.
Apr 16 2018
Apr 15 2018
If your source audio file isn't using the full dynamic range of the amplitude, it is basically wasting quality. Most audio files have a dynamic range of 16 bits. If you have to double the amplitude (volume in Blender = 2), you are already wasting one of those bits; at 10, you are wasting more than 3 bits. So for any recording engineer the goal is to utilize the whole dynamic range in order not to lose quality during recording. When you have a file that doesn't use the full dynamic range, increasing the volume in Audacity instead of Blender makes no difference in terms of outcome, so you could just do it in Blender directly. The problem, of course, is that you don't know by how much, so using Audacity makes sense after all. Once you have normalized the volume in Audacity, you are using the full dynamic range, and going higher in Blender then leads to clipping.
Apr 10 2018
Ok, then I'll close this. There is no difference in the processing of the strips between mixdown and rendering. Not even compared to playback; the only difference there is that the user settings are used for channels and sample rate.
Ok, changing the number of channels is another story. Blender supports surround sound (=more than two channels) in contrast to Audacity.
Apr 9 2018
Ok, there is a lot going on here, and a lot of the information written here is just plain wrong. Let me try to summarize: you're setting the volume of an audio strip to 10 and getting an output that is clipped. Well, that's expected. I tried this, and whatever I do, it always sounds the same, whether I play back in Blender (SDL or OpenAL), mix down (wav or flac) or render to a video (Matroska, PCM). In that sense, I cannot reproduce the bug report.
Mar 29 2018
Your distinction between the three types of settings is pretty clear. Regarding the user preferences there is no discussion, they are stored system wide and not blend file wide. Playback will always have to happen within the bounds of the user settings. For example, if your user settings are stereo, you simply can't playback 7.1, it will be mixed down to stereo, as in @Christopher Anderssarian (Christopher_Anderssarian)'s laptop example.
Jan 31 2018
Hi, thanks for the report.
Hi, thanks for the report. This is a known problem: audaspace doesn't work with negative timings. The reason for this is that Blender itself wasn't really designed to work with audio; to overcome some of these problems, I had to make some design decisions that ended up not supporting negative timings. The alternative would have been not to support animation (of volume, pitch, etc.), which is definitely the worse option. We could at least document somewhere that negative timings don't work with audio?
Dec 8 2017
Sep 19 2017
Sep 8 2017
Hey @Sergey Sharybin (sergey), I just checked the build bots. While the 64-bit Linux builds still fail, I checked both the 32- and 64-bit configurations and they now both have WITH_CODEC_SNDFILE set to ON. So can we consider this resolved?
Sep 5 2017
Thanks @Sergey Sharybin (sergey) ! With libsndfile disabled I can reproduce the error with ffmpeg.
I just saw that the buildbots build without libsndfile, so the cause could/should be ffmpeg. Do you build with libsndfile @Carlo Andreacchio (candreacchio) ?
Ok, I can reproduce the bug with a buildbot build, but it's not a debug build, so it's hard to track down the error. @Sergey Sharybin (sergey), can you help? Which Linux is the buildbot running, and which ffmpeg/libsndfile versions? Are they self-built? Can I get them somehow?
Sep 1 2017
This is because audio volume changes are always faded, and the initial volume is 1.0. The bugfix sets this initial volume to 0.0 so that it's more of a fade-in. Meanwhile, you can try setting the mixdown "Accuracy" to values lower than 1024. This will make the export take longer (1 would take VERY long; I'd start with 128 and keep halving), but if the first few samples of your sound file are not too loud, this can help.
I can't reproduce this bug. Can you tell me which distribution and version you are on and how you got Blender? Official builds, distribution builds or self-built? I guess a faulty ffmpeg version causes this; probably not much we can do.
Aug 20 2017
Aug 19 2017
Aug 18 2017
Aug 17 2017
The latest changes are required for the numpy paths to work on Linux and Mac. All build bots now build this patch fine, so I guess it's ready to push. I will do that tomorrow morning.
Aug 16 2017
Since I got the OK from @LazyDodo (LazyDodo) on IRC, I'm ready to push the commit. I will do so on Friday morning (CEST) so that I'll be around for a few hours afterwards and I can handle any problems people might have after the push. That is, if there is nothing else coming here.
Aug 15 2017
I incorporated all the latest comments. As I understand it, Mac works now. So if everything is ok now, I'll push the patch as soon as I get the ok from @LazyDodo (LazyDodo).
Aug 13 2017
Fixed some more bugs, removed unnecessary files, and added cmake config files. Now everyone is hopefully happy?
@LazyDodo (LazyDodo) Can you check if it works this time, please? (I guess not; maybe you can find the errors, since I can't reproduce them.)