Sat, Apr 21
- You have to know what you want to do... If you can do the audio editing in Blender, do it there; if not, use some other program.
Fri, Apr 20
So this is what I'm getting from all this:
- The 32 channels of the Blender VSE are called channels, but they aren't the same as audio channels (i.e. mono = 1 channel, stereo = 2 channels).
If anyone can give me some advice on how to write the versioning code to automatically convert to the new effect type, that would be amazing.
Tue, Apr 17
I just downloaded your sample... it only contains 1391 video frames (checked by transcoding it with ffmpeg and looking at the frame count at the end).
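A quick sanity check on that number: with the 46 s 361 ms video duration quoted in the metadata later in this thread and an (assumed) constant 30 fps, the expected frame count matches what ffmpeg reports. A minimal sketch:

```python
# Sanity check: expected frame count from duration and frame rate.
# 46.361 s is the video length quoted in the thread's metadata;
# 30 fps is assumed from the Handbrake conversion discussed here.

def expected_frames(duration_s: float, fps: float) -> int:
    """Round duration * fps to the nearest whole frame."""
    return round(duration_s * fps)

print(expected_frames(46.361, 30))  # 1391, matching ffmpeg's count
```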
Okay, so I just took the test video I supplied and used Handbrake to convert it to a constant 30 fps. Now the codec info says exactly 30 fps.
Since the audio length in your reports is the same between 2.76b and later versions, and it's the video that has changed length, I would argue that 2.76b was getting the video length wrong somehow. It may even be a difference in ffmpeg / libavcodec rather than in blender itself, since the library versions also change between blender releases.
Stephen, this is all fine, but somehow Blender 2.76b was doing it correctly, without requiring me to use Handbrake or any other converter.
Yes, this in general is how all cameras and video recorders work, these days. Why? Gone are the days when you needed a bit of magnetic tape to move across a tape-head at a constant rate, or a constant 33 1/3 rotations per minute to faithfully reproduce video or sound.
All readily-available video and audio is compressed in some form, to either make it fit into a smaller distribution medium, or stream "faster" through phone lines, to save someone time, or money.
Before you load a file for editing in Blender, consider using a program (such as Handbrake) that can convert variable-frame-rate footage to a constant-frame-rate format. You might also try FFmpeg, which both Blender and Handbrake use under the hood for their video conversions. Handbrake is a lot easier to work with than FFmpeg, though.
Other editing suites do similar conversions, without telling you about it, and still take just as long to process.
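The Handbrake/FFmpeg conversion suggested above can be sketched as a small wrapper. The filenames, the 30 fps target, and the flag choices are illustrative assumptions, not commands taken from this thread; run it only if ffmpeg is on your PATH:

```python
# Sketch of a variable-to-constant frame rate conversion via FFmpeg.
# "input.mp4"/"output_cfr.mp4" and fps=30 are placeholders.
import subprocess

def to_constant_fps(src: str, dst: str, fps: int = 30) -> list:
    cmd = [
        "ffmpeg", "-i", src,
        "-vsync", "cfr",   # duplicate/drop frames to hit a constant rate
        "-r", str(fps),    # target output frame rate
        "-c:a", "copy",    # leave the audio stream untouched
        dst,
    ]
    return cmd

# To actually run it (requires ffmpeg installed):
# subprocess.run(to_constant_fps("input.mp4", "output_cfr.mp4"), check=True)
```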
Mon, Apr 16
I am not sure how this is possible, unless this really is how all cameras and videos work in general. And somehow Blender 2.76b does everything correctly, whereas the "correct" frame rate of later versions results in audio and video being out of sync.
Having a look at the metadata; the audio is longer than the video:
Video: Line 23: 46 s 361 ms
Audio: Line 50: 46 s 784 ms
Blender seems to be setting the correct frame rate; there's just a bit of extra audio.
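Worked out from those two metadata figures, the mismatch is small but noticeable by the end of the clip (the 30 fps rate is assumed from earlier in the thread):

```python
# How much extra audio there is, per the metadata quoted above.
video_s = 46.361   # video length: 46 s 361 ms
audio_s = 46.784   # audio length: 46 s 784 ms
fps = 30           # assumed constant frame rate

extra_s = audio_s - video_s
print(f"{extra_s * 1000:.0f} ms of extra audio, "
      f"about {extra_s * fps:.1f} frames at {fps} fps")
```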
Sun, Apr 15
If your source audio file isn't using the full dynamic range of the amplitude, it is basically wasting quality. Most audio files have a dynamic range of 16 bits. If you have to double the amplitude (volume in Blender = 2), you are already wasting one of these bits. At 10, you are already wasting more than 3 bits. So for any recording engineer the goal is to utilize the whole dynamic range, so as not to lose any quality during recording. When you have a file that doesn't use the full dynamic range, increasing the volume in Audacity instead of Blender doesn't make any difference in terms of outcome, so you could just do it in Blender directly. The problem, of course, is that you don't know by how much, so using Audacity makes sense after all. As soon as you have normalized the volume in Audacity, you are using the full dynamic range; going higher in Blender then leads to clipping.
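The bit arithmetic in that comment checks out: each doubling of the gain costs one bit of headroom in a 16-bit file whose source never used that headroom. A minimal sketch:

```python
import math

# Bits of dynamic range "wasted" when a gain must be applied because
# the source never used that headroom: log2 of the volume factor.

def bits_wasted(volume: float) -> float:
    """Bits of a fixed dynamic range given up to the applied gain."""
    return math.log2(volume)

print(bits_wasted(2))   # 1.0 bit, as stated
print(bits_wasted(10))  # ~3.32 bits, i.e. "more than 3 bits"
```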
Fri, Apr 13
I don't understand why you said
Wed, Apr 11
@Justin Moore (DrChat) I too am anxiously awaiting this feature. How's development coming along? Any chance we'll see this feature in 2.8?
Tue, Apr 10
Ok, then I'll close this. There is no difference in the processing of the strips between mixdown and rendering. Not even compared to playback; the only difference there is that the user settings for channels and sample rate are used.
Ok, changing the number of channels is another story: Blender supports surround sound (= more than two channels), in contrast to Audacity.
Thanks @Joerg Mueller (nexyon) for the clarification.
Mon, Apr 9
Ok, there is a lot going on here and a lot of the information written here is just plainly wrong. Let me try to summarize: you're setting the volume of an audio strip to 10 and get an output that is clipped. Well, that's expected. I tried this and whatever I do, it always sounds the same - whether I play back in Blender (SDL or OpenAL), mix down (wav or flac) or render to a video (matroska, PCM). In that sense, I cannot reproduce the bug report.
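To illustrate why volume = 10 clips regardless of the output path, here is a toy model assuming float samples normalized to [-1.0, 1.0] (that normalization convention is an assumption for the sketch, not a statement about Blender's internals):

```python
# Toy model of gain followed by hard clipping, assuming samples
# normalized to the range [-1.0, 1.0].

def apply_volume(samples, volume):
    """Scale samples and hard-clip anything outside [-1, 1]."""
    return [max(-1.0, min(1.0, s * volume)) for s in samples]

quiet = [0.05, -0.2, 0.6]
print(apply_volume(quiet, 10))  # the 0.6 sample becomes 6.0 and clips to 1.0
```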
Sat, Apr 7
Fri, Apr 6
Thu, Apr 5
Wed, Apr 4
Can confirm this, too. Maybe this is for @Joshua Leung (aligorith) ? (feel free to throw back at me and I can see what I can do...)
Problem appears in 2.79b
I like your idea, Cerbyo (Kite).
Starting with the best sound possible and adjusting volumes downward (by ear, or by numbers) seems reasonable, if you trust your hearing. But I have been working with Blender for many years, and would caution you to stay vigilant with visual cues (from the Audacity graphical images) as well. Projects (in Blender) that have many small clips moved around, plus key-frame adjustments to volume levels, can be taxing on some computer processors.
Right. As a workaround for 'all' of the mentioned oddities, limitations, and differences, and in the interest of mimicking the process of programs like Audacity, would this work?:
Tue, Apr 3
Creating a file from scratch in 2.79b produces the reported issue for me (on Windows).
The provided .blend file contains the issue, but I did not manage to reproduce it from scratch with 2.79b or master under Ubuntu 16.04.
Just noting that it works fine here on Linux with my headphones.
Mon, Apr 2
From my quick testing, regular EXR works but multilayer EXR does not. I will add a note in the manual.
@Joerg Mueller (nexyon)
Your humble wisdom?
Sun, Apr 1
Maybe I shouldn't triage tasks without them being assigned (and therefore forgotten about or overlooked).
Sat, Mar 31
I chimed in just to let you know that I didn't think it was a good idea to use a volume level of '10' and expect that there wouldn't be a problem with an audio strip in the VSE.
After having done some testing:
there is a peculiarity to the way Blender downmixes (and upmixes, the opposite operation).
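For reference, the textbook versions of those two operations look like this; whether Blender's mixer does exactly this averaging and duplication is an assumption, not something verified against its source:

```python
# Common-convention channel conversions (an assumption about Blender):
# stereo -> mono averages the channels; mono -> stereo duplicates them.

def downmix_stereo(left, right):
    """Average left and right into a single mono channel."""
    return [(l + r) / 2 for l, r in zip(left, right)]

def upmix_mono(mono):
    """Duplicate a mono channel into identical left/right channels."""
    return list(mono), list(mono)

left, right = [1.0, 0.0], [0.0, 1.0]
print(downmix_stereo(left, right))  # [0.5, 0.5]
```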
Fri, Mar 30
Responding to Cerbyo (Kite):
I don't fully understand the process or how the community works yet; forgive me on that front. Question: is this case still open, do I need to give it more time, or is it a case of people not being able to recreate what I said, so it's dismissed? I feel like I'm making a big deal out of something that has been that way for a very long time and is deemed a non-issue. The more I learn (and I'm learning a lot here), the more I come to the conclusion that this is a 'purposely' built limitation of the VSE, but I'm the only one who thinks it's a clear, decisive limitation and a huge deal!
Thu, Mar 29
More info: for testing, changing the color management exposure should be enough to trigger the bug; however, you also need to change the frame to see the issue.