Fri, May 25
Updates to fix compilation
Thu, May 17
Wed, May 9
Here's a simplified .blend that still crashes. It has a single image (used as a 360-frame image strip) and 2 sound files.
I don't know how to bundle the image into the .blend file, but the 2 sound files seem to be there.
I'll add the image file here, and you can re-add the image strip.
Mon, May 7
Can't reproduce, renders fine here.
Sun, May 6
Mon, Apr 30
ASAN gives me
Hm, something weird (at least in my experience) is going on; memory corruption?
I was debugging in Qt Creator, and for the above backtrace [previous post, where it crashes in BKE_nlastrip_find_active] the NlaTrack pointer was 0x2?
Confirmed on first sight (took me a while to reproduce this, though); will have a closer look later (or get help on board if I can't find a fix).
Can confirm crash here.
Fri, Apr 27
The value of the anim->duration variable is calculated in the startffmpeg function (anim_movie.c):
anim->duration = (int)(pFormatCtx->duration * av_q2d(frame_rate) / AV_TIME_BASE + 0.5f);
The problem seems to be in the IMB_anim_absolute function (anim_movie.c).
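To make the rounding behaviour of that line easier to see, here is a self-contained sketch of the same calculation. The AVRational struct and av_q2d() are simplified stand-ins for the real FFmpeg definitions (AV_TIME_BASE is 1000000 in FFmpeg, i.e. the container duration is in microseconds); only the arithmetic mirrors the quoted line.

```c
#include <stdint.h>

#define AV_TIME_BASE 1000000  /* FFmpeg: container duration is in microseconds */

/* Simplified stand-in for FFmpeg's AVRational / av_q2d(). */
typedef struct { int num, den; } AVRational;
static double av_q2d(AVRational a) { return (double)a.num / (double)a.den; }

/* Mirrors the line from startffmpeg():
 *   anim->duration = (int)(pFormatCtx->duration * av_q2d(frame_rate)
 *                          / AV_TIME_BASE + 0.5f);
 * i.e. duration-in-seconds times fps, rounded to the nearest frame. */
static int duration_in_frames(int64_t container_duration_us, AVRational frame_rate)
{
    return (int)(container_duration_us * av_q2d(frame_rate) / AV_TIME_BASE + 0.5f);
}
```

For the sample file discussed above (video stream of 46 s 361 ms at 30 fps) this gives (int)(46361000 * 30 / 1000000 + 0.5) = 1391 frames, which matches the 1391-frame count reported from the ffmpeg transcode.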
Apr 26 2018
Can confirm on first sight.
Apr 25 2018
Apr 24 2018
I attempted to write some versioning code, but I'm not sure it works because I can't figure out how to write the for loop.
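For illustration, here is the general shape such a versioning loop usually takes: walk every scene, then every strip in that scene, and rewrite the old effect type in place. All the structs, field names, and type constants below are simplified hypothetical stand-ins (Blender's real DNA structs chain through id.next and nest the strip list inside Editing), so treat this only as a sketch of the loop structure.

```c
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the DNA structs. */
typedef struct Sequence {
    struct Sequence *next;
    int type;                /* strip type, e.g. an effect type constant */
} Sequence;

typedef struct Scene {
    struct Scene *next;      /* Blender chains scenes via id.next */
    Sequence *seqbase;       /* first strip; really nested in Editing */
} Scene;

enum { SEQ_TYPE_OLD_EFFECT = 8, SEQ_TYPE_NEW_EFFECT = 9 };  /* made-up values */

/* Versioning pass: convert every old-style effect strip to the new type.
 * Returns how many strips were converted. */
static int convert_old_effects(Scene *first_scene)
{
    int converted = 0;
    for (Scene *scene = first_scene; scene; scene = scene->next) {
        for (Sequence *seq = scene->seqbase; seq; seq = seq->next) {
            if (seq->type == SEQ_TYPE_OLD_EFFECT) {
                seq->type = SEQ_TYPE_NEW_EFFECT;
                converted++;
            }
        }
    }
    return converted;
}
```

In the real code the outer loop would run inside the appropriate do_versions block, guarded by a file-version check so old files are converted exactly once.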
Apr 21 2018
- You have to know what you want to do... If you can do the audio editing in Blender, do it there; if not, use some other program.
Apr 20 2018
So this is what I'm getting from all this:
- The 32 channels of the Blender VSE are called channels, but they aren't the same as audio channels (where mono is 1 channel and stereo is 2).
If anyone can give me some advice on how to do the versioning code to automatically convert to the new effect type that would be amazing.
Apr 17 2018
I just downloaded your sample... it only contains 1391 video frames (checked by transcoding it with ffmpeg and looking at the frame count at the end).
Okay, so I just took the test video which I supplied, used Handbrake to convert it to 30 constant fps. Now Codec info says exactly 30 fps.
Since the audio length in your reports is the same between 2.76b and later versions, and it's the video that has changed length, I would argue that 2.76b was getting the video length wrong somehow. It may even be a difference in ffmpeg / libavcodec rather than in blender itself, since the library versions also change between blender releases.
Stephen, this is all fine, but somehow Blender 2.76b was doing it correctly, without requiring me to use Handbrake or any other converter.
Yes, this in general is how all cameras and video recorders work these days. Why? Gone are the days when you needed a bit of magnetic tape to move across a tape head at a constant rate, or a constant 33 1/3 rotations per minute, to faithfully reproduce video or sound.
All readily-available video and audio is compressed in some form, to either make it fit into a smaller distribution medium, or stream "faster" through phone lines, to save someone time, or money.
Before you load a file for editing in Blender, consider using a program (such as Handbrake) that can convert between variable-rate and constant-rate frames-per-second formats. You might also try FFmpeg, which is used by both Blender and Handbrake to do their video conversions for exporting to different formats. Handbrake is a lot easier to work with than FFmpeg, though.
Other editing suites do similar conversions, without telling you about it, and still take just as long to process.
Apr 16 2018
I am not sure how this is possible, unless this is really how all cameras and videos in general work. And somehow Blender 2.76b does everything correctly, whereas the "correct" frame rate of later versions results in audio and video being out of sync.
Having a look at the metadata, the audio is longer than the video:
Video: Line 23: 46 s 361 ms
Audio: Line 50: 46 s 784 ms
Blender seems to be setting the correct frame rate; there's just a bit of extra audio.
Apr 15 2018
If your source audio file isn't using the full dynamic range of the amplitude, it is basically wasting quality. Most audio files have a dynamic range of 16 bits. If you have to double the amplitude (volume in Blender = 2), you are already wasting one of those bits; at 10, you are wasting more than 3 bits. So for any recording engineer the goal is to utilize the whole dynamic range, to not lose any quality during recording.

When you have a file that doesn't use the full dynamic range, increasing the volume in Audacity instead of Blender doesn't make any difference to the outcome, so you could just do it in Blender directly. The problem, of course, is that you don't know by how much, so using Audacity makes sense after all. And once you have normalized the volume in Audacity, you are using the full dynamic range; going higher in Blender then leads to clipping.
Apr 13 2018
I don't understand why you said
Apr 11 2018
@Justin Moore (DrChat) I too am anxiously awaiting this feature. How's development coming along? Any chance we'll see this feature in 2.8?
Apr 10 2018
Ok, then I'll close this. There is no difference in the processing of the strips between mixdown and rendering. It's not even different from playback; the only difference there is that the user settings are used for channels and sample rate.
Ok, changing the number of channels is another story. Blender supports surround sound (i.e. more than two channels), in contrast to Audacity.
Thanks @Joerg Mueller (nexyon) for the clarification.