
SEQUENCER: Frame Rate Base Rounding Leads To Vid / Audio Strips Out Of Sync by 1 Frame
Closed, Invalid · Public

Description

Really pedantic issue, but the frame rate base value (i.e. the /1.000 divisor) is rounding the wrong way. For example, if I add a video which is 29.97 fps and enter a frame rate base of 1.001, the video and audio strips are out by 1 frame. However, if I change the base to 1.0009, the video and audio strips are correct and the same length.

Likewise, if I add a video which is 30.01 fps with a base of 1.001, the strips are out by one frame; changing it to 1.000 gives me the correct video/audio strip lengths.

Sorry, I know it is really pedantic to raise this as a bug.

wget http://s3.amazonaws.com/videos.vimeo.com/155/000/15500064.mp4

Add as 30 fps at base 1.001 and see 5808 frames (audio) & 5809 frames (video).

Add as 30 fps at base 1.0009 and see 5809 frames (audio) & 5809 frames (video).
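
For illustration, a minimal arithmetic sketch (plain Python, not Blender's actual code) of how small a difference the two bases make in effective frame rate; the clip length of ~193.8 seconds is only an approximation derived from the frame counts above:

fps = 30.0
clip_seconds = 193.8  # rough clip length, approximated from ~5809 frames at ~29.97 fps

for base in (1.001, 1.0009):
    effective_fps = fps / base
    frames = clip_seconds * effective_fps
    print("base %.4f -> %.6f fps, %.3f frames" % (base, effective_fps, frames))

# The two bases differ by only ~0.003 fps, but over ~194 seconds that is
# about half a frame -- enough to tip a floor/round onto a neighbouring count.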

Ubuntu Linux 64bit Rev. 26876

Details

Type
Bug

Event Timeline

Confirming here - Linux amd64 (Ubuntu 9.10)

Tested against 60 fps NTSC (60/1.001) and the rounding leaves the strips one frame off.

The frame rate rounding is not the issue here, and all of the related calculations are done correctly. The "problem" is that in the video file you provided, the video and audio streams have slightly different lengths: the audio track has a length of 193.82697 seconds, while the video stream's length is 193.86034 seconds (5810 frames / 29.970029 fps). Using the frame rate base of 1.001 is absolutely correct here, but the difference in stream lengths results in the one-frame offset.
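
To make that concrete, a small sketch (plain Python, just reproducing the arithmetic above, not Blender code):

fps = 30.0 / 1.001           # effective NTSC rate, ~29.970029 fps
video_seconds = 193.86034    # video stream length from the file
audio_seconds = 193.82697    # audio stream length from the file

print(video_seconds * fps)   # ~5810.0 frames
print(audio_seconds * fps)   # ~5809.0 frames
print(video_seconds - audio_seconds)  # ~0.033 s, i.e. one frame at ~29.97 fps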

I don't know how common it is for video recording software to leave the streams at slightly different lengths, but luckily this will not lead to syncing issues as long as the correct frame rate base is used. So when you see that the video and audio strips have different lengths in the sequencer, it only means that the actual data streams are of different lengths.

Closing the report.

Janne Karhu (jhk) closed this task as Invalid. Nov 16 2010, 7:33 PM

Hi,

I am actually having the identical problem, and I hope this bug can be reopened (Blender 2.77, Ubuntu 16.04).

Setup: I have a video shot with a GoPro at 59.94 fps. I want to use it in a 29.97 fps environment inside Blender, so I use ffmpeg to change the fps value.

Problem: When I import the 29.97 fps video, the audio and video are off by 1 frame. Specifically, the audio strip is one frame longer than the video strip. I would like the audio and video to have the same, identical length.

What have I done so far: Now comes the funny part. When I import the 29.97 fps video, the audio is one frame longer, as described above. However, the actual audio information is in fact a little shorter than the video. So I split up the audio and video, encoded the video separately and then joined the two parts together. That did not change a thing. However, when I import just the audio file, the resulting strip is actually 1 frame shorter than the video! Both audio strips (the one from the video and the standalone one) line up perfectly with regard to their waveforms, so the only difference is in the padding at the end of the file.

The audio part from the original 59.94 fps video would line up perfectly with the encoded 29.97 fps video, but it would be too complicated to always import two videos and then combine audio and video...

So here I have 3 audio strips with identical audio information but three different lengths: the strip from the standalone audio file is one frame shorter than the video, the strip from the 29.97 fps video is 1 frame longer, and the audio strip from the 59.94 fps video is as long as the video strip.

In my opinion it is unclear how Blender determines the length of an audio strip. It should not matter that the audio and video do not share exactly the same length in milliseconds, as that will almost never be the case due to the different frame durations of audio and video.
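
As a purely hypothetical illustration (this is not Blender's actual strip-length logic), different rounding conventions applied to the same audio duration can already land one frame apart, which would be enough to explain the ±1 frame differences described above:

import math

fps = 30000 / 1001            # ~29.97 fps
audio_seconds = 193.84        # hypothetical duration close to a frame boundary

frames = audio_seconds * fps  # ~5809.39
print(math.floor(frames), round(frames), math.ceil(frames))  # 5809 5809 5810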

Unless there is a method to prepare files in such a way that Blender will reliably detect the audio strip length, this is indeed a bug to consider.

Thanks for your help.

ffmpeg, 59.94 to 29.97:

ffmpeg -i GOPR7304-59.94fps.MP4 -y -probesize 5000000 -s 640x368 -c:v prores -profile:v 0 -qscale:v 13 -vendor ap10 -pix_fmt yuv422p10le -acodec copy -r 30000/1001 GOPR7304-29.97.mov

ffmpeg, splitting audio and video:

ffmpeg -i GOPR7304-59.94fps.MP4 -map 0:0 -vcodec copy 04.m4v -map 0:1 -acodec copy 04.m4a
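
To double-check what the containers actually report before importing, something like the following could be used (a sketch assuming ffprobe from the same FFmpeg install is on the path; the file names are the ones from the commands above):

import subprocess

# Print the codec type and duration of every stream in each file (requires ffprobe).
for path in ("GOPR7304-59.94fps.MP4", "GOPR7304-29.97.mov", "04.m4v", "04.m4a"):
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "stream=codec_type,duration",
         "-of", "csv=p=0", path],
        capture_output=True, text=True,
    )
    print(path, result.stdout.strip())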

Thanks!

This isn't related to the original bug and is likely due to a combination of GOP structure and streams with unequal encoded data lengths.