FFmpeg frame seek bug fix
re: Issue T72347
Blender was using two slightly different strategies to load a video frame, depending on whether we were advancing to nearby frames or seeking from far away.
Fix: Make scanning work the same as forward stepping.
Investigate the problem using a breakpoint:
Check out 39e37c9db56 "Consistency check for frame scanned pts" in this diff.
Set a breakpoint in source/blender/imbuf/intern/anim_movie.c at the line containing av_log(anim->pFormatCtx, AV_LOG_ERROR, "PTS for frame differs from previous decode"), which should fire on frame 60 in the last step above. If it does not, check Preferences > System > Sequencer cache limit and make sure it's still 1!!
I expect to do more testing, specifically on whether this helps (or hurts) audio sync in video editing, and on compatibility with previously saved Blender projects (including those with video proxies). Any help here would be appreciated (recommend users to test, supply test files, etc.).
When decoding compressed video frames we have very few "knowns" unless we want to get into working with specific compression codecs individually.
The packets before decoding are not necessarily in the same order as the decoded frames coming out.
The presentation timestamp (PTS) on the encoded packets is not necessarily accurate, or even present. We can only trust the PTS from the decoded frame (and remember it does not correlate with the order of the compressed packets going in).
So all we can do reliably is search forward from an I-frame to find a desired PTS. This was already being done.
Some timeframes might not have an image associated with them, so the only way to know whether we have the associated image is to decode until we have a PTS for the next timeframe.
Without pre-indexing we cannot think in terms of frames from the decoder; we have to use timestamps only. We should be working in the time domain (as opposed to the frame domain) in order to minimize audio-sync issues.
The timeframe we are searching may NOT have its own image (due to compression or slow encoding), so we need to keep showing the previous one (which may be more than one video frame ago).
The timeframe we are searching may encompass multiple images. We will only show one... and let an indexing method deal with this in a smarter way if the user deems it necessary (it might be appropriate to show an extra image in a preceding missed frame).
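To make "work in the time domain" concrete, here is a minimal sketch of mapping a Blender frame number to a stream timestamp with integer math. This is a hypothetical helper, not code from the patch; a real implementation would use FFmpeg's av_rescale() to guard against intermediate overflow.

```c
#include <stdint.h>

/* Hypothetical helper: map a 0-based frame number to a stream PTS.
 * frame_rate = fps_num / fps_den, stream time base = tb_num / tb_den seconds.
 * Integer math in the style of av_rescale() avoids floating-point drift. */
static int64_t frame_to_pts(int64_t frame,
                            int64_t fps_num, int64_t fps_den,
                            int64_t tb_num, int64_t tb_den)
{
    /* seconds = frame * fps_den / fps_num;  pts = seconds * tb_den / tb_num */
    return (frame * fps_den * tb_den) / (fps_num * tb_num);
}
```

For example, at 25 fps with the common MPEG time base of 1/90000, frame 60 maps to PTS 216000.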
As currently done, we need to search forward from the nearest I-frame to find the frame we are looking for.
As currently done, we decode as we search forward in order to get an accurate PTS.
Don't worry about finding I-frames - FFmpeg has internal lookups to do this efficiently (or so it seems).
If we search forward and stop when we have a PTS PAST our target timeframe, then the frame before it is the proper frame. This handles timing gaps etc.
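The stopping rule above can be sketched in isolation. This is a simplified stand-in for the scan logic, not the patch's actual ffmpeg_decode_video_frame_scan(): it walks PTS values already in decode-output order, and the comparison is inclusive so an exact match hits its own frame.

```c
#include <stdint.h>
#include <stddef.h>

/* Walk decoded frames in output order and stop once a frame's PTS is
 * PAST the target; the frame before it is the one to show.  Timing gaps
 * fall out naturally: a timeframe with no image of its own resolves to
 * the most recent earlier frame.  Returns the index of the frame to
 * display, or -1 if no decoded frame is at or before the target. */
static ptrdiff_t scan_for_pts(const int64_t *decoded_pts, size_t n,
                              int64_t pts_to_search)
{
    ptrdiff_t found = -1;
    for (size_t i = 0; i < n; i++) {
        if (decoded_pts[i] > pts_to_search) {
            break;              /* went past the target: previous frame wins */
        }
        found = (ptrdiff_t)i;   /* <= target, keep as best-so-far */
    }
    return found;
}
```

With frames at PTS {0, 3600, 7200, 14400}, a search for 10800 (a gap) returns the frame at 7200, and a search for exactly 7200 returns that same frame (inclusive match).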
Compatibility with current Proxy timecode files:
The frame finding treats pts_to_search as an inclusive upper bound, so exact values match the current frame (and thus we can use PTS values from the existing indexing methods - and should load old indexes properly). Consider that this may be undesirable, as proxies may put frame numbers ahead of timing. Needs testing.
Code changes ( all within source/blender/imbuf/intern/anim_movie.c and IMB_movie.h ):
Main goal: we needed to look at the PTS for the next frame during scanning in order to find the wanted PTS. Before this fix we were only doing that when stepping forward a frame at a time (and in some cases assuming there were no missing frames).
- Change 1) Add a way to verify we were looking for the same PTS as the last time this frame was shown: use arrayFrame2PTS to keep track of former search PTS results.
- Change 2) Search for PTS based on timecode calculation only.
- Change 3) "Double buffer" ffmpeg_decode_video_frame() so we can look ahead at the next PTS within ffmpeg_decode_video_frame() in all situations, instead of only in the forward step searches occurring in ffmpeg_fetchibuf().
- Change 4) Use ffmpeg_decode_video_frame_scan() in the aforementioned forward step search to catch any skipped frames.
- Change 5) Empty the double buffer at EOF (this is now a bit more complicated: there are now anim->pCurrFrameComplete and anim->pNextFrameComplete).
- Change 6) Reinitialize arrayFrame2PTS (and honour numframes from proxied indexing) if the user changes the indexing method (but leave any tracking already done, for better or worse. Handle this better??).
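Changes 3 and 5 together amount to a small double buffer: the decoder always holds the current frame AND the next one, so the next frame's PTS is visible while scanning. A toy model of just the state involved (the names DoubleBuf, push_frame, and flush_eof are illustrative, not the patch's API):

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy model of the decode double buffer: holding both the current and the
 * next decoded frame makes the next PTS available during scanning. */
typedef struct DoubleBuf {
    int64_t curr_pts, next_pts;
    bool curr_complete, next_complete; /* cf. pCurrFrameComplete / pNextFrameComplete */
} DoubleBuf;

/* Feed one freshly decoded frame; the old "next" becomes "current". */
static void push_frame(DoubleBuf *db, int64_t pts)
{
    db->curr_pts = db->next_pts;
    db->curr_complete = db->next_complete;
    db->next_pts = pts;
    db->next_complete = true;
}

/* At EOF no further frame arrives, so drain "next" into "current". */
static void flush_eof(DoubleBuf *db)
{
    db->curr_pts = db->next_pts;
    db->curr_complete = db->next_complete;
    db->next_complete = false;
}
```

The EOF flush is why Change 5 exists: without it, the last decoded frame would stay stuck in the "next" slot and never be shown.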
Possible follow-up work:
A) Use newer FFmpeg functions where we are using deprecated ones.
B) Speed up motion matching in reverse by postprocessing and buffering the forward scan (from the I-frame to the current frame).
C) Out of sync motion trackers
We need to re-align motion tracking data (or warn the user and delete some/all of it?) if the timecode index results in a different frame mapping.
Proxies built with different timecode types are all the same length in frames (but this should not necessarily be true if timing is to stay in sync with audio). Gap-less indexes will have fewer frames if the recording device was unable to encode at a consistent frame rate, or if multiple frames were compressed into one. The new code accommodates this to a point: it reinitializes arrayFrame2PTS, BUT the motion tracking already performed will not be touched and so may be out of sync.
No warning is given to the user. Documentation should state that indexing should not be changed after tracking, until changes are made to accommodate this (i.e. link the tracking to the frame's PTS??).
D) Consider having Blender work with variable frame rates, i.e. using a (PTS) timestamp within a Blender frame for e.g. physics-related calculations. Worth the trouble?? Only useful for video editing? How would this affect time-modifying scripts etc.?
Other benefits from this bug fix:
Fewer or less severe audio-sync issues?
I originally started coding a fast container-DTS-based index to map video frames to Blender frames (focused on motion matching: ignore timing until later), but decided the way forward was to use the timestamps instead of optimizing on frames. Utilizing every frame independent of timing can still be done with a proxy index, although stretching that back out so the timing is right would still require more changes. Blender would need to handle variable frame rates to make this really worthwhile.
I commented out a line in indexer.c to allow building a timecode proxy without image proxies during testing. I've left the change in for now, but it's not necessary as part of this bug fix. Should this change be left in?