
VSE 2.0: Performance
Confirmed, Normal · Public · To Do

Assigned To
Authored By
Sergey Sharybin (sergey)
Jul 16 2020, 2:49 PM
"Love" token, awarded by mzamecki."Like" token, awarded by ashstar."Love" token, awarded by digim0nk."Like" token, awarded by Pipeliner."Love" token, awarded by mindinsomnia."Love" token, awarded by Andrea_Monzini."Love" token, awarded by gilberto_rodrigues."Love" token, awarded by christianclavet."Love" token, awarded by erickblender."Love" token, awarded by davidmcsween."Love" token, awarded by chumariesco."Love" token, awarded by neoncipher.


This task is to keep track of performance-related topics which need to be addressed in order to achieve a pleasant video editing experience. If any of the topics needs deeper consideration and discussion, it is to become a sub-task of this one.

Flat assorted list of performance and memory-usage related topics:

  • Seeking in video files is rather slow; it feels measurably (2x) slower than seeking in VLC with the same video file.
  • Proxy file generation is slower than realtime. Sometimes only the time code (TC) is needed, so, probably, the solution is to allow generating the TC without scaled-down video. Encoding of different resolutions should be possible in separate threads.
  • Color strips put every frame in the cache. Simple to reproduce: create a new 4K project, add a black color strip spanning the entire 250 frames, and play back. In theory, the final VSE frame cache should be able to re-use the final strip frame.
  • Image strips cache every frame. Similar to the color case: adding an image and stretching it to occupy multiple frames leads to much higher memory usage. This hurts projects like storyboards, where the edit consists of a handful of image strips and every image is stretched to occupy a frame range.
  • Image strips perform a read-from-disk on every frame. Again, this leads to worse performance for storyboard-like edits.
  • Moving image and cached video strips while they are under the playhead is very slow. In the 2.79 VSE the strip cache was re-used as much as possible. Image strips should not be doing re-reads at all, and video strips should reuse frames which are already in the cache.
  • Operate in preview resolution. Operating on and caching original 4K images just to produce a 320x240 px preview is very wasteful. Operations and the cache should use the preview resolution. This will lower memory usage for all types of projects, and will make the VSE faster when a lot of effects are used.
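The duplicate-frame points above (color and image strips caching one copy per timeline frame) suggest keying the cache on frame content rather than timeline position. A minimal sketch of that idea in Python, with a hypothetical FrameCache class that is in no way Blender's actual cache implementation:

```python
import hashlib

class FrameCache:
    """Content-keyed frame cache sketch: identical frames (e.g. every
    frame of a static color or image strip) share one stored entry
    instead of occupying one cache slot per timeline frame."""

    def __init__(self):
        self._frames = {}   # content key -> decoded frame data
        self._index = {}    # (strip_id, frame_no) -> content key

    def put(self, strip_id, frame_no, pixels: bytes):
        key = hashlib.sha1(pixels).hexdigest()
        self._frames.setdefault(key, pixels)      # store payload once
        self._index[(strip_id, frame_no)] = key   # map every frame to it

    def get(self, strip_id, frame_no):
        key = self._index.get((strip_id, frame_no))
        return None if key is None else self._frames[key]

    def unique_frames(self):
        return len(self._frames)

cache = FrameCache()
black = bytes(12)                   # stand-in for a decoded black frame
for f in range(250):                # the 250-frame color strip example
    cache.put("color_strip", f, black)
assert cache.unique_frames() == 1   # one stored copy, not 250
```

With this scheme the 250-frame black color strip costs one cached frame plus 250 small index entries, rather than 250 full 4K frames.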

For the caching-related topics, see T80278 as well.

Event Timeline

Sergey Sharybin (sergey) changed the subtype of this task from "Report" to "To Do".Jul 20 2020, 10:39 AM

Seek in video files is rather slow, feels measurably (2x) slower compared to seek in VLC with the same video file.
Proxy file generation is slower than realtime. Sometimes only time code (TC) is needed, so, probably, the solution is to allow generating TC without scaled-down video. Encoding of different resolution should be possible to do in separate threads.

If the codec of the proxies is changed from mjpeg to h264 (the avi container would still work), all sorts of hardware-accelerated options become available.

Adding this to the h264 command line: -g 1 -preset ultrafast -tune fastdecode -crf 0 makes the files behave like mjpeg (but really small).
-g 1: generate full images (like mjpeg).
-preset ultrafast: fast encoding.
-tune fastdecode: fast decoding.
-crf 0: lossless (should be an optional setting).
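Put together, the flags suggested above could form a proxy-encode command roughly like this; a sketch only, where the file names and the 25% scale are made-up illustrations:

```python
# Hypothetical proxy-encode command using the flags suggested above:
#   -g 1              every frame is a keyframe (intra-only, like mjpeg)
#   -preset ultrafast cheap encode
#   -tune fastdecode  cheap decode during playback/scrubbing
#   -crf 0            lossless (would be optional in a real UI)
def proxy_command(src, dst, scale=0.25):
    vf = f"scale=iw*{scale}:ih*{scale}"   # scale down for the proxy
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-g", "1",
        "-preset", "ultrafast",
        "-tune", "fastdecode",
        "-crf", "0",
        "-vf", vf,
        "-an",        # proxies don't need audio
        dst,
    ]

cmd = proxy_command("clip.mp4", "clip_proxy_25.avi")
```

The list could be handed to subprocess.run; building it as a list (not a shell string) keeps file names with spaces safe.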

Alternatively, DNxHR doesn't have the resolution restrictions of DNxHD and could be used for proxies, but compared to h264 it is an i-frame-heavy codec (like mjpeg) and will produce huge files (which could also double as intermediate files, with improved seeking). It comes in a variety of options including 10-bit, but I don't think avi will work as a container, so going this way would mean a new container format (which could also hold metadata).

For keeping opacity in proxies, png in a mov container could be used.

For optimal quality for color grading, EXR format could be considered for proxies, however since 10-bit is not enabled in the VSE, this may currently be overkill?

On the playback of proxies, a mismatch between proxy-file resolution and project resolution will cause a loss in fps.

And the additional scaling of proxies needed to get text (text strips) scaled correctly will also cost fps.

On hardware enhanced encoding and decoding:

There is a slight pause in playhead movement when seeking (arrow keys); this should be removed, if possible.

Seeking backwards is poor when Prefetch Frames is on:
(A fairly normal operation is to quickly seek backwards while playing forward, but the cached area is kept ahead of the playhead, so this operation will choke on uncached frames before the playhead.)
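One possible mitigation for the backward-seek choke described above is to reserve part of the prefetch budget for frames behind the playhead. A rough sketch with purely illustrative numbers, not Blender's actual prefetch policy:

```python
def prefetch_window(playhead, budget=64, back_fraction=0.25,
                    first=0, last=10_000):
    """Split the prefetch budget so some cached frames trail the
    playhead, making short backward seeks hit the cache.
    All numbers here are illustrative assumptions."""
    back = int(budget * back_fraction)       # frames kept behind
    ahead = budget - back                    # frames read ahead
    start = max(first, playhead - back)      # clamp to strip range
    end = min(last, playhead + ahead)
    return list(range(start, end))

frames = prefetch_window(playhead=100)
assert 90 in frames and 140 in frames   # covers both directions
```

A quick backward seek of up to `back` frames then lands on cached data instead of forcing a fresh decode.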

To improve seeking control, maybe an industry-standard seeking scheme could be considered? E.g. tapping forward (L key) increases seek speed, tapping backward (J key) decreases seek speed (and seeks in reverse), and K stops.
(These controls could also be used in modal move/extend operations.)
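The J/K/L shuttle idea above can be modeled as a small state machine; a sketch where the speed steps (doubling up to 8x) are assumptions, not an existing Blender mapping:

```python
# Sketch of J/K/L shuttle control: repeated L doubles forward speed,
# repeated J doubles reverse speed, K stops. Max speed is an assumption.
MAX_SPEED = 8

class Shuttle:
    def __init__(self):
        self.rate = 0   # signed playback rate; 0 = stopped

    def key(self, k):
        if k == "K":
            self.rate = 0
        elif k == "L":
            # start at 1x forward, or double the current forward speed
            self.rate = 1 if self.rate <= 0 else min(self.rate * 2, MAX_SPEED)
        elif k == "J":
            # start at 1x reverse, or double the current reverse speed
            self.rate = -1 if self.rate >= 0 else max(self.rate * 2, -MAX_SPEED)
        return self.rate

s = Shuttle()
assert [s.key(k) for k in "LLL"] == [1, 2, 4]   # forward shuttle ramps up
assert s.key("J") == -1                          # reverse resets to -1x
assert s.key("K") == 0                           # stop
```

The same rate value could drive both playback and a modal strip-move operation, as the comment suggests.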

On threads: libav seems to run multithreaded; it's the Blender side of things that isn't, which is why the various parallel-render add-ons render faster. So maybe it's better to deal with the image processing on the Blender side than to separate generation of the various proxy files into separate threads?

On the cluttered and incomprehensible Proxy UI:

Moving image and cached video strips while being under the playhead is very slow. In 2.79 VSE the strip cache was attempted to be re-used as much as possible.

The poor performance of Scene Strips (a regression) should also be mentioned here, as should the fact that proxy generation for Scene Strips is currently removed, so there is no workaround to improve their performance. If storyboards are drawn in Blender with Grease Pencil, improved Scene Strip performance would mean the boards wouldn't have to be rendered to file before being added to the Sequencer.

Effect strips also slow down playback; they don't utilize the GPU, and some of them are not even multi-threaded.

I wonder if it is part of this project to enable realtime playback of H264 files (full-HD without effects, at least), without latency and without needing proxies. I'm reading a lot of mentions of multithreaded proxy generation on the net, but not much focus on multithreaded direct playback of source files. Commercial non-linear editors have gotten better and better at this over the years; some can play back 200 Mbit full-HD h264 files without any latency, even on old machines with practically no GPU acceleration (HD4000, for example). I wonder how they manage it and where Blender's bottleneck is, especially considering that standard playback software like VLC can do it too.

Thanks for opening this thread! Constantly looking forward to its evolution, and keen to contribute something if I can.

Please refer to the parent task, which states the expected baseline for playback.

Hello, I hope I'm writing in the right section.
Could you please confirm that playback performance of OpenEXR in the VSE is relatively slow?
It's ~10-20 fps on my system for 1080p without the cache (Blender 2.90, Ryzen 1700 @ 3.7 GHz, 32 GB RAM, RX 580 8 GB, SSD).

As OpenEXR is the way we (should) render in Blender, it would be important not to forget OpenEXR playback optimization for VSE 2.0, so we can easily edit what we render.
And, as written by Peter, it would be very interesting to consider OpenEXR for the proxies too.
Example to reproduce, 1080p exported to OpenEXR (The Daily Dweebs intro by Blender Animation Studio):

Anyway thank you also for the work on the VSE 2.0 !

Playback of EXR is much more involved than playback of PNG or movie files. This comes from the complexity of the decoder, the bigger amount of data, and more involved color management.

There are ways to mitigate that; it is something we keep in mind.

Thank you for the answer, looking forward to the updates!
About file size and encode time, there is a very interesting codec comparison (by Robert Gützkow):

It seems that the file size and encoding time of OpenEXR (lossy DWAA and lossless ZIP compression, for example) are very good :)

As long as the main slowdowns are on the Blender side of things, tuning codecs and ffmpeg settings will not make much of a difference. I can't help thinking that there is no way around doing the profiling that LazyDodo started/tutorialized here:

In his example (exporting video):

Encoding a movie sounds and feels expensive; however, this profile tells us that of the 13.97 s spent saving, only 1.63 s is spent in ffmpeg, so the cost of encoding is *NOT* the most expensive process at hand here.
IMB_colormanagement_imbuf_for_write, however, is soaking up a solid 12.08 seconds; that's not super great.

He finds a way to optimize it:

Alright, we're down to 19.6 seconds, not as much as I'd hoped, but a 30% improvement in save time is still pretty good and worth this small profiling exercise.

But now the patch is abandoned, without any explanation…?

Imo, this needs to be done for all video-handling processes, including preview, import, export, playback, caching, proxies, effect strips, compositing etc., in order to locate the spots where real performance improvements can be made (and maybe, in that process, what adding 10+ bit handling would entail can be investigated).
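A lightweight way to start locating such hotspots, without assuming anything about Blender internals, is to time each pipeline stage explicitly; the stage names and sleeps below are illustrative stand-ins:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

# Minimal stage-timing helper, in the spirit of the profiling
# exercise quoted above. Stage names are illustrative only.
timings = defaultdict(float)

@contextmanager
def stage(name):
    """Accumulate wall-clock time spent inside a named stage."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        timings[name] += time.perf_counter() - t0

# Fake workload standing in for real decode / color-management costs.
with stage("decode"):
    time.sleep(0.01)
with stage("color_management"):
    time.sleep(0.02)

worst = max(timings, key=timings.get)   # -> "color_management"
```

Wrapping real preview, proxy, and export paths this way would give per-stage totals comparable to the 13.97 s vs 1.63 s breakdown quoted above, before reaching for a full profiler.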

And then, once those possible optimizations have been done, hopefully Blender is fast enough at processing video images that it makes sense to optimize the ffmpeg settings (hardware acceleration) and codec options.

I would certainly concur about the lack of handling for 10-bit movie files. As it stands, you must use the standalone ffmpeg command line or an external converter to create a frame sequence from these files; otherwise Blender downsamples (sometimes badly with an old ffmpeg) and you lose the benefit of shooting higher-quality media.
Not having access to external 10-bit video is odd when you consider that Blender has a compositor and material texture input that are used in a high-bit-depth environment (Blender renders).

Why is GPU support for strip previews and final rendering not here? Or at least CPU multi-threading? Imo, that is the first thing needed for performance in the VSE.
Or is it covered by one of the points above?