MEMORY LEAK SEGFAULT: Rendering a sequence always segfaults Blender #23616
Reference: blender/blender#23616
%%%Summary: Given a large sequence, Blender consistently segfaults. There are no models in the scene nor any nodes in the compositor. /var/log/syslog shows a KILL being sent to Blender for memory consumption.
When using GDB to reproduce the bug, the following information reveals the source of the leak:
(gdb) p MEM_printmemlist_stats()
AFTER 100 FRAMES:
total memory len: 169.209 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
101 ( 127.828 1296.000) scaledowny
25 ( 24.460 1001.880) imb_addrectImBuf
AFTER 500 FRAMES:
total memory len: 1585.030 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
729 ( 922.641 1296.000) scaledowny
523 ( 648.097 1268.931) imb_addrectImBuf
AFTER 1000 FRAMES:
total memory len: 5106.893 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
2491 (3152.672 1296.000) scaledowny
1539 (1933.972 1286.801) imb_addrectImBuf
AFTER 1700 FRAMES:
total memory len: 6261.658 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
2790 (3531.094 1296.000) scaledowny
2139 (2699.991 1292.562) imb_addrectImBuf
These two allocations in tandem drag Blender down until it is killed by the system. In a current test case, for example, the two gobble up roughly 10 GB of data in approximately 1900 full HD frames.
Many different output formats have been tested including MPEG variants and standard still frames such as JPEGs.
Version: SVN 31642
Platform: Linux amd64 (Ubuntu 10.04)%%%
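From any two of the snapshots above, the average leak rate per rendered frame can be computed. The helper below is an editorial sketch for quantifying the report, not Blender code; the numbers are copied from the `MEM_printmemlist_stats()` output printed above:

```python
# Estimate leak growth per rendered frame from two memory snapshots.
# Totals are the "total memory len" values reported by
# MEM_printmemlist_stats() at the given frame counts.
def growth_per_frame(frame_a, total_a_mb, frame_b, total_b_mb):
    """Average MB leaked per frame between two snapshots."""
    return (total_b_mb - total_a_mb) / (frame_b - frame_a)

# Between frames 100 and 500: (1585.030 - 169.209) / 400
rate = growth_per_frame(100, 169.209, 500, 1585.030)
print(f"{rate:.2f} MB/frame")  # about 3.54 MB leaked per frame
```

Note that the rate is not constant: between frames 500 and 1000 it roughly doubles, which is consistent with the system eventually killing the process.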
Changed status to: 'Open'
#38360 was marked as duplicate of this issue
%%%With the % scale of the Scene render size turned off, one of the leaks is reduced but not quite eradicated:
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
699 (5442.284 7972.674) imb_addrectImBuf
73 ( 577.441 8100.000) scaleupy
There are at least two significant leaks at work - one in the Scene render size scaler and one that appears to involve imb_addrectImBuf.%%%
%%%Thanks to Uncle_Entity and some GDB work, it appears to be located in the Sequencer's Prefetch Frames setting.
Multiple calls to seq_stripelem_cache_put()
The scaledowny reference count appears to be related to the SpeedControl Sequencer effect. Forking the bug report. sigh.%%%
%%%Only occurs with Sequencer Caching enabled in the preferences. The only fix is to delete the startup.blend file.%%%
%%%Hi Troy,
I have difficulties reproducing that bug. Could you please check the following things:
Thanks in advance!
Cheers,
Peter
%%%There's still no good information here to fix the bug: no .blend file or steps to redo. Closing the report; if more information is provided, I can reopen it.%%%
%%%I can confirm this bug with Blender 2.61 too.
I have some H264 1080p sequences, loaded in the VSE and split (with K) into small, 20-45 frame long sequences.
When I play the video in the VSE or try to render it, at every sequence cut, the blender process grows by 10-20MB.
With my 2GB system, I cannot play or render this video completely.
I'll try to send you a .blend using a publicly available video (or should I get a special debug-enabled Blender release?)%%%
%%%How do you disable "Sequencer Caching" in 2.61? (I would like to try Troy's workaround.)%%%
%%%Mapalloc returns null, fallback to regular malloc: len=8355840 in imb_addrectImBuf, total 1201253376
Calloc returns null: len=8355840 in imb_addrectImBuf, total 1288389876
Program received signal SIGSEGV, Segmentation fault.
0x08d8dd8e in IMB_anim_absolute ()
(gdb) bt
(gdb) p MEM_printmemlist_stats()
total memory len: 1228.702 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
241 (1145.604 4867.631) imb_addrectImBuf
73 ( 64.160 900.000) scalefastimbuf
115 ( 10.570 94.119) anim_index_entries
33 ( 1.550 48.092) Chunk buffer
1464 ( 1.076 0.752) BLI Mempool Chunk Data
%%%
%%%* we are still missing a blend file
Cheers,
Peter%%%
%%%* Yes I'll try to give you a blend file, but I suspect that we will need at least 8GB of HD footage to trigger the bug...
Malloc returns null: len=921600 in scalefastimbuf, total 1546164100
Program received signal SIGSEGV, Segmentation fault.
0x00000006 in ?? ()
(gdb) bt
0 0x00000006 in ?? ()
Cannot access memory at address 0x5
(gdb) p MEM_printmemlist_stats()
total memory len: 1471.894 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
209 ( 966.073 4733.297) imb_addrectImBuf
555 ( 487.793 900.000) scalefastimbuf
96 ( 9.678 103.236) anim_index_entries
764 ( 1.652 2.215) ImBuf_struct
1464 ( 1.076 0.752) BLI Mempool Chunk Data
%%%
%%%So it seems that the leak is not related to the "mem cache limit", because the "scalefastimbuf" mallocs are always under the cache limit. The leak is related to "imb_addrectImBuf"...%%%
%%%So here is a test .blend : http://drolez.com/memleakvse.blend
You have to download also this video: http://mirrorblender.top-ix.org/peach/bigbuckbunny_movies/big_buck_bunny_1080p_h264.mov and put it in /tmp
In the VSE you'll see the original video cut every 10 frames, up to frame 1000.
1- When you load the project, everything is ok, blender uses 300MB.
VmPeak: 313952 kB
VmSize: 290712 kB
VmLck: 0 kB
VmHWM: 127848 kB
VmRSS: 112080 kB
VmData: 134628 kB
VmStk: 136 kB
VmExe: 51952 kB
VmLib: 23192 kB
VmPTE: 444 kB
VmSwap: 0 kB
Threads: 3
2- When you start to play the VSE sequence, blender memory usage grows.
At frame 500 we have the following stats:
...see next post...%%%
%%%VmPeak: 1635504 kB
VmSize: 1604128 kB
VmLck: 0 kB
VmHWM: 1431884 kB
VmRSS: 1382988 kB
VmData: 1039844 kB
VmStk: 136 kB
VmExe: 51952 kB
VmLib: 23192 kB
VmPTE: 2996 kB
VmSwap: 14204 kB
(gdb) p MEM_printmemlist_stats()
total memory len: 475.045 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
62 ( 406.503 6713.855) imb_addrectImBuf
32 ( 63.281 2025.000) scaledowny
1464 ( 1.076 0.752) BLI Mempool Chunk Data
28 ( 0.984 36.000) icon_rect
16 ( 0.303 19.364) Chunk buffer
%%%
%%%2- After playing or rendering 1000 frames, I have:
VmPeak: 2842452 kB
VmSize: 2811076 kB
VmLck: 0 kB
VmHWM: 1635148 kB
VmRSS: 1495224 kB
VmData: 1846756 kB
VmStk: 136 kB
VmExe: 51952 kB
VmLib: 23192 kB
VmPTE: 5356 kB
(gdb) p MEM_printmemlist_stats()
total memory len: 865.930 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
111 ( 796.972 7352.243) imb_addrectImBuf
32 ( 63.281 2025.000) scaledowny
1464 ( 1.076 0.752) BLI Mempool Chunk Data
28 ( 0.984 36.000) icon_rect
19 ( 0.401 21.609) Chunk buffer
So it seems that, for each cut, an "imb_addrectImBuf" struct is allocated and never released.
Blender should not cache this kind of data.%%%
%%%No reopen? OK, I'll submit a new one%%%
%%%Noticed high memory usage, which is quite odd, but it's not about non-freed imb_addrectImBuf structures. All this stuff is under the guarded memory allocator, and if it weren't freed you would see a "Not freed datablocks" message in the console on exit.
That memory "leak" happens even if all caching on Blender's side is disabled. It might be a leak in FFmpeg or the x264 codec, and it needs to be investigated.
The bad thing is that valgrind doesn't show any leaks related to this issue. So it might be caching on FFmpeg's side or something like this.
Reopening to investigate whether it's something on Blender's side which is buggy, or a bug in some dependency library.%%%
%%%Made a deeper investigation of this issue. It's not a memory leak, and it's not a bug in any dependency lib (as imagined tonight). The cache limiter wouldn't help here, because it's not only image buffers which are taking memory.
You're using 50 movie strips, and every strip is a full-HD movie. From a code point of view it means you've got 50 descriptors of FFmpeg streams which are reading from disk, so some buffering is used. Each stream requires storing a decoded frame (because some data is shared between the decoded frame and the packet at which decoding of the frame finished, so this frame can't be freed). That's where the non-guarded memory usage comes from (not displayed by MEM_printmemlist_stats, but visible using htop). Guarded memory usage comes from the fact that in some situations decoding of a frame and its post-processing isn't needed at all (for example when strobe is used or so), so internally a fully decoded frame is stored inside each strip. So we've got 50 frames, each of them 1920x1080 at 32-bit depth. It's something around 400 megabytes -- exactly the size you can see using MEM_printmemlist_stats.
It might be annoying, but it's not actually the case for which the sequencer was designed, and its current design works as expected. Refactoring of the cache system is on my TODO list, and I've also added a note to http://wiki.blender.org/index.php/Dev:2.5/Source/Development/Todo/Editors#Video_Sequencer
Thanks for the report, but I wouldn't consider it a bug; it's just that freeing unused FFmpeg descriptors is an unsupported feature.%%%
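Sergey's ~400 MB estimate can be checked with a quick back-of-the-envelope calculation. This is an editorial sketch, not Blender code; the per-pixel size assumes the 32-bit (4-byte RGBA) full-HD frames described above, one cached per strip:

```python
# Rough check of the ~400 MB figure for 50 cached full-HD frames.
# Assumes one 1920x1080 frame at 32 bits (4 bytes) per pixel per strip.
def cached_frames_mib(strips, width=1920, height=1080, bytes_per_pixel=4):
    """Total guarded memory, in MiB, for one decoded frame per strip."""
    return strips * width * height * bytes_per_pixel / (1024 * 1024)

total = cached_frames_mib(50)
print(f"{total:.1f} MiB")  # about 395 MiB, matching the ~400 MB reported
```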
Changed status from 'Open' to: 'Archived'
%%%OK :-( but if the sequencer is not usable for videos longer than 10 minutes, it seems like a bug...
Since these structs are allocated when each strip is first played/rendered, would it be possible to set a limit on the number of structs kept in memory?
Or at least, Blender should not segfault in that case; if such an allocation fails, it could also try to free older imb_addrectImBuf structs.%%%
%%%It's not about the length of the video, it's about the number of movie strips. With 5-10 full-HD strips it'll work just fine. But dealing with 50 and more full-HD strips is a real challenge.
Of course it's possible to set a limit on those structures, but it would require redesigning plenty of things in Blender. The main issue is threading. It's not just "free unused imb_addrectImBuf"; it's more like "make all image/video frame reading/writing/freeing threadsafe, and make not just the sequencer be affected by the cache limiter, but also all system structures like anim (which holds all the FFmpeg-related structures)". Memory allocated with imb_addrectImBuf is not the only thing which takes memory; it's the thing which takes guarded memory. If you open htop and look at the memory usage, it'll be much more than what the memory guard tells you, because FFmpeg structures don't use guarded allocation. Freeing such things is more difficult, because they are used for video decoding, and if they get freed, all the stuff like frame seeking and packet decoding would need to be handled in a way different from the current one (which is designed for fast and accurate seeking).
Rather than trying to implement some workaround for this particular issue by freeing the structs which confuse you, I'd rather see a much better designed image reading/caching system, which would make the creation/usage/freeing of those system structures much more transparent. I'm trying to work on this in the spare time I'm not spending on the bug tracker..%%%
%%%I am pretty certain that I experienced this issue with only a single strip. Not entirely sure what was going on there.
That said, it is probably worth noting that Open Image IO has support for relatively efficient caching. An FFMPEG plugin for it would allow a motion picture sequence to be treated as part of the regular system.%%%
%%%Troy, if you've got a single strip which runs out of memory, that would be a bug. But to fix it I'll need the video file and .blend to be able to reproduce it. And again, it's not just Blender which is involved here; it's also the FFmpeg library, in which I'm noticing fixes for things like null-pointer dereferences, memory corruptions and leaks.
It's a myth that OIIO will resolve all image/video related issues in Blender. It can probably make some things easier to do, but it's not just "link against OIIO and be happy"; it would involve global changes in Blender's design as well. And it's not a cache of original frames which is actually needed; what's needed is a way to cache post-processed images (video stabilization in the clip editor; flipping, cropping, de-interlacing in the VSE and so on). And I doubt OIIO has support for proxies and timecodes. Proxies probably aren't so critical, but you can't work with video without timecode. From this point of view, caching on OIIO's side would just make things worse -- it would be nothing more than just-another-subsystem wasting space in memory.%%%
%%%@Sergey
I of course am not suggesting it is merely a matter of stuffing a #define in there and we are done. ;)
That said, I do think that OIIO brings plenty to the table, and the caching might be one thing that we could look at. I know that the caching was built on the idea of loading a boatload of images of massive dimension under an extremely small memory footprint.
Regarding the original bug, Uncle Entity and I spent a long time diagnosing this. The diagnosis appeared to be an errant allocation that wasn't being freed, causing the memory to grow as stated. This was directly related to the memcache, and the only path to a solution was to delete the user settings file to reset the value, as returning it to the default value manually still resulted in the leak.
If I can find time I'll try to purposefully break this again and get you a Blend.%%%
%%%Was it the 2.61 release which you got to run out of memory? Several changes to the memcache were done in the tomato project, and the seqcache was re-written from scratch. It was merged into trunk for the 2.61 release, and some possible fixes were done afterwards. So only out-of-memory issues on single strips in the 2.61 official release would be considered bugs.%%%
%%%Would it be possible to share the image buffer between several strips instead of allocating a new one for each cut?%%%
%%%Not really, because of threading issues. And again, it's not only the image buffer that is involved in the memory issues -- it's only ~30% of the memory "overhead". The rest goes to FFmpeg buffers, which are not guarded by our allocation system and can be seen only using the system's process viewer.%%%
%%%b2.63 r49303 Vista 2Gb mem
I too run out of memory with more than ten HD strips loaded. Playback and scrubbing worked OK, but rendering would accumulate more and more memory until failure. My workaround was to reduce the Sequencer Memory Cache Limit down to just 64MB. Memory usage seemed to be controlled, but I wonder if the memory leak was real for render? I didn't experience it on playback, just on render.
I used speed effects too.%%%
%%%It's not actually a leak in the technical meaning of the word. Lots of buffers were being kept in memory to make video encoding faster.
I've made some changes in the tomato branch which should force these buffers to be freed during render. This could solve your issues.
These changes need to be tested more before they go to trunk. If you have time to test them, it would help a lot.%%%
Added subscriber: @PauloJoseOliveiraAmaro
Hi, I use the Video Sequence Editor a lot, and I'm having trouble with its memory management. This issue has become really unacceptable, making me unable to render a simple video. Blender crashed after hours of rendering, all 5 times I tried.
THE ISSUE
My scene is really simple. I imported two recorded video files (1: MOV, 2GB, 30fps, 1280p; 2: MP4, 700MB, 30fps, 1920p) into the Video Sequence Editor. No problems with the editing workflow. Using small proxy files for the video strips (10MB each), Blender uses really low memory while editing. Great!
To change the active view between the video footages, I make a lot of soft cuts in just one of the video strips, deleting the parts I don't want. I believe this is a common procedure.
Then I save the file, close Blender and render from the command line (terminal). The render output is an AVI, 30fps, 1280p, with the h264 video codec and the AC3 sound codec. The render was going well until frame 4827, at which point Blender quit with the following message:
Segmentation fault
According to the System Monitor (Ubuntu), Blender was using 1.2GB. I was really impressed by this, so I began to investigate the problem.
This is how I came here and found the probable cause: the 1920p footage is too large for my computer's RAM (3GB).
THE FIRST ATTEMPT
So I re-encoded both original videos into smaller ones (MP4, 1280p, ~400MB each) and replaced their occurrences in the sequence editor. And started the render again.
Surprise: **Segmentation fault** at frame 4827, again. To try to understand what was happening, I recorded the memory usage and Blender's "open files" window in the System Monitor. I found that Blender's memory consumption was changing in large constant steps: for instance, varying a little around ~455MB until a certain frame, then instantly growing to ~490MB.
The Blender "open files" window showed a growing list of the same file entries: exactly the video I had cut into pieces. Watch the attached video to check the memory growth and the open-file list.
At 48s, one more file is added to the list. At this time, Blender is rendering frame 1661, which takes more time than the previous ones (5s against 1s). This frame is exactly the start frame of the 10th part of the video (as you can check at the end of the attached video).
So I suspected that the memory growth may be related to the number of cuts I made in one of the videos. So I tried something really annoying.
POSSIBLE WORKAROUND
I replaced all the cuts with just one strip, with an opacity animation to simulate the cuts. This way I get the same result, but Blender has to manage just one very long strip instead of many small cuts of the same video. See the attached image.
This is what I have now:
The video is still rendering right now, and Blender is using just ~200MB at frame 7310. I'm really happy to have found a workaround, and I hope someone else can use it. Until this is fixed, I'll edit my videos like I always did, but before rendering I'll manually convert the cuts to animation keyframes on opacity. If someone could write a script to automate this, that would be great. This issue is really annoying.
It seems to be something related to strip headers, as already pointed out before (I think). But I hope this bug can be reopened and fixed soon, even if it's an FFmpeg issue... My workaround only works for continuous video input. If the timing of the cuts is shifted individually, it would not be possible to reunite them into a single strip, and then the memory overload would happen again!
If I can help in some way, let me know... And sorry for any misspellings - I'm not a good English speaker
Tested in Blender 2.69, date 2013-12-17 09:47, hash db795b6.
This also happens in Blender 2.69 from Blender.org and I suspect this is being there for a very very long time.
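The cut-to-keyframe conversion script wished for above can be sketched in Python. This is a hypothetical illustration, not an existing Blender script: the function only computes the (frame, opacity) pairs for the kept ranges of one continuous strip. Inside Blender one would then apply each pair via the strip's `blend_alpha` property and `keyframe_insert()`; the range-to-keyframe logic below is the portable part:

```python
def cuts_to_opacity_keys(kept_ranges):
    """Convert kept (start, end) frame ranges (inclusive) into
    (frame, opacity) keyframes for one continuous strip.

    Opacity snaps to 1.0 at the first frame of a kept range and
    back to 0.0 one frame after it ends, so no cross-fade occurs.
    """
    keys = []
    for start, end in sorted(kept_ranges):
        keys.append((start - 1, 0.0))  # hold at 0 right before the cut
        keys.append((start, 1.0))      # visible for the kept range
        keys.append((end, 1.0))
        keys.append((end + 1, 0.0))    # hidden again after the range
    return keys

# Example: keep frames 10-20 and 50-60 of the source strip.
print(cuts_to_opacity_keys([(10, 20), (50, 60)]))
```

As Paulo notes, this only works when the cut pieces stay at their original times; shifted pieces cannot be merged back into a single strip this way.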
@PauloJoseOliveiraAmaro, as far as I can see you're using loads of cuts. Every cut will have its own FFmpeg descriptor and buffers, which we don't actually manage. Solving this is on my personal TODO list.
Since this issue was started in 2010 and the last update was in 2012, I wanted to note that it is still happening. Good to know this issue is still being investigated, @sergey.
Couldn't these FFmpeg descriptors be shared from one single instance in memory? You wrote something about threading issues; has anything changed since then?
Or maybe it's possible to free a strip when it is not in use anymore?
Is there any other workaround you'd suggest?
Well, now that we've got a generic moviecache system (used for image sequences and movie clips), it should be possible to make FFmpeg buffers guarded by it.
But that ended up not being so trivial, because of seeking in the video stream. Imagine you've been playing in the VSE: this loads frames from the video and updates the internal pointer to the current frame in the FFmpeg descriptor. Now, if you simply drop some FFmpeg descriptors and jump back to a frame you recently played, FFmpeg might seek to that frame incorrectly if you don't have timecode enabled.
It actually might happen already if you press the "Refresh Sequencer" button, but in that case you kind of know that something might go wrong with seeking afterwards.
Ideally the VSE should always use timecode. It's doable, but ends up being a bigger project..
I can remember there used to be a hack in the code which freed the FFmpeg buffers of strips which are not under the current frame during animation rendering. But it seems it's no longer used. Need to investigate the reason for this. Unless it was causing issues, it might be nice to bring it back.
As for a workaround -- I couldn't actually think of any, unfortunately. We would just need to fix the issue. Will try to get to it in the nearest week(s).
The proper solution is:
It isn't that hard to do, and it was also on my personal todo list, not only Sergey's.
The reason why I haven't done it already (and Sergey probably, too):
Workaround solution in the meantime:
Ah, Peter is here :)
@schlaile, I think we might avoid using Blender's allocator for FFmpeg? We wouldn't be able to measure the descriptor size 100% accurately, but I think it's possible to do a close enough approximation?
Hmm, well, you'd have to add some silly code that does things like:
Sure, it will work, but it's hard to look anyone in the eyes after that :)
The reason is simple: it's really slow... (file close/open is an expensive operation on movie files; doing it on every frame is really a no-go).
Adding the anim cache pool would be the far better idea, even if you just add a global open-file limit (which can be changed afterwards to a proper solution that does a proper memory size estimation).
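The "anim cache pool with a global open file limit" described above can be sketched as an LRU pool. This is an illustrative Python sketch, not Blender's C code; the names `AnimPool`, `opener` and `closer` are hypothetical stand-ins for opening and closing FFmpeg stream descriptors. When the pool is full, the least recently used descriptor is closed before a new one is opened:

```python
from collections import OrderedDict

class AnimPool:
    """LRU pool that caps the number of simultaneously open
    movie descriptors (a stand-in for Blender's `anim` structs)."""

    def __init__(self, limit, opener, closer):
        self.limit = limit
        self.opener = opener       # e.g. opens an FFmpeg stream for a path
        self.closer = closer       # frees the descriptor and its buffers
        self.pool = OrderedDict()  # path -> descriptor, in LRU order

    def get(self, path):
        if path in self.pool:
            self.pool.move_to_end(path)      # mark as most recently used
            return self.pool[path]
        while len(self.pool) >= self.limit:  # evict LRU entries first
            _, victim = self.pool.popitem(last=False)
            self.closer(victim)
        self.pool[path] = self.opener(path)
        return self.pool[path]

# Toy usage: count how many "descriptors" stay open with a limit of 2.
opened, closed = [], []
pool = AnimPool(2, opener=lambda p: opened.append(p) or p, closer=closed.append)
for clip in ["a.mov", "b.mov", "a.mov", "c.mov"]:
    pool.get(clip)
print(len(pool.pool), closed)  # 2 open, ["b.mov"] was evicted
```

As the thread notes, the hard part in Blender is not the pool itself but making the eviction threadsafe and seeking correctly after a descriptor is reopened.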
@schlaile, I'd become very, very happy if this "obviously hackish configuration option" were available just from the command line, or if a Python script could do the job.
@schlaile, the reasons you gave for not having fixed this bug are very sad to read, and very understandable too.
Can this disabled slow code be turned into a Python script? - please say yes...
I'm still hopeful there is a viable way to save me from having to cut my videos with opacity keyframes.
In the meantime, if it were possible to edit opacity curves directly above a strip, that would be good too... > <
If I interpreted your timeline correctly, you are editing shows which were recorded using several cameras, from different angles?
Why don't you use the multicam strip?
It even has the nifty feature of helping you cut while viewing. (You can press number keys to add a cut to a different angle)
That way, there is only one movie strip for each camera (FFmpeg problem solved), and you get faster editing for free.
I've used that for timelines going up to 3 hours with no problems at all.
The strip layout goes e.g. like this (for 3 cameras):
Editing using number keys works by selecting the multicam strip (the "unedited" part), starting playback and hitting number keys.
You might want to add several channel viewer windows for each input, so that it isn't blind editing :)
real world timeline looks e.g. like this (theatrical production):
@schlaile, thank you so much for showing me the multicam strip! I never used it before, and it looks perfect for what I'm doing. And this scene layout is amazing; it'll improve my workflow a lot! I think this could become a new official layout preset.
Sometimes I like to do some fade transitions. How is this possible with the multicam strip? My guess is to use two multicam strips and fade between them.
I'm very excited about what you just taught me! Thank you!
Off-topic: Blender is - with NO DOUBT - the program with the most features I'll ever know! I never imagined it would be so easy to change between cams, just by pressing number keys during playback!
@PauloJoseOliveiraAmaro: exactly :)
the final timeline layout, when everything is finished, looks most of the time like this:
Thank you very much!
Are there any links about this topic? I'm very interested in reading more about it!
there isn't much more to tell about it.
Don't forget to render 25% proxies for the preview screens, that's all.
A look into the manual sometimes helps:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Sequencer/Effects
The multicam strip isn't exactly doing rocket science; the fun thing is, it still does its "science" better than kdenlive :)
http://www.addlime.com/blog/video/kdenlive-multicam-editing-workflow/
Thank you! True, what multicam does seems quite simple and very efficient (in memory terms). The fact is that I didn't know about it, even after years of using Blender for video editing. Shame on me... :-P
[Sorry, still off-topic] I was thinking about editing-workflow tips, especially in Blender, like that one about the adjustment layer. Most of what you have presented here is enough for a good sequencer tutorial.
@pauloup have you looked over at the Blendersvse blog? There are a number of tips for VSE usage.
@schlaile I am very sad to hear how difficult it is to deal with FFmpeg devs :( If the Gooseberry project relies more on the VSE for long-form editing, would these FFmpeg encoding issues present a problem?
@davidmcsween, Thank you for the Blender's VSE Blog tip, I did not know it. Looks very good!
Added subscribers: @brothermechanic, @mont29
◀ Merged tasks: #38360.