
MEMORY LEAK SEGFAULT: Rendering a sequence always segfaults Blender
Closed, Archived · Public

Description

Summary: Given a large sequence, Blender consistently segfaults. There are no models in the scene nor any nodes in the compositor. /var/log/syslog shows a KILL being sent to Blender for memory consumption.

When using GDB to reproduce the bug, the following information reveals the source of the leak:

(gdb) p MEM_printmemlist_stats()

AFTER 100 FRAMES:
total memory len: 169.209 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
101 ( 127.828 1296.000) scaledowny
25 ( 24.460 1001.880) imb_addrectImBuf

AFTER 500 FRAMES:
total memory len: 1585.030 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
729 ( 922.641 1296.000) scaledowny
523 ( 648.097 1268.931) imb_addrectImBuf

AFTER 1000 FRAMES:
total memory len: 5106.893 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
2491 (3152.672 1296.000) scaledowny
1539 (1933.972 1286.801) imb_addrectImBuf

AFTER 1700 FRAMES:
total memory len: 6261.658 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
2790 (3531.094 1296.000) scaledowny
2139 (2699.991 1292.562) imb_addrectImBuf

The two allocations in tandem drag Blender down until it is killed by the system. In the current test case, for example, the two consume roughly 10 GB over approximately 1900 full-HD frames.

Many different output formats have been tested, including MPEG variants and still-frame formats such as JPEG.

Version: SVN 31642

Platform: Linux amd64 (Ubuntu 10.04)

Details

Type
To Do

Event Timeline

Turning off the % scale of the Scene Size, we see one of the leaks reduced but not eradicated:

ITEMS TOTAL-MiB AVERAGE-KiB TYPE
699 (5442.284 7972.674) imb_addrectImBuf
73 ( 577.441 8100.000) scaleupy

There are at least two significant leaks at work: one in the Scene Render size scaler and one that appears to rely on imb_addrectImBuf.

Thanks to Uncle_Entity and some gdbing, it appears to be located in the Sequencer's Prefetch Frames setting.

Multiple calls to seq_stripelem_cache_put()

The scaledowny reference count appears to be related to the SpeedControl Sequencer effect. Forking the bug report. *sigh*.

Only occurs with Sequencer Caching enabled in the preferences. The only fix is to delete the startup.blend file.

Hi Troy,

I'm having difficulty reproducing this bug. Could you please check the following things:

* can you attach a small sample .blend file which reproduces the behaviour?
* could you also:
  run blender with: ./blender >/tmp/blender.out 2>&1
  make it render several frames (but not enough to make it crash!),
  quit blender, and attach the output of blender.out?
* and do the same with valgrind (after doing apt-get install valgrind):
  run blender with:
  valgrind --leak-check=full --show-reachable=yes ./blender >/tmp/valgrind.out 2>&1
  make it render several frames (preferably not too many, since valgrind will make blender really slow, and it shouldn't crash!),
  quit blender normally and attach the output of valgrind.out?

Thanks in advance!

Cheers,
Peter

There's still not enough information here to fix the bug: no .blend file or steps to reproduce. Closing the report; if more information is provided, I can reopen it.

I can confirm this bug with Blender 2.61 too.
I have some H264 1080p sequences, loaded in the VSE and split (with K) in small 20-45 frames long sequences.
When I play the video in the VSE or try to render it, at every sequence cut, the blender process grows by 10-20MB.
With my 2GB system, I cannot play or render this video completely.

I'll try to send you a .blend using a publicly available video (or should I get a special blender debug enabled release?)

How do I disable "Sequencer Caching" in 2.61? (I'd like to try Troy's workaround.)

Mapalloc returns null, fallback to regular malloc: len=8355840 in imb_addrectImBuf, total 1201253376
Calloc returns null: len=8355840 in imb_addrectImBuf, total 1288389876

Program received signal SIGSEGV, Segmentation fault.
0x08d8dd8e in IMB_anim_absolute ()

(gdb) bt
#0 0x08d8dd8e in IMB_anim_absolute ()
#1 0x08ccf759 in openanim ()
#2 0x08c3b44e in ?? ()
#3 0x08c3c750 in ?? ()
#4 0x08c3e08d in ?? ()
#5 0x08c3e1a3 in give_ibuf_seq ()
#6 0x089cbf5c in draw_image_seq ()
#7 0x089c8724 in ?? ()
#8 0x089dbd32 in ED_region_do_draw ()
#9 0x087ac390 in wm_draw_update ()
#10 0x087ad818 in WM_main ()
#11 0x087a944d in main ()

(gdb) p MEM_printmemlist_stats()

total memory len: 1228.702 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
241 (1145.604 4867.631) imb_addrectImBuf
73 ( 64.160 900.000) scalefastimbuf
115 ( 10.570 94.119) anim_index_entries
33 ( 1.550 48.092) Chunk buffer
1464 ( 1.076 0.752) BLI Mempool Chunk Data


* we are still missing a blend file
* could you please tell us the "MEM Cache Limit" setting in preferences?
* you can't disable the cache, but you can limit it to a very small number (like 64 MB). Again, check Preferences -> System -> MEM Cache Limit

Cheers,
Peter

* Yes I'll try to give you a blend file, but I suspect that we will need at least 8GB of HD footage to trigger the bug...
* The mem cache limit was set to 128MB
* I haven't tried to lower it, but I tried to increase it to 1024MB: this time I get a segfault here:

Malloc returns null: len=921600 in scalefastimbuf, total 1546164100

Program received signal SIGSEGV, Segmentation fault.
0x00000006 in ?? ()
(gdb) bt
#0 0x00000006 in ?? ()
Cannot access memory at address 0x5

(gdb) p MEM_printmemlist_stats()

total memory len: 1471.894 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
209 ( 966.073 4733.297) imb_addrectImBuf
555 ( 487.793 900.000) scalefastimbuf
96 ( 9.678 103.236) anim_index_entries
764 ( 1.652 2.215) ImBuf_struct
1464 ( 1.076 0.752) BLI Mempool Chunk Data



So it seems that the leak is not related to the "mem cache limit" because "scalefastimbuf" mallocs are always under the cache limit. The leak is related to "imb_addrectImBuf"...

So here is a test .blend : http://drolez.com/memleakvse.blend
You also have to download this video: http://mirrorblender.top-ix.org/peach/bigbuckbunny_movies/big_buck_bunny_1080p_h264.mov and put it in /tmp

In the VSE you'll see the original video cut every 10 frames, up to frame 1000.

1- When you load the project, everything is ok, blender uses 300MB.
VmPeak: 313952 kB
VmSize: 290712 kB
VmLck: 0 kB
VmHWM: 127848 kB
VmRSS: 112080 kB
VmData: 134628 kB
VmStk: 136 kB
VmExe: 51952 kB
VmLib: 23192 kB
VmPTE: 444 kB
VmSwap: 0 kB
Threads: 3

2- When you start to play the VSE sequence, blender memory usage grows.
At frame 500 we have the following stats:

...see next post...

VmPeak: 1635504 kB
VmSize: 1604128 kB
VmLck: 0 kB
VmHWM: 1431884 kB
VmRSS: 1382988 kB
VmData: 1039844 kB
VmStk: 136 kB
VmExe: 51952 kB
VmLib: 23192 kB
VmPTE: 2996 kB
VmSwap: 14204 kB

(gdb) p MEM_printmemlist_stats()

total memory len: 475.045 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
62 ( 406.503 6713.855) imb_addrectImBuf
32 ( 63.281 2025.000) scaledowny
1464 ( 1.076 0.752) BLI Mempool Chunk Data
28 ( 0.984 36.000) icon_rect
16 ( 0.303 19.364) Chunk buffer

3- After playing or rendering 1000 frames, I have:

VmPeak: 2842452 kB
VmSize: 2811076 kB
VmLck: 0 kB
VmHWM: 1635148 kB
VmRSS: 1495224 kB
VmData: 1846756 kB
VmStk: 136 kB
VmExe: 51952 kB
VmLib: 23192 kB
VmPTE: 5356 kB

(gdb) p MEM_printmemlist_stats()

total memory len: 865.930 MB
ITEMS TOTAL-MiB AVERAGE-KiB TYPE
111 ( 796.972 7352.243) imb_addrectImBuf
32 ( 63.281 2025.000) scaledowny
1464 ( 1.076 0.752) BLI Mempool Chunk Data
28 ( 0.984 36.000) icon_rect
19 ( 0.401 21.609) Chunk buffer

So it seems that, for each cut, an "imb_addrectImBuf" struct is allocated and never released.
Blender should not cache this kind of data.

No reopen? OK, I'll submit a new one.

Noticed high memory usage, which is quite odd, but it's not about non-freed imb_addrectImBuf structures. All this stuff is under the guarded memory allocator, and if it weren't freed you'd see a "Not freed datablocks" message in the console on exit.
That memory "leak" happens even if all caching on Blender's side is disabled. It might be a leak in FFmpeg or the x264 codec, and it needs to be investigated.
The bad thing is that valgrind doesn't show any leaks related to this issue. So it might be caching on FFmpeg's side or something like that.

Reopening to investigate whether it's something buggy on Blender's side or a bug in some dependency library.

Made a deeper investigation of this issue. It's not a memory leak and it's not a bug in any dependency lib (as it seemed tonight). The cache limiter wouldn't help here, because it's not only image buffers that are taking memory.
You're using 50 movie strips, and every strip is a full-HD movie. From the code's point of view this means you've got 50 FFmpeg stream descriptors reading from disk, so some buffering is used. Each stream also needs to store a decoded frame (some data is shared between the decoded frame and the packet at which decoding of that frame finished, so the frame can't be freed). That's where the non-guarded memory usage comes from (not displayed by MEM_printmemlist_stats, but visible using htop). The guarded memory usage comes from the fact that in some situations decoding a frame and post-processing it isn't needed at all (for example when strobe is used), so a fully decoded frame is stored internally inside each strip. So we've got 50 frames, each 1920x1080 at 32-bit depth. That's around 400 megabytes -- exactly the size you see with MEM_printmemlist_stats.
It might be annoying, but it's not actually the case the sequencer was designed for, and its current design works as expected. Refactoring the cache system is on my TODO list, and I've also added a note to http://wiki.blender.org/index.php/Dev:2.5/Source/Development/Todo/Editors#Video_Sequencer
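That "around 400 megabytes" figure is easy to verify with quick arithmetic (a back-of-the-envelope sketch; 4 bytes per pixel follows from the 32-bit depth mentioned above):

```python
# One decoded full-HD frame at 32-bit depth, kept per strip for 50 strips.
width, height, bytes_per_pixel = 1920, 1080, 4

frame_bytes = width * height * bytes_per_pixel   # bytes per decoded frame
total_mib = 50 * frame_bytes / (1024 * 1024)     # 50 strips, one frame each

print(frame_bytes, round(total_mib, 1))
```

That comes out to roughly 395 MiB, matching the size reported by MEM_printmemlist_stats.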

Thanks for the report, but I wouldn't consider this a bug; freeing unused FFmpeg descriptors is simply an unsupported feature.

Sergey Sharybin (sergey) closed this task as Archived.Jan 12 2012, 4:06 PM

Ok :-( but if the sequencer is not usable for videos longer than 10 minutes, it seems like a bug....

Since these structs are allocated when each strip is first played/rendered, might it be possible to set a limit on the number of structs kept in memory?
Or at least, Blender should not segfault in that case; if such an allocation fails, it could also try to free older imb_addrectImBuf structs.

It's not about the length of the video, it's about the number of movie strips. With 5-10 full-HD strips it'll work just fine, but dealing with 50 or more full-HD strips is a real challenge.
Of course it's possible to set a limit on those structures, but it would require redesigning plenty of things in Blender. The main issue is threading. It's not just "free unused imb_addrectImBuf"; it's more like "make all image/video frame reading/writing/freeing threadsafe, and make not only the sequencer but also all the system structures, like anim (which holds all the FFmpeg-related structures), be affected by the cache limiter". Memory allocated with imb_addrectImBuf is not the only thing taking memory -- it's the thing taking guarded memory. If you open htop, you'll see memory usage much higher than the memory guard reports, because the FFmpeg structures don't use guarded allocation. Freeing such things is more difficult because they are used for video decoding, and if they get freed, things like frame seeking and packet decoding would have to be handled differently from the current design (which is built for fast and accurate seeking).
Rather than trying to implement a workaround for this particular issue by freeing the structs that confuse you, I'd rather see a much better designed image reading/caching system which makes the creation/usage/freeing of those system structures much more transparent. I'm trying to work on this in the spare time I'm not spending on the bug tracker.

I am pretty certain that I experienced this issue with only a single strip. Not entirely sure what was going on there.

That said, it is probably worth noting that OpenImageIO has support for relatively efficient caching. An FFmpeg plugin for it would allow a motion picture sequence to be treated as part of the regular system.

Troy, if you've got a single strip which runs out of memory, that would be a bug. But to fix it I'll need the video file and the .blend to be able to reproduce. Then again, it's not just Blender involved here; it's also the FFmpeg library, in which I'm noticing fixes for things like null-pointer dereferences, memory corruptions and leaks.

It's a myth that OIIO will resolve all image/video related issues in Blender. It can probably make some things easier, but it's not just "link against OIIO and be happy". It would involve global changes in Blender's design as well. And it's not a cache of original frames that's actually needed; it should be a way to cache post-processed images (video stabilization in the clip editor; flipping, cropping, de-interlacing in the VSE, and so on). And I doubt OIIO has support for proxies and timecodes. Proxies probably aren't so critical, but you can't work with video without timecode. From this point of view, caching on OIIO's side would just make things worse -- it would be nothing more than just-another-subsystem wasting space in memory.

@Sergey

I of course am not suggesting it is merely a matter of stuffing a #define in there and we are done. ;)

That said, I do think that OIIO brings plenty to the table, and the caching might be one thing that we could look at. I know that the caching was built on the idea of loading a boatload of images of massive dimension under an extremely small memory footprint.

Regarding the original bug, Uncle Entity and I spent a long time diagnosing this. The diagnosis appeared to be an errant allocation that wasn't being freed, causing the memory to grow as stated. This was directly related to the memcache, and the only path to a solution was to delete the user settings file to reset the value, as returning it to the default value manually still resulted in the leak.

If I can find time I'll try to purposefully break this again and get you a Blend.

Was it the 2.61 release which you made run out of memory? Several changes to the memcache were done in the tomato project, and seqcache was rewritten from scratch. That was merged into trunk for the 2.61 release, and some possible fixes were done afterwards. So only out-of-memory issues on single strips in the 2.61 official release would be considered bugs.

Would it be possible to share the image buffer between several strips instead of allocating a new one for each cut?

Not really, because of threading issues. And again, it's not only the image buffer involved in the memory issues -- that's only ~30% of the memory "overhead". The rest goes to FFmpeg buffers, which are not guarded by our allocation system and can only be seen using the system's process viewer.

Blender 2.63 r49303, Vista, 2GB mem

I too run out of memory with more than ten HD strips loaded. Playback and scrubbing worked OK, but rendering would accumulate more and more memory until failure. My workaround was to reduce the Sequencer Memory Cache Limit down to just 64MB. Memory usage seemed to be controlled, but I wonder if the memory leak was real for render? I didn't experience it on playback, just render.

I used speed effects too.

It's not actually a leak in the technical sense. It was lots of buffers kept in memory to make video encoding faster.
I've made some changes in the tomato branch which should force these buffers to be freed during render. This could solve your issues.

These changes need more testing before they go to trunk. If you have time to test them, it would help a lot.

Hi, I make a lot of use of the video sequence editor and I'm having trouble with its memory management. This issue has become really unacceptable, making me unable to render a simple video. Blender crashed after hours of rendering, all five times I tried.

THE ISSUE

My scene is really simple. I imported two recorded video files (1: MOV, 2GB, 30fps, 1280p; 2: MP4, 700MB, 30fps, 1920p) into the video sequence editor. No problems with the editing workflow. Using small proxy files for the video strips (10MB each), Blender uses really low memory while editing. Great!

To change the active screen between the video footages, I make a lot of soft cuts in just one of the video strips, deleting the parts I don't want. I believe this is a common procedure:

Then I save the file, close Blender and render from the command line (terminal). The render output is an AVI, 30fps, 1280p, with the h264 video codec and AC3 sound codec. The render was going well until frame 4827, at which point Blender quit with the following message:

Fra:4827 Mem:11.85M (175.56M, Peak 205.00M) Sce:  Ve:0 Fa:0 La:0
Fra:4827 Mem:11.85M (175.56M, Peak 205.00M) Sce:  Ve:0 Fa:0 La:0
Append frame 4827 Time: 00:00.90 (Saving: 00:00.22)

Malloc returns null: len=5529600 in scaledownx, total 202741092
Malloc returns null: len=4147200 in scaledowny, total 202741092
Mapalloc returns null, fallback to regular malloc: len=6220800 in imb_addrectImBuf, total 190310400
Calloc returns null: len=6220800 in imb_addrectImBuf, total 202743456
Mapalloc returns null, fallback to regular malloc: len=3686400 in imb_addrectImBuf, total 193996800
Calloc returns null: len=3686400 in imb_addrectImBuf, total 206432196
Writing: /tmp/grupo-viola.crash.txt
**Segmentation fault**

According to System Monitor (Ubuntu), Blender was using 1.2GB. I was really impressed by this, so I began to investigate the problem.

This is how I ended up here and found the probable cause: footage (1920p) too large for my computer's RAM (3GB).

THE FIRST ATTEMPT

So I re-encoded both original videos into smaller ones (MP4, 1280p, ~400MB each) and replaced their occurrences in the sequence editor. And started the render again.

Surprise: segmentation fault at frame 4827, again.

Fra:4827 Mem:11.90M (119.53M, Peak 145.50M) Sce:  Ve:0 Fa:0 La:0
Fra:4827 Mem:11.90M (119.53M, Peak 145.50M) Sce:  Ve:0 Fa:0 La:0

Append frame 4827 Time: 00:00.68 (Saving: 00:00.41)

not an anim: /docs/Projects/Sonora/EMA3/video/dia 17/sony/MAH00900.m4v
Mapalloc returns null, fallback to regular malloc: len=3686400 in imb_addrectImBuf, total 136396800
Calloc returns null: len=3686400 in imb_addrectImBuf, total 148881892
Writing: /tmp/grupo-viola.crash.txt
Segmentation fault

To try to understand what's happening, I recorded the memory usage and Blender's "open files" window in System Monitor. I found that Blender's memory consumption was changing in constant large steps: for instance, varying little around ~455MB until a certain frame, then instantly growing to ~490MB.

The Blender "open files" window showed a growing list of the same file entries -- exactly the video I had cut into pieces. Watch the attached video:

to check the memory growth and the open-file list.

At 48s, one more file is added to the list. At that moment Blender is rendering frame 1661, which takes more time than the previous ones (5s against 1s). This frame is exactly the start frame of the 10th piece of the video (as you can check at the end of the attached video).

So I suspected that maybe the memory growth is related to the number of cuts I made in one of the videos. So I tried something really annoying.

POSSIBLE WORKAROUND

I replaced all the cuts with just one strip and an opacity animation to simulate the cuts. This way I get the same result, but Blender has to manage just one very long strip instead of many small cuts of the same video. See the attached image.

This is what I have now:


The video is still rendering right now and Blender is using just ~200MB at frame 7310. I'm really happy to have found a workaround and hope someone else can use it. Until this is fixed, I'll edit my videos like I always did, but before rendering I'll manually convert the cuts to animation keyframes on opacity. If someone could write a script to automate this, that would be great. This issue is really annoying.
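As a starting point for such a script, here is a sketch of only the conversion logic (plain Python, no bpy calls; the function name and data layout are made up for illustration). Given the frame ranges of the cuts that were kept, it produces the opacity keyframes a single long strip would need: 1.0 inside each kept range, 0.0 outside, assuming constant interpolation between keyframes.

```python
def cuts_to_opacity_keyframes(kept_ranges):
    """Turn kept (start, end) frame ranges into (frame, opacity) keyframes.

    kept_ranges: inclusive frame ranges, assumed sorted and non-overlapping.
    Opacity jumps to 1.0 at each range start and back to 0.0 just after the
    range ends (constant interpolation between keyframes is assumed).
    """
    keys = []
    for start, end in kept_ranges:
        keys.append((start, 1.0))    # this piece becomes visible
        keys.append((end + 1, 0.0))  # and hidden again after its last frame
    return keys

# Example: three pieces kept from the original video.
print(cuts_to_opacity_keyframes([(1, 10), (25, 40), (100, 120)]))
```

The actual keyframe insertion on the strip would still have to be done through Blender's Python API on top of this.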

It seems like something related to strip headers, as already pointed out before (I think). But I hope this bug can be reopened and fixed soon, even if it's an FFmpeg issue... My workaround only works for continuous video input. If the timing of the cuts is shifted individually, it would not be possible to reunite them into a single strip, and the memory overload would happen again!

If I can help in some way, let me know... And sorry for any misspellings; I'm not a good English speaker.

Tested in Blender 2.69 Date 2013-12-17 09:47 Hash db795b6
This also happens in Blender 2.69 from blender.org, and I suspect it has been there for a very, very long time.

@Paulo José Oliveira Amaro (pauloup), as far as I can see you're using loads of cuts. Every cut has its own FFmpeg descriptor and buffers, which we don't actually manage. Solving this is on my personal TODO.

Since this issue was opened in 2010 and the last update was in 2012, I wanted to say that this is still happening. Good to know this issue is still being investigated, @Sergey Sharybin (sergey).

Can't these FFmpeg descriptors be shared from a single instance in memory? You wrote something about threading issues; has anything changed since then?

Or maybe it's possible to free a strip when it's not in use anymore?

Is there any other workaround you'd suggest?

Well, now that we've got the generic moviecache system (used for image sequences and movie clips), it should be possible to make FFmpeg buffers guarded by it.

But that ended up not being so trivial, because of seeking in the video stream. Imagine you've been playing the VSE: this loads frames from the video and updates internal pointers to the current frame in the FFmpeg descriptor. Now, if you simply drop some FFmpeg descriptors and jump back to a frame you recently played, FFmpeg might seek to that frame incorrectly if you don't have timecode enabled.

It can actually happen already if you press the "Refresh Sequencer" button, but in that case you at least know that something might go wrong with seeking afterwards.

Ideally the VSE should always use timecode. It's doable, but it ends up being a bigger project...

I remember there used to be a hack in the code which, during animation rendering, freed the FFmpeg buffers of strips not under the current frame. But it seems it's no longer used; I need to investigate the reason. Unless it was causing issues, it might be nice to bring it back.

As for a workaround -- I couldn't think of any, unfortunately. We would just need to fix the issue. Will try to do so in the nearest week(s).

The proper solution is:

  • put the anim struct pointers into a cache pool
  • remove anim struct pointers from Sequence struct
  • add functions (just like normal ImBuf cache), to pull anim struct pointers from the pool
  • limit that pool (calculated from global cache limit / probably user configurable option, see below)

It isn't that hard to do, and it was also on my personal todo list, not only Sergey's.
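The four bullet points above could be sketched as an LRU-limited pool (a rough illustration only; the real implementation would be C with proper locking, and the names `AnimPool`, `open_fn` and `close_fn` are hypothetical):

```python
from collections import OrderedDict

class AnimPool:
    """Hypothetical LRU pool for open movie descriptors ("anim" structs)."""

    def __init__(self, limit, open_fn, close_fn):
        self.limit = limit          # max simultaneously open descriptors
        self.open_fn = open_fn      # opens a descriptor, e.g. wrapping openanim()
        self.close_fn = close_fn    # closes one (hypothetical wrapper)
        self.pool = OrderedDict()   # filepath -> descriptor, in LRU order

    def get(self, filepath):
        """Pull a descriptor from the pool, opening (and evicting) as needed."""
        if filepath in self.pool:
            self.pool.move_to_end(filepath)      # mark as most recently used
            return self.pool[filepath]
        while len(self.pool) >= self.limit:      # enforce the pool limit
            _, victim = self.pool.popitem(last=False)
            self.close_fn(victim)
        self.pool[filepath] = self.open_fn(filepath)
        return self.pool[filepath]
```

A Sequence would then hold only a file path and pull its descriptor from the pool on demand; evicting the least recently used entry is what enforces the limit from the last bullet point.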

Reason why I haven't done it already (and Sergey probably, too):

  • first step in doing that properly is making ffmpeg use the blender memory allocator
  • which boils down to: we can't use systems ffmpeg, since ffmpeg authors in their wisdom and glory made that a compile time option
  • which would boil down to Blender being thrown out of Debian for using its own version of ffmpeg
  • which again boils down to submitting patches to ffmpeg upstream and having discussions with Michael Niedermayer (if you haven't done that before, you don't really know what hell on earth feels like... :) )

Work around solution in the meantime:

  • add a Blender configuration option which just says "ffmpeg descriptor limit", using a reasonable default (like 5-10).
  • have a discussion with Ton about adding an obviously hackish configuration option

Ah, Peter is here :)

@Peter Schlaile (schlaile), I think we might avoid using Blender's allocator for FFmpeg? We wouldn't be able to measure descriptor size 100% accurately, but I think a close enough approximation is possible?

hmm well, you'd have to add some silly code that does things like:

  • hmm, it's H264, let's assume we will use 100 MB / per file descriptor
  • hmm, it's raw RGB, let's assume we will use 10 MB / per file descriptor
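In code, that guesswork would amount to little more than a lookup table (a sketch; the numbers are Peter's illustrative figures above, not measurements, and all names are hypothetical):

```python
# Hypothetical per-descriptor memory estimates, in MiB, keyed by codec name.
CODEC_ESTIMATE_MIB = {
    "h264": 100,     # "it's H264, let's assume 100 MB per file descriptor"
    "rawvideo": 10,  # "it's raw RGB, let's assume 10 MB per file descriptor"
}

def estimated_usage_mib(open_codecs, default_mib=50):
    """Rough memory attributed to a set of open descriptors."""
    return sum(CODEC_ESTIMATE_MIB.get(c, default_mib) for c in open_codecs)

print(estimated_usage_mib(["h264", "h264", "rawvideo"]))  # 210
```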

sure, it will work, but it's hard to look anyone in the eyes after that :)

  • have the discussion with Michael Niedermayer... ? :)

I remember there used to be a hack in the code which, during animation rendering, freed the FFmpeg buffers of strips not under the current frame. But it seems it's no longer used. Need to investigate the reason for this. Unless it was causing issues, it might be nice to bring it back.

reason is simple: it's really slow... (file close/open is an expensive operation on movie files; doing it on every frame is really a no-go).

Adding the anim cache pool would be the far better idea, even if you just add a global open-file limit (which can be changed later to a proper solution that does a proper memory-size estimation).

@Peter Schlaile (schlaile), I'd become very, very happy if this "obviously hackish configuration option" were available just from the command line, or if a python script could do the job.

@Peter Schlaile (schlaile), the reasons you gave for this bug not having been fixed are very sad to read, and very understandable too.
Can this disabled slow code be turned into a python script? - please, say yes...

I'm still hopeful there is a viable way to save me from having to cut my videos with opacity keyframes.

In the meantime, if it were possible to edit opacity curves directly on top of a strip, that would be good too... > <

If I interpreted your timeline correctly, you are editing shows which were recorded using several cameras, at different angles?

Why don't you use the multicam strip?

It even has the nifty feature of helping you cut while viewing. (You can press number keys to add a cut to a different angle)

That way, there is only one movie strip for each camera (FFmpeg problem solved), and you get faster editing for free.

I've used that for timelines going up to 3 hours with no problems at all.

The strip layout goes e.g. like this (for 3 cameras):

channel  strip type
4        multicam strip (extended to the whole timeline)
3        camera 3 (most likely put into a meta, if you had to change tapes / cards)
2        camera 2 (same)
1        camera 1 (same)

editing using number keys works by selecting the multicam strip (the "unedited" part), starting playback and hitting number keys.

You might want to add several channel viewer windows for each input, so that it isn't blind editing :)

real world timeline looks e.g. like this (theatrical production):

@Peter Schlaile (schlaile), thank you so much for showing me the multicam strip! I had never used it before, and it looks perfect for what I'm doing. And this scene layout is amazing; it'll improve my workflow a lot! I think this could become a new official layout preset.

Sometimes I like to do some fade transitions. How is this possible with the multicam strip? My guess is to use two multicam strips and fade between them.

I'm very excited about what you just taught me! Thank you!

Off-topic: Blender is - with NO DOUBT - the most feature-rich program I'll ever know! I never imagined it would be so easy to change between cams, just by pressing number keys during playback!

@Paulo José Oliveira Amaro (pauloup): exactly :)

the final timeline layout, when everything is finished, looks most of the time like this:

channel  strip type
6        adjustment layer strip for final (officially called "primary") color correction
5        additional multicam strips for fading, when the need arises at the places where we need them
4        multicam strip (extended to the whole timeline)
3        camera 3 (most likely put into a meta, if you had to change tapes / cards)
2        camera 2 (same)
1        camera 1 (same)

Thank you very much!
Are there any links about this topic? I'm very interested in reading more about it!

there isn't much more to tell about it.
Don't forget to render 25% proxies for the preview screens, that's all.

A look into the manual sometimes helps:

http://wiki.blender.org/index.php/Doc:2.6/Manual/Sequencer/Effects

The multicam strip isn't exactly doing rocket science; the fun thing is that it still does its "science" better than kdenlive :)

http://www.addlime.com/blog/video/kdenlive-multicam-editing-workflow/

Thank you! True, what multicam does seems quite simple and very efficient (in memory terms). The fact is that I didn't know about it, even after years of using Blender for video editing. Shame on me... :-P

[Sorry, still off-topic] I was thinking about editing workflow tips, especially in Blender, like the one about the adjustment layer. Most of what you have presented here is enough for a good sequencer tutorial.

@Paulo José Oliveira Amaro (pauloup), have you looked over at the Blendersvse blog? There are a number of tips for VSE usage.

@Peter Schlaile (schlaile), I am very sad to hear how difficult it is to deal with ffmpeg devs :( If the Gooseberry project relies more on the VSE for long-form editing, would these ffmpeg encoding issues present a problem?

@david mcsween (davidmcsween), thank you for the Blender VSE Blog tip, I did not know it. Looks very good!