
Color transform in VSE accumulates. Sequencer color profile has no effect?
Closed, ArchivedPublic

Description

System Information
Windows 7 x64, GTX 580
Blender Version
2.8 Beta
Short description of error
Setting the View Transform to Filmic (probably anything except Default) affects the sequencer's file output, not only its scene input.
This results in the view transform being applied to file input that usually doesn't need it, and in multiple view transforms accumulating when processing footage that has already been rendered with one.
From what it looks like, it seems reasonable to assume that the Color Management -> Sequencer color profile selection is there to solve this, by separating it from the main color transform, but the setting appears to have no effect.
IMHO a less convenient solution would be setting every VSE input strip to Color Space: "Filmic" to compensate for the view transform already present in the sequence, but that option seems to do something else too, as setting it to Filmic Log doesn't simply revert the transform.
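To illustrate the accumulation, here is a minimal stand-alone sketch; the toy curve below is only a stand-in for the real Filmic OCIO transform, not Blender's actual LUT.

# Illustrative only: a toy tone curve stands in for the Filmic view transform
# to show why re-applying a view transform to already-transformed footage shifts it.
import numpy as np

def toy_view_transform(rgb):
    """Stand-in for a filmic-style curve (NOT Blender's actual Filmic LUT)."""
    return rgb / (rgb + 0.155) * 1.019

linear = np.array([0.05, 0.18, 0.50, 1.00])   # scene-linear sample values
once = toy_view_transform(linear)             # what a single render produces
twice = toy_view_transform(once)              # what re-rendering that output does

print(np.round(once, 3))
print(np.round(twice, 3))                     # noticeably different: progressively
                                              # compressed, washed-out values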

Exact steps for others to reproduce the error

  • Render Animation (one frame is set to be saved where the blend file is located);
  • Scrub a frame forward+backward to refresh the sequencer so it loads the frame just rendered, or change the data/file to point to it;
  • Enable Post Processing->Sequencer;
  • Render Animation;
  • Scrub a frame forward+backward;
  • Render Animation;
  • Scrub a frame forward+backward;
  • Render Animation;
  • Scrub a frame forward+backward;
  • The colors of the image have Filmic color transform applied multiple times.
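For reference, the settings involved in these steps expressed as a rough Blender 2.8 Python sketch; the property names are from the 2.8 API, while the output path and strip lookup are placeholders.

import bpy

scene = bpy.context.scene
scene.view_settings.view_transform = 'Filmic'   # Color Management > View Transform
scene.render.use_sequencer = True               # Post Processing > Sequencer
scene.render.filepath = '//render_'             # frame saved next to the .blend

# The strip that points back at the rendered frame (placeholder name):
# strip = scene.sequence_editor.sequences_all['render_0001.png']
# strip.colorspace_settings.name is the per-strip Color Space mentioned above.

bpy.ops.render.render(animation=True)           # each re-render bakes the view
                                                # transform into the file again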

Details

Type
Bug

Event Timeline

Brecht Van Lommel (brecht) claimed this task.

Please see the documentation, this setting is not a view transform:
https://docs.blender.org/manual/en/latest/render/post_process/color_management.html#sequencer

The view transform (including Filmic) has no effect on input, only on output. View transforms in general are one way operations for final output, not digital intermediates.

Note that the Video Editing template does not have Filmic enabled by default. But you do sometimes need Filmic in video editing, if you are loading linear EXR files.

A design change could be for the video editor to immediately apply the view transform when loading linear EXR, and then continue in sRGB space. However this would mean you can't do HDR color correction in the sequencer, as this information is lost after the view transform.

Ok, not a bug then.
As for the design: how about having separate View Transform profiles for the 3D render result and for the compositing result, so that a scene can be rendered with the Filmic transform applied to its output and then combined with material that already has that transform (especially LDR video footage, which doesn't exist without "Filmic")?
Or, a bit less convenient but probably more flexible and simpler, an option to apply the inverse transform to input strips/nodes, so that when the View Transform is applied it hits only the result of the 3D render.

It depends on the industry for sure, but the thing is that Filmic is often (mostly?) used to achieve a lifelike exposure impression, or to roughly approximate that of the real-world image it is being combined with, which is exactly a digital intermediate, not final output (it can't be used as LDR output anyway, as it is intended for a dynamic range that needs to be clipped close to what you actually have).
Otherwise it's working semi-blindfolded. I imagine that is OK for productions where footage may come in before it is even de-Bayered, but it doesn't necessarily make sense for what comes out of a camera at 8-10 bits per channel, that is, where someone drops in footage to render 3D straight over it. Being unable not to apply Filmic to that footage leaves no choice but to do it in a separate project just to see what it looks like combined.

Just in case - I'm not claiming to be an expert :)

@Richard Antalik (ISS) have you reproduced this or did you simply just tag it?

Both :)
I haven't looked at the whole color transform pipeline in the sequencer yet, but I didn't think this was a bug.

I am also new to color transformation stuff. I've only had a few lectures with Troy :)

@Brecht Van Lommel (brecht) If I understand this correctly, transformations go as follows:

Strip color space -> sequencer color space (preprocessing stage)
Sequencer color space * look -> view transform color space -> render output
View transform color space -> display device color space (display stage, not affecting output)
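For orientation, a hedged mapping of those stages to where they are set in the 2.8 Python API (not a description of the internal conversion order):

import bpy

scene = bpy.context.scene
strip = scene.sequence_editor.sequences_all[0]   # any image/movie strip

strip.colorspace_settings.name            # strip color space (per input)
scene.sequencer_colorspace_settings.name  # sequencer working color space
scene.view_settings.look                  # look, applied together with the view
scene.view_settings.view_transform        # view transform
scene.display_settings.display_device     # display device (display stage)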

A design change could be for the video editor to immediately apply the view transform when loading linear EXR, and then continue in sRGB space.

I know that I've only had a few lectures with Troy, but this doesn't sound like a good idea. We need to be able to do processing (alpha-overs, crosses) in linear (or any desired) color space.

I know that I've only had a few lectures with Troy, but this doesn't sound like a good idea. We need to be able to do processing (alpha-overs, crosses) in linear (or any desired) color space.

I wasn't clear, what I meant was to apply the view transform and then continue in sequencer color space, which is currently sRGB by default. If the sequencer working color space is set to Linear it could convert from sRGB to Linear after the view transform.

The best solution really depends on the specific workflow. If you are putting text effects over a Cycles EXR render, or edit it together with some other footage, you probably want to have Filmic affect the render only. This then implies doing the Filmic view transform immediately when reading the render into the strip.

On the other hand, if you are applying a motion blur or bloom effect to the render, you will get the most realistic results doing that before the view transform is applied, while the specular HDR values have not been compressed yet. It could be argued that such effects should be done in the compositor instead, but there are different possible workflows and the distinction is not always so clear.
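A quick way to see why the order matters; the Reinhard-style curve below is only a stand-in for the actual view transform:

# Illustrative only: blurring (a crude "bloom") before vs. after a view-transform-like
# curve gives different results, because the curve compresses HDR highlights.
import numpy as np

def toy_view_transform(rgb):
    return rgb / (rgb + 1.0)            # simple Reinhard-style stand-in, not Filmic

def box_blur(img, radius=1):
    out = np.copy(img)
    for shift in range(1, radius + 1):
        out += np.roll(img, shift) + np.roll(img, -shift)
    return out / (2 * radius + 1)

scene_linear = np.array([0.1, 0.1, 16.0, 0.1, 0.1])   # one bright HDR specular

bloom_then_view = toy_view_transform(box_blur(scene_linear))
view_then_bloom = box_blur(toy_view_transform(scene_linear))

print(np.round(bloom_then_view, 3))   # highlight spreads energy, then is compressed
print(np.round(view_then_bloom, 3))   # highlight compressed first: much weaker bloom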

Part of this is the fact that the VSE abuses OCIO in an attempt to work around some of its limitations.

The best solution really depends on the specific workflow. If you are putting text effects over a Cycles EXR render, or edit it together with some other footage

Nope, those too need to be applied on scene-referred values, or display linear at the very least, otherwise everything from a simple dissolve to the antialiasing on text would be mangled.

If the limitations were resolved, the workflow in Brecht's last post would be feasible under a proper camera rendering transform. Currently, however, the lack of a cache in particular results in a tremendous performance hit.

@Troy Sobotka (sobotka) Could you elaborate a little, or maybe point to some resource, on why it's not feasible to apply the view transform to the current scene's input strip/node separately and then pass it into the VSE/compositor, which would have its own view transform ("None" for example), to be able to combine it with existing non-HDR material?

The inputs would indeed need to be applied per image buffer, as that is OCIO’s design to take all imagery to the same scene referred reference space. The VSE abuses this design.

“None” wouldn’t be appropriate for any buffer in this case, and the “None” is a strange view transform leftover from legacy.

So:

  1. Take all ingested buffers to the reference space, which is always a scene referred linear encoding of a given set of primaries.
  2. Apply the chosen view transform as selected by the person pushing the pixels based on need. In most instances this would be a typical camera rendering transform coupled with an aesthetic look twist, but other possibilities are feasible.
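A rough sketch of those two steps with the OCIO 1.x Python bindings (the generation Blender 2.8 ships); the config path and the color space/display/view names are assumptions modelled on Blender's bundled config, and the equivalent calls differ in OCIO 2.x:

import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile('/path/to/config.ocio')   # placeholder path

# 1. Ingest: take each buffer to the scene-referred linear reference space.
srgb_pixels = [0.5, 0.25, 0.1]                        # flat [R, G, B, ...] plate sample
to_reference = config.getProcessor('sRGB', 'Linear')  # assumed color space names
linear_pixels = to_reference.applyRGB(srgb_pixels)

# 2. Output: apply the chosen view transform once, at the very end.
view = OCIO.DisplayTransform()
view.setInputColorSpaceName('Linear')
view.setDisplay('sRGB')
view.setView('Filmic')
display_pixels = config.getProcessor(view).applyRGB(linear_pixels)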

I wasn't clear, what I meant was to apply the view transform and then continue in sequencer color space, which is currently sRGB by default. If the sequencer working color space is set to Linear it could convert from sRGB to Linear after the view transform.

No, this doesn’t work and is essentially adding cycles of overhead for no reason. It also leads to every single pixel being broken.

Key point is that the reference space must always be a linearized reference, and scene linear in the case of most work generated now given the nature of the cameras and rendering.

Richard’s patching addresses this, but currently the VSE is a bit of a mess due to historical reasons, and likely can’t be fixed until Richard’s work lands.

If the reference isn’t properly linearized to scene linear, every manipulation is wrong.

To “properly” unscrew the VSE, several steps are required to make it performant:

  1. A threaded background rendering cache.
  2. All ingestion of assets loaded to display linear, and ideally a “lower quality” variant cached to reference to avoid lazy allocation cycles blown every time the frame is required.
  3. All strips perform math on the always “in reference” pixels. Use the half float lower quality offline for the real-time feedback, and the higher quality full float for final rendering.
  4. Undo the VSE's OCIO abuse and use OCIO correctly, rather than caching busted-up nonlinear data and hijacking the OCIO functionality.
  5. Background thread render all pixel ops to a blit cache ready in reference. Use GPU for live view and look applied, and CPU for online highest quality. The view needs to be kept off of the cached rendered blit because of multiple head cases, where two heads would require potentially different views and be of different display classes.
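Purely as a conceptual sketch of points 1 and 3, assuming nothing about the existing code: a background thread fills a cache with a half-float preview buffer and a full-float final buffer, with render_frame() standing in as a hypothetical placeholder for the real strip/effect stack.

import threading
import queue
import numpy as np

def render_frame(frame, size=(64, 64)):
    """Hypothetical placeholder for the VSE pixel pipeline; output stays in reference space."""
    rng = np.random.default_rng(frame)
    return rng.random((*size, 4), dtype=np.float32)

class FrameCache:
    def __init__(self):
        self._lock = threading.Lock()
        self._preview = {}   # frame -> float16 buffer for real-time feedback
        self._final = {}     # frame -> float32 buffer for final rendering
        self._jobs = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def request(self, frame):
        """Queue a frame for background rendering."""
        self._jobs.put(frame)

    def preview(self, frame):
        with self._lock:
            return self._preview.get(frame)

    def final(self, frame):
        with self._lock:
            buf = self._final.get(frame)
        if buf is None:                      # cache miss: render synchronously
            buf = render_frame(frame)
            with self._lock:
                self._final[frame] = buf
        return buf

    def _run(self):
        while True:
            frame = self._jobs.get()
            buf = render_frame(frame)        # full quality, kept in reference space
            with self._lock:
                self._final[frame] = buf
                self._preview[frame] = buf.astype(np.float16)

cache = FrameCache()
for f in range(1, 25):
    cache.request(f)                         # prefetch in the background
image = cache.final(12)                      # blocks only on a cache miss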