View Transform -> Filmic adds 333% to VSE render time and miscolors the result. #86622

Closed
opened 2021-03-16 09:41:50 +01:00 by tintwotin · 40 comments

System Information
Operating system: Windows-10-10.0.18362-SP0 64 Bits
Graphics card: GeForce RTX 2060/PCIe/SSE2 NVIDIA Corporation 4.5.0 NVIDIA 457.30

It is often referred to as a "known issue" that making Filmic the default View Transform would break the color output of the VSE, but that is not its only downside. A 2:50-minute VSE edit took me a staggering 26 minutes to render with Filmic on, and 6 minutes with it off.

An often-heard argument is that the VSE template sets it to "Standard" by default and that this should solve the problem, but how many users actually open the VSE in a way that will give them the "Standard" View Transform?

Out of 88 votes, 68.1% are opening the VSE in ways which will not give them the "Standard" View Transform:

![F9894470](https://archive.blender.org/developer/F9894470/image.png)
https://twitter.com/tintwotin/status/1370077312083955715

It was explained to me by @rjg that this problem has been solved by documentation. But doing a search on "Filmic" in the manual results in two hits, and none of them describes that VSE users must change the View Transform setting to Standard themselves: https://docs.blender.org/manual/en/latest/search.html?q=filmic

On Stack Exchange there are 19 topics on the VSE and Filmic, mostly concerning the problems it causes: https://blender.stackexchange.com/search?q=%5Bvideo-sequence-editor%5D+filmic

This doesn't solve the problem either, since 58.7% out of 46 votes do not set the View Transform to "Standard"
when outputting from the VSE, either because they do not know about it (37%) or for unknown reasons (21.7%):

![F9894476](https://archive.blender.org/developer/F9894476/image.png)
https://twitter.com/tintwotin/status/1371365685775962112
(It should be mentioned here that most of my followers on Twitter are hardcore VSE users, so those numbers would look quite different if casual users were asked.)

So it is fair to say that this default setting is seriously damaging the user experience of the Video Sequence Editor, and should by all means be dealt with as a most urgent bug and regression.

Author

Added subscribers: @rjg, @tintwotin

@tintwotin A little correction on this report: I did not claim that this was already solved by the current documentation in the manual (see the thread on [blender.chat](https://blender.chat/channel/blender-coders/thread/Tj5tB74fvXD5WuvTK)). I said that it *could* be partially addressed by improving the documentation in the manual.

What I did state was that this is "a well known and documented problem", referring to the many answers on Blender's Stack Exchange. With the effort of the community we have built quite a large knowledge base where this issue has been covered many times, which you also mentioned in your report.

Changed status from 'Needs Triage' to: 'Confirmed'

Added subscriber: @iss

I've confirmed the issue and marked it as *Known Issue* as discussed in the blender-coders channel.

Author

Added subscriber: @brecht

Author

Tagging @brecht as he seemed to be in charge of the Filmic implementation.

@rjg Btw. great tagging this as "Known Issue", just to ensure that this will never be fixed...

@tintwotin I was instructed by @iss to mark it as *Known Issue*, not my decision. He's in charge of the VSE according to the [module page](https://wiki.blender.org/wiki/Modules).

Added subscriber: @muhuk

What do you propose as the solution?

Author

@brecht I'll refrain from trying to come up with a solution, since you guys surely can come up with something better.

Releasing software with a default setting that adds extreme render times and basically breaks the colours of people's work shouldn't be an acceptable solution.

I think the solution is to make color management apply Filmic to scene strips and EXR images, then go to the sequencer color space, and after the sequencer is done, apply the standard transform.

So in practice this means:

  • Scene strips should get a color management transform from the scene linear role to the display, and then from the display to the sequencer role.
  • Image strips should get a setting to optionally apply this kind of transform as well, similar to what the View as Render option does on datablocks.
  • The render result buffer of the compositor should be tagged somehow as requiring only a standard transform for display.
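The chain in those three bullets can be sketched in a few lines of toy Python. The function names are illustrative only, not Blender's or OpenColorIO's API, and a plain sRGB encode stands in for the (Filmic) view + display transform:

```python
# Toy sketch of the proposed chain. The names below are illustrative
# only -- they are not Blender's or OpenColorIO's API -- and a plain
# sRGB encode stands in for the (Filmic) view + display transform.

def srgb_oetf(x: float) -> float:
    """Linear -> sRGB-encoded (stand-in for a display transform)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_eotf(v: float) -> float:
    """sRGB-encoded -> linear (inverse of srgb_oetf)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def scene_strip_to_sequencer(pixel_linear: float) -> float:
    # Bullet 1: scene linear role -> display, then display -> the
    # sequencer role (here the sequencer space is display-referred,
    # so the second hop is the identity).
    return srgb_oetf(pixel_linear)

def sequencer_to_display(pixel_sequencer: float) -> float:
    # After the sequencer is done, only the Standard transform is
    # applied, so titles and screen recordings pass through unchanged.
    return pixel_sequencer

# A title drawn at 80% gray in the sequencer reaches the display at
# exactly 80% gray, while scene strips still get the film-like look.
assert sequencer_to_display(0.8) == 0.8
assert abs(srgb_eotf(scene_strip_to_sequencer(0.18)) - 0.18) < 1e-6
```

The point of the ordering is that adding a scene strip changes only that strip's pixels, never the transform applied to everything else in the edit.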

> In #86622#1130511, @brecht wrote:
> I think the solution is to make color management apply Filmic to scene strips and EXR images, then go to the sequencer color space, and after the sequencer is done, apply the standard transform.
>
> So in practice this means:
>
>   • Scene strips should get a color management transform from the scene linear role to the display, and then from the display to the sequencer role.
>   • Image strips should get a setting to optionally apply this kind of transform as well, similar to what the View as Render option does on datablocks.
>   • The render result buffer of the compositor should be tagged somehow as requiring only a standard transform for display.

This would result in the color management option doing nothing, and duplicating the color management settings into the VSE could cause similar confusion.

IMO it would be better to duplicate the view transform option for the sequencer and group its settings so they are distinguished, but in a common area with other similar settings. Then only the sequencer would require different treatment in the render pipeline, but I guess that should be possible?

I don't necessarily mind if there is an additional view transform setting for the sequencer, but what does that mean exactly? Does it use one or the other depending on whether any sequencer strips exist? Or can view transforms still be applied at the scene/image strip level?

I think part of the problem we should solve is that you might want to put e.g. titles over a scene strip, have Filmic applied to the scene strip but Standard for the titles. And even when not doing that, from a usability point of view I think it's better if adding a scene strip does not suddenly alter what the render result looks like.

This is a good point. What I was thinking was that the view transform for the sequencer would be set to Standard, but you would still have to be able to do color management on individual strips. So that wouldn't really solve anything.

Added subscriber: @troy_s

This is looking at the problem the wrong way in my estimation.

  1. All footage has its own encoding and that encoding must be honoured.
  2. Lack of proper management in the VSE has been a problem forever.
  3. The idea of using the sRGB transfer function for all footage is dumb as dirt.

Properly manage the editor. It’s that simple.

Small studios use Blender, and they are coming to the DCC with camera footage, as well as their own renders.

Design the VSE to handle the footage encodings properly.

  1. Take all inputs to half float.
  2. Manage to the working space chosen like all DCCs on the planet.
  3. Perform the manipulations in the working space.
  4. Provide access to the colour transforms like proper DCCs do.

This is long, long overdue.
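Read as a pipeline, the four steps above amount to: decode each input according to its own encoding into one working space, do the manipulation there, and encode for the display only at the end. A minimal sketch (plain Python with toy sRGB transfer functions; the `DECODERS` table is hypothetical, not an actual Blender colorspace list):

```python
# Minimal sketch of the pipeline outlined above: decode every input to
# one linear working space, manipulate there, encode for the display
# at the end. The DECODERS table is hypothetical -- it is not an
# actual Blender colorspace list.

def srgb_eotf(v: float) -> float:
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def srgb_oetf(x: float) -> float:
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

DECODERS = {
    "sRGB": srgb_eotf,      # camera/screen footage, display-referred
    "Linear": lambda v: v,  # e.g. an EXR render, already scene linear
}

def dissolve(a, a_space, b, b_space, fac):
    """Cross-fade two pixels, each honouring its own encoding."""
    la, lb = DECODERS[a_space](a), DECODERS[b_space](b)
    mixed = (1 - fac) * la + fac * lb  # blend in linear light
    return srgb_oetf(mixed)            # encode once, at the end

# Blending in linear light differs measurably from naively blending
# the encoded values, which is why the working space matters:
naive = 0.5 * 0.2 + 0.5 * 0.8
managed = dissolve(0.2, "sRGB", 0.8, "sRGB", 0.5)
assert abs(naive - managed) > 0.01
```

The same structure extends to camera encodings beyond sRGB: each footage type contributes its own decoder, and the manipulation code never changes.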

@troy_s, am I correct in summarizing your point like this?

  • The highest priority in color management development in Blender should be providing more user control and full support for OpenColorIO configurations, not solving issues as raised here by @tintwotin about defaults.
> am I correct in summarizing your point like this?
>
> The highest priority in color management development in Blender should be providing more user control and full support for OpenColorIO configurations, not solving issues as raised here

In terms of “Blender Design”, and the rendering to creation model, **absolutely 120% in agreement with your summary**.

The VSE should be lensed under what is being done with it, and the Open Movies are a perfect example as to the types of encodings and manipulation thereof. If development focused on this much lower level issue, many smaller studios and creators would benefit tremendously.

Author

@brecht A week ago some kids proudly premiered their lock-down film, produced in Blender, on Blendernation. This film was greyed out because of the Filmic setting. If you look at animations and tutorials on YouTube, you can easily see which ones were edited in the VSE, because they're all greyed out. This has been going on for years. A solution should have been planned and implemented right when Filmic was set up as the default View Transform, or that setting shouldn't have been changed. That is what a responsible developer would have done, not sweeping it under the rug, or trying to end the conversation by asking Troy if he thinks anything is more important than what's on Troy's mind...

I just wanted to check if @troy_s was giving relevant feedback that would affect my proposed solution, or if he was making a point about prioritization. I don't actually agree with him about prioritization.

> This film was greyed out because of the filmic setting. If you look at animations and tutorials on Youtube, you can easily see which ones were edited in the VSE, because they're all greyed out.

Like all the other analysis you offer, this is false.

Here’s the thing Peter — Blender handles encodings.

If they made the work in Blender via rendering, the VSE is supposed to honour the rendering. But because it was designed years ago, and continues to be kludged along with a broken design, it doesn’t.

In the end, the VSE’s complete design is a broken shambles, *not proper handling of encodings*.

![VSE-blueprint.jpg](https://archive.blender.org/developer/F9896674/VSE-blueprint.jpg)

This task is about a specific problem regarding defaults and usability, and you're not saying how it would be addressed.

> and you're not saying how it would be addressed

Seems problematic to consider "defaults and usability" when the crux of the issue is mishandling of encodings and, as a result, pixel handling?

Flipping a default that results in mishandling doesn't seem like the most prudent approach, nor does it make much sense to change a transform that is set and assumed elsewhere in the pipeline such as coming from rendering?

For example, a user might edit screen recordings to make a tutorial. In that case they do not want a film-like view transform to be used, but they are not aware of this and get bad results. I don't see how that is caused by "mishandling of encodings", but it's a pretty vague statement so I'm not sure.

For you the most important color management issue in the sequencer might be something else than what is being discussed in this task, that's fine, but it's not relevant here.

> For example, a user might edit screen recordings to make a tutorial. In that case they do not want a film-like view transform to be used

This is already accounted for. See diagram.

It is a conflation of mishandling encodings, which is specifically related to unfortunate defaults.

I saw the diagram, I'm saying it does not address the usability issue. The burden is on you to explain it clearly, if not I'm not going to invest more time in discussing this topic with you.

> The burden is on you to explain it clearly

The compositing of mixed imagery would be assumed to be based on a specific encoding. Assuming a generic BT.709-ready content stream from a typical camera, or similarly sRGB, adding an effect that provides OpenColorIO functionality allows the content to be decoded to the same domain as the assumed open working space, or lets both be composited in a closed-domain display linear working space.

tintwotin changed title from VSE: View Transform -> Filmic as default is breaking "most" VSE user's output to View Transform -> Filmic as default setting is miscoloring and slowing down VSE exports 2021-04-15 18:35:41 +02:00
tintwotin changed title from View Transform -> Filmic as default setting is miscoloring and slowing down VSE exports to View Transform -> Filmic adds 333% to VSE render time and miscolors the result. 2021-04-15 18:48:53 +02:00

@brecht I have done some research on this topic, and I think there are still some issues with the approach you proposed. Please correct me if I misunderstood something or did something incorrectly.

>   • Image strips should get a setting to optionally apply this kind of transform as well, similar to what the View as Render option does on datablocks.

I have hard-coded this part, see [P2089](https://archive.blender.org/developer/P2089.txt), and ensured the correct final view transform manually. Then I created sample scenes, rendered EXRs and did very simple alpha-over compositing in the VSE. The scene with objects is a simple table with a candle, a reflection of the sun (it's more of a moon though) and a PC monitor displaying a random image. The other scene is a neutral gray filter. I wanted to make sunglasses, but my modelling skills are bad.

Here you can see the two scenes and the composite, in the current VSE and modified as per your suggestion:

![compare.png](https://archive.blender.org/developer/F10039181/compare.png)

There are differences in the color of the candle flame, the sun reflection is a bit smudged, and the image displayed on the monitor seems flatter.

I can't tell whether this is tolerable for the VSE; in this regard my decisions are based only on technical correctness as far as I can understand it. Such simple compositing would apply to any overlay, and to me it seems that it should work.

> In #86622#1131633, @troy_s wrote:
> The compositing of mixed imagery would be assumed to be based on a specific encoding. Assuming a generic BT.709 ready content stream from a typical camera, or sRGB such, adding an effect that provides OpenColorIO functionality allows for the content to be decoded to the same domain as the open assumed working space, or composite both in a closed domain display linear working space.

Currently all images should be decoded to the sequencer working space. Wouldn't it make sense to assume BT.709 encoding for all sRGB images unless the encoding is specified explicitly?
If I understand you correctly, the core of this problem is that you can't specify the encoding of an sRGB image, is that correct?

> In #86622#1152369, @iss wrote:
> I have hard-coded this part, see [P2089](https://archive.blender.org/developer/P2089.txt), and ensured correct final view transform manually. Then I created sample scenes, rendered EXR's and did very simple alpha-over compositing in VSE. Scene with objects is simple table with candle, reflection of sun (it's more a moon though) and PC monitor displaying random image. Another scene is neutral gray filter. I wanted to make sunglasses, but my modelling skills are bad.

Right, this doesn't work by itself. There would need to be a way to apply the view transform while keeping the image in scene linear space. The problem is that Filmic currently has the view transform and display transform baked into one. A workaround would be to chain transforms to go back to scene linear through the default transform, but the proper fix would be to improve the OpenColorIO config to be able to do this directly.
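The chaining workaround can be illustrated with toy transforms — here a Reinhard curve plus a gamma-2.2 display encode stand in for Filmic; this is not Blender's actual OCIO config:

```python
# Sketch of the chaining workaround: the desired view transform is
# baked together with a display encode, so to get a view-transformed
# image that is still linear we apply the full baked transform and
# then invert the Standard (display-only) part. Toy curves only --
# not Blender's actual Filmic implementation.

def baked_view_plus_display(x: float) -> float:
    tone_mapped = x / (1.0 + x)      # stand-in tone curve (Reinhard)
    return tone_mapped ** (1 / 2.2)  # baked-in display encode

def invert_standard_display(v: float) -> float:
    return v ** 2.2                  # undo the display-only transform

def view_transform_keep_linear(x: float) -> float:
    # Chain: full baked transform, then back through the inverse of
    # the default display transform -> tone mapped, but linear again.
    return invert_standard_display(baked_view_plus_display(x))

# The display step cancels out, leaving only the tone curve:
assert abs(view_transform_keep_linear(4.0) - 4.0 / 5.0) < 1e-9
```

With a real config the display part would not be a clean power function, which is why the proper fix is to split the view and display transforms in the OCIO config itself rather than round-tripping.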

> In #86622#1152373, @brecht wrote:
> Right, this doesn't work by itself. There would need to be a way to apply the view transform while keeping the image in scene linear space. The problem is that Filmic currently has the view transform and display transform baked into one. A workaround would be to chain transforms to go back to scene linear through the default transform, but the proper fix would be to improve the OpenColorIO config to be able to do this directly.

No, the approach works as you proposed; my point rather was that after you apply the Filmic view transform, compositing and other operations don't work as they should, because pixel values are no longer linear.

I guess a way around this would be to composite in linear space until you are requested to composite with a non-linear image, but this seems rather fragile to me.

If there is some mostly universal transformation that could be applied to sRGB images to bring them closer to the scene linear colorspace, that would be the best way to solve this IMO. Even if it weren't exact, it could be a good enough solution for users who don't care that much about the exact color profile of their source media.

> Currently all images should be decoded to sequencer working space. Wouldn't it make sense to assume BT.709 encoding for all sRGB images unless encoding is specified explicitly?

Depends on the EXRs, no? Imagine rendering in a wide gamut in Cycles X, and popping them into the VSE for proper handling / dissolving / and colourist work. Why would it assume BT.709?

> If I understand you correctly, the core of this problem is that you can't specify the encoding of an sRGB image, is that correct?

If it’s an sRGB image, it needs to be honoured.

The problem is that renders aren’t. They are radiometric-like RGB coming out of the path tracer. The compositing should match either an open domain composite or a closed, image maker choice depending. The primaries need to be honoured. The colour grading secondaries also need to composite accordingly.

compositing and other operations doesn't work as they should, because pixel values are no longer linear.

Bingo. And remember, there are two linear domains that are equally valid with equal creative facets; open domain, like that from a render, and closed domain linear, such as a signal compressed and ready for display.

Then factor in the primaries issues. It’s not just about the two domains of radiometric-like linear light.

I guess way around this would be to composit in linear space until you are requested to composite with non-linear image, but this seems rather fragile to me.

It is. Doesn’t work.

If there is some mostly universal transformation that could applied to sRGB images to bring them closer to scene linear colorspace

This is a woeful, albeit seductive, idea.

Just assume it’s an open movie render, and assume the image maker wants to grade it, dissolve it, and interact with it. None of that works without addressing the core fundamentals properly, without hacks.
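A toy illustration (plain scalar Python, not Blender's actual code path) of why values that are "no longer linear" composite differently: a 50/50 dissolve between black and white gives different results depending on whether you mix the encoded values or the linearized ones. The sRGB transfer functions are the standard ones; the dissolve itself is a made-up example:

```python
# Standard sRGB transfer function and its inverse (IEC 61966-2-1).
def srgb_to_linear(v):
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0  # sRGB-encoded values

# Naive: average the encoded values directly.
naive = (black + white) / 2

# Managed: decode, average in linear light, re-encode for display.
managed = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)

# The two disagree, which is why a dissolve done on non-linear
# pixel values looks different from one done on linear light.
print(round(naive, 3), round(managed, 3))
```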


Bad idea. You just broke colour management for everyone on a display that isn’t sRGB. Why not address the problem?


> In #86622#1152506, @troy_s wrote:
>> If I understand you correctly, the core of this problem is that you can't specify the encoding of an sRGB image, is that correct?
>
> If it’s an sRGB image, it needs to be honoured.
>
> The problem is that renders aren’t. They are radiometric-like RGB coming out of the path tracer. The compositing should match either an open-domain composite or a closed one, depending on the image maker’s choice. The primaries need to be honoured. The colour grading secondaries also need to composite accordingly.

Right, this report isn't about renders though. The workflow may apply, but I can't check that currently, because I am not sure I understand the workflow correctly.

>> If there is some mostly universal transformation that could be applied to sRGB images to bring them closer to the scene-linear colorspace
>
> This is a woeful, albeit seductive, idea.
>
> Just assume it’s an open movie render, and assume the image maker wants to grade it, dissolve it, and interact with it. None of that works without addressing the core fundamentals properly, without hacks.

AFAIK you still need some pre-defined profiles to do this, which are currently missing. Am I correct?

Can I ask you about a particular case? It may be dumb, but it is my artistic need. Consider the following image:
![Bez názvu.png](https://archive.blender.org/developer/F10051970/Bez_názvu.png)

I want to overlay this image over my work, but the white area is not white; it's gray, as the pixel value in linear space is only 1.0. What could I do if Blender had all the tools it needs? Or, in other words, what tool is missing to achieve this?


> I want to overlay this image over my work, but the white area is not white; it's gray, as the pixel value in linear space is only 1.0.

Which is exactly why you manage things. That “white” might actually be magenta. You don’t know, and can’t know, from the encoding values.

The proper way to have it all work is *to simply manage the system light data*.

What happens if it is managed?

1. Your smiley face is loaded. You specify the encoding. The encoding is taken to the working space properly.
2. You do your over. It is composited properly.
3. You play it back on your Display P3 device and it renders properly. Or your BT.1886 device. Or your sRGB device.

Compare that to what happens now and you can probably see why pixel management matters.
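The three steps above can be sketched with plain scalar math. This is a hypothetical stand-in for what Blender/OpenColorIO would do, using an sRGB display as the target; a P3 or BT.1886 display would substitute its own encoding in step 3:

```python
def srgb_to_linear(v):
    # Step 1: decode the specified encoding (sRGB here) into the
    # linear working space (IEC 61966-2-1 transfer function).
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def over(fg, fg_alpha, bg):
    # Step 2: alpha-over composite on linear-light values.
    return fg * fg_alpha + bg * (1.0 - fg_alpha)

def linear_to_srgb(v):
    # Step 3: encode for the target display (an sRGB device here).
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

fg = srgb_to_linear(1.0)   # the smiley's "white" pixel, linearized
bg = srgb_to_linear(0.5)   # some background pixel, linearized
out = linear_to_srgb(over(fg, 1.0, bg))
print(round(out, 3))       # fully opaque white stays white: 1.0
```

With `fg_alpha = 1.0` the encoded white survives the round trip intact, which is the behaviour the smiley-face overlay needs; partial alphas then blend in linear light rather than on encoded values.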


This issue was referenced by 9576612d45


Changed status from 'Confirmed' to: 'Resolved'

Richard Antalik self-assigned this 2021-05-06 03:28:58 +02:00

I forgot to close this task by commit - fixed in 9576612d45


Added subscriber: @intracube

Reference: blender/blender#86622