
Cycles Color space flexibility
Open, Normal, Public

Description

This task is mainly for following up on the discussion regarding Cycles Color management (http://lists.blender.org/pipermail/bf-committers/2015-November/046131.html).
Essentially, Cycles has color management issues in two places:

  • Blackbody, Wavelength and Sky nodes
  • Byte textures

The nodes need a way to convert XYZ to the scene linear space, while byte textures need a way to convert whatever color space they are in to the scene linear space.
XYZ to scene linear should be a simple 3x3 matrix, while the texture handling will need LUTs (due to non-linear encoding).
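As a sketch of that node-side conversion (the matrix values below are the standard CIE XYZ to linear sRGB/Rec.709 D65 ones, used only as an example; a real implementation would pull the matrix from the OCIO config):

```python
# Sketch: converting node output from CIE XYZ to a scene linear space
# via a 3x3 matrix. The hardcoded matrix is the standard
# XYZ -> linear sRGB/Rec.709 (D65) one; in the actual patch the matrix
# would come from the OCIO config instead.

XYZ_TO_REC709 = (
    ( 3.2404542, -1.5371385, -0.4985314),
    (-0.9692660,  1.8760108,  0.0415560),
    ( 0.0556434, -0.2040259,  1.0572252),
)

def xyz_to_scene_linear(xyz, m=XYZ_TO_REC709):
    """Apply a 3x3 XYZ -> RGB matrix to an XYZ triple."""
    x, y, z = xyz
    return tuple(r[0] * x + r[1] * y + r[2] * z for r in m)

# Sanity check: D65 white maps to (1, 1, 1) in a D65-adapted space.
```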

I could do the Cycles-side coding, and @Troy Sobotka (sobotka) has agreed to update the OCIO configs.
Any thoughts/problems/suggestions/improvements?

Details

Type
Design

Event Timeline

There are two separate topics here: one is arbitrary linear working spaces, and the other is improving color space support for image textures.

I assumed only the first would be tackled now, in which case we can keep assuming the byte textures are in sRGB color space, and no LUTs are needed, just a 3x3 matrix from linear sRGB to the working space.

If the plan is to improve color space support for image textures as well, then you either need LUTs for conversion in the kernel, or we can do conversion to linear (half?) float outside the kernel, at the cost of increased memory usage. Non-sRGB byte textures are probably quite rare in practice, so it might not be worth the complexity to deal with LUTs in the kernel.

If the plan is to improve color space support for image textures as well, then you either need LUTs for conversion in the kernel, or we can do conversion to linear (half?) float outside the kernel, at the cost of increased memory usage.

I wonder about the latter suggestion, as it solves a number of issues in potentially many places across Blender, so it may very well be worth the investment to lay a solid foundation.

16-bit half floats, all technical issues aside (lack of bare-metal support, etc.), solve some issues regarding painting, the VSE, and a number of other places where proper blending requires a linearized reference space that the shallower 8 BPC can't hold. I know @Antony Riakiotakis (psy-fi) and I have chatted about a 16-bit half float that is always transformed to reference as an ideal solution for performance-intensive operations, so it seems there is a large overlap with this sort of issue here.

Also, regarding reference space issues within Cycles, it makes for a very tough sell not to make it entirely colourspace agnostic. The reason is that it would be extremely complex to add merely, say, a wide gamut reference space via OCIO and then deal with the transformation of sRGB-based textures to that wide gamut.

Dead simple within OCIO, but not so simple in some sort of hybrid / custom nightmare.

Non-sRGB byte textures are probably quite rare in practice, so it might not be worth the complexity to deal with LUTs in the kernel.

I believe that the vast majority of asset collection is done using DSLRs, which are typically shot using the default AdobeRGB colour space transform for the widest default gamut. Of course, you can also use a custom transform to eke out every possible bit of gamut from the DSLR.

Just dawned on me that we can't leave the sRGB transfer curve hardcoded, as it makes no sense.

If we set our reference space to ACEScg or DCI-P3, and we load an sRGB 8 bit image in as a texture, we would still need to convert the primaries, which involves a matrix.
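A concrete sketch of that matrix step (assuming the standard sRGB/Rec.709 D65 matrices; a wide gamut target like ACEScg or DCI-P3 would simply use its own XYZ matrix instead):

```python
# Sketch: a primaries conversion is just composed 3x3 matrices,
# linear source RGB -> XYZ -> target RGB. Here the "target" is Rec.709
# again, so the composition is (near) identity; a wide gamut reference
# would substitute its own XYZ -> RGB matrix.

SRGB_TO_XYZ = (
    (0.4124564, 0.3575761, 0.1804375),
    (0.2126729, 0.7151522, 0.0721750),
    (0.0193339, 0.1191920, 0.9503041),
)
XYZ_TO_REC709 = (
    ( 3.2404542, -1.5371385, -0.4985314),
    (-0.9692660,  1.8760108,  0.0415560),
    ( 0.0556434, -0.2040259,  1.0572252),
)

def matmul3(a, b):
    """3x3 matrix product."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3))
        for i in range(3)
    )

def apply3(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))

# One fused matrix per source space is all the kernel needs to carry.
PRIMARIES = matmul3(XYZ_TO_REC709, SRGB_TO_XYZ)
```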

I don't see any way around this without fixing the 8 bit junk. Promote to 32 bit and roll through OCIO is the only option currently.

A hardcoded sRGB curve might make sense as a memory / speed optimization for common cases. Maybe LUTs are simple and fast enough and it doesn't matter. But it's really not complicated to make the kernel generally agnostic, and then optimize some specific common cases if it turns out there's a performance regression.

In any case, byte textures must stay supported one way or another, because the memory increase to half float or float is not acceptable for GPU rendering.

In OpenGL there are so-called sRGB textures, though I'm not sure if they are supported in CUDA too. These are supposed to do automatic conversion to linear RGB in hardware.
Textures don't need to be converted to float16 before uploading: the conversion can be done in the shader itself. The linear values are temporary and just used for the lighting calculation. Then we only need to convert the final shaded result, which is stored in linear float format, to sRGB. There is no loss of precision in this case: sRGB byte to linear float is a lossless operation.
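That losslessness claim is easy to verify with the standard piecewise sRGB decode (IEC 61966-2-1): every one of the 256 byte values maps to a distinct float, so no information is discarded.

```python
# Sketch: the standard sRGB byte -> linear float decode. Since the
# curve is strictly increasing, all 256 byte codes map to distinct
# float values, i.e. the conversion loses no information.

def srgb_byte_to_linear(b):
    c = b / 255.0
    # Piecewise sRGB EOTF: linear toe below 0.04045, power segment above.
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
```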

The issue comes when the intermediate results of those operations get converted to a lower-resolution buffer. Those buffers, if persistent, should stay in higher precision and linear space to ensure that further operations do not introduce cumulative errors. This is where painting and the sequencer suffer.

Texture assets/source videos though, can be left in original space and converted as needed. It's a classic speed/memory tradeoff of course.

Okay, here's the first version of the first part: Making Cycles respect the specified scene_linear colorspace instead of using hardcoded sRGB/Rec709 transforms.
Essentially, this changes the luminance calculation and the parts of the code that depend on XYZ color (Sky texture, Wavelength node, Blackbody node).
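The luminance part falls out of the same matrix data: the correct per-channel weights are the Y (middle) row of the scene_linear-to-XYZ matrix, rather than hardcoded Rec.709 constants. A sketch (Rec.709 numbers used only as example data):

```python
# Sketch: deriving luminance weights from the scene_linear -> XYZ
# matrix instead of hardcoding Rec.709 values. The Y (middle) row of
# that matrix gives the weights. The matrix below is the standard
# linear sRGB/Rec.709 -> XYZ one, standing in for whatever OCIO reports.

REC709_TO_XYZ = (
    (0.4124564, 0.3575761, 0.1804375),
    (0.2126729, 0.7151522, 0.0721750),  # Y row = luminance weights
    (0.0193339, 0.1191920, 0.9503041),
)

def luminance(rgb, rgb_to_xyz=REC709_TO_XYZ):
    wr, wg, wb = rgb_to_xyz[1]
    r, g, b = rgb
    return wr * r + wg * g + wb * b
```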

To get this info, an XYZ->scene_linear matrix is obtained (first patch, credit for that one goes to @Brecht Van Lommel (brecht)). This matrix and its inverse are then stored in the Film structs (second patch). During rendering, this info is then used to get correct behaviour regardless of the scene_linear space (third patch).

One point that still worries me is the use of average() to get e.g. sampling weights, since colors that are out-of-gamut in scene_linear will produce negative components (for example, the new blackbody code might produce XYZ colors that are outside of the scene_linear gamut). The easy approach here would be to clip everything to the scene_linear gamut and expect the user to pick a space that is large enough for their needs (like ACEScg or Rec.2020).
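To illustrate the concern: a saturated XYZ color can land outside the working gamut, giving a negative channel after the 3x3 conversion (standard XYZ -> linear Rec.709 matrix used here as the example space; `clip_to_gamut` is just the "easy approach" sketched above):

```python
# Sketch: a saturated XYZ color converted to a narrow scene_linear
# gamut produces a negative component, which would skew average()-based
# sampling weights. Clipping to the gamut is the simple workaround.

XYZ_TO_REC709 = (
    ( 3.2404542, -1.5371385, -0.4985314),
    (-0.9692660,  1.8760108,  0.0415560),
    ( 0.0556434, -0.2040259,  1.0572252),
)

def xyz_to_rec709(xyz):
    x, y, z = xyz
    return tuple(r[0] * x + r[1] * y + r[2] * z for r in XYZ_TO_REC709)

def clip_to_gamut(rgb):
    # The "easy approach": clamp negative components to zero.
    return tuple(max(0.0, c) for c in rgb)

def average(rgb):
    # Sampling-weight style average; negative components can skew it.
    return sum(rgb) / 3.0

# A saturated greenish XYZ color that falls outside the Rec.709 gamut:
# its red channel comes out negative after the matrix.
out_of_gamut = xyz_to_rec709((0.2, 1.0, 0.6))
```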

Textures are currently still assumed to be sRGB (with or without the transfer curve applied), this will be handled properly in the second part (hopefully coming soon).



Essentially, what this patch provides is that no matter which scene_linear you use, you won't get weird results. As an extreme test, I included the "Linear BGR" colorspace, which is just the current Linear colorspace with the R and B components swapped. This might seem contrived, but if that space fails, all non-sRGB/Rec709 spaces will.
When testing in 2.76, using the Blackbody or Wavelength node will give totally wrong results when scene_linear is set to "Linear BGR" in the OCIO config. With this patch, you can see that if you render a reddish Blackbody with "Linear BGR" and look at the color values in the image editor, the "regular" values (displayed left) will show a blueish color (since the channels are swapped), but the color-corrected values at the right and the image itself will be a reddish color, just as it should be.
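The swap test can be sketched in a few lines (helper names here are illustrative, not patch code):

```python
# Sketch of the "Linear BGR" sanity check: the working space is linear
# Rec.709 with R and B swapped. A color converted into that space looks
# "wrong" as raw channel values, but converting back out for display
# recovers the original color, which is what the patch guarantees.

def to_linear_bgr(rgb):
    r, g, b = rgb
    return (b, g, r)

to_display = to_linear_bgr  # the channel swap is its own inverse

reddish = (0.8, 0.2, 0.1)
stored = to_linear_bgr(reddish)  # raw values look blueish
```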

Brecht Van Lommel (brecht) moved this task from Backlog to UI on the Cycles board. Feb 21 2016, 2:49 PM

Soo, after a long while, here's an improved version of the patch, now also with proper color management for textures!

The commit messages should explain it pretty well. Essentially, the host side extracts a transfer curve and a matrix for every texture color space and applies them while rendering, which should work just as well on GPUs (I actually haven't tested GPU support yet).
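A rough sketch of that per-texture decode (the sRGB curve and identity matrix here are example data, not the patch's actual tables):

```python
# Sketch of the per-colorspace decode described above: the host extracts
# a transfer curve (here a 256-entry LUT) and a 3x3 matrix for each
# texture color space; while rendering, the kernel applies the curve
# first, then the matrix.

def srgb_decode(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

SRGB_LUT = [srgb_decode(b / 255.0) for b in range(256)]
IDENTITY = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))

def decode_texel(byte_rgb, lut=SRGB_LUT, matrix=IDENTITY):
    # Per-channel transfer curve, then primaries matrix.
    lin = [lut[b] for b in byte_rgb]
    return tuple(
        sum(matrix[i][k] * lin[k] for k in range(3)) for i in range(3)
    )
```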

Here are the patches, to be applied on top of rB4e4ff72d:




@ the devs: Do you think the patches are ready for review? If yes, should I post them incrementally or squashed?

@Lukas Stockner (lukasstockner97), if you submit it as a differential revision it'll be easier to apply and test.

From a quick glance over the patches, I'm not really convinced by all this.

Exposing color management calls via render API is kind of wrong:

  • It was already requested to have color utils in bpy. That's also somewhat questionable, but it's less bad than doing it just for a render API.
  • It limits color space handling to Cycles inside of Blender only, while even standalone Cycles should have support for this feature. Surely Cycles is mainly for Blender, but at this point it's nice to think of a solution which will be flexible enough for every user of Cycles.

Colorspace management for image textures is also wrong:

  • It looks like you're adding per-node colorspace settings, which is bad by design. It is up to the image datablock to define how exactly it is to be used. You should never "override" color management settings set in the datablock.
  • So basically you should just remove any color management specific settings from the node and use Image datablock settings instead.

However, making Cycles aware of Image datablock colorspace settings is like re-implementing Blender's color management pipeline in Cycles itself. It is almost easier to simply drop OIIO and use ImBuf directly (similar to packed textures). That would:

  • Avoid duplicated color management on both sides (Blender and Cycles), making sure color management also happens correctly (does the patch even respect the alpha mode of the image?)
  • (Hopefully) make it easier to fit into other applications' color management.

These are all more global questions, but I'd rather see a proper solution. Just whacking in per-node colorspace settings is not the right solution here at all.

  • Yes, moving the color stuff out of the Render API makes sense - I'll try to figure out how to do that ;)
  • I don't really see how this patch causes problems for using Cycles in other tools or as standalone - just supply a hardcoded XYZ->Rec.709 matrix and a color-conversion callback that always applies sRGB, and you have the old behaviour.
  • Of course the image colorspace should be used, I somehow forgot about it... I just went for the node since it had the "Non-Color" vs. "Color" option before. One issue here might be workflow: If you load an image directly via the open button in the node, you'll have to switch to the Image Editor just to set the color space...
  • I haven't really tested alpha yet
  • Where is color management duplicated? The host application (Blender, Cycles standalone or whatever) provides some information and Cycles uses it. There's no OCIO to be found in Cycles at all - although it might be interesting to add it to Cycles Standalone at some point.
  • As for fitting into other applications - Cycles doesn't really care what the color space index means, it's just supplied by the host application and used as an argument for the color conversion callback in the ImageManager. So, if you have e.g. three hardcoded colorspaces in your application, just have three hardcoded XYZ conversion matrices and use 0, 1 or 2 as the index.
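A minimal sketch of that index-plus-callback idea (all names here, like `ImageManagerStub` and `register_colorspace`, are illustrative, not actual Cycles API):

```python
# Sketch of the proposed design: Cycles treats the colorspace as an
# opaque index; the host application registers a conversion callback
# per index, and the image manager only ever calls through that table.

class ImageManagerStub:
    def __init__(self):
        self._convert = {}

    def register_colorspace(self, index, to_scene_linear):
        self._convert[index] = to_scene_linear

    def load_texture(self, pixels, colorspace_index):
        # Cycles doesn't care what the index means; it just dispatches.
        convert = self._convert[colorspace_index]
        return [convert(p) for p in pixels]

# Host side: index 0 = already scene linear, index 1 = sRGB-encoded.
def srgb_decode(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

mgr = ImageManagerStub()
mgr.register_colorspace(0, lambda c: c)
mgr.register_colorspace(1, srgb_decode)
```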

Yes, moving the color stuff out of the Render API makes sense - I'll try to figure out how to do that ;)

That might be a bit involved since those bindings are not covered by C++ RNA. But if we figure that out then we'll have a really nice system :)

I don't really see how this patch causes problems for using Cycles in other tools or as standalone - just supply a hardcoded XYZ->Rec.709 matrix and a color-conversion callback that always applies sRGB, and you have the old behaviour.

How would you render a scene which is not in Rec.709 linear space with the standalone? If we didn't care about making Cycles a more mature render engine, offering builds for download and so on, we could just ignore this fact. But we do kind of worry about it now, so it's worth investigating all possible ways.

It is kind of a question whether we want it to be callback-based or just make Cycles aware of OCIO itself. Both ways have pros and cons.

Like, if Cycles uses OCIO, then some bits would be duplicated with Blender (config open, roles handling?) but it would allow dealing with all colorspaces in the standalone and other applications. On the other hand, other applications might not care about OCIO at all and not even use it.

So what I mean is, some thoughts are needed on what's the best way to go for us here. And at the least, I think we should agree on what we do now (to improve Blender itself, and how we can make it work for everyone in a nice way).

Of course the image colorspace should be used, I somehow forgot about it... I just went for the node since it had the "Non-Color" vs. "Color" option before. One issue here might be workflow: If you load an image directly via the open button in the node, you'll have to switch to the Image Editor just to set the color space...

You just open the properties panel in the node editor and make the color space changes there.

Where is color management duplicated? The host application (Blender, Cycles standalone or whatever) provides some information and Cycles uses it. There's no OCIO to be found in Cycles at all - although it might be interesting to add it to Cycles Standalone at some point.

ImBuf is already doing all the color space conversion, making sure the alpha mode is correct and so on. Duplicated in the sense that the ImageManager will be doing exactly the same thing to ensure images are in a compatible colorspace/alpha mode as is currently implemented in ImBuf.

Additionally, packed images are always screwing things up.

Well, my design suggestion would be to keep OCIO out of the Cycles core (render/, kernel/ etc.).

  • For Blender, the blender/ code uses RNA functions to supply the needed values.
  • For Cycles standalone, the app/ code could include OCIO.
  • For other applications that use Cycles, well, they can either attach their own color management or use an sRGB fallback.

That way, we can have full OCIO CM support in Cycles standalone, while avoiding any duplication when compiling with Blender, and other applications aren't forced to have OCIO as a dependency.

You just open the properties panel in the node editor and make the color space changes there.

Oh, nice, I didn't know that.

That might be a bit involved since those bindings are not covered by C++ RNA. But if we figure that out then we'll have a really nice system :)

Hm, well, if it's more complex, I'd suggest leaving it in the Render API for now - the functions are basically just wrappers, and moving them later once we've figured it out shouldn't be too hard ;)

ImBuf is already doing all the color space conversion, making sure the alpha mode is correct and so on. Duplicated in the sense that the ImageManager will be doing exactly the same thing to ensure images are in a compatible colorspace/alpha mode as is currently implemented in ImBuf.
Additionally, packed images are always screwing things up.

Well, we can't just let ImBuf do all the work without either using higher bit depths (and more memory) or losing significant amounts of precision (after all, the texture would be linearized as well...).
Also, with the current code, I don't see where packed images are a problem?

Oh, and something else I forgot to mention: OSL is pretty much broken currently. Either we'd need a way to call the colorspace conversion from OSL code (impossible afaics) or do it right in the texture code, but how to pass the colorspace index there?

The design might work, but having easier to follow patch will help a lot. Please create differential revision for that.

Hm, well, if it's more complex, I'd suggest leaving it in the Render API for now - the functions are basically just wrappers, and moving them later once we've figured it out shouldn't be too hard ;)

There's no such thing as "for now" in the Python API. We shouldn't put anything into it knowing that it'll be completely changed in the next release.

Better "for now" policy would be to use direct cal, similar to BKE_image* in blender_utils.h.

I don't see where packed images are a problem?

The problem with them is that they've already got a fair amount of behavior differences in Cycles, and there are cases where packing an image will cause a different render result.

In this particular case, imagine you're using a float texture in ACES space and the scene linear space is Rec.709. If the image is unpacked, you'll need to perform linearization from ACES to Rec.709 on the Cycles side. However, if the image is packed, it is already in the scene linear space (since it was linearized on ImBuf load) and nothing needs to be done on the Cycles side.

But yet again, byte images are never linearized by ImBuf, so render-time linearization will need to be done.

Oh, and something else I forgot to mention: OSL is pretty much broken currently. Either we'd need a way to call the colorspace conversion from OSL code (impossible afaics) or do it right in the texture code, but how to pass the colorspace index there?

You can do a similar trick as we've got with slots for packed images. Kinda crap though.

The way OSL is intended to deal with this is by using OIIO itself to take care of colorspaces (for that, OIIO needs to be compiled with OCIO support).