
Cycles: Use proper XYZ <-> Scene Linear conversion instead of assuming sRGB
Needs Revision · Public

Authored by Lukas Stockner (lukasstockner97) on May 8 2016, 1:18 AM.


Reviewer: Brecht Van Lommel (brecht)

This is the first part of the Cycles color management implementation. It still has some ToDos
and areas to be cleaned up, but it should be ready for general review now.

More details on this patch can be found in T46860, this diff contains a cleaned-up version of
the first three patches.



Event Timeline

Lukas Stockner (lukasstockner97) retitled this revision from to Cycles: Use proper XYZ <-> Scene Linear conversion instead of assuming sRGB.
Brecht Van Lommel (brecht) requested changes to this revision.May 8 2016, 3:04 AM
Brecht Van Lommel (brecht) added inline comments.

I like the term 'color-critical' :). But yes, Blender's vertex color convention is still sRGB.


This function is not defined anywhere either? Is that part of another patch or was it supposed to be included in this one?


NUM_COLORSPACES is not defined anywhere; I guess you forgot to include a file?


This is the code path for the case where there is no GLSL / half-float support (old GPUs), which will be removed in Blender 2.8.

Don't worry about making this work with arbitrary display transforms, hardcoding sRGB here is fine.


Is this a TODO or did you actually manage to simplify the code this much?

This revision now requires changes to proceed.May 8 2016, 3:04 AM

While we are at this patch, the default config should be chucked out. The ACES bits are entirely wrong, the transforms copied over from the legacy SPI pack are completely wrong in this context, and there are a hundred other errors.

Lukas has a cleaned OCIO configuration in his GitHub branch, and it would at least be a beginning of fixing this abomination.


That looks like a transfer curve only, which will be problematic when displayed on anything with alternate primaries, including every new-generation Apple display.


Will it?

How about P3 or any other display such as a broadcast display?

Let's not tie a revised OCIO config into this code review, one thing at a time.


Mesh vertex colors by convention use the same primaries as e.g. material colors, but with an sRGB transfer curve to compress them to 8 bit more efficiently.

Display spaces are irrelevant in this part of the code.


There might not even exist any GPU where this code runs now. It's an old fallback that is only used when the OCIO shader is not supported.

Lukas Stockner (lukasstockner97) edited edge metadata.

It seems I screwed up the splitting into different patches a bit: some stuff that was in here actually belongs to the second patch (texture handling), while another change wasn't included.

Regarding the new blackbody code: the new approximation comes from Wikipedia (article "Planckian locus"), which in turn cites "Design of Advanced Color Temperature Control System for HDTV Applications" (easily found on Google). I haven't verified it myself yet, but it seems to be quite accurate.

Brecht Van Lommel (brecht) requested changes to this revision.May 10 2016, 12:36 AM
Brecht Van Lommel (brecht) edited edge metadata.

The color output from the blackbody node looks quite different with this patch, even at the default temperature of 1500 K.

Maybe the RGB values in the tables in the original code can be normalized, converted from Rec.709 to XYZ, and unnormalized again? With some luck the fit is still close in XYZ space. Another option would be to add a Rec.709 to working space matrix, but would prefer not to.

Other than that it looks good to me.


So we should clarify this commit a bit to say something like:

Vertex colors are in scene linear color space, compressed to 8 bit using sRGB transfer curves. They are assumed to have the same primaries as e.g. material colors, so no extra conversion is necessary.

This revision now requires changes to proceed.May 10 2016, 12:36 AM
Lukas Stockner (lukasstockner97) edited edge metadata.

Okay, I decided to drop the curve-fitting approximation approach completely and use lookup tables instead. This affects both the Wavelength node (which previously had a table hardcoded directly in the function) and the Blackbody node; both now use the regular CIE XYZ curves.

I generated the blackbody tables directly from Planck's law and the XYZ curves in Octave, so they should be as accurate as it gets. The difference between the old and new code has a deltaE of around 0.01 (assuming a D65 white point), which is far too low to make a real difference. In the extreme regions (below ~1500 K and above ~10000 K) the difference is larger, but that's because the old code simply didn't cover that range, afaics.

For indexing I went with reciprocal kelvins, since they're far more perceptually uniform than the temperature itself. The spacing is still not perfect: the values in the reddish areas are ~2 times denser than in the bluish areas. In the future we might use a better distribution, but my experiments haven't turned out well yet (squaring the index provided more resolution in the bluish colors, but made the middle area, which is the most interesting one, even sparser; directly evaluating the difference, numerically inverting it, and fitting a function worked extremely well with a sum of two exponentials, but the fitted function couldn't be inverted analytically). To be honest, though, I don't think anybody will ever notice the deviations; we're talking about a difference of ~1e-4 here.

To clean up the lookup table handling, I generalized the Beckmann lookup code a bit to make adding new tables easier. Also, the nodes now request the tables they need, which avoids loading tables that aren't used. For committing, I'd split that part from the main patch.

We used to have a lookup table for blackbody, which was then replaced with an approximation by @lockalas in D1280; that gave quite a reasonable speedup (up to 30% in a fire scene). Before going back to a lookup table, speed should be investigated again, and if there's a penalty there, it's something to be concerned about (a slowdown caused by supporting a case that normal Blender usage almost never hits is meh).

Do you have a plot of your lookup table? Did you try to curve-fit it?

This revision now requires changes to proceed.Jul 6 2017, 2:38 AM

Any chance we can move a bit on this and T46860 before SIGGRAPH 2018?

Hi, any update here? @Sergey Sharybin (sergey) the LUT comes from which has a graph view if you really want a graph. As for curve fitting, @Lukas Stockner (lukasstockner97) have you seen this paper from NVIDIA?

The paper develops a multi-lobe, piecewise-Gaussian fit with:

squared error rates below the within-observer variance in the experimental measurements used to form the CIE standards

It even gives some example C code for formulas to compute the individual XYZ values:

// Inputs: Wavelength in nanometers
float xFit_1931( float wave ) {
    float t1 = (wave-442.0f)*((wave<442.0f)?0.0624f:0.0374f);
    float t2 = (wave-599.8f)*((wave<599.8f)?0.0264f:0.0323f);
    float t3 = (wave-501.1f)*((wave<501.1f)?0.0490f:0.0382f);
    return 0.362f*expf(-0.5f*t1*t1) + 1.056f*expf(-0.5f*t2*t2)
         - 0.065f*expf(-0.5f*t3*t3);
}
float yFit_1931( float wave ) {
    float t1 = (wave-568.8f)*((wave<568.8f)?0.0213f:0.0247f);
    float t2 = (wave-530.9f)*((wave<530.9f)?0.0613f:0.0322f);
    return 0.821f*expf(-0.5f*t1*t1) + 0.286f*expf(-0.5f*t2*t2);
}
float zFit_1931( float wave ) {
    float t1 = (wave-437.0f)*((wave<437.0f)?0.0845f:0.0278f);
    float t2 = (wave-459.0f)*((wave<459.0f)?0.0385f:0.0725f);
    return 1.217f*expf(-0.5f*t1*t1) + 0.681f*expf(-0.5f*t2*t2);
}

@Lukas Stockner (lukasstockner97) are you able to update the patch with this and master?

If someone wants to easily test the values generated by these equations, I created a quick LUT that can be used instead of the one in this patch, to check whether there is a difference in color.

This comment was removed by Troy Sobotka (sobotka).