
Cycles: Implement optional normal adjustment that ensures valid reflections for normal maps and bump maps

Authored by Lukas Stockner (lukasstockner97) on Mar 22 2017, 9:55 PM.



This patch adds an option to the Normal Map and Bump nodes that will automatically reduce the strength of the effect if using the full strength would result in invalid reflections and therefore black spots for the given incoming direction.
The result is not perfect of course, but definitely better than black spots.

This patch is still WIP (for example, the OSL implementation is missing), but it should be good enough for some initial testing.

The initial idea came from this comment by @Miki (MeshLogic), but the actual approach is different afaics.

A quick example:



Event Timeline

The idea is indeed nice, but I don't think this should be an option. Just implement it in a way that it doesn't cause a slowdown for regular renders ;)

The reason is that you wouldn't be able to reliably predict the behavior of your normal map in the real shot, where both the geometry and possibly the shaders are animated. So to avoid render problems caused by artifacts, artists will always enable that option. Disabling it for production files would be quite dangerous as well.


Code style.

What is the general idea of the algorithm? I agree we should aim to just always enable this, if possible.

Some things to look at might be:

  • What happens when using this with a bump node that has constant height, what changes?
  • This might even help on meshes with the terminator problem?
  • How does this work with varying amounts of roughness?

This can be dot(Ng, R) >= min_angle?

This test seems fast enough to me; if the more expensive computation only runs when the correction is needed, that seems quite acceptable.

min_angle could be hardcoded in this function instead of being passed as a parameter, unless there is a plan to vary it somehow.
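The cheap test being discussed can be sketched like so - a minimal Python stand-in for the kernel check, not the actual Cycles code; `reflect` and `reflection_is_valid` are hypothetical helper names, and `min_cos` plays the role of the hardcoded min_angle threshold (0.05, as mentioned later in this thread):

```python
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def reflect(I, N):
    # Mirror the (surface-to-viewer) direction I around the normal N:
    # R = 2*dot(N, I)*N - I
    d = 2.0 * dot(N, I)
    return (d*N[0] - I[0], d*N[1] - I[1], d*N[2] - I[2])

def reflection_is_valid(Ng, N, I, min_cos=0.05):
    # The cheap test: the perfect reflection of I around the shading
    # normal N must stay at least min_cos above the geometric normal
    # Ng, otherwise the reflected ray would point into the surface.
    return dot(Ng, reflect(I, N)) >= min_cos
```

For a normal map that tilts the shading normal away from a grazing viewing direction, this returns False, which is exactly the case where the black spots appear.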


Typo: lcoal

I agree, enabling by default makes sense, since it's hard to imagine that any user actually wants the black spots.

Regarding terminator problem - yeah, could work, you'd just have to call ensure_valid_reflection when calculating the smooth normal. However, testing is required to check whether the result is somewhat close to what the user would want.

As for rough reflections, that of course makes it impossible to define *the* reflection direction. In practice, in my tests the reflection would get darker as a larger part of the reflection lobe goes below the surface. But afaics, since Beckmann/GGX work by sampling a microfacet normal "around" the main closure normal, the current approach should result in a closure normal for which 50% of the sampled rays are valid. Since there isn't a single correct solution anyway, I think that just reusing the same approach as for specular reflection makes the most sense.

For the algorithm itself, I can add some comments on what's actually going on.
Short(ish) version:
The Strength option basically interpolates between the normal from the normal map and the original normal - or, in other words, all of the possible interpolated normals lie in the plane spanned by the two normals.
Since the idea of the patch is to adjust strength, the goal essentially is to rotate the normal in that plane until the reflected ray is above the surface.
Therefore, we have to find an N that lies in the plane spanned by Ng (the original normal) and N_orig (the normal-mapped normal) so that dot(Ng, 2*dot(N, I)*N - I) = min_angle.
To make that easier, we move to a coordinate system in which Ng is the Z axis and the plane of Ng and N_orig is the X-Z plane (which can be done by using the normalized part of N_orig that's orthogonal to Ng as the X axis).
Now, N.y is zero since N lies in the X-Z plane as well, and since Ng is the Z axis, dot(Ng, R) = R.z.
So, the equation becomes 2*(N.x*I.x + N.z*I.z)*N.z - I.z = min_angle. With the knowledge that we want N.y = 0 and len(N) = 1, we get N.x = sqrt(1 - N.z^2), which reduces the equation to a single unknown. From there on, it's just regular algebra to get the solution (technically, there are two positive solutions; I'm not sure whether both exist sometimes or even always, I'd have to test that).
Knowing N.z, we can calculate N.x and then transform back into global coordinates.
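To make the derivation above concrete, here is a minimal Python sketch of the whole adjustment - not the Cycles kernel code, and all helper names are mine. Instead of the closed-form quadratic it solves f(N.z) = 2*(N.x*I.x + N.z*I.z)*N.z - I.z - min_cos = 0 by bisection, which sidesteps the two-solutions question by bracketing a single root:

```python
import math

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def _normalize(a):
    l = math.sqrt(_dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

def ensure_valid_reflection(Ng, I, N_orig, min_cos=0.05):
    # Local frame: Z axis = Ng, X axis = the normalized part of N_orig
    # that is orthogonal to Ng. Ng, N_orig and the adjusted N all lie
    # in the X-Z plane, so N = (N.x, 0, N.z) with N.x = sqrt(1 - N.z^2).
    c = _dot(N_orig, Ng)
    X = _normalize((N_orig[0] - c*Ng[0],
                    N_orig[1] - c*Ng[1],
                    N_orig[2] - c*Ng[2]))
    Ix, Iz = _dot(I, X), _dot(I, Ng)

    # Requiring dot(Ng, R) = min_cos with R = 2*dot(N, I)*N - I gives
    #   f(z) = 2*(sqrt(1 - z^2)*Ix + z*Iz)*z - Iz - min_cos = 0.
    def f(z):
        x = math.sqrt(max(0.0, 1.0 - z*z))
        return 2.0*(x*Ix + z*Iz)*z - Iz - min_cos

    # f(0) = -Iz - min_cos < 0 and f(1) = Iz - min_cos > 0 whenever I
    # itself is more than min_cos above the surface, so a root of f is
    # bracketed on [0, 1]; bisection stands in for the quadratic.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5*(lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    Nz = 0.5*(lo + hi)
    Nx = math.sqrt(max(0.0, 1.0 - Nz*Nz))

    # Transform N = (Nx, 0, Nz) back into global coordinates.
    return _normalize((Nx*X[0] + Nz*Ng[0],
                       Nx*X[1] + Nz*Ng[1],
                       Nx*X[2] + Nz*Ng[2]))
```

By construction, reflecting I around the returned normal gives dot(Ng, R) = min_cos, i.e. the reflected ray grazes just above the surface instead of going below it.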


Yes to both points - should be dot(Ng, R) and min_angle can be hardcoded.

I found a weird shading bug for wide surfaces. It behaves the same whether using a procedural texture for bump mapping or in the displacement slot.

The black reflections come back in this file:
(also, the sky texture has a different color when cross-testing official Blender vs. the experimental build (rB82bcfb9b342a))

iRay seems to use a very similar solution, see A.3 in this paper.

We achieve that by computing the perfect reflection vector (according to shading normal) and then, if that extends below surface, we “pull it up” towards the geometric normal such that it lies slightly above the tangent plane (see Fig. 21). With this new reflection vector a new shading normal can be computed (as the halfway vector to the incoming direction) that fulfills our requirement.
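A hedged sketch of that iRay-style variant, assuming the same conventions as above (Ng is the geometric normal, I points towards the viewer; function and parameter names are mine, not from the paper): lift the reflection vector slightly above the tangent plane, then recover the shading normal as the halfway vector.

```python
import math

def _dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def _normalize(a):
    l = math.sqrt(_dot(a, a))
    return (a[0]/l, a[1]/l, a[2]/l)

def pull_up_reflection(Ng, I, N, eps=0.01):
    # Perfect reflection of I around the shading normal N.
    d = 2.0 * _dot(N, I)
    R = (d*N[0] - I[0], d*N[1] - I[1], d*N[2] - I[2])
    t = _dot(Ng, R)
    if t >= eps:
        return N  # reflection already sits above the tangent plane
    # "Pull up" R towards Ng: remove its (negative) Ng component and
    # add back a small positive one, lifting R slightly above the
    # tangent plane.
    R = _normalize((R[0] - (t - eps)*Ng[0],
                    R[1] - (t - eps)*Ng[1],
                    R[2] - (t - eps)*Ng[2]))
    # New shading normal: the halfway vector between I and the lifted
    # R, so that reflecting I around it reproduces the lifted R.
    return _normalize((I[0] + R[0], I[1] + R[1], I[2] + R[2]))
```

The halfway-vector trick works because reflecting a unit vector about the normalized sum of itself and another unit vector yields that other vector, so the corrected reflection is above the surface by construction.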

Lukas Stockner (lukasstockner97) marked an inline comment as done.

Here's an updated version, now with OSL support.

I also tried to replace the cutoff at 0.05 with a smooth remapping from [-1:0.05] to [0.01:0.05], but the results were pretty much unchanged, so I decided it's not worth the slowdown.
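One plausible reading of that remapping, as a sketch (the name and the exact linear form are my assumption; the patch's actual remap may differ): values of dot(Ng, R) in [-1, 0.05] get mapped onto [0.01, 0.05], while values above the cutoff pass through unchanged, so the correction fades in continuously instead of switching on at the cutoff.

```python
def remap_cutoff(x, lo=0.01, hi=0.05):
    # Remap x in [-1, hi] linearly onto [lo, hi]; values that already
    # clear the cutoff hi are left untouched. Continuous at x = hi.
    if x >= hi:
        return x
    return lo + (x + 1.0) * (hi - lo) / (hi + 1.0)
```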

Brecht Van Lommel (brecht) requested changes to this revision.May 6 2018, 1:18 AM

I tested this patch on our regression tests and a bunch more complex files, and didn't find problems.

So in my opinion we can just remove the option to turn off this reflection fix, and only add it back if we get user feedback that it's problematic. This code could probably also be able to help with smooth normal shading issues, so having an option on the bump node is maybe not the right place anyway.

If that's done this can be committed I think.

This revision now requires changes to proceed.May 6 2018, 1:18 AM
This revision was not accepted when it landed; it landed in state Needs Revision.Jul 26 2018, 5:02 PM
This revision was automatically updated to reflect the committed changes.

Is it possible that this is missing a normalize() at the end? I'm seeing non-unit-length normals being passed into the BSDFs now.
For example, the classroom benchmark scene now asserts with NaN pixels.

I'm not sure if the math is supposed to work out so that no normalize is needed, @Lukas Stockner (lukasstockner97)? But it seems fine to add an extra normalize if only for a bit better precision, since this code only runs when a correction is needed which is a small subset of pixels.

I'm not sure if the math is supposed to work out so that no normalize is needed, @Lukas Stockner (lukasstockner97)?

That was the intention, but I must have overlooked a few special cases (see e.g. T56175), so I wouldn't be surprised if the normalization fails sometimes as well :/
I'll try to find time to redo the entire derivation and handle all the special cases, but I can't guarantee that that is even possible...