
Cycles: Experiment with getting rid of ray_offset() function
Needs Review · Public

Authored by Sergey Sharybin (sergey) on Apr 1 2015, 7:10 PM.



The major issue with the current ray_offset() was that it could push the
ray too far along some axis, causing it to go into the surface and leading
to shading artifacts.

The idea of this change is to simply skip the preliminary ray push
and check the distance to the triangle during intersection instead. This is
a relatively cheap operation because there was already a check for the sign
of the intersection distance.
This change solves such issues as reported in T43835.

Another thing we can do with the intersection-time distance check is to
scale the minimal intersection distance by the magnitude of the determinant.
Such an additional trick might cause some speed loss; this is still to
be investigated. It partially solves T43865 (it solves the
huge floor plane case, but the cornell box scene still shows some darkening).

There are no changes in the hair intersection code yet, so it might
give artifacts until it is properly implemented.

Diff Detail

rB Blender
Build Status
Buildable 874
Build 874: arc lint + arc unit

Event Timeline

Sergey Sharybin (sergey) retitled this revision from to Cycles: Experiment with getting rid of ray_offset() function.
Sergey Sharybin (sergey) updated this object.

Solved darkening in the cornell box scene

Campbell Barton (campbellbarton) added inline comments.
Lines 1107–1112 (On Diff #3878)

This looks like it's not working with big endian.

Made needed modifications to curve intersection code

Still having some shading artifacts which are most easy to notice with hair BSDF closures.

Forgot some pieces of code in previous update

Updated against latest master

Just updating against latest master with some minor changes

Fixed missing proper ray_tnear in some places of curve intersections.

Unfortunately, there's still some darkening, including in very simple
scenes; I would need a closer look to nail down the exact circumstances.

I still think doing something like this is beneficial to avoid the explicit ray offset.

Update against latest master

Updated against latest master

Also did some research on shading artifacts.

It seems they only happen with the Hair BSDF, and the reason for the
artifacts seems to be that those BSDFs reflect rays at a shallow
angle. This did not cause self-shadowing in the past because the ray
was offset, but now the ray hits the internal side of the surface.

This part needs to be investigated still.

@Brecht Van Lommel (brecht), maybe you can have some extra look here? :)

Update against latest master.

I've also investigated the darkening of hair BSDFs. This is caused by the model
reflecting rays into the medium (i.e., acting as a transmission instead of a
reflection). Here is a simple file


What happens prior to this fix is:

  • Ray hits the sphere.
  • Hair Reflection BSDF gives a direction which points into the sphere.
  • The surface bounce pushes the intersection point a bit outside the sphere, so the new ray hits the sphere again in almost the same spot.
  • This time the hair BSDF gives a proper reflection vector and the ray hits the light.

What happens with this patch is:

  • Ray hits the sphere.
  • Hair Reflection BSDF gives a direction which points into the sphere.
  • Since we don't push the point outside of the sphere again, the ray transmits into the geometry, which produces darkening.

The file contains a script which visualizes the light path before and after this
change (you need to uncomment the master points and comment the branch points in
the script, and run it for each configuration).

Now, where is the root of the issue? :) It doesn't sound correct that the hair BSDF
can give such a direction, and it seems only a coincidence that the reflection
happened properly thanks to an extra bounce. But is it a problem in the model
itself? Or is it a legitimate situation that for certain incoming rays and
tangents the hair strand will not give any reflection? Or maybe we can just add
an abs() somewhere, but then what is the ground-truth render result here?
And why is the model/paper used not even mentioned?..

Regarding the hair BSDF, as we discussed it is designed to work only with curves and ignore backfacing intersections. So if the result is different on meshes that's not really a problem.

I did some tests with this patch at different scene sizes, and moving the scene away from the origin. Unfortunately the regression at size 1e3 is quite serious. I guess a fixed 1e-5f epsilon doesn't really work reliably.

Test renders: Size 1, Size 1e-3, Size 1e-5, Size 1e3, Size 1e5, Origin 1e3, Origin 1e5.

Here's the .blend with multiple scenes. I'll add regression tests like this.

So what the existing code does is offset the position along the geometric normal, with some epsilon that depends on the distance from the origin. This is needed because if you do e.g. 1e5 + 1e-5, that 1e-5 will be rounded away. With this new patch, there is effectively no epsilon at large sizes and far away from the origin.

Where the existing code seems to fail in my test scene is in corners, where after offsetting along the normal it still finds wrong intersections with faces orthogonal to that. An epsilon that is not dependent on the normal like in the new patch can help avoid that. Though I'm not sure how to reliably figure out on which side of the orthogonal face the point should fall.

In the file from T43835, the dark cracks also happen in corners that are not particularly sharp. The object translation there is 31 -156 4. Changing that to e.g. -156 -156 -156 reduces the cracks, probably because we apply the epsilon per XYZ component while the triangle intersection algorithm might spread the error to other components. It doesn't eliminate the error completely though, which I don't fully understand.

Another source of error is instancing. If you have a position P in object space, and two objects with translations T1 and T2, then what we end up doing is: (P + T1) - T2. If P is a small value and T1 is large (for example when the scene is translated far away from the origin), there will be significant loss of precision. For self-intersection it would help to keep the position in object space, or in general with multiple objects we could do P + (T1 - T2), under the assumption that nearby objects likely have a similar translation.

This would require computing an object-space position plus object translation and passing that to the intersection function, so that when entering an object we can cancel out the object translations first. With rotation and scale, composing the two matrices could be quite slow though, and the cancelling out itself might introduce some error.

@Brecht Van Lommel (brecht), so what would be the reliable way of solving the issue? Scale-aware ray_tnear? Offset scene so its visible part is closer to the origin?

CJ L (pitibonom) added a comment.EditedJan 31 2019, 9:21 PM

@Brecht Van Lommel (brecht), so what would be the reliable way of solving the issue? Scale-aware ray_tnear? Offset scene so its visible part is closer to the origin?

Hi all :))
I'm a former programmer who once modeled the Earth (in a rough way, I have to say :P).
It is simply impossible to model large-scale things with precision using floats,
except with a discretization method, which accumulates error by addition (better
than the global approach, which multiplies errors).

Imagine you need a mesh of 40 million meters (the Earth at the equator) with 10 cm accuracy?
Forget about it.
You need a number system that is almost 1000 times more accurate.

Years ago, float was THE way.
Indeed it was! I won't explain; all of you know why.
But today computer graphics goes deeper in accuracy and wider in scale,
and the only way forward is double precision.
There's no other way: 64 bits is the solution to most of today's accuracy problems.
Just replace every 'float' variable in the code with 'double' and many accuracy
problems will be solved. Of course not all of them! For those wanting to model the
solar system at 10 cm accuracy, doubles will show their limits too.
But for now, for most projects, this is the way!

Forbid any 0.0f in the code and just make it 0.0 (unsuffixed floating literals in C, C++ and C# are natively doubles).

Okay, 64 bits is heavier than 32 ^^ so what about performance?
Some years ago I could hardly believe it myself:
handling doubles instead of floats will be muuuuch slower!!!!! At least... 2%!
We programmers are still playing in a sandbox :P

Double is THE way.

Just as a historical wink: 32-bit floats were terribly expensive in CPU time for
those like me running POVRAY on a 386 with an emulated FPU.
Now 32-bit floating-point calculations are done in hardware on any crappy CPU :P
and so are the 64-bit ones!!!!

Today the FPU is not the bottleneck anymore; the data cache is,
as we get greedier and greedier for data...

So prohibit 'const float' in code for anything that needs accuracy at wide scale.
I'm quite surprised Cycles (oh, I love Cycles, and it took time for me to get there :P) uses floats.

Last but not least (since memory space is precious and we don't want to waste it),
converting all floats to doubles is not the perfect way either ^^
Most triangles (except those that are kilometers long) can do all their calculations
in float, saving not CPU time but memory!

I know the hardest part is making the calculations generic. Maybe it's time to state what Blender can do and what it cannot :P
Well, anyway... I think it's just an app we all love, and when we gently kick some asses it's only in a helpful way :))

BEST BEST regards to all!!!

EDIT: Just a temporary way of getting rid of this problem
until it gets resolved by the programmers:
I moved my objects (I only need baking, which is probably not the case for you all) so that the messed-up normal artifacts disappear, then saved the mesh at its proper position.

Yes, it's ugly! But it's just a workaround until things are done properly...

Happy blending :)