
Calculate epsilon values for interp_weights_poly to improve accuracy
Closed, Public

Authored by Sebastian Parborg (zeddb) on Mon, May 18, 3:22 PM.

Details

Summary

I got notified that Surface Deform would behave strangely when the vertices were close to the target mesh edges.
After looking into this, I realized that the issue was that the hard-coded epsilon value was too large for the current floating-point resolution.
However, simply changing it to a smaller value would lead to breakage in other scenarios, as the epsilon value would then be too small.

To solve this I implemented a method to actually calculate the correct minimal epsilon value that can be used.

Using this, the UV bevel example mentioned in the code works:

T36105 (and also another, older ticket: T31581)
And my Surface Deform example file works as expected when rebinding the modifier:

There is still more work to be done on this patch if this approach is deemed acceptable.
If not, I would at least want to have an alternate method that the Surface Deform modifier uses that has this feature.
Otherwise what is left to do is:

  1. Move the epsilon calculation out into a utility function.
  2. Make other functions that might need it use this (the _v3 variant of interp_weights_poly, for example).

Diff Detail

Repository
rB Blender

Event Timeline

Sebastian Parborg (zeddb) requested review of this revision. Mon, May 18, 3:22 PM
Sebastian Parborg (zeddb) created this revision.
source/blender/blenlib/intern/math_geom.c
4321

Perhaps I should add a comment that (1 / pow(2, 23)) == FLT_EPSILON ?

I think the idea is fine, but not the specific implementation. I would expect either an empirically determined fudge factor, or an exact error derivation based on the implementation of mean_value_half_tan_v2 and dist_squared_to_line_segment_v2.

I don't think there is a need to go through integers and back to float. This is almost doing 2^log2(x), which equals x. There is some rounding to integers happening, but it's not clear to me how exactly that helps. It usually works to compute the maximum float value, and then use:

eps = multiplier * FLT_EPSILON * max_value;

Or maybe:

eps = multiplier * FLT_EPSILON * max_ff(max_value, 1.0f);

Where multiplier is a value >= 1 that is empirically determined. It depends on the specific math operations involved. Computing the exact error bound is hard and not really worth it.
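A minimal sketch of that approach (illustrative only: the names v and n and the multiplier 16 are assumptions, and the real multiplier would be tuned empirically):

float max_value = 0.0f;
for (int i = 0; i < n; i++) {
  /* Track the largest coordinate magnitude that enters the computation. */
  max_value = max_ff(max_value, max_ff(fabsf(v[i][0]), fabsf(v[i][1])));
}
/* Representable precision at that magnitude, scaled by a fudge factor >= 1. */
const float eps = 16.0f * FLT_EPSILON * max_ff(max_value, 1.0f);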

source/blender/blenlib/intern/math_geom.c
4323

This is dividing by 2 rather than squaring.

source/blender/blenlib/intern/math_geom.c
4323

Right, so perhaps I should rename the variable? Squaring here would lead to errors, at least in my tests.
Squaring the precision error doesn't really make sense, I think.

For example, if we just have FLT_EPSILON, that is in the 1e-7 range.
If we square it, we will get into the 1e-14 range, well outside what floats can represent.

It has been a while since I took my numerical-error course at uni, but I don't think you ever square the expected error like that.

source/blender/blenlib/intern/math_geom.c
4323

Or I should say: you can of course have a float that is below 1e-14. But when we have a precision error of 1e-7, I do not think that a check for a much smaller number than that makes any sense.

Now that you point it out, I also agree that I should probably just skip the log and shifting back and forth.
I was so focused on the formula that I didn't see the simple solution.

Are you sure that you want to just do a simple max_ff without flooring or casting to int?
Without the flooring, the fudge factor would not be constant for each precision step, which seems a bit weird to me.

So something like this would make more sense to me:
eps = multiplier * FLT_EPSILON * max_ff(floor(max_value), 1.0f);

Do you think that would be ok?

source/blender/blenlib/intern/math_geom.c
4323

It's used for x*x + y*y < eps*eps, which to me seems fine if the error of x and y is smaller than eps / 2.

If you plotted the error per max_value, maybe it looks like a step function. But where those discontinuities are would depend on the specific math operations; they would not be at floor(max_value) in general.

So I don't see the purpose of flooring here.

Updated the patch to contain much simpler logic.

The new epsilon values were derived by trial and error.
Now it seems to work nicely with both test files. I also tried the new values out at different scales (scale object, apply scale) and it seems to work.

I hope this is good enough.

source/blender/blenlib/intern/math_geom.c
4308

Using 1e-4 doesn't seem right. This is used for testing a squared distance, so the epsilon should be squared too.

For example, if max_value, the distance and the squared distance are all 1.0f, then eps_segment would be about 1e-11f, which is too small.

eps_point also seems too small. If you have a point at -1.0f and at 1.0f, then max_value will be 1.0f and distance will be 2.0f. But FLT_EPSILON is too small for 2.0f, and that's disregarding any error introduced by math operations.

So perhaps it's something like this, with 16 or another multiplier:

const float eps_distance = 16.0f * FLT_EPSILON * max_value;
const float eps_squared_distance = square_f(eps_distance);
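With values like these, the check against the squared distance would then read something like this (illustrative names, not the exact patch code):

if (dist_squared_to_line_segment_v2(co, v_curr, v_next) < eps_squared_distance) {
  /* co is numerically on the edge between v_curr and v_next. */
}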
source/blender/blenlib/intern/math_geom.c
4308

I'm getting a bit confused there.

As I understand it, the eps_point check determines whether the next point can be treated as a unique point, or whether the precision error will cause it to occupy the same point in space as the previous point.

If we are in the 1.0f range and two points have a distance of less than FLT_EPSILON, the two points will be too close together, and we treat them as one point going forward to avoid any error in the algorithm. If the length between two points is in the 2.0f range, then there will be no issue at all here... I'm not sure what you are trying to say...
This check only comes into play when we have a second point that is very close to the current one.

eps_segment is used to see if we are too close to an edge, in which case we should check whether to simply derive the weights by interpolating along that edge. A too-large value here will lead to "edge snapping", as observed in the Surface Deform test file, and a too-small value will lead to other errors, as seen in the bevel UV file.

These are the values I came up with after testing and they seem to work fine.
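For clarity, the eps_point check described above is conceptually something like this (illustrative names, not the exact code):

if (len_squared_v2v2(v_curr, v_next) < eps_point * eps_point) {
  /* The two corners are numerically the same point; treat them as one. */
}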

Maybe it is important to figure out why the current tests check for small distances of the point to the line segment. That seems to be just a heuristic someone guessed would make the mean_value_half_tan_v2 errors small enough. But is that heuristic correct? The formula for mean_value_half_tan_v2 is num / denom, where num is the difference of two values, each of which is the square of a distance, and the denom is a cross product, which also involves a lot of squares of distance differences. So it seems likely that the error is going to depend on how fuzzy we are with squares of distances, not distances.

Meaning the heuristic test maybe should have been: is the square of the distance to a line too close to the place where values are indistinguishable, rather than the current code, which tests whether the distance to the segment is too close. If this is true, this might justify the patch's use of a linear function of (FLT_EPSILON * max_value) rather than a linear function of the square of that.
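For reference, the standard mean-value-coordinates half-tangent identity (which may or may not be the exact form used by mean_value_half_tan_v2) is tan(α / 2) = (|a| |b| - a · b) / (a × b), where a and b are the vectors from the query point to the two adjacent polygon corners and a × b is the scalar 2D cross product. Every term there is quadratic in the distances from the query point, which fits the reasoning above about the error depending on squares of distances.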

@Howard Trickey (howardt), it may be that by coincidence the micro-optimization for distance comparison and math in mean_value_half_tan match up like that.

But I don't know if the formula in this patch is actually better than squaring the epsilon as we did before, or if it's just the first thing that was tested to work.

source/blender/blenlib/intern/math_geom.c
4298–4303

I believe this will be more accurate when far away from the origin.

/* Measure the extents relative to the query point, so precision does not
 * degrade when the polygon is far away from the origin. */
float max_value = 0.0f;
for (int i = 0; i < n; i++) {
  max_value = max_ff(max_value, fabsf(v[i][0] - co[0]));
  max_value = max_ff(max_value, fabsf(v[i][1] - co[1]));
}
4308

Let me put it another way. dist_squared_to_line_segment_v2 is a micro-optimization to avoid a sqrtf call in dist_to_line_segment_v2. It changes sqrtf(x) < eps to x < eps*eps. It makes no sense to me that such a micro-optimization would affect the logic for computing the epsilon.

Maybe it works anyway because of the specific math here, or maybe there are untested cases where it fails.

Anyway, I don't want to spend too much time on this. The main thing is to test that T31581 and T36105 remain fixed, and then also to test at small/large scale, and far away from and near the origin. If it works, then fine.

I've done some more extensive testing and these are my findings:

Both Brecht's suggested values and my initial values seem to work just as well; there doesn't seem to be any difference at all in the results.
So I guess we go with the 16.0f one?

Furthermore, the addition of fabsf(v[i][0] - co[0]) does indeed seem to help (at least with the Surface Deform modifier) at far-away distances.

However, I did notice that we have another issue here with the Surface Deform modifier.
At 100 m, the binding coordinates seem to be OK, so interp_weights_poly_v2 seems to work.

But when we use the vertex weights to get our position back, there will be floating-point precision errors as we add many small and large numbers together.
This happens even without this patch, as it is an error in adding up the output values in deformVert in MOD_surfacedeform.c.
It is noticeable even at 1-2 m from the origin, but the further off, the worse it gets (of course).

But fixing that should probably be for another review?
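As an aside, one generic way to reduce that kind of accumulation error (not something this patch does, and not necessarily the right fix for deformVert) is compensated (Kahan) summation, sketched here for a single component with illustrative names:

float sum = 0.0f, c = 0.0f;
for (int i = 0; i < num_verts; i++) {
  const float y = w[i] * v[i][0] - c; /* Contribution, corrected by the carried error. */
  const float t = sum + y;
  c = (t - sum) - y;                  /* Recover the rounding error of this addition. */
  sum = t;
}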

Also, should I copy this to the v3 function or should I leave it as is?

Or hmm... It seems like I was wrong; there are still precision errors happening at 1 m from the origin with interp_weights_poly_v2 and the Surface Deform modifier, just a lot less severe than before.
Let me check a bit more.

OK, the errors seem to be there regardless of which epsilon value I choose, so I guess these are just regular computation errors that we can't sidestep.
They seem to be in line with the expected error margin.
At 1-2 m the error is around 1e-7, which is expected.

Example output values at 1m from origin:

ngon point_v2: 1.00639236 -0.00171839
ngon co_proj: 1.00639224 -0.00171839 0.00000000

point_v2 is the 2D coordinate sent into interp_weights_poly_v2 (which in this case is the same as the position in 3D space, since z = 0).
co_proj is the coordinate derived when we multiply the calculated vertex weights back in and add the results together again.
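A minimal 2D sketch of the reconstruction being compared here (illustrative names; the modifier does the equivalent in 3D), assuming weights w[0..n-1] returned by interp_weights_poly_v2 for the polygon corners v:

float co_proj[2] = {0.0f, 0.0f};
for (int i = 0; i < n; i++) {
  /* Weighted sum of the polygon corners; rounding error accumulates term by term. */
  co_proj[0] += w[i] * v[i][0];
  co_proj[1] += w[i] * v[i][1];
}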

Sebastian Parborg (zeddb) retitled this revision from Calculate the floating point epsilon for certain math functions to avoid precision errors when using surface deform. to Calculate epsilon values for interp_weights_poly to improve accuracy.
Sebastian Parborg (zeddb) edited the summary of this revision. (Show Details)

I think the _v3 version should be updated as well.

The change for _v2 looks good.

This revision is now accepted and ready to land. Mon, May 25, 3:20 PM

@Brecht Van Lommel (brecht) quick question. Should I commit this for both 2.83 and 2.90 or just 2.90?

Probably best to leave this for 2.90 to be sure, since it's difficult to predict if this will cause new problems in real-world scenes.

We can always backport it to 2.83.1 once it has had some testing.