Fix for the T53581
Abandoned · Public

Authored by Daniel Santana (dgsantana) on Jun 11 2019, 3:03 PM.

Details

Summary

Replace the relative threshold for the pseudo-inverse in the sharp remeshing modifier with the absolute 0.1 proposed in the original paper: https://www.cse.wustl.edu/~taoju/research/dualContour.pdf

Change the pseudo-inverse implementation from one that works with a dynamic, heap-allocated matrix to a static 3x3 version.
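
A minimal sketch of what the change amounts to, assuming Eigen's JacobiSVD (illustrative only, not the actual diff):

```cpp
#include <Eigen/Dense>

/* Hypothetical sketch: truncated-SVD pseudo-inverse on a fixed-size 3x3
 * matrix. Singular values at or below the absolute threshold are zeroed
 * instead of inverted, as in the dual contouring paper. */
static Eigen::Matrix3f pseudo_inverse_3x3(const Eigen::Matrix3f &m,
                                          const float threshold = 0.1f)
{
  Eigen::JacobiSVD<Eigen::Matrix3f> svd(
      m, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Vector3f inv_s = Eigen::Vector3f::Zero();
  for (int i = 0; i < 3; i++) {
    if (svd.singularValues()(i) > threshold) {
      inv_s(i) = 1.0f / svd.singularValues()(i);
    }
  }
  return svd.matrixV() * inv_s.asDiagonal() * svd.matrixU().transpose();
}
```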

Diff Detail

Repository
rB Blender

Event Timeline

Can confirm this gives better results for the file in T53581.
Note though that F1442978 was already attached to the bug report [which does the same thing, and additionally claims to improve performance (haven't checked this, though...)]

This should give the same result as F1442978: since the template function will expand at compile time to the Matrix3f version, which is what is proposed in the diff, there should be no difference in speed between the two.

There is also a difference in that with this patch the tolerance stays relative to the maximum singular value, while the other patch made it absolute. Relative should be better though, especially if the epsilon is as high as 0.01.
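
To make the distinction concrete, here is a hypothetical illustration of the two conventions (the helper names are mine, not from either patch):

```cpp
#include <Eigen/Dense>

/* Relative convention: the cutoff scales with the largest singular value,
 * so it adapts to the overall magnitude of the input matrix. */
static bool keep_relative(const Eigen::Vector3f &s, int i, float epsilon)
{
  return s(i) > epsilon * s.maxCoeff();
}

/* Absolute convention (the paper's 0.1): a fixed cutoff, which is only
 * meaningful when the input matrix has a known, bounded scale. */
static bool keep_absolute(const Eigen::Vector3f &s, int i, float threshold)
{
  return s(i) > threshold;
}
```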

Is the 0.01 the lowest value that works?

I will do some tests. I upped the remesh depth to 9 and had no problems with 0.01 on a default torus with a subd mod of 4.

Default Torus, SubMod: 3, Remesh: Depth: 9 (0.01f epsilon) - Good

Default Torus, SubMod: 3, Remesh: Depth: 9 (0.0001f epsilon) - Good

I think going much lower than this doesn't make sense for a single-precision calculation; the error introduced will only get bigger and bigger.
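
For reference (my illustration, not part of the patch), single-precision machine epsilon is about 1.2e-7, which bounds how finely a float threshold can meaningfully discriminate:

```cpp
#include <cfloat>
#include <cstdio>

int main(void)
{
  /* Smallest x such that 1.0f + x != 1.0f; roughly the relative rounding
   * error of a single float operation. Thresholds pushed far below the
   * rounding noise accumulated in the singular values stop separating
   * degenerate directions from noise. */
  printf("FLT_EPSILON = %g\n", FLT_EPSILON); /* ~1.19209e-07 */
  return 0;
}
```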

Do you guys want me to edit the diff to 0.0001f?

Lowered the epsilon to 0.0001 with no visible side effects in the tests done.

> This should give the same result as F1442978: since the template function will expand at compile time to the Matrix3f version, which is what is proposed in the diff, there should be no difference in speed between the two.

Sorry, but I disagree here. I have never heard that the compiler can optimize dynamic Eigen matrices into compile-time sized ones. Maybe there won't be a visible speed difference, but it would be cleaner to use the statically sized matrix.
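
For context, a minimal illustration of the difference in question: Eigen::MatrixXf carries its dimensions at runtime and heap-allocates, while Eigen::Matrix3f bakes the size into the type; neither is converted into the other by the compiler.

```cpp
#include <Eigen/Dense>

int main(void)
{
  /* Dynamically sized: dimensions are runtime values, storage is
   * heap-allocated, and sizes are checked at runtime. */
  Eigen::MatrixXf dynamic_m = Eigen::MatrixXf::Identity(3, 3);

  /* Statically sized: the 3x3 size is part of the type, storage lives on
   * the stack, and Eigen can unroll the loops at compile time. */
  Eigen::Matrix3f static_m = Eigen::Matrix3f::Identity();

  return 0;
}
```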

> There is also a difference in that with this patch the tolerance stays relative to the maximum singular value, while the other patch made it absolute. Relative should be better though, especially if the epsilon is as high as 0.01.
> Is the 0.01 the lowest value that works?

I disagree here as well in this particular case. The pseudo-inverse implementation is only used to solve the QEF minimization problem for dual contouring. Since the matrix is composed solely of (normalized) normals, it is independent of scale. Therefore, an absolute threshold (such as the 0.1 mentioned in the original paper) should be more stable.
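
A hypothetical sketch of the standard dual contouring QEF setup (my reconstruction, not the Blender code), showing why the matrix scale is bounded regardless of object size:

```cpp
#include <Eigen/Dense>
#include <cstddef>
#include <vector>

/* Minimize sum_i (n_i . x - n_i . p_i)^2, where each Hermite sample
 * contributes a plane with unit normal n_i through point p_i. Every rank-1
 * term n_i n_i^T has entries in [-1, 1], so A^T A is bounded independently
 * of the object's geometric scale, which is why an absolute singular-value
 * threshold is meaningful here. */
static Eigen::Vector3f solve_qef(const std::vector<Eigen::Vector3f> &normals,
                                 const std::vector<Eigen::Vector3f> &points)
{
  Eigen::Matrix3f AtA = Eigen::Matrix3f::Zero();
  Eigen::Vector3f Atb = Eigen::Vector3f::Zero();
  for (std::size_t i = 0; i < normals.size(); i++) {
    AtA += normals[i] * normals[i].transpose(); /* rank-1 update */
    Atb += normals[i] * normals[i].dot(points[i]);
  }
  /* Truncated-SVD solve with the paper's absolute 0.1 cutoff. */
  Eigen::JacobiSVD<Eigen::Matrix3f> svd(
      AtA, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Vector3f inv_s = Eigen::Vector3f::Zero();
  for (int i = 0; i < 3; i++) {
    if (svd.singularValues()(i) > 0.1f) {
      inv_s(i) = 1.0f / svd.singularValues()(i);
    }
  }
  return svd.matrixV() * inv_s.asDiagonal() * svd.matrixU().transpose() * Atb;
}
```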

If the function assumes it only gets a matrix composed of normals as input and should not be used for anything else, it should be renamed or commented to make that clear.

I'm not sure how an absolute threshold is necessarily more stable, but as long as it works it's fine with me.

This should probably be evaluated in the following sense and for the following reason:

- The torus obviously has a high mean curvature everywhere and is therefore maybe not an optimal test object.
- Lower thresholds would probably introduce problems with almost (but not quite) co-planar surfaces. Maybe the algorithm could be tested using a slightly deformed cube (or similar).

I do believe that the pseudo-inverse, since it is only used in this case, could use a statically sized matrix. I already implemented this algorithm in Rust and hand-converted the original paper code. Maybe reducing the use of the Eigen library in favour of handwritten versions could improve compile times (no template instantiation), and probably to some extent the overall function speed. But for now I think that is a bit out of scope for fixing this trivial error. I will change this to 0.1 and make the function statically typed.

Looking into F1442978, that should be the one accepted, since it solves all the reported problems with a simple enough fix.

Results with F1442978 applied (all good at 0.1; 0.01 still gives artefacts).

Without the fix (full of artefacts).

Blend file:

Great. I think then 0.1 should be used as mentioned in the paper.
I'd be quite happy with the final solution as mentioned by Daniel Santana =).

Dan, can you update your fix (F1442978) so it uses the 0.1 tolerance? It should be the one that gets committed ;).

Dan Koschier (dankosc) edited the summary of this revision. Jun 15 2019, 3:01 PM

I've updated the patch to fit the current master branch. I wasn't, however, able to modify the diff submission, so I created a new one: D5078.
I've additionally attached the new patch to this message.