This design task is about making the Boolean modifier and Intersect tool handle special cases and be more robust, so that users can depend on them and no longer complain about things Carve could do that the BMesh Boolean cannot.
A number of bug reports about Boolean have been captured in T47030. They can be mostly grouped into these kinds of problems:
- Not handling cases where there is coinciding or nearly coinciding geometry (typically edges and faces that intersect other edges and faces at more than one point, or vertices that lie in the middle of a face or edge). By design, the current BMesh Boolean code does not handle these cases, leaving it to the user to move the geometry around a bit so that the desired effect can be achieved without hitting the special cases. But this annoys users, who regard it as a bug, and it sometimes defeats the point of what they are trying to do (e.g., bringing in a bunch of "roads" in a plane and intersecting them).
- Not dealing with precision problems (e.g., near-zero-area faces). I suspect, but am not certain, that precision problems become abundant when the intersecting objects have very small faces, say as the result of sculpting on a high-resolution mesh.
- (For Boolean only) Not dealing with non-hermetically sealed intersection objects. Again this is by design in the current code, but some users miss the behavior of Carve, which would sometimes do useful things in such cases. Also, by requiring hermetically sealed objects, we miss the opportunity to use Boolean for cleaning up objects that have self-intersection.
- Performance issues, usually when there are a very large number of intersections in a single face (e.g., T52471).
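To make the precision problem above concrete, here is a minimal sketch (illustrative Python, not Blender code) of the kind of test involved in flagging near-zero-area faces. The function name and threshold are made up for illustration; the point is that an absolute area epsilon would misclassify the tiny but well-shaped faces a dense sculpt produces, so a relative test is needed.

```python
import math

def cross3(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def sub3(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def norm3(a):
    return math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)

def is_degenerate_tri(p0, p1, p2, rel_eps=1e-12):
    """Flag a triangle whose area is negligible relative to its edge
    lengths. An absolute area threshold would wrongly flag very small
    but well-shaped triangles (e.g. from a dense sculpt)."""
    e0, e1 = sub3(p1, p0), sub3(p2, p0)
    area2 = norm3(cross3(e0, e1))   # twice the triangle area
    scale = max(norm3(e0), norm3(e1))
    return area2 <= rel_eps * scale * scale

# A tiny but well-shaped triangle from a high-res mesh: not degenerate.
ok = is_degenerate_tri((0, 0, 0), (1e-6, 0, 0), (0, 1e-6, 0))
# A sliver whose vertices are almost collinear: degenerate.
bad = is_degenerate_tri((0, 0, 0), (1, 0, 0), (2, 1e-15, 0))
print(ok, bad)  # False True
```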
Some design decisions to make:
- What algorithm(s) should be used?
- What kind of arithmetic to use? Possibilities are: floats, doubles, or exact arithmetic (would likely involve porting some exact arithmetic library to Blender).
- Should this be done by evolving / modifying current BMesh code, or starting from scratch? If the latter, do we try to follow the lead of the current code in trying to do the work as much as possible in BMesh structures themselves, and reusing BMesh elements wherever possible, or do the calculations in some different structure and then rebuild all affected BMesh geometry?
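As an illustration of the arithmetic trade-off (a toy experiment, not Blender code), the following simulates single-precision evaluation of a 2D orientation predicate by rounding every input and intermediate to a 32-bit float: the float version loses a small offset that doubles still resolve.

```python
import struct

def f32(x):
    """Round a Python float (a double) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

def orient2d_double(a, b, c):
    """Cross product (b-a) x (c-a) in double precision; the sign tells
    whether c is left of, right of, or on the line through a and b."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def orient2d_float(a, b, c):
    """Same computation, but with every input and intermediate rounded
    to single precision, as float arithmetic would do."""
    ax, ay, bx, by, cx, cy = map(f32, (*a, *b, *c))
    t1 = f32(f32(bx - ax) * f32(cy - ay))
    t2 = f32(f32(by - ay) * f32(cx - ax))
    return f32(t1 - t2)

# c lies just barely to the left of the line through a and b.
a, b, c = (0.0, 0.0), (1.0, 1.0), (1000.0, 1000.0 + 1e-8)
print(orient2d_double(a, b, c))  # a small positive value
print(orient2d_float(a, b, c))   # 0.0: the offset is lost in rounding
```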
I surveyed a number of papers on how to do Booleans. Two of the most promising papers, in my opinion, are:
- "Mesh Arrangements for Solid Geometry" by Zhou, Grinspun, Zorin, and Jacobson. Handles coplanar faces, self-intersections, and non-manifold edges (but not ‘flaps’ nor open volumes). Uses exact arithmetic. Requires closed volumes, but mentions the generalized winding number concept (which could remove the closed-volume requirement, though it looks a bit messy to incorporate).
- "Boolean Operations on 3D Selective Nef Complexes" by Hachenberger, Kettner, and Mehlhorn. Handles everything in the closure of Boolean operations on half-spaces: non-manifold edges, topological singularities, etc. Uses exact arithmetic. The basic idea is to represent 3D topology by “sphere maps”, which show which volumes are in/out around vertices. Their representation, the “Selective Nef Complex” (SNC), is a lot like BMesh but with sphere maps at the vertices instead of plain vertices. There is an algorithm that creates SNCs from just the sphere maps, and another that does Boolean operations by first computing all intersections, then deriving the sphere map at each intersection point according to the Boolean operation and in/out labels, and finally synthesizing the resulting SNC. This is what CGAL implements now.
The Zhou et al. paper pays more attention to things like coplanar faces, and looks easier to implement, so I have a preference for that approach.
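The generalized winding number mentioned above can be sketched compactly (an illustrative toy, not any paper's actual implementation): sum the signed solid angles that each triangle subtends at a query point, using the Van Oosterom–Strackee formula; the total divided by 4π is ≈1 inside a consistently oriented closed volume, ≈0 outside, and fractional near openings, which is what makes it attractive for non-closed input.

```python
import math

def solid_angle(p, tri):
    """Signed solid angle of triangle tri as seen from point p
    (Van Oosterom & Strackee formula)."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = \
        [(v[0] - p[0], v[1] - p[1], v[2] - p[2]) for v in tri]
    det = (ax * (by * cz - bz * cy)
           - ay * (bx * cz - bz * cx)
           + az * (bx * cy - by * cx))
    la = math.sqrt(ax * ax + ay * ay + az * az)
    lb = math.sqrt(bx * bx + by * by + bz * bz)
    lc = math.sqrt(cx * cx + cy * cy + cz * cz)
    denom = (la * lb * lc
             + (ax * bx + ay * by + az * bz) * lc
             + (bx * cx + by * cy + bz * cz) * la
             + (cx * ax + cy * ay + cz * az) * lb)
    return 2.0 * math.atan2(det, denom)

def winding_number(p, tris):
    """Generalized winding number of point p w.r.t. a triangle soup."""
    return sum(solid_angle(p, t) for t in tris) / (4.0 * math.pi)

# A tetrahedron whose triangles all face outward.
v = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tet = [(v[0], v[2], v[1]), (v[0], v[1], v[3]),
       (v[0], v[3], v[2]), (v[1], v[2], v[3])]
print(winding_number((0.1, 0.1, 0.1), tet))  # ~1.0 (inside)
print(winding_number((2.0, 2.0, 2.0), tet))  # ~0.0 (outside)
```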
An important subproblem, which arises as soon as we want to handle arbitrary intersections of faces, edges, and vertices in a single plane, is how to do exactly that. The current Blender code uses the BM_face_split_edgenet function to do the work of remaking faces when there are edges intersecting them. So one approach is to generalize the triangle-triangle intersection code in Blenlib to handle all the results of coplanar intersections, and use that to figure out which edges to feed to BM_face_split_edgenet. The latter would need to be slightly generalized to also handle isolated vertices in a face; treating them as zero-length edges works (I tried this). One issue with this approach is that it only deals with intersections against one given face: if the things being intersected with the face also intersect each other, the code does not work as-is and would need considerable development.
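To illustrate what remaking faces from an edge net involves (this is a toy 2D sketch, independent of the actual BM_face_split_edgenet implementation, and it omits the isolated-vertex case): give each undirected edge two half-edges, sort the outgoing half-edges around each vertex by angle, walk the resulting cycles, and keep the counter-clockwise (positive-area) cycles as the new faces.

```python
import math

def extract_faces(verts, edges):
    """Toy planar face extraction: verts is a list of (x, y) points,
    edges a list of (i, j) index pairs forming a connected edge net.
    Returns the CCW (bounded) face cycles as lists of vertex indices."""
    # Neighbors of each vertex, sorted by outgoing angle.
    out = {i: [] for i in range(len(verts))}
    for i, j in edges:
        out[i].append(j)
        out[j].append(i)
    for i, nbrs in out.items():
        nbrs.sort(key=lambda j: math.atan2(verts[j][1] - verts[i][1],
                                           verts[j][0] - verts[i][0]))
    # Walk half-edge cycles: from half-edge (u, v), continue with the
    # outgoing edge at v that is clockwise-next from the edge back to u.
    visited = set()
    faces = []
    for i, j in edges:
        for u, v in ((i, j), (j, i)):
            if (u, v) in visited:
                continue
            cycle = []
            while (u, v) not in visited:
                visited.add((u, v))
                cycle.append(u)
                k = out[v].index(u)
                u, v = v, out[v][k - 1]
            # Keep only counter-clockwise cycles (bounded faces); the
            # single clockwise cycle is the unbounded outer region.
            area2 = sum(verts[a][0] * verts[b][1] - verts[b][0] * verts[a][1]
                        for a, b in zip(cycle, cycle[1:] + cycle[:1]))
            if area2 > 0:
                faces.append(cycle)
    return faces

# A square face split by a diagonal edge yields two triangles.
verts = [(0, 0), (1, 0), (1, 1), (0, 1)]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(extract_faces(verts, edges))
```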
Another approach to the planar intersection problem is to use Constrained Delaunay Triangulation (CDT), with an algorithm that discovers all the intersections, coincident vertices, and overlapping edges as a by-product of doing the triangulation. Afterwards, most of the triangulation edges can be removed, leaving only enough that valid BMesh faces can be formed from what remains. A promising paper for doing CDT, paying close attention to overlaps etc. and keeping track of how the output relates to the input, is “Fully Dynamic Constrained Delaunay Triangulations” by Kallmann, Bieri, and Thalmann. I have this implemented now and plan to propose adding it to Blenlib soon, as I believe it can be useful for other things, including for Python API users.
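The intersection-discovery part of such an approach can be illustrated with a deliberately naive O(n²) sketch (not the CDT algorithm itself): split every input segment at its crossings with the others, which is the arrangement a CDT with intersecting constraints computes as a by-product. Parallel and overlapping segments are left unhandled here to keep the sketch short.

```python
def seg_seg_intersection(p1, p2, p3, p4, eps=1e-12):
    """Intersection point of segments p1p2 and p3p4, or None.
    Parallel/overlapping segments are not handled in this sketch."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < eps:
        return None
    w = (p3[0] - p1[0], p3[1] - p1[1])
    t = (w[0] * d2[1] - w[1] * d2[0]) / denom
    u = (w[0] * d1[1] - w[1] * d1[0]) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (p1[0] + t * d1[0], p1[1] + t * d1[1])
    return None

def split_at_intersections(segs):
    """Split every segment at its intersections with all the others,
    returning the resulting sub-segments."""
    result = []
    for i, (a, b) in enumerate(segs):
        # Parameters along (a, b) where other segments cross it.
        ts = [0.0, 1.0]
        for j, (c, d) in enumerate(segs):
            if i == j:
                continue
            x = seg_seg_intersection(a, b, c, d)
            if x is not None:
                dx, dy = b[0] - a[0], b[1] - a[1]
                ts.append(((x[0] - a[0]) * dx + (x[1] - a[1]) * dy)
                          / (dx * dx + dy * dy))
        ts = sorted(set(ts))
        for t0, t1 in zip(ts, ts[1:]):
            result.append(((a[0] + t0 * (b[0] - a[0]), a[1] + t0 * (b[1] - a[1])),
                           (a[0] + t1 * (b[0] - a[0]), a[1] + t1 * (b[1] - a[1]))))
    return result

# Two crossing diagonals become four sub-segments meeting at (1, 1).
segs = [((0, 0), (2, 2)), ((0, 2), (2, 0))]
print(split_at_intersections(segs))
```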
We had some discussion of arithmetic in general, and of Boolean in particular, in this devtalk thread. The general consensus was that using double arithmetic internally in Boolean would likely help. But what about exact arithmetic? Many of the papers on Boolean and other intersection problems reach the same conclusion: just use exact arithmetic and many problems go away; it is a lot slower, but can be sped up somewhat by tricks that use floating point in most cases and only resort to the more expensive techniques when necessary. I see three main problems with this approach: (1) the code to implement exact arithmetic is complex; there are libraries we could import into Blender (maybe -- licensing would have to be checked), but do we need to? (2) I am fairly sure our users don't actually want the result of exact arithmetic: in many cases where faces are almost coplanar, they would not appreciate having two separate planes joined by extremely long and skinny triangles. (3) Do we really want the performance hit? For all of these reasons, I think we should use doubles (with careful epsilon tests) instead of exact arithmetic.
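As a point of comparison, the flavor of exact arithmetic those papers use can be mimicked in a few lines with Python's fractions module (purely illustrative; real implementations use optimized rational or adaptive-precision libraries): on exactly collinear rational inputs, the rational orientation value is exactly zero, while the floating-point version can only produce something approximately zero, with an unreliable sign.

```python
from fractions import Fraction

def orient2d(a, b, c):
    """Cross product (b-a) x (c-a); exact when the coordinates are
    Fractions, rounded when they are floats."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

# Three points on the line y = 3x, with coordinates that are not
# exactly representable in binary floating point.
pts_float = [(0.1, 0.3), (0.2, 0.6), (0.3, 0.9)]
pts_exact = [(Fraction(xn, 10), Fraction(yn, 10))
             for xn, yn in ((1, 3), (2, 6), (3, 9))]

print(orient2d(*pts_exact))  # 0: exactly collinear
print(orient2d(*pts_float))  # a tiny value; the sign is unreliable
```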
I have tried for a number of months to use the "evolve the current code" approach. It is partially working, and I'll continue with it for a while. But the code needed to deal with merged vertices and edges has become increasingly complex and hard to get right, so I have hit a kind of wall. I think switching from BM_face_split_edgenet to the CDT approach may unblock me, so I will continue down this path for now. But there are attractions to the approach of building the geometry up separately from the existing BMesh, and then putting it back into BMesh at the end. That's the approach I took for Bevel, and I do not regret that decision. So if the code starts to look too hairy, I might switch to that approach.