
Constrained Delaunay Triangulation
Closed, Resolved · Public · DESIGN


Problem Description

This design task is about making a 2D Constrained Delaunay Triangulation (CDT) library function for Blenlib. A Constrained Delaunay Triangulation is a triangulation with "nice edges" that includes "constraints": vertices, edges, and faces that must appear in the output. Edges and faces may have to be subdivided in the output in order to make a triangulation. A useful byproduct of computing the CDT is that the algorithm discovers all of the intersections, coincident vertices, and overlapping edges in the input.

This task has several motivations.

  1. As mentioned in the parent task T67744 on Boolean Redesign, many users expect Boolean operations to work when there are coplanar intersections. While the existing BM_face_split_edgenet function handles some cases of this (edges coplanar with a single face), it by no means handles the whole problem, and the algorithm looks difficult to generalize. Using CDT for its byproduct of discovering all coplanar intersections is the approach I want to take.
  2. Users in forums have sometimes asked for a good way of importing 2D diagrams and getting all of the edges properly intersected. example. This could be applied by the general mesh intersect routine, used to enhance SVG import, or both.
  3. Add-on writers have asked for this functionality. example 1, example 2, example 3. The Archipack developer (last example) had to resort to loading scipy in order to get fast enough Delaunay triangulation -- the Python version he was using was too slow for large inputs.

Some requirements arise from these motivations:

  • The algorithm needs to handle large inputs (at least hundreds of thousands of input constraints) fairly efficiently. An O(n^2) algorithm is likely going to be too slow.
  • It needs to be robust, both in terms of how it handles numeric approximations and arithmetic, and in terms of being able to handle coinciding/overlapping or nearly coinciding/overlapping input elements. There probably needs to be a user-specifiable epsilon that says how close things can be before we regard them as "at the same place".
  • The input and output parts of the API should be amenable to making into a usable Python API.
  • We need to be able to figure out, for each output vertex, which input vertices mapped to it (there could be more than one, because of the epsilon considerations, or because the user simply repeated some input coordinates). We need to know that so that we can properly assign Mesh attributes (e.g., vertex group membership, bevel weights) to the output vertices. Similarly, we need to be able to figure out, for each output edge and each output face, which input edge(s) or face(s) it came from. In all cases, the goal is to figure out what attributes (e.g., UV coordinates) to assign if we build a BMesh out of the result.
  • We probably need some flexibility for which triangles are in the output. One case is: all of them, with the outer boundary being the convex hull of the input vertices. But for the Boolean case I really just want the intersections -- except that I need to keep in enough of the extra edges that all the faces that need to be made are valid BMesh faces (no repeated vertices, holes properly connected).
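The epsilon requirement above can be illustrated with a minimal sketch. The function name and squared-distance formulation are my own illustration, not a proposed Blenlib API; the only assumption taken from the text is that two vertices closer than a user-specified epsilon count as "at the same place".

```c
#include <assert.h>

/* Illustrative sketch: two input vertices are regarded as being "at the
 * same place" when their distance is within the user epsilon. Comparing
 * squared distances avoids a sqrt call. */
static int verts_coincide(const double a[2], const double b[2], double epsilon)
{
  double dx = a[0] - b[0];
  double dy = a[1] - b[1];
  return dx * dx + dy * dy <= epsilon * epsilon;
}
```

When two input vertices coincide under this test, the output would contain a single merged vertex that records both input ids.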


A promising paper for doing CDT, paying close attention to overlaps etc., and keeping track of how the output relates to the input, is "Fully Dynamic Constrained Delaunay Triangulations", by Kallmann, Bieri, and Thalmann.

This works by incrementally adding constraints. A CDT is maintained at all times, and each newly added vertex or edge modifies it. The algorithm first locates where to insert a vertex by walking through triangles (using "do these three points form a clockwise angle?" orientation tests) until it reaches the triangle where the point belongs; the point may turn out to coincide with an existing vertex, or to lie on an existing edge. After a point is inserted, edges are flipped until the CDT condition holds again. Edge constraints are more complicated: after the endpoints are located (and inserted, if necessary), a search from the beginning vertex to the end vertex discovers which edges and vertices are crossed. If an existing constraint edge is crossed, the intersection point must be inserted. Other crossed edges (non-constraint ones) can simply be removed, after which the resulting non-triangle faces are retriangulated, maintaining the CDT condition during retriangulation. Face constraints are just closed cycles of edge constraints, so the same algorithms apply.
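The "clockwise angle" test mentioned above is the standard 2D orientation predicate. The sketch below shows the predicate and how three of them combine into a point-in-triangle test for the triangle walk; the function names are illustrative, not Blenlib API, and doubles are used as proposed later in this task.

```c
#include <assert.h>

/* Signed double area of triangle (a, b, c): positive when the points are
 * in counterclockwise order, negative when clockwise, zero when collinear. */
static double orient2d(const double a[2], const double b[2], const double c[2])
{
  return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
}

/* A point p lies inside (or on the boundary of) a counterclockwise
 * triangle (a, b, c) iff it is not clockwise of any of the three
 * directed edges. The triangle walk in point location repeatedly uses
 * tests like these to step toward the containing triangle. */
static int point_in_ccw_tri(const double p[2], const double a[2],
                            const double b[2], const double c[2])
{
  return orient2d(a, b, p) >= 0.0 && orient2d(b, c, p) >= 0.0 &&
         orient2d(c, a, p) >= 0.0;
}
```

Note that a robust implementation would need to treat near-zero results of orient2d with the epsilon discussed in the requirements.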

To keep track of which input elements go with which output elements, a list of "input ids" is kept with each vertex, edge, and face in the CDT. Those ids are updated appropriately as edges and faces are split, and as vertices are merged into other vertices.
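A minimal sketch of the id bookkeeping: when an edge is split, both halves inherit the whole id list; when two vertices merge, their lists are concatenated. The struct and function below are illustrative only (a real implementation would likely use Blenlib containers and guard against allocation failure).

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-element list of input ids. */
typedef struct IDList {
  int len;
  int *ids;
} IDList;

/* Merging two CDT elements (e.g., two coincident vertices) concatenates
 * their input-id lists, so the merged element remembers every input
 * element that mapped to it. Caller frees the returned ids array. */
static IDList idlist_concat(const IDList *a, const IDList *b)
{
  IDList r;
  r.len = a->len + b->len;
  r.ids = malloc(sizeof(int) * (size_t)r.len);
  memcpy(r.ids, a->ids, sizeof(int) * (size_t)a->len);
  memcpy(r.ids + a->len, b->ids, sizeof(int) * (size_t)b->len);
  return r;
}
```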

The running time of the algorithm from the paper is O(n^(4/3)) if point location uses a method where you sample O(n^(1/3)) vertices in the CDT, find the sampled one closest to the point-to-be-inserted, and start the search from there. One could instead use a kd-tree for point location and achieve an O(n log n) running time.
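The sampling strategy can be sketched as follows. This is a hedged illustration of the idea, not a proposed API: the function name is made up, and a real implementation would sample over the CDT's current vertices and would likely use a better random source than rand().

```c
#include <stdlib.h>

/* Pick a starting vertex for the triangle walk by sampling roughly
 * n^(1/3) candidate vertices and keeping the one closest to the query
 * point. Starting the walk near the query is what gives the expected
 * O(n^(4/3)) total bound from the paper. */
static int sample_start_vertex(const double (*coords)[2], int n,
                               const double query[2])
{
  /* Smallest k with k^3 >= n, i.e. roughly n^(1/3) samples. */
  int nsample = 1;
  while (nsample * nsample * nsample < n) {
    nsample++;
  }
  int best = -1;
  double best_d2 = 0.0;
  for (int i = 0; i < nsample; i++) {
    int cand = rand() % n; /* crude uniform-ish sample; fine for a sketch */
    double dx = coords[cand][0] - query[0];
    double dy = coords[cand][1] - query[1];
    double d2 = dx * dx + dy * dy;
    if (best == -1 || d2 < best_d2) {
      best_d2 = d2;
      best = cand;
    }
  }
  return best;
}
```

Swapping this for a kd-tree lookup (Blenlib already has one) is what would bring the total down to O(n log n).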

I propose using doubles for all the arithmetic in the routine, to help minimize the precision problems that happen whenever one intersects nearly parallel lines.


The input can be described by separate arrays of vertices, edges, and faces. The vertices need 2D coordinates. The edges and faces can be described with indices into the vertex list. The faces are really variable-length arrays of vertex indices, which are awkward to express in a C API. I propose putting the vertex indices for all faces into one flat array, with two side arrays giving the start and length of each individual face. This is similar to how Blender represents Mesh faces.
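A possible shape for such an input struct is sketched below. The struct and field names are hypothetical (chosen here for illustration, not yet agreed API), but the layout follows the proposal: one flat index array plus per-face start and length tables.

```c
#include <assert.h>

/* Hypothetical input layout for the CDT routine. */
typedef struct CDT_input {
  int verts_len;
  int edges_len;
  int faces_len;
  double (*vert_coords)[2]; /* [verts_len] 2D coordinates */
  int (*edges)[2];          /* [edges_len] pairs of vertex indices */
  int *faces;               /* flattened vertex indices of all faces */
  int *faces_start_table;   /* [faces_len] offset of each face in faces */
  int *faces_len_table;     /* [faces_len] number of vertices per face */
  double epsilon;           /* "at the same place" tolerance */
} CDT_input;

/* The j-th vertex index of face i, read out of the flattened array. */
static int face_vert(const CDT_input *in, int i, int j)
{
  assert(i < in->faces_len && j < in->faces_len_table[i]);
  return in->faces[in->faces_start_table[i] + j];
}
```

For example, a quad (0, 1, 2, 3) followed by a triangle (0, 3, 4) would be stored as faces = {0, 1, 2, 3, 0, 3, 4}, faces_start_table = {0, 4}, faces_len_table = {4, 3}.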

The output can be similar for vertices, edges, and faces. It is very hard to predict how much space the output will need (at least without grossly overestimating in the usual case), so it is hard to follow the common Blenlib practice of having the caller pass in the required allocated space. I propose returning the output as allocated arrays, which the caller must free. A convenience routine can be added to the API to free the whole output structure and its allocated arrays.

Along with vertices, edges, and faces, we need to let the caller know how the output elements relate to the input elements. I propose representing the list of input ids that go with each output element using the same flattened-array-plus-start/length-tables scheme used for faces. These will be returned in the output structure too.
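To make the mapping scheme concrete, here is a sketch of the vertex part of an output struct, with the same flattened layout applied to the original-id lists (edges and faces would get analogous fields). All names are hypothetical, for illustration only.

```c
#include <assert.h>

/* Hypothetical output layout: only the vertex fields are shown; edges
 * and faces would carry analogous orig/start/len tables. */
typedef struct CDT_result {
  int verts_len;
  double (*vert_coords)[2];    /* [verts_len] output coordinates */
  int *verts_orig;             /* flattened input ids for all out verts */
  int *verts_orig_start_table; /* [verts_len] offset into verts_orig */
  int *verts_orig_len_table;   /* [verts_len] count of input ids */
} CDT_result;

/* How many input vertices mapped onto output vertex v. */
static int vert_orig_count(const CDT_result *r, int v)
{
  return r->verts_orig_len_table[v];
}

/* The j-th input id that mapped onto output vertex v. */
static int vert_orig_id(const CDT_result *r, int v, int j)
{
  assert(j < r->verts_orig_len_table[v]);
  return r->verts_orig[r->verts_orig_start_table[v] + j];
}
```

For example, if input vertices 0 and 2 were merged (within epsilon) into output vertex 0, and input vertex 1 became output vertex 1, the tables would be verts_orig = {0, 2, 1}, verts_orig_start_table = {0, 2}, verts_orig_len_table = {2, 1}.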