A new implementation of the Array Modifier, without BMesh, gives 100 fold performance improvement #39566

Closed
opened 2014-04-02 21:55:05 +02:00 by Patrice Bertrand · 9 comments

Patch [D443](https://archive.blender.org/developer/D443) proposes a new implementation of the Array Modifier that does not use BMesh. Modifiers that go through BMesh have very poor performance.

The patch is a complete reimplementation, identical to the current modifier in features and results. It gives a performance improvement of more than 100-fold when the merge option is not selected, and of around 10-20x with the merge option checked. These improvements are measured without OpenMP; another 2x-4x factor could be gained through multi-threading, which is much easier to implement on the simple loops of direct DerivedMesh processing.

The patch also includes a proposed implementation of doubles detection, inspired by the algorithm used in the "Remove Doubles" operator but with a few differences: it uses separate sorted arrays for target and source vertices, and I think it performs slightly better. This map_doubles() function could be added to cdderivedmesh.c and made available to other modifiers.
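
To make the idea concrete, here is a minimal sketch of that approach (not the patch's actual code; the struct, function name and calling convention are placeholders): both vertex sets are sorted once by the sum of their coordinates, and for each source vertex only the targets whose sum lies within sqrt(3)*dist of its own sum are actually tested.

```c
#include <stdlib.h>

typedef struct SortVert {
	float co[3];
	float sum;   /* co[0] + co[1] + co[2], the sort key */
	int index;   /* index in the original vertex array */
} SortVert;

static int sortvert_cmp(const void *a, const void *b)
{
	const float d = ((const SortVert *)a)->sum - ((const SortVert *)b)->sum;
	return (d > 0.0f) - (d < 0.0f);
}

/* For each source vertex, store the index of a target vertex closer than
 * 'dist', or -1. Both arrays are sorted once, then scanned with a window. */
static void map_doubles_sketch(SortVert *src, int src_num,
                               SortVert *tgt, int tgt_num,
                               float dist, int *r_map)
{
	const float sum_limit = dist * 1.7320508f;  /* sqrt(3): max sum difference of two points within 'dist' */
	const float dist_sq = dist * dist;
	int t_low = 0;

	qsort(src, src_num, sizeof(SortVert), sortvert_cmp);
	qsort(tgt, tgt_num, sizeof(SortVert), sortvert_cmp);

	for (int s = 0; s < src_num; s++) {
		r_map[src[s].index] = -1;

		/* advance the lower bound of the target window (src sums are non-decreasing) */
		while (t_low < tgt_num && tgt[t_low].sum < src[s].sum - sum_limit) {
			t_low++;
		}
		for (int t = t_low; t < tgt_num && tgt[t].sum <= src[s].sum + sum_limit; t++) {
			const float dx = src[s].co[0] - tgt[t].co[0];
			const float dy = src[s].co[1] - tgt[t].co[1];
			const float dz = src[s].co[2] - tgt[t].co[2];
			if (dx * dx + dy * dy + dz * dz <= dist_sq) {
				r_map[src[s].index] = tgt[t].index;
				break;
			}
		}
	}
}
```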

The new implementation of the array modifier also uses CDDM_merge_verts() from cdderivedmesh.c, which up until now was only called by the mirror modifier.

I understand there is some risk in overhauling such a rock-solid old workhorse as the Array modifier, but a 100-times gain is a lot. I believe it is definitely worth it: it can be the difference between a 10-second wait and a 0.1-second result.

Although the implementation is fully functional and has been tested in various conditions, it is meant for evaluation purposes only at this time. Depending on the modifier's "count" value, it calls either the former implementation (for odd count values) or the new one (for even count values). It also prints the elapsed time to the console so that the two implementations can be compared.

I have another refinement in mind, where the mapping of doubles performed at rank n of the array would be cached and re-applied with an offset at rank n+1. This would approximately cut time by half when merge is on.
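
As an illustration of that refinement (hypothetical names; it assumes the per-chunk vertex count and the seam geometry are the same at every rank), the pairs found between the first two chunks could simply be re-indexed for every later seam instead of re-running the search:

```c
/* Hypothetical sketch of the caching idea. 'pairs' holds the doubles found
 * between the first two chunks, as per-chunk (local) vertex indices; the same
 * pairs are re-applied at every later seam, shifted by the per-chunk vertex
 * count, instead of re-running the doubles search. */
typedef struct VertPair {
	int src;  /* vertex in chunk n+1 (local index) */
	int dst;  /* its double in chunk n (local index) */
} VertPair;

static void apply_cached_pairs(const VertPair *pairs, int pairs_num,
                               int count, int verts_per_chunk, int *full_map)
{
	for (int rank = 1; rank < count; rank++) {
		for (int k = 0; k < pairs_num; k++) {
			const int src = pairs[k].src + rank * verts_per_chunk;
			const int dst = pairs[k].dst + (rank - 1) * verts_per_chunk;
			full_map[src] = dst;
		}
	}
}
```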

One last word: the doubles-mapping algorithm, very much inspired by the Remove Doubles operator and slightly improved, uses an array of vertices sorted by the sum x+y+z. Candidate doubles are then tested within -3d to +3d of a given vertex's sum. Actually, 3 could be replaced by sqrt(3) (about 1.73), an admittedly minor optimization that could be applied to the Remove Doubles operator as well.
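
For reference, the sqrt(3) bound follows from the Cauchy-Schwarz inequality (my notation, not something taken from the patch): for two vertices within merge distance d,

$$
|\Delta x + \Delta y + \Delta z| \;\le\; \sqrt{3}\,\sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2} \;\le\; \sqrt{3}\,d \;\approx\; 1.73\,d
$$

so their coordinate sums can differ by at most sqrt(3)*d, and the ±3d window is wider than necessary.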

Changed status to: 'Open'

Patrice Bertrand self-assigned this 2014-04-02 21:55:05 +02:00

Added subscriber: @PatriceBertrand

Added subscriber: @gandalf3

Added subscriber: @dragostanasie

Added subscriber: @derekbarker

Added subscriber: @mont29

Hi,
For those who may be interested, I'd like to say a few words on the merge-doubles algorithm based on the sum of the x, y, z coordinates. I picked it up from the Remove Doubles operator and tried to improve it very slightly. But this algorithm is actually very poor. When I first saw it, I thought "How clever, the way it manages to fold all 3 coordinates into a single one-dimensional array, which is then sorted and processed in order". But this transformation of a complex 3-dimensional problem into a simple 1-dimensional one is really an illusion: it brings very little gain compared to an algorithm that would simply process vertices based on just their X coordinate and, for each x, scan all vertices with X within [x-d; x+d].

Think of it this way, in 2 dimensions for a start: imagine all vertices are random points within a square of size DxD. Algorithm A1 sorts by the x coordinate only and, for each source vertex, compares all vertices with x within [x-d; x+d]. Algorithm A2 sorts by s = x+y and compares all vertices whose sum lies within [s-2d; s+2d], or better [s-sqrt(2)d; s+sqrt(2)d]. The average number of vertices to be scanned is, in both cases, proportional to N*d/D. The only difference is that instead of scanning vertical strips, we are scanning diagonal strips around the line of equation x + y = s0.

This algorithm is an illusion. For a given d (merge distance), the amount of processing still grows as N^2 (scaled by d/D), in 2 dimensions as well as in 3. The only benefit, maybe, is that in many mesh objects there is some kind of alignment of vertices with one or more of the X, Y, Z axes, so scanning in diagonals might have a tiny bit of benefit.
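
A rough count backs this up (my notation, assuming N vertices spread uniformly over a region of size D and merge distance d): whatever the sort key, x alone or x+y(+z), the window around a given vertex is a strip or slab whose width is proportional to d, so it holds a fraction of the vertices proportional to d/D. Hence

$$
\text{candidates per vertex} = \Theta\!\left(N\,\frac{d}{D}\right)
\qquad\Rightarrow\qquad
\text{total work} = \Theta\!\left(N^2\,\frac{d}{D}\right)
$$

in both cases; the sum trick changes the constant, not the growth.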

The algorithm which I propose in the latest diff is totally different. It implies some setup overhead, but can be hugely better when d/D (merge distance over size of object) gets big, and thus the number of candidates to be processed explodes.

Changed status from 'Open' to: 'Archived'

Closing that task, no need to keep both open. :)
