
A new implementation of the Array Modifier, without BMesh, gives a great performance improvement

Authored by Campbell Barton (campbellbarton) on Apr 2 2014, 9:47 PM.



This is a proposal for a new implementation of the Array Modifier that does not use BMesh. Modifiers using BMesh have very poor performance.

This patch is a proposed new implementation. It is identical in its features and results. It gives a performance improvement of more than 100-fold (really, it does) when the merge option is not selected, and of around 10-20x with the merge option checked. These improvements are measured without OpenMP; another 2x-4x factor could be gained through multi-threading, which is much easier to implement on the simple loops of direct derived-mesh processing.

This patch also proposes an implementation of doubles detection inspired by the algorithm used in the "Remove Doubles" operator, but with a few differences: it uses separate sorted arrays for target and source, and I think it has slightly better performance. This map_doubles() function could be added to cdderivedmesh.c and made available to other modifiers.
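To illustrate the idea, here is a minimal standalone sketch of such a sorted coordinate-sum doubles mapping. The struct and helper names below (SortVert, build_sorted) are illustrative, not the patch's actual code; the real map_doubles() operates on MVert/CustomData inside the modifier.

```c
#include <assert.h>
#include <math.h>
#include <stdlib.h>
#include <string.h>

typedef struct SortVert {
	float co[3];
	float sum_co;  /* x + y + z, the sort key */
	int index;     /* original vertex index */
} SortVert;

static int sortvert_cmp(const void *a, const void *b)
{
	const SortVert *sa = a, *sb = b;
	return (sa->sum_co > sb->sum_co) - (sa->sum_co < sb->sum_co);
}

static void build_sorted(const float (*co)[3], int num, SortVert *sv)
{
	for (int i = 0; i < num; i++) {
		memcpy(sv[i].co, co[i], sizeof(float[3]));
		sv[i].sum_co = co[i][0] + co[i][1] + co[i][2];
		sv[i].index = i;
	}
	qsort(sv, num, sizeof(SortVert), sortvert_cmp);
}

/* For each source vertex, write into r_map the index of a target vertex
 * within 'dist', or -1. Both arrays are sorted by sum_co, so candidates
 * lie in a key window of width sqrt(3) * dist around the source key. */
void map_doubles(const float (*src)[3], int src_num,
                 const float (*tgt)[3], int tgt_num,
                 float dist, int *r_map)
{
	SortVert *ssv = malloc(sizeof(SortVert) * src_num);
	SortVert *tsv = malloc(sizeof(SortVert) * tgt_num);
	const float dist_sq = dist * dist;
	const float key_win = sqrtf(3.0f) * dist;
	int t_lo = 0;

	for (int i = 0; i < src_num; i++)
		r_map[i] = -1;
	build_sorted(src, src_num, ssv);
	build_sorted(tgt, tgt_num, tsv);

	for (int i = 0; i < src_num; i++) {
		const SortVert *s = &ssv[i];
		/* advance window start: targets with much smaller keys can never match */
		while (t_lo < tgt_num && tsv[t_lo].sum_co < s->sum_co - key_win)
			t_lo++;
		for (int j = t_lo; j < tgt_num && tsv[j].sum_co <= s->sum_co + key_win; j++) {
			const float dx = s->co[0] - tsv[j].co[0];
			const float dy = s->co[1] - tsv[j].co[1];
			const float dz = s->co[2] - tsv[j].co[2];
			if (dx * dx + dy * dy + dz * dz <= dist_sq) {
				r_map[s->index] = tsv[j].index;
				break;
			}
		}
	}
	free(ssv);
	free(tsv);
}
```

Sorting costs O(n log n), and the sweep only does real distance tests on vertices whose coordinate sums fall within sqrt(3) * dist of each other, which is what keeps the comparison count low.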

The new implementation of the array modifier also uses CDDM_merge_verts() from cdderivedmesh.c, which up until now was only called by the mirror modifier.

I understand there is some risk in overhauling such a rock-solid old workhorse as the array modifier, but a 100-fold gain is a lot. I believe it's definitely worth it: it can be the difference between a 10-second wait and a 0.1-second result.

Event Timeline

@Patrice Bertrand (PatB),

Quick note to say, +1 for moving away from BMesh.

This was mainly done because we had to get all modifiers working with BMesh but it gave a big performance hit too.

Provided cleaned up version of the patch which compiles in recent master.

The previous cleanup had an error (a source/target mixup); moved that code into its own function, svert_from_mvert.

There's a problem I ran into: if you enable merge and increase the count, it triggers this assert.

BLI_assert failed: /src/blender/source/blender/blenlib/intern/edgehash.c:94, edgehash_keyhash(), at 'v0 < v1'

	abort	/usr/lib/		0x7ffff50c1788	
	edgehash_keyhash	edgehash.c	94	0x1257dff	
	edgehash_lookup_entry	edgehash.c	171	0x1257dff	
	BLI_edgehash_lookup_p	edgehash.c	340	0x1257dff	
	CDDM_merge_verts	cdderivedmesh.c	2486	0xf9d574	
	arrayModifier_doArray_2	MOD_array.c	829	0xd9f345	
	applyModifier	MOD_array.c	1114	0xda0347

What's happening is that edges with two identical vertices are being created. In fact this looks more like a bug in CDDM_merge_verts (it doesn't support two different vertices pointing to the same target).

With only a small change this assert can be resolved.

	/* now go through and fix edges and faces */
	med = cddm->medge;
	c = 0;
	for (i = 0; i < totedge; i++, med++) {
		const unsigned int v1 = (vtargetmap[med->v1] != -1) ? vtargetmap[med->v1] : med->v1;
		const unsigned int v2 = (vtargetmap[med->v2] != -1) ? vtargetmap[med->v2] : med->v2;
		if (LIKELY(v1 != v2)) {


I didn't commit this since I'm not sure it's correct; the resulting mesh can still be corrupt. (I checked, and with the old code it's not corrupt.)

Attached file, with array_modifier_cleanup_f34cb95_v2.diff and the snippet above:

  • load the attached file
  • press the Apply button
  • enter edit mode
  • crash/assert (I'm using a debug build)

The error is that a loop has a vertex which is not in the loop's edge, see bmesh_core.c:

	if (!BM_vert_in_edge(l_iter->e, l_iter->v) || !BM_vert_in_edge(l_iter->e, l_iter->next->v))
		err |= (1 << 20);

So we need to know if CDDM_merge_verts needs to be able to support more use-cases, or if the data passed to it is incorrect.

Thank you for your thorough examination Campbell. I'm off for 3 days, but will do my best to correct this next week.

I cannot reproduce the crash. I checked out latest master, applied Campbell's array_modifier_cleanup_f34cb95_v2.diff, and loaded sample crash blend file, but no crash.

However, I do agree with the proposed patch in CDDM_merge_verts. Actually, it seems likely that this was the programmer's intention: not "I will skip edges where v1 == v2", but rather "I will skip edges where both ends are mapped to the same point".

There is an important note in the header comment of CDDM_merge_verts:

This function is currently only used by the Mirror modifier, so it skips any faces that have all vertices merged (to avoid creating pairs of faces sharing the same set of vertices). If used elsewhere, it may be necessary to make this functionality optional.

And indeed, this leads to a bug in situations such as this simple use case: take a cube, delete the YZ face on the right (the one with the greatest X value), then add an array modifier on the X axis with merge. The YZ face on the left disappears, because all of its vertices have been merged. But in this case there is no good reason for deleting the face. So, as the note says, this behaviour needs to be optional.

Another important remark is that doubles removal picks the first target vertex it finds within the merge distance. Thus, the order in which candidates are tested matters, and it may be that in some cases this new algorithm will pick a different nearby vertex than the former algorithm. Would this matter? There is no precise promise as to which vertex will be picked, but it could cause (very minor) differences in old blend files.

Note that tot_time_doubles must be initialized to 0, not 1. This has no consequence, since it is only there to measure the total time spent in find_doubles().

I propose a tiny optimization: write the sum_v3 result into each SortVertsElem instead of the coordinates; this will use less memory and make fewer calls to sum_v3.

A much more potent optimization would be to cache the vertex-doubles mapping of chunk N onto N-1, and apply it again, with just a translation of vertex indices, to chunk N+1 onto N. On large meshes, the calls to find_doubles() add up to approximately half of the total time of do_array(), so this would be almost a 50% gain. Do you see any risk that it gives wrong results? (Of course only for array chunks, not caps, or first-last.)
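The index-translation idea can be sketched as follows. This is a hypothetical helper, not code from the patch; it assumes every chunk has the same vertex count and a pure translation offset, so the map of chunk N onto N-1 is reused for N+1 onto N by shifting the target indices by one chunk.

```c
#include <assert.h>

/* If chunk N merged onto chunk N-1 with mapping 'prev_map' (per-chunk
 * source vertex i -> target vertex index, or -1 for "no double"), and
 * chunks are identical up to a constant translation, the same mapping
 * holds between chunks N+1 and N after shifting targets by chunk_verts. */
static void translate_double_map(const int *prev_map, int chunk_verts,
                                 int *r_map)
{
	for (int i = 0; i < chunk_verts; i++)
		r_map[i] = (prev_map[i] == -1) ? -1 : prev_map[i] + chunk_verts;
}
```

This avoids re-running the doubles search for every interior chunk; as noted in the discussion, it is only valid when there is no scaling between chunks.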

I was not using a debug build; that's why I did not reproduce the assert(v1 != v2) crash. OK now.

@Patrice Bertrand (PatB), I suggest first getting this working without crashing before optimizing further.

Of course after that it's worth exploring ways to speed it up more.

Caching sum_v3 is a simple speedup.

Not sure what you mean by "cache the vertex-doubles mapping of chunks N on N-1". I suppose you mean caching doubles in the middle of the array?

This might not always work if the array scales... but we could just ignore that and treat all doubles as 1.0 scale.

Patrice Bertrand (PatB) updated this revision to Unknown Object (????).Apr 10 2014, 6:16 PM

There was a bug remaining in CDDM_merge_verts when processing mpolys, a little further on. This patch corrects it.
It also implements translation of the doubles mapping when handling similar chunks of mesh in the array.
And it implements a second mode for CDDM_merge_verts. The first mode is the one used by the Mirror modifier; the second mode is meant for the Array modifier. In the first mode, an mpoly where all vertices are mapped (no matter to what) is dumped. This is not the desired behaviour for the Array modifier. Take for instance a cube with a deleted face on the right, and an array modifier: the second instance of the cube will have all its left-side vertices mapped, but the target vertices do not make up a face, so the second cube's left face should not be dumped. Thus the new merge mode for CDDM_merge_verts dumps an mpoly only if the target vertices make up an identical face. However, handling this condition is painstakingly complex and costly, so I am not satisfied with it at the moment.

@Campbell Barton (campbellbarton): I follow up on the patch's comment.

First, the remaining bug in CDDM_merge_verts, just a few lines below the one you had corrected. Your correction was indeed dropping edges whose vertices are mapped to the same target, but when handling polys, the mpolys using these edges were not dumped, which would lead to a corrupt mesh. So the same correction is added in the poly-loop handling:

	for (j = 0; j < mp->totloop; j++, ml++) {
		unsigned int v1, v2;
		med = cddm->medge + ml->e;
		v1 = (vtargetmap[med->v1] != -1) ? vtargetmap[med->v1] : med->v1;
		v2 = (vtargetmap[med->v2] != -1) ? vtargetmap[med->v2] : med->v2;
		if (LIKELY(v1 != v2)) {

It seems not to crash anymore, as far as I can tell.

I included the translation of the doubles mapping, which is quite simple and has a huge impact for high counts. Indeed, I would have forgotten about scaling, which increases the distances past the threshold. Now the translation occurs only when there is no scaling.

I do have a tough problem that remains. As I mentioned, the way CDDM_merge_verts dumps faces when all vertices are mapped is not correct for the array modifier. The correct behaviour, as I see it, is: if all vertices are mapped AND the target vertices make up a face that is identical (same vertices, same order or reverse order), then dump the face.

So I worked on implementing this as an alternative mode in CDDM_merge_verts. This required a function, CDDM_poly_compare, that compares two polys of the mesh after applying the mapping to the first one. This function was very complex to write in a clean way and gave me a real headache. It works now, but shows poor performance, really spoiling all the benefits.

What we would need to make this faster is an equivalent of edgehash.c for polys: polyhash.c. But polys are much more difficult to compare than edges. We could use a simplified hash table just to check that a match is possible. For example, we could hash together all vertex indices and check in the table whether some poly has this hash. Then another poly can quickly be tested for possible existence, and if the check is positive, we do a full comparison. What do you think?

It is not an easy task, but it seems to me that is the only way to quickly check the delete-face condition in the array modifier.
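A sketch of such a first-level poly fingerprint, along the lines described above (sum plus XOR of vertex indices, independent of starting loop and winding direction; the names PolyKey and poly_key are illustrative, not from the patch):

```c
#include <assert.h>

/* Order- and direction-independent fingerprint of a polygon, built from
 * its vertex indices. Identical polys (possibly rotated or reversed)
 * always get the same fingerprint; distinct polys usually differ. */
typedef struct PolyKey {
	unsigned int hash_sum;  /* sum of all vertex indices */
	unsigned int hash_xor;  /* XOR of all vertex indices */
	int totloop;
} PolyKey;

static PolyKey poly_key(const unsigned int *verts, int totloop)
{
	PolyKey k = {0, 0, totloop};
	for (int i = 0; i < totloop; i++) {
		k.hash_sum += verts[i];
		k.hash_xor ^= verts[i];
	}
	return k;
}

static int poly_key_equal(const PolyKey *a, const PolyKey *b)
{
	return a->totloop == b->totloop &&
	       a->hash_sum == b->hash_sum &&
	       a->hash_xor == b->hash_xor;
}
```

Two distinct polys can still collide on the same fingerprint, so a positive match only licenses the full comparison; it never replaces it.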

Just an additional point: when using capping, and in edit mode, the meshes of the cap objects are displayed, while in the original implementation they are not. Do you have any idea why?

I found the cause of the abnormal delay. Will post a corrected and faster diff soon, including using a GHash for identifying identical polys fast.
In the meantime, no need to look at it... thanks.

Patrice Bertrand (PatB) updated this revision to Unknown Object (????).Apr 12 2014, 12:28 PM

This release of the bmesh-less array modifier has:

  • a correction in CDDM_merge_verts when handling verts mapped to same target
  • a second mode in CDDM_merge_verts, used by the array modifier, that implements a different rule for dropping a poly.
  • a translation of doubles mapping when count is >2 and when there is no scaling.

There are now 2 modes in CDDM_merge_verts, specified by caller:
In mode CDDM_MERGE_VERTS_MIRROR: a poly is dropped if all of its vertices are mapped.
In mode CDDM_MERGE_VERTS_ARRAY: a poly is dropped if all of its vertices are mapped AND the target vertices already make up another poly.

It is still a test implementation, since it will still call the old BMesh array modifier for odd counts. And it has a few printf calls remaining, on purpose, both for performance evaluation and to assess how efficient the poly comparison is.

@Campbell Barton (campbellbarton): I follow up on the latest patch's comment.

The latest release is super fast now. On this test file (F84638), based on a Suzanne with Subsurf at level 2, thus about 80k vertices, I get the following benchmark:

The Suzannes are overlapping,

  1. With Merge ON, distance is 0.1

1.1) No Caps, Count=50
Old array modifier: 3.53 seconds
New array modifier: 0.24 seconds, ratio is x15

1.2) With Caps (same Suzanne used as cap start & end):
Old array modifier: 6.76 seconds
New array modifier: 0.27 seconds, ratio is x25

The behaviour of the old modifier with caps is abnormally slow; this has been reported recently.

  2. With Merge OFF,

2.1) No Caps, Count=50
Old array modifier: 3.11 seconds
New array modifier: 0.028 seconds, ratio is x110

2.2) With Caps (same Suzanne used as cap start & end):
Old array modifier: 3.13 seconds
New array modifier: 0.025 seconds, ratio is x125

Now I do have some afterthoughts that I wish to share here.

  1. Comparing 2 polys with an arbitrary number of loops, one of them being mapped, is very complex. I strove to make it clean and readable, but it is still complex and thus not very satisfactory. Has this need been met and handled before?
  2. So as to limit the number of full poly comparisons, I am using a GHash for first-level filtering. The GHash uses MPolys as keys; the hash function is the sum of all vertex indices, and the compare function additionally compares a XOR of all vertices. I did not find anything better, since 2 polys can be identical with different starting loops and different directions. Still, this first-level filtering of identical polys is quite effective. In the case of the Suzanne sample file, I get:
     • processed polys: 425 088
     • polys with all vertices merged: 27 083
     • polys that matched in the GHash: 1585
     • polys that matched the complete comparison: 1364

So this GHash filtering avoided about 95% of the full poly compares.

All of this is pretty well optimized, but it is complex. So the question is: is this level of optimization worth all the added code, which of course comes with possible added bugs?

Patrice Bertrand (PatB) updated this revision to Unknown Object (????).Apr 12 2014, 2:24 PM

Was calling wrong function to free GHash. Corrected.

@Patrice Bertrand (PatB)

Checked the new patch and it works very well, though I didn't go over the new merge code line-by-line to validate it.

Only one concern with the changes, that is with CDDM_merge_verts,

I'd rather not have CDDM_merge_verts accept args tied to each modifier; instead, pass in an argument to define which cases to test for.

For example, it could take a flag. Rather than naming flags ARRAY or MIRROR, name them based on what they do; comments can be added within the enum { ... } noting each flag's typical usage.

This makes the function a bit more reusable, and we can add more flags if needed, without having some other part of the code call CDDM_merge_verts with the ARRAY arg just because that happens to match what it requires at the time.

Apart from this minor stuff, it seems good for inclusion in Blender.

Included a cleanup patch:

  • use camel case for struct name.
  • no need to pass dm to cddm_poly_compare
  • style.

Cleaned up diff:

Diff ontop of your diff (only so you can see what changed):

@Campbell Barton (campbellbarton):

Thanks. I'm glad you find it OK, and thrilled to get my first contribution into Blender. Maybe second, if the noise/displace makes it!

I will try to get the minor style things right on my next job.

You are right that CDDM_merge_verts should not take the caller's name as an argument but something more meaningful instead. It could be CDDM_MERGE_VERTS_DUMP_IF_MAPPED and CDDM_MERGE_VERTS_DUMP_IF_EQUAL. It's a bit long, but quite meaningful.
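Sketched as an enum, the interface could look like this (using the names proposed above; the actually committed flags may well differ):

```c
#include <assert.h>

/* Hypothetical flag names based on the discussion; named for behavior,
 * not for the calling modifier, so they stay reusable. */
enum {
	/* drop a poly when all of its vertices are mapped, no matter to
	 * what the targets form (typical usage: Mirror modifier) */
	CDDM_MERGE_VERTS_DUMP_IF_MAPPED = (1 << 0),
	/* drop a poly only when all of its vertices are mapped AND the
	 * target vertices already form an identical poly (typical usage:
	 * Array modifier) */
	CDDM_MERGE_VERTS_DUMP_IF_EQUAL = (1 << 1),
};
```

Comments inside the enum then document each flag's typical caller without hard-wiring the function to any one modifier.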

The last thing would be to remove the old_if_odd_count / new_if_even_count switch and the one remaining printf, and it's ready to go!

One additional minor point: do you agree that

dist3 = 3 * dist;

Could be replaced by:

dist3 = 1.733 * dist;

Where 1.733 is > sqrt(3).

Indeed, if sqrt(x^2 + y^2 + z^2) < d, then the greatest possible value of (x + y + z) is, by symmetry, reached when x = y = z; there the norm is sqrt(3 * x^2) = sqrt(3) * |x|, so the maximum of (x + y + z) = 3x is sqrt(3) * d.

It won't have a big impact on performance, but it is mathematically valid, and could be used in other remove-doubles code.
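The same bound follows directly from the Cauchy-Schwarz inequality with u = (1, 1, 1) and v = (x, y, z):

```latex
|x + y + z| = |u \cdot v|
            \le \lVert u \rVert \, \lVert v \rVert
            = \sqrt{3}\,\sqrt{x^2 + y^2 + z^2}
            < \sqrt{3}\, d
```

with equality approached exactly when x = y = z, matching the symmetry argument.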

@Patrice Bertrand (PatB)

I'd rather just leave it at 3 for now, and prepare the patch for inclusion in master (after the 2.71 release).

Then we can check on other changes.

Patrice Bertrand (PatB) updated this revision to Unknown Object (????).May 8 2014, 8:44 PM

Array modifier without bmesh.
This release finally replaces the old implementation with the new one (previous patches had both side by side).
It also removes a few leftover traces and time probes.
And corrects one remaining bug when releasing memory in CDDM_merge_verts.
Should be ready for trunk.
Refer to previous patch comments for detailed performance improvements.

@Campbell Barton (campbellbarton): oops! I just realized I forgot your cleanup patch of April 24th. Sorry. I will integrate it and post a new patch quickly.

Patrice Bertrand (PatB) updated this revision to Unknown Object (????).May 11 2014, 11:17 AM

Integrates Campbell's code style cleanup of April 24th.

Patrice Bertrand (PatB) updated this revision to Unknown Object (????).May 16 2014, 4:10 PM

This is a final optimization patch, diffed only against the previous patch for clarity.
3 lines have changed:

(1) Using sqrt(3) instead of 3 for filtering double candidates
(2) Using precalculated sum of coordinates instead of adding again.

Using sqrt(3), i.e. 1.7321, instead of 3 seemed like a very minor optimization, but running the MSVC12 profiler showed that this distance comparison was the single most executed line of code in the array modifier with merge. This little change gave a 20% improvement in performance.

The other point is more straightforward: since sum_co = x + y + z has already been calculated, there is no need to recompute it; just use the calculated value.

Patrice Bertrand (PatB) updated this revision to Unknown Object (????).May 19 2014, 10:57 AM

Yet another improvement of the array modifier, related to merging.
It is a significant change, with a high impact (x10 and more) on very large meshes.
Note that this diff is experimental; it has a few traces left in on purpose.
It will use the new algorithm if G.debug_value == 1001, and the old algorithm otherwise, so that performance and results can easily be compared.

Merging (or doubles detection) in Blender uses an algorithm based on sorting vertices according to their sum of coordinates, s = x + y + z.
That algorithm seems clever, but actually the filter it provides is not very good, and the number of comparisons explodes with N, the number of vertices.

This revision implements a totally different algorithm.
It is based on the notion of axis-aligned micro-cubes with side dist, the merge limit.
Each vertex V(x, y, z) can be assigned to the micro-cube that starts at

xd = (int)floor(x / dist);
yd = (int)floor(y / dist);
zd = (int)floor(z / dist);

In a first phase, we go through all target vertices, and each vertex is inserted into a GHash, with the coordinates of its micro-cube as a Key. Several items may share the same Key, and the GHash allows duplicates.

In the second phase, for each source vertex we scan the 3x3x3 = 27 neighbouring micro-cubes, which is relatively fast with the GHash. When a micro-cube is found, we scan the items in the GHash that share this same key.

Note that doing this requires a new feature in ghash.c: a BLI_ghashIterator variant that allows scanning the items that share a given key. Actually it is strange that this did not exist already, since it is a natural feature of a hash table.

This algorithm has been found to detect exactly the same number of doubles as the previous one, although in a few cases they may be different (because all such algorithms pick the first matching double).

For meshes of over 100k vertices, the difference in performance is x3.
For meshes of over 500k vertices, the difference starts exploding, to over x10.

For smaller meshes, it may actually be slower than the previous one. But it is obviously when things get hard that we need the very best performance, not when it's the difference between 0.001 seconds and 0.003 seconds.

Sample performance

With dist3 = 3 * dist
v1: 0.078, 0.086, 0.073, 0.086 - compared: 4 378 816
v2: 0.032, 0.036, 0.033, 0.037 - compared: 23 983

With dist3 = sqrt(3) * dist
v1: 0.052, 0.042, 0.056, 0.045 - compared: 2 437 606
v2: 0.032, 0.036, 0.031, 0.033 - compared: 23 983

With another subsurf, 39.6 kV
With dist3 = sqrt(3) * dist
v1: 0.72, 0.72, 0.71 - compared: 3 382 689; match 2533
v2: 0.14, 0.14, 0.15 - compared: 319 899; match 2533

With yet another subsurf, 134.2 kV
v1: 11 seconds - compared 609 M; match 10180
v2: 0.85 s - compared 4 750 670; match 10180

Yet another Suzanne. 141 kV
v1: 0.28 seconds - compared 6 M: match 33
v2: 0.78 seconds - compared 80 <--- MORE

Other distance 0.01:
v1: 1.3 seconds - compared 64 M: match 964
v2: 0.60 seconds - compared 964

Other distance 0.02:
v1: 2.42 seconds - compared 172 M: match 1945
v2: 0.55 seconds - compared 47 K: match 1945


Cylinder 384 V
v1: 0.001 seconds - compared 607: match 96
v2: 0.0036 seconds - compared 96: match 96

Hey, great work!
The performance improvements look nice.

I wonder if you also plan to look into other changes here, like making the Array Modifier create instances. That would be really nice. Sorry for the off-topic, but I thought I'd mention this in case it is related to this refactor.

@Thomas Dinges (dingto) What do you mean by "make the Array Modifier create instances"? Like what happens when you apply a modifier?

I hear the big thing that could happen one day with respect to modifiers is having them node-based. Although in most cases the modifier stack being sort of linear (one after the other) is fine. But with a node-based scheme, you could for example have an array node, then a separate-loose-parts node, then a select-random-part node that picks one loose part, then another modifier applied to the selected loose parts. Or another use case would be a duplicate node with outputs A and B, different modifiers applied to each output, then a Join node. Or different displace-only modifiers merged into a mix node...

My latest diff related to the new merging is maybe a little far-fetched, and still experimental. I wouldn't want it to delay the rest, which I believe is ready for the trunk.

Another word on the present algorithm based on filtering on the sum of coordinates: my tests show that the number of actual distance comparisons is linear in the window used in the sum compare, so that replacing 3*dist with sqrt(3)*dist divides the number of comparisons by almost 2. And the profiler says this actual distance comparison is the single most executed line in the merge.

@Patrice Bertrand (PatB). are you available to update the patch and prepare for inclusion in master?

@Patrice Bertrand (PatB): As far as I know, when you use the Array Modifier at the moment, the results are not instanced for the render engines. So if you take the default cube and add an array modifier, you then have N cubes, but they are all separate meshes, instead of 1 instanced mesh, which would save memory at render time. @Campbell Barton (campbellbarton) probably knows this better. :)

@Thomas Dinges (dingto): I don't know what you mean, can you explain some more? It seems to me the array modifier, in its new implementation, does the same as all modifiers do: it takes a derived mesh as input and produces a derived mesh as output.

@Thomas Dinges (dingto), the array modifier is a Generate modifier; its purpose is to generate geometry that can be taken into account by other modifiers.

When Brecht talked about removing the old dupliverts code, we argued against removing it without a feature to replace it.
Instance distributions are managed by particles or dupliverts.

Of course, a tool like the array modifier to manage instances would be interesting and easy to understand.
In fact, I already often use dupliverts with a duplivert parent mesh made with the array modifier.

But is the modifier stack a good place for tools about instances?

I guess you know what instancing means? :)

When you have 4 spheres duplicated with ALT+D, the objects share the same mesh datablock; the render engine will then instance the spheres, and we don't waste memory.
If you use the Array Modifier to duplicate the sphere, the result is not instanced. I don't know how to simplify this more.

Modifiers don't support instancing, if someone plans to work on this rather discuss elsewhere.

@Patrice Bertrand (PatB), the patch doesn't apply to master; would you be able to update it and double-check it works correctly? (Also remove/comment the printf's and generally make it ready to commit.)

Patrice Bertrand (PatB) updated this revision to Unknown Object (????).Jun 30 2014, 9:40 AM

Array without BMesh in its (hopefully) final release. Rebased on master. Also fixes 3 remaining bugs:

  • For "fit length" count, the length was not initialized.
  • Merge was last-onto-first; now changed to first-onto-last.
  • Vertex flags are set to zero when adding cap objects (this gave weird results, in edit mode in particular).

I have one question related to which derived mesh should be used for cap objects.
The current (BMesh) array modifier takes the cap object WITH its modifiers applied.
I did not find the function that will give me this.
I was previously using get_dm_for_modifier(), which gave nothing when the cap object had modifiers: add a subsurf to the cap object, and the array cap would vanish.
I am now using mesh_get_derived_deform(), which does not make the cap object disappear when it has modifiers, but does not apply the modifiers.


Committed temp-array-modifier branch rB6647a3517e7865c6936d81d084630ffb9811612f

To finalize and test the patch, it's less trouble than commandeering the patch off each other:

I'd like to give you commit access to Blender. I think it would be good if you could finalize the patch in the branch and then commit to master.

Since you are working on this modifier, wouldn't it be a good moment to add a randomization parameter to each transform channel? I once saw an addon with this trivial yet useful feature.

Campbell Barton (campbellbarton) edited edge metadata.

Committed rBab06ec7a24821bd0ee968e72e28c0d9298c68b7d

With some edits

  • replace ghash with gset
  • set orig-index of cap geometry to NONE
  • get capping meshes using same method as in master
  • some cleanup, function renaming
This revision is now accepted and ready to land.Aug 12 2014, 5:58 AM

@Patrice Bertrand (PatB), just wanted to give a quick feedback:
We're now able to create huge scenes that we were not able to create before, because we needed so much geometry it always crashed Blender; now we don't even feel that the array mod is working. This is outstanding and much, much appreciated! Great work, thanks so much.

I'm personally still hoping to have the modifier stack rewritten soon to get your other modifier idea accepted; maybe @Campbell Barton (campbellbarton) can put this on the table at the next dev talk.

Thanks a lot for your feedback @Thomas Beck (plasmasolutions), this is really a heartwarming reward.

I do hope all the work on the noise modifier will not be trashed; I am convinced it brings useful features, randomness being key to realism, and many expressed interest.

As for the much-discussed node-based modifier design, it would probably start with a design document giving the overall purpose and principles. I would be very interested to read how knowledgeable developers see it, and would be happy to participate.

As I wrote earlier, I don't understand why things like a noise/displace modifier should be kept on hold waiting for a global modifier overhaul.

There is also one interesting sideline development that I believe deserves some more thought, although I did not retain it in the final array modifier. It's a totally new algorithm for finding doubles within a mesh, which gives huge performance gains for very heavy meshes (see the comment above for details). This also made me think that "remove doubles" could be a modifier all by itself. It would not remove the need for a "merge" option within other modifiers such as mirror or array, because checking for merges in a specific use case can be optimized better (such as translating the merge mapping, as we do in the array modifier). But there are cases where the possibility of stacking modifiers and finally removing doubles would be very useful. The "remove doubles" modifier would be very easy to implement now, since we have all the algorithms ready after the work on the array modifier merge. If it were developed, it could, and should, use my proposed algorithm whenever the vertex count is above some threshold, around 50k. But I'm digressing here.

I just discovered this patch, and I want to thank you a lot, @Patrice Bertrand (PatB).
As @Plasmasolution said, this is outstanding and super useful for my everyday work. Great job!

About remove doubles modifier:
As we are in place for it, I take Array modifier as example.
Merging array ends after a curve deform on a circle is one such problematic case.

Here you can see arrayed cubes (8 parts, faces along the curve deleted) following a nurbs circle (stretch + bounds clamp) to build an animation-flexible mesh ring.
This shouldn't just show a simple tube; rather, it serves as an example to demonstrate the case for all geometry.
Because the curve modifier has to be below the array modifier in the stack, there isn't a way to close the mesh without applying the modifiers, which means losing control for animation (curve) and detail for the mesh (array).

A supplementary merge/remove doubles modifier would fit here very well.

Sorry for posting a little off-topic. (Good idea! @Patrice Bertrand (PatB))

@Campbell Barton (campbellbarton): do you support the idea of a dedicated "merge doubles" modifier? It would fit the general idea that whatever can be done as a non-reversible operation should also be achievable through a modifier. If OK, I can deliver this within a week or two.

@Patrice Bertrand (PatB), re: remove-doubles modifier: Blender modifiers almost never make doubles.

(Edge Split is an exception, but in that case it makes no sense... so split, then remove.)

@Karja Krähwald (karja)'s example is a good one. In that case we could have a remove-doubles modifier that only takes boundary verts into account.

Are there any other examples where you might use a remove-doubles modifier? I'd like to check on possible uses first.

Campbell, this would also be useful when using the solidify modifier and you need some areas of the object to have zero thickness (right now you can't do that, as otherwise you get Z-fighting and shading errors).

Otherwise the scene might require more objects, and sometimes it is useful to have one big object for some scene elements as opposed to a bunch of smaller ones.

Note that although we call it "remove doubles" (as in "problem"), the operation is really "remove near vertices". In that sense, even though few modifiers actually create doubles, many could generate very close vertices. The array modifier would do this simply when using scaling (through a scaled empty object): each instance gets smaller and smaller, and vertices get nearer. The shrinkwrap modifier could also quite often yield vertices that are very close to each other and could be simplified by a "remove doubles".

Maybe a "limited dissolve" modifier could be useful too. The idea in both cases being to simplify the final result of modifiers.

Is there a reason why this differential is still open (apart from talking about new ideas ;))? Since it was committed, I guess we can close it (in that case @Patrice Bertrand (PatB), simply 'abandon' it).

I will prepare a few examples where I think it's useful, but it will take 1 or 2 days.

This revision now requires review to proceed.Sep 7 2014, 4:10 AM

I have experimented with some ideas.
The main purpose I have found is in operations where you want to

  • tear
  • pluck
  • peel
  • break
  • (and so on)

geometry, and of course the other way around.

The benefits I see are especially for animation and deformation on smooth meshes. With a "merge" modifier you could assign overlapping edges to 2 or more different vertex groups and move them apart with bones or curves.

A rough example: the crack (or whatever) can be handled very well by Edge Split, but I don't know a good way to go any further.
My idea for a "merge" modifier is to create two or more parts with overlapping verts and edges, and hide the dividing line with the modifier.
Moving the parts beyond the merge limit splits the mesh.
Fracture could gain an extra layer when combined with Edge Split.

A merge option for mesh object + mesh object (like a target object) could handle more; Linked Duplicates with Mask would provide more fragments, so you have only one mesh to work on.
It also comes into play where two sets of modifiers should not affect each other (Solidify always takes the whole mesh) and 2 objects are needed, but the result should appear as one connected object.
A little like the Array cap settings, without modifier dependencies. I don't know if this is feasible.

This can also be done with Array merge, but that doesn't allow animating separate banana peels, or texturing it as a whole.
The last one is the Skin modifier (maybe a flower would fit better). Skin should generally not be bad to use, because you can set higher merge limits and stick things together however you like.
It's also very organic, a bit like metaballs, when combined with a "merge" modifier + Subsurf.
But I think the radius and root after the merge operation could be a hot spot.

@Karja Krähwald (karja)
Added: T41748, please move discussion on remove-doubles modifier there