- User Since
- Feb 6 2014, 10:25 AM (258 w, 2 d)
Sat, Jan 12
I'm the one who started this thread, and as @Rainer Trummer (aliasguru) says, I am not a Blender developer anymore. Back in 2014, it seemed any work on new modifiers was on hold because a paradigm shift towards node-based modifiers was envisaged. I had built a 'randomizer' modifier that was dropped for that reason.
Aug 19 2015
@Aaron Carlisle (Blendify), it seems there is an ongoing debate as to which multithreading technology is best for Blender, and until it is settled, little work is undertaken in this field. Yet, with CPU improvements now being almost entirely focused on packing in more cores, and considering that many operations in Blender come down to applying the same algorithm to millions of vertices, this does look like a huge performance opportunity, as simple experiments on modifiers have indeed shown. But I have learned that working on anything that is not 100% approved is likely to be a waste of time, so it seems to me the first step would be for Blender to make an official choice of multithreading technology.
Jan 13 2015
It's OK, I did get the re-engineering of the array modifier into 2.72, so I will have left a tiny trace in the Blender source code.
Jan 12 2015
Dec 22 2014
I don't think working on loose parts was the actual blocking factor. Determining loose parts is actually pretty fast, and I had implemented caching of loose parts, so it was not a performance issue. The very notion of loose parts may have been a workaround for finding the original object (in most cases) after array modifiers have been applied. But actually loose parts do have a topological meaning of their own, and so it is not strange or dirty to have something act on loose parts. Many people had suggested that other modifiers could benefit from some loose parts setting.
Dec 11 2014
Nov 5 2014
Got it!
After further investigations, here's where I am:
The issue relates to array caps with the merge option, when the caps have an active Subdivision Surface.
is in the new implementation. So maybe the "broken" indication is wrong?
I will look at it.
Oct 2 2014
If there were a "remove doubles" modifier, then it would be simple to add settings based on vertex groups: (1) limit the merge to a vertex group, but also (2) limit the merge from a given source group onto a given target group.
I believe in lightweight modifiers with limited features, used like Lego bricks. If and when some node-based modifier scheme is introduced, we would probably use more atomic feature nodes assembled at will. When that comes, it is likely the current modifiers will not be trashed, but simply ported to nodes. In that perspective, it does make sense to introduce such light or atomic modifiers, to be turned into nodes later.
Sep 23 2014
Thank you. I will try the workaround.
Just so I can subscribe and follow up, could you please give me the bug reference?
Sep 9 2014
It's interesting to see in these examples that the effect of "remove doubles" may be useful not only after other modifiers have acted (after in terms of time, not of stack order), but also before: that is, mesh parts start out glued and merged, then various modifiers tear them apart, the double vertices get separated, and the "doubles modifier" ceases to operate.
Sep 8 2014
Sep 6 2014
Note that although we call it "remove doubles" (as in "problem"), the operation is really "remove nearby vertices". In that sense, even though few modifiers would actually create exact doubles, many could generate very close vertices. The array modifier does this simply when using scaling (through a scaled empty object): each instance gets smaller and smaller, and vertices get nearer. The shrinkwrap modifier could also quite often yield vertices that are very close to each other and could be simplified by a "remove doubles".
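As a toy illustration of why a scaled array eventually produces near-doubles (all the numbers here are made up, not taken from any actual scene):

```python
# With a per-instance scale factor s < 1, instance n of an edge of length L
# has length L * s**n. Once that drops below the merge threshold, the edge's
# endpoints qualify as "doubles" for a remove-doubles pass.
L, s, threshold = 1.0, 0.5, 0.01
n = 0
while L * s**n > threshold:
    n += 1
# n is now the first instance whose edge length is within the merge threshold
```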
Sep 5 2014
@Campbell Barton (campbellbarton): do you support the idea of a dedicated "merge doubles" modifier? It would fit the general idea that whatever can be done as a non-reversible operation should also be achievable through a modifier. If OK, I can deliver this within a week or two.
Aug 13 2014
Thanks a lot for your feedback @Thomas Beck (plasmasolutions), this is really a heartwarming reward.
Jul 29 2014
@Matt Outlaw (outlaw3d): note that the suggested compositing does not use the "shadow pass", just the regular image output. Subtracting the image with shadows from the image without shadows (actually, without the objects that cast the shadows) has the effect that everything unchanged by shadows cancels out, leaving the shadows isolated.
This assumes that you can distinguish between the shadow-casting objects and the shadow-receiving objects, using separate layers. It is definitely just a workaround, not a long term solution.
It seems to me that one could get the same result through compositing: one layer "WithShadows", another layer "WithoutShadows" from which the shadow-casting object is excluded, then a Subtract node in the compositor to isolate what the shadows have altered. Wouldn't that work?
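The subtraction idea can be illustrated on raw pixel values; here NumPy stands in for the compositor's Subtract node, and the two arrays are made-up miniature renders, not output from Blender:

```python
import numpy as np

# Two hypothetical 2x2 grayscale renders of the same scene:
# one including the shadow cast by the extra object, one excluding that object.
with_shadows    = np.array([[0.75, 0.75],
                            [0.25, 0.75]])   # 0.25 = the shadowed pixel
without_shadows = np.full((2, 2), 0.75)

# The Subtract step: everything unchanged by the shadow cancels to zero,
# leaving only the shadow's darkening contribution.
shadow_only = without_shadows - with_shadows
```

Every pixel of `shadow_only` is zero except the shadowed one, which is exactly the amount the shadow darkened it.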
The nodal modifier design seems to be the holy grail. And yet it could take years to arrive, and we should not, in the meantime, give up on interesting features just because they would fit so much better with nodes. I think new features can be added within the current model when relevant, and may then be replaced by their nodal equivalents in the future.
Jul 28 2014
@Thomas Beck (plasmasolutions), @Bastien Montagne (mont29): Hello Thomas. I would love to see it in 2.72 as well. The last time I worked on it was to merge it into the displace modifier. In April, the revision was commandeered by Bastien. He did some cleanup and it seemed ready for release. Maybe Bastien can tell whether it's possible to get it into 2.72.
May 21 2014
May 20 2014
For those who may be interested, I'd like to say a few words on the merge-doubles algorithm based on the sum of the x, y, z coordinates. I picked it up from the remove-doubles operator and tried to improve it very slightly. But this algorithm is actually very poor. When I first saw it, I thought: "How clever, the way it manages to fold all 3 coordinates into a single one-dimensional array, which is sorted, then processed in order." But this transformation of a complex 3-dimensional problem into a simple 1-dimensional problem is really a complete illusion: it brings very little gain compared to an algorithm that would simply process vertices based on just their x coordinate and, for each x, scan all vertices with x within [x-d; x+d].

Think of it this way, in 2 dimensions for a start: imagine all vertices are random points within a square of size DxD. Algorithm A1 sorts by the x coordinate only, and compares each source vertex against all vertices within [x-d; x+d]. Algorithm A2 sorts by s = x+y and compares each vertex against all vertices whose sum lies within [s-2d; s+2d], or at best [s-sqrt(2)d; s+sqrt(2)d]. The average number of vertices to be scanned is, in both cases, proportional to N x d / D. The only difference is that instead of scanning vertical strips, we are scanning diagonal strips around the line of equation y = s0 - x.
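The x-only sweep (algorithm A1 above) can be sketched as follows. This is an illustrative toy, not the actual modifier code: the function name `merge_doubles_a1`, the plain tuple vertices, and the merge-into-first-survivor policy are all assumptions for the sake of the sketch.

```python
import math

def merge_doubles_a1(verts, d):
    """A1 sketch: sort vertices by x, then for each surviving vertex compare
    only against later vertices whose x lies within [x, x + d] of it.
    Returns a list mapping each vertex index to the index it merges into."""
    order = sorted(range(len(verts)), key=lambda i: verts[i][0])
    target = list(range(len(verts)))           # each vertex starts unmerged
    for a, i in enumerate(order):
        if target[i] != i:
            continue                            # already merged away
        xi = verts[i][0]
        for j in order[a + 1:]:
            if verts[j][0] - xi > d:
                break                           # left the x-strip: stop scanning
            if target[j] == j and math.dist(verts[i], verts[j]) <= d:
                target[j] = i                   # merge j into i
    return target
```

Sorting once and breaking out of the inner loop as soon as a vertex leaves the strip is what keeps the average work proportional to the strip population, exactly as in the argument above.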
May 19 2014
May 6 2014
Thank you for your analysis. Still, it seems strange to call this anything other than a bug, particularly since it worked fine before. I do agree it is not a major problem: it is not very often that CPU and GPU are used together and must match. I do think, however, that using shrinkwrap with a very small offset is pretty commonplace.
Apr 29 2014
Apr 25 2014
Not exactly a duplicate, since T39761 states that the Cycles render in the viewport is wrong while it is correct in a real render. In my case, it is the full render that differs.
Apr 24 2014
I tested the file on 2.68 RC1, and the 2 renders are identical, so something happened between 2.68 and 2.70; I don't have many versions at hand to try.
My specs: Windows 64-bit, GeForce GTX 560 Ti.
Apr 23 2014
Apr 11 2014
Version 2.67 gives the same results.
I ran some tests, taking a Suzanne with Subsurf at level 2, thus 7,958 verts per item. Another identical Suzanne is used for capping.
Apr 5 2014
Apr 4 2014
Apr 3 2014
Apr 2 2014
Mar 30 2014
Mar 29 2014
Mar 28 2014
I created diff D433 with twenty-odd tests for the Vector functions.
Mar 27 2014
I'll take this one.
Mar 23 2014
Mar 21 2014
Mar 18 2014
Mar 11 2014
Mar 10 2014
Mar 8 2014
I gave it a try and have pinpointed the bug, but only partially solved it. I tried to upload the patch but ended up creating a new task; sorry, I haven't mastered the process. See the comments associated with task T39056, which I copy below:
Mar 7 2014
Mar 6 2014
Feb 17 2014
Feb 15 2014
I have chosen to implement this as a "noise modifier", for which I just submitted the patch. So this is closed.
Feb 11 2014
Thank you for your feedback.