Sat, Mar 11
OK. So. Managed to reproduce this. It does appear to be connected to the Fluid boundary -> Surface -> Smoothing option, but it only appears at higher resolutions, which makes it a pain to debug.
Thu, Mar 9
Could this be an effect of the "Remove air bubbles" option (under "Fluid boundary")?
Apr 15 2015
@Chad Gleason (fahr), the patch wasn't included in the official 2.74 release (it was committed only 6 days before release...), but it is in the newest build from buildbot (https://builder.blender.org/download/) and everything should work fine there :)
Apr 3 2015
- Yes, it does help, and it probably was done like this at some point (typecasting to double all over the place), but from the point of view of numerical stability this is much better anyway. I don't know how extreme the conditions would have to be, but I can imagine the problem showing up with doubles too (switching to double just postpones the issue until there is more powerful hardware to do very large bakes). We can change to double anyway if you think so, but I bet it's not necessary...
- For some reason this function is used to compute the barycentric UV for each triangle in the bake. With long triangles and a high bake resolution, the errors add up and the UVs are not distributed uniformly, hence the tearing (see the bug).
Apr 1 2015
Hey, I can confirm this bug.
- I have Win7 64, SP1, Nvidia GTX 560Ti, driver 347.88 (17 March).
- Latest buildbot, and my own build from latest master both have this.
Mar 31 2015
@Thomas Beck (plasmasolutions) - Yeah, I opened this exact file, and also made my own test (created a hair particle system, assigned a texture to density, even without a custom material). It was strange: when I made my own test it also didn't happen a few times, and now it happens every time. Maybe there are even more specific conditions?
@Thomas Beck (plasmasolutions), that's weird... I have this bug with the latest build. Win7 64. Are you on a different platform? Also, the conditions are pretty specific. It doesn't occur with child particles enabled, for example. And when setting the density influence to something other than 1.0, it occurs only after the second on/off switch.
Mar 29 2015
Hey guys, @marc dion (marcclintdion) was partly right (it's about floating-point precision). Took a while to find and fix it though... I hope I've done it right (D1207) :) Any idea who could do the review?
Mar 6 2015
OK, a quick update on what I've been doing the last couple of days:
Mar 2 2015
- The idea behind find_first_points() was to evaluate the function only at "lattice points" so that the results can be cached in the hash table. The function evaluation (bvh traversal and the actual densfunc()) takes 95% of the time in polygonization, so any gain made in this area contributes a lot to speed. But, to be honest, I just coded it this way out of laziness, temporarily :-) I could make a version with 26 directions and check how it behaves. If all values are cached, the speed shouldn't be much worse, and surface accuracy is the number 1 priority after all.
- I don't exactly know what you mean by caching the surface. Updating only the part of the surface that was influenced by a moving metaball, or something else? This could be a serious improvement, but not in the case where all metaballs are moving (e.g. a fluid simulation). Could you explain it more?
- Sorry, but I don't have the energy to look at these papers right now (very tiring day at uni, a lot of science done :-)). They look very interesting at first glance. Will check them out in detail later.
Feb 26 2015
Eh, of course I screwed up the last version (a bug in makecubetable, I didn't copy everything properly). Here is the fixed diff:
Yes. This should work fine:
No functional changes made.
Feb 25 2015
Hey, I made some improvements to converge(). Basically I went back to the blend of the bisection method and linear interpolation from the original algorithm. Metaballs should look good now at every threshold and stiffness. Removed the threshold cap. Still, this would need some tweaks to be optimal, but it's very hard to test every combination of threshold, stiffness and size.
Feb 19 2015
OK. Here is the patch with normals from meta:
Feb 16 2015
Cool! Sorry for reordering stuff. As for the normals - I've actually made a quick comparison, have a look:
This is some "benchmark" fluid sim I made for timing the algorithm - 5000 particles towards the end of the simulation. "Avg. time" is the average time in milliseconds over 250 frames, only bvh build + tessellation, measured using PIL_check_seconds_timer() on my machine. As can be seen, there definitely is a speed increase (of around 30%) when using angle-weighted accumulation. However, as you said, the surface looks less smooth - this is especially evident on the ball in the middle-lower picture, which has some strange features. I think the cause is that very thin/small polygons are sometimes created during tessellation, which messes things up. Ways I could think of to make it right:
- stick with one of the ways to calculate normals or think of even a better one (area-weighted? non-weighted?)
- make a "fast normal" switch in the metaball panel
- run the resulting surface through something like "edge collapse" modifier, which would get rid of thin polygons and maybe make the surface look a little bit better
Maybe I'll try the third option (in a few days), to see what its speed impact would be. Definitely looking forward to hearing from you!