I've been experimenting with the metaballs code. I managed to improve polygonization times for scenes with many metaballs from slow as hell to just barely slow, without sacrificing too much accuracy. Example result: 40k metaballs at resolution 0.05 (resolution is relative, hence the attached image for scale) in ~12 s on my Phenom II 955. Measured with a stopwatch, so don't quote me on this.
Changes to make this possible include:
- Replaced the octree with a binary BVH built top-down, splitting at the spatial median (the midpoint of the longest axis).
- Surface normals are now calculated using angle-weighted accumulation.
- Finding the first points on the surface no longer "wastes" density-function evaluations: every result is cached. It also finds at most 6 starting points.
- Converge procedure: better surface approximation through multiple linear approximations.
- Switched from new_pgn_element to MemArena for allocations.
- Proper calculation of the metaelem bounding box.
- Removed the mball_count procedure: metaballs are now counted on the fly.
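To illustrate the first point, here is a minimal sketch of a top-down spatial-median split: partition the metaballs around the midpoint of the longest axis of the node's bounding box, then recurse on the two halves. The names (`MetaBall`, `bvh_partition`) are illustrative only, not Blender's actual API.

```c
#include <assert.h>

/* Illustrative stand-in for a metaball element: position + radius. */
typedef struct { float co[3]; float rad; } MetaBall;

/* Returns the axis (0, 1, 2) on which the bounds are widest. */
static int longest_axis(const float min[3], const float max[3])
{
    int axis = 0;
    float best = max[0] - min[0];
    for (int i = 1; i < 3; i++) {
        float len = max[i] - min[i];
        if (len > best) { best = len; axis = i; }
    }
    return axis;
}

/* Partition balls[0..n) so that those on the low side of the spatial
 * median (midpoint of the longest axis) come first; returns the split
 * index. The two BVH children then cover balls[0..split) and
 * balls[split..n). */
static int bvh_partition(MetaBall *balls, int n,
                         const float min[3], const float max[3])
{
    int axis = longest_axis(min, max);
    float mid = 0.5f * (min[axis] + max[axis]);
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        if (balls[lo].co[axis] < mid) {
            lo++;
        } else {
            MetaBall tmp = balls[lo];
            balls[lo] = balls[hi];
            balls[hi] = tmp;
            hi--;
        }
    }
    return lo;
}
```

Unlike an octree, this always yields exactly two children per node and adapts to the actual spread of the balls, which is where most of the speedup with many metaballs comes from.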
Wow, that is a lot of changes... By the way, sorry for all the technical talk. I started working on this for myself, just to see how fast I could polygonize those balls; Blender was my sandbox, and it turned into this. I actually tried a lot of different acceleration structures along the way. I'm looking for feedback (does the surface look good? what performance do you get?). Is there a chance of getting my code (or part of it, possibly after some adjustments) into Blender? Note that metaballs are still somewhat broken. I'd be happy to work on them further, but that involves deeper changes than these.
Here is the patch:
PS: I moved the global functions in mball.c to the bottom of the file since I didn't change them (mball.c is also the only file changed), which is why the diff looks terrible.
PPS: I'm totally new to making diffs and to contributing to an open-source project (to any project, to be exact). If I f***ed something up, sorry. I don't even know if I'm posting in the right section.