
Performance: mesh batch cache creation
Open, Normal, Public

"Love" token, awarded by Krayzmond."Love" token, awarded by MrMargaretScratcher."Love" token, awarded by hadrien."Love" token, awarded by Zen_YS."Love" token, awarded by Alrob.
Assigned To
Authored By


This is part of the performance regression investigation task.

Creation of batch caches for meshes is much more involved than drawing from DerivedMesh was. There is no single hotspot. Here is a breakdown of what pops up in profiles.

*_has_visible_e* functions. While it is true that they return as soon as the first visible element is found, those functions do a lot of indirect memory accesses, which hurts cache coherency. Guess this is why they are quite visible in the profiler.

mesh_render_data_edge_flag. Think this is similar to the case above, but it also does things like BM_edge_in_face(), which runs a few iterations for every edge.
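As a rough illustration of that per-edge membership query, here is a simplified sketch (the real BM_edge_in_face() walks BMesh's radial loop cycle; a plain array of face pointers stands in for it here, and the struct names are illustrative only): only a few iterations per call, but the call happens for every edge, so the pointer chasing adds up.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins for BMesh elements; not Blender's real types. */
typedef struct Face { int index; } Face;

typedef struct Edge {
  Face *faces[2]; /* faces using this edge; NULL if slot unused */
} Edge;

/* Check whether an edge belongs to a given face by walking the faces
 * stored around the edge, mimicking a BM_edge_in_face()-style query. */
static bool edge_in_face(const Edge *e, const Face *f)
{
  for (int i = 0; i < 2; i++) {
    if (e->faces[i] == f) {
      return true;
    }
  }
  return false;
}
```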

GPU_normal_convert*. This is simply a lot of normals to convert. Inlining those functions doesn't make a visible difference.
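For context, a hedged sketch of the kind of work a GPU_normal_convert-style function does per normal: quantizing three floats in [-1, 1] into signed 10-bit integers, as used by packed vertex formats like GL_INT_2_10_10_10_REV. The real Blender function differs in details; the point is simply that this runs once per normal, so total cost scales with element count no matter how cheap each call is.

```c
#include <stdint.h>

/* Clamp a float in [-1, 1] and quantize it to a signed 10-bit range
 * [-511, 511], the per-component step of packing a normal into a
 * 10_10_10_2-style vertex attribute. */
static int quantize_unit_float(float f)
{
  if (f > 1.0f) f = 1.0f;
  if (f < -1.0f) f = -1.0f;
  return (int)(f * 511.0f);
}

/* Pack three components into the low 30 bits of a 32-bit word,
 * 10 bits each (two's complement, masked to 10 bits). */
static uint32_t pack_normal(const float n[3])
{
  uint32_t x = (uint32_t)quantize_unit_float(n[0]) & 0x3FFu;
  uint32_t y = (uint32_t)quantize_unit_float(n[1]) & 0x3FFu;
  uint32_t z = (uint32_t)quantize_unit_float(n[2]) & 0x3FFu;
  return x | (y << 10) | (z << 20);
}
```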

In the attached file my computer spends 0.3 sec in DRW_mesh_batch_cache_create_requested, which is half of the overall time when moving a single vertex.

A possible solution here is to run different stages of DRW_mesh_batch_cache_create_requested() in parallel (it should be possible to update vertex buffers in parallel with face buffers, I would think).


To Do

Event Timeline

Sergey Sharybin (sergey) triaged this task as Normal priority.

Possible solution here is to run different stages in parallel

Yes! That's why I moved all batch generation to a single function. Only a handful of functions that use BMesh tagging cannot be parallelized (or they must use their own "tag" buffer, and maybe that could even be faster).
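A hedged sketch of the "own tag buffer" idea: instead of toggling a shared flag bit on the BMesh elements themselves (which serializes the extraction passes), each pass allocates a private bitmap indexed by element index, so passes no longer contend on element memory. The names are illustrative, not Blender's API.

```c
#include <stdlib.h>
#include <stdbool.h>

/* A private per-pass tag bitmap, one bit per mesh element. */
typedef struct TagBuffer {
  unsigned char *bits;
  int num_elems;
} TagBuffer;

static TagBuffer tag_buffer_create(int num_elems)
{
  TagBuffer tb;
  tb.num_elems = num_elems;
  /* One bit per element, zero-initialized (all untagged). */
  tb.bits = calloc(((size_t)num_elems + 7) / 8, 1);
  return tb;
}

static void tag_buffer_set(TagBuffer *tb, int index)
{
  tb->bits[index / 8] |= (unsigned char)(1u << (index % 8));
}

static bool tag_buffer_test(const TagBuffer *tb, int index)
{
  return (tb->bits[index / 8] >> (index % 8)) & 1u;
}

static void tag_buffer_free(TagBuffer *tb)
{
  free(tb->bits);
  tb->bits = NULL;
}
```

The trade-off mentioned above applies here too: each pass pays a small allocation, in exchange for removing the shared-state dependency between passes.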

Maybe some areas can be rewritten to be more data-driven (like the compression using GPU_normal_convert), but this can use more RAM and I'm not sure how much more beneficial it would be.