
Blender editing performance with many datablocks
Confirmed, High priority, Public, To Do

Assigned To
None
Authored By
Dalai Felinto (dfelinto)
Jan 24 2020, 1:01 PM

Description

Status: 1st milestone almost complete; final optimizations still needed.


Team

Commissioner: @Brecht Van Lommel (brecht)
Project leader: @Bastien Montagne (mont29)
Project members: @Sebastian Parborg (zeddb)
Big picture: In heavy scenes, new features in Blender 2.8x make the inherently linear (O(N)), and sometimes quadratic (O(N²)), complexity of most operations over data-blocks more problematic than it was in the 2.7x era. The general goal is to bring those operations back to constant (O(1)) or logarithmic (O(log(N))) complexity.

Description

Use cases:
Need specific test cases.

  • Undo changes in object mode in a heavy scene.
  • Undo changes in pose mode in a heavy scene.
  • Duplicating objects in a heavy scene.
  • Adding many objects in a scene.

Note: There are two types of “heaviness” in scenes (which can of course be combined):

  • Scenes that have many objects and/or collections.
  • Scenes that have very heavy geometry, either in the geometry data itself, or generated from modifiers like subdivision surface.

Design:

  1. Dependency graph should not have to re-evaluate objects that did not change.
    • This is currently mainly a problem during global undo, since all data-blocks are replaced by new ones on each undo step (a sketch of the idea is given under Milestone 1 below).
  2. Handling naming of data-blocks should be O(log(N)) (it is currently close to O(N²), or O(N log(N)) in the best cases); a name-cache sketch is given after this list.
    • This will require caching the currently used names.
      • We will probably also have to handle the numbering in that cache smartly, if we really want to be efficient in the typical “worst case” scenarios (e.g. adding thousands of objects with the same base name).
  3. Caches/runtime data helpers should always be either:
    A. Lazily built/updated on demand (code affecting the related data should only ensure that the 'dirty' flag is set); see the sketch after this list.
      • This approach can usually be managed in a mostly lock-less way in a threaded context (proper locking is only needed when actually rebuilding a dirty cache).
    B. Kept valid/in sync at all times (code affecting the related data takes care of updating it itself; code using the cache can always assume it is valid and up to date).
      • Such a cache should be easy to update incrementally (it should almost never have to be rebuilt from scratch).
        • This implies that the cache is highly local (changes on a data-block only ever affect that data-block and a well-defined, small number of easily reachable “neighbors”).
      • In general, keeping this kind of cache valid will always be harder and more error-prone than approach #A.
      • This approach often needs proper, complete locking (mutexes & co) in a threaded context.

        Note: The ViewLayer's collections/objects cache, for example, currently uses the worst mix of #A and #B: it is always assumed valid, yet updating it requires a complete rebuild from scratch almost all the time.

        Note: Approach #B is usually worthwhile only if a complete rebuild of the cache is very costly, and/or if the cache is invalidated very often while being used.
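The sketch below illustrates the kind of name cache point 2 refers to. It is only a rough C++ illustration with hypothetical names (NameCache, ensure_unique), not Blender's actual naming code: an ordered set of used names gives O(log(N)) membership tests, and a per-base-name counter avoids re-probing every numbered variant when thousands of objects share the same base name.

  // Minimal sketch only -- not Blender's actual naming code or API.
  #include <cstdio>
  #include <map>
  #include <set>
  #include <string>

  struct NameCache {
    // All names currently in use; the ordered set gives O(log(N)) lookups.
    std::set<std::string> used;
    // Highest numeric suffix handed out per base name, so generating
    // "Cube.1234" does not require probing "Cube.001" ... "Cube.1233".
    std::map<std::string, int> max_suffix;

    std::string ensure_unique(const std::string &base)
    {
      if (used.insert(base).second) {
        return base; /* Base name was still free. */
      }
      int &counter = max_suffix[base];
      std::string candidate;
      do {
        char suffix[16];
        std::snprintf(suffix, sizeof(suffix), ".%03d", ++counter);
        candidate = base + suffix;
      } while (!used.insert(candidate).second);
      return candidate;
    }
  };

  int main()
  {
    NameCache cache;
    // Adding many objects with the same base name stays cheap.
    for (int i = 0; i < 5; i++) {
      std::printf("%s\n", cache.ensure_unique("Cube").c_str());
    }
    return 0;
  }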

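For point 3.A, a lazily rebuilt cache typically looks like the following sketch (again C++ with illustrative names such as ViewLayerCache and tag_dirty, not existing Blender code): writers only set a dirty flag, and the lock is taken solely while a reader rebuilds the cache.

  // Minimal sketch of design option A: a lazily rebuilt, dirty-flag guarded cache.
  #include <atomic>
  #include <mutex>
  #include <vector>

  struct Collection {
    int id;
  };

  struct ViewLayerCache {
    std::atomic<bool> dirty{true};
    std::mutex rebuild_mutex;
    std::vector<const Collection *> visible;

    // Writers only tag the cache; they never pay the rebuild cost.
    void tag_dirty()
    {
      dirty.store(true, std::memory_order_release);
    }

    // Readers rebuild on demand; locking is only needed during the rebuild.
    const std::vector<const Collection *> &ensure(const std::vector<Collection> &all)
    {
      if (dirty.load(std::memory_order_acquire)) {
        std::lock_guard<std::mutex> lock(rebuild_mutex);
        if (dirty.load(std::memory_order_acquire)) { /* Re-check under the lock. */
          visible.clear();
          for (const Collection &collection : all) {
            visible.push_back(&collection); /* Real code would filter/flatten here. */
          }
          dirty.store(false, std::memory_order_release);
        }
      }
      return visible;
    }
  };

Option #B would instead update the cached list directly from every code path that modifies collections, which is exactly why it is harder to keep correct.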
Engineer plan: -

Work plan

Milestone 1 - Optimized per-datablock global undo
Time estimate: 6 months
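The core idea behind this milestone (design point 1 above) is that applying an undo step should detect which data-blocks actually differ from the current state and only tag those for re-evaluation, instead of replacing everything. The sketch below only illustrates that matching step, using hypothetical fields (session_uid, content_hash) rather than Blender's real undo/memfile structures.

  // Illustrative sketch only: reuse unchanged data-blocks across undo steps.
  #include <cstdint>
  #include <unordered_map>
  #include <vector>

  struct ID {
    uint64_t session_uid;   /* Stable identity of the data-block across undo steps. */
    uint64_t content_hash;  /* Stand-in for "did this data-block's content change?". */
  };

  /* Returns the data-blocks from the restored state that need re-evaluation. */
  static std::vector<ID *> datablocks_to_update(const std::vector<ID> &current,
                                                std::vector<ID> &restored)
  {
    std::unordered_map<uint64_t, const ID *> current_by_uid;
    for (const ID &id : current) {
      current_by_uid.emplace(id.session_uid, &id);
    }

    std::vector<ID *> needs_update;
    for (ID &id : restored) {
      auto it = current_by_uid.find(id.session_uid);
      /* New data-block, or same data-block with changed content: tag it. */
      if (it == current_by_uid.end() || it->second->content_hash != id.content_hash) {
        needs_update.push_back(&id);
      }
    }
    return needs_update;
  }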

Milestone 2 - Lazy collection synchronizations
Time estimate: 1 month

Milestone 3 - Data-block management performance with many data-blocks
Time estimate: 5 months

  1. Investigate how to best handle naming conflict issues.
  2. Investigate the best way to cache bi-directional relationship info between data-blocks (see the sketch after this list).
  3. Fix/improve handling of the Bone pointers (which are sub-data of the Armature ID) stored in poses (which are sub-data of the Object ID).
    • Ideally such pointers should not exist at all; they are a recurrent source of bugs in Blender.
    • It is not yet clear how to best deal with them; at the very least we could add some generic ID API to ensure those kinds of caches are up to date.
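For point 2, one possible shape of such a cache is a reverse-lookup map from a data-block to the set of data-blocks using it, updated incrementally whenever a reference is added or removed instead of being recomputed by scanning every data-block. The sketch below is purely illustrative (the RelationsCache type and its functions are hypothetical, not an existing API).

  // Hypothetical sketch of an incrementally maintained reverse-dependency cache.
  #include <unordered_map>
  #include <unordered_set>

  struct ID;  /* Stand-in for a data-block. */

  struct RelationsCache {
    /* Used data-block -> set of data-blocks referencing it. */
    std::unordered_map<const ID *, std::unordered_set<const ID *>> users_of;

    void add_reference(const ID *user, const ID *used)
    {
      users_of[used].insert(user);
    }

    void remove_reference(const ID *user, const ID *used)
    {
      auto it = users_of.find(used);
      if (it != users_of.end()) {
        it->second.erase(user);
      }
    }

    /* O(1) on average, instead of walking all data-blocks to count users. */
    bool has_users(const ID *used) const
    {
      auto it = users_of.find(used);
      return it != users_of.end() && !it->second.empty();
    }
  };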

See also T68938: Blender editing performance with many datablocks.

Later

  • Dependency graph rebuild should become O(1) or O(log(N)) (it is currently O(N)). Open question: wouldn't linear complexity already be acceptable performance for this code?

Notes: -


Event Timeline

Edited the task, sorted the stage 2 sub-steps in the order I think is most relevant, and added sub-tasks for all three of them.

Please note that this is still very vague and mostly there to keep track of random ideas at this point. A proper technical design is still required; stage 2 is not likely to happen in the immediately coming months anyway.

Dalai Felinto (dfelinto) renamed this task from "Scene editing in object mode" to "Blender editing performance with many datablocks". Jan 28 2020, 12:24 PM
Dalai Felinto (dfelinto) updated the task description.
Campbell Barton (campbellbarton) changed the task status from Needs Triage to Confirmed. Feb 12 2020, 8:29 AM

> Please note that this is still very vague and mostly there to keep track of random ideas at this point. A proper technical design is still required; stage 2 is not likely to happen in the immediately coming months anyway.

Of course, algorithm optimization is primarily a complex mathematical problem.
(One of my favorite explanations of big-O notation.)