
Optimized per-datablock global undo
Confirmed · Normal priority · Public · Design

Authored By
Brecht Van Lommel (brecht)
Jan 21 2019, 5:11 AM
"Love" token, awarded by HEYPictures."Love" token, awarded by silex."Mountain of Wealth" token, awarded by TheAngerSpecialist."Like" token, awarded by chironamo."Love" token, awarded by ReinhardK."Love" token, awarded by Shimoon."Love" token, awarded by Leroy."Love" token, awarded by brilliant_ape."Love" token, awarded by MrJomo."Love" token, awarded by radi0n."Like" token, awarded by TheCharacterhero."Love" token, awarded by lopoIsaac."Evil Spooky Haunted Tree" token, awarded by AnityEx."Love" token, awarded by franMarz."Burninate" token, awarded by DotBow."Burninate" token, awarded by Chromauron."Burninate" token, awarded by achtrounf."Like" token, awarded by Moult."Love" token, awarded by MetinSeven."Love" token, awarded by Yegor."Like" token, awarded by higgsas."Burninate" token, awarded by Alumx."Love" token, awarded by A.Lex_3D."Love" token, awarded by Dangry."Burninate" token, awarded by miller."Like" token, awarded by UrbenLegend."Love" token, awarded by bblanimation."Love" token, awarded by Brandon777."Love" token, awarded by andruxa696."Burninate" token, awarded by LapisSea."Love" token, awarded by bintang."Like" token, awarded by GeorgiaPacific."Love" token, awarded by Kickflipkid687."Love" token, awarded by xdanic."Love" token, awarded by Kubo_Wu."Burninate" token, awarded by 245."Like" token, awarded by charlie."Love" token, awarded by nosslak."Like" token, awarded by Fracture128."Burninate" token, awarded by hitrpr."Love" token, awarded by Blendork."Yellow Medal" token, awarded by rawalanche."Burninate" token, awarded by ogierm."Cup of Joe" token, awarded by z01ks."Love" token, awarded by mazigh."Burninate" token, awarded by demmordor."Burninate" token, awarded by wo262."Love" token, awarded by Tetone."Love" token, awarded by S_Jockey."Love" token, awarded by maxivz."Love" token, awarded by bnzs."Burninate" token, awarded by Schamph."Like" token, awarded by dulrich."Burninate" token, awarded by FeO."Love" token, awarded by jeacom256."Like" token, 
awarded by samgreen."Burninate" token, awarded by sinokgr."Like" token, awarded by Piquituerto."Burninate" token, awarded by Koriel."Love" token, awarded by Cirno."Yellow Medal" token, awarded by amonpaike."Like" token, awarded by d.viagi."Burninate" token, awarded by crantisz."Burninate" token, awarded by mauriciomarinho."Love" token, awarded by razgriz286."Burninate" token, awarded by lordodin."Love" token, awarded by dgsantana."Love" token, awarded by jsm."Burninate" token, awarded by Plyro."Yellow Medal" token, awarded by Way."Love" token, awarded by dpdp."Love" token, awarded by michaelknubben."Love" token, awarded by Frozen_Death_Knight."Love" token, awarded by Krayzmond."Love" token, awarded by symstract."Like" token, awarded by Scaredyfish."100" token, awarded by Zino."Like" token, awarded by billreynish."Love" token, awarded by lcs_cavalheiro."Like" token, awarded by zanqdo."Love" token, awarded by 0o00o0oo."Like" token, awarded by DaPaulus."Love" token, awarded by verbal007."Like" token, awarded by aliasguru."Love" token, awarded by juang3d.


Global undo is slow in scenes with many objects. A big reason is that undo writes and reads all datablocks in the file (except those from linked libraries). If we only handle the ones that actually changed, performance would be much improved in common scenarios.

This proposal outlines how to achieve this in four incremental steps. Implementing only the first one or two would already provide a significant speedup.

At the time of writing there is no concrete plan to implement this, but I've had this idea for a while and it seemed useful to write it down in case someone would like to pick it up.

1. Read only changed datablocks (DONE)

Linked library datablocks are already preserved on undo pop. We can do the same thing for all non-changed datablocks, and only read the changed ones.

We can detect which datablocks were changed by checking that their diff is empty. Addition and removal of datablocks also needs to be detected. We need to be careful to exclude runtime data like LIB_TAG_DOIT from writing, to avoid detecting too many datablocks as changed. Most changes to runtime data go along with an actual change to the datablock though.
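As a rough illustration of this detection idea (a hypothetical Python sketch, not Blender's actual memfile diffing; all names here are invented), the check amounts to comparing each datablock's serialized form against the previous undo step, with runtime-only fields excluded so they do not cause false positives:

```python
# Hypothetical sketch: detect changed datablocks by hashing each
# datablock's serialized bytes and comparing against the previous
# undo step. Names and structure are illustrative only.
import hashlib

def serialize(datablock):
    # Stand-in for the write step; runtime-only fields (the analogue
    # of LIB_TAG_DOIT-style tags) are excluded from writing so they
    # do not make an otherwise unchanged datablock look changed.
    stable = {k: v for k, v in datablock.items()
              if not k.startswith("runtime_")}
    return repr(sorted(stable.items())).encode()

def changed_datablocks(prev_step, curr_step):
    """Return names of datablocks that were added, removed, or modified."""
    prev_hashes = {name: hashlib.sha1(serialize(db)).digest()
                   for name, db in prev_step.items()}
    curr_hashes = {name: hashlib.sha1(serialize(db)).digest()
                   for name, db in curr_step.items()}
    changed = set()
    for name in prev_hashes.keys() | curr_hashes.keys():
        # Missing on one side covers addition/removal; differing
        # hashes mean the diff is non-empty.
        if prev_hashes.get(name) != curr_hashes.get(name):
            changed.add(name)
    return changed
```

Only the datablocks returned here would need to be re-read on undo; everything else can be preserved as-is.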

Since unchanged datablocks may point to changed datablocks, we need to reload datablocks at the exact same memory location. We can safely assume the required memory size stays the same, because if it didn't, those other datablocks' pointers into it would have changed as well. In principle other datablocks should not point to anything but the datablock itself, but there may be exceptions that need to be taken care of (e.g. pointers to [armature] bones held by [object] pose data).
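A toy analogy for why the reload must happen in place (hypothetical Python; updating a dict in place stands in for reusing the same memory address in C):

```python
# Toy illustration, not Blender code: other datablocks hold direct
# references to a changed datablock, so its contents must be replaced
# without replacing the object itself, keeping those references valid.
def reload_in_place(datablock, new_contents):
    """Overwrite a datablock's contents without creating a new object,
    the analogue of re-reading it into the same memory location."""
    datablock.clear()
    datablock.update(new_contents)
    return datablock
```

If the datablock were instead replaced by a freshly allocated one, every unchanged datablock pointing at it would hold a dangling reference, which is exactly what the same-address constraint avoids.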

Main fields other than datablock lists may need some attention too.

2. Optimize post undo/redo updates (DONE)

The most expensive update is likely the dependency graph rebuild. We can detect if any depsgraph rebuild was performed between undo pushes, and if so mark that undo step as needing a depsgraph rebuild. This means the undo step is no more expensive than the associated operation.

This requires the dependency graph to be preserved through undo in scene datablocks, similar to how images preserve image buffers. This would then also preserve GPU data stored in the dependency graph. If the dependency graph supports partial rebuilds in the future, that could be taken advantage of.

There may be other expensive updates, to be revealed by profiling.
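The rebuild tracking described in step 2 can be sketched as follows (hypothetical Python, not Blender's undo code; names are invented): record at push time whether any depsgraph rebuild happened since the previous push, and on undo only rebuild when that step was flagged.

```python
# Illustrative sketch: each undo step remembers whether a depsgraph
# rebuild occurred between the previous push and this one, so undoing
# it is no more expensive than the operation that created it.
class UndoStack:
    def __init__(self):
        self.steps = []                  # list of (state, needs_rebuild)
        self.rebuild_since_push = False

    def tag_depsgraph_rebuild(self):
        # Called wherever the real code would rebuild relations.
        self.rebuild_since_push = True

    def push(self, state):
        self.steps.append((state, self.rebuild_since_push))
        self.rebuild_since_push = False

    def undo(self):
        # Only pay the rebuild cost if the undone step required one.
        state, needs_rebuild = self.steps.pop()
        return state, needs_rebuild
```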

3. Rethink how we handle cache preservation across undo steps

T76989 seems to point to some pointer-collision issue, similar to the reason that led us to add session uuids to IDs when we started re-using existing pointers in undo.

Note that according to the report, there were already issues before the undo refactor; it is hard to tell, though, whether they had the same root cause.

This needs to be investigated further.

4. Write only changed datablocks

While undo/redo itself is usually the most expensive part, the undo push that happens after each operation also slows down as scene complexity increases. If we know which datablocks changed, we can save and diff only those.

This could be deduced from dependency graph update tags. However, there are likely still many operations where tags are missing. If this is not done correctly, then instead of a missing refresh we get the more serious bug of not undoing all changes. So correct tagging needs to be verified throughout the Blender code before this can be enabled.

Debug builds could verify correctness by testing the diff is empty whenever there was no tag.
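Such a debug-build check might look like this (a hypothetical sketch; the helper names and the `diff_is_empty` callback are invented for illustration):

```python
# Hypothetical debug check for step 4: whenever a datablock was NOT
# tagged as updated, assert that its undo diff is actually empty, so
# missing update tags are caught during development rather than
# silently producing incomplete undos.
def verify_tags(datablocks, tagged, diff_is_empty):
    """datablocks: all datablock names in the file.
    tagged: names tagged as updated since the last undo push.
    diff_is_empty: callable reporting whether a datablock's diff
    against the previous undo step is empty."""
    missing = [name for name in datablocks
               if name not in tagged and not diff_is_empty(name)]
    assert not missing, f"changed but untagged: {missing}"
    return missing
```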

Current Status and Plan

There is an experimental option in 2.83 to address step 1 and (part of) step 2. This was developed in branches and is now in master. Implementation of step 3 has not started.

  • id-ensure-unique-memory-address implements the idea of never re-using the same memory address twice for IDs. This is required to allow using ID pointers as uids of datablocks across the undo history.
    • Add storage of address history for each ID (has to be stored in the blend memfile; can be ignored/reset when doing an actual blendfile read).
    • Discuss how to best implement the 'never get the same allocated address again' process; the current code here is basic, and we could probably be much smarter and more efficient with some changes to our MEM allocator itself.
    • We are going to try and use session-wise uuids for data-blocks instead.
    • Make the new undo behavior controllable from the Experimental preferences area.
  • undo-experiments implements detection of unchanged datablocks at undo time, re-using them instead of re-reading them.
    • Switch to using ID addresses instead of names to find a given data-block across the undo history (depends on completion of work in the id-ensure-unique-memory-address branch).
  • undo-experiments-swap-reread-datablocks implements re-using the old ID address even for changed IDs that need actual reading from the memfile. Also adds support for more refined handling of depsgraph update flags from undo. Both changes combined dramatically reduce the depsgraph work needed during most undo steps.
    • Switch to using ID addresses instead of names to find a given data-block across the undo history (depends on completion of work in the id-ensure-unique-memory-address branch).
  • Investigate crashes due to invalid or use-after-free memory in the depsgraph after an undo step.
  • Investigate why some undo steps still use a full reload of the memfile. Some may be necessary, but we think we can improve here.
    • In particular, the very last undo step (the first to be undone) is always seen as 'all data changed', for reasons not yet understood.
  • Investigate how to exclude runtime data from the 'data is changed' detection (probably by nullifying those pointers at write time?).
  • Fix slow undo of adding/duplicating/deleting datablocks. The undo diffing does not appear to be smart enough to match up the right datablocks in such cases, causing many unrelated datablocks to be marked as changed, which also means excessive memory usage. The session_uuid could be used to find matching datablocks.
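The session_uuid matching mentioned in the last point can be sketched like this (hypothetical Python; a per-session counter stands in for Blender's actual uuid assignment):

```python
# Sketch: give every datablock a session-unique id at creation, so
# the same datablock can be paired across undo steps even if its
# name changed, instead of misdetecting a rename as delete-plus-add.
import itertools

_session_counter = itertools.count(1)

def new_datablock(name):
    return {"name": name, "session_uuid": next(_session_counter)}

def match_across_steps(old_step, new_step):
    """Pair datablocks between two undo steps by session_uuid."""
    old = {db["session_uuid"]: db for db in old_step}
    pairs = []
    for db in new_step:
        prev = old.get(db["session_uuid"])
        if prev is not None:
            pairs.append((prev["name"], db["name"]))
    return pairs
```

With name-based matching, a renamed or duplicated datablock shifts all subsequent matches; uuid-based matching keeps unrelated datablocks out of the changed set.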


Event Timeline


Will this optimization also be beneficial to the frustrating undo / redo behaviour in Sculpt Mode? I truly hope that will be fixed soon.

Does preserving/re-using unchanged datablocks/memory mean that this Gotcha will no longer be true 100% of the time? (for data/IDs that didn't change)

@Chris Kohl (ckohl_art) the pointer will remain the same if the object still exists after undo. But it can of course get deleted, so you still can't rely on it.


Looking into the current state I found a few issues. Some require potentially deeper design changes to get maximum performance.

Double Undo Step

Currently every object-mode undo will undo twice: once doing a complete file read to restore the current state, and then a second time partially reading only the changed datablocks. Since the current bottleneck is often still the depsgraph, this is not always a big problem, but it's still something we should avoid.

A simple fix for that would be to initialize the memory chunks' is_identical_future to true. Or better, to entirely skip reading such an undo step, which has no changes relative to the current state.

However, the problem is that we don't actually know for certain that there have been no changes relative to the current state. For a few reasons:

  • Nearly all operators will do an undo push after making changes, but not all.
  • Dependency graph evaluation may flush back some data to the original, and this happens after undo push.
  • Python app handlers may modify the scene in arbitrary ways.

One way to solve this would be to perform an undo push right before we do an undo pop. This would let us reliably detect when the scene has actually changed, and allow the contents of unchanged datablocks to stay the same which should help the depsgraph.

Of course, writing the entire database to avoid reading the entire database is still not particularly efficient. Once the 'write only changed datablocks' step of this design task is implemented, however, we would be able to write just the changed datablocks, which would make this faster.
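The push-before-pop idea can be sketched as follows (hypothetical Python; states are modeled as dicts and "re-reading" as comparing keys, purely for illustration):

```python
# Sketch: write the live state onto the top of the stack right before
# undoing, so the diff against the target step captures any unknown
# modifications (missed undo pushes, depsgraph flush-back, Python
# handlers), and only genuinely changed datablocks need re-reading.
def undo_pop(stack, live_state):
    """stack: list of saved states, newest last (at least two entries).
    Returns the target state and the keys that need re-reading."""
    stack[-1] = dict(live_state)   # "undo push" replacing the top step
    target = stack[-2]
    changed = {k for k in set(stack[-1]) | set(target)
               if stack[-1].get(k) != target.get(k)}
    stack.pop()
    return target, changed
```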

Further, we could consider changing when undo pushes happen: maybe after dependency graph updates and (some but not all) Python handlers, or even UI drawing. Or we could perform undo pushes only before operators run. However, this has significant implications for how operators perform undo, including those implemented in Python add-ons, and would break API compatibility.

Depsgraph Update Flags

D7274 fixes an issue with meshes being detected as always changed after they have been changed once. But I still found that e.g. editing a modifier parameter will cause slower undo two steps later. The reason for this seems to be that dependency graph evaluation is performed after undo.

id->recalc is saved along with the undo step, and will still be set because the dependency graph has not been updated yet. When loading the undo step, this value is restored, which means dependency graph evaluation has to be re-done. In most cases this is really only needed for data flushed back to the original datablocks; the evaluated datablocks may be in the correct state already. Performing undo pushes after dependency evaluation would help with this, though as explained above this is not a simple change.

The other issue is that accumulation of recalc flags also happens after undo. This means they will be included in the next undo step, which may be for a completely different operator. One way to solve this would be to accumulate the recalc flags set by operators, rather than after depsgraph flushing or evaluation. Basically, accumulate all flags passed to DEG_id_tag_update(). For correct results, after undo we would then call DEG_id_tag_update() with those flags, so any indirect flags are set as well. Doing this would limit the slowness to only one step later.
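The flag-accumulation idea can be sketched like this (hypothetical Python; flags are modeled as integer bitmasks and the class names are invented):

```python
# Sketch: record the flags operators pass to DEG_id_tag_update()
# between undo pushes, store them with each step, and re-apply them
# after undo so the depsgraph derives indirect updates itself rather
# than relying on id->recalc values saved after flushing.
class RecalcTracker:
    def __init__(self):
        self.accumulated = {}   # id name -> OR'd flag bits since last push
        self.per_step = []      # one flag dict per undo step

    def deg_id_tag_update(self, id_name, flags):
        # Stand-in for the real tag function: accumulate what the
        # operator requested, before any depsgraph flushing happens.
        self.accumulated[id_name] = self.accumulated.get(id_name, 0) | flags

    def on_undo_push(self):
        self.per_step.append(self.accumulated)
        self.accumulated = {}

    def flags_to_reapply(self, step_index):
        # After undoing this step, the real code would call
        # DEG_id_tag_update() again with these flags.
        return self.per_step[step_index]
```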

Short term I suggest to do the following:

  • Get artists at the Blender Studio to enable this option and test it.
  • Clear common runtime fields for writing. Particularly runtime members and id->tag.
  • Before undoing, do an undo write to replace the current state on the stack. Then if that is functioning, we can set accurate is_identical(_future) and only read changed datablocks. I would do this for performance, but also because I'm not entirely sure the current code will give correct results in all cases.
  • Do more extensive testing ourselves in production files and get a list of reproducible bugs, and try to fix them.

This could get us to a state where it's still slower than it could be in some cases, but hopefully faster in most cases and stable enough for 2.83.

Then the next step could be:

  • Identify more cases where undo is unexpectedly slow (ignoring the 1/2 step delay). This may require clearing more runtime fields, or other yet unknown issues. But at least it would be good to get a sense of how common such issues are after we've cleared the most suspect runtime fields.
  • Bring the delayed effect of slower undo back from 2 steps to 1. I feel there is something relatively simple we should be able to do here by adjusting the logic, but I don't know yet exactly what. Avoiding the 1-step delay is quite hard due to the undo/depsgraph ordering, but the 2-step delay should be avoidable.

We have enough complex production files for testing at this point. The most important way to help now is finding bugs where things crash or corrupt the .blend file, which any user can help with by enabling the experimental undo speedup option and reporting bugs to the tracker that explain how to reproduce the problem.

It also helps somewhat to find cases where undo is unexpectedly slow. However, there are currently a number of known cases that are still slow and hard to explain, so it's probably more efficient for us to fix those first; otherwise they will be reported over and over.

I just tested out a scene with some heavy geometry and a higher object count I've been working on in 2.82. Normally it's been taking ~20 seconds to undo an object movement or other non-Edit-Mode operation.

Testing out the very latest 2.83 build with the fix on, I was getting almost instant undos now, under 1 second! Sometimes it would stall out, as noted in the comments above, but at worst it seemed to be about 12 seconds now. Sweet!!

Although, I am seeing worse viewport performance vs. 2.82. My GPU was hitting 100% in Windows sometimes and stalling out a video playing in the background, for example. Also, just rotating the view looked choppy vs. much smoother with no real issues in 2.82. Could this be from the undo fixes, or is something else going on?

In any case, thank you guys for working on this, this is my #1 fix / wish right now for Blender. Sorry if this is considered "spam" / delete if you need to.

@Brecht Van Lommel (brecht) that plan looks sound to me.

I can do the runtime fields clearing, wanted to do them asap anyway... this will just require handling more IDs with a temp copy as we already do with meshes...

I addressed some of the issues explained in T60695#899783 in D7339: Undo: change depsgraph recalc flags handling to improve performance. This doesn't try to handle issues with Python app handlers or other operators not doing undo pushes, but I think it may be OK in practice anyway.

I found another slowness issue when adding/duplicating/deleting objects, and added that to the list in the description.

Wow, I was testing the latest build as of this morning and so far, I see pretty big speed improvements!!

In a more complex undo (duplicating an entire high-poly weapon, moving it, and hitting undo), it is about 2-3x faster than 2.82.
Doing more normal levels of undo, it's 4-5x faster than 2.82, taking less than 1 second vs. 4-5 seconds in 2.82. Awesome!!

It would be really great if you guys make it even faster, but so far, I'm super happy to see this. Thank you for all your hard work so far!

Piotr (radi0n), Apr 19 2020, 3:07 PM

In Blender 2.90 Alpha with the new undo enabled, undo in object mode in a heavy scene (over 12M tris) dropped from 33 seconds (Blender 2.82) to 1 second. Bravo devs!

Hi there, is the T70814 bug related to this undo work? (Basically it causes the first undo of a stroke to also undo brush settings.)

@Leul Mulugeta (Leul), that's unrelated to this specific task.

I have experienced some weird Rigid Body behaviour when undoing lately (e.g. T76053), including some crashes, and all the weirdness seems to go away when I use legacy undo. I can't reproduce the crash reliably (I'm trying to work that out as we speak), but T76053 might be a starting point.

Not sure if it is related to the new undo, but I just faced a weird bug: when in Edit Mode, I pressed Ctrl-Z to undo (intending to undo scaling some vertices), and then quickly Ctrl-Shift-Z to redo, but I ended up in a weird state: the mesh appeared scaled, in the undo history I appeared to be two steps below the tip, and if I redid those two steps, each of them undid my scaling action halfway.

Sadly I can't reproduce it yet.

That is Blender 2.83.15 with the new undo turned on.

I saw an update posted above about fixing some issues that caused the entire scene to be reloaded. Not sure if that's in the 2.9 alpha build as of today, though, or whether it isn't fixed yet.

I am still seeing long undo steps when going between Object Mode and Edit Mode. But that might still be in the works/on the books.
I feel if that can be solved, or even reduced by 50%, that would make things much, much better :).
Overall, undo is still way faster than it's been in the past since 2.8. But since I'm often toggling between Edit and Object Mode and have to undo, it can still be a lot of waiting around between those steps. Or I try to look at the undo history and jump to a specific point if at all possible, to make it faster.

Windows 64bit, running 2.9 Alpha (May 30, 00:27:32 - 2ee94c954d67)

This probably isn't possible, or would require massive changes, but I have no idea. Looking at 3ds Max undo again, they do it in a way where, when you undo, it doesn't matter if you are in Object or Edit Mode; it will still undo those steps without switching between states during undo.

Meaning, if you are in Object Mode, it will still undo the Edit-Mode-level changes. Part of the lag I think I still experience even in 2.83 is the fact that Blender has to switch into Edit Mode and back in order to undo those changes. Object-Mode undo is faster now, but when it has to switch back and forth into Edit Mode, it can still take 5-10+ seconds.

I also notice things like Proportional Editing (soft select) getting toggled on/off during undo as well, which is very strange to me. It seems like some of those UI elements should not be affected by, or accounted for in, undo, IMO. Or maybe most UI things should be ignored, except for modifiers and that sort of thing?

The UI aspect of this may or may not contribute much to the overall speed, and is probably another task altogether, but I thought I'd mention it here as well.