
Optimized per-datablock global undo
Open, NormalPublic

Description

Global undo is slow in scenes with many objects. A major reason is that undo writes and reads all datablocks in the file (except those from linked libraries). If we handled only the datablocks that actually changed, performance would improve significantly in common scenarios.

This proposal outlines how to achieve that in three incremental steps. Implementing just the first one or two steps would already provide a significant speedup.

At the time of writing there is no concrete plan to implement this, but I've had the idea for a while and it seemed useful to write it down in case someone would like to pick it up.

1. Read only changed datablocks

Linked library datablocks are already preserved on undo pop. We can do the same for all unchanged local datablocks, and only read the changed ones.

We can detect which datablocks changed by checking whether their diff against the previous undo step is empty. Addition and removal of datablocks also needs to be detected. We need to be careful to exclude runtime data such as LIB_TAG_DOIT from writing, to avoid flagging too many datablocks as changed. In practice, though, most changes to runtime data accompany an actual change to the datablock.
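A minimal sketch of this change-detection idea, assuming each datablock has a serialized form with runtime data already stripped. The `SerializedID` type and `id_changed` helper are hypothetical illustrations, not Blender's actual memfile structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical simplified model: the serialized bytes of each datablock
 * from the previous undo push are kept, and a new push is compared
 * against them. An identical byte stream means the datablock is
 * unchanged and can be skipped on undo pop. Runtime-only fields
 * (e.g. LIB_TAG_DOIT) are assumed stripped before this comparison. */
typedef struct SerializedID {
  const char *name;          /* datablock name, e.g. "OBCube" */
  const unsigned char *data; /* serialized bytes, runtime data excluded */
  size_t len;
} SerializedID;

/* Returns true when the serialized forms differ (the diff is non-empty),
 * meaning the datablock must be re-read on undo. A missing previous
 * version means the datablock was newly added. */
static bool id_changed(const SerializedID *prev, const SerializedID *curr)
{
  if (prev == NULL) {
    return true; /* newly added datablock */
  }
  return prev->len != curr->len ||
         memcmp(prev->data, curr->data, curr->len) != 0;
}
```

Removed datablocks would be detected symmetrically: an entry present in the previous push with no counterpart in the current one.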

Since unchanged datablocks may point to changed datablocks, we need to reload changed datablocks at the exact same memory location. We can safely assume the required memory size stays the same: if it didn't, the pointers in those other datablocks would have changed as well. In principle other datablocks should not point to anything but the datablock itself, but there may be exceptions that need to be taken care of (e.g. pointers to [armature] bones held by [object] pose data...).
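The in-place reload could look roughly like this — a hypothetical sketch (real memfile chunks are more involved), relying on the size-stability assumption described above:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: re-read a changed datablock into its existing
 * allocation so that pointers held by unchanged datablocks stay valid.
 * The size is assumed stable: if it had changed, the datablocks
 * referencing it would have changed too and would be re-read themselves. */
static void reload_in_place(void *existing, size_t existing_size,
                            const void *undo_data, size_t undo_size)
{
  /* The design's core invariant; violated sizes would mean dangling
   * pointers elsewhere, so fail loudly. */
  assert(existing_size == undo_size);
  memcpy(existing, undo_data, undo_size);
}
```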

Main fields other than datablock lists may need some attention too.

2. Optimize post undo/redo updates

The most expensive update is likely the dependency graph rebuild. We can detect whether any depsgraph rebuild was performed between undo pushes, and if so mark that undo step as needing a depsgraph rebuild. This way an undo step is no more expensive than the operation it restores.

This requires the dependency graph to be preserved through undo in scene datablocks, similar to how images preserve image buffers. This would then also preserve GPU data stored in the dependency graph. If the dependency graph supports partial rebuilds in the future, that could be taken advantage of.
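The rebuild-flag idea could be sketched with a global rebuild counter stored per undo step — a hypothetical, heavily simplified model (`g_depsgraph_rebuild_count`, `UndoStep`, and the functions below are inventions for illustration, not Blender's actual API):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of step 2: a counter incremented on every
 * depsgraph rebuild. At undo-push time the current counter value is
 * stored in the step; on undo pop, a rebuild is only needed when the
 * counter stored in the target step differs from the current one,
 * i.e. a rebuild happened somewhere in between. */
static unsigned int g_depsgraph_rebuild_count = 0;

typedef struct UndoStep {
  unsigned int depsgraph_count; /* rebuild counter at push time */
} UndoStep;

static void depsgraph_rebuild(void)
{
  g_depsgraph_rebuild_count++;
  /* ...actual rebuild work would happen here... */
}

static void undo_push(UndoStep *step)
{
  step->depsgraph_count = g_depsgraph_rebuild_count;
}

static bool undo_pop_needs_rebuild(const UndoStep *target)
{
  return target->depsgraph_count != g_depsgraph_rebuild_count;
}
```

When no rebuild is needed, the preserved depsgraph (and the GPU data it holds) can simply be reused after the undo pop.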

There may be other expensive updates, to be revealed by profiling.

3. Write only changed datablocks

While undo/redo itself is usually the most expensive part, the undo push that happens after each operation also slows down as scene complexity increases. If we know which datablocks changed, we can save and diff only those.

This could be deduced from dependency graph update tags. However, there are likely still many operations where tags are missing. If this is done incorrectly, the result is not just a missing refresh but the more serious bug of not undoing all changes. So correct tagging needs to be verified throughout the Blender code before this can be enabled.

Debug builds could verify correctness by checking that the diff is empty whenever a datablock was not tagged.
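Step 3 and its debug-build safety net together could be sketched as follows. The `IDState` type and its two booleans are hypothetical stand-ins for the real tag and diff queries:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of step 3: only datablocks carrying a depsgraph
 * update tag are written into the undo push. In debug builds, an
 * untagged datablock whose diff against the previous push is non-empty
 * reveals a missing update tag in some operation. */
typedef struct IDState {
  bool tagged;        /* depsgraph update tag set by the operation */
  bool diff_is_empty; /* serialized form identical to previous push */
} IDState;

/* Returns true when the datablock should be written into the undo step. */
static bool undo_push_should_write(const IDState *id)
{
#ifndef NDEBUG
  /* Untagged datablocks must not have changed; a failure here means an
   * operation modified data without tagging it for update. */
  assert(id->tagged || id->diff_is_empty);
#endif
  return id->tagged;
}
```

In release builds the check compiles away and only the tag decides what gets written.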

Details

Type
Design

Event Timeline


Hi @Brecht Van Lommel (brecht), with version 2.8 we decided to give Blender another go, and overall it is a great release and we were considering switching, but this undo issue is the biggest showstopper we have found so far. To be honest, I'm surprised that this is considered "normal" priority. I have never worked in a production that doesn't have a few thousand objects per scene with a few million polys. If undoing a move takes a couple of minutes, that makes Blender unusable for most productions, I think. I decided to add this comment in the hope of bumping this up a bit. I hope that's OK.
Nick


Yes, I think we need to make more noise about this. It's disconcerting that this task is Normal priority. Blender is getting a lot of hype because of Eevee and UI improvements, and many people are posting cool little video snippets of what can be done. But as soon as a scene gets a bit bigger, the simple function of undoing becomes a major hindrance. I just finished a slightly more complex scene, and undo performance was the biggest issue I faced throughout the whole project. It's not just that we need to wait 5–10 seconds for each undo; it's also all the times you avoid undoing because you know it will lock up your computer for a while. This inhibits creativity as well, since undo is a weapon an artist can use to try many, many different things quickly. I think Ton Roosendaal's priority has always been that Blender has to be fast. So let's bump this up to the highest priority possible!


Undo lag is also the biggest production killer for me at the moment. Not just modelling: everything (except edit mode on a mesh) becomes dramatically slow to undo in more complex scenes. It affects every user in the mid-to-late stages of a project; even with an SSD and a high-end PC it's just slow. So thumbs up for giving this feature more importance!

AntoineR (S_Jockey) added a comment. Edited Aug 16 2019, 6:22 PM

I agree with you guys, I think this should be high priority. I work in the video game industry as a senior character artist, and I'm wasting a lot of production time because of the slow undo. Character scenes are heavy (around 25 million polys) and we use sculpting tools with the Subdivision Surface modifier a lot. I'm currently testing Blender 2.8's features and evaluating whether it makes sense to use it for the whole character modeling production. Unfortunately, for the moment I can't recommend it to my co-workers because of this performance issue. The Blender team did a great job with the 2.8 release and I love using this software; as soon as this performance issue gets fixed I think it will be awesome to use and ready for production.

I agree with the understated importance of this issue: no matter how great a software's advanced functions are, if its basics aren't rock solid, it's not usable in an actual production process.
The Undo function is one of the most used, and the lag it currently generates kills any efficiency Blender can otherwise provide.
The promise that an "undo step is no more expensive than the associated operation" is an absolute must-have before I can advocate for using Blender in a professional context, which I hope I eventually will.

Is the issue with slow undo known/reported?
Are there examples of comparable files which perform well in 2.7x and badly in 2.8x?

The code for global undo ('memfile' undo internally) hasn't had changes that should impact performance since 2.7x; it's quite possible the performance issue lies elsewhere.

Juan Gea (juang3d) added a comment. Edited Aug 19 2019, 1:20 PM

Undo is slow in both; this is not a regression. Any complex file makes undo nearly torture to use. Just animating a camera, for example: you are only transforming the camera, yet undoing in a big scene takes a very long time instead of being instantaneous as it should be.

Check this thread, there are a lot of people talking here about their undo experience:

https://devtalk.blender.org/t/blender-2-8-undo-improvements-it-last-45-seconds-to-undo-an-object-movement-in-a-big-scene/2554

@Campbell Barton (campbellbarton) my understanding is that reports of 2.8 undo being slower than 2.7 are not about issues in the undo system itself, but rather related to features like subdivision surfaces and the dependency graph.

Still, improvements like these would solve performance problems we already had in 2.7, and would reduce dependency graph overhead, even though that is also worth optimizing on its own.

I started using Blender with 2.78 and use it daily. I'm pretty sure there is no significant slowdown in 2.8 compared to 2.78.

With 2.8 there is just much more feedback from industry users who work with big scenes, so this topic has received more attention.
Undo in Blender was, and still is, super slow.

@Brecht Van Lommel (brecht) I'm sorry if this is annoying, but can we have some insight into the plans for this topic? Will we see something on this in the near future?
Sadly, this is pretty much the most important feature for all the people I know who want to work, or already are working, with Blender (including me).

We consider this a high priority task and will try to get it solved sooner rather than later. It's marked as high priority in T63728: Data, Assets & I/O Module.

Oh, that's amazing. Can't wait for this to be addressed!!

That's pretty good news :D Thank you!

We will at least investigate the first part of the project; we already do something similar for linked datablocks, so it should not be too hard to handle unchanged local ones the same way...

Hi @Bastien Montagne (mont29),
That's great. If you have a build that you want us to test, could you please give us some instructions on how to download it, and I can compare it with the existing version.

Correct me if I'm wrong, but this approach of only updating modified datablocks won't improve performance in a scene containing a single high-density mesh object, right? Would this work at the level of individual mesh edits, or would simple changes like location, rotation and scale also require fully reloading the mesh datablock?

Did some preliminary tests in the undo-experiments branch; fairly dirty, but working well enough to get a first idea of part 1 of the project.

So far, playing with undo in a big Spring production scene (the lighting one, just moving some lights around and undoing), avoiding the read of unchanged IDs saves about 30% of the read-process time. The thing is, we are talking about roughly 100 ms (130 ms with current master code, 90 ms with the code in the branch), while the actual undo step takes about 4 seconds from a user's point of view. So the main optimization is clearly to be sought in the scene update/rebuild that happens after the undo 'memfile' has been read...

A single undo in my scene took at least a couple of minutes. My scene was FBX data imported from a car manufacturer (and under NDA, so I can't share it, I'm afraid).

Thanks for the tests. Step 1 by itself is indeed not the important optimization in a file like that. But it's a prerequisite for step 2: being able to preserve most of the depsgraph.

A question: if we have a mesh datablock with some number n of vertices and we move its pivot, does Blender's undo re-read and rewrite the positions of all the mesh's vertices, or just the change to the pivot point? Because in the first case the heaviness problem is obvious...