
Fix T61904 & T62135: Redo transform crash (edit-mesh & shape-keys).
Abandoned · Public

Authored by Campbell Barton (campbellbarton) on Mar 6 2019, 2:02 AM.

Details

Summary

Crazy-space calculation needs the evaluated objects because of the armature bbone cache.

While we could use OPTYPE_USE_EVAL_DATA on transform operators, this slows down redo even though the evaluated data is only needed in rare cases.

Submitting as a patch since this violates current conventions.
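For illustration, below is a minimal generic C model of the trade-off described above: paying the evaluation cost up front via OPTYPE_USE_EVAL_DATA on every redo, versus requesting evaluated data only in the rare case that needs it. This is not Blender code; the structures and function names are illustrative assumptions only.

```
/* Generic model (not Blender code) of the trade-off described above:
 * either always evaluate before the operator runs (the OPTYPE_USE_EVAL_DATA
 * behaviour), or evaluate lazily only when the rare code path needs it.
 * All names are illustrative. */
#include <stdbool.h>
#include <stdio.h>

typedef struct Scene {
  bool is_evaluated;
} Scene;

static void scene_evaluate(Scene *scene)
{
  if (scene->is_evaluated) {
    return; /* Nothing to do. */
  }
  printf("expensive evaluation (modifier stacks, bbone cache, ...)\n");
  scene->is_evaluated = true;
}

/* Eager: the cost is paid on every redo, even when the data is unused. */
static void redo_with_optype_flag(Scene *scene)
{
  scene_evaluate(scene);
  /* ... run the operator ... */
}

/* Lazy: the cost is paid only when crazy-space (or similar) asks for it. */
static void redo_lazy(Scene *scene, bool needs_crazyspace)
{
  /* ... run the operator ... */
  if (needs_crazyspace) {
    scene_evaluate(scene);
  }
}

int main(void)
{
  Scene scene = {false};
  redo_lazy(&scene, false);      /* cheap: evaluation skipped */
  redo_lazy(&scene, true);       /* evaluation happens once, on demand */
  redo_with_optype_flag(&scene); /* already evaluated: nothing to do */
  return 0;
}
```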

Diff Detail

Repository
rB Blender
Branch
T (branched from master)
Build Status
Buildable 3056
Build 3056: arc lint + arc unit

Event Timeline

To me that looks like an acceptable compromise, at least much better than setting OPTYPE_USE_EVAL_DATA on the whole transform op indeed…

This revision is now accepted and ready to land. Mar 6 2019, 8:16 AM

I am not sure why we need this OPTYPE_USE_EVAL_DATA. This is fragile and confusing. We've got our own operators which miss this flag, and we have reports about Python operators which fail due to the missing USE_EVAL_DATA flag.

To me the more proper and sustainable solution is to free operators from worrying about this, by either always ensuring the dependency graph is in a usable state, or by doing some lazy evaluation in DEG_get_evaluated_id().
This doesn't affect the overall performance of redo in the interface (it mainly moves some evaluation from post-operator-run to pre-operator-run). Sure, there is a worst case where things will be evaluated twice, but that is already the case for all the operators which do use OPTYPE_USE_EVAL_DATA, and the ones which don't use it are probably cheap to evaluate anyway.

Scripts might become progressively slower, but this is a good push towards ensuring that the bpy API is comprehensive enough to perform modifications on data.
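To illustrate the lazy-evaluation idea mentioned above, here is a minimal sketch in plain C of an accessor that ensures evaluation itself, so callers (C operators or Python) would no longer need a USE_EVAL_DATA-style flag. DEG_get_evaluated_id() is the real function under discussion; the structs and the ensure step below are illustrative assumptions, not the actual depsgraph API.

```
/* Sketch only: the "free operators from worrying about this" idea,
 * where the accessor makes sure the data is evaluated before handing
 * it out. Not the real depsgraph implementation. */
#include <stdbool.h>

typedef struct ID { int dummy; } ID;

typedef struct Depsgraph {
  bool needs_update;
  ID evaluated_id; /* stand-in for the per-ID evaluated copies */
} Depsgraph;

static void depsgraph_evaluate_if_needed(Depsgraph *graph)
{
  if (!graph->needs_update) {
    return; /* already usable, no extra cost */
  }
  /* ... evaluate the tagged nodes ... */
  graph->needs_update = false;
}

/* Callers no longer need a USE_EVAL_DATA flag: asking for an
 * evaluated ID is enough to trigger evaluation when required. */
ID *deg_get_evaluated_id_lazy(Depsgraph *graph, ID *id_original)
{
  (void)id_original; /* lookup omitted in this sketch */
  depsgraph_evaluate_if_needed(graph);
  return &graph->evaluated_id;
}
```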

source/blender/editors/transform/transform_generics.c
2428–2429

Only used "for now". There might be more cases in the future, leading to a very fragile chain of exceptions here.

It also seems weird that crazy-space needs this, but not snapping (which does ray-casts against evaluated meshes).

Sergey Sharybin (sergey) requested changes to this revision. Mar 6 2019, 11:48 AM

Requesting changes to indicate that I don't want this to be committed until we are all absolutely sure about the correctness of such direct evaluation requests, or until we know what the better, longer-term sustainable solution is.

This revision now requires changes to proceed. Mar 6 2019, 11:48 AM

+1 for a way for operators to request evaluated data that's lazily initialized, with another +1 if Python gets the benefit from it too.

We can't always predict ahead of time when evaluated data will be needed, so a general way to handle it would be good.


> This doesn't affect the overall performance of redo in the interface (it mainly moves some evaluation from post-operator-run to pre-operator-run). Sure, there is a worst case where things will be evaluated twice, but that is already the case for all the operators which do use OPTYPE_USE_EVAL_DATA, and the ones which don't use it are probably cheap to evaluate anyway.

I'd assume this would make redo slower if it has to calculate modifier stacks twice per update, for example (this happens while dragging buttons), which is why I wanted to avoid 2x updates.

source/blender/editors/transform/transform_generics.c
2428–2429

Snapping doesn't need this because it's modal and the evaluated data is already generated for the viewport; redo isn't.

> +1 for a way for operators to request evaluated data that's lazily initialized, with +1 if Python can access it.

While this seems sexy, it needs some deeper thought, maybe a brainstorm/critical-look-at-things session here with @Brecht Van Lommel (brecht).

Not sure anything will change for Python. You can already access the evaluated object. The only thing which will change is that the evaluated ID will all of a sudden be properly evaluated, without the Python script author worrying about setting an i-need-evaluated-depsgraph flag on the operator.

> I'd assume this would make redo slower if it has to calculate modifier stacks twice per update, for example (this happens while dragging buttons), which is why I wanted to avoid 2x updates.

I think part of the issue here is that redo is doing a full re-evaluation of the entire scene, just because all the evaluated meshes are gone. This makes tweaking a slider slow no matter what optimizations you do in other areas. It will always be slow.

The way I see things here is:
  • We are doing something stupid and slow as the first step of operator redo, which forces an entire scene re-evaluation.
  • We are trying to save some CPU ticks (sure, that could be a huge amount) by avoiding a single object's modifier stack being evaluated twice. While we might succeed in that, it is barely measurable compared to the overall redo+eval time (at least in scenes with more than one object).
  • By trying to save those CPU ticks we introduce all those fragile sand castles.

So, is the solution to preserve an entire evaluated dependency graph? That would make overall redo so much faster. But is there a way to do so without running out of memory too quickly?
Maybe with some smartness we can inform the dependency graph that it's just an ID pointer which changed, and keep the evaluated mesh?
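Below is a rough plain-C model of the "keep the evaluated mesh, only an ID pointer changed" idea above. None of this is Blender's real tagging API; the flags, structs, and function names are made up purely to show the granularity being suggested.

```
/* Rough model (not Blender code): track *what* changed per object so a
 * redo that only remaps pointers does not throw away every evaluated
 * mesh. Flags and names are illustrative. */
#include <stdlib.h>

typedef enum ChangeFlag {
  CHANGE_NONE = 0,
  CHANGE_POINTERS = 1 << 0, /* e.g. ID remapping on redo, data unchanged */
  CHANGE_GEOMETRY = 1 << 1, /* actual edits: the cache must be rebuilt */
} ChangeFlag;

typedef struct Object {
  int tag;              /* combination of ChangeFlag bits */
  void *evaluated_mesh; /* cached result of the modifier stack */
} Object;

static void *evaluate_modifier_stack(const Object *ob)
{
  (void)ob;
  return malloc(1); /* stand-in for the expensive evaluation */
}

static void object_update(Object *ob)
{
  if (ob->tag & CHANGE_GEOMETRY) {
    free(ob->evaluated_mesh);
    ob->evaluated_mesh = evaluate_modifier_stack(ob);
  }
  /* CHANGE_POINTERS alone: keep the evaluated mesh and only fix up
   * references -- the "some smartness" suggested above. */
  ob->tag = CHANGE_NONE;
}
```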

This revision was not accepted when it landed; it landed in state Needs Review. Mar 19 2019, 5:50 AM
This revision was automatically updated to reflect the committed changes.
  • Previous update included unrelated changes.
Campbell Barton (campbellbarton) retitled this revision from Fix T61904: Redo transform crash (edit-mesh & shape-keys) to Fix T61904 & T62135: Redo transform crash (edit-mesh & shape-keys). Mar 19 2019, 5:54 AM
  • Merge branch 'master' into arcpatch-D4455

One thing I'm not feeling great about here is that this change attempts to solve a similar thing to Python's T63483. I think this should be done in the same way for Python and C operators.

I am currently working on a design document to cover such topics. There are some uncertainties there which I wanted to go over with Brecht before publishing for discussion.

source/blender/editors/transform/transform_generics.c
2433

I am not sure why you need T_DEPSGRAPH_EVAL.
Dependency graph evaluation has an early exit if nothing is tagged for an update.

2436

Not really related to this specific change, but I think BKE_scene_view_layer_graph_evaluated_ensure() itself needs to be changed. It is using BKE_scene_graph_update_tagged(), which does a lot of extra things, like running callbacks. Not sure we really want those?
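To make the concern concrete, here is a purely illustrative split in plain C: an "ensure" path that evaluates tagged data but skips the extra work (such as running callbacks) that the full update-tagged path is described as doing. These are not the real BKE_* implementations, and whether skipping callbacks is even desirable is debated further down in this thread.

```
/* Illustrative only, not the real BKE_* functions. */
#include <stdbool.h>

typedef struct Depsgraph {
  bool has_tagged_ids;
} Depsgraph;

static void evaluate_tagged(Depsgraph *graph)
{
  /* ... evaluate only what is tagged ... */
  graph->has_tagged_ids = false;
}

static void run_update_callbacks(Depsgraph *graph)
{
  (void)graph;
  /* ... notify add-ons / handlers about depsgraph updates ... */
}

/* Full update: roughly what BKE_scene_graph_update_tagged() is
 * described as doing in the comment above. */
void scene_graph_update_tagged_model(Depsgraph *graph)
{
  evaluate_tagged(graph);
  run_update_callbacks(graph);
}

/* Leaner "ensure": do nothing at all when there is nothing tagged. */
void scene_graph_evaluated_ensure_model(Depsgraph *graph)
{
  if (!graph->has_tagged_ids) {
    return; /* early exit: no evaluation, no callbacks */
  }
  evaluate_tagged(graph);
}
```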

Now the Python aspect is redesigned. Surprisingly, there weren't that many C-side changes.

A related one now is CTX_data_evaluated_depsgraph(), but it doesn't seem that it'd be very handy here.

I would still like to make the call more generic, avoid any callback invocation if there are no tags, and avoid T_DEPSGRAPH_EVAL.

@Campbell Barton (campbellbarton), is this something you want to look into, or do you want me to have a peek?

@Sergey Sharybin (sergey), can you finish this then?

I don't think callback invocation should be avoided. The point of them is to be able to track changes in the scene or do something in response to dependency graph updates. We do not want add-ons to miss changes because the dependency graph update happened inside the transform operator.

Committed alternative fix at rBe256bc225037.