
Mesh-based armature interaction ("Mesh Gizmos", "Face Maps")
Confirmed · Normal · Public · Design

Authored by Demeter Dzadik (Mets) on Oct 14 2021, 4:38 PM

Description

The current way of interacting with bones in a production environment is based on "Custom Shapes".
They make for a slow and frustrating animation workflow, primarily because the shapes can disappear inside the mesh, or end up far away from the mesh and be hard to find. Custom shapes have other issues as well, but those are fixable; these two, however, are inherent to their design. In this task I want to explore a different method of interacting with armatures, which Blender has had very basic support for since the Python Gizmo API arrived around 2.80.

The goals of this task are:

  • Gather feedback from animators and riggers.
  • Continue prototyping in the Python addon, to the extent that it allows.
  • Try to determine what improvements we need in the Python Gizmo API. Finding someone to implement those improvements is yet another matter...
  • (Perhaps the whole thing should be in Core Blender rather than an addon?)

Mesh-based Gizmos

Drawing interactable gizmos directly on the character is already possible with the Python Gizmo API.

You can download the addon as part of the Blender Studio Tools. Just extract the bone-gizmos/bone_gizmos folder into your Blender Foundation/Blender/3.0/addons folder.
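
For reference, the skeleton of such a gizmo with the current Python API looks roughly like the following. This is a simplified sketch based on Blender's custom-gizmo template, not the actual bone_gizmos code; the class names, colors and the placeholder triangle are made up, and in practice the coordinates would come from the faces of a vertex group or face map on the evaluated mesh.

```python
import bpy

# Stand-in geometry; the real addon builds this from the bone's vertex
# group or face map on the evaluated mesh.
PLACEHOLDER_TRIS = [(0, 0, 0), (0, 1, 0), (1, 0, 0)]


class VIEW3D_GT_mesh_bone_gizmo(bpy.types.Gizmo):
    bl_idname = "VIEW3D_GT_mesh_bone_gizmo"

    def setup(self):
        if not hasattr(self, "custom_shape"):
            self.custom_shape = self.new_custom_shape('TRIS', PLACEHOLDER_TRIS)

    def draw(self, context):
        self.draw_custom_shape(self.custom_shape)

    def draw_select(self, context, select_id):
        # Drawing into the selection buffer is what makes the mesh clickable.
        self.draw_custom_shape(self.custom_shape, select_id=select_id)


class VIEW3D_GGT_mesh_bone_gizmos(bpy.types.GizmoGroup):
    bl_idname = "VIEW3D_GGT_mesh_bone_gizmos"
    bl_label = "Mesh Bone Gizmos (sketch)"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'WINDOW'
    bl_options = {'3D', 'PERSISTENT'}

    @classmethod
    def poll(cls, context):
        ob = context.object
        return ob is not None and ob.mode == 'POSE'

    def setup(self, context):
        # One gizmo per pose bone; clicking and dragging starts a translate.
        for pbone in context.object.pose.bones:
            gz = self.gizmos.new(VIEW3D_GT_mesh_bone_gizmo.bl_idname)
            gz.target_set_operator("transform.translate")
            gz.matrix_basis = context.object.matrix_world.normalized()
            gz.color = (0.2, 0.6, 1.0)
            gz.alpha = 0.3
            gz.color_highlight = (0.4, 0.8, 1.0)
            gz.alpha_highlight = 0.6


def register():
    bpy.utils.register_class(VIEW3D_GT_mesh_bone_gizmo)
    bpy.utils.register_class(VIEW3D_GGT_mesh_bone_gizmos)
```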

Inherent limitations:

  • Authoring the correct vertex groups can be more time-consuming than assigning custom shapes, which can be 98% automated by a system like Rigify. I tried automating the gizmo set-up by relying on the deformation vertex groups (see the sketch after this list), but the result isn't great.
  • If gizmos end up overlapping, some of them could become impossible to select. Care must be taken to avoid this, but the existing armature layer system gives enough flexibility here.
  • When an area can be deformed by multiple controls, it's impossible to tell at a glance which of them were actually touched. With custom shapes, you might remember how the bones relate to each other in the rest pose, so you can look at the posed bones and see which ones have been moved and which are untouched.
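
For what it's worth, the automation attempt amounts to something like this rough sketch. The per-bone ID properties (gizmo_object, gizmo_vertex_group) are made-up placeholders for whatever settings the addon actually stores:

```python
import bpy

def auto_assign_gizmos(rig, mesh_obj):
    """Point each deforming bone at its matching deformation vertex group,
    if the mesh has one with the same name."""
    for pbone in rig.pose.bones:
        if not pbone.bone.use_deform:
            continue
        vgroup = mesh_obj.vertex_groups.get(pbone.name)
        if vgroup is None:
            continue
        # Placeholder ID properties standing in for the addon's real settings.
        pbone["gizmo_object"] = mesh_obj.name
        pbone["gizmo_vertex_group"] = vgroup.name
```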

I think these limitations are here to stay, but the system should still be a big workflow boost regardless.

UX Proposal

Here's my proposal, based on what I've landed on while developing the Python addon. Most of these points are close to how the addon implements them, but not necessarily identical.

Rigger UX

  • Enable Custom Gizmo on the bone. Gizmos are not mutually exclusive with Custom Shapes; they will likely be used that way, but it's good to leave that up to the user.
  • Select an object. Until a vertex group or face map is selected, use the whole object. Optionally, apply the Custom Shape offset values to the gizmo. This way they are essentially just Custom Shapes with pre-selection highlighting.
  • Should support both vertex groups and face maps for masking, unless there are downsides I'm not aware of. Hopefully in the future it can use arbitrary attributes.
  • Assign a default interaction behaviour for when the animator click-and-drags the gizmo: None, Translate, Rotate, or Scale (see the sketch after this list).

    The purpose of this setting is to make the most common way of interacting with an individual bone as fast as possible, NOT to restrict the bone to only that method of interaction. This is different from bone transformation locks. I would argue it can actually be used instead of transformation locks, because it gives the animator a suggestion without restricting them.
    • None: Just select the gizmo when clicked. Dragging will do nothing.
    • Translate & Scale: Optionally locked to one or two axes (until you press G/R/S or X/Y/Z).
    • Rotate: Along View, Trackball, or along the bone's local X, Y, or Z
  • Colors: The gizmo's color is determined by the bone group's "normal" color. Selected/Active/Hovered states are indicated by different opacity levels, which are controlled at the user-preference level. (I wouldn't mind per-bone color customization, but I hear there are plans to move Blender toward a stricter, palette-based design, so it's best to keep things simple here.)
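
To make the default-interaction idea concrete, here is roughly how those choices could map onto Gizmo.target_set_operator() in the Python API. This is a sketch; the mode names and the axis argument are invented for illustration, not the addon's actual property layout:

```python
import bpy

def assign_default_interaction(gz, mode, axis=""):
    """`gz` is a bpy.types.Gizmo. `axis` is e.g. "X" or "XZ" for
    Translate/Scale, or a single letter for Rotate."""
    if mode == 'NONE':
        return  # Clicking only selects the bone; dragging does nothing.
    if mode in {'TRANSLATE', 'SCALE'}:
        op = "transform.translate" if mode == 'TRANSLATE' else "transform.resize"
        props = gz.target_set_operator(op)
        if axis:
            props.constraint_axis = tuple(a in axis for a in "XYZ")
    elif mode == 'ROTATE_TRACKBALL':
        gz.target_set_operator("transform.trackball")
    elif mode == 'ROTATE':
        props = gz.target_set_operator("transform.rotate")
        if axis:
            props.orient_axis = axis     # 'X', 'Y' or 'Z'
            props.orient_type = 'LOCAL'  # along the bone's local axis
```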

Animator UX

  • Gizmos are only visible for armatures which you are in pose mode on.
  • Clicking the gizmo always selects the bone, and starts the default transformation, if one is assigned in the rig.
  • Shift+Clicking toggles the selection.
  • The gizmo whose face is closest to the camera should always be picked. This means gizmos behind other gizmos can't be selected, but that's a worthy trade-off for pre-selection highlighting.
  • Gizmo opacities in the different states can be customized in the user preferences (sketched below). A keyboard shortcut can be set up (or is set up by default) to make the unselected opacity something other than 0 while a key is held.
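
A minimal sketch of what those user preferences could look like, assuming they live in the addon's own preferences (the property names are invented):

```python
import bpy

class BoneGizmoPreferences(bpy.types.AddonPreferences):
    bl_idname = __package__  # assumes this lives inside the addon package

    unselected_alpha: bpy.props.FloatProperty(
        name="Unselected Opacity", default=0.0, min=0.0, max=1.0)
    hovered_alpha: bpy.props.FloatProperty(
        name="Hovered Opacity", default=0.4, min=0.0, max=1.0)
    selected_alpha: bpy.props.FloatProperty(
        name="Selected Opacity", default=0.7, min=0.0, max=1.0)

    def draw(self, context):
        col = self.layout.column()
        col.prop(self, "unselected_alpha")
        col.prop(self, "hovered_alpha")
        col.prop(self, "selected_alpha")
```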

Limitations of the Python implementation

In descending importance:

  • Depth is an issue in 3 ways:
    • Mouse hovering does not take depth into account. This alone makes the current implementation pretty annoying to use.
    • Z-fighting. Gizmo points should be scaled outwards along the vertex normals a tiny amount to avoid Z-fighting while still supporting depth culling. Doing this in Python is too slow. Currently I work around it by creating helper meshes that have a Displace modifier to offset them a tiny bit.
    • Visual depth culling stops working while a gizmo is being hovered. Feels like a bug?
  • Currently the mouse movement threshold user preference is not taken into account. This means that with auto-keying enabled, merely selecting a bone will insert a keyframe whenever a default transform operator is set.
  • Performance could be better. Currently, each gizmo stores a list of vertex indices so that vertex groups don't need to be re-read on every redraw. The vertices at those indices are then grabbed from the evaluated mesh (sketched after this list), which takes about 2 ms per gizmo.
  • Interaction is always with left click and cannot be customized, as far as I know.
  • Gizmo is not visible during interaction.
  • There is no way to access Gizmos or GizmoGroups from the current context. I managed to work around this for the most part with the msgbus system, but, for example, hooking a shortcut up to the gizmo opacity user preferences doesn't work: there's no way to fire the color refresh function on the gizmos from a property update() callback. You can't pass a reference to the gizmos in there, because there is no such thing as, e.g., bpy.context.gizmo_groups.
  • The gizmo's mesh has no dependency relationship: when the mesh is modified, the user must press a Refresh button to bring everything back in sync.
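
To illustrate the performance and Z-fighting points above, the per-redraw work in the prototype boils down to something like this (a simplified sketch, not the addon's exact code):

```python
import numpy as np

def gizmo_coords(eval_obj, vert_indices, push=0.001):
    """Read the cached vertex indices back from the evaluated mesh and push
    the points out along their normals a tiny amount to avoid Z-fighting.
    Doing this in Python for every gizmo is what costs roughly 2 ms each."""
    mesh = eval_obj.data
    count = len(mesh.vertices)
    coords = np.empty(count * 3, dtype=np.float32)
    normals = np.empty(count * 3, dtype=np.float32)
    mesh.vertices.foreach_get("co", coords)
    mesh.vertices.foreach_get("normal", normals)
    coords = coords.reshape(count, 3) + normals.reshape(count, 3) * push
    # vert_indices is expected to already be triangulated (3 indices per
    # triangle), matching what Gizmo.new_custom_shape('TRIS', ...) wants.
    return [tuple(coords[i]) for i in vert_indices]
```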

Whether these issues would best be solved by improving the Python Gizmo API or by implementing this whole system natively in C is not yet clear to me. Maybe half/half?

Possible future features

  • Hook up a custom operator to each gizmo, similar to a KeymapItem. This would allow, for example, IK/FK switching and snapping to happen when certain gizmos are interacted with. I actually implemented this in the Python prototype; it's surprisingly trivial, as long as you don't mind that there's no UI for providing the operator's name and arguments. Instead, you have to feed all that data into a custom property stored on the rig data (see the sketch after this list).

  • Let gizmo opacity increase based on distance from mouse cursor. This could result in less aimless wandering of the mouse when trying to find the right gizmo.
  • Although more of a separate project, a 2D picker system could potentially further complement this system.
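
The custom-operator hookup from the first bullet looks roughly like this in the prototype's spirit; the property layout and the operator name below are made up for illustration:

```python
def hook_custom_operator(gz, rig, bone_name):
    """Read an operator name and its arguments from a custom property on
    the rig data and attach it to the gizmo's click-drag."""
    settings = rig.data.get("gizmo_operators", {}).get(bone_name)
    if not settings:
        return
    # e.g. {"operator": "pose.my_ikfk_snap", "args": {"chain": "arm.L"}}
    props = gz.target_set_operator(settings["operator"])
    for key, value in settings.get("args", {}).items():
        setattr(props, key, value)
```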

Feedback and discussion welcome!

Event Timeline

Demeter Dzadik (Mets) updated the task description.
Demeter Dzadik (Mets) changed the subtype of this task from "Report" to "Design".
Demeter Dzadik (Mets) changed the task status from Needs Triage to Confirmed. — Oct 14 2021, 4:45 PM
Lamia added a subscriber: Lamia. — Oct 18 2021, 2:24 PM

This looks like the initial FaceMap idea. What happened to that?

I believe the initial setup was meant to "be left to others".
Or rather, specifically the face attribute/Python property was for that, and the selector thing (like here) was meant to be just a proof of concept, which either succeeded or was abandoned: if the goal was just to show it was possible, it succeeded; if the goal was to have a basic, practical, functional version, it was abandoned.

I wasn't around when face maps were added, but everything I've seen makes me think that they should just be a generic integer attribute rather than a special thing with yet another list in mesh properties and its own operators, etc.

To make that work, I think we would just have to support assigning values to attributes in edit mode, which is something we have planned anyway.
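
For reference, a generic integer face attribute can already be created and filled from Python today (object mode only, which is exactly the edit-mode gap mentioned above). The attribute name here is arbitrary:

```python
import bpy

mesh = bpy.context.object.data
attr = mesh.attributes.get("face_map") or mesh.attributes.new(
    name="face_map", type='INT', domain='FACE')

# Tag all currently selected faces with "map" number 3.
for poly in mesh.polygons:
    if poly.select:
        attr.data[poly.index].value = 3
```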

Just looked at face maps again, and that earlier proposal seems to work out of the box by itself. This approach needs the user to set it up manually, if I understood the other one correctly. Both seem to have the same goal.

> Just looked at face maps again, and that earlier proposal seems to work out of the box by itself. This approach needs the user to set it up manually, if I understood the other one correctly. Both seem to have the same goal.

The way I read your post is: "The first addon already works as-is, and this just does the same thing but with more steps".

So I was wondering if maybe the addon had been updated since I last tried it, and/or I was just using it completely wrong and misremembering it.
I wasn't. I forgot the exact problems, but remembered correctly that it's not practical for real use... and it crashes Blender. T51675 is that development page.

My problems started with there being no automated way to transfer bone vertex groups to face maps, and before I would bother creating a script/addon to do that, I needed to verify that I would actually use it.
Then I found out it only works in Object Mode, it doesn't register an undo step, and the only method of interaction is click-drag, which is set to Translation and only does Rotation/Scale if Location (and then Rotation) is locked on the bone.
Then, after getting tripped up by the undo and entering the rig to clear the transforms, Blender soon crashed.
It looks great to finally be able to pose bones by just selecting the mesh itself, without using the slow Mask Modifier method, but it's too unstable and impractical to be used right now.

Those are my current thoughts, and they're the same thoughts I had back in 2.80.

Looking into it again: apparently there were design discussions about making this possible, and then the dev decided to just make the addon as a proof of concept (as stated in the addon's warning), and that was the end of it; that was a little over 3 years ago. There was later another page, T54989, about this stuff, but supposedly its aim was slightly different; also 3 years ago. I hadn't heard about that until today, and the last info on it was that it would also be scrapped, last year.

I have high hopes that this time it will get done, all the way, one way or another.

> I wasn't around when face maps were added, but everything I've seen makes me think that they should just be a generic integer attribute rather than a special thing with yet another list in mesh properties and its own operators, etc.

The downsides of vertex groups are:

  • When selecting 3 vertices on a quad, it will draw a triangle, since it doesn't know that there's no edge along the diagonal. Not a big problem, and it could probably be avoided by smarter code (see the sketch after this list).
  • One face can only be assigned to 1 face map, just like materials. This *kind of* makes sense for mesh-based selection, but I think it might be useful to assign one face to multiple selection widgets, when trying to achieve a layered system (Although this would work best if we could have groups of mutually exclusive layers, which we don't).
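
The "smarter code" for the first point could simply restrict the drawn faces to polygons whose vertices are all in the group, something like this sketch:

```python
def faces_fully_in_group(mesh, group_index, threshold=0.5):
    """Return only the polygons whose vertices are all weighted to the
    vertex group, so no phantom triangle appears across a quad's diagonal
    when just 3 of its 4 vertices are assigned."""
    in_group = set()
    for vert in mesh.vertices:
        for elem in vert.groups:
            if elem.group == group_index and elem.weight >= threshold:
                in_group.add(vert.index)
                break
    return [poly for poly in mesh.polygons
            if all(vi in in_group for vi in poly.vertices)]
```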

So I agree, the justification for the existence of Face Maps doesn't look too good.

> T51675 is that development page.
> There was later another page, T54989, about this stuff, but supposedly its aim was slightly different; also 3 years ago.
> I have high hopes that this time it will get done, all the way, one way or another.

I didn't know about those threads, so thanks for digging them up! Indeed, both seem fairly abandoned, so you can consider this a revival of those. A lot of the ideas and the conclusions that people came to in those threads seem to align with what I did in the prototype addon, so that's lucky!

Apparently Ton wanted things to work in Object mode. The LilyGizmos addon does a fantastic job of this already, but I would consider this a separate system:
Gizmos for sliding floats != Gizmos for interacting with bones.