The current way of interacting with bones in a production environment is based on "Custom Shapes".
They make for a slow and frustrating animation workflow, primarily because the shapes can disappear inside the mesh, or drift far away from it and become hard to find. Custom shapes have other issues too, but those are fixable; these two are inherent in their design. In this task I want to explore a different method of interacting with armatures, for which Blender has had some very basic support since the Python Gizmo API arrived in 2.80 or so.
The goals of this task are:
- Gather feedback from animators and riggers.
- Continue prototyping in the Python addon, to the extent that it allows.
- Try to determine what improvements we need in the Python Gizmo API. Finding someone to implement those improvements is yet another matter...
- (Perhaps the whole thing should be in Core Blender rather than an addon?)
Drawing interactable gizmos directly on the character is already possible with the Python Gizmo API:
You can download the addon as part of the Blender Studio Tools. Just extract the bone-gizmos/bone_gizmos folder into your Blender Foundation/Blender/3.0/addons folder.
- Authoring the correct vertex groups can be more time-consuming than assigning custom shapes. Assigning custom shapes can be 98% automated by a system like Rigify. I tried automating the gizmo set-up by relying on the deformation vertex groups, but the result isn't great.
- If gizmos end up overlapping, some of them could become impossible to select. Care must be taken to avoid this, but the already existing armature layer system gives enough flexibility here.
- It's impossible to tell at a glance which controls were touched to deform an area when there are multiple ways to deform that area. With custom shapes, you might remember how the bones relate to each other in the rest pose, so you can just look at the posed bones and see which ones are posed and which are untouched.
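The vertex-group automation mentioned above could start from Blender's skinning convention that deform vertex groups share their bone's name. This is an illustrative sketch, not the addon's actual logic; the "DEF-"/"CTRL-" prefixes follow Rigify-style naming and are an assumption here.

```python
# Hypothetical auto-setup: map each bone to a same-named vertex group, or to
# its "DEF-" counterpart (Rigify-style naming). Not the addon's real code.

def auto_assign_gizmo_masks(bone_names, vertex_group_names):
    """Map each bone to a matching vertex group, if one exists."""
    groups = set(vertex_group_names)
    mapping = {}
    for bone in bone_names:
        # Try the bone's own name first, then its "DEF-" counterpart.
        base = bone[5:] if bone.startswith("CTRL-") else bone
        for candidate in (bone, "DEF-" + base):
            if candidate in groups:
                mapping[bone] = candidate
                break
    return mapping
```

This works for rigs that follow the naming convention, but it says nothing about how good the resulting mask is, which is exactly why the automated result isn't great.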
I think these limitations are here to stay, but the result should still be a big workflow boost regardless.
Here's my proposal, based on what I've landed on while developing the Python addon. Most of these points are close to how the addon implements them, but not necessarily identical.
- Enable Custom Gizmo on the bone. Gizmos are not mutually exclusive with Custom Shapes; they will likely be used that way in practice, but it's good to leave that up to the user.
- Select an object. Until a vertex group or face map is selected, use the whole object. Optionally, apply the Custom Shape offset values to the gizmo. This way they are essentially just Custom Shapes with pre-selection highlighting.
- Should support both vertex groups and face maps for masking, unless there are down-sides I'm not aware of. Hopefully in future it can use arbitrary attributes.
- Assign a default interaction behaviour for when the animator click-and-drags the gizmo: None, Translate, Rotate, Scale.
The purpose of this setting is to make the most common way of interacting with an individual bone as fast as possible, NOT to restrict the bone to only that method of interaction. This is different from bone transformation locks. I would argue that this could actually replace transformation locks, because it gives the animator a suggestion without restricting them.
- None: Just select the gizmo when clicked. Dragging will do nothing.
- Translate & Scale: Optionally locked to one or two axes (until you press G, R, S or X, Y, Z).
- Rotate: Along View, Trackball, or along the bone's local X, Y, or Z
- Colors: The gizmo's color is determined by the bone group "normal" color. Selected/Active/Hovered states are marked by different opacity levels, which are controlled on a user-preference level. (I wouldn't mind per-bone color customization, but I hear there are plans to move Blender toward a more strict, palette-based design, so it's best to keep things simple here)
- Gizmos are only visible for armatures which you are in pose mode on.
- Clicking the gizmo always selects the bone, and starts the default transformation, if one is assigned in the rig.
- Shift+Clicking toggles the selection.
- The gizmo whose face is closest to the camera should always be picked. This means gizmos behind other gizmos can't be selected, but that is a worthy tradeoff for pre-selection highlighting.
- Gizmo opacities in different states can be customized in user preferences. A keyboard shortcut can be set up (or is set up by default) to make the un-selected opacity something other than 0, while a key is being held.
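The per-bone settings proposed above could be modeled roughly as follows. This is a plain-data sketch; in Blender these would be properties on the PoseBone, and all names here are hypothetical, not the addon's actual API.

```python
# Hypothetical per-bone gizmo settings, mirroring the proposal above.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BoneGizmoSettings:
    enabled: bool = False                 # "Enable Custom Gizmo" toggle
    shape_object: Optional[str] = None    # object to draw; whole object until masked
    vertex_group: Optional[str] = None    # mask by vertex group...
    face_map: Optional[str] = None        # ...or by face map
    use_shape_offset: bool = False        # re-use the Custom Shape offset values
    interaction: str = "NONE"             # default drag: NONE/TRANSLATE/ROTATE/SCALE
    lock_axes: Tuple[str, ...] = ()       # e.g. ("X",) until G/R/S or X/Y/Z is pressed
    rotate_mode: str = "VIEW"             # VIEW, TRACKBALL, or a local axis
```

The point of spelling it out is that the whole feature fits in a handful of per-bone properties, plus the user-preference opacities.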
Limitations of the Python implementation
In descending importance:
- Depth is an issue in 3 ways:
- Mouse hovering does not take depth into account. This alone makes the current implementation pretty annoying to use.
- Z-fighting. Gizmo points should be scaled outwards along the vertex normals a tiny amount to avoid Z-fighting while still supporting depth culling. Doing this in Python is too slow. Currently I work around it by creating helper meshes that have a Displace modifier to offset them a tiny bit.
- Visual depth culling stops working while a gizmo is being hovered. Feels like a bug?
- Currently the mouse movement threshold user preference is not taken into account. This means that with auto-keying enabled, just selecting a bone will insert a keyframe whenever a default transform operator is set.
- Performance could be better. Currently, each gizmo stores a list of vertex indices, so that vertex groups don't need to be re-read on every redraw. The vertices at those indices are then grabbed from the evaluated mesh, which takes about 2ms per gizmo.
- Interaction is always with left click and cannot be customized AFAIK.
- Gizmo is not visible during interaction.
- No way to access Gizmos or GizmoGroups from the current context. I managed to work around this for the most part with the msgbus system, but, for example, hooking up a shortcut to the gizmo opacity user preferences doesn't work: there's no way to fire the color refresh function on the gizmos from a property update() callback. You can't pass a reference to the gizmos in there, because there is no such thing as e.g. bpy.context.gizmo_groups.
- The gizmo's mesh has no dependency relationship: when the mesh is modified, the user must press a Refresh button to bring everything back in sync.
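The caching scheme from the performance point above can be sketched with plain lists: the mask indices are computed once when the gizmo is built, and each redraw only gathers coordinates at those indices from the evaluated mesh. The function names are mine, not the addon's.

```python
# Sketch of the vertex-index caching described above (illustrative only).

def cache_group_indices(vertex_groups, group_name):
    """Run once per gizmo (re)build; vertex_groups[i] = groups of vertex i."""
    return [i for i, groups in enumerate(vertex_groups) if group_name in groups]

def gather_coords(evaluated_coords, cached_indices):
    """Run on every redraw -- this is the ~2ms-per-gizmo hot path."""
    return [evaluated_coords[i] for i in cached_indices]
```

Even with the indices cached, the per-redraw gather is pure Python, which is presumably where most of the ~2ms goes.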
Whether these issues would best be solved by improving the Python Gizmo API or by implementing this whole system natively in C is not yet clear to me. Maybe half/half?
Possible future features
- Hook up a custom operator to each gizmo, similar to a KeymapItem. This would allow, for example, IK/FK switching and snapping to happen when certain gizmos are interacted with. I actually implemented this in the Python prototype. It's surprisingly trivial, as long as you don't mind that there's no UI for providing the operator's name and arguments; instead, you have to feed all that data into a custom property stored on the rig data.
- Let gizmo opacity increase based on distance from mouse cursor. This could result in less aimless wandering of the mouse when trying to find the right gizmo.
- Although more of a separate project, a 2D picker system could potentially further complement this system.
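Combining the opacity preferences from the proposal with the fade idea above: the gizmo's state picks a base alpha from (hypothetical) user-preference values, and the distance to the cursor scales it down. All numbers and names here are illustrative, not the addon's.

```python
# Sketch of state-based opacity plus a linear distance fade around the cursor.
import math

DEFAULT_OPACITIES = {"unselected": 0.0, "hovered": 0.6, "selected": 0.4, "active": 0.8}

def cursor_fade(gizmo_2d, mouse_2d, near=50.0, far=250.0):
    """1.0 within `near` pixels of the cursor, fading linearly to 0.0 at `far`."""
    dist = math.dist(gizmo_2d, mouse_2d)
    if dist <= near:
        return 1.0
    if dist >= far:
        return 0.0
    return (far - dist) / (far - near)

def gizmo_alpha(state, gizmo_2d, mouse_2d, reveal=False):
    """Final alpha for a gizmo; `reveal` models the held-key shortcut."""
    base = DEFAULT_OPACITIES[state]
    if reveal and state == "unselected":
        base = max(base, 0.3)  # hypothetical "revealed" opacity
    return base * cursor_fade(gizmo_2d, mouse_2d)
```

A smoothstep instead of the linear ramp would look nicer, but the idea is the same: gizmos near the cursor surface gradually, so the mouse wanders less.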
Feedback and discussion welcome!