
Mesh-based armature interaction ("Mesh Gizmos", "Face Maps")
Confirmed · Normal · Public · Design

Authored By
Demeter Dzadik (Mets)
Oct 14 2021, 4:38 PM

Description

Preface

The current way of interacting with bones in a production environment is based on Custom Shape Objects, which can disappear inside the mesh, can sit in a completely different place from the area they control, and generally require a fair bit of design effort from the rigger and learning from the animator to use efficiently.

Using the Python Gizmo API, I created a prototype of an alternative system, where we can click on geometry on the deformed mesh itself to select bones and control said mesh.
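
Below is a minimal sketch (not the add-on's actual code) of the mechanism the prototype builds on: a custom Gizmo that draws an arbitrary triangle list, registered in a GizmoGroup that only appears in Pose Mode. The class names, the hard-coded triangle and the bone name "head" are illustrative assumptions.

```python
import bpy

# Placeholder geometry: in the real add-on this would come from the deformed
# mesh (e.g. the faces of a vertex group), not from a hard-coded triangle.
SHAPE_VERTS = (
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
)


class BoneShapeGizmo(bpy.types.Gizmo):
    """Draws an arbitrary triangle list and runs an operator when clicked."""
    bl_idname = "VIEW3D_GT_bone_shape_sketch"
    __slots__ = ("custom_shape",)

    def setup(self):
        # Build the drawable batch once and cache it on the gizmo instance.
        if not hasattr(self, "custom_shape"):
            self.custom_shape = self.new_custom_shape('TRIS', SHAPE_VERTS)

    def draw(self, context):
        self.draw_custom_shape(self.custom_shape)

    def draw_select(self, context, select_id):
        # The same geometry is used for mouse-over / selection testing.
        self.draw_custom_shape(self.custom_shape, select_id=select_id)


class BoneGizmoGroup(bpy.types.GizmoGroup):
    bl_idname = "VIEW3D_GGT_bone_gizmos_sketch"
    bl_label = "Bone Gizmos (sketch)"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'WINDOW'
    bl_options = {'3D', 'PERSISTENT'}

    @classmethod
    def poll(cls, context):
        # Only show the gizmo while posing an armature.
        ob = context.object
        return ob is not None and ob.type == 'ARMATURE' and context.mode == 'POSE'

    def setup(self, context):
        gz = self.gizmos.new(BoneShapeGizmo.bl_idname)
        # Clicking the shape starts a translation, standing in for
        # "select the bone and transform it".
        gz.target_set_operator("transform.translate")
        gz.color, gz.alpha = (0.2, 0.6, 1.0), 0.2
        gz.color_highlight, gz.alpha_highlight = (0.2, 0.6, 1.0), 0.5
        self.bone_gizmo = gz

    def draw_prepare(self, context):
        # Follow the bone: place the gizmo at the pose bone's world matrix.
        ob = context.object
        pbone = ob.pose.bones.get("head")
        if pbone is not None:
            self.bone_gizmo.matrix_basis = ob.matrix_world @ pbone.matrix


bpy.utils.register_class(BoneShapeGizmo)
bpy.utils.register_class(BoneGizmoGroup)
```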

The goals of this task are:

  • Gather feedback from animators and riggers.
  • Finalize a design, to the extent that is possible within the limitations of a Python prototype.
  • Find developers who could implement this in core Blender.

The Prototype: "Bone Gizmos" add-on


Video showing how to set up a gizmo for a single bone


Video showcasing the add-on with a fully set up character

You can download the addon as part of the Blender Studio Tools. Just extract the bone-gizmos/bone_gizmos folder into your addons folder.
There is also a ReadMe here.

Features

A list of proposed features that partially work in the prototype; there is plenty of room for UX discussion.

  • A gizmo's shape can be:
    • An entire object (in the bone's local space, with the already existing transform offset options applied)
    • A vertex group/face map/any attribute of an object (in world space).
  • Gizmo Colors:
    • There are three states to worry about: Unselected, selected, hovered.
    • Currently, the user can specify two colors: Unselected and Selected. The hover state is indicated by opacity ONLY. Having only one color, or three colors, is something to discuss.
    • The colors can be taken from the bone group's colors, if the bone has a group assigned, or set uniquely on a per-bone basis.
    • Colors should actually NOT be this customizable. Instead, users should be forced into a set of pre-defined colors that can only be customized at the Theme level.
  • Gizmo Display:
    • A gizmo's visibility is based on the bone's visibility; from the user's point of view, a bone and its gizmo should be a single entity.
    • If the selected object/vertex group cannot be used to construct any faces, draw the edges with line drawing instead.
  • Interaction:
    • Since it's just a prototype, it's currently left-click only and doesn't account for the mouse drag threshold, which is very bad for auto-keying. In the final implementation, it should behave the same way as bones do now, except:
    • Default transform mode: Translate/Rotate/Scale, including trackball rotation, rotation along the view axis, and locking any transformation to 1 or 2 axes. These defaults are defined by the rigger on a per-bone basis and can be bypassed by the animator with a button press, or, as the nuclear option, disabled altogether via a user preference.
    • Snapping: It is possible, only through Python, to specify an operator name and parameters that should be executed when a gizmo is touched. This can be used for automatic IK/FK snapping, but it would need an interface similar to a Keymap entry, where we can pick an operator and feed it parameters. (A minimal sketch of how this looks through the current Python API follows after this list.)
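
As a hedged illustration of the two previous points, this is roughly how an operator and its parameters can already be bound to a gizmo through the Python API. The operator name "pose.my_ikfk_snap" is a made-up placeholder for a rig-specific snapping operator, not something that ships with Blender.

```python
import bpy


def bind_default_transform(gz: bpy.types.Gizmo):
    # Clicking the gizmo starts a rotation constrained to the rigger-chosen axis.
    props = gz.target_set_operator("transform.rotate")
    props.constraint_axis = (True, False, False)


def bind_snap_operator(gz: bpy.types.Gizmo):
    # The same mechanism can run any operator, e.g. a hypothetical IK/FK snap.
    # Its parameters would be filled in on the returned properties,
    # much like a keymap entry.
    gz.target_set_operator("pose.my_ikfk_snap")
```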

Design Questions:
While making the prototype, I ran into some design questions where the answers aren't so simple:

  • Ways to minimize set-up time? Currently, each gizmo needs a vertex group. That's fine, but it may need workflow operators to quickly switch from Pose Mode to Edit Mode on the relevant mesh (a rough sketch of such an operator follows below).
  • How to avoid overlapping gizmos? I think there are cases where they might be necessary, and a system that allows the rigger to create mutually exclusive bone layers would be a potential solution.
  • If there is more than one way to deform some geometry (e.g. Shape Key, Armature, Lattice), how do we communicate which of several potential controls was used to achieve the current shape?

There might be no perfect answer to all of these, but they are things to keep in mind and mitigate.
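
As a rough sketch of the workflow operator idea from the first question above (with the assumptions that the relevant mesh is the first one with an Armature modifier pointing at the active rig, and that vertex groups are named after their bones):

```python
import bpy


class POSE_OT_edit_bone_vertex_group(bpy.types.Operator):
    """Jump to Edit Mode on the deformed mesh with the active bone's vertex group selected"""
    bl_idname = "pose.edit_bone_vertex_group_sketch"
    bl_label = "Edit Bone Vertex Group (sketch)"
    bl_options = {'REGISTER', 'UNDO'}

    @classmethod
    def poll(cls, context):
        return context.mode == 'POSE' and context.active_pose_bone is not None

    def execute(self, context):
        rig = context.object
        bone_name = context.active_pose_bone.name
        # Placeholder heuristic: pick the first mesh deformed by this armature.
        mesh_ob = next(
            (ob for ob in bpy.data.objects
             if ob.type == 'MESH' and any(
                 m.type == 'ARMATURE' and m.object == rig for m in ob.modifiers)),
            None)
        if mesh_ob is None or bone_name not in mesh_ob.vertex_groups:
            self.report({'WARNING'}, "No deformed mesh or vertex group found")
            return {'CANCELLED'}

        bpy.ops.object.mode_set(mode='OBJECT')
        context.view_layer.objects.active = mesh_ob
        mesh_ob.vertex_groups.active_index = mesh_ob.vertex_groups[bone_name].index
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='DESELECT')
        bpy.ops.object.vertex_group_select()
        return {'FINISHED'}


bpy.utils.register_class(POSE_OT_edit_bone_vertex_group)
```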

Prototype Limitations

These issues are listed just to clarify that these behaviours are known; they would most likely be easy to fix or avoid in the final implementation, but can't really be addressed in the prototype:

  • Mouse hovering does not take depth into account. This was fixed! [T94794]
  • Visual depth culling stops working when gizmo is being hovered. [T94792]
  • Transparent lines appear between the triangles. [T94791]
  • Currently the mouse movement threshold user preference is not being taken into account, which is terrible when trying to use the system with auto-keying enabled.
  • Performance is poor; grabbing the vertex positions from the evaluated mesh takes about 2 ms per gizmo (a sketch of that per-gizmo work follows after this list).
  • Interaction is always with left click and cannot be customized.
  • The gizmo cannot stay visible during interaction.
  • When the mesh is modified, the user must press a Refresh button to synchronize the gizmos.
  • Renaming bones will leave the gizmo orphaned, so don't do it! :D
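
For context, here is a hedged sketch of the kind of per-redraw work the Python prototype has to do for each gizmo, which is where that time goes; the object name "Body" and the vertex group name "head" are placeholders.

```python
import bpy


def vertex_group_coords(context, object_name="Body", group_name="head"):
    """Return world-space coordinates of the deformed vertices in a vertex group."""
    ob = bpy.data.objects[object_name]
    group_index = ob.vertex_groups[group_name].index

    # Evaluate the object so modifiers (Armature, etc.) are applied.
    depsgraph = context.evaluated_depsgraph_get()
    ob_eval = ob.evaluated_get(depsgraph)
    mesh = ob_eval.to_mesh()

    coords = [
        ob_eval.matrix_world @ v.co
        for v in mesh.vertices
        if any(g.group == group_index for g in v.groups)
    ]
    ob_eval.to_mesh_clear()
    return coords
```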

Feedback and discussion are welcome, including about how to structure this document.

Event Timeline

Demeter Dzadik (Mets) updated the task description.
Demeter Dzadik (Mets) changed the subtype of this task from "Report" to "Design".
Demeter Dzadik (Mets) changed the task status from Needs Triage to Confirmed. Oct 14 2021, 4:45 PM
Lamia added a subscriber: Lamia. Oct 18 2021, 2:24 PM

This looks like the initial FaceMap idea; what happened to that?

I believe the initial setup was meant to "be left to others".
Or rather, specifically the Face attribute/Python property was for that, and the selector part (like here) was meant to be just a proof of concept, which either succeeded or was abandoned: if the goal was just to show it was possible, as an example, it succeeded; if the goal was to have a basic practical/functional version, it was abandoned.

I wasn't around when face maps were added, but everything I've seen makes me think that they should just be a generic integer attribute rather than a special thing with yet another list in mesh properties and its own operators, etc.

To make that work, I think we would just have to support assigning values to attributes in edit mode, which is something we have planned anyway.
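
As a small illustration of what that could look like with the attribute API that already exists; the attribute name "gizmo_face_map" is just an example, and this only works in Object Mode, which is exactly the missing piece mentioned above.

```python
import bpy

mesh = bpy.context.object.data  # assumes the active object is a mesh

# Create (or reuse) a generic integer attribute on the face domain.
attr = mesh.attributes.get("gizmo_face_map")
if attr is None:
    attr = mesh.attributes.new(name="gizmo_face_map", type='INT', domain='FACE')

# Tag the currently selected faces with "face map" index 1.
for poly in mesh.polygons:
    if poly.select:
        attr.data[poly.index].value = 1
```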

Just looked at face maps again, and that earlier proposal seems to work out of the box by itself. This approach needs a user to set it up manually, if I understood the other one correctly. Both seem to have the same goal.

Just looked at face maps again, and that earlier proposal seems to work out of the box by itself. This approach needs a user to set it up manually, if I understood the other one correctly. Both seem to have the same goal.

The way I read your post is: "The first addon already works as-is, and this just does the same thing but with more steps".

So I was wondering if maybe the addon was updated since I last tried it, and/or I was just completely using it wrong and misremembering it.
I wasn't. I forgot the exact problems, but remembered correctly that it's not practical for usage... and crashes Blender. T51675 is that development page.

My problems started with there not being an automated way to transfer bone vertex groups to face maps, and before I would bother creating a script/addon to do that, I would need to verify that I would actually use it.
Then I found out it only works in Object Mode, it doesn't register an undo step, and the only method of interaction is click-drag, which defaults to Translation and only does Rotation/Scale if Location (and then Rotation) is locked on the bone.
Then, after getting tripped up by the undo and entering the rig to clear the transforms, Blender soon crashes.
It looks great to finally be able to pose bones by just selecting the mesh itself, without the slow Mask Modifier method, but it is too unstable and impractical to be used right now.

Those are my current thoughts, and they are the same thoughts I had back in 2.80.

Looking into it again, apparently there were design discussions about making this possible, then the dev decided to just make the addon as a proof of concept (as stated in the addon's warning), and that was the end of it; that was a little over 3 years ago. There was later another page, T54989, about this, but supposedly its aim was slightly different; that was also 3 years ago. I didn't hear about it until today, and the last info on it was that it would also be scrapped, last year.

I have high hopes that this time it will get done, all the way, one way or another.

I wasn't around when face maps were added, but everything I've seen makes me think that they should just be a generic integer attribute rather than a special thing with yet another list in mesh properties and its own operators, etc.

The downsides of vertex groups are:

  • When selecting 3 vertices of a quad, it will draw a triangle, since it doesn't know that there's no edge on the diagonal. Not a big problem, and it could probably be avoided by smarter code (see the sketch below).
  • One face can only be assigned to one face map, just like materials. This *kind of* makes sense for mesh-based selection, but I think it might be useful to assign one face to multiple selection widgets when trying to achieve a layered system (although this would work best if we could have groups of mutually exclusive layers, which we don't).

So I agree, the justification for the existence of Face Maps doesn't look too good.
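
For the first point, a hedged sketch of the "smarter code" idea: only keep faces whose vertices are all assigned to the vertex group, so no phantom diagonal triangles get drawn.

```python
import bpy


def faces_fully_in_group(ob: bpy.types.Object, group_name: str):
    """Return the polygons whose every vertex is assigned to the given vertex group."""
    group_index = ob.vertex_groups[group_name].index
    mesh = ob.data
    members = {
        v.index for v in mesh.vertices
        if any(g.group == group_index for g in v.groups)
    }
    return [p for p in mesh.polygons if all(i in members for i in p.vertices)]
```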

T51675 is that development page.
There was later another page, T54989, about this, but supposedly its aim was slightly different; that was also 3 years ago
I have high hopes that this time it will get done, all the way, one way or another.

I didn't know about those threads, so thanks for digging them up! Indeed, both seem fairly abandoned, so you can consider this a revival of those. A lot of the ideas and the conclusions that people came to in those threads seem to align with what I did in the prototype addon, so that's lucky!

Apparently Ton wanted things to work in Object mode. The LilyGizmos addon does a fantastic job of this already, but I would consider this a separate system:
Gizmos for sliding floats != Gizmos for interacting with bones.

Albert (wevon) added a subscriber: Albert (wevon). Edited Dec 22 2021, 9:46 PM

Just a couple of ideas.
If the preselection were gradual as the cursor approaches a Face Map, this could be done by progressively illuminating the contour of the Face Map until the cursor is over the face group and the faces themselves are lit; I think this would avoid the Christmas-tree effect.
Second, vertex maps could also be used to animate details; in the same way, they could gain opacity as the cursor approaches the vertex sets.

I have made a video with a dummy face to show the effect that could be produced when moving the cursor closer to certain vertices or edges. It blinks less than with faces, although in a way the two are complementary.
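
A small sketch of how such a proximity-based fade could be computed, assuming the gizmo's 3D center and the mouse position (as a mathutils.Vector in region coordinates) are already known:

```python
from bpy_extras.view3d_utils import location_3d_to_region_2d


def proximity_alpha(region, rv3d, gizmo_center, mouse_xy,
                    near=30.0, far=150.0, max_alpha=0.4):
    """Fade a gizmo in as the cursor gets closer to it in screen space."""
    center_2d = location_3d_to_region_2d(region, rv3d, gizmo_center)
    if center_2d is None:
        return 0.0  # the gizmo is behind the viewpoint
    dist = (center_2d - mouse_xy).length
    # 0 beyond `far` pixels from the cursor, full `max_alpha` within `near` pixels.
    t = max(0.0, min(1.0, (far - dist) / (far - near)))
    return t * max_alpha
```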

Love the idea. I think this will avoid clutter in the rig and keep only the nearest and necessary controllers visible at any time. +1


Philipp Oeser (lichtwerk) changed the status of subtask T94791: GizmoAPI: Lines appear between triangles using "DEPTH_3D" option. from Needs Triage to Needs Information from User. Jan 20 2022, 1:26 PM
Philipp Oeser (lichtwerk) changed the status of subtask T94792: GizmoAPI: No depth culling on mouse hover from Needs Triage to Confirmed. Jan 20 2022, 1:34 PM

I've made some tweaks to the add-on, added a new video about how to set up a single bone gizmo, and re-structured the task description. I think it's clear to me now that this task shouldn't be about improving the Python Gizmo API (although that is still welcome). Instead, mesh gizmos NEED to be a core Blender feature to have a chance to shine, especially due to performance. So the task has been re-focused to present the add-on purely as a prototype/testing ground of a potential future system.

I would like to bring this up in a module meeting soon and try to figure out what the next step from here could be, when it would happen, and by whom. I think my main question will be this:
Do we do more in-depth design discussion and testing, or is this already good enough to put a developer on it, either to come up with a technical roadmap or to just go ham and implement a better prototype in C/C++ and iterate from there?