How to attach hair to a surface #95776

Closed
opened 2022-02-14 14:32:35 +01:00 by Jacques Lucke · 26 comments
Member

@brecht and I (@JacquesLucke) talked about this. We agreed to move forward with the following approach.

  • A Curves object has an optional pointer to a scalp mesh object. The original mesh of that object is used as scalp, instead of the evaluated mesh.
  • If a curves object references a base mesh, two additional attributes are added to the curves domain:
    • MLoopTri index: Index of the triangle that the root hair point is attached to.
    • Barycentric coordinates: Position within the referenced triangle.
  • The exact naming convention for these attributes still has to be decided.
  • Vectors in the position attribute are in the local space of the object (instead of some relative coordinates based on the scalp).
  • Updating the scalp is an explicit operation. When the scalp object is edited, the hair stays at the same position.
    • An operator can be used to replace a scalp object with a new one.
  • A deform node in geometry nodes is used to deform the hair based on a deformation of the base mesh.
    • It has these inputs:
      • Curves: Undeformed hair curves.
      • Deformed Base Mesh: The animated or simulated base mesh. The topology must match the topology of the scalp mesh.
      • Rest Position: A vector field input that determines the original position of each vertex of the deformed base mesh. Usually, this just references a named attribute like rest_position/original_position. This attribute has to be set on the scalp.
    • The output is just the deformed curves.
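As a rough illustration of the attachment data described above, the root position can be recovered from the referenced triangle and the stored barycentric coordinates. This is a minimal NumPy sketch, not Blender code; `root_position` is a hypothetical helper:

```python
# Hypothetical sketch of evaluating the stored attachment, not Blender code.
import numpy as np

def root_position(tri_verts, bary):
    """Attachment point from the three vertex positions of the referenced
    looptri and the stored barycentric coordinates (which sum to 1)."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri_verts)
    u, v, w = bary
    return u * a + v * b + w * c

# For the triangle (0,0,0)-(1,0,0)-(0,1,0) and weights (0.25, 0.25, 0.5),
# the attachment point is (0.25, 0.5, 0.0).
```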
Author
Member

Added subscriber: @JacquesLucke

Added subscriber: @Diogo_Valadares

Author
Member

Added subscriber: @brecht

Member

Added subscriber: @lichtwerk

Member

Changed status from 'Needs Triage' to: 'Confirmed'

Not sure if some of this is planned for another task, but I will write down a few additional things from our discussion before I forget.

Rest position:

  • In the modifier stack, this is currently automatically computed if another modifier needs it. It's the positions from the last modifier (or input mesh) that is not a deforming modifier or a subdivision surface modifier. I think it would make sense to keep an automatic way to compute this, using a depsgraph dependency from the hair to the scalp object. However, it also makes sense to give manual control, so a user can decide at which point in the stack or nodes the rest_position attribute is set.
  • Distribution should generally happen on a rest position. If you're creating a hair distribution on a character that has not been animated yet, and then manually edit the hair, there is nothing to be done. However there are also cases where this distribution stays procedural, or is created on a mesh that is already animated. In this case it's important to do the distribution on the rest position for consistent results from frame to frame. A typical procedural hair node setup could have a node to get the rest position mesh, use that mesh for hair distribution, and then feed the generated hair + deformed mesh + rest position into a deform node. If there is no animation/deformation then nodes can be optimized to detect that and do no work.
  • Regarding terminology, these types of coordinates in Blender have a few names. It used to be Orco a long time ago, and that's still an internal name, but in shader nodes it's now Generated and there is also some usage of the term Undeformed. I think for the purpose of geometry nodes, "Rest Position" is a good name, as it matches terminology already used for armatures and in animation in general. Internal names could be changed to match that at some point.
  • The internal CD_ORCO attribute is normalized to 0..1, which is not really helpful, not even as default texture coordinates. It would be good to do some refactoring so these are stored unnormalized, with any additional texture space scaling applied for rendering only.

Deform:

  • The basic algorithm for deforming hair is:
    • Compute root position from deformed triangle vertex positions + barycentric coordinates
    • Compute rotation matrix from mapping rest to deformed triangle vertex positions, and rotate hair based on that
    • Scale/shear should be factored out of the matrix
  • It would be interesting to verify if this matches the Hydra implementation, and to check how this can be GPU accelerated, also with OpenSubdiv.
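The basic algorithm above can be sketched in plain NumPy. This is an illustrative sketch only, not the actual Blender or Hydra implementation; it derives the rotation from per-triangle orthonormal frames, which factors out scale/shear by construction:

```python
# Illustrative sketch of the hair deform algorithm, not Blender code.
import numpy as np

def triangle_frame(a, b, c):
    """Orthonormal frame for a triangle: tangent along one edge, normal from
    the cross product. Orthonormalization discards scale/shear."""
    t = b - a
    t = t / np.linalg.norm(t)
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    return np.column_stack((t, np.cross(n, t), n))

def deform_point(p, rest_tri, deformed_tri, bary):
    rest = [np.asarray(v, dtype=float) for v in rest_tri]
    defo = [np.asarray(v, dtype=float) for v in deformed_tri]
    # Root position from triangle vertex positions + barycentric coordinates.
    root_rest = sum(w * v for w, v in zip(bary, rest))
    root_deformed = sum(w * v for w, v in zip(bary, defo))
    # Pure rotation mapping the rest triangle's frame to the deformed one.
    rotation = triangle_frame(*defo) @ triangle_frame(*rest).T
    return root_deformed + rotation @ (np.asarray(p, dtype=float) - root_rest)
```

For example, rotating the scalp triangle 90 degrees about the x axis rotates a hair point above the root by the same amount.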

Subdivision Surfaces:

  • For subdivision surfaces, making attachment work with different subdivision levels (or even adaptive subdivision) will require additional work. Support for this could be a later addition, with the initial implementation assuming the subdivision level does not change (which is more practical if you use a separate scalp mesh from the actual character mesh).
  • To match the OpenSubdiv implementation on the CPU and GPU these would likely be ptex coordinates. You'd have a quad index + barycentric coordinate in the quad, quite similar to loop triangles.
  • It's possible to compute such an attachment from a loop triangle attachment at subdivision level 0, and consider the ptex coordinates more of a runtime thing, that is recomputed every time or cached for interactive playback.
  • Current particles rely on CD_ORIGINDEX and CD_ORIGSPACE attributes generated by the subdivision surface modifier to find faces in the subdivided mesh that match the original mesh. The same system could be used, though generating such attributes could be avoided if the particle deform node is made more intimately aware of how the subdivision surface modifier generates faces, or has access to the OpenSubdiv data structures that are now cached for GPU subdivision.

Added subscriber: @GeorgiaPacific

Added subscriber: @GeorgiaPacific
Member

Added subscriber: @HooglyBoogly

Added subscriber: @HooglyBoogly
Member

One question-- Would the pointer to the mesh object have any effect during evaluation of geometry nodes, or would it just be used for interactive editing and to set up information at the start of evaluation? It seems like the consensus is the latter.

Deformation
I think it would be best to allow building the deform node with smaller nodes. Otherwise I think we're limiting the flexibility of future node-based tooling for deformation. For example, we should keep the "Set Position" node as the way to change position. I expect built-in / asset bundle node groups to be very important in this whole project, so curves deformation can be handled that way too.

Naming

  • "Rest Position" sounds so much better than the alternatives. That would really help understand the feature I think.
  • I'd also like to propose using "Surface" instead of "Scalp" for all of these cases:
    • Scalp sounds a bit gross IMO
    • Hair isn't just on the top of a head
    • It's a more general name that would also apply in non-hair situations.

Original Coordinates

> The internal CD_ORCO attribute is normalized to 0..1 which is not really helpful, not even as default texture coordinates.

This also sounds like a great improvement, I've wondered about that before. I created a task for that here: #95940
If there's more information that might be helpful for a refactor, it would be nice to add it there. I think we can refactor to a more generic named "rest position" attribute with steps like this.

Attachment Data
I don't think my thoughts are that useful here, but one thing I'm seeing is that there seem to be many mapping methods, each with pros and cons. So it's probably important to allow flexibility here.


This issue was referenced by 6e11cfc56af4e1594972d134e4e0c5d256d1fcce
Author
Member

Found an issue in our assumptions about MLoopTri. Brecht and I assumed that the MLoopTri triangulation stays the same when vertices are only moved around and no topology changes are done. This unfortunately turned out to be wrong.

  • mesh_recalc_looptri__single_threaded actually takes the normals as inputs.
  • The file below shows an example where only vertex positions of a mesh are changed, but the generated loop-triangles change.
    [looptri_inconsistent_test.blend](https://archive.blender.org/developer/F13065239/looptri_inconsistent_test.blend)

We have to investigate new alternatives for how to store the attachment. Some initial ideas:

  • Assume a fixed triangulation of every polygon (maybe store poly index + triangle index + barycentric coords).
  • Store a poly index + 3d position in the polygon. Code later might be able to reconstruct where the position is exactly even after deformation (only when the rest positions are known).
  • Store a weight for every polygon corner (potentially don't support ngons).
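The first idea can be illustrated with a deterministic fan triangulation, which depends only on topology and therefore stays stable under pure deformation. This is a hypothetical sketch, not Blender code:

```python
# Hypothetical sketch of a fixed per-polygon fan triangulation, not Blender code.
def fan_triangle(poly_verts, tri_index):
    """Vertex indices of triangle `tri_index` in the fan triangulation of an
    n-gon. Depends only on topology, so the attachment (poly index +
    triangle index + barycentric coords) stays valid under deformation."""
    return (poly_verts[0],
            poly_verts[tri_index + 1],
            poly_verts[tri_index + 2])

# A quad [v0, v1, v2, v3] always splits into (v0, v1, v2) and (v0, v2, v3),
# regardless of vertex positions.
```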

For animated meshes, a changing triangulation is problematic even without hair. In the attached .blend, we can see how moving a vertex causes a flickering triangulation. That's going to look bad without hair, and with hair there would be a sudden change in normal.

Triangulation could be based on undeformed coordinates; I vaguely remember we added that in Blender Internal. Making meshes always do that can have a real performance cost though, for example in viewport playback of an armature-deformed character with subdivision surfaces. In principle the triangulation of animated meshes can be cached from frame to frame, but the practical implementation of that doesn't seem simple. The ideal solution at that general mesh evaluation level is not obvious to me, but may be worth considering. If not feasible, hair attachment could potentially use its own triangulation based on undeformed coordinates, when there are any.

Member

Maybe it's prohibitively complicated or expensive, but I think a solution that uses a poly index instead of a triangle index would have some real benefits.

Mainly we have no plan or design to expose a mesh's triangulation to geometry nodes. That severely limits our flexibility when dealing with this attachment information in a generic context.
I think that's an area where the current design is lacking. Maybe there's another way to solve that, but one simpler approach is just to use polygon indices instead.

We could optimize the polygon attachment method for quads/triangles, and as long as N-gons don't fail terribly, I think it could be an improvement.

Author
Member

> For animated meshes, a changing triangulation is problematic even without hair. In the attached .blend, we can see how moving a vertex causes a flickering triangulation. That's going to look bad without hair, and with hair there would be a sudden change in normal.

I built another simple test case that shows the issue with geometry nodes. Just change the Scale input in the modifier and see the flickering triangulation.
[deformation_changes_triangulation.blend](https://archive.blender.org/developer/F13067095/deformation_changes_triangulation.blend)

A possible generic solution would be to store a char *triangulation_position_attribute on Mesh. It defaults to position, which is the current behavior, but it could also be set to e.g. rest_position, which can be stored before deformation. BKE_mesh_recalc_looptri would then use this attribute instead of always using MVert.co.

A potential problem with this solution is that merging two meshes can break this information if both meshes use a different triangulation_position_attribute. A built-in attribute or a good attribute naming convention might help here.

Solving this is not really in scope for the hair project right now, although I wouldn't mind looking into it later.


Regarding surface attachment, it seems like a good idea to base the attachment information on a uv map. This has more advantages than I first anticipated.

Specifically, I propose the following approach:

  • Store a surface_uv_map attribute name on the Curves data block, next to the surface object pointer.
  • Per curve store:
    • surface_uv: float2 attribute containing the uv coordinate of the position where the curve is attached to the surface.
    • surface_rest_position: float3 attribute that is usually equal to the position of the first control point, but doesn't have to be.
    • surface_rest_orientation: Quaternion attribute (doesn't exist yet) that combines the normal and tangent at the rest position on the surface.
  • On the surface, only the uv map is needed, the rest position attribute is not necessary with this approach (though the rest position might still be necessary for other purposes outside of curve deformation).
  • The deformation node works as follows:
   {F13067201}
* Inputs:
    * `Curves`: Curves based on the undeformed surface.
    * `Surface`: Deformed mesh. This is usually the evaluated geometry of the `Curves.surface` object.
    * `UVs`: UV map on the surface that was used for storing the attachment information on curves. This is usually a named attribute with the `Curves.surface_uv_map` name.
* Outputs:
    * `Curves`: Deformed curves.
* Implicit inputs: These inputs are not exposed currently, but maybe should be to support a fully node based workflow using anonymous attributes.
    * The three `surface_*` attributes on the input curves mentioned above.
    * For now the name of these attributes can be hardcoded in the node.
* Behavior for every `curve`:
    * Find the `surface_position` and `surface_orientation` on the deformed surface based on `curve.surface_uv`.
        * Potentially taking into account smooth shading and tangent space based on uvs to compute the orientation.
    * Compute the `transformation` from `curve.surface_rest_position`/`curve.surface_rest_orientation` to `surface_position`/`surface_orientation`.
    * Apply `transformation` on every control point of the curve.
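The per-curve behavior described above amounts to one rigid transform per curve. A minimal NumPy sketch, with 3x3 rotation matrices standing in for the proposed quaternion attribute; all names are illustrative, not actual Blender API:

```python
# Illustrative sketch only; orientations here are 3x3 rotation matrices
# standing in for the proposed quaternion attribute. Not Blender code.
import numpy as np

def deform_curve(points, surface_rest_position, surface_rest_orientation,
                 surface_position, surface_orientation):
    """Apply the rigid transform that maps the stored rest frame to the
    frame sampled from the deformed surface at the curve's surface_uv."""
    rest_pos = np.asarray(surface_rest_position, dtype=float)
    new_pos = np.asarray(surface_position, dtype=float)
    # Rotation from the rest orientation to the deformed orientation.
    rotation = surface_orientation @ surface_rest_orientation.T
    return [new_pos + rotation @ (np.asarray(p, dtype=float) - rest_pos)
            for p in points]
```

With identical orientations, the curve is simply translated along with its attachment point on the surface.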

Compared to the old approach, this has some disadvantages:

  • The surface has to have a uv map. Individual polygons shouldn't overlap in uv space.
    • The quality of the uv map doesn't really matter.
  • Deforming curves based on a deformed surface requires an inverse uv lookup (finding the polygon/triangle based on a uv coordinate).
    • This is potentially relatively expensive, but it's not clear to me if it's actually more expensive than other approaches of dealing with subdivided surfaces.
    • Also, it's a fairly well-defined problem that we should be able to optimize using a good data structure.
  • More memory is required per curve:
    • The previous approach used sizeof(int) + sizeof(float2) = 4 + 8 = 12 bytes of storage for the surface attachment per curve.
    • This new approach needs sizeof(float2) + sizeof(float3) + sizeof(Quaternion) = 8 + 12 + 16 = 36 bytes per curve.
    • The Quaternion could potentially be stored in 3 floats, because it's normalized, but that memory optimization might not be worth the additional complexity and run-time overhead.
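The inverse uv lookup mentioned in the disadvantages above can be sketched as a point-in-triangle search in uv space. This is an illustrative brute-force version; a real implementation would use a 2D acceleration structure:

```python
# Illustrative brute-force inverse uv lookup, not optimized and not Blender code.
def uv_barycentric(uv, a, b, c):
    """Barycentric coordinates of `uv` in the uv-triangle (a, b, c). All
    weights in [0, 1] means the triangle contains the point."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (uv[0] - cx) + (cx - bx) * (uv[1] - cy)) / den
    v = ((cy - ay) * (uv[0] - cx) + (ax - cx) * (uv[1] - cy)) / den
    return u, v, 1.0 - u - v

def find_uv_triangle(uv, uv_triangles):
    """Linear scan over uv triangles; a real implementation would use a 2D
    acceleration structure (e.g. a BVH over uv space)."""
    for index, tri in enumerate(uv_triangles):
        weights = uv_barycentric(uv, *tri)
        if all(w >= 0.0 for w in weights):
            return index, weights
    return None
```

The barycentric weights found in uv space can then be reused to interpolate positions and normals on the deformed surface.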

There are also some advantages which seem more significant than the disadvantages to me:

  • Only requires propagation of uvs on the surface. Those are usually needed anyway, so there is no extra cost for that.
  • Modifiers and edit mode operations often keep uv maps intact already, so no new code for that is necessary.
    • Especially, the subdivision surface modifier handles uvs already, so it would just work.
    • It's easier to move the curves to the right positions after changes to the base mesh.
  • The tangent space is better defined because we can take the uv map into account.
  • All data that is stored in attributes is easily interpretable and controllable by users.
    • It does not reference any internal Blender data (e.g. MLoopTri).
    • It is fairly straightforward to generate the required attributes using geometry nodes (assuming we support quaternion attributes and potentially a rotation socket, which we planned to support anyway).
    • Remembering positions on a surface using uvs is also something we might need more in the future, e.g. when a particle has to remember where it was spawned. So having a generic solution that is not specific to hair and subdivision surfaces is nice.
Member

I like the approach described in @JacquesLucke's comment. Particularly that it uses existing concepts that users are already familiar with and are useful more generally.

  • The "Deform Curves on Surface" node's operation ends up being much simpler to describe too, it's just a translation and rotation per curve using generic data types.
  • People will understand how the surface attachment works much more easily-- we don't have to start the explanation with "So, every mesh has an implicit internal triangulation".

The rotation and position attributes on the curves seem conceptually a bit different than the UV, since they can always be computed when the UV and the surface mesh are available.

It's nice that the quality of the UV map doesn't matter, it means the downside of requiring a UV map doesn't carry much weight.
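The per-curve operation described above (a translation and a rotation using generic data types) can be sketched in pure Python as follows. Function names are hypothetical, and this is an illustration of the idea, not Blender's implementation: each point is taken relative to the old root, rotated by the per-curve quaternion, and placed at the deformed root.

```python
def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    r = (x, y, z)
    # v' = v + w*t + r x t, with t = 2 * (r x v)
    t = tuple(2 * c for c in cross(r, v))
    rt = cross(r, t)
    return tuple(v[i] + w * t[i] + rt[i] for i in range(3))

def deform_curve(points, old_root, new_root, rotation):
    """Move a curve from old_root to new_root, rotating it by `rotation`."""
    out = []
    for p in points:
        local = tuple(p[i] - old_root[i] for i in range(3))
        rotated = quat_rotate(rotation, local)
        out.append(tuple(rotated[i] + new_root[i] for i in range(3)))
    return out
```

With the identity quaternion `(1, 0, 0, 0)` this reduces to a pure translation of the curve; the rotation comes from comparing the rest and deformed tangent frames at the root.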

Using UVs to attach hair is a pretty clever idea, and pretty simple to comprehend. Even if someone wants to make a UV map that overlaps (for symmetric painting, for example) or wants to change the UVs after attaching hair, they can use an extra UV map just for the hair if they want.
Also, it sounds much simpler to transfer hair from one surface to another if needed using this concept.

I can see the reasoning behind using UVs. It requires some manual work and understanding from users that would otherwise be automatic, but they gain hair preservation when editing meshes.

  • I guess when entering curve sculpt or edit mode, there would be some check for the existence of UVs, or even some way to auto-generate them? Similar to texture paint mode.
  • There are some cases where users might get confusing results: intentionally overlapping UVs (e.g. from a mirror modifier or in the original mesh), or broken curves when they fall outside any UV island after edits. Curve sculpt and edit mode could potentially show warnings about such things.
  • For subdivision, there are multiple UV interpolation methods (for compatibility with other apps). Some of them keep the boundaries and corners fixed, but others do not, which means that after subdivision a curve could fall outside UV islands. I'm not sure if requiring users to use a compatible method would be good enough, or if there would be some way to do different interpolation for a UV map used for hair.
  • Deformation is mainly a concern for characters, where there is more likely to be a UV mapping than e.g. procedurally generated grass. Though there may also be some procedural use cases where you'd then need a way to generate a suitable UV map in geometry nodes, in order for animated deformation to work?
  • For performance, I guess some good caching mechanism will be important for interactive viewport playback, since mapping UVs to triangles every frame seems expensive.

UVs can work with or without `surface_rest_position` and `surface_rest_orientation` attributes. I'm not sure if using those instead of a `rest_position` attribute on the surface is better.

  • Being able to get a rest position will be needed at some point, I think (for curve sculpting on deformed characters, for example).
  • When editing the surface mesh, I guess all curves attached to the surface would need to have their rest position and orientation updated? Or how does this stay in sync?
  • For playback performance it's also nice not having to interpolate the `rest_position` every frame, effectively caching it in per-curve attributes. But then such rest position and UV should not have to be re-interpolated; optimizations in subdivision could avoid that.
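The uv-to-triangle mapping discussed above can be sketched as a brute-force scan over the mesh's uv triangles; a production version would use a 2D acceleration structure and cache the result, which is exactly where the caching concern applies. Names here are illustrative, not Blender API.

```python
def barycentric(p, a, b, c):
    """Barycentric weights of 2D point p in triangle (a, b, c), or None if degenerate."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    if det == 0:
        return None
    w1 = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    w2 = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return (1.0 - w1 - w2, w1, w2)

def lookup_uv(uv, uv_triangles):
    """Find (triangle index, barycentric weights) for the triangle containing uv."""
    for i, (a, b, c) in enumerate(uv_triangles):
        w = barycentric(uv, a, b, c)
        if w is not None and all(x >= -1e-6 for x in w):
            return i, w
    return None  # uv fell outside every island, e.g. after destructive edits

# Two uv triangles covering the unit square.
tris = [((0, 0), (1, 0), (0, 1)), ((1, 0), (1, 1), (0, 1))]
```

A `None` result is the "curve fell outside any UV island" failure mode mentioned earlier; with overlapping UVs, a linear scan like this would silently return the first matching triangle, which is one reason warnings for such cases would help.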
Author
Member

I guess when entering curve sculpt or edit mode, there would be some check for the existence of UVs, or even some way to auto generate them?

I imagined it such that you couldn't use the Curves > Empty Hair operator if the surface object doesn't have a uv map already (with a proper disabled hint). Auto-generating them when entering a mode is possible, but I'm not sure if that should be preferred.

There are some cases where users might get confusing results: intentionally overlapping UVs (e.g. from a mirror modifier or in the original mesh), or broken curves when they fall outside any UV island after edits. Curve sculpt and edit mode could potentially show warnings about such things.

Right, there could be warnings for these cases.

For subdivision, there are multiple UV interpolation methods (for compatibility with other apps). Some of them keep the boundaries and corners fixed, but others do not, which means that after subdivision a curve could fall outside UV islands. I'm not sure if requiring users to use a compatible method would be good enough, or if there would be some way to do different interpolation for a UV map used for hair.

Hmm, good point. I'd hope that keeping boundaries would be good enough, but that has to be tested. Worst case seems to be that we have to extend the subdivision surface node to interpolate different uv maps using different modes.

Deformation is mainly a concern for characters where there is more likely to be a UV mapping than e.g. procedurally generated grass. Though there may also be some procedural use cases where you'd then need a way to generate a suitable UV map in geometry nodes, in order for animated deformation to work?

If no deformation is done, the surface_* attributes aren't strictly necessary and can also be removed. They could help when transferring data from the ground to e.g. grass though. We do want to support generating uv maps in geometry nodes. Simple non-optimized uv maps should be relatively cheap to generate, I think. More complex layouts not so much, of course.

For performance, I guess some good caching mechanism will be important for interactive viewport playback, since mapping UVs to triangles every frame seems expensive.

Yeah, that applies to all of geometry nodes really. If we notice that this becomes a real bottleneck in production, I'd be fine with adding a local caching solution until there is a more generic solution.

UVs can work with or without `surface_rest_position` and `surface_rest_orientation` attributes. I'm not sure if using those instead of a `rest_position` attribute on the surface is better.

Hm right, could work either way.

When editing the surface mesh, I guess all curves attached to the surface would need to have their rest position and orientation updated? Or how does this stay in sync?

That's right. That can either be an explicit operator call (one exists already for the old attachment info), or it can happen more automatically as part of the edit mode tooling.

For playback performance it's also nice not having to interpolate the rest_position every frame, and effectively caching it in per-curve attributes.

Yes, though it probably shouldn't be an implementation detail. I'm not sure we can reliably invalidate this cache in a generic geometry nodes setup.

But then such rest position and UV should not have to be re-interpolated, optimizations in subdivision could avoid that.

I'm all for optimizations, but I think we have to be careful not to make user-visible changes to what data is processed by a modifier/node based on how the data is used further down the line. That just doesn't scale well with more complex node setups. Providing a good and efficient default for what data is processed where makes sense though.

Author
Member

Started writing some notes about the different arguments for using `rest_position` on the surface vs. using `surface_rest_position` and `surface_rest_orientation` on curves: https://hackmd.io/@II9-Bkl4TJifCqGL2jgbUw/BkZ-b7Bv9

Author
Member

Based on the different arguments mentioned in https://hackmd.io/@II9-Bkl4TJifCqGL2jgbUw/BkZ-b7Bv9, I come to the conclusion that we should use a `rest_position` attribute on the surface.
Working with `surface_rest_position` and `surface_rest_orientation` attributes on curves has benefits in some use-cases and could be added later if there is a need for it.

The decisive argument for me was that consistent surface tangents for the deformation are much easier to achieve this way. Without consistent tangents, we can easily get hair that rotates around the surface normal, with no clear way to fix it.
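The tangent-consistency argument can be illustrated with the standard uv-based tangent computation: the tangent is the direction in which `u` increases over a triangle, so it is pinned to the uv map rather than to vertex order, and stays stable as the surface deforms. A rough sketch, not Blender's implementation:

```python
def uv_tangent(p0, p1, p2, t0, t1, t2):
    """Unnormalized tangent (d position / d u) of a triangle with uvs t0..t2."""
    # Position edges relative to the first corner.
    e1 = [p1[i] - p0[i] for i in range(3)]
    e2 = [p2[i] - p0[i] for i in range(3)]
    # Corresponding uv deltas.
    du1, dv1 = t1[0] - t0[0], t1[1] - t0[1]
    du2, dv2 = t2[0] - t0[0], t2[1] - t0[1]
    det = du1 * dv2 - du2 * dv1
    if det == 0:
        return None  # degenerate uv triangle, no stable tangent
    # Solve the 2x2 system relating (e1, e2) to (du, dv).
    return [(dv2 * e1[i] - dv1 * e2[i]) / det for i in range(3)]
```

Because the result only depends on positions and the fixed uv map, re-evaluating it on the deformed mesh yields a rotation-consistent frame per root, whereas triangle-index-based tangents can flip when the triangulation changes.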

Sounds reasonable to me. Another reason for `rest_position` that I did not see mentioned is that you need it to support a hair add brush on deformed characters in sculpt mode.

Added subscriber: @Gareth-Jensen-1

This issue was referenced by 05b38ecc7835b32a9f3aedf36ead4b3e41ec6ca4

Added subscriber: @Cigitia

Member

Changed status from 'Confirmed' to: 'Resolved'

Hans Goudey self-assigned this 2022-10-20 21:06:49 +02:00
Member
Documented in the wiki now: https://wiki.blender.org/wiki/Source/Objects/Curves#Attachment_.26_Deformation
Reference: blender/blender#95776