For T54656, we need CPU evaluation of a subset of shader nodes that form the texture nodes.
Prototype under development in D15160.
The plan is to make this a subset of geometry nodes that can be evaluated using the same mechanisms. Texture nodes can be thought of as a geometry nodes field, and when evaluated in the context of geometry nodes, they are.
However, they can also be evaluated outside of a geometry node tree. In particular, they can be evaluated on:
- Geometry domains (vertex, edge, face, corner), for geometry nodes and for baking textures to attributes. Possibly also sculpting and vertex color painting.
- Geometry surface, for image texture baking and painting.
- 2D or 3D space not attached to any geometry. For brushes, 2D image painting, physics effectors.
To support the last two cases in the geometry nodes infrastructure (but not the geometry node evaluation itself), a new field context named TextureFieldContext is added. This represents a continuous texture space that may or may not be mapped to a geometry surface. It has a single domain, ATTR_DOMAIN_TEXTURE.
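To make the idea concrete, here is a minimal Python sketch of such a field context. The real implementation would be a C++ field context class in Blender; the `Mesh` stand-in, the `surface` attribute and the `is_mapped_to_surface` helper are hypothetical names for illustration, while `TextureFieldContext` and `ATTR_DOMAIN_TEXTURE` are the names proposed above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Mesh:
    """Stand-in for a geometry surface the texture space may map onto."""
    name: str

@dataclass
class TextureFieldContext:
    """A continuous texture space with a single domain, ATTR_DOMAIN_TEXTURE.

    It may optionally be mapped onto a geometry surface, or exist as a
    free-standing 2D/3D space (brushes, 2D image painting, effectors).
    """
    domain: str = "ATTR_DOMAIN_TEXTURE"
    surface: Optional[Mesh] = None

    def is_mapped_to_surface(self) -> bool:
        return self.surface is not None

# A context for painting directly in 2D/3D space, not attached to geometry:
free_context = TextureFieldContext()
# A context mapped onto a mesh surface, e.g. for image texture baking:
baked_context = TextureFieldContext(surface=Mesh(name="Cube"))
```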
Compilation and Execution
The texture nodes compilation and evaluation mechanism uses much of the same infrastructure as geometry nodes, but is kept separate so it can be used in different contexts. It is meant to be possible to execute texture nodes efficiently many times; for example, paint brushes are multi-threaded and each thread may execute the nodes multiple times. This is unlike geometry nodes, where multiple threads may cooperate to execute nodes once.
The current compilation mechanism is as follows:
- Multi-function nodes are compiled into an MFProcedure, which bundles multiple multi-functions into one for execution. Geometry nodes do not currently use this mechanism, but it is designed for this type of evaluation. This is currently rather inefficient for individual points, but should become more efficient when evaluated in big batches.
- The remaining nodes are input nodes, which execute geometry_node_execute as part of compilation and output fields and values. The execution context for these nodes is limited: unlike geometry nodes, there is no geometry, object or depsgraph.
- For evaluation, fields are converted into GVArray using the provided geometry.
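The compile-once, evaluate-many split above can be sketched in simplified Python. This is not the Blender API: `Procedure`, `compile_nodes` and the per-point lambdas are illustrative stand-ins for building an MFProcedure from multi-function nodes, and the real code operates on typed buffers rather than Python floats.

```python
from typing import Callable, List

class Procedure:
    """Bundles per-point multi-functions into one callable, so repeated
    evaluation (e.g. per brush sample) pays the composition cost once."""

    def __init__(self, functions: List[Callable[[float], float]]):
        self._functions = functions

    def evaluate(self, values: List[float]) -> List[float]:
        # Evaluating a batch amortizes per-call overhead across many points.
        out = []
        for v in values:
            for fn in self._functions:
                v = fn(v)
            out.append(v)
        return out

def compile_nodes(nodes: List[Callable[[float], float]]) -> Procedure:
    """Compile once; the returned procedure can then be executed many
    times, from many threads, without touching the node tree again."""
    return Procedure(nodes)

# Example: a "multiply by 2" node followed by an "add 1" node.
proc = compile_nodes([lambda x: x * 2.0, lambda x: x + 1.0])
print(proc.evaluate([0.0, 1.0, 2.0]))  # [1.0, 3.0, 5.0]
```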
We could consider making geometry available to the compilation and caching the compilation per geometry, though it's not clear there will be nodes that need this. Alternatively, we may cache just the GVArray per geometry.
Input nodes return fields, which are then turned into a GVArray for evaluation. For the texture domain, this is a virtual array that interpolates attributes. There may be an opportunity here to share code with the data transfer field nodes, or with nodes that distribute points and curves on surfaces and inherit attributes.
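As a rough illustration of such a virtual array, the hypothetical class below computes values on demand instead of storing an array. For simplicity it does 1D linear interpolation between stored samples, standing in for barycentric interpolation of attributes over a surface; `InterpolatingVirtualArray` is an invented name, not GVArray itself.

```python
from bisect import bisect_right
from typing import List, Tuple

class InterpolatingVirtualArray:
    """Looks like an array of attribute values, but computes each element
    on demand by interpolating between stored attribute samples."""

    def __init__(self, samples: List[Tuple[float, float]]):
        # (position, value) pairs; kept sorted by position for lookup.
        self._samples = sorted(samples)

    def get(self, position: float) -> float:
        positions = [p for p, _ in self._samples]
        i = bisect_right(positions, position)
        if i == 0:
            return self._samples[0][1]   # clamp below the first sample
        if i == len(self._samples):
            return self._samples[-1][1]  # clamp above the last sample
        (p0, v0), (p1, v1) = self._samples[i - 1], self._samples[i]
        t = (position - p0) / (p1 - p0)
        return v0 + t * (v1 - v0)

# Two attribute samples; queries in between are interpolated lazily.
attr = InterpolatingVirtualArray([(0.0, 0.0), (1.0, 10.0)])
print(attr.get(0.25))  # 2.5
```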
Shader nodes need additional fields that are not currently in geometry nodes:
- Generated / Rest Position / Orco
- Normal, taking into account face smooth flag and custom normals
- UV maps (planned to become available as float2 attributes)
- Active render UV map and color layer
- Pointiness (Cavity)
- Random Per Island
It's unclear to me what the best way to specify these as fields is: whether we should add additional builtin attribute names or a GeometryFieldInput. Some of these, like Pointiness, would also make sense to cache as attributes and make available to external renderers.
One constraint is that it must be efficient to evaluate only a small subset of the domain. For many field inputs and operations this works, but there are some that will compute a data array for the entire domain. This can be enforced by just not making such nodes available as texture nodes, or caching the data array on the geometry.
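The subset-evaluation constraint can be illustrated with a toy sketch. A per-element input has cost proportional to the number of requested elements, while a whole-domain input (pointiness is the example above) must compute the full array once, so caching it on the geometry is what keeps repeated small queries cheap. All class and attribute names here are hypothetical.

```python
from typing import List, Optional

class PerElementInput:
    """Cost proportional to len(indices): fine for sparse texture sampling."""

    def evaluate(self, indices: List[int]) -> List[float]:
        return [float(i) * 0.5 for i in indices]

class WholeDomainInput:
    """E.g. pointiness: cheapest to compute for the full domain at once,
    so the result is cached instead of recomputed per query."""

    def __init__(self, domain_size: int):
        self._domain_size = domain_size
        self._cache: Optional[List[float]] = None
        self.compute_count = 0  # tracks how often the full pass runs

    def evaluate(self, indices: List[int]) -> List[float]:
        if self._cache is None:
            # Full-domain pass; cached so repeated brush samples don't
            # recompute the whole array.
            self.compute_count += 1
            self._cache = [float(i) * 0.5 for i in range(self._domain_size)]
        return [self._cache[i] for i in indices]

whole = WholeDomainInput(domain_size=1000)
whole.evaluate([3, 7])
whole.evaluate([42])
print(whole.compute_count)  # 1: the full array was computed only once
```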
For sculpt and paint modes, there is a question whether and how we can make all these attributes available. The challenge is that these modes use different mesh data structures (BMesh and multires grids), which are not supported by geometry nodes. Additionally, recomputing for example tangents and pointiness as the geometry changes during a brush stroke would be inefficient, especially if it is done for the entire mesh and not just the edited subset.
Fields and multi-functions are not optimized for evaluating one point at a time; there is too much overhead. That means we want to process texture evaluations in batches, and all (non-legacy) code that uses textures should be refactored accordingly.
For geometry nodes, modifiers and 2D brushes this is most likely straightforward. However for 3D brushes, particles and effectors it will be more complicated.
There is currently a mechanism to make single evaluations possible while code is being converted. However, this is rather inefficient, with quite a bit of overhead in MFProcedure execution. It may be possible to do some optimizations there for legacy code that we may not be able to easily convert to batches.
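The batch-versus-single trade-off can be sketched as follows, assuming a fixed setup cost per procedure execution (modeled here by counting calls; all function names are illustrative, not Blender API). The legacy single-sample path is just a batch of one, so it pays that setup cost for every sample, while converted code pays it once per batch.

```python
from typing import List

batch_calls = 0  # each call stands in for one fixed MFProcedure setup cost

def evaluate_batch(positions: List[float]) -> List[float]:
    """Stand-in for executing the compiled texture node procedure."""
    global batch_calls
    batch_calls += 1
    return [p * p for p in positions]

def evaluate_single(position: float) -> float:
    """Legacy compatibility path: a batch of one. Correct, but the setup
    cost is paid per sample instead of once per batch."""
    return evaluate_batch([position])[0]

samples = [0.0, 1.0, 2.0, 3.0]
batched = evaluate_batch(samples)               # one setup cost, 4 samples
legacy = [evaluate_single(p) for p in samples]  # four setup costs
print(batched == legacy)  # True: same results, different overhead
print(batch_calls)        # 5
```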
Node Graph Transforms
To support sockets like Image datablocks, and to automatically insert Texture Coordinate or UV Map nodes for texture coordinate sockets, we need to transform the node graph. The texture node layering will also need to do significant node graph transformations. In order to share code with Cycles and other external engines, it may be best to do this as part of node tree localization.
Relation to Other Node Systems
There are many node evaluation systems in Blender, and the question is whether we can somehow share this implementation with anything else. I don't think the CPU evaluation can be directly shared, but it will at least make the old texture nodes evaluation obsolete. The GPU implementation can probably be largely shared with shader nodes, and that implementation could be refactored to use DerivedNodeTree for consistency and to simplify the code.