
Object Nodes Proposal for Blender 2.8
Open, Normal, Public

Description

A task for discussing node design proposal for Blender 2.8

Full design document here:
https://download.blender.org/institute/nodes-design/

For the 2.8 development cycle of Blender some major advances are planned in the way animations, simulations and caches are connected by node systems.

It has become clear during past projects that the increased complexity of pipelines including Blender requires much better ways of exporting and importing data. Such use of external caches for data can help to integrate Blender into mixed pipelines with other software, but also simplify Blender-only pipelines by separating stages of production.

Nodes should become a much more universal tool for combining features in Blender. The limits of stack-based configurations have been reached in many areas such as modifiers and simulations. Nodes are much more flexible for tailoring tools to user needs, e.g. by creating groups, branches and interfaces.

Physical simulations in Blender need substantial work to become more usable in productions. Improved caching functionality and a solid node-based framework are important prerequisites. Physics simulations must become part of user tools for rigs and mesh editing, rather than abstract stand-alone concepts. Current systems for fluid and smoke simulation in particular should be supplemented or replaced by more modern techniques which have been developed in recent years.

Details

Type
Design

Event Timeline

Multiple Objects

As I understand it, an object contains one or more components and nodes. The nodes modify data from components or dynamically generate data, and then output that modified data. But what happens exactly with multiple objects or transforms is unclear to me.

  • Would it be possible to manipulate objects through nodes, in the sense that there would be sockets of type Object? If not, would some object properties (particularly the transform) become a property of Meshes / Armatures / ... so that they can be manipulated through nodes?
  • Or another way to ask this, which data type would constraints manipulate and output? Object, Bone, Mesh (including a transform), Transform, ... ?
  • If you have for example a Boolean node, would that have two Mesh input sockets? Or would such a node have one Mesh input socket, and the other mesh would be specified as a datablock property and transformed into local space, as in the armature deform node example?
  • Would you be able to feed the output of an object node graph into another object node graph, and if so, what kind of socket data type would that have, and would it include things like the object transform, pass index and ray visibility? Or would e.g. mesh vertices automatically be transformed into the local space and the rest of the object data lost?

Components

The purpose of components as a concept is not entirely clear to me. From the examples mentioned:

  • Meshes and voxel data could be datablocks you link into the node graph
  • Poses might be action datablocks that you use to modify an Armature into another Armature
  • Particle state and binding weights sound more like cached data
  • Other data like physics system parameters could be node properties / sockets

Or would e.g. a mesh component contain more than a link to a Mesh datablock, or would the Mesh datablock be embedded in the Object?

Regarding the multiple objects issue:

From my point of view there should be at least two approaches: one for operating on complete objects without touching meshes, modifiers and physics, and another one for manipulating objects inside a scene (location, rotation, relationships).

In Animation Nodes those categories are named Mesh and Object, which can be misleading. With Object nodes you can manipulate sets of objects along with their locations, rotations and other attributes; you can even set values for attributes such as a field in a modifier. With Mesh nodes you can do what you do with modifiers, but with much deeper control.

Another thing is what we call inputs: in my opinion, almost every type of input should be accessible from any type of node tree or category of nodes. For example, you can use the data from an Emitter particle system to animate objects but also to create a mesh. This type of information should be callable from any place.

Poses, Actions and NLA, for example, seem to be properties of a specific object type (armatures) which can affect meshes of other objects. This scheme not only works great in Animation Nodes (hundreds of users can confirm this), but as far as I know it is also the approach used in other applications with nodal workflows.

The doubts you raise about how specific things should be done are reasonable. But sometimes it is good to have various ways of doing the same thing and let users adapt their own workflows. With these nodal systems you never know what people will end up doing with them.

Regarding the issue of armatures and objects with nodes: this is a very basic example of armatures applied via nodes to tens of objects, offsetting object properties and animations.

https://www.youtube.com/watch?v=Q8PmuJl2N2o

This was done almost a year ago. Animation Nodes is limited to the Python API and the workflow could be improved, but the nodal model was the fruit of a very rich interaction between the main developer and the users, some of whom ended up becoming developers themselves. To summarize: most of the design work is basically done.

In the current prototype implementation all objects are treated as read-only, e.g. for loading in another mesh or using a controller transform. But this is mostly because the prototype is limited to object scope, sitting right next to the modifier stack. There are no object sockets, but this is more due to limitations of the pynodes UI than hard design decisions. Objects are superficially selected with traditional dropdown buttons, but internally PointerRNA type sockets are used for passing around objects.
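
To make that a bit more concrete, here is a minimal sketch (illustration only, not the prototype code; the class name, colors and use of an ID pointer property are assumptions) of what an object-carrying custom socket could look like with Python nodes:

```python
import bpy

# Illustrative sketch only, not the prototype implementation. The class name
# and drawing details are invented for this example.
class ObjectSocket(bpy.types.NodeSocket):
    """Socket passing around a reference to an Object datablock."""
    bl_idname = "ObjectSocket"
    bl_label = "Object"

    # Internally the value is just a pointer to an Object (cf. PointerRNA).
    value = bpy.props.PointerProperty(type=bpy.types.Object)

    def draw(self, context, layout, node, text):
        # Unlinked input sockets fall back to a traditional selector button.
        if self.is_output or self.is_linked:
            layout.label(text=text)
        else:
            layout.prop(self, "value", text=text)

    def draw_color(self, context, node):
        return (0.9, 0.6, 0.2, 1.0)

bpy.utils.register_class(ObjectSocket)
```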

I don't have a clear plan for scene/group-level nodes yet. On these higher levels one may want to define inter-relationships between multiple objects (i.e. constraints). The key difference is that constraints generally affect all the objects, rather than just a single "parent" (the current scoping of constraints inside objects/bones IMO is a source of trouble when using more complex arrangements like IK chains). So one will want to replace some properties of objects like the obmat on a higher-than-object level (scene/group). Note that the same happens for rigid body simulation, which currently gets special treatment but should work alongside the constraint system.

Constraint nodes could output a symbolic constraint socket, which must be plugged into a "Solver" node input (which is a kind of output node for the scene/group level). Solving constraints can work in multiple stages, so that later constraints "override" earlier ones (like current constraint solvers), or constraints can be combined in a single solver for simultaneous solution (with potential failure in overconstrained systems).

A constraint node would take one or two object references as inputs. More abstractly, "anything with a transform" could be used (objects, bones, particles), to make constraint nodes reusable.
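
As a rough sketch of that data flow (plain Python with hypothetical class names, not an actual API): constraint nodes emit symbolic constraint descriptions, and the Solver node collects them into stages, where each stage is solved simultaneously and overrides earlier stages:

```python
# Hypothetical data model for the constraint/solver idea above; the names
# and structure are illustrative only.
class Constraint:
    """Symbolic constraint between 'anything with a transform'."""
    def __init__(self, kind, targets, influence=1.0):
        self.kind = kind            # e.g. "copy_location", "track_to"
        self.targets = targets      # objects, bones, particles, ...
        self.influence = influence

class Solver:
    """Output node of the scene/group level: consumes constraint sockets."""
    def __init__(self):
        self.stages = []            # list of lists of constraints

    def add_stage(self, constraints):
        # Later stages "override" earlier ones, like the current constraint stack.
        self.stages.append(list(constraints))

    def solve(self, transforms):
        for stage in self.stages:
            # All constraints in one stage are solved simultaneously; an
            # overconstrained stage could fail here. Actual solving omitted.
            for con in stage:
                pass
        return transforms
```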

Regarding node graph output: After conferring with Andy I've tried to adhere to the "output node" principle in node mockups. This means that every node tree has an output node which defines the result of calling ("executing") the node tree. A node tree is compiled into a function with one or more return values, which the output nodes represent.

If a node tree is actually nested inside another node like a node group, the output could be used externally, i.e. a node represents a simple function call on a higher level, with the node tree being the function implementation. Object nodes are not quite so straightforward. The object node tree represents a complex set of rules for updating associated data. If objects want to use other objects' data, they need to import it in their own nodes (and let the depsgraph figure out update scheduling). Passing data between object node trees via sockets could be possible if they define this interface explicitly, but I don't know on what level they might be connected (scene nodes again?).
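
A minimal sketch of the "compiled into a function" idea, using an invented node/socket data model (nodes with an execute() method, sockets with link and default_value attributes) rather than the prototype or Blender API:

```python
# Minimal sketch of the "output node" principle with an invented data model;
# this is not the actual prototype code.
def compile_tree(tree):
    """Turn a node tree into a callable whose return values are whatever is
    plugged into the single output node."""
    output = next(node for node in tree.nodes if node.is_output)

    def evaluate():
        def eval_socket(socket):
            # Follow the link upstream if there is one, otherwise use the
            # socket's default value.
            if socket.link is not None:
                return socket.link.from_node.execute(eval_socket)
            return socket.default_value

        # The output node's inputs define the function's return values.
        return tuple(eval_socket(sock) for sock in output.inputs)

    return evaluate
```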

Let me try to explain the "component" concept. Note that this is not a fixed idea yet; the proposal may even contradict itself in places, depending on the time of writing.

The problem with nodes in an Object context is how to address data. For standard properties of objects just using a "field name" works alright, e.g. transform (obmat), obdata/mesh (provided we keep the single-obdata design), etc. The "component" concept is an attempt to represent data which is not so unique, like particle systems or voxel data (currently stored in a modifier). Even if such data/settings are stored in ID blocks themselves, there would still need to be a way to connect them to an object, i.e. a set of "data slots". IMO it just becomes impractical and limiting to have a single fixed pointer for all the potential types of data. Component slots are not so different from the current method of addressing special data via modifier names (e.g. "use smoke density from 'Smoke Modifier.001'"), they just become passive data slots for addressing stuff.
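
For illustration (plain Python, hypothetical names), component slots could amount to little more than named, passive data slots on the object, addressed by name rather than through a fixed pointer per data type:

```python
# Hypothetical illustration of "component slots": a set of passive, named
# data slots instead of one fixed pointer for every possible data type.
class ObjectComponents:
    def __init__(self):
        self._slots = {}  # slot name -> datablock / cached data

    def add(self, name, data):
        self._slots[name] = data

    def find(self, name):
        # Nodes address data by slot name, much like special data is
        # addressed today via modifier names ("Smoke Modifier.001").
        return self._slots.get(name)

components = ObjectComponents()
components.add("Smoke Density", "<voxel datablock>")
density = components.find("Smoke Density")
```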

I don't know how smart the depsgraph can become, but to stay within Blender's philosophy, I would as far as possible not think for the user. The current depsgraph doesn't know how to order things correctly. An example that doesn't work at the moment:

  1. Object 1 is shrinkwrapped onto object 2 and solidified.
  2. Object 2 has a boolean modifier that carves the shrinkwrapped and solidified object 1 into it.

There are many reasons why meshes or other datablocks should modify each other back and forth, and where only an artistic choice is the right decision. No automatic logic could lead to the expected result. The only way to allow the user to define in which order things should happen is to have a scene-level node system.
The new ID patch https://developer.blender.org/D113 allows identifying any datablock uniquely, if I understand it correctly? Because of course it would be great if node trees could also be linked/appended and would link/append the needed datablocks (meshes, materials, whatever).

@Lukas Toenne (lukastoenne), thanks for the clarifications!

So you could say that the proposal mainly covers "geometry nodes". I think that distinction between scene/group level "object nodes" and "geometry nodes" inside objects is ok (and typical in other software).

For geometry nodes to be reusable as node groups, I believe links to other objects, like the armature for armature deform or the mesh for particle distribution, should somehow not be fixed but settable from the outside. (Is that what you mean by the PointerRNA type sockets?)

Then if you think about components, to me they seem to be input sockets to the geometry nodes. At least if Particles / Strands / Volume become datablocks alongside Mesh, Curves, ..., then they could be linked to inputs, so that if you go up one level to the object nodes you can connect in other geometries from other objects. Though in many cases these geometries would simply be owned by the object itself.
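
To illustrate (the tree type and socket type names below are invented, not an existing API): if those components were exposed as group inputs, the external links would be settable from the object level rather than fixed inside the geometry node group:

```python
import bpy

# Illustrative sketch only: "ObjectNodesTree", "NodeSocketObject" and
# "NodeSocketGeometry" are invented names, not an existing Blender API.
group = bpy.data.node_groups.new("Armature Deform Group", "ObjectNodesTree")

# External links become group inputs, settable from the outside instead of
# being fixed datablock pointers inside the group.
group.inputs.new("NodeSocketObject", "Armature")
group.inputs.new("NodeSocketGeometry", "Mesh")
group.outputs.new("NodeSocketGeometry", "Deformed Mesh")
```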

There may be other examples of components that do not fit this, but I can't think of them now.

Bastien Montagne (mont29) triaged this task as Normal priority. Aug 19 2016, 4:08 PM