
Everything Nodes UX

Description

The forthcoming ‘Everything Nodes’ project is bound to have large implications for how we use Blender for many common tasks. The purpose of this document is to add more specificity to this area and to serve as a starting point for further design work.

We already have a series of technical design docs here: https://wiki.blender.org/wiki/Source/Nodes by @Jacques Lucke (JacquesLucke)

This design document is meant as a complement, focusing instead on the end-user experience and how the various parts fit together.


Goals

The overall aim of the Everything Nodes system is twofold: to add tremendous amounts of new low-level power and flexibility, and to add high-level simplicity and ease of use.

Nodes will allow Blender to become fully procedural, meaning that artists will be able to efficiently build scenes that are orders of magnitude more complex and advanced, with full non-linear control. The object-oriented nature of node systems means that users can use, re-use and share node setups and node groups without having to start from scratch.

We should aim to make this system a foundational, integrated one, core to many Blender features and workflows, rather than something tacked on at the side with limited scope and integration.

We should also aim to fully replace the previous systems, such as the old modifier and particle systems. Otherwise we end up with multiple competing systems, which will be hard to maintain for developers and confusing for users.

Additionally, nodes, assets and properties should work together to provide an integrated, holistic system that works as one.


Node Systems

Currently, we use nodes for materials, compositing and textures (deprecated).

In addition to these, the Everything Nodes concept would add nodes for many more areas:

  • Modifiers & procedural modeling
  • Particles & hair
  • Physics & Simulations
  • Constraints & kinematics

Open Questions:

  • ? How do we define the borders between these systems?
  • ? Modifiers, particles, hair and materials can all be attached to objects, but by definition this is not the case for constraints. These need to operate on a higher level. How and where does this work exactly?
  • ? How can you communicate from one node tree type to another? You might want to drive a modifier and a material property with the same texture, for example.
  • ? Particles currently sort of integrate with the modifier stack. If these things are in separate node trees, how does this work exactly? Are particles a separate object type which then references emitters?

Modifiers

Node-based modifiers may seem like a curiosity, until you realize how powerful they can be. The old modifier stack works OK for simple cases of stringing a few modifiers together, but as soon as you want to do more complex generative procedural modeling, the limitations of the modifier stack become apparent.

Node-based modifiers will allow:

  • Much more powerful use of textures to drive any parameter
  • More flexible trees rather than just a stack (this is needed for generative modeling)
  • Much more powerful procedural animations (see the Animation Nodes add-on, for example)
  • etc

Open Questions:

  • ? Currently, the modifier stack allows users to toggle modifiers for the viewport and the render result separately. Nodes don't have this feature; it could be added, but how? Via separate node tree outputs, or bypass toggles on each node?

Parametric Modeling

Currently in Blender, as soon as you create a primitive, the settings are immediately baked and cannot later be changed. Node-based modifiers have the potential to finally address this issue. Here’s how:

  • When the user adds any primitive (e.g. UV Sphere, Cone, Cylinder, etc.), they see the usual operator controls for adjusting the settings and values
  • However, rather than baking those settings into a static mesh, these controls simply modify the settings inside the modifier node tree
  • These settings can be changed and modified at any time, and you can build further modifiers on top of this node - for example using it as an input for boolean operations - and still change the number of segments at any time (see the sketch after this list)
  • If the user wishes to ‘freeze’ the node tree, they can do so by running a ‘Freeze Nodes’ operator, or by going to Edit Mode, which will automatically prompt the user to freeze the mesh.
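To make the idea concrete, here is a minimal sketch in plain Python. This is not the real Blender API; the UVSphereNode class and its evaluate/freeze behaviour are purely illustrative of how primitive settings could stay live inside a node until frozen:

```python
# Hypothetical sketch: primitive settings live in a node and can be
# re-evaluated or frozen at any time (plain Python, no real Blender API).
import math

class UVSphereNode:
    """Hypothetical primitive node: settings stay editable until frozen."""
    def __init__(self, segments=32, rings=16, radius=1.0):
        self.segments = segments
        self.rings = rings
        self.radius = radius

    def evaluate(self):
        # Re-generate vertex positions from the current settings.
        verts = []
        for r in range(1, self.rings):
            phi = math.pi * r / self.rings
            for s in range(self.segments):
                theta = 2.0 * math.pi * s / self.segments
                verts.append((self.radius * math.sin(phi) * math.cos(theta),
                              self.radius * math.sin(phi) * math.sin(theta),
                              self.radius * math.cos(phi)))
        return verts

sphere = UVSphereNode(segments=8)
mesh = sphere.evaluate()     # evaluated like a node output
sphere.segments = 64         # change the settings at any time...
mesh = sphere.evaluate()     # ...and the mesh updates
frozen = list(mesh)          # 'Freeze Nodes' would bake the result to a static mesh
```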

Keep a stack-based UI for simple cases?

For simple cases where all you want is to add one or two modifiers to an object, we could decide to also keep a stack-based modifier UI. This would not be an entirely separate modifier system, but simply a different view on the same underlying modifier node tree, organized for the user as a stack. However, this presents a number of challenges:

  • You can really only represent a simple string of nodes this way - not anything complex.
  • Many atomic types of nodes don’t make sense in a stack.

Probably the easiest way to add this is to let users start with the stack-based UI and then graduate to a full node tree:

Once graduated, you'll have to work inside the Node Editor, but, as the following section describes, we can include a much smarter way to expose node trees to the Properties Editor:

Properties Editor & high level control

Nodes allow for far more complex control and power. But how can we package this power in a way that stays simple and easy to use?

For materials, we already mirror the node tree inside the Properties editor, but in my estimation this works poorly in anything other than the very simplest of cases. We can do better.

As it turns out, we have actually already solved this: Group nodes already provide a beautifully simple and powerful method of packaging low-level complexity behind a higher-level interface. This system allows users to hide away lots of complexity and expose only a few useful parameters. This general concept can be expanded upon by making it so that entire node trees, not just Group nodes, can expose a small subset of parameters in the Properties Editor.

The nice thing about this solution is that casual users don’t need to actually open up the node editors, but can just tweak the high-level exposed inputs.
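For reference, the existing (2.8-era) Python API for node groups already works this way: a group exposes only a few named inputs while the complexity stays inside. A minimal sketch; the group and socket names ("Rusty Metal Controls", "Rustiness", etc.) are just examples:

```python
import bpy

# Create a shader node group and expose only a few high-level parameters.
group = bpy.data.node_groups.new("Rusty Metal Controls", 'ShaderNodeTree')
group.inputs.new('NodeSocketFloat', "Rustiness")
group.inputs.new('NodeSocketFloat', "Cavity Dirt")
group.outputs.new('NodeSocketShader', "Shader")

# Internal complexity lives inside the group...
group_input = group.nodes.new('NodeGroupInput')
group_output = group.nodes.new('NodeGroupOutput')
bsdf = group.nodes.new('ShaderNodeBsdfPrincipled')
group.links.new(group_input.outputs["Rustiness"], bsdf.inputs["Roughness"])
group.links.new(bsdf.outputs["BSDF"], group_output.inputs["Shader"])

# ...while users only see and tweak the exposed inputs on the group node, e.g.:
# group_node.inputs["Rustiness"].default_value = 0.8
```

The proposal is essentially to surface this same interface for a whole node tree directly in the Properties Editor.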


The node tree defines a series of high-level inputs

These will be exposed in the Properties Editor, like so:

Material nodes, with exposed parameters in the Properties Editor:

Modifier nodes, with exposed parameters in the Properties Editor:

Particle nodes, with exposed parameters in the Properties Editor:

This approach makes even more sense if we provide a series of starting points. This is where assets come in:

Open question: ? Would we then remove the old nodes-in-properties system from the Material Properties?
I think yes, as the above system is just cleaner and scales much better, although in theory we could keep both views inside Material Properties.


Assets

Assets can play an important role in the Everything Nodes system. With node systems exposing a smaller subset of parameters in the Properties, it makes a lot more sense to supply users with many more starting points. The idea being that casual users won’t have to dive into the nodes - they can just add materials, particles, modifiers etc and adjust the exposed parameters. Users will only have to delve into the node systems if they wish to deviate from what the available assets allow for.

For more on assets, see T54642: Asset Project - UI - Asset Browser

Workflow examples:
  1. The user opens the Asset Browser and navigates to the screw asset. The user drags this asset into the scene. The screw itself is generated with a node system, and has a small set of high-level user-facing controls exposed in the Properties (head type, length, etc)
  2. The user may want to have a bent screw, so they open up the nodes and add a Bend deformer node at the end of the node tree
  3. The user browses the Asset Browser and locates the Rusty Metal material. The user drags this over the screw to apply it. This material has a few parameters exposed (rustiness, cavity dirt, metal type, etc)

Particle assets:

Mesh/modifier assets:


Gizmos

While nodes allow for a fully procedural workflow, they are also more technical and disconnected from directly manipulating items in the 3D View. We can address this by letting nodes spawn interactive gizmos in the viewport.

We can implement this by adding a series of built-in special 'gizmo nodes' which can be used as inputs for the node tree. Examples of gizmo nodes are the Location Gizmo, Direction Gizmo and others. A toggle on these nodes can show or hide these gizmos in the viewport.
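To illustrate the concept, here is a hypothetical sketch of what a 'Location Gizmo' node could look like, written with Blender's existing custom-node Python API purely for illustration; the real gizmo nodes proposed here would presumably be built-in nodes, and the class and property names below are made up:

```python
import bpy
from bpy.types import Node


class LocationGizmoNode(Node):
    """Hypothetical input node: outputs a location vector and can show a viewport gizmo."""
    bl_idname = "LocationGizmoNode"
    bl_label = "Location Gizmo"

    location_value: bpy.props.FloatVectorProperty(name="Location", subtype='TRANSLATION')
    show_gizmo: bpy.props.BoolProperty(name="Show Gizmo", default=True)

    def init(self, context):
        # The node simply feeds a vector into the rest of the tree.
        self.outputs.new('NodeSocketVector', "Location")

    def draw_buttons(self, context, layout):
        layout.prop(self, "location_value")
        layout.prop(self, "show_gizmo")  # the toggle that would show/hide the viewport gizmo


bpy.utils.register_class(LocationGizmoNode)
```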


Node Editor

With a higher reliance on nodes, it makes sense to make a few key improvements to the node editors themselves, such as the following:

Compact mode

Node trees can easily become quite messy, so a toggle to work in a clean and compact node layout can make things more tidy:


In this mode, we can also make re-ordering nodes much easier by simply allowing nodes to be dragged, which will automatically re-order and re-connect them, like so:

Connect pop-up

Currently it can be quite a maze to figure out which nodes fit with each other, and you have to dive into long menus to find the relevant nodes. We can make this much simpler by making it so that dragging from an input spawns a searchable popup with only the relevant node types. This way you don't have to guess and search for which types of nodes fit the current input socket - they will be right there:
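The filtering itself boils down to matching node outputs against the socket type the user dragged from. A minimal sketch in plain Python; the node registry and compatibility table are illustrative, not Blender's actual node definitions:

```python
# Hypothetical registry of node types and their output socket types.
NODE_REGISTRY = {
    "Image Texture": {"outputs": ["Color", "Alpha"]},
    "Noise Texture": {"outputs": ["Color", "Float"]},
    "Math":          {"outputs": ["Float"]},
    "Combine XYZ":   {"outputs": ["Vector"]},
}

# Which output socket types may connect to a given input socket type
# (Blender already does implicit conversions, e.g. Float -> Color).
COMPATIBLE = {
    "Color":  {"Color", "Float"},
    "Float":  {"Float", "Color"},
    "Vector": {"Vector", "Float"},
}

def relevant_nodes(input_socket_type, query=""):
    """Return node types that could plug into the dragged input socket."""
    allowed = COMPATIBLE.get(input_socket_type, set())
    return sorted(
        name for name, spec in NODE_REGISTRY.items()
        if any(out in allowed for out in spec["outputs"])
        and query.lower() in name.lower()          # the searchable part of the popup
    )

print(relevant_nodes("Color"))            # ['Image Texture', 'Math', 'Noise Texture']
print(relevant_nodes("Vector", "comb"))   # ['Combine XYZ']
```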


Recap

The node system in Blender can not only make Blender vastly more powerful and flexible, but also much easier to use, if we combine nodes with the asset system, high-level controls in the Properties Editor and viewport gizmos, and introduce a few key improvements to the node editors themselves.

  • We can add high level controls inside the Properties Editor, using a system similar to Group Nodes
  • This works best if we have a built-in assets system so users don't have to build all this from scratch
  • This in turn means that some users don't even NEED to mess around with nodes in many simple cases, even though they are using nodes indirectly by adjusting the exposed values inside the Properties Editor.
  • Gizmos can add visual interactive controls in the viewport, to more directly control nodes
  • A few key improvements to the node editors can go a long way to make using nodes easier

Details

Type: Design

Event Timeline

William Reynish (billreynish) lowered the priority of this task from Needs Triage by Developer to Normal.Jul 16 2019, 11:26 PM
William Reynish (billreynish) changed Type from Bug to Design.

Great proposal, this will be a huge game changer for Blender.

From what I read in the linked wiki, I think one important feature to highlight is what is described as "Mesh Groups", and hopefully also "Curve Groups" for Bezier curve objects.

They would essentially be an upgrade to the current *Vertex Groups* feature, and would be a valuable tool for procedural modelling, especially by providing some form of "selection filtering" if nodes/modifiers are capable of taking them as input.

They could also provide a great opportunity for cleanup, essentially replacing all the currently excessive per-edge data layers - Edge Creases, Bevel Weights, UV Seams, Sharp Edges, among others - with a generic layer system supporting arbitrary data. The ability to manipulate all of these with nodes is also important, not only in terms of inclusion/exclusion operations, but also in assigning arbitrary values, for example to the vertices resulting from an extrude operation.
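A rough sketch of the generic attribute-layer idea (an illustrative data structure only, not an existing Blender API): one mechanism for crease, bevel weight, seams, or any custom mask, on any element domain, holding arbitrary values that nodes can read and write:

```python
class AttributeLayer:
    """Named layer of arbitrary per-element values on a chosen domain."""
    def __init__(self, name, domain, default=0.0):
        self.name = name          # e.g. "crease", "bevel_weight", "my_mask"
        self.domain = domain      # 'VERTEX', 'EDGE' or 'FACE'
        self.default = default
        self.values = {}          # element index -> arbitrary value

    def set(self, index, value):
        self.values[index] = value

    def get(self, index):
        return self.values.get(index, self.default)


# Instead of hard-coded per-edge layers, everything is just a named layer a node can use:
crease = AttributeLayer("crease", 'EDGE')
extrude_mask = AttributeLayer("extruded", 'VERTEX')

crease.set(12, 1.0)               # what Edge Crease does today
for new_vert in (40, 41, 42):     # e.g. tag vertices created by an extrude node
    extrude_mask.set(new_vert, 1.0)
```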

If the user wishes to ‘freeze’ the node tree, they can do so by running a ‘Freeze Nodes’ operator, or by going to Edit Mode, which will automatically prompt the user to freeze the mesh.

That doesn't have to be the case, as any operator called from within Edit Mode could add a corresponding node to the tree instead, keeping the process non-destructive. It's also important to be able to access edit mode to just select components (points, faces) to work on, or to simply transform a selection. Such an operation could be stored in a 'transform' node.

We can implement this by adding a series of built-in special 'gizmo nodes' which can be used as inputs for the node tree. Examples of gizmo nodes are the Location Gizmo, Direction Gizmo and others. A toggle on these nodes can show or hide these gizmos in the viewport.

I love the idea of having gizmos attached to some nodes - after all, you'd expect to see the *extrude* gizmo when selecting the *extrude operator* node within the node editor. However, I'm not sure whether simply keeping those 'gizmo toggles' next to the relevant node sockets wouldn't be simpler: say a *spin operator* node has an *origin* vector input - the toggle for the corresponding gizmo could just be placed next to the value field / socket. These would probably be mutually exclusive too, so as not to end up with a cloud of gizmos floating around in the viewport? Or maybe gizmos could be spawned simply by selecting the node?

However, you do not touch on the subject of whether or not a node tree is attached to an object, and if so, whether the object type (hence data structure) is fixed. Or could the object type be dynamic, dependent on the output node type (mesh output, voxel output, curve output...)? Asking this because going from bmesh to volume, to particles, etc. could be very powerful.

Great points all over, and I dig what you suggest UI-wise to enhance the nodes themselves. I'd add to that list the ability to have a real interface inside of nodes, such as panels, dividers, etc. Of course, that would just be a nice touch, nothing strictly necessary.

That doesn't have to be the case, as any operator called from within Edit Mode could add a corresponding node to the tree instead, keeping the process non-destructive. It's also important to be able to access edit mode to just select components (points, faces) to work on, or to simply transform a selection. Such an operation could be stored in a 'transform' node.

Yes, ideally Edit Mode could still be used for non-destructive modeling. I expect that would not be easy to do in practice.

And yes, you should indeed be able to enter Edit Mode for objects with modifier nodes, although I don’t expect it’s easy to make it so you can actually select and interact with generated mesh data this way. Probably this aspect would work somewhat like the current modifiers, although a more advanced approach would be to integrate the destructive and non-destructive workflows more.

In theory this could be done, so that any mesh editing operation was automatically mapped to a node operation.

How does this tie in with the future of rigging? Currently it's very hard to have a base set of deforming bones and swap the rig around it, similar to what Source Filmmaker does. You can't swap the Object data because the dropdown doesn't allow for that, nor can you do it by swapping the Armature data, because constraints and other rigging values are stored at the object/bone level, not on the armature. You have to duplicate the Armature object, along with all its child objects, if you don't want to rebind each object to it. Could rigs be their own datablock or something like that? I heard stuff about rig compilation; would the new nodes even allow for that?

Constraint nodes will most likely have to live at a higher level than the bone/object level. With Blender’s architecture it’s not 100% clear how to do this. One solution is to store them at the armature level, although then they cannot be supported by plain objects, which doesn’t seem acceptable. The other obvious solution is to store them somehow in Collections, which can contain objects and are portable. But what then happens to objects that live in multiple collections is not obvious.

It would also be great to have Python-defined nodes - ones that run like operators.

Indeed, however there may be issues with the single-threaded nature of Python. Node trees should be as fast as possible, by using multiple cores and even using things like JIT compiling and GPU execution. Python nodes may conflict with this.

@Jacques Lucke (JacquesLucke) perhaps you could expand on this?

Indeed, however there may be issues with the single-threaded nature of Python. Node trees should be as fast as possible, by using multiple cores and even using things like JIT compiling and GPU execution. Python nodes may conflict with this.
@Jacques Lucke (JacquesLucke) perhaps you could expand on this?

You said the main thing already, CPython is inherently single threaded due to the GIL (global interpreter lock). Nevertheless, there are many cases in which it can be very useful to use Python in nodes. I think it is likely that we will be able to use Python in certain kinds of nodes at some point, but it is not something we focus on right now.
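For reference, Blender's current Python API already allows defining custom node types for custom node tree types (this is how add-ons like Animation Nodes work); such nodes run interpreted Python and are therefore limited by the GIL, as discussed above. A minimal sketch; the tree and node names are just examples:

```python
import bpy
from bpy.types import Node, NodeTree


class MyNodeTree(NodeTree):
    bl_idname = "MyNodeTree"
    bl_label = "My Node Tree"
    bl_icon = 'NODETREE'


class ScaleNode(Node):
    bl_idname = "MyScaleNode"
    bl_label = "Scale"

    factor: bpy.props.FloatProperty(name="Factor", default=2.0)

    @classmethod
    def poll(cls, ntree):
        # Only allow this node inside our custom tree type.
        return ntree.bl_idname == "MyNodeTree"

    def init(self, context):
        self.inputs.new('NodeSocketFloat', "Value")
        self.outputs.new('NodeSocketFloat', "Value")

    def draw_buttons(self, context, layout):
        layout.prop(self, "factor")


for cls in (MyNodeTree, ScaleNode):
    bpy.utils.register_class(cls)
```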

I'm hoping there will be two types of bone nodes.

One node graph at the armature level, for constructing the armature (i.e. where you can do the things you would currently do in Edit Mode), and one at the pose-bone level: a node graph for each pose bone to do the things you would currently do with constraints. (I think the node incarnation of bone constraints should be called something else, like Solver nodes, since they wouldn't really be constraining anything; they would just output some information that you can choose to use however you want.)

Sorry if this is not on topic though.

This is exciting stuff! And I love the idea of the relevant nodes spawning as gizmos. Really looking forward to seeing what the idea looks like practically. I'm for any and all ability to control the nodes as visually as possible.

I recently did a project using Animation Nodes, and while it's powerful, it was taxing to experiment with what I thought would be simple changes.

E.g. I wanted to offset the animation for each individual word, but figuring out how to do that and setting it up was a huge hassle, whereas it would be quite easy and straightforward to do with a Dope Sheet.
Of course, I admit it could be because I don't quite know how to use all the nodes (there are a lot!), but I really hope the new node system for Blender will be far more intuitive and make simple animations and adjustments easy.

Node systems are powerful, but artists like me often find themselves lost in the logic of their own node setups as soon as they become a little complicated. That's why, with AN, I appreciated things like the "Interpolation Viewer".

It would be great to take even things like that further: not merely showing the result of the node setup, but also allowing the artist to manipulate the displayed graph directly to affect the values.

Excuse my basic knowledge of the subject, but when you say "everything nodes", does that also mean nodes at the data-block level, where an object can have different atomic nodes such as an attributes node, shape node, transform node, etc. (a.k.a. DG & DAG nodes, something like in Maya or Softimage)? That gives more granular control over the scene data and over how information flows for hierarchies and relations. Or is it something more high level?