The forthcoming ‘Everything Nodes’ project is bound to have large implications for how we use Blender for many common tasks. The purpose of this document is to add more specificity to this area, and to serve as a starting point for further design work.
This design document is meant as a counterpoint to more implementation-focused design work, focusing instead on the end-user experience and how the various parts fit together.
The overall aim of the Everything Nodes system is twofold: to add tremendous amounts of new low-level power and flexibility, and to add high-level simplicity and ease of use.
Nodes will allow Blender to become fully procedural, meaning that artists will be able to efficiently build scenes that are orders of magnitude more complex and advanced, with full non-linear control. The object-oriented nature of node systems means that users can use, re-use and share node systems and node groups without having to start from scratch.
We should aim to make this system a foundational, integrated one, core to many Blender features and workflows, rather than something tacked on at the side with limited scope and integration.
We should also aim to fully replace the previous systems, such as the old modifier and particle systems. Otherwise we end up with multiple competing systems, which will be hard to maintain for developers and confusing for users.
Additionally, nodes, assets and properties should work together to provide an integrated, holistic system that works as one.
Currently, we use nodes for materials, compositing and textures (deprecated).
In addition to these, the Everything Nodes concept would add nodes for many more areas:
- Modifiers & procedural modeling
- Particles & hair
- Physics & Simulations
- Constraints & kinematics
- ? How do we define the borders between these systems?
- ? Modifiers, particles, hair and materials can all be attached to objects, but by definition this is not the case for constraints. These need to operate on a higher level. How and where does this work exactly?
- ? How can you communicate from one node tree type to another? You might want to drive a modifier and a material property with the same texture, for example.
- ? Particles currently sort of integrate with the modifier stack. If these things are in separate node trees, how does this work exactly? Are particles a separate object type which then references emitters?
Node-based modifiers may seem like a curiosity until you realize how powerful they can be. The old modifier stack works well enough for simple cases of stringing a few modifiers together, but as soon as you want to do more complex generative procedural modeling, the limitations of the stack become apparent.
Node-based modifiers will allow:
- Much more powerful use of textures to drive any parameter
- Can do more flexible trees, rather than just a stack (this is needed for generative modeling)
- Much more powerful procedural animations can be created (see the Animation Nodes addon for example)
- ? Currently, the modifier stack allows users to toggle modifiers for the viewport and render result separately. Nodes don't have this feature. It could be added, but how? Via separate node tree outputs, or bypass toggles on each node?
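The difference between a stack and a tree can be made concrete with a minimal sketch. This is plain Python, not the Blender API; all class names and the vertex-count values are hypothetical, chosen only to illustrate why branching graphs enable generative modeling:

```python
# Minimal sketch (plain Python, not the Blender API): why a tree is more
# expressive than a stack. All names and values here are hypothetical.

class Node:
    def __init__(self, name, op, *inputs):
        self.name = name      # label for debugging
        self.op = op          # function combining the input values
        self.inputs = inputs  # upstream nodes (a stack allows exactly one)

    def evaluate(self):
        return self.op(*(n.evaluate() for n in self.inputs))

# A stack can only express a linear chain: each step has one input.
cube = Node("Cube", lambda: {"verts": 8})
mirror = Node("Mirror", lambda m: {"verts": m["verts"] * 2}, cube)

# A tree can branch and merge: a Boolean takes *two* upstream meshes,
# which a linear stack cannot represent.
sphere = Node("Sphere", lambda: {"verts": 32})
boolean = Node("Boolean",
               lambda a, b: {"verts": a["verts"] + b["verts"]},
               mirror, sphere)

print(boolean.evaluate())  # the whole graph re-evaluates on demand
```

Any parameter in such a graph could equally be driven by a texture node, which is what makes the tree form so much more powerful than the stack.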
Currently in Blender, as soon as you create a primitive, the settings are immediately baked and cannot later be changed. Node-based modifiers have the potential to finally address this issue. Here’s how:
- When the user adds any primitive (e.g. UV Sphere, Cone, Cylinder), they see the usual operator controls for adjusting the settings and values
- However, rather than baking those settings into a static mesh, they are simply stored as settings inside the modifier node tree
- These settings can be changed at any time, and you can build modifiers on top of this node. For example, you can use these primitives as inputs for boolean operations and still change the number of segments at any time.
- If the user wishes to ‘freeze’ the node tree, they can do so by running a ‘Freeze Nodes’ operator, or by going to Edit Mode, which will automatically prompt the user to freeze the mesh.
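The lifecycle described above can be sketched in a few lines of plain Python. This is not the Blender API; the class, the freeze helper and the vertex-count formula are hypothetical stand-ins for the proposed behaviour:

```python
# Sketch of a non-destructively editable primitive (hypothetical names,
# not the Blender API): settings live on the node and can be changed at
# any time; 'freezing' bakes the current result into static mesh data.

class UVSphereNode:
    def __init__(self, segments=32, rings=16):
        self.segments = segments
        self.rings = rings

    def evaluate(self):
        # Rough vertex count for a UV sphere: one vertex per
        # segment/ring intersection, plus the two poles.
        return {"verts": self.segments * (self.rings - 1) + 2}

sphere = UVSphereNode(segments=32, rings=16)
sphere.segments = 8        # still editable after creation
mesh = sphere.evaluate()   # re-generated from the new settings

def freeze(node):
    # 'Freeze Nodes': replace the procedural node with its baked result.
    return dict(node.evaluate())

baked = freeze(sphere)     # now a plain mesh; the settings are gone
```

Entering Edit Mode would trigger the same freeze step, after prompting the user as described above.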
Keep a stack-based UI for simple cases?
For simple cases where all you want is to add one or two modifiers to an object, we could keep a stack-based modifier UI as well. This would not be an entirely separate modifier system, but simply a different view of the same underlying modifier node tree, organized for the user as a stack. However, this presents a number of challenges:
- You can really only represent a simple string of nodes this way - not anything complex.
- Many atomic types of nodes don’t make sense in a stack.
Probably the easiest way to add this is to let users start with the stack-based UI and then graduate to a full node tree:
Once graduated, you'll have to work inside the Node Editor, but as the following section describes, we can include a much smarter way to expose node trees in the Properties Editor:
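The rule for when a node tree still fits in a stack view can be stated precisely: the graph must be a single linear chain. A small sketch of that check, with a hypothetical data model where links are simply (from, to) pairs:

```python
# Sketch: deciding whether a modifier node tree can be shown as a stack.
# A stack view only works if the graph is one linear chain: every node
# has at most one input link and feeds at most one other node.
# (Hypothetical data model: links as (from_node, to_node) pairs.)

def is_stack_compatible(nodes, links):
    incoming = {n: 0 for n in nodes}
    outgoing = {n: 0 for n in nodes}
    for src, dst in links:
        outgoing[src] += 1
        incoming[dst] += 1
    return all(incoming[n] <= 1 and outgoing[n] <= 1 for n in nodes)

# A plain chain Cube -> Bevel -> Mirror still fits in a stack:
print(is_stack_compatible(
    ["Cube", "Bevel", "Mirror"],
    [("Cube", "Bevel"), ("Bevel", "Mirror")]))      # True

# A Boolean merging two branches forces the full node editor:
print(is_stack_compatible(
    ["Cube", "Sphere", "Boolean"],
    [("Cube", "Boolean"), ("Sphere", "Boolean")]))  # False
```

The 'graduation' moment is simply the first edit that makes this check fail.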
Properties Editor & high level control
Nodes allow for far more complex control and power. But how can we package this power in a way that stays simple and easy to use?
For materials, we already mirror the node tree inside the Properties editor, but in my estimation this works poorly in anything but the very simplest cases. We can do better.
As it turns out, we have actually already solved this: Group nodes are a beautifully simple and powerful method of packaging low-level complexity behind a higher-level interface. This system allows users to hide away lots of complexity and expose only a few useful parameters. We can expand on this general concept by making it so that entire node trees, not just Group nodes, can expose a small subset of parameters in the Properties Editor.
The nice thing about this solution is that casual users don't need to open the node editors at all, but can just tweak the high-level exposed inputs.
The node tree defines a series of high-level inputs.
These will be exposed in the Properties Editor, like so:
Material nodes, with exposed parameters in the Properties Editor:
Modifier nodes, with exposed parameters in the Properties Editor:
Particle nodes, with exposed parameters in the Properties Editor:
This approach makes even more sense if we provide a series of starting points. This is where assets come in:
Open question: would we then remove the old nodes-in-properties system from the Material Properties?
I think yes, as the above system is cleaner and scales much better, although in theory we could keep both views inside the Material Properties.
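The exposure mechanism described above can be modeled in a short plain-Python sketch. The class, method names and the 'Rusty Metal' parameters are hypothetical, purely to show the split between internal parameters and the few exposed ones:

```python
# Sketch of exposing a subset of node-tree inputs to the Properties
# editor, in the spirit of Group nodes (all names are hypothetical).

class NodeTree:
    def __init__(self):
        self.params = {}    # all internal parameters of the tree
        self.exposed = []   # the few names shown in the Properties editor

    def expose(self, name, default):
        self.params[name] = default
        self.exposed.append(name)

    def properties_panel(self):
        # Only exposed inputs appear outside the node editor.
        return {name: self.params[name] for name in self.exposed}

rusty_metal = NodeTree()
rusty_metal.params["internal_noise_scale"] = 4.2  # hidden detail
rusty_metal.expose("Rustiness", 0.5)
rusty_metal.expose("Cavity Dirt", 0.3)

print(rusty_metal.properties_panel())
```

Casual users only ever see the exposed pair of sliders; the internal noise parameter stays inside the node tree.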
Assets can play an important role in the Everything Nodes system. With node systems exposing a smaller subset of parameters in the Properties, it makes a lot more sense to supply users with many more starting points. The idea is that casual users won't have to dive into the nodes - they can just add materials, particles, modifiers etc. and adjust the exposed parameters. Users will only have to delve into the node systems if they wish to deviate from what the available assets allow.
For more on assets, see T54642: Asset Project - UI - Asset Browser
- The user opens the Asset Browser, and navigates to the screw asset. The user drags this asset into the scene. The screw itself is being generated with a node system, and has a small set of high-level user-facing controls exposed in the Properties (head type, length, etc)
- The user may want to have a bent screw, so they open up the nodes and add a Bend deformer node at the end of the node tree
- The user browses the Asset Browser and locates the Rusty Metal material. The user drags this over the screw to apply it. This material has a few parameters exposed (rustiness, cavity dirt, metal type, etc)
While a node-based workflow allows for a fully procedural workflow, it's also more technical and disconnected from directly manipulating items in the 3D Viewport. We can address this by letting nodes spawn interactive gizmos in the viewport.
We can implement this by adding a series of built-in special 'gizmo nodes' which can be used as inputs for the node tree. Examples of gizmo nodes are the Location Gizmo, Direction Gizmo and others. A toggle on these nodes can show or hide these gizmos in the viewport.
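A gizmo node is essentially an input node whose value is edited in the viewport rather than in the node editor. A plain-Python sketch of the idea (the class name, the toggle and the drag interaction are hypothetical, not a real Blender API):

```python
# Sketch of a 'gizmo node' feeding a viewport-editable value into the
# node tree (hypothetical names; not the Blender API).

class LocationGizmoNode:
    def __init__(self, location=(0.0, 0.0, 0.0), show_in_viewport=True):
        self.location = list(location)            # dragged in the viewport
        self.show_in_viewport = show_in_viewport  # the per-node toggle

    def output(self):
        # Downstream nodes read this as a regular vector input socket.
        return tuple(self.location)

gizmo = LocationGizmoNode()
gizmo.location[2] = 2.0  # user drags the gizmo upward in the viewport
print(gizmo.output())
```

A Direction Gizmo would work the same way, outputting a vector edited by rotating an arrow in the viewport.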
With a higher reliance on nodes, it makes sense to make a few key improvements to the node editors themselves, such as the following:
Node trees can easily become quite messy, so a toggle to work in a clean and compact node layout can make things more tidy:
In this mode, we can also make re-ordering nodes much easier by simply allowing nodes to be dragged, which will automatically re-order and re-connect them, like so:
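In the compact layout the chain behaves like an ordered list, so re-ordering reduces to a remove-and-reinsert, with links implied by adjacency. A hypothetical sketch, not actual Blender code:

```python
# Sketch of drag-to-reorder in a compact layout (hypothetical model):
# the chain is an ordered list and links are implied by adjacency, so
# moving a node automatically 're-connects' its new neighbours.

def reorder(chain, node, new_index):
    chain = [n for n in chain if n != node]  # unplug the dragged node
    chain.insert(new_index, node)            # drop it at the new slot
    return chain                             # adjacency = new links

chain = ["Subdivide", "Bevel", "Mirror", "Array"]
print(reorder(chain, "Array", 1))
```

Dragging 'Array' to the second slot yields Subdivide, Array, Bevel, Mirror, with the connections updated implicitly.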
Currently it can be quite a maze to figure out which nodes fit with each other, and you have to dive into long menus to find the relevant ones. We can make this much simpler by making it so that dragging from an input spawns a searchable popup with only the relevant node types. This way you don't have to guess which types of nodes fit the current input socket - they will be right there:
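At its core this is a filter over the node registry, keyed by socket type. A minimal sketch with a hypothetical registry and made-up socket type names:

```python
# Sketch of filtering the add-node search by socket compatibility:
# dragging from an input only offers nodes whose output type matches.
# The registry and socket type names here are hypothetical.

NODE_REGISTRY = {
    "Noise Texture": "TEXTURE",
    "Image Texture": "TEXTURE",
    "Math":          "FLOAT",
    "Combine XYZ":   "VECTOR",
}

def compatible_nodes(input_socket_type):
    return sorted(name for name, out_type in NODE_REGISTRY.items()
                  if out_type == input_socket_type)

# Dragging from a texture input spawns a popup with only texture nodes:
print(compatible_nodes("TEXTURE"))
```

The same filter would also power the search field inside the popup, so typing narrows an already-relevant list.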
The node system can make Blender both vastly more powerful and flexible, and also much easier to use, if we combine nodes with the assets system, high-level controls in the Properties, and viewport gizmos, and introduce a few key improvements to the node editors themselves.
- We can add high level controls inside the Properties Editor, using a system similar to Group Nodes
- This works best if we have a built-in assets system so users don't have to build all this from scratch
- This in turn means that, in many simple cases, users don't even need to touch the nodes, even while using them indirectly by adjusting the exposed values inside the Properties editor.
- Gizmos can add visual interactive controls in the viewport, to more directly control nodes
- A few key improvements to the node editors can go a long way to make using nodes easier