
Everything Nodes UX
Confirmed, Normal · Public · Design

Assigned To
None
Authored By
William Reynish (billreynish)
Jul 16 2019, 11:26 PM

Description

The forthcoming ‘Everything Nodes’ project is bound to have large implications for how many common tasks are done in Blender. The purpose of this document is to add more specificity to this area, and to serve as a starting point for further design work.

We already have a series of technical design docs here: https://wiki.blender.org/wiki/Source/Nodes by @Jacques Lucke (JacquesLucke)

This design document is meant as a counterpoint, focusing instead on the end-user experience and how the various parts fit together.


Goals

The overall aim of the Everything Nodes system is twofold: to add tremendous new low-level power and flexibility, and to add high-level simplicity and ease of use.

Nodes will allow Blender to become fully procedural, meaning that artists will be able to build scenes that are orders of magnitude more complex and advanced in a short amount of time, with full non-linear control. The object-oriented nature of node systems means that users can use, re-use and share node systems and node groups without having to start from scratch.

We should aim to make this system foundational and integrated, core to many Blender features and workflows, rather than something tacked on at the side with limited scope and integration.

We should also aim to fully replace the previous systems, such as the old modifier and particle systems. Otherwise we end up with multiple competing systems, which will be hard to maintain for developers and confusing for users.

Additionally, nodes, assets and properties should work together as one integrated, holistic system.


Node Systems

Currently, we use nodes for materials, compositing and textures (deprecated).

In addition to these, the Everything Nodes concept would add nodes for many more areas:

  • Modifiers & procedural modeling
  • Particles & hair
  • Physics & Simulations
  • Constraints & kinematics

Open Questions:

  • How do we define the borders between these systems?
  • Modifiers, particles, hair and materials can all be attached to objects, but by definition this is not the case for constraints, which need to operate at a higher level. How and where does this work exactly?
  • How can you communicate from one node tree type to another? You might want to drive a modifier and a material property with the same texture, for example.
  • Particles currently integrate, somewhat, with the modifier stack. If these things are in separate node trees, how does this work exactly? Are particles a separate object type which then references emitters?

Modifiers

Node-based modifiers may seem like a curiosity until you realize how powerful they can be. The old modifier stack works well for simple cases of stringing a few modifiers together, but as soon as you want to do more complex generative procedural modeling, its limitations become apparent.

Node-based modifiers will allow:

  • Much more powerful use of textures to drive any parameter
  • Flexible trees rather than just a stack (this is needed for generative modeling)
  • Much more powerful procedural animations (see the Animation Nodes add-on, for example)
  • And more

Open Questions:

  • Currently, the modifier stack allows users to toggle modifiers for the viewport and the render result separately. Nodes don't have this feature; it could be added, but how? Via separate node tree outputs, or bypass toggles on each node?

Parametric Modeling

Currently in Blender, as soon as you create a primitive, its settings are immediately baked and cannot be changed later. Node-based modifiers have the potential to finally address this issue. Here’s how:

  • When the user adds any primitive (e.g. UV Sphere, Cone, Cylinder), they see the usual operator controls for adjusting the settings and values
  • However, rather than those settings simply being baked into a static mesh, they modify the settings inside the modifier node tree
  • These settings can be changed at any time, and you can build further modifiers on top of this node, so you can use primitives as inputs for boolean operations, for example, and still change the number of segments at any time
  • If the user wishes to ‘freeze’ the node tree, they can do so by running a ‘Freeze Nodes’ operator, or by entering Edit Mode, which will automatically prompt the user to freeze the mesh (see the toy sketch below)
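
A toy Python model of this parametric-then-freeze behavior (all names here are hypothetical; this is not the Blender API):

```python
def generate_uv_sphere(segments, rings, radius):
    # Stand-in for real mesh generation; returns only summary data here.
    return {"verts": segments * (rings - 1) + 2, "radius": radius}

class UVSphereNode:
    """Hypothetical primitive node whose settings stay editable."""
    def __init__(self, segments=32, rings=16, radius=1.0):
        self.segments, self.rings, self.radius = segments, rings, radius

    def evaluate(self):
        # Re-generates the mesh from the current settings on every evaluation.
        return generate_uv_sphere(self.segments, self.rings, self.radius)

def freeze(node):
    """'Freeze Nodes': evaluate once and keep only the static result."""
    return node.evaluate()

sphere = UVSphereNode(segments=12)
sphere.segments = 48    # still parametric: settings can change at any time
mesh = freeze(sphere)   # baked: later setting changes no longer apply
```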

Properties Editor & high level control

Nodes allow for far more complex control and power. But how can we package this power in a way that stays simple and easy to use?

For materials, we already mirror the node tree inside the Properties editor, but in my estimation this works poorly in anything other than the very simplest of cases. We can do better.

As it turns out, we have actually already solved this: Group nodes are a beautifully simple and powerful method of packaging low-level complexity behind a higher-level interface. They allow users to hide away lots of complexity and expose only a few useful parameters. This general concept can be expanded by making it so that entire node trees, not just Group nodes, can expose a small subset of parameters in the Properties Editor.
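
For reference, this is roughly how a node group already exposes a parameter subset through Blender's Python API (a minimal sketch using the 2.8-era inputs/outputs interface; the group, material and socket names are invented):

```python
import bpy

# Create a shader node group; only the sockets declared on the group
# become user-facing, everything else stays inside the tree.
group = bpy.data.node_groups.new("Rusty Metal Controls", "ShaderNodeTree")
group.inputs.new("NodeSocketFloat", "Rustiness")
group.inputs.new("NodeSocketFloat", "Cavity Dirt")
group.outputs.new("NodeSocketShader", "Shader")

group.nodes.new("NodeGroupInput")   # feeds the exposed values into the tree
group.nodes.new("NodeGroupOutput")  # returns the final result

# Instancing the group inside a material shows just the two exposed inputs,
# which is exactly the behavior the Properties Editor could mirror.
mat = bpy.data.materials.new("Rusty Metal")
mat.use_nodes = True
node = mat.node_tree.nodes.new("ShaderNodeGroup")
node.node_tree = group
```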

The nice thing about this solution is that casual users don’t need to open the node editors at all; they can just tweak the exposed high-level inputs.


The node tree defines a series of high-level inputs

These will be exposed in the Properties Editor, like so:

Material nodes, with exposed parameters in the Properties Editor:

Modifier nodes, with exposed parameters in the Properties Editor:

Particle nodes, with exposed parameters in the Properties Editor:

This approach makes even more sense if we provide a series of starting points. This is where assets come in:

Open question: Would we then remove the old nodes-in-properties view from the Material Properties?
I think yes, as the system above is cleaner and scales much better, although in theory we could keep both views inside Material Properties.


Assets

Assets can play an important role in the Everything Nodes system. With node systems exposing a smaller subset of parameters in the Properties, it makes a lot more sense to supply users with many more starting points. The idea being that casual users won’t have to dive into the nodes - they can just add materials, particles, modifiers etc and adjust the exposed parameters. Users will only have to delve into the node systems if they wish to deviate from what the available assets allow for.

For more on assets, see T54642: Asset Project: User Interface

Workflow examples:
  1. The user opens the Asset Browser and navigates to the screw asset, then drags it into the scene. The screw itself is generated by a node system and has a small set of high-level user-facing controls exposed in the Properties (head type, length, etc.)
  2. The user may want to have a bent screw, so they open up the nodes and add a Bend deformer node at the end of the node tree
  3. The user browses the Asset Browser and locates the Rusty Metal material. The user drags this over the screw to apply it. This material has a few parameters exposed (rustiness, cavity dirt, metal type, etc)

Particle assets:

Mesh/modifier assets:


Gizmos

While a node-based workflow allows for full proceduralism, it is also more technical and disconnected from directly manipulating items in the 3D Viewport. We can address this by letting nodes spawn interactive gizmos in the viewport.

We can implement this by adding a series of built-in, special 'gizmo nodes' which can be used as inputs to the node tree. Examples of gizmo nodes are the Location Gizmo, the Direction Gizmo and others. A toggle on these nodes can show or hide the gizmo in the viewport.
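
Blender already ships a Python Gizmo API that such nodes could build on. A minimal sketch, assuming the gizmo drives the active object's location as a stand-in for an actual node input (the node hookup itself does not exist yet):

```python
import bpy

class NODE_GGT_location_gizmo(bpy.types.GizmoGroup):
    """Hypothetical viewport gizmo spawned by a 'Location Gizmo' node."""
    bl_idname = "NODE_GGT_location_gizmo"
    bl_label = "Location Gizmo Node"
    bl_space_type = 'VIEW_3D'
    bl_region_type = 'WINDOW'
    bl_options = {'3D', 'PERSISTENT'}

    def setup(self, context):
        gz = self.gizmos.new("GIZMO_GT_move_3d")
        # A real implementation would target the node's vector input;
        # the active object's location stands in for it here.
        gz.target_set_prop("offset", context.object, "location")
        gz.color = 0.3, 0.6, 1.0

bpy.utils.register_class(NODE_GGT_location_gizmo)
```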


Node Editor

With a higher reliance on nodes, it makes sense to make a few key improvements to the node editors themselves, such as the following:

Compact mode

Node trees can easily become quite messy, so a toggle to work in a clean and compact node layout can make things tidier:


In this mode, we can also make re-ordering nodes much easier by simply allowing nodes to be dragged, which will automatically re-order and re-connect them, like so:

Connect pop-up

Currently it can be quite a maze to figure out which nodes fit with each other, and you have to dive into long menus to find the relevant ones. We can make this much simpler by having a drag from an input spawn a searchable pop-up listing only the relevant node types. This way you don't have to guess and search for which types of nodes fit the current input socket; they will be right there:
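
For illustration, the filtering behind such a pop-up could be as simple as matching the dragged socket's type against each node type's outputs (the registry and names below are made up):

```python
# Hypothetical registry mapping node types to their output socket types.
NODE_REGISTRY = {
    "Image Texture":   {"Color", "Alpha"},
    "Math":            {"Value"},
    "Combine XYZ":     {"Vector"},
    "Principled BSDF": {"Shader"},
}

def compatible_nodes(input_socket_type):
    """Node types with at least one output matching the dragged input."""
    return [name for name, outputs in NODE_REGISTRY.items()
            if input_socket_type in outputs]

print(compatible_nodes("Vector"))  # -> ['Combine XYZ']
```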


Recap

The node system can make Blender both vastly more powerful and flexible, and much easier to use, if we combine nodes with the asset system, high-level controls in the Properties editor and viewport gizmos, and introduce a few key improvements to the node editors themselves.

  • We can add high level controls inside the Properties Editor, using a system similar to Group Nodes
  • This works best if we have a built-in asset system, so users don't have to build all this from scratch
  • This in turn means that many users won't even need to touch nodes in simple cases, even though they are using nodes indirectly by adjusting the exposed values inside the Properties editor
  • Gizmos can add visual interactive controls in the viewport, to more directly control nodes
  • A few key improvements to the node editors can go a long way to make using nodes easier

Event Timeline


I'm hoping there will be two types of bone nodes.

One node graph at the armature level, for constructing the armature (i.e. where you can do the things you would currently do in Edit Mode), and one at the pose-bone level: a node graph for each pose bone to do the things you would currently do with constraints. (I think the node incarnation of bone constraints should be called something else, like Solver nodes, since they wouldn't really be constraining anything; they would just output information that you can choose to use however you want.)

Sorry if this is not on topic though.

This is exciting stuff! And I love the idea of the relevant nodes spawning as gizmos. Really looking forward to seeing what the idea looks like practically. I'm for any and all ability to control the nodes as visually as possible.

I recently did a project using Animation Nodes, and while it's powerful, it was taxing to experiment with what I thought would be simple changes.

E.g. I wanted to offset the animation for each individual word, but figuring out how to do that and setting it up was a huge hassle, whereas it would be quite easy and straightforward with a Dope Sheet.
Of course, I admit it could be because I don't quite know how to use all the nodes (there are a lot!), but I really hope the new node system for Blender will be far more intuitive, making simple animations and adjustments easy.

Node systems are powerful, but artists like me often find themselves lost in the logic of their own node setups as soon as they become a little complicated. That's why, with AN, I appreciated things like the "Interpolation Viewer".

But it would be great to take even things like that further: make it not merely show the result of the node setup, but also allow the artist to manipulate the graph directly to affect the values.

Excuse my basic knowledge of the subject, but when you say "everything nodes", does that also include the data-block level, where an object can have different atomic nodes like attribute, shape and transform nodes, etc. (a.k.a. DG & DAG nodes, as in Maya or Softimage)? That gives more granular control over the scene data and how information flows for hierarchies and relations. Or is it something more high-level?

Maybe a little bit off-topic, but this could improve the readability of node systems in general.
Especially with Animation Nodes and the new BParticles, where a lot of nodes are used, it's sometimes difficult to see where nodes are actually connected. The noodle gradient of selected nodes fades into the same grey as every other noodle.

An option to change the noodle gradient of selected nodes could improve this.
At the moment only the wire color can be changed, not the falloff.

Noodles from selected nodes could also be drawn on top of all other noodles.

Example in attached GIF.

Not sure if this has already been considered, but one thing that can really enhance collaborative workflows, and the re-use of assets across multiple shots or sequences, is group nodes that read their contents (and what is exposed control-wise on the top level of the group) from shared files on disk, particularly if those files can be managed as assets.

Basically, group nodes that reference a file on disk, where the file dynamically defines the contents of the group when the scene is read in, or when the user asks for the group's contents to be reloaded on demand. You should also be able to publish from the node back to the file on disk, to share its contents with other scenes or users.

This potentially allows all sorts of things, from shared procedural assets through to shared sequence lighting setups.
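
Blender's library linking already provides part of this. A minimal sketch of pulling node groups from a shared .blend on disk (the path and the ASSET_ naming convention are invented):

```python
import bpy

shared = bpy.path.abspath("//shared/procedural_assets.blend")

# Link (not append) node groups, so that edits to the shared file propagate.
with bpy.data.libraries.load(shared, link=True) as (data_from, data_to):
    data_to.node_groups = [name for name in data_from.node_groups
                           if name.startswith("ASSET_")]

# Reloading the library on demand picks up upstream edits, much like the
# proposed on-demand refresh of file-backed group nodes.
for lib in bpy.data.libraries:
    if bpy.path.abspath(lib.filepath) == shared:
        lib.reload()
```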

Will it be possible to insert new nodes with add-ons, so that a user's own code can be included in a more modular fashion? It would be a lot easier than digging around in Blender's source code, and would lower the barrier to entry for scripters and riggers who need more power but don't want to be actual software developers.
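
For custom node trees this is already possible today through the Python API. A minimal sketch modeled on the bundled "Custom Nodes" script template (the node and tree identifiers are invented):

```python
import bpy

class SimpleScaleNode(bpy.types.Node):
    """Add-on-defined node that multiplies a value by a factor."""
    bl_idname = "SimpleScaleNode"
    bl_label = "Simple Scale"

    factor: bpy.props.FloatProperty(name="Factor", default=2.0)

    @classmethod
    def poll(cls, ntree):
        # Restrict this node to a hypothetical add-on-defined tree type.
        return ntree.bl_idname == "CustomNodeTree"

    def init(self, context):
        self.inputs.new("NodeSocketFloat", "Value")
        self.outputs.new("NodeSocketFloat", "Value")

    def draw_buttons(self, context, layout):
        layout.prop(self, "factor")

bpy.utils.register_class(SimpleScaleNode)
```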

So happy that the Blender community is considering procedural systems; big kudos from the Houdini camp.

"How can you communicate from one node tree type to another? You might want to drive a modifier and a material property with the same texture, for eg."

I want to give a big nod to the @attribute system for internal communication between parametric nodes across all contexts; maybe Blender deserves a variation of this of its own.

In general, I would suggest looking into Houdini's mechanics and how it deals with different contexts (rendering, compositing, particles, modeling, etc.) and their communication. The Geo Spreadsheet is where the party's at. Also, Grouping :) There's simply a lot about Houdini that I'd wish to see in Blender, which I think would make it more powerful and more accessible to nerdy people.

"In general, I would suggest looking into Houdini's mechanics and how it deals with different contexts (rendering, compositing, particles, modeling, etc.) and their communication. The Geo Spreadsheet is where the party's at."

"Also, Grouping :) There's simply a lot about Houdini that I'd wish to see in Blender, which I think would make it more powerful and more accessible to nerdy people."

Grouping, I imagine, would become a necessity as soon as procedural modeling is in play; off the top of my head I don't see any other obvious way of specifying elements (= components) for a given operator/node, other than "group by volume" and that sort of thing.
I think @Julian Eisel (Severin) had started work on a spreadsheet-type editor a while ago; did you have this kind of usage in mind? (geometry attributes)

"I think @Julian Eisel (Severin) had started work on a spreadsheet-type editor a while ago; did you have this kind of usage in mind? (geometry attributes)"

Yes: point, primitive, volume and detail attributes that are interchangeable via attribute-transfer nodes. This is the powerhouse of Houdini; it allows a lot of low-level modifications to be made, and much greater procedural control over various transformations, copy stamping and (my favorite) volume operations, which allow forces to be added to geometry. Imagine each point (a point in Houdini is different from a vertex) having, aside from position, normals, etc., also orientation, velocity, density and hundreds of other potential, context-sensitive attributes.

"Grouping, I imagine, would become a necessity as soon as procedural modeling is in play; off the top of my head I don't see any other obvious way of specifying elements (= components) for a given operator/node, other than 'group by volume' and that sort of thing."

Point grouping is even more powerful. There are also groups by expression, groups by attribute, and a lot of different grouping nodes altogether.

One more powerful procedural backbone is the parameter-reference system, which allows users to directly reference a parameter of one geometry node (the Y size of a sphere, for example) in another node. There is also the possibility of adding expressions (a sine function, for example) to a parameter, so you get animation just by pressing play, without any keyframing.
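
Blender's driver system already offers a limited form of both ideas, parameter references and expression-based animation. A minimal sketch, assuming the default Cube plus a second object named "Sphere":

```python
import bpy

cube = bpy.data.objects["Cube"]

# Expression-based animation: Z location oscillates during playback,
# with no keyframes involved.
fcurve = cube.driver_add("location", 2)
fcurve.driver.expression = "sin(frame / 10)"

# Parameter reference: drive the cube's X scale from another object's
# Y scale, similar to referencing one node's parameter in another.
fcu = cube.driver_add("scale", 0)
var = fcu.driver.variables.new()
var.name = "src"
var.targets[0].id = bpy.data.objects["Sphere"]
var.targets[0].data_path = "scale[1]"
fcu.driver.expression = "src"
```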

I could go on forever... It's a powerful platform, but I would love to see Blender become a competitor to Houdini, and to have them be more interchangeable (Houdini is really weak at basic poly modeling, which I think Blender takes better care of).

I would like to be able to turn sets of nodes on and off easily. Either groups of nodes could show up in Collections, to be hidden there, or a Group Node in the Node Editor would have a switch.

@Karl Mochel (kalmdown) you can already mute nodes. And there will of course be more than one node tree: in one scene you can have many particle systems, for example, and toggle each one in the Outliner if you only wish to see a particular one at any time.

I would like more control to mute groups or node trees, independent of particle systems. Assuming AN or an AN-like system is added, there is no way to stop specific node groups from running. Since I group nodes by function, being able to click a run/don't-run toggle per group would be logical.

I hope that the modeling of meshes themselves will not change much in the regular editor and will remain as it is now, simply being recorded into the node system in a separate editor.
Houdini, in my opinion, has one drawback: it is not convenient to work with UVs there. If UV workflows don't change radically, it will be very convenient when everything is on nodes.

William Reynish (billreynish) changed the status of subtask T74967: Geometry Nodes Design from Needs Triage to Confirmed. (Mar 20 2020, 12:06 PM)

@William Reynish (billreynish)

Just an addendum to the "Stack Based" node design:

Nodes could be better visualized in a custom UI "stack" by using a UI approach similar to Houdini's "Digital Assets" (HDAs).
This approach generates a higher-level UI, which is useful for artists when importing their procedural ("digital") assets into game engines to design levels. Artists generally only need a few values to tweak at a time, and giving them control over which values are displayed by default would be a better approach (imo) than "stacks" as we currently see them.

Check it out:

https://www.youtube.com/watch?v=SiT4r22BWY8&t=10m35s

The user interface can be composed of any UI element the programmer/artist wants (that Blender supports), leading to better usability, greater re-usability of the node programming, and an easier-to-understand UI for the node-based "tool" / "modifier" overall. We only need to think of the modifier as a reusable "tool" instead of a "pre-programmed 'time-saver' feature that still fits in with the old paradigm", as we currently look at it.

This approach can also fit in well with the Gizmos / Exposed Parameters setup we've already got in the design, by simply offering those "Gizmos" or "Parameters" as options to include, along with whichever UI element (i.e. button/slider/label/gizmo) would best suit them in the UI the artist desires.

To summarize:

This approach would keep Blender simple and straightforward to use (for new users) while also creating a better approach to UI even for existing modifiers. Many controls on some modifiers are simply not relevant to some artists/projects, so providing the option to hide them, potentially by default (and thereby "simplify" the whole interface), would be a step up for users who prefer the "old" modifier stack to the "new" node-based approach to modifiers. They can essentially have the equivalent of a "modifier preset" using a system like this.

Thoughts?

Not sure if this is within scope or currently being considered, but one thing in Blender that isn't currently modifier-based, yet would very much benefit from a modifier-based workflow, is Bezier curve geometry generation.

These settings currently belong to object-data-level data-blocks, but having them as separate, non-destructive operations on top of the object data would be preferable.
Materializing the beveling, extruding, tapering and offsetting of Bezier curves as nodes/modifiers would definitely be powerful, if not a must.

This curve/line-based modifier approach to procedural modeling is what people use Houdini for the most.

I would agree that it is a MUST in a Blender node-based modeling workflow.

Regarding performance of Python nodes, could Numba help with that? It JIT-compiles Python code and provides a speed boost similar to Cython, while being able to run on multiple cores. It can also compile ahead of time, which could be useful for nodes with static inputs/outputs, and it supports GPU computation with CUDA (but not OpenCL). Some Python developers have discussed the pros and cons of using Cython/Numba/C++ for speeding up low-level code, and Numba made a good case for adoption. I personally used it on small projects and the speed-up was great for little effort. However, this would introduce new dependencies (it requires NumPy and relies on llvmlite for JIT compilation; total size is around 30 MB). I am quite new here and have no idea whether there are strict guidelines/requirements in that regard. Anyway, it could be interesting to look into if we want to support Python nodes without a significant loss of performance.
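
As a rough illustration of the kind of kernel this would help with (the function and data are made up, not an actual Blender node):

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True, cache=True)
def falloff_weights(points, center, radius):
    """Per-point linear falloff, JIT-compiled and spread across cores."""
    n = points.shape[0]
    out = np.empty(n)
    for i in prange(n):  # prange distributes iterations over threads
        d = 0.0
        for k in range(3):
            diff = points[i, k] - center[k]
            d += diff * diff
        out[i] = max(0.0, 1.0 - np.sqrt(d) / radius)
    return out

pts = np.random.rand(1_000_000, 3)
weights = falloff_weights(pts, np.array([0.5, 0.5, 0.5]), 0.5)
```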

I think some people are flocking here to give feedback about features instead of UX because the main task T68980 (Everything Nodes) was closed as a duplicate of T73324 (Particle Nodes), even though particles are only a fraction of the Everything Nodes project (?).

One thing I saw in the documentation is that the functions are being abstracted as much as possible, which is great for increased control over all aspects of a mesh/particle/simulation/material/etc. But I can see this clearly becoming too confusing and overly complex, just as Animation Nodes currently is, for a non-programmer or for someone not acquainted with the specific terms created for Everything Nodes (such as primitives/functions/attributes).
My suggestion is to keep, just as we currently have in Blender, a set of standard modifiers, which would in fact be "node groups" of the functions required for the desired operation, that the user can select (for example a Move or Extrude modifier), instead of having to manually insert all of the required nodes (input/convert/group/operate/output). (Of course the user could still open the modifier group to create more inputs/outputs, or "ungroup" it to expose all nodes for further rearranging/usage.)

Also, I think it is imperative for every operation in Blender to have a modifier (a.k.a. "group node") equivalent, including regular Move/Rotate/Scale operations, as well as edit-mode ones such as Unwrap, Join, Split, Extrude, Mark/Clear Seam/Sharp (maybe by angle or feature), Bevel, Shear, Slide, Push/Pull, Connect, Create/Delete Vertex Group (as well as Assign/Remove From Vertex Group), Create/Delete Vertex Color, Create/Delete UV Map, etc.
I can also see the usefulness of having the "By Angle/Weight/Vertex Group" controls be a separate node, instead of sitting inside each operation node (e.g. Bevel), so that you could apply any of the above operations to only a set of primitives (vertices, edges, tris or quads) depending on the angle/weight/vertex group, etc. For example: only UV-unwrap a set of vertex groups that were defined (using an Assign To Vertex Group node) by the angles to adjacent primitives (using a "By Feature" node).

In the case of an Unwrap modifier (a group containing a Create/Assign UV Map node and an Unwrap node), knowing that the operation can be taxing on very complex objects if it runs after every change to the node tree, the Unwrap node could perhaps have a special option to be updated manually, or only when the nodes directly preceding it in the chain have changed.
Also, the options Project From View and Project From View (Bounds) could be replaced by Project From Camera and Project From Camera (Bounds), with an option to select which camera. This way you could not only create and modify an object parametrically, but also auto-seam it (maybe based on features like angle), UV-unwrap it and assign procedural/image-based materials to it, all inside the node editor.
