
Geometry Nodes Design
Closed, Resolved · Public · Design

Authored By
William Reynish (billreynish)
Mar 20 2020, 12:06 PM

Description

A big part of the Everything Nodes project is to convert the modifier stack into a much more powerful Geometry Nodes system. This document explains, from a user point of view, how that will work.

Geometry nodes encompass what Modifiers used to be, but where modifiers only allowed you to modify geometry, Geometry nodes also allow you to create geometry.

Transition

Moving to a node-based system will take time, so we will probably need some way to smooth the transition and keep old files working for a while. One way to solve this is to add a Use Nodes toggle for modifiers, just like in other areas:

If disabled, the modifier stack will continue to work, while developers continue to improve and build the geometry nodes system. This approach would allow us to merge the geometry nodes sooner and not worry as much about backwards compatibility. Note, however, that it's possible no new modifiers will be added to the stack.

How are nodes different?

In order to convert the modifier stack to a node-based system, we can't just do a straight 1:1 port of each modifier to a node. In many cases, nodes are different, because you have the concept of inputs and outputs. For this reason, modifiers as nodes would have to become more atomic, generally simpler, and allow users to plug in whatever they need to get the desired effect.

Wherever a modifier has a property for limiting the result to a vertex group, or options for performing an operation along some direction or vector, those should become generic inputs instead, so that you can plug in whatever you want and take full advantage of the power of nodes.

Take this example for the Displace modifier:

The Direction and Vertex Group properties should simply be inputs, and the Texture property is also moot, because with nodes you can use textures to drive anything.

As a node, Displace is a lot simpler:

An example where a UV Sphere is generated and a noise displacement is applied along its normals could look like this:
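
As a rough plain-Python/numpy sketch of that data flow (the node and socket names above are design mock-ups, not actual Blender API; the functions below are hypothetical stand-ins):

    # Illustrative stand-ins only: UV Sphere -> Displace along normals,
    # with the strength driven by a noise value per vertex.
    import numpy as np

    def uv_sphere(rings=16, segments=32, radius=1.0):
        """Stand-in for a 'UV Sphere' node: returns an array of vertex positions."""
        verts = []
        for i in range(rings + 1):
            theta = np.pi * i / rings
            for j in range(segments):
                phi = 2.0 * np.pi * j / segments
                verts.append((radius * np.sin(theta) * np.cos(phi),
                              radius * np.sin(theta) * np.sin(phi),
                              radius * np.cos(theta)))
        return np.array(verts)

    def noise(points, scale=3.0):
        """Stand-in for a noise texture sampled at each vertex position."""
        return np.sin(points * scale).sum(axis=1) * 0.5

    def displace(verts, direction, strength):
        """Stand-in for the Displace node: offset each vertex along 'direction'."""
        return verts + direction * strength[:, None]

    verts = uv_sphere()
    normals = verts / np.linalg.norm(verts, axis=1, keepdims=True)  # sphere normals
    result = displace(verts, direction=normals, strength=0.1 * noise(verts))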

We can compare some more examples below:

For Boolean, currently you specify an object directly in the modifier:

As a node, this is not needed - you would simply have two inputs. You can then plug in anything you wish here:

Here is a little overview of a few example nodes:

Selections / Clusters

When modelling destructively with the Edit Mode tools, you always operate on the selected items (vertices, edges or faces). In non-destructive modelling we also need this concept. We could call these Clusters. A Cluster is simply a set of vertices, edges or faces. How do you 'select' items? Various nodes can get, generate or manipulate clusters. Here are some examples:
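
As a rough illustration in plain Python (hypothetical names, not Blender API), a Cluster is just a set of element indices, and 'selection' nodes are rules that produce or combine such sets:

    # A Cluster as a set of vertex indices; rule-based nodes produce and combine them.
    import numpy as np

    def cluster_above(verts, z_min):
        """A selection node: every vertex whose Z coordinate exceeds z_min."""
        return set(np.nonzero(verts[:, 2] > z_min)[0].tolist())

    def cluster_facing(normals, direction, min_dot=0.8):
        """Another selection node: vertices whose normal points roughly along 'direction'."""
        d = np.asarray(direction, dtype=float)
        d = d / np.linalg.norm(d)
        return set(np.nonzero(normals @ d > min_dot)[0].tolist())

    verts = np.random.rand(200, 3)
    normals = verts / np.linalg.norm(verts, axis=1, keepdims=True)
    top = cluster_above(verts, 0.8)
    upward = cluster_facing(normals, (0.0, 0.0, 1.0))
    combined = top & upward  # clusters compose with ordinary set operations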

The concept of Clusters also allows us to make certain improvements over the modifiers. Often, the current modifiers have specific settings that apply some fixed operation to the newly generated geometry. See, for example, here in the Bevel modifier:

These kinds of controls are quite inflexible and arbitrary, and different modifiers provide a different subset of fixed options. For nodes, it makes sense to generalize this, so that users can apply any operation to newly generated geometry. We can do this a number of ways, but one simple solution could be to automatically generate an attribute output for the newly generated geometry:

You could then use this output to chain nodes together in more powerful ways. Here, only the newly extruded geometry is bevelled:

Of course you can also use predefined user-created Clusters as inputs:
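
A minimal sketch of that chaining, using pretend Extrude and Bevel functions (names and data layout are purely illustrative): the generative operation returns the new geometry plus a cluster of the elements it created, and the next operation accepts that cluster as its selection input.

    # Pretend operations: a generative node returns (geometry, cluster_of_new_elements),
    # and a later node consumes that cluster so it only touches the new geometry.
    def extrude(mesh, faces, offset):
        new_mesh = dict(mesh)
        new_faces = {len(mesh["faces"]) + i for i in range(len(faces))}
        new_mesh["faces"] = mesh["faces"] + [("extruded", f, offset) for f in faces]
        return new_mesh, new_faces

    def bevel(mesh, cluster, width):
        new_mesh = dict(mesh)
        new_mesh["bevelled"] = {(f, width) for f in cluster}  # only these faces get bevelled
        return new_mesh

    mesh = {"faces": list(range(6))}                    # a cube-ish placeholder
    mesh, new_faces = extrude(mesh, faces=[2, 4], offset=0.3)
    mesh = bevel(mesh, cluster=new_faces, width=0.05)   # bevel only the extruded faces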

Textures

In a nodes system, you can use textures to drive any input, rather than relying on the arbitrary Texture fields available for a certain subset of modifier effects in the current stack. This makes using textures orders of magnitude more powerful. We can make textures work much like material textures. Here is a simple example of using a texture to drive the displace strength:

Vertex Groups & Attributes

Since we can get rid of all the fixed vertex group fields for the modifiers, we can instead make this part of the node tree, just like how we currently handle vertex colors for materials. Below is an example of using a vertex group to modulate the strength of a texture, which in turn controls the displacement Strength property:
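
In other words, the vertex group is just a per-vertex weight that scales the texture value before it reaches the Strength input. A small sketch with illustrative data only:

    # Illustrative only: per-vertex weights (a vertex group) modulate a texture value,
    # and the product drives the Displace strength.
    import numpy as np

    verts = np.random.rand(100, 3)
    normals = verts / np.linalg.norm(verts, axis=1, keepdims=True)

    vgroup_weight = np.clip(verts[:, 2], 0.0, 1.0)          # vertex group: weight per vertex
    texture_value = 0.5 + 0.5 * np.sin(verts[:, 0] * 4.0)   # a texture sampled per vertex

    strength = vgroup_weight * texture_value                # vertex group modulates the texture
    displaced = verts + normals * strength[:, None]         # which drives the displacement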

Constants/Global values

Many values for these nodes will be per element (vertex, edge, face), but some values are fixed for that node operation, and cannot vary per element. One such example is the Bevel Segments value. You can set it to any value, but it will be the same for all the affected elements during that operation.

It would be nice to still be able to drive these kinds of properties, and also to pipe them into group nodes for higher level control.

We can show these kinds of values differently, and because this will always be a single value, we can even tell the user what the value is:
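
The distinction can be sketched as the difference between an input that carries one value per element and an input that is a single constant for the whole operation (names illustrative):

    # Illustrative only: 'width' may vary per element, but 'segments' is one value
    # for the whole bevel operation, so it can be shown as a constant on the node.
    import numpy as np

    def bevel(edges, width, segments):
        width = np.broadcast_to(np.asarray(width, dtype=float), (len(edges),))
        assert isinstance(segments, int), "Segments is a single constant, not per-element"
        return [(edge, w, segments) for edge, w in zip(edges, width)]

    edges = list(range(10))
    result = bevel(edges, width=np.linspace(0.01, 0.1, 10), segments=3)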

Outputs

The output is much like the Material Output for materials, but it outputs a mesh instead:

We can provide some extra features here, for making an output only for the viewport or the render. This could replace these toggles from the stack system:

We can also provide an output for visualizing certain textures or effects in the viewport.

It would also be useful to set any node to be the one that is previewed in the viewport:

Users will constantly want to preview certain sections of the node tree, and having to manually connect those things up to the output node will be a pain. An easier way to preview the output of any node would be extra useful for geometry nodes, but would also be a benefit for shader or compositing nodes.

Caching & Applying

Some node trees can become heavy or very large, and sometimes it's not useful to have to recalculate the entire node tree every time a property is edited downstream. For this reason, we could provide a Cache node that takes everything upstream and caches it so it doesn't need to be recalculated.

When cached, all nodes upstream will be greyed out:
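
A minimal sketch of what such a Cache node could do, in plain Python: store the upstream result together with a fingerprint of the upstream state, and only re-evaluate when that fingerprint changes (all names are illustrative):

    # Illustrative only: re-evaluate the upstream branch only when its parameters change.
    class CacheNode:
        def __init__(self, upstream_fn):
            self.upstream_fn = upstream_fn     # the expensive upstream evaluation
            self._key = None
            self._result = None

        def evaluate(self, upstream_params):
            key = repr(sorted(upstream_params.items()))   # crude state fingerprint
            if key != self._key:                          # upstream changed: recompute
                self._result = self.upstream_fn(**upstream_params)
                self._key = key
            return self._result                           # otherwise reuse the cache

    def heavy_upstream(subdivisions, seed):
        return f"mesh(subdivisions={subdivisions}, seed={seed})"  # stand-in for heavy work

    cache = CacheNode(heavy_upstream)
    a = cache.evaluate({"subdivisions": 4, "seed": 7})    # computed
    b = cache.evaluate({"subdivisions": 4, "seed": 7})    # reused; upstream stays greyed out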

New nodes

When converting to nodes, we will most likely want to add many more nodes compared to the small list of modifiers today.

For example, we would want to add nodes to do all sorts of common modelling tasks. Here you see the Extrude and Bisect nodes, as an example:

Other examples are nodes for creating new geometry:

These kinds of things are a taste of how much more powerful and broader in scope the geometry nodes system is.

Properties Editor

It's not always useful to have to edit node properties inside the node editor. We can re-use the same concept we already have for node groups, where we specify a series of node inputs which can be manipulated in the Properties Editor.

Here are a set of input nodes:

And here you see the controls reflected in the Properties Editor:

Adding new objects

An important example is one of the most basic tasks in Blender: adding new objects. Here you can see a newly added UV Sphere, which automatically gets a geometry node tree and a series of useful inputs:

These can be controlled by the user at any time. An Apply button can be added to easily freeze the geometry node tree, so that users can enter edit mode and perform destructive edits.
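
For reference, a minimal sketch of this flow in Blender's Python API (these names exist in recent releases, which postdate this design; exact behaviour, and the node-group interface API in particular, varies between versions):

    import bpy

    # Add a UV Sphere; it becomes the active object.
    bpy.ops.mesh.primitive_uv_sphere_add()
    obj = bpy.context.object

    # Attach a geometry-nodes modifier and give it a node tree to evaluate.
    mod = obj.modifiers.new(name="GeometryNodes", type='NODES')
    mod.node_group = bpy.data.node_groups.new("Sphere Nodes", 'GeometryNodeTree')

    # Once the tree has a valid geometry output, 'Apply' freezes the result into
    # the mesh so the user can enter edit mode and make destructive edits:
    # bpy.ops.object.modifier_apply(modifier=mod.name)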

Open Questions

  • How to handle the physics modifiers? These are different from the regular modifiers, since they need to be simulated. We could move those to the Simulation Nodes system instead of Geometry Nodes.
  • Clusters can be a set of vertices, edges or faces; how do you define what type a Cluster is?
  • How to handle things other than meshes? We could allow the system to also output and use curves or NURBS objects.

Event Timeline


I personally think Maya's approach is much more successful for artists than Houdini's: they can work without worrying about the complexity of nodes and have the option to turn off or delete the "construction history", but at the same time they have low-level control over every component, whether they're doing modeling, rigging, shading, etc., all in one cohesive place.
This is why it's the front runner, because of this balance. I am not sure if Blender can hit that, since it wasn't built from the get-go with the same paradigm, and from seeing this task it seems there will only be creation nodes.

Lsscpp (lsscpp) added a comment. Edited Sep 17 2020, 1:10 PM

This is why it would be very important to have nodes generated also by the regular viewport workflow. You would end with the best of both worlds: you can go the nodes way, powerful as you want; or you can still go the 3d way, which would have a sort of realtime history in the form of a nodegraph. Together they'd make a hybrid approach where one can model and adjust the nodes back and forth.
Sorry for repeating myself.

I don't think there's any question that this system will be more powerful and allow for more complex and interesting creations.

Having said that, are simple and common modifier setups going to require more clicks to set up under this system? If so, I think that should be addressed in the design.

For an example: I model furniture pretty much constantly for my job, and most of the time whenever I'm using modifiers, it's just to quickly add a solidify / mirror / bevel / subsurf modifier, or sometimes several of them together, to create a panel with automatic thickness and bevels, or something mirrored across multiple axes. For those modelling tasks the node tree concept is of no benefit; the existing 'modifier stack' system is more than enough for my needs. Will a node tree require more clicks for me to set up?

Because ideally, I wouldn't want all this extra functionality and power to come at the cost of losing speed at the common simple tasks. Because those kinds of basic modifier setups are what I use modifiers for 99% of the time.

I am still pleased to see the new system, but unless or until it is just as fast as the current modifier stack for 'common and simple' tasks, I'm hoping the option to choose between nodes or modifier stacks will remain available, because for most things I would see myself probably just opting to use the stack and only use the node tree when my needs are more complex.

Please don't take this comment the wrong way, I'm not saying this design is no good; I'm just asking how this system will work for simple tasks that don't require complex node trees, and what effect it will have on workflow speed. For all I know there's a plan to ensure that simple tasks require no additional clicks compared to the current workflow; if so, that's great!

Modifier stacks should be emulated by a blender addon. If you look at some of the addons already out there for improving concept workflows, the modifier stack isn't actually better than nodes; many addons create complex, messy, hacky modifier stacks and clutter the scene outliner to do procedural nondestructive modeling. Moving to nodes would make this actually better for these addons, as it moves from a fixed layer stack to a contained procedural graph. There is really no reason to have a hard coded modifier stack anymore in my opinion. It should just end up being a quick UI thing on top of the graph backend.

In T74967#1018977, @astrand130 wrote:

Modifier stacks should be emulated by a blender addon. If you look at some of the addons already out there for improving concept workflows, the modifier stack isn't actually better than nodes; many addons create complex, messy, hacky modifier stacks and clutter the scene outliner to do procedural nondestructive modeling. Moving to nodes would make this actually better for these addons, as it moves from a fixed layer stack to a contained procedural graph. There is really no reason to have a hard coded modifier stack anymore in my opinion. It should just end up being a quick UI thing on top of the graph backend.

Currently, in the Python API, I don't see a function to draw individual modifiers in a UILayout.

For that, I think it would be nice to have a UILayout.template_single_modifier(mod) function to allow us to draw a modifier's UI anywhere in the interface; otherwise, we would have to code each modifier's UI from scratch in Python, and every time there was a change in the modifier properties or a new modifier was added, the add-on would break.

Or even better.

I don't believe that there's a difference between a tree of modifiers with a single branch and a modifier stack, so we could have a GUI similar to the shader editor that just lists the nodes linearly, as deep as it can. We would also need an easy way to apply, remove, duplicate and reorder a single modifier from within this GUI, preferably as similar as possible to (if not better than) the current 2.90 modifier stack (I'm talking about the drag-and-drop feature).


I think there are plans to use this to expose data from node groups better than how materials currently do it as to not clutter the panel. I think the goal is to allow users to create node groups and expose interfaces but the ability to replicate the modifier UX via python might be better discussed on the UX thread.

In T74967#1018977, @astrand130 wrote:

There is really no reason to have a hard coded modifier stack anymore in my opinion. It should just end up being a quick UI thing on top of the graph backend.

I agree with this part: there should be one backend (node graph) that handles everything and two UI frontends, one for the node editor and one for the properties editor that replicates the modifier stack, similar to how materials are represented.
Because, when you think about it, a modifier stack can very easily be represented by a node graph. If each of the current modifiers gets converted into a node group, all you have to do to replicate the modifier stack is connect the "modifiers" in a straight line from the input to the output. This also looks like it could be represented in a way very similar to how the current modifier stack is shown in the properties editor.

In T74967#1018977, @astrand130 wrote:

Modifier stacks should be emulated by a blender addon.

I do not agree with this. It should not be required for an addon to replicate a core component such as a modifier stack. If at all possible, systems should be designed properly to begin with, and while it's great to have addons to extend the flexibility of systems, they shouldn't need to work around stuff that could be easily solved when the systems are first being designed.

In T74967#1019031, @astrand130 wrote:

I think there are plans to use this to expose data from node groups better than how materials currently do it as to not clutter the panel. I think the goal is to allow users to create node groups and expose interfaces but the ability to replicate the modifier UX via python might be better discussed on the UX thread.

Exposing data from node groups better than how materials do it sounds good. But whether this design is better discussed here or in T67088 is debatable. Since this task is about the design of Geometry Nodes specifically, and it will ultimately replace the modifier stack, I think discussing their interface (of which a stack-like UI should be a core component) is perfectly fine.

To give an example of what I mean, here's the current process for setting up an object to have a mirror modifier for modelling purposes. As you can see, it's simply 3 clicks: Select, Add Modifier, Mirror. Done.

Under this new geometry nodes system, how many clicks would be involved to set up a similar mirror modifier?

Or for that matter a bevel modifier, solidify, etc.

If such setups will involve significantly more clicks, I do believe that long term there would be value in maintaining a modifier stack UI with an optional modifier node mode instead, even if under the hood the modifier stack is just a node setup, simply in the interest of maintaining a fast workflow. We all know Blender users love their speedy workflows.

Or... What if the Node option for modifiers was simply a type of Modifier? It could be a type of modifier called "Node Modifier" and part of the existing stack of modifiers.

The advantage I can think of there is that it would allow for creating complex node modifiers and storing them for re-use later, then stacking them together to chain complex node modifiers together.

It would have another benefit as well, as it could allow for dividing complex modifications up into separate steps, as separate node modifiers, in case they wish to later "Apply" the first step to do some destructive modelling.

Right now in the current 'modifier stack' UI it is possible, for example, to add a Mirror modifier, Bevel, Subsurf, etc., and then later on, if you wish to commit the mirror to make an asymmetrical design but keep the bevel and subsurf, you can simply 'Apply' the Mirror modifier.

But the UI mockup above shows only a single 'Apply' button for the entire node modifier system, and that does make sense, given the tree-like structure wouldn't lend itself well to applying only a subset of the modifier nodes.

Something like this:

It makes sense to have nodes for particles, rigging, grooming... and I understand that people want to do more procedural modeling, but Blender is known for its powerful modeling capabilities more than anything else, and that's thanks to the modifier stack and the other tools/addons.
Keeping the modifier stack is essential for many users, and it's also a much easier and more straightforward workflow; this is how the 3ds Max team has done it.

We will most likely keep the modifier stack, from recent discussions regarding particle and geometry nodes.

We want users to be able to turn node groups into custom modifiers with a high-level interface. And I think the modifier stack is a good place for that kind of interface, where you don't have to open the node editor, but can just quickly add a modifier and tweak a few settings.

Creating such modifiers would be done by adding a group input to the geometry nodes, and marking it as an asset. And then it can be available right from the add modifier menu like a native modifier.

That sounds very flexible, best of both worlds! I assume existing modifiers will be kept around for transition and compatibility?
Additionally, I suppose custom interfaces for "node modifiers" will require more thorough access to socket types, widget creation (checkboxes...), etc. than we currently have in Cycles node groups?

@Brecht Van Lommel (brecht) By pass-through I was referring to the fact that you can merge multiple objects (mesh data, volumes, particles, etc.) under a single data stream. The current node will transform only the bits it's concerned with, and everything else will be passed through to the next node in the network.
In simulations (DOPs), the only difference is that you have time-based constraints or physics constraints. You can think of SOPs as DOPs always at frame zero. In essence, the underlying behavior of geometry nodes is not much different from that of DOPs. You still have to pass a modified state through to the next node in the tree, similarly to what happens in a looping simulation. IMO, there shouldn't be a hard link between object data in the scene and the nodes operating on this data. The data represents the state of the system, and the nodes are just transformations applied to this data based on some constraints or selection patterns.

I thought some more about how to reconcile the clarity of Houdini's networks and the "completeness", so to say, of this proposal, which functionally seems to be more or less based on Cycles or ICE.
So what if those passthrough nodes (as per your definition) were in fact group nodes inside which the data was "silently" split into several streams, operated on, then re-merged? The user wouldn't ever need more than one connection between any given two nodes. Optionally, we could have a "split data" node, similar to "separate xyz", that would output as many things as there are datatypes flowing within that stream (in case the user wants to access something punctually and making a group feels overkill).

We will most likely keep the modifier stack, from recent discussions regarding particle and geometry nodes.

We want users to be able to turn node groups into custom modifiers with a high-level interface. And I think the modifier stack is a good place for that kind of interface, where you don't have to open the node editor, but can just quickly add a modifier and tweak a few settings.

Creating such modifiers would be done by adding a group input to the geometry nodes, and marking it as an asset. And then it can be available right from the add modifier menu like a native modifier.

If I may be frank for a moment: That. Sounds. Brilliant.

My excitement for 'everything nodes' just immediately dialed up to 11.

So basically rather than replacing the modifier stack, you're creating a node editor interface for us to create our own custom modifiers.

Oh.... Now my brain is overflowing with the possibilities. I can just imagine all the kinds of user content that will get created and shared now.

This in combination with the asset management, and brush management improvements, I can imagine on Blender Market buying packs of brushes, modifiers, assets, etc, and adding them straight into my asset manager in Blender.

Zino Guerr (Zino) added a comment. Edited Sep 21 2020, 8:49 PM

We will most likely keep the modifier stack, from recent discussions regarding particle and geometry nodes.

We want users to be able to turn node groups into custom modifiers with a high-level interface. And I think the modifier stack is a good place for that kind of interface, where you don't have to open the node editor, but can just quickly add a modifier and tweak a few settings.

Thanks for the reassurance, that's really great news.

"Users will constantly want to preview certain sections of the node tree, and having to manually connect those things up to the output node will be a pain. An easier way to preview the output of any node would be extra useful for geometry nodes, but would also be a benefit for shader or compositing nodes."

How about integrating the Node Wrangler addon's shortcuts and features into the default behavior? It's already one of the staples of Blender usage; almost anyone with a bit of knowledge of Blender already uses it and prefers it. For newbies it changes nothing in how to use nodes, the basic workflow is the same, and for that reason integrating it would still be backwards compatible both in terms of files and old tutorials (if the person didn't use the addon, the basic shortcuts are still the same; if the person used it, they can do the same without having to activate any addons).

The Node Wrangler features, such as quickly previewing a node using Ctrl+Shift+Click, quickly mixing (Ctrl+Shift+Right-click), quickly switching sockets or connections, among others, are all extremely useful for any node-based workflow... why try to reinvent the wheel when we already have a well-proven toolset used by thousands every day in both personal and commercial projects?

Also let's not forget about UV operations being nodal as well, maybe unwrapping based on clusters, based on cameras (to do unwraps from "camera view" instead of from view), etc.

I hope you guys can go as low-level as possible, similar to Maya, having generic nodes that can be inherited by different object types, like Transform, Shape & shading nodes. For example, when creating a cube you get a node group of those atomic nodes, but also specific ones for that object type; this way you can do all sorts of things in just one single node editor.

Do Not Post Screenshots of Copyrighted Software

An excerpt from an email from Ton Roosendaal:

In our design process we have to stay away from references to other (non free/open) applications. Not only do we have enough design powers to make own solutions, it's also a violation of copyrights from others.


Sorry, I didn't know about that. I removed the image, but surely we can at least mention the labels of nodes, or is that also a violation?

I've often read that it will be possible to affect collections (or collection instances) with the modifier nodes (geometry nodes). But I can't find any "official" statements about that. Is that part of the geometry nodes plans?

Making modifiers work on collections would be a separate project, not part of an initial geometry nodes implementation.

We will most likely keep the modifier stack, from recent discussions regarding particle and geometry nodes.

We want users to be able to turn node groups into custom modifiers with a high-level interface. And I think the modifier stack is a good place for that kind of interface, where you don't have to open the node editor, but can just quickly add a modifier and tweak a few settings.

IIUC, Modifiers would then only be a fancy interface on top of Nodes. If that is indeed correct, would it maybe make sense to move the Modifier code into an addon and ship it by default, just to ensure that there is a clean API?


This is why it would be very important to have nodes generated also by the regular viewport workflow. You would end with the best of both worlds: you can go the nodes way, powerful as you want; or you can still go the 3d way, which would have a sort of realtime history in the form of a nodegraph. Together they'd make a hybrid approach where one can model and adjust the nodes back and forth.
Sorry for repeating myself.

I vote for this, this is very important.

I believe the ability to automatically generate geometry node trees using Edit Mode operators and tools is outside the initial scope of what will be supported. Supporting that is tricky; there are a ton of potential issues and pitfalls. It would be great to be able to do that eventually, perhaps, but initially I don't think it'll be something the core developers will pursue.

That's very sad; the ability to have the nodes generated behind the scenes would make geometry nodes more user-friendly, since people could use regular modeling when they want to and switch to nodes at any point. Without it, you basically need to stick to one of them from start to end, which isn't really friendly, I would say.

Will the current modifier stack become the user interface for the top-level node of a node-based modifier? So if you create a node-based modifier, you can expose whatever enclosed parameters you want to the top-level node, and those parameters will then be displayed as a new modifier in the modifier stack (as well as being shown in the shader editor's N panel)? And vice versa, if you add a modifier in the usual way, it will create a new modifier in the node editor, so that you can dive into it and change its functionality, for example. Or do we still need to differentiate between edit mode and object mode?

That's very sad; the ability to have the nodes generated behind the scenes would make geometry nodes more user-friendly, since people could use regular modeling when they want to and switch to nodes at any point. Without it, you basically need to stick to one of them from start to end, which isn't really friendly, I would say.

Once each edit mode operator has a node counterpart, it will be very easy to auto-generate the node tree, because it'll just be a case of populating the node with the same parameters as the redo menu and then wiring its input to the previous node's output. There'll need to be a transform node with a selection property too, in order to capture user transformations (move, rotate, scale).

Auto-generation of object mode nodes should be even easier, because you could have one node that fits all, which just records any UI interaction, such as changing a parameter, a movement, or adding a modifier; it would just store the parameter and the old/new values. Basically a node-based undo history where each undo's values are editable after the fact. Sort of a dynamically generated node that can store any type of user activity.

It's easy to miss trade-offs that come with a modeling system that has to support both procedural and interactive approaches. A good interactive tool does not necessarily make for a good procedural node, and vice versa. And just because you could define nodes for every tool and generate a resulting node graph, does not mean that node graph will end up being useful, rather than an unorganized mess. Or you might make the UX of a tool worse so that it can be generated as a procedural node.

There are systems that are really good at interactive modeling, at procedural modeling, and at interactive procedural modeling. But all 3 types of systems come with trade-offs and workflow / UX choices. They are all useful in their own right for different use cases, but trying to build one that has to best of all worlds within a single workflow seems overambitious to me. Especially if the idea would be to just turn existing modeling tools into nodes, I don't expect that to result in a good workflow.

Once each edit mode operator has a node counterpart, it will be very easy to auto-generate the node tree, because it'll just be a case of populating the node with the same parameters as the redo menu and then wiring its input to the previous node's output. There'll need to be a transform node with a selection property too, in order to capture user transformations (move, rotate, scale).

I also see this as rather straightforward, at least in theory, though some particulars are not obvious: what comes to mind is the fact that nodes should be able to operate on a procedural selection of mesh elements, rather than an explicit list of elements (as is done currently in edit mode). As noted above by several people, such other means could include selecting with a volume or through another rule, involving point coordinates or normals, and so on.
I guess that means pretty much every operator would have to be reworked to reflect those additional parameters. Obviously these indirect selections wouldn't work that well through the viewport; they would have to live inside the node editor, unless we have some sort of "embedded node view" within the viewport, similar to how the last-operator panel pops up in the bottom left corner, but that's just a UI concern.

Then there's the notion of "tagging" generated geometry (i.e. including it in a group, or giving it an attribute) for the operations that support it, such as bevel, etc. This adds another round of parameters.

@Brecht Van Lommel (brecht) I could be mistaken, but I don't see users needing to generate a node tree in the background for the majority of modeling jobs. The way I see it, such a feature would only be relevant in the process of creating assets that are meant to be procedural, be packed into a node, and have variations and parameters: furniture, buildings or houses, vegetation, and anything needed in great quantities. I guess we have to exclude characters, since we'd have to auto-generate a fitting rig as well, and that seems out of scope (at least I never heard anyone mention it as a target).
What does this mean? That once the user decides to activate "history" (i.e. background node creation), the generated node tree must indeed be readable and tweakable. I think most operators we use in edit mode today, once converted to individual nodes, would tick those boxes. The issue I mentioned above (how to allow the user to make procedural selections easily) still stands, however, and requires going out of the viewport and into the node to write down a rule for that selection.
Most procedural assets will need rule-based selections, because once the vertex count changes, vertex indices change as a consequence and explicit selection cannot be relied on anymore. So inevitably the user will need to do some back-and-forth between viewport and node editor, unless one is cleverly integrated into the other.

Just rambling and hopefully, food for thought.

It's easy to miss trade-offs that come with a modeling system that has to support both procedural and interactive approaches. A good interactive tool does not necessarily make for a good procedural node, and vice versa. And just because you could define nodes for every tool and generate a resulting node graph, does not mean that node graph will end up being useful, rather than an unorganized mess. Or you might make the UX of a tool worse so that it can be generated as a procedural node.

There are systems that are really good at interactive modeling, at procedural modeling, and at interactive procedural modeling. But all 3 types of systems come with trade-offs. They are all useful in their own right for different use cases, but trying to build one that has to best of all worlds within a single workflow seems overambitious to me.

I'd disagree; it's a necessity. Otherwise you can either work fast with no ability to parametrically change or animate what you've done, or you can work very slowly by creating geometry with nodes and then reap the extra functionality of being able to change/animate all parameters, or even throw in new nodes at any point within the tree.

I don't understand why the UX of an existing tool would need to be altered in order to be represented by a node. Could you give an example of an existing edit mode operator that would need additional parameters added to it in order for it to be replicated as a node? The only additional information that a node would need over the redo panel would be the selected verts/edges/faces, as Adrien mentioned, but this wouldn't need to be represented in the redo panel or the tool options; it would just be obtained from the selection and stored in the node. (There's no downside to having additional parameters in the node that aren't in the interactive tool's parameters. In fact it's useful to have access to the selection after the fact, since this allows changing which verts/edges/faces are affected, either by manually changing the selection or by feeding in a selection node, which would have options for automatic selection based on attributes or face angles etc.)

One approach to be considered would be the same as the OTHER node-based software which mixes procedural with interactive: all non-operator activity in SOPs can be recorded into a single node, whilst the operations get their own nodes. So you end up with a manageable node tree that still has all of the important nodes, rather than millions of nodes from recording each user movement of verts, for example.

Strongly recommend someone from the dev team become more familiar with how the competition handles this automatic node tree creation process; otherwise we're starting out with a hobbled system from the get-go.

In fact this is what's so exciting about Blender implementing nodes: Blender's interactive modelling is way better than the competition's, so the ability to work lightning fast AND be able to parametrically change the result would set Blender above the competition (for modelling at least).

I don't understand why the UX of an existing tool would need to be altered in order to be represented by a node. Could you give an example of an existing edit mode operator that would need additional parameters added to it in order for it to be replicated as a node?

I thought of some other operators which wouldn't be immediately translatable into nodes "as is": any operator that relies on view orientation, such as knife or knife project, would have to have this exposed first (the view vector, namely), and even then I'm not sure how well it would work: you can rotate all around an object while cutting it, so how can this ever be procedural?
Additionally, tools that rely on cursor position, such as vertex or edge slide, would need another way of determining sliding direction (world space? tangent space?), so from a bird's eye view a lot of operators would need a fair bit of refactoring.
Admittedly the knife tool/operator does not fit well in a procedural modeling workflow, so maybe this is a bad example... I guess such operators could be "left out in translation".

In any case, I agree with you in that this should be considered from the start. Hopefully we're making some progress already in terms of determining what the workflow would look like, not sure how much of this is helpful from an architectural point of view...?

One approach to be considered would be the same as the OTHER node-based software which mixes procedural with interactive: all non-operator activity in SOPs can be recorded into a single node, whilst the operations get their own nodes.

Could you please explain further how this would work, ideally without too direct a reference to the software in question (and definitely no pictures)? I'm curious.

Yep, any activity such as manually sliding vertices would be consolidated into a single 'user edit' node, which is un-editable after the fact. I think knife project should be fine as a node; the node would just need to store the viewport camera orientation, which would also be handy for manipulation after the fact. This would have no impact on the interactive use of the tool though, as the viewport camera orientation is already defined by the user rather than by a parameter.

I worry a bit that the entire system is being devised without a sufficient knowledge of established node based workflows.

I'm quite familiar with other software.

While it is possible to animate or parametrize a model created with mesh edit mode type operations in some other software, from what I've seen it either doesn't work very well, or the modeling UX is more rigid than Blender's. It works well as a way to tweak parameters in an undo history, and that could be supported in Blender too. But that doesn't require turning tools into nodes.

And I agree that you could turn every Blender modeling tool into a node. My point is that the resulting node graph would not be that great.

Yes, the interactive viewport aspect is a bit clunky in the other software, but that's not because of its ability to generate counterpart nodes on the fly; it's just because it's not as well designed as Blender's interactive modelling workflow, which is why utilising Blender's speedy interactive mode to generate the nodes is so important in my opinion. I'm not sure who the main driving force was behind Blender's modelling workflow/interaction design, but it's pure genius. Incidentally, the problem you're describing of a messy node graph when auto-generating nodes did used to be a problem in the other software too, until recently, when it began to consolidate all non-'operator' operations into a single 'user edit' node (or some name of that ilk), or multiple user edit nodes separated by operator nodes if the user did some manual moving of vertices in between parameter-driven operations.

The main beauty of the combined workflow is the ability to remove or mute individual operations/nodes, the ability to add nodes/operations anywhere within the undo history/node tree, and, probably even more importantly, the ability to package and share a node tree with other users or colleagues in a way that only exposes selected parameters to a front-end node. That means people can either use the tree as intended by just manipulating the front-end parameters, dive into the tree to create new functionality, or use it as a starting point for another node, etc. Will the undo history approach you mentioned also allow for the insertion or removal of operators, and allow for collaboratively created community tools, similar to HDAs?

Another benefit is smaller file size, because only the steps to make the geometry need to be stored, rather than the potentially massive resulting geometry. The user should have the option to manually store the geometry at various points in the tree, either by freezing a node (auto cache to RAM), dropping down a file cache node (to avoid time-consuming recalculations on file open), or a RAM cache node (to avoid recalculation of upstream nodes while editing downstream nodes).

If it's not the intention to create node versions of edit mode operators, does this mean that it's not planned to allow for manual creation of node trees that perform edit mode operations either?

I understand the idea, but I don't believe a reusable/tweakable asset or even reduced file size is what you will actually get when you turn Blender edit mode operations into nodes. In some specific cases if you're careful to use a small subset of tools in specific ways, then maybe. But in general there would be too many operations in the graph that break when tweaking earlier operations, or reordering or deleting. Collapsing operations is fine but doesn't solve that problem.

The current Geometry Nodes project is focused on use cases like scattering, VFX and the type of functionality provided by existing modifiers. Turning every edit mode operator into a node is not important for those use cases.

Generally the only upstream nodes that can break the tree are nodes that add or remove vertices, resulting in a change of vertex order, unless later nodes rely on selections/groups generated from attributes such as face angle etc. Thanks for the info 👍

I just read through the proposal and love everything about it, except the part that talks about a Cache node. That should happen automatically (each node caches its value, and that gets reused unless an upstream change occurs). It shouldn't require any user intervention, since caches always suck.

I think it's great that we can expose any parameter to the modifier stack front end. The current implementation of physically wiring a node's input socket back to the group socket is going to lead to unnecessarily messy node trees, though. A better solution would be to right-click a socket and select "Expose" from a context menu. This would modify the look of the socket to indicate it's exposed, rather than cluttering the tree with surplus-to-requirements wires. Check out Houdini VOPs to see what I mean.

I just read through the proposal and love everything about it, except the part that talks about a Cache node. That should happen automatically (each node caches its value, and that gets reused unless an upstream change occurs). It shouldn't require any user intervention, since caches always suck.

I think this should be done implicitly as a local optimization while editing or working on recent files, if the user specifies. However, the existence of an explicit cache is very much necessary if this concept expands into the realm of user-created dependency graphs and multi-machine render/simulation farms for heavy production tasks.

So far I really like the higher-level design aspects; however, being a rather technically minded generative artist, I'm missing some lower-level things:

  • Creating geometry from scratch by defining vertices, edges, faces and their attributes
  • 4D-Vectors and Matrix attributes/noodle-types

The latter sure could be faked with multiple attributes/noodles and custom node groups, but having dedicated Matrix and 4D Vector types would make the nodes much more powerful and multiple transformations much less of a noodle mess.

Regarding the Cache node, I think the concept could be extended to a Solver/Feedback node that feeds the result of the last (simulation) frame back into the node network as input.
Alternatively, this could be realized with FileIn/FileOut nodes that read and write disk caches respectively, where the FileIn node has a frame offset.

As geometry node trees can become rather complex, a debugging view that can show geometry information and/or attribute tables for a specified node would be very handy.

If there are no short-term plans for edit mode operator synchronization, could at least a generic Edit node be added, which would record all changes from edit mode manipulations? This would simulate the Edit Poly modifier from 3ds Max and would greatly improve non-destructive modelling in Blender.

Hello, about the geometry nodes, is there a way to add them at given coordinates, for example reading those from a text file? It's very useful to have some objects, like trees, positioned correctly. Being able to apply the transformation makes the scene more realistic. Thanks.

For user feedback on current geometry nodes, please use this topic:
https://devtalk.blender.org/t/geometry-nodes/16108

Hans Goudey (HooglyBoogly) claimed this task.

I'm going to take the liberty of closing this task, since I think it's basically all covered by newer design and implementation tasks: