Cycles/Eevee texture nodes for modifiers, painting and more
Open, Normal, Public

Description

With Blender Internal being removed, we have an opportunity to support the same set of textures for Cycles, Eevee, modifiers, particles, painting, compositing, etc. So we plan to port the relevant Cycles / Eevee shader nodes to Blender C code, and make those the new texture nodes that can be used throughout Blender. Blender Internal textures can then be removed, and we can extend the Cycles / Eevee shader nodes to add any important features missing from them.

  • Some nodes like color ramps, RGB curves, and similar utilities are already there.
  • Texture nodes like Image, Noise and Voronoi need to be ported.
  • Geometry, attribute and similar nodes as well. These are the most complicated since they access the mesh geometry.
  • Some nodes like BSDF, Mix Shader, Bevel and similar would not be supported.

Further, we need a good UI for this in the texture properties.

Details

Type
Bug

Brecht Van Lommel (brecht) triaged this task as Normal priority.

May I suggest adding a bake texture function to the baking API instead of duplicating every node in C?
This function would evaluate a single mesh in isolation and would take a texture node tree instead of a material one (texture node trees would be just like regular material node trees without shader nodes).
This would likely be faster to evaluate since it could run on the GPU, and it would allow better integration with third-party render engines.

I already suggested this on the devtalk forums, but I'm not sure if it was discarded as a bad idea or just went unnoticed.

Performance is not going to be good that way; we don't want to copy meshes to Cycles or another renderer, when we already have them in Blender, just to do some texture evaluations in the middle of a modifier stack. Using the GPU would also be problematic, mostly because of the latency of transferring data between CPU and GPU, and because it's difficult to create big enough batches of data to work on even if latency was low.

At least in that particular example, running the modifier on the GPU (GTX 950) is much faster than on the CPU (i7 4790K). And that's a case where the ratio between evaluated pixels and vertices is just 1. For painting and compositing, shouldn't the payoff be much better? It wouldn't even be necessary to send the whole mesh to the renderer.

Just to make it clear, I know you are infinitely more qualified than me to know the correct way to go. Sorry if I come across as disrespectful; that's absolutely not my intention.

There is definitely a point where it gets faster on the CPU, when the shader is sufficiently complicated and the number of vertices is high enough. There's a latency vs throughput trade-off.

If you have, for example, a scene with hundreds of objects that each need few texture evaluations, the latency adds up and things could slow down a lot. Existing code is usually structured assuming that you can get one texture evaluation at a time. Rewriting that to batch together a bunch of texture evaluations would be quite a lot of work and would make the code more complicated.