
Use 2D texture arrays for UDIM drawing in Eevee to avoid texture limit

Authored by Lukas Stockner (lukasstockner97) on Dec 20 2019, 2:34 AM.



Based on @Clément Foucault (fclem)'s suggestion in D6421, this patch implements support for
storing all tiles of a UDIM texture in a single 2D array texture on the GPU.

Initially my plan was to have one array texture per unique size, but managing
the different textures and keeping everything consistent ended up being way
too complex.

Therefore, this patch uses a simpler version that allocates a texture that
is large enough to fit the largest tile and then packs all tiles into as many
layers as necessary.

As a result, each UDIM texture only binds two textures (one for the actual
images, one for metadata) regardless of how many tiles are used.

Note that this removes per-tile GPUTextures completely, meaning that it
breaks D6421.

Diff Detail

rB Blender

Event Timeline


Code style: don't use camel case for local variables. Use snake case.


Use one load. Use RGBA32F texture for the metadata and predivide by arraysize.

Addressed review comments.

And here's workbench engine support, way easier than I expected.


Red flag here: every sampler must have a valid texture (of its type) bound and assigned. Otherwise the implementation crashes, or can trigger undefined behavior in some cases. We already use quite a few of them all over the place. Maybe it would be nice to centralize this somewhere in GPU_texture.c.

Nevertheless, you need to create and bind dummy textures to these samplers when not used. Same for the sampler2D image sampler.

Clément Foucault (fclem) requested changes to this revision.Dec 21 2019, 2:27 AM
Clément Foucault (fclem) added inline comments.

One thing I don't really like about sampler1DArray is that the layer dimension may be limited to 256.

However, the dimension that represents the number of array levels is limited by GL_MAX_ARRAY_TEXTURE_LAYERS, which is generally much smaller. OpenGL 4.5 requires that the limit be at least 2048, while OpenGL 3.0 requires it be at least 256.

Maybe you can use sampler2D here. I don't really see why you need sampler1DArray anyway.

This revision now requires changes to proceed.Dec 21 2019, 2:27 AM
Lukas Stockner (lukasstockner97) marked an inline comment as done.

Good point about the unbound textures.

The solution I went with for now is to have a compile-time define and build separate shaders for tiled and non-tiled textures. This should also prevent any performance regression.


The layer dimension is hardcoded to 2; the number of tiles only affects the width.

As for sampler1DArray: the main advantage is that I can keep the existing GPU_texture_from_blender, GPU_NODE_LINK_IMAGE_BLENDER, etc. If the regular non-tiled texture and the tile metadata had the same type, we'd need a lot more boilerplate code.

I approve this patch as my comments are mostly nitpicking.


Style: use 10.0 for float values. Some compilers will throw errors because of the implicit conversion.


Same here


Style: nitpicking here, but I'd prefer GLenum data_type = main_ibuf->rect_float ? GL_FLOAT : GL_RGBA; to a "big" if statement. The same holds for the following glTexSubImage3D.


A little strange but OK :)

This revision is now accepted and ready to land.Mon, Jan 13, 2:45 PM