
Exporting GLTF with big texture is super slow
Closed, Resolved · Public


System Information
Operating system: macOS Catalina beta
Graphics card: Radeon Pro 555 2 GB

Blender Version
Broken: 2.81 alpha
Worked: (optional)

Short description of error
Exporting GLTF with big texture is super slow

Exact steps for others to reproduce the error
Based on the default startup or an attached .blend file (as simple as possible).

I tried this many times with Blender 2.80 and also 2.81: exporting glTF with a large texture, like 8000 × 8000, is super slow, and I'm not sure why. Maybe it's the time it takes to copy? But it seems weird.

Event Timeline

Is the texture packed into the blend file, or in a separate png / jpeg file on disk?

If packed, Blender has to encode the image to a file, which is more expensive than a simple file copy.

Maybe @Moritz Becher (UX3D-becher) knows more?

I tested both with the texture separate from the glTF file and with the texture embedded in it, but both are still performing really slowly. With smaller textures, the export is super fast.

How long is super slow, exactly? I tried a simple Suzanne head with a generated 8k texture (color grid). It took me 39.5 seconds. The extern_draco.dll was found and used. All of this information can be found in the Blender console output.

The export process also eats all of my computer's memory (16 GB) very quickly, which might also be why it's so slow. It doesn't seem to take good advantage of multiple cores; it's mostly a single-core process.

Exporting at 4k takes a quarter of the time, 9.8 seconds, which suggests that texture size is the major factor in how fast the export runs. 2k takes a quarter of the 4k time, 2.5 seconds. Which makes sense: each step doubles the resolution and quadruples the pixel count.
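A quick back-of-the-envelope check (a sketch, assuming the textures are square and power-of-two sized, e.g. 8192 × 8192 for "8k") confirms the measured times track pixel count:

```python
# Measured export times (seconds) reported above, keyed by texture side length.
measured = {8192: 39.5, 4096: 9.8, 2048: 2.5}

def pixels(side):
    """Total pixel count of a square texture."""
    return side * side

# Halving the resolution quarters the pixel count, matching the
# roughly 4x jumps between the measured export times.
ratio_8k_4k = pixels(8192) / pixels(4096)   # 4.0
ratio_4k_2k = pixels(4096) / pixels(2048)   # 4.0

# Extrapolating to 16k: about 4x the 8k time.
estimate_16k = measured[8192] * (pixels(16384) / pixels(8192))

print(ratio_8k_4k, ratio_4k_2k, estimate_16k)  # 4.0 4.0 158.0
```

The ~158 s extrapolation lines up with the "at least 160 seconds" guess for 16k below.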

If you were to try exporting 16k textures, it would probably take at least 160 seconds given enough memory, and would completely kill low-memory systems, I'm guessing.

All of these tests were exporting .glb binary.

I did some profiling:

      70744 function calls (70311 primitive calls) in 9.679 seconds

Ordered by: internal time
List reduced from 344 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    9.586    9.586    9.586    9.586 {built-in method numpy.array}
     1    0.046    0.046    0.064    0.064
     6    0.007    0.001    0.007    0.001 {built-in method builtins.print}
     1    0.005    0.005    0.005    0.005 {method 'close' of '_io.BufferedWriter' objects}

Numpy.array seems to be the culprit. On closer look, it's not the array conversion itself but the really painfully slow way that Blender transfers image data to arrays:

    line 146: pixels = np.array(image.pixels)
    line 46: img = np.array(blender_image.pixels)

Which has been a known issue for a while.

...which means that here's a simple diff to make it at least twice as fast:

    line 146: pixels = np.array(image.pixels[:])
    line 46: img = np.array(blender_image.pixels[:])
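The speedup comes from the access pattern: indexing a bpy property array goes through Blender's RNA layer one element at a time, whereas a full slice (`[:]`) is answered with a single batched copy into a plain Python list, which numpy can then consume at C speed. A pure-Python sketch of the difference (`MockPixels` is a hypothetical stand-in that counts lookups, not Blender's actual class):

```python
class MockPixels:
    """Illustrative stand-in for a bpy pixel array; counts expensive lookups."""

    def __init__(self, data):
        self._data = list(data)
        self.lookups = 0

    def __len__(self):
        return len(self._data)

    def __getitem__(self, key):
        self.lookups += 1              # one round-trip per __getitem__ call
        return self._data[key]         # a slice is served in a single call

pix = MockPixels(range(1000))

# Slow path: a consumer iterating element by element (as np.array does
# with a generic sequence) triggers one lookup per pixel component.
_ = [pix[i] for i in range(len(pix))]
slow_lookups = pix.lookups             # 1000 lookups

pix.lookups = 0
# Fast path: pix[:] is one slice call returning a plain list.
_ = pix[:]
fast_lookups = pix.lookups             # 1 lookup

print(slow_lookups, fast_lookups)      # 1000 1
```

With real image data the per-element path also pays Python-object boxing for every float, which is why the gap grows with texture size.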

I committed the proposed fix.
If more speed is needed, we will have to check whether the API offers something faster.