
Cycles slowdown and disk reading issues with large textures
Closed, Archived · Public

Description

System Information
Operating system: Void Linux (Kernel 5.7.15)
System Specs: AMD TR 3990X, 256GB RAM, AMD Radeon 5700XT

Blender Version
Broken: 2.83, 2.90, 2.91

Short description of error
When rendering with large (6K+) textures, some file formats cause Cycles to slow down during rendering. Tested on CPU.

Exact steps for others to reproduce the error

In the below example I started with a fresh scene to test the different formats and their impact on render time.
In a simple scene with small textures <= 4K, there doesn't seem to be much of an impact.
However, when using larger textures (6-8K+), Cycles can sometimes hang after "Updating shaders", while loading the textures, for an extended time.

The scene below just has Suzanne with a material that has three 8K textures.
When using TGA (the fastest in testing) vs JPG, the scene with Targas starts rendering instantly, while the scene with JPEGs hangs for about 10 seconds after "Updating Shaders".
Other scenes start rendering and hang when the render tiles hit an object with large textures on it.

JPG seems to be the most affected, but I've also seen similar behavior with PNG and TIF files; it's hard to pin down what's problematic about specific files and causes this behavior. There's a lot of disk reading happening, and it pulls more data than the actual size of the textures.

On some scenes, I've seen slowdowns of multiple minutes, both in the viewport render preview and when rendering a frame; basically, any time Cycles has to load the textures.
Switching color profiles in the image texture settings is also very slow with these jpegs. Packing the textures doesn't alleviate the problem.

Here's another scene that's just that same asset with three 8K textures again and here you can really see the difference in performance.

Just to eliminate any potential system variables, I tested the same file on a Windows machine with an Intel processor and witnessed the same behavior.

One thing I did notice is that Blender is pinned at 100% cpu usage the entire time and it's constantly reading from the disk, even though it seems to read way more data than the actual size of the file. For the jpegs in the example, that's three full minutes of high cpu use and roughly 10-20MB/s reads from the disk, even though the textures only total roughly 115MB for the tree example.
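To separate raw disk I/O from decode time, a minimal timing sketch like the following can help (not part of the report; `time_cold_reads` is a hypothetical helper name). If a full sequential read of the textures finishes in a fraction of a second but render startup still takes minutes, the time is going into decoding, not the disk. Note the OS page cache can skew results on repeated runs.

```python
import time

def time_cold_reads(paths, chunk=1 << 20):
    """Time a full sequential read of each file, in 1 MiB chunks.

    This only measures raw I/O. If render startup takes far longer
    than these numbers, the bottleneck is decoding, not the disk.
    (Caveat: the OS page cache makes repeated runs look faster.)
    """
    results = {}
    for path in paths:
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(chunk):
                pass
        results[path] = time.perf_counter() - start
    return results
```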

I'm more than happy to share the pine tree file, just not publicly as it contains some assets that I purchased and I want to respect the license.

Event Timeline

Is this a regression from some specific Blender version?

I did some quick testing in 2.82 and 2.81 and the behavior seems present there as well. I can have a look at 2.79 once I have some time to see if the problem exists there as well.

Just had some time to check 2.80 and 2.79b to compare and it looks like this behavior goes pretty far back.

2.80:

2.79b:


Thanks for the info. Can you upload an example file that can be accessed publicly? Or is it as simple as plugging a few 8K textures into a shader and rendering?

I could have a look at this, as in at least confirm the issue and do some profiling to see where the bottleneck is in my case, but my HW is like a Pentium 386 compared to yours :)

No problem, here's a link to the files: https://cloud.mantissa.xyz/index.php/s/PmfJLaNHqzewPwr (1.7GB)
If you want the tree scene as well, I'll have to share it directly, as it has assets I can't link publicly.

You can just turn each collection on and off to test the different textures.

I tested the monkey file on the latest master (Windows 10, Ryzen 3900X, files loaded from a SATA SSD) and I get the following order:

The slowest render is JPEG, next is EXR, then a virtual tie between PNG and TGA, and the fastest is TIFF. Memory usage reports 581MB on every render apart from EXR, which states 1157MB.

Not sure what to make out of this:
I find it logical that heavily compressed JPGs take longer to initialize, and best practice would be to avoid them for final textures anyway.
However, the order of speeds from the tree file contradicts the order from the monkey file in most regards, maybe because of transparency?

It doesn't occur during rendering, only when loading all the assets before the actual rendering starts. Once the render starts (i.e. tiles start calculating), the speed of that part is the same. The trees are all geometry, so there's no transparency being used in the materials.

In my case the tests are being run from NVMe SSDs, so there are no storage bottlenecks to speak of. EXR is multithreaded by design, so it's normal that it's fast. I can open 8K+ JPGs in an image editor in a matter of seconds, and the TIFs are uncompressed to avoid compression bottlenecks. The only format that could slow down a little is PNG, as it can be slow to load depending on how compressed it is.

Even then the difference should be seconds, not minutes as seen in the tree example. It would be great to get more eyes on this to pin down the reason why this is happening exactly.

Can you post your order on the monkey file with a recent build? Also can you send the tree file?

Nicely laid out demo file!

NOTE: The EXR files aren't EXR files; they are PNGs with LZ77 compression and an incorrect file extension. This can be seen on Windows using MediaInfo and IrfanView.
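A quick way to catch mislabeled textures like this is to check the file's magic bytes rather than trusting the extension. A minimal sketch (the signatures below are the standard ones for each container; `real_format` is a hypothetical helper name):

```python
# Identify a texture's real container format from its leading
# magic bytes, ignoring the file extension entirely.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"\xff\xd8\xff": "JPEG",
    b"\x76\x2f\x31\x01": "OpenEXR",
    b"II*\x00": "TIFF (little-endian)",
    b"MM\x00*": "TIFF (big-endian)",
}

def real_format(path):
    with open(path, "rb") as f:
        head = f.read(8)
    for sig, name in SIGNATURES.items():
        if head.startswith(sig):
            return name
    return "unknown"  # e.g. TGA, which has no reliable leading magic
```

Running this over the "EXR" files in the demo scene would report them as PNG.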

Just some thoughts without doing any profiling:
JPEGs need to go through several decoding conversions before being displayed. Link to the encoding section on Wikipedia. JPEGs are encoded as YCbCr, which has to be transformed to an RGB color space. According to the section "What is taking so long?" in D8126: Speed up saving when rendering movies / Profiling walk-through, we can see at least one color operation in Blender is done one pixel at a time; perhaps others are as well.
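For reference, the per-pixel conversion in question looks like this (standard full-range JFIF coefficients; this is an illustrative sketch, not Blender's actual code path). Doing it one pixel at a time over an 8K image means roughly 67 million calls like this per texture:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Full-range JFIF YCbCr -> RGB for a single 8-bit pixel."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, int(round(v))))
    return clamp(r), clamp(g), clamp(b)
```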
From OIIO's JPEG documentation under Limitations:

OpenImageIO’s JPEG/JFIF reader and writer always operate in scanline mode and do not support tiled image input or output.

From OIIO's Targa documentation under Limitations

The Targa reader reserves enough memory for the entire image. Therefore it is not a good choice for high-performance image use such as would be used for ImageCache or TextureSystem.
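The "entire image in memory" point also explains why the observed disk/CPU traffic can dwarf the compressed file sizes: the decoded footprint is fixed by resolution and channel count, not by the on-disk size. A back-of-envelope calculation (assuming 8-bit RGBA, as in the demo textures):

```python
# Decoded, uncompressed footprint of one 8K RGBA 8-bit texture.
width = height = 8192
channels = 4           # RGBA
bytes_per_channel = 1  # 8-bit
decoded_bytes = width * height * channels * bytes_per_channel
decoded_mib = decoded_bytes / 2**20
# Three such textures decode to ~768 MiB, even if the compressed
# files on disk only total ~115 MB as in the tree example.
```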

---

I’ll do some profiling and see what I can find.

This is normal and expected behavior with the various compressions that are used. If you're doing more than a single render, go to the Performance section and enable Persistent Images. This keeps the textures in memory instead of reloading them for each render/frame, so the first render will take 10s, but subsequent frames will take 1s. You will see an increase in speed with all image formats, since they don't need to be loaded again, but the greatest difference will obviously be with JPGs. Persistent Images is a must for animations, especially when using lots of textures or high resolutions.
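The idea behind Persistent Images is simple memoization: decode once, reuse on every subsequent render. The effect can be illustrated with a stdlib sketch (`load_texture` is a hypothetical stand-in for the expensive decode, not Blender API):

```python
import functools

@functools.lru_cache(maxsize=None)
def load_texture(path):
    # Stand-in for an expensive decode: in Blender this would be
    # the full JPEG/PNG decode; here we just read the bytes once.
    # Subsequent calls with the same path return the cached result
    # without touching the disk at all.
    with open(path, "rb") as f:
        return f.read()
```

The first call pays the full load cost; every later call for the same path is effectively free, which is why the speedup is largest for the slowest-decoding formats.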

> No problem, here's a link to the files: https://cloud.mantissa.xyz/index.php/s/PmfJLaNHqzewPwr (1.7GB)
> If you want the tree scene as well, I'll have to share it directly, as it has assets I can't link publicly.

The link has expired

Ankit Meel (ankitm) changed the task status from Needs Triage to Needs Information from User. Apr 18 2021, 1:44 PM

@Midge Sinnaeve (mantissa) : can you put up that example file again somewhere?

Philipp Oeser (lichtwerk) closed this task as Archived. Fri, Jul 2, 12:08 PM

No activity for more than a week. As per the tracker policy we assume the issue is gone and can be closed.

Thanks again for the report. If the problem persists please open a new report with the required information.