
Performance regression when rendering at very high resolution
Closed, Resolved · Public · Bug


System Information
Operating system: Windows-10-10.0.19041-SP0 64 Bits
Graphics card: Radeon RX550/550 Series ATI Technologies Inc. 4.5.14736 Core Profile Context 20.8.3 27.20.12027.1001

Blender Version
Broken: version: 2.92.0 Alpha, branch: master, commit date: 2020-11-09 14:47, hash: rB4f66cf3b8bb0
Worked: blender-2.90.1-windows64

Short description of error
Time to render:
2.90.1: ~4min
2.92: ~30min

Exact steps for others to reproduce the error

  1. Open file
  2. Set User Preferences > Interface > Render In to Keep User Interface
  3. Render image (CPU)

Event Timeline

Brecht Van Lommel (brecht) renamed this task from Performance regression when rendering large files to Performance regression when rendering at very high resolution.Nov 10 2020, 6:25 PM

I'm personally able to reproduce this issue on Linux with a Ryzen 9 3900X. Testing with old versions of Blender I have on my computer, I can reproduce it as far back as Blender 2.91.0 rB1b04eb6c4443 (2020-10-10 16:05).

I'm currently running a bisect to find the culprit commit. I'll comment back when I'm finished.

Alaska (Alaska) changed the task status from Needs Triage to Confirmed.Nov 10 2020, 9:21 PM
Alaska (Alaska) changed the subtype of this task from "Report" to "Bug".

When Cycles is rendering, it creates a render result for specific tiles (offset + dimension) inside the actual render result. It fills the rect on the RenderPass, which syncs a tile to Blender, and then calls RE_engine_end_result, which syncs the tile result back to the full image (`render_result_merge`).

The render API has an option to only update a section of the display based on a given rect. The display update callback is set to image_rect_update when rendering. This function tags the image with IMA_GPU_REFRESH and marks the GPUTexture of the image as obsolete.

The next time the GPUTexture is used/requested, a new GPUTexture is created. During this creation process the image is scaled to match the size of the GPUTexture; if the image does not fit on the GPU, it is scaled down.
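To see why recreating the GPUTexture on every tile update hurts at high resolution, a rough cost model helps (a sketch for illustration only, not Blender code): if every finished tile triggers a conversion/scale of the full image, the total pixels processed grows with the tile count times the image area, while a partial update would touch each pixel only once.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical cost model: a full refresh reprocesses the whole image
 * once per finished tile, a partial refresh only the tile itself. */
static uint64_t pixels_full_refresh(uint64_t w, uint64_t h, uint64_t tile)
{
  uint64_t tiles_x = (w + tile - 1) / tile;
  uint64_t tiles_y = (h + tile - 1) / tile;
  return tiles_x * tiles_y * w * h; /* whole image scaled per tile update */
}

static uint64_t pixels_partial_refresh(uint64_t w, uint64_t h, uint64_t tile)
{
  (void)tile;
  return w * h; /* each pixel converted once in total */
}
```

Under this model, a 9000x9000 render with 32 px tiles reprocesses the image roughly 282x282 times with a full refresh, which matches the observation that the performance hit grows with render size.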

We could add partial updates of GPUTextures so only the new data needs to be scaled (IMA_GPU_PARTIAL_REFRESH). The rects that need to be refreshed could be held in a list on the ImBuf.
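A minimal sketch of such a dirty-rect list (the `ImDirtyRect` type and function names here are hypothetical, not existing Blender API): render threads append the rect they just updated, and the draw code consumes the list when refreshing the GPUTexture.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical dirty-rect node; real code would live on the ImBuf and
 * need locking, since many render threads would append concurrently. */
typedef struct ImDirtyRect {
  struct ImDirtyRect *next;
  int x, y, width, height;
} ImDirtyRect;

static void dirty_rect_push(ImDirtyRect **head, int x, int y, int w, int h)
{
  ImDirtyRect *rect = malloc(sizeof(*rect));
  rect->x = x;
  rect->y = y;
  rect->width = w;
  rect->height = h;
  rect->next = *head;
  *head = rect;
}

/* Consume the list; the caller would upload each rect to the GPUTexture.
 * Returns the number of rects flushed. */
static int dirty_rect_flush(ImDirtyRect **head)
{
  int count = 0;
  while (*head) {
    ImDirtyRect *rect = *head;
    *head = rect->next;
    /* ...upload rect->x/y/width/height to the GPUTexture here... */
    free(rect);
    count++;
  }
  return count;
}
```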


There is already a function that can update a part of a GPU texture: IMB_update_gpu_texture_sub. But there is room for improvement, as it uses imb_gpu_get_data, which still converts and scales the full image.

Proposed solution

  • Modify IMB_update_gpu_texture_sub so it can update from a part of the source image. Currently it copies the full source image to a defined part of the GPUTexture.
    • Create imb_gpu_get_sub_data, similar to imb_gpu_get_data but without converting/scaling the full image, and use it there.
    • Change image_rect_update to use IMA_GPU_PARTIAL_REFRESH.
  • Keep track of the parts of the image that need to be refreshed. I would start with a linked list, as that is more scalable when rendering on 64 cores.
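As a sketch of what the proposed sub-data extraction could look like (the signature and packing assumptions here are hypothetical, not the actual Blender function): copy only the requested rows out of the source buffer, so the work is proportional to the rect instead of the full image.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical: extract a sub-rectangle from a tightly packed RGBA
 * float buffer of width src_w, touching only w*h pixels instead of
 * converting/scaling the whole image. */
static void imb_gpu_get_sub_data(const float *src, int src_w,
                                 int x, int y, int w, int h,
                                 float *dst)
{
  for (int row = 0; row < h; row++) {
    const float *src_row = src + ((size_t)(y + row) * src_w + x) * 4;
    memcpy(dst + (size_t)row * w * 4, src_row, sizeof(float) * 4 * w);
  }
}
```

The extracted block could then be handed to IMB_update_gpu_texture_sub for upload to the corresponding region of the GPUTexture.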

That sounds good to me. It may be possible to reuse the existing code for texture painting that does partial updates of scaled and unscaled textures.

Yes, gpu_texture_update_from_ibuf handles this for texture painting. During development we should consider merging the two implementations.

Sounds good to me too. Although this would not help when using progressive refine, I think that is an edge case as it's not used often.

Jeroen Bakker (jbakker) triaged this task as High priority.Nov 18 2020, 12:23 PM
Jeroen Bakker (jbakker) lowered the priority of this task from High to Normal.Nov 23 2020, 10:49 AM
slwk1d (Slowwkidd) added a comment.EditedNov 26 2020, 3:03 PM

Just to leave a comment to be sure: I encountered the same problem in 2.91 at a resolution of 1754x2481, which is not "very high", so I guess the bug also affects smaller resolutions than the one in the first report.

@Jeroen Bakker (jbakker) Shouldn't this be considered high priority? It considerably increases render times for renders around 2K resolution, which is quite a common scale, and it's currently present in 2.91.

silex (silex) added a subscriber: silex (silex).EditedDec 14 2020, 8:51 PM

After the patch I'm no longer experiencing crashes while rendering, which is nice.
I've done some tests on the current build (hash 010f44b855ca, branch master).
The bigger the render image size, the bigger the performance hit.

rendering image size / average CPU usage
3000x3000 / ~98%
6000x6000 / ~90%
9000x9000 / ~70%
12000x12000 / ~2%

Cycles CPU rendering, tile size 32px. No adaptive sampling or denoising.

@silex (silex) have you tried again after applying the patch from this task?

Thank you!
After a quick test the CPU looks fully utilised in all the above scenarios.
As for exact performance numbers, I'll still need to check. Nevertheless, most of the regression seems to be gone.

silex (silex) added a comment.EditedJan 9 2021, 9:42 PM

I've run some tests and it seems the regression also happens with hybrid CPU+GPU rendering.
Unfortunately I cannot pinpoint an exact scene config for a solid example.
The default cube renders without regression, but on production scenes with heavy geometry with displacement, volumes and lots of lights (both emissive meshes and standard lights) I see dips to 0% CPU utilisation.

CPU-only rendering regression is fixed.

Additionally, the tile size is wrong. In the properties the tile size is set to 32 px, but during rendering the tiles are around 14x14 px.