
denoising or using denoise passes on multiple GPUs is very slow
Closed, Invalid · Public

Description

System Information
Operating system: all OS
Graphics card: CUDA cards

Blender Version
Broken: RC1
Worked: partly in 2.80, before animation denoising was committed

Short description of error
Tiles need to be transferred between GPUs so that denoising has all of its neighbor tiles available. The transfer can only happen when the GPU that owns the requested tile refreshes the tiles it is rendering; in the meantime, the requesting GPU idles.
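To illustrate the pattern (purely a sketch with two host threads standing in for the two devices, not the actual Cycles scheduling code): the owner only publishes the neighbor tile at its next refresh point, and the requester blocks until then, doing no useful work.

```
// Illustrative only (not Cycles code): "GPU A" services tile requests only
// when it refreshes its own tiles after a work batch, so "GPU B" sits idle.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool tile_available = false;  // set by A at its next refresh point

void gpu_a() {
  // A keeps rendering its current batch; the neighbor tile is only
  // published at the refresh point after the batch finishes.
  std::this_thread::sleep_for(std::chrono::milliseconds(500));  // "rendering"
  {
    std::lock_guard<std::mutex> lock(m);
    tile_available = true;  // refresh point: tile buffer handed over
  }
  cv.notify_all();
}

void gpu_b() {
  auto start = std::chrono::steady_clock::now();
  std::unique_lock<std::mutex> lock(m);
  cv.wait(lock, [] { return tile_available; });  // B idles here
  auto waited = std::chrono::duration_cast<std::chrono::milliseconds>(
      std::chrono::steady_clock::now() - start);
  std::cout << "GPU B idled for " << waited.count() << " ms before denoising\n";
}

int main() {
  std::thread a(gpu_a), b(gpu_b);
  a.join();
  b.join();
}
```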

Exact steps for others to reproduce the error
Any production file with denoising will show the behavior. The severity depends on the scene and the GPU combination: the more GPUs there are, and the more heterogeneous they are, the bigger the slowdown.
When a GPU is not connected to a display, it uses larger step samples, which makes the idling even longer. Since multi-GPU setups often have only one card connected to a display, this increases the chance of hitting the problem.
With 2 GPUs on some Evermotion scenes, the slowdown goes up to 80% (so 2 cards are just 20% faster than a single card).

Event Timeline

Ideas to solve this, as requested per PM:

  • Better tile dispatching: give each GPU a corner of the image to work on, so they only need to exchange tiles in the middle.
  • Use CUDA streams to allow exchanging buffers during rendering, maybe? (A rough sketch of this follows after the list.)
  • Transfer each rendered tile to the CPU as soon as it is finished and do the denoising on the CPU (partially implemented already, since GPU+CPU rendering works with denoising?). I concentrate on GPU only, so this is just an idea.
  • Render everything first and do the denoising in one batch at the end. That would also allow decoupling the tile sizes for path tracing and denoising: denoising is faster with big tiles, while path tracing is faster with small tiles.
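For the CUDA streams idea, here is a minimal sketch of the kind of transfer I mean, assuming the tile buffer already lives on device 0 and device 1 needs it for denoising. This is illustrative only and not based on the current Cycles device code; the tile size and device IDs are example values. The point is that an asynchronous peer copy on a dedicated transfer stream does not have to stall the render kernels running on the compute stream.

```
// Rough sketch of the CUDA-streams idea (illustrative, not Cycles code):
// copy a neighbor tile buffer between GPUs asynchronously on a dedicated
// transfer stream so rendering can keep running in the meantime.
#include <cuda_runtime.h>
#include <cstdio>

#define CUDA_CHECK(call)                                              \
  do {                                                                \
    cudaError_t err = (call);                                         \
    if (err != cudaSuccess) {                                         \
      fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));   \
      return 1;                                                       \
    }                                                                 \
  } while (0)

int main() {
  int device_count = 0;
  CUDA_CHECK(cudaGetDeviceCount(&device_count));
  if (device_count < 2) {
    fprintf(stderr, "needs two CUDA devices\n");
    return 1;
  }

  // Example tile buffer: 240x180 pixels, 4 floats per pixel.
  const size_t tile_bytes = 240 * 180 * 4 * sizeof(float);
  float *tile_on_gpu0 = nullptr, *tile_on_gpu1 = nullptr;

  CUDA_CHECK(cudaSetDevice(0));
  CUDA_CHECK(cudaMalloc(&tile_on_gpu0, tile_bytes));
  CUDA_CHECK(cudaSetDevice(1));
  CUDA_CHECK(cudaMalloc(&tile_on_gpu1, tile_bytes));

  // Enable direct peer access from device 1 to device 0 if supported;
  // cudaMemcpyPeerAsync falls back to staging through the host otherwise.
  int can_access = 0;
  CUDA_CHECK(cudaDeviceCanAccessPeer(&can_access, 1, 0));
  if (can_access)
    CUDA_CHECK(cudaDeviceEnablePeerAccess(0, 0));

  // Dedicated transfer stream on device 1; render kernels would keep
  // running on the compute stream while this copy is in flight.
  cudaStream_t transfer_stream;
  CUDA_CHECK(cudaStreamCreate(&transfer_stream));
  CUDA_CHECK(cudaMemcpyPeerAsync(tile_on_gpu1, 1, tile_on_gpu0, 0,
                                 tile_bytes, transfer_stream));

  // Only the denoise kernel that needs the neighbor tile has to wait here.
  CUDA_CHECK(cudaStreamSynchronize(transfer_stream));

  CUDA_CHECK(cudaStreamDestroy(transfer_stream));
  CUDA_CHECK(cudaFree(tile_on_gpu1));
  CUDA_CHECK(cudaSetDevice(0));
  CUDA_CHECK(cudaFree(tile_on_gpu0));
  printf("peer copy finished\n");
  return 0;
}
```

In Cycles the owning device would still need to publish a consistent copy of the tile, but at least the requesting device would not have to block on the owner's whole refresh cycle.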

I've tested the different render possibilities in Blender and compared Blender 2.79b with Blender 2.80 (5f140e61c28c).
It's the same scene in Blender 2.79b and Blender 2.80.

Blender 2.79b

CPU (AMD Threadripper 1950X, 16 cores)
bmw27_cpu.blend (tiles 32x32) 01:20.53
bmw27_cpu_denoise.blend (tiles 32x32) 01:24.32
2x GPU (GeForce GTX 1080 Ti 11GB / GeForce Titan X 12GB)
bmw27_gpu.blend (tiles 240x180) 01:09.15
bmw27_gpu_denoise.blend (tiles 240x180) with denoise 01:12.15
GPU (GeForce GTX 1080 Ti 11GB)
bmw27_gpu.blend (tiles 240x180) 02:01.35
bmw27_gpu_denoise.blend (tiles 240x180) with denoise 02:05.07

Blender 2.80 5f140e61c28c

CPU (AMD Threadripper 1950X, 16 cores)
bmw27_cpu.blend (tiles 32x32) 01:46.65
bmw27_cpu_denoise.blend (tiles 32x32) 01:51.20
2x GPU (GeForce GTX 1080 Ti 11GB / GeForce Titan X 12GB)
bmw28_gpu.blend (tiles 32x32) 00:34.24
bmw28_gpu_denoise.blend (tiles 32x32) with denoise 00:40.69
GPU (GeForce GTX 1080 Ti 11GB)
bmw27_gpu.blend (tiles 32x32) 00:49.69
bmw27_gpu.blend (tiles 32x32) with denoise 00:55.06
CPU+GPU (AMD Threadripper 1950X, 16 cores + GeForce GTX 1080 Ti 11GB / GeForce Titan X 12GB)
bmw28_gpu_cpu.blend (tiles 16x16) 00:30.40
bmw28_gpu_cpu_denoise.blend (tiles 16x16) with denoise 00:39.05

Summing up: Blender 2.79b renders the scene on CPU about 25% faster. Why?
Comparing one GPU to two GPUs in Blender 2.79b, there is a 75% boost with two GPUs.
In Blender 2.80 the boost is lower, only 44%. What is slowing down the second GPU? Blender 2.80 is also using a newer CUDA version.
GPU rendering in Blender 2.80 is more than twice as fast as in Blender 2.79b!
I couldn't find any significant slowdown with denoise.

Brecht Van Lommel (brecht) changed the task status. Edited Jul 19 2019, 12:37 PM
Brecht Van Lommel (brecht) claimed this task.

Thanks for the report, but we don't treat potential performance improvements as bugs. We're aware there are some inefficiencies in this code.

Feel free to contribute changes to improve performance here.