
Cycles X - GPU Performance
Confirmed, Normal · Public · To Do

Assigned To
None
Authored By
Brecht Van Lommel (brecht)
Mon, Apr 26, 5:14 PM

Description

Memory usage

  • Reduce size of IntegratorState
    • Overlap SSS parameters with other memory
    • Compress some floats as half float
    • Reduce bits for integers where possible
    • Dynamic volume stack size depending on scene contents
  • SoA (see the sketch after this list)
    • Individual arrays for XYZ components of float3?
    • Pack together 8bit/16bit values into 32bit?
  • Reduce size of ShaderData
    • Compute some differentials on the fly?
    • Read some data directly from ray/intersection?
    • Don't copy matrix if motion blur is disabled
  • Auto detect good integrator state size depending on GPU (hardcoded to 1 million now)
  • Automatically increase integrator state size depending on available memory
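
A rough sketch combining a few of these ideas (SoA arrays per component, half-float compression, packed integers). All type and field names here are invented for illustration, not the actual IntegratorState layout:

#include <cuda_fp16.h>
#include <stdint.h>

/* Hypothetical SoA layout: one array per component instead of an array
 * of structs, so a warp reading throughput_x touches contiguous memory. */
struct IntegratorStateSoA {
  /* Full precision where accuracy matters. */
  float *ray_origin_x, *ray_origin_y, *ray_origin_z;
  /* Compressed to half where reduced precision is acceptable. */
  __half *throughput_x, *throughput_y, *throughput_z;
  /* 8-bit and 16-bit fields packed into one 32-bit word:
   * bounce (8) | transparent_bounce (8) | flags (16). */
  uint32_t *packed;
};

__device__ inline int state_bounce(const IntegratorStateSoA &s, int i)
{
  return (int)(s.packed[i] & 0xFFu);
}

__device__ inline float3 state_throughput(const IntegratorStateSoA &s, int i)
{
  return make_float3(__half2float(s.throughput_x[i]),
                     __half2float(s.throughput_y[i]),
                     __half2float(s.throughput_z[i]));
}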

Kernel Size

  • Replace megakernel used for last few samples? Only helps about 10% with viewport render, barely with batch render. But makes OptiX runtime compilation slow.
    • Enqueue all kernels for a single path iteration at once?
    • Device side enqueue? Not likely for OptiX.
  • Make svm_eval_nodes a templated function, specialized per use case (see the sketch after this list):
    • Background
    • Lights
    • Shadows
    • Shader-raytracing
    • Volumes
  • Verify if more specialization is possible in svm_eval_nodes
  • Deduplicate shader evaluation call in shade_background
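
A minimal sketch of that specialization; the intent enum, node opcodes and kernel below are invented for illustration, and the real svm_eval_nodes interface differs:

enum EvalIntent { EVAL_BACKGROUND, EVAL_LIGHT, EVAL_SHADOW, EVAL_RAYTRACE, EVAL_VOLUME };

/* One template instantiation per use case; `intent` is a compile-time
 * constant, so the compiler drops branches a variant cannot reach,
 * shrinking code size and register pressure for that kernel. */
template<EvalIntent intent>
__device__ float eval_nodes(const int *nodes, int num_nodes)
{
  float result = 0.0f;
  for (int i = 0; i < num_nodes; i++) {
    switch (nodes[i]) {
      case 0: /* emission-style node: needed by every variant */
        result += 1.0f;
        break;
      case 1: /* BSDF-style node: compiled out of the shadow and
                 background specializations */
        if (intent != EVAL_SHADOW && intent != EVAL_BACKGROUND)
          result += 2.0f;
        break;
    }
  }
  return result;
}

__global__ void shade_shadow_kernel(const int *nodes, int num_nodes, float *out)
{
  out[blockIdx.x * blockDim.x + threadIdx.x] = eval_nodes<EVAL_SHADOW>(nodes, num_nodes);
}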

Scheduling

  • Try pushing (part of) the integrator state in queues rather than a persistent location, for more coherent memory access
    • Check potential performance benefit by coalescing state
    • Shadow paths
    • Main path
  • Make shadow paths fully independent so they do not block main path kernels?
  • Accurately count the number of available paths when scheduling additional work tiles
  • Tweak work tiles or pixel order to improve coherence (many small tiles, Z-order, ..)
  • Try other shader sorting techniques (but will be more expensive than bucket sort)
    • Take into account object ID
    • Take into account random number for BSDF/BSSRDF selection?
  • Overlapped kernel execution
    • Use multiple independent GPU queues? (so far found to be 15% slower)
    • Use multiple GPU queues to schedule different kernels?
  • Optimize active index, sorting and prefix sum kernels
    • Parallelize prefix_sum
    • Build the active/sorted index array for a specific kernel based on another array indicating active paths for all kernels, especially when the number of paths is small (see the sketch after this list)
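
A minimal sketch of the active-index idea in that last item; all names are hypothetical, the real kernels also handle sorting, and a parallelized prefix sum could replace the atomic counter to make the resulting order deterministic:

/* Build a compacted array of the indices of paths queued for one kernel. */
__global__ void build_active_index_array(const int *path_kernel, /* queued kernel per path, 0 = inactive */
                                         int target_kernel,
                                         int num_states,
                                         int *active_indices,
                                         int *num_active)
{
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= num_states)
    return;
  if (path_kernel[i] == target_kernel) {
    /* Atomic counter yields a dense index array, at the cost of a
     * nondeterministic ordering of the active paths. */
    const int slot = atomicAdd(num_active, 1);
    active_indices[slot] = i;
  }
}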

Render Algorithms

  • Transparent shadows: tune max hits for performance / memory usage
  • Transparent shadows: can we terminate OptiX rays earlier once enough hits are found? (a sketch follows)
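
A sketch of what early termination could look like in an OptiX any-hit program; the payload layout and hit limit are assumptions, not the actual Cycles shadow kernels:

#include <optix.h>

#define MAX_TRANSPARENT_HITS 8 /* placeholder limit to tune */

extern "C" __global__ void __anyhit__shadow_transparent()
{
  /* Payload register 0 holds the number of recorded hits so far. */
  const unsigned int num_hits = optixGetPayload_0() + 1;
  optixSetPayload_0(num_hits);

  if (num_hits >= MAX_TRANSPARENT_HITS) {
    /* Enough hits: stop traversal instead of visiting further
     * primitives along the ray. */
    optixTerminateRay();
  }
  else {
    /* Reject this hit for closest-hit purposes and keep searching
     * for more transparent hits along the ray. */
    optixIgnoreIntersection();
  }
}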

Display

  • For final render, let Cycles draw the image instead of copying pixels to Blender
    • Render pass support

Tuning

  • Adaptively determine the integrator state size based on the number of GPU cores and available memory
  • Tweak kernel compilation parameters (num threads per block, max registers); see the sketch after this list
    • Different parameters per kernel?
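
For the per-kernel parameters idea, CUDA exposes this directly via __launch_bounds__; the numbers below are placeholders to be tuned per kernel, not recommendations:

/* Limit to 256 threads per block and request at least 4 resident blocks
 * per multiprocessor; the compiler caps register usage to make this fit,
 * trading possible spills for higher occupancy. */
__global__ void __launch_bounds__(256, 4) integrator_intersect_closest_stub()
{
  /* Kernel body elided; the attribute is the point of this sketch. */
}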

Event Timeline

Brecht Van Lommel (brecht) changed the task status from Needs Triage to Confirmed. (Mon, Apr 26, 5:14 PM)
Brecht Van Lommel (brecht) created this task.

In a simple queue-based system with a fixed queue size for each kernel, memory usage increases as the number of kernels goes up, and much of the queue memory remains unused. However, it does have the benefit that memory reads/writes may be faster due to coalescing.

Using more paths and the associated memory can significantly improve performance, though. The question is whether there is an implementation that can get us both coalescing and little memory waste. Some brainstorming follows.

Filling Gaps

The current initialization of main paths works by filling gaps in the state array: before init_from_camera is called, an array of unused path indices is constructed, and those slots are then filled in.

The same mechanism could be extended to the shadow queue, constructing an array of empty entries and using that in shade_surface, rather than forcing the shadow queue to be emptied before shade_surface can be executed.

This still leaves gaps until additional paths can be scheduled, and the fragmentation may cause incoherent paths to be grouped together.
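
A minimal sketch of how init_from_camera could consume such a gap array; the names and the camera-ray generation are simplified placeholders:

/* Write each new path into a slot taken from a prebuilt array of unused
 * state indices (see the index-building sketch in the Scheduling section). */
__global__ void init_from_camera_fill_gaps(const int *unused_indices,
                                           int num_new_paths,
                                           float *ray_origin_x /* one SoA field, for brevity */)
{
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= num_new_paths)
    return;
  /* Path i lands in an arbitrary gap, not at offset i: the state array
   * stays fragmented, which is the coherence concern noted above. */
  const int state = unused_indices[i];
  ray_origin_x[state] = 0.0f; /* ...generate the actual camera ray here */
}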

Compaction

Rather than trying to write densely packed arrays, we could compact the kernel state to remove any empty gaps, using a dedicated kernel. This kernel causes additional memory reads and writes, in the hope that they pay off when multiple subsequent kernels can read memory more efficiently. It's not obvious that this is a win: in many cases there may only be 1 or 2 kernel executions before the next path iteration, and the total cost of memory access may increase.

We can do an experiment with a slow coalescing kernel before every kernel execution, to see how much kernel execution is impacted by non-coalesced memory access.
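
A sketch of such a compaction pass, assuming an exclusive prefix sum over the active flags has already produced the scatter targets; all names are hypothetical:

/* Copy surviving states to the front of the array so later kernels
 * read a dense range. */
__global__ void compact_states(const int *active,    /* 1 = path alive */
                               const int *new_index, /* exclusive prefix sum of `active` */
                               const float *src_field,
                               float *dst_field,
                               int num_states)
{
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= num_states || !active[i])
    return;
  /* One copy like this per state field; this extra traffic is the cost
   * that the later coalesced reads have to amortize. */
  dst_field[new_index[i]] = src_field[i];
}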

Ring Buffers

In simple cases where we ping-pong between two kernels, like intersect_shadow and shade_shadow, one option is to share a single queue and fill consecutive empty items. Treated as a ring buffer, this can be made to work for two kernels.

In the single-threaded case this is straightforward; however, synchronization to avoid overwriting items from the input queue is not so obvious with many threads. In practice we'd likely need to allocate additional memory based on the number of blocks executing in parallel, and that can be significant.
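
A sketch of the ring-buffer indexing, with the synchronization problem deliberately left unsolved and only flagged in comments; all names are hypothetical:

struct RingQueue {
  int *entries;
  unsigned int capacity; /* power of two, so wrap-around is a mask */
  unsigned int *head;    /* next slot to read */
  unsigned int *tail;    /* next slot to write */
};

__device__ void ring_push(RingQueue q, int path_index)
{
  /* Reserve a slot. With many threads in flight the tail can catch up
   * to the head, which is why extra slack proportional to the number
   * of concurrently executing blocks would be needed in practice. */
  const unsigned int slot = atomicAdd(q.tail, 1u);
  q.entries[slot & (q.capacity - 1)] = path_index;
}

__device__ int ring_pop(RingQueue q)
{
  const unsigned int slot = atomicAdd(q.head, 1u);
  return q.entries[slot & (q.capacity - 1)];
}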

Chunks

An idea to avoid that would be to use a chunked allocator for queues, with the chunk size equal to the block size of the kernels.

When a kernel is about to write out queue items, it would allocate an additional chunk for the queue if the current chunk does not have enough space for all items. Queue items would then be written into 1 or 2 chunks. When executing a kernel, memory reads would be coalesced, since each chunk matches the block size for that kernel and contains all the queue items in order. Sorting by shader would break coalescing for the shade_surface kernel, though, and a separate queue per shader would likely waste too much memory.

Significant memory could still be unused, for two reasons:

  • For input queue items to not overwrite output queue items, a simple implementation may need to double the memory. Chunks could become available as soon as a block finishes executing; however, many blocks execute in parallel, so the practical memory savings are not obvious.
  • Up to block size × number of kernels extra items would need to be allocated to handle memory lost to incompletely filled chunks.
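
A sketch of the chunk allocation, glossing over the ordering of concurrent allocations; names are hypothetical and the chunk size is assumed equal to the kernel block size:

#define CHUNK_SIZE 256 /* = kernel block size */

struct ChunkedQueue {
  int *items;                    /* shared pool of chunks, CHUNK_SIZE items each */
  int *chunk_list;               /* ordered chunk indices forming this queue */
  unsigned int *num_chunks;      /* chunks used by this queue */
  unsigned int *next_free_chunk; /* bump allocator over the shared pool */
};

__device__ int queue_allocate_chunk(ChunkedQueue q)
{
  /* Called by one thread per block when the current chunk is full;
   * the new chunk is then shared by the whole block. */
  const int chunk = (int)atomicAdd(q.next_free_chunk, 1u);
  q.chunk_list[atomicAdd(q.num_chunks, 1u)] = chunk;
  return chunk;
}

__device__ void queue_write(ChunkedQueue q, int chunk, int offset, int value)
{
  /* Each chunk is contiguous and block-size aligned, so a block reading
   * it back gets fully coalesced accesses. */
  q.items[chunk * CHUNK_SIZE + offset] = value;
}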

The megakernel was a clear win with CUDA when we added it to speed up the last 2% of paths. But benchmarking this with OptiX now, it's a mixed bag:

Render time in seconds:

                              no-megakernel                 cycles-x
bmw27.blend                   10.0372                       10.0984                       
classroom.blend               13.3341                       13.3774                       
pabellon.blend                7.34786                       7.3563                        
monster.blend                 8.51194                       9.12812                       
barbershop_interior.blend     9.55334                       9.77131                       
junkshop.blend                12.8547                       13.2737                       
pvt_flat.blend                14.8002                       13.0439

A possible explanation is that pvt_flat.blend uses adaptive sampling, where batch sizes are smaller and the megakernel helps more. Note also that this does not benchmark viewport rendering, where the megakernel likewise helps.

There's still room for optimization when we have few paths active, without using a megakernel. Given these numbers, that seems worth looking into.

Current progress on trying to eliminate the megakernel is in P2111, still not where I want it to be.

Compacting the state array seems not all that helpful.

What I did notice while working on that is that in the pvt_flat scene, the number of active paths often drops to a low number but is not refilled quickly. Reducing the tile size to counter that not only removes the performance regression, but actually speeds up rendering. However, this slows down other scenes.

Render time in seconds:

                              compact+tilesize/4    tilesize/4     compact       no-megakernel      megakernel
bmw27.blend                   8.34171               8.07403        7.54844       7.5659             7.71559                             
classroom.blend               11.2431               10.9255        10.6584       10.6141            10.8755                             
pabellon.blend                5.91434               5.96131        5.70281       5.73752            5.87296                             
monster.blend                 6.65674               6.52214        6.94078       7.00113            8.08601                             
barbershop_interior.blend     8.43154               7.90963        8.20633       8.16851            8.75592                             
junkshop.blend                10.2858               10.4201        10.5334       10.5217            11.1836                             
pvt_flat.blend                10.116                10.2103        12.115        12.4971            11.0911

There must be something that can be done to get closer to the best of both.

Looking at the kernel execution times of bmw27, it's clear that optimizing init_from_camera for multiple work tiles would help, but it's only a part of the performance gap. There's something else going on here that is harder to pin down.

compact+tilesize/4
6.71538s: integrator_shade_surface integrator_sorted_paths_array prefix_sum
1.47519s: integrator_intersect_closest integrator_queued_paths_array
0.53159s: integrator_intersect_shadow integrator_queued_shadow_paths_array
0.38923s: integrator_shade_shadow integrator_queued_shadow_paths_array
0.32022s: integrator_init_from_camera integrator_terminated_paths_array
0.16891s: integrator_shade_background integrator_queued_paths_array

no-megakernel
6.16877s: integrator_shade_surface integrator_sorted_paths_array prefix_sum
1.17981s: integrator_intersect_closest integrator_queued_paths_array
0.41735s: integrator_shade_shadow integrator_queued_shadow_paths_array
0.36235s: integrator_intersect_shadow integrator_queued_shadow_paths_array
0.21522s: integrator_shade_background integrator_queued_paths_array
0.17480s: integrator_init_from_camera integrator_terminated_paths_array