
Blender 4.2: Compositor

GPU Acceleration

Compositing for final renders can now be GPU-accelerated (727a90a0f1).

The device used for compositing is configured in either Render settings -> Performance -> Compositor, or in the properties panel of the Compositor editor. It is expected that CPU and GPU compositor evaluation gives the same results, with minimal differences between devices.
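The same setting can be changed from a script. This is a minimal configuration sketch, assuming the `compositor_device` property that Blender 4.2 exposes on `RenderSettings`; it must be run from within Blender's Python console or a registered script, not a standalone interpreter.

```python
import bpy

# Assumes Blender 4.2's RenderSettings.compositor_device property.
# Run inside Blender; 'GPU' enables GPU-accelerated compositing.
scene = bpy.context.scene
scene.render.compositor_device = 'GPU'  # or 'CPU'
```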

CPU Optimizations

The render compositor CPU backend has also been rewritten to improve performance, often making it several times faster than before.

There are some changes in behavior. These were made so that the final render and 3D viewport compositors work consistently and can run efficiently on the GPU. Manual adjustments might be necessary to get the same result as before. See the Breaking Changes section below for details.


  • Added an overlay to display each node's last execution time. The display of execution time is disabled by default, and can be enabled from the Overlays pop-over. The execution time statistics are gathered during compositor evaluation in the nodes editor. (467a132166)
  • The Legacy Cryptomatte node is now supported in the Viewport Compositor. (2bd80f4b7a)
  • A new Bloom mode was added to the Glare node. It produces similar results to the Fog Glow mode but is much faster to compute, has a greater highlights spread, and has a smoother falloff. In the future, the Fog Glow mode will be improved to be faster and more physically accurate. (f4f22b64eb)
  • The Fast Gaussian mode of the Blur node is now implemented for the viewport compositor. (382131fef2)
  • The Translate node now has an Interpolation option that allows choosing between Nearest, Bilinear, and Bicubic. (483c854612)
  • The Fog Glow mode of the Glare node is now implemented for the viewport compositor, albeit with a slow CPU-based implementation for now. (f0c379e1d3)


  • The Fog Glow mode of the Glare node is now up to 25x faster. (d4bf23771d)
  • In the viewport compositor, the compositing space is now always limited to the camera region in camera view regardless of the passepartout value. This means areas outside of the camera will not be composited, but the result will match the final render better.
  • The Hue Correct node now evaluates the saturation and value curves at the original input hue, rather than the updated hue resulting from the evaluation of the hue curve. (69920a0680)
  • The Hue Correct node now wraps around, so discontinuities at the red hue will no longer be an issue. (8812be59a4)
  • The Blur node now assumes extended image boundaries, that is, pixels outside the image boundary are treated as having the same color as their closest boundary pixels. Previously it ignored such pixels and adjusted the blur weights accordingly. (c409f2f7c6)
  • The Vector Blur node was rewritten to match EEVEE's motion blur. The Curved, Minimum, and Maximum options were removed but might be restored later. (b229d32086)
  • The Fast Gaussian mode of the Blur node now matches the size of the other blur modes; previously it produced a larger blur. (f595345f52)
  • The Bicubic interpolation mode of the Rotate and Stabilize 2D nodes is now implemented as a bicubic b-spline interpolation, while previously it used bilinear. (f1d4859e2a)
  • The Update Views button was removed from the Switch View node, and the updates now happen automatically. (9484770551)
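
The extended-boundary behavior now assumed by the Blur node can be sketched in a few lines. This is an illustrative pure-Python example, not Blender code: out-of-range samples clamp to the nearest boundary pixel instead of being dropped with re-normalized weights, shown here with a 1-D row and a simple box blur.

```python
def sample_extended(image, x):
    """Return image[x], clamping x to the valid range (1-D for brevity).

    This models "extend" boundaries: pixels outside the image take the
    color of their closest boundary pixel.
    """
    return image[max(0, min(x, len(image) - 1))]

def box_blur(image, radius):
    """Box blur of a 1-D row using extended boundaries."""
    n = 2 * radius + 1
    return [
        sum(sample_extended(image, i + d) for d in range(-radius, radius + 1)) / n
        for i in range(len(image))
    ]
```

Near the left edge, the out-of-range sample repeats the first pixel, so a uniform region stays uniform right up to the boundary instead of darkening as it would if outside pixels were treated as black.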

Breaking Changes

The changes mainly affect canvas and single-value handling: the new implementation follows strict left-to-right node evaluation, including canvas detection. This makes relative transforms behave differently in some cases, and means single-value inputs (such as the RGB input) can no longer be transformed as images.

Each of the following sections describes a difference in behavior that might affect the result of old compositing setups.


Transforms

Transforms are now applied immediately at the transform nodes, meaning that scaling down an image destructively reduces its resolution. In the old compositor, transformations were delayed until it was necessary to apply them. This is demonstrated in the following example, where an image is scaled down and then scaled up again. Since the scaling was delayed in the old compositor, the image didn't lose any information, while in the new compositor the image became pixelated.
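
The information loss from immediate transform realization can be illustrated with a toy nearest-neighbor resampler (illustrative Python, not Blender code): once the row is stored at half resolution, scaling back up cannot recover the dropped pixels.

```python
def scale_nearest(pixels, factor):
    """Nearest-neighbor resample of a 1-D pixel row by the given factor."""
    new_len = max(1, int(len(pixels) * factor))
    return [pixels[int(i / factor)] for i in range(new_len)]

row = [0, 1, 2, 3, 4, 5, 6, 7]
down = scale_nearest(row, 0.5)   # immediately realized at half resolution
up = scale_nearest(down, 2.0)    # upscaling cannot restore the lost pixels
```

`down` keeps only every other pixel, so `up` contains duplicated values rather than the original row, mirroring the pixelation seen in the new compositor.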




Wrapping

The Wrapping option in the Translate node can no longer be used to repeat a pattern. It now operates in a clip-on-one-side, wrap-on-the-other manner. This is demonstrated in the following example, where an image has Wrapping enabled and is then scaled down. In the old compositor, the wrapping was delayed until the Scale node, producing repetitions. In the new compositor, wrapping only affects the Translate node, which does nothing here, so the result is unaffected.




This functionality will be restored in the future in a more explicit form.
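
The new wrapping behavior amounts to translation with modular indexing inside a fixed canvas: pixels pushed off one side re-enter on the other, but the canvas never grows, so no repetition survives for downstream nodes. A 1-D sketch in illustrative Python (not Blender code):

```python
def translate_wrapped(pixels, offset):
    """Translate a 1-D row with wrapping.

    Pixels shifted off one side re-enter on the other side, but the
    output canvas is always the same size as the input, so the pattern
    is never repeated beyond the canvas.
    """
    n = len(pixels)
    return [pixels[(i - offset) % n] for i in range(n)]
```

Translating by a full canvas width is the identity, which is why a downstream scale node sees a single copy of the image rather than a tiled pattern.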

Size Inference

The old compositor tried to infer an image size from upstream or output nodes, while the new compositor evaluates the node tree strictly from left to right without inferring image sizes. The new behavior is more predictable, but it invalidates some tricks that were previously possible. This is demonstrated in the following example. In the old compositor, the Translate node inferred its size from the mandrill image, so it is as if the node received a full image of the same size filled with a red color. Translating that image down left a black area at the top, which, when blurred, created a gradient.

The new compositor, on the other hand, treats the Translate node as having a single red color as input, which has no transformation to apply. Similarly, the Blur node treats its input as a single red value and passes it through without blurring. Finally, when the color Mix node receives a single red color, it mixes it with the entire mandrill image, as most nodes do.
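
The flow of single values through this setup can be modeled with a minimal sketch (illustrative Python, not Blender code; the node functions and the 1-D "image" are stand-ins): canvas-dependent nodes pass single values through unchanged, and only a node that also has an image input realizes them over that image's canvas.

```python
Single = float   # stand-in for a single-value socket, e.g. one RGB channel
Image = list     # stand-in 1-D "image" for brevity

def translate_node(data, offset):
    if isinstance(data, Single):
        return data  # no canvas: translating a single value is a no-op
    n = len(data)
    return [data[i - offset] if 0 <= i - offset < n else 0.0 for i in range(n)]

def blur_node(data):
    if isinstance(data, Single):
        return data  # a single value passes through unblurred
    return data      # (real image blur elided in this sketch)

def mix_node(a, b, fac=0.5):
    if isinstance(a, Single) and isinstance(b, Image):
        a = [a] * len(b)  # realized over the image input's canvas
    return [fac * x + (1 - fac) * y for x, y in zip(a, b)]

red = 1.0                         # single red value, as from an RGB input
image = [0.0, 0.0, 0.0, 0.0]      # stand-in for the mandrill image
result = mix_node(blur_node(translate_node(red, 2)), image)
```

The translate and blur steps leave the single value untouched, and the mix covers the whole image uniformly, matching the behavior described above.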




The old compositor would clip images if their inferred size was smaller than their actual size; the new compositor does no size inference and consequently does not clip images. This is demonstrated in the following example.



Sampling Space

The old compositor sampled images in their aforementioned inferred space, which could be clipped if the inferred size was smaller than the actual image size. The new compositor samples images in their own space. This is demonstrated in the following example, where a scaled-up image is sampled in its entirety by using normalized [0, 1] gradients as the sampling coordinates. The old compositor produced clipped images, since the inferred size was also clipped, while the new compositor produces full images.
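
Sampling in an image's own space means a normalized [0, 1] coordinate always spans the full image, so no clipping can occur. A 1-D sketch in illustrative Python (not Blender code):

```python
def sample_normalized(pixels, u):
    """Nearest-neighbor sample of a 1-D row at normalized coordinate u.

    u = 0.0 maps to the first pixel and u = 1.0 to the last, so a
    [0, 1] gradient always covers the entire image.
    """
    i = min(int(u * len(pixels)), len(pixels) - 1)
    return pixels[i]

row = [10, 20, 30, 40]
full = [sample_normalized(row, u / 3) for u in range(4)]  # u = 0, 1/3, 2/3, 1
```

Because the coordinates are normalized against the image's own size rather than an inferred one, the sweep reproduces every pixel of the row.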