
Blender 4.2: VFX & Video



  • Added an overlay that displays each node's last execution time. The display of execution times is disabled by default and can be enabled from the Overlays pop-over. The execution time statistics are gathered during compositor evaluation in the node editor. (467a132166)
  • The Legacy Cryptomatte node is now supported in the Viewport Compositor. (2bd80f4b7a)


  • The Hue Correct node now evaluates the saturation and value curves at the original input hue, not the updated hue resulting from the evaluation of the hue curve. (69920a0680)
  • The Blur node now assumes extended image boundaries, that is, it assumes pixels outside of the image boundary have the same color as their closest boundary pixels. Previously, it ignored such pixels and adjusted the blur weights accordingly. (c409f2f7c6)
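The difference between the two boundary policies can be sketched in numpy (an illustration of the principle only, not the compositor's actual implementation): `blur_renormalized` models the old weight-adjusting behavior and `blur_extended` models the new boundary-extension behavior.

```python
import numpy as np

def blur_extended(row, radius=1):
    # New behavior: out-of-bounds pixels take the closest boundary value
    # (numpy's "edge" padding), then a full-weight box blur is applied.
    padded = np.pad(row, radius, mode="edge")
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(padded, kernel, mode="valid")

def blur_renormalized(row, radius=1):
    # Old behavior: out-of-bounds pixels are ignored and the remaining
    # weights are renormalized over the pixels that do exist.
    n = len(row)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out[i] = row[lo:hi].mean()
    return out

row = np.array([0.0, 1.0, 1.0, 1.0])
print(blur_extended(row))      # boundary pixel pulled toward the edge value
print(blur_renormalized(row))  # boundary pixel averaged over fewer samples
```

The two results differ only near the boundary: extension biases edge pixels toward the repeated boundary color, while renormalization averages over fewer samples.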

Improved Render Compositor

The render compositor has been rewritten to improve performance, often making it several times faster than before.

The compositor currently runs on the CPU, with GPU acceleration coming soon.

There are some changes in behavior. These were made so that the final render and 3D viewport compositor work consistently, and can run efficiently on the GPU. Manual adjustments might be necessary to get the same result as before.

This mainly affects canvas and single-value handling: the new implementation follows strict left-to-right node evaluation, including canvas detection. As a result, relative transforms behave differently in some cases, and single-value inputs (such as an RGB input) cannot be transformed as images.
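A toy model of the rule (hypothetical classes, not Blender's API): in strict left-to-right evaluation, every image output carries a canvas, while a single value carries none, so a transform node has nothing to transform.

```python
class Image:
    """Stand-in for an image socket value carrying a canvas."""
    def __init__(self, width, height, color):
        self.canvas = (width, height)
        self.color = color

class SingleValue:
    """Stand-in for a single-value socket, e.g. an RGB input."""
    def __init__(self, color):
        self.canvas = None  # single values carry no canvas
        self.color = color

def scale(value, factor):
    # Transforms apply only to values that have a canvas; single
    # values pass through unchanged.
    if value.canvas is None:
        return value
    w, h = value.canvas
    return Image(int(w * factor), int(h * factor), value.color)

img = scale(Image(512, 512, (1, 0, 0)), 0.5)
rgb = scale(SingleValue((1, 0, 0)), 0.5)
print(img.canvas)  # (256, 256)
print(rgb.canvas)  # None: the single value cannot be transformed as an image
```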

Each of the following sections describes a difference in behavior that might affect the results of old compositing setups.


Transforms are now applied immediately at the transform nodes, meaning that scaling down an image destructively reduces its resolution. In the old compositor, transformations were delayed until it was necessary to apply them. This is demonstrated in the following example, where an image is scaled down and then scaled up again. Since the scaling was delayed in the old compositor, the image didn't lose any information, while in the new compositor the image becomes pixelated.
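The destructive down-scale can be sketched in numpy (an illustration of the principle, not the compositor's code): once the scale is realized immediately, scaling back up cannot recover the discarded pixels.

```python
import numpy as np

def scale_nearest(img, factor):
    # Nearest-neighbour rescale that is realized immediately, like the
    # new compositor's transform nodes: data outside the sampled grid
    # is discarded on the spot.
    h, w = img.shape
    ys = (np.arange(int(h * factor)) / factor).astype(int)
    xs = (np.arange(int(w * factor)) / factor).astype(int)
    return img[np.ix_(ys, xs)]

img = np.arange(16, dtype=float).reshape(4, 4)
round_trip = scale_nearest(scale_nearest(img, 0.5), 2.0)
# Half the pixels collapse into duplicates: the detail lost at the
# down-scale step is gone by the time the up-scale runs.
print(round_trip)
```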




The Wrapping option in the Translate node can no longer be used to repeat a pattern. It now operates solely in a clip-on-one-side, wrap-on-the-other-side manner. This is demonstrated in the following example, where an image has Wrapping enabled and is then scaled down. In the old compositor, the wrapping was delayed until the scale-down node, producing repetitions. In the new compositor, wrapping only affects the Translate node, which does nothing here, so the image is unaffected.
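A 1D numpy sketch of what Wrapping now does at the Translate node itself (illustrative only, not the actual implementation): pixels pushed off one side re-enter on the other exactly once, rather than the pattern being tiled later in the chain.

```python
import numpy as np

def translate(row, offset, wrap=False):
    # Translate a 1D row of pixels. With wrapping, pixels shifted off
    # one side wrap around to the other; without it, they are clipped
    # and the vacated pixels read as zero.
    if wrap:
        return np.roll(row, offset)
    out = np.zeros_like(row)
    if offset >= 0:
        out[offset:] = row[: len(row) - offset]
    else:
        out[:offset] = row[-offset:]
    return out

row = np.array([1.0, 2.0, 3.0, 4.0])
print(translate(row, 1, wrap=True))   # [4. 1. 2. 3.]
print(translate(row, 1, wrap=False))  # [0. 1. 2. 3.]
```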




This functionality will be restored in the future in a more explicit form.

Size Inference

The old compositor tried to infer an image size from upstream/output nodes, while the new compositor evaluates the node tree from left to right without inferring image sizes from upstream nodes. The new behavior is more predictable, but some tricks that were previously possible are no longer valid. This is demonstrated in the following example. In the old compositor, the Translate node inferred its size as the size of the mandrill image, so it was as if the node received a full image of that size filled with a red color. Translating that image down left a black area at the top, which, when blurred, created a gradient.

The new compositor, on the other hand, treats the Translate node as receiving a single red color as input, to which no transformation applies. Similarly, the Blur node treats its input as a single red value and passes it through without blurring. Finally, when the color Mix node receives a single red color, it mixes it with the entire mandrill image, as most nodes do.
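That chain can be sketched as follows (hypothetical helper names, not Blender code): a single value stays a single value through transform and filter nodes, and only spreads over a canvas when mixed with a real image.

```python
def is_single_value(v):
    # Model: images are lists of pixels; anything else is a single value.
    return not isinstance(v, list)

def translate(v, offset):
    # A single value has no canvas, so translating it is a no-op.
    if is_single_value(v):
        return v
    return [0.0] * offset + v[: len(v) - offset]

def blur(v):
    # A single value is passed through without blurring.
    if is_single_value(v):
        return v
    return [sum(v[max(0, i - 1):i + 2]) / len(v[max(0, i - 1):i + 2])
            for i in range(len(v))]

def mix(a, image, factor=0.5):
    # A single value is mixed uniformly over the image's entire canvas.
    a_row = [a] * len(image) if is_single_value(a) else a
    return [(1 - factor) * x + factor * y for x, y in zip(a_row, image)]

red = 1.0                      # stand-in for an RGB Input single value
image = [0.0, 0.2, 0.4, 0.6]   # stand-in for the mandrill image
result = mix(blur(translate(red, 2)), image)
print(result)
```

Translate and Blur leave the constant untouched; only the Mix node produces per-pixel output, covering the whole image.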




The old compositor clipped images if their inferred size was smaller than their actual size, while the new compositor does no size inference and consequently does not clip images. This is demonstrated in the following example.



Sampling Space

The old compositor sampled images in their aforementioned inferred space, which could be clipped if the inferred size was smaller than the actual image size. The new compositor samples images in their own space. This is demonstrated in the following example, where a scaled-up image is sampled in its entirety by using normalized [0, 1] gradients as the sampling coordinates. The old compositor produced clipped images, since the inferred size was also clipped, while the new compositor produces full images.
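Sampling in the image's own space can be sketched in numpy (illustrative only, and assuming nearest-neighbour sampling): normalized coordinates spanning [0, 1] always address the full image, never an inferred crop.

```python
import numpy as np

def sample_normalized(img, u, v):
    # Sample an image with normalized [0, 1] coordinates in the image's
    # OWN space (the new behavior). Coordinates are mapped over the full
    # resolution, so (0, 0) and (1, 1) hit opposite corners.
    h, w = img.shape
    y = np.clip(np.rint(v * (h - 1)).astype(int), 0, h - 1)
    x = np.clip(np.rint(u * (w - 1)).astype(int), 0, w - 1)
    return img[y, x]

img = np.arange(16, dtype=float).reshape(4, 4)
u, v = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
# Gradients spanning [0, 1] reproduce the full image, not a clipped crop.
print(sample_normalized(img, u, v))
```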





  • Reduced stalls when new movie clips start playing by caching FFmpeg RGB<->YUV conversion contexts (ffbc90874b) and reducing the amount of redundant work done during FFmpeg initialization (b261654a93).
  • Reduced the number of temporary image clears when rendering the VSE effect stack. (b4c6c69632)
  • The VSE already had an optimization where an Alpha Over strip that is known to be fully opaque and covers the whole screen stops the processing of all strips below it (since they would not be visible anyway). The same optimization now also applies to some strips that do not cover the whole screen: when a fully opaque strip completely covers a strip under it, the lower strip is not evaluated or rendered. (f4f708a54f)
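The occlusion test above can be modeled with a simple rectangle-containment check (hypothetical names, a simplified model rather than the VSE's actual code):

```python
from dataclasses import dataclass

@dataclass
class Strip:
    # Screen-space rectangle of the strip, plus its opacity.
    x0: float
    y0: float
    x1: float
    y1: float
    opaque: bool

def covers(a, b):
    """True if strip `a`'s rectangle fully contains strip `b`'s."""
    return a.x0 <= b.x0 and a.y0 <= b.y0 and a.x1 >= b.x1 and a.y1 >= b.y1

def strips_to_render(stack):
    # Walk the stack top-down (stack[0] is the topmost strip) and skip
    # any strip fully covered by an opaque strip above it.
    visible = []
    for i, s in enumerate(stack):
        if any(t.opaque and covers(t, s) for t in stack[:i]):
            continue  # fully occluded: no need to evaluate/render it
        visible.append(s)
    return visible

top = Strip(0.0, 0.0, 0.5, 0.5, opaque=True)
hidden = Strip(0.1, 0.1, 0.4, 0.4, opaque=False)
partly = Strip(0.4, 0.4, 0.9, 0.9, opaque=False)
print(len(strips_to_render([top, hidden, partly])))  # 2: `hidden` is culled
```

A partly covered strip still renders; only strips whose whole rectangle sits under an opaque one are skipped.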


  • "AVI RAW" and "AVI JPEG" movie file output types are removed. Existing scenes using them will be updated to "FFmpeg Video" type with default options (H.264). (f09c7dc4ba)