This is a design doc for the compositor.
Sample based: By adjusting the number of samples the artist can trade speed against quality at any given moment. When speed is needed the artist can lower the number of samples and get fast feedback. When actually rendering, the number of samples can be increased for a better result.
Relative: Currently changing the resolution (or render percentage) affects the behaviour of several nodes; even the default settings of the Blur node need to be adjusted. With Relative, all parameters should be aware of the resolution they are calculated in.
PixelSize aware: Currently the compositor is fixed to the perspective and orthographic camera models. When using the panoramic cameras the blurs are not accurate. In cases like dome rendering and VR/AR a lot of trickery and workarounds are needed in order to composite correctly. When the compositor takes the actual camera data of the scene (or image) into account, it can calculate more accurately in these cases.
Canvas: Being able to put images in the compositor and align/transform them visually.
GPU support: The current compositor only supports the GPU for a certain number of nodes. Other nodes are calculated on the CPU, so huge amounts of data are loaded/unloaded, which takes a lot of resources. The new design should be able to run fully on the GPU.
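To illustrate the Relative goal, here is a minimal sketch (the function name is hypothetical, not from the actual branch) of how a resolution-relative parameter keeps its visual effect constant when the render percentage changes:

```python
def relative_to_pixels(relative_size, render_width):
    """Convert a resolution-relative parameter (a fraction of the
    image width) into an absolute size in pixels for the resolution
    the node is currently calculating in."""
    return relative_size * render_width

# A blur of 2% of the image width covers the same fraction of the
# frame at 100% and at 50% render size:
full = relative_to_pixels(0.02, 1920)   # ~38.4 px at full resolution
half = relative_to_pixels(0.02, 960)    # ~19.2 px at half resolution
assert abs(full / 1920 - half / 960) < 1e-12
```

With relative parameters the artist can drop the preview percentage for speed without the blur visually shrinking, which is exactly the adjustment the current Blur node defaults force on the user.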
Sample based compositing
In the sample based compositor the X,Y coordinate of the output image is transformed into a ray (position, direction, up-vector, sample size). This ray is evaluated against the node tree. When the ray reaches an input node (Image, Render Layers) it is transformed into that node's specific image space. The image is then sampled to a result color/value, which is passed back to the node that requested it.
Nodes will be able to alter these rays, and to select which input socket receives which ray. For example, a blur node can 'bend' the incoming ray or change its sample size.
As samples are slightly randomized, every sample hits a different part of the same pixel, which leads to sub-pixel sampling. This produces very crisp images compared to the current compositor.
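The evaluation scheme above can be sketched in a few lines of Python. All names here (Ray, ImageNode, BlurNode, composite) are hypothetical illustrations of the idea, not the actual implementation in the branch:

```python
import random
from dataclasses import dataclass

@dataclass
class Ray:
    position: tuple    # where the ray starts (output-image X, Y)
    direction: tuple   # where it points
    up: tuple          # orientation, needed for anisotropic filtering
    samplesize: float  # footprint of a single sample

class ImageNode:
    """Input node: transforms the ray into image space and samples it."""
    def __init__(self, pixels, width, height):
        self.pixels, self.width, self.height = pixels, width, height

    def evaluate(self, ray):
        # Transform to image space (identity here for brevity) and add
        # a small jitter so successive samples hit different parts of
        # the same pixel -> sub-pixel sampling.
        x = ray.position[0] + random.uniform(-0.5, 0.5) * ray.samplesize
        y = ray.position[1] + random.uniform(-0.5, 0.5) * ray.samplesize
        xi = min(max(int(x), 0), self.width - 1)
        yi = min(max(int(y), 0), self.height - 1)
        return self.pixels[yi][xi]

class BlurNode:
    """Alters the incoming ray: 'bends' it by a random offset within
    the blur radius before forwarding it to its input socket."""
    def __init__(self, input_node, radius):
        self.input, self.radius = input_node, radius

    def evaluate(self, ray):
        bent = Ray(
            (ray.position[0] + random.uniform(-self.radius, self.radius),
             ray.position[1] + random.uniform(-self.radius, self.radius)),
            ray.direction, ray.up, ray.samplesize)
        return self.input.evaluate(bent)

def composite(node, x, y, samples):
    """Average `samples` evaluations of one output pixel."""
    values = [node.evaluate(Ray((x, y), (0, 0, -1), (0, 1, 0), 1.0))
              for _ in range(samples)]
    return sum(values) / len(values)
```

Note how the speed/quality trade-off falls out naturally: `composite(node, x, y, 4)` gives fast, noisy feedback while working, and the same tree rendered with hundreds of samples converges to a clean result.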
Viewports and filtering
At all buffers, like input/output images and Render Layers nodes, the user will be able to select a viewport, which identifies where the image is in the scene and what kind of camera it was created with (e.g. planar vs spherical). These viewports can be added in the 3D scene. But for canvas compositing we will also allow these viewports to be visible in the backdrop of the node editor (feature: canvas compositing).
Using viewports it will be easier to composite planes into your scene when the camera is moving.
Also, at all input Image/Render Layers nodes the filter (nearest, linear, cubic, smart and others) and the clipping mode (Clip, Extend, Repeat) can be selected; these will be used when sampling the image.
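The clipping modes matter because a bent ray can land outside the image. A minimal sketch of what the three modes would do with an out-of-range coordinate (the helper name is mine, not from the branch):

```python
def wrap(i, n, mode):
    """Map an integer coordinate that may fall outside [0, n) according
    to the clipping mode selected on the input node."""
    if 0 <= i < n:
        return i
    if mode == 'Extend':             # clamp to the nearest edge pixel
        return min(max(i, 0), n - 1)
    if mode == 'Repeat':             # tile the image
        return i % n
    return None                      # 'Clip': treat outside as transparent

assert wrap(-3, 10, 'Extend') == 0
assert wrap(12, 10, 'Repeat') == 2
assert wrap(-1, 10, 'Clip') is None
```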
Normally compositors are image based, and many algorithms are also designed for image based compositors. As this design is totally different, there are risks: will we be able to implement all current features in this new architecture? IMO we will get to 95% of the old features, but some features are very hard to implement (or will come with huge penalties). Should we implement this as a second compositor and let the user decide which compositor to use?
Discuss/Approve this design
Find support for this design
Bitbucket source repository: https://bitbucket.org/atmind/blender-compositor-2016 branch compositor-2016
Nodes that have an implementation (not guaranteed to be 100% the same):
Viewer, RGB, Value, Mix, AlphaOver, Image (no OpenEXR), Render layers, Movieclip, Blur (only relative and bokeh), Color Matte, Chroma matte, Math, Value to Color, Value to Vector, Vector to Value, Color to Value, Color to Vector, RGB to BW, Separate RGBA/HSVA/YUVA, Combine RGBA/HSVA/YUVA, Hue Sat, Bright contrast, Gamma, Color balance, Color spill, Set Alpha, Channel matte, Difference Matte, Distance Matte, Luma Matte
Note: this is still very much WIP (e.g. it crashes a lot); it is intended as a proof of concept.
Currently the rays have no width, hence many samples are needed when blurring. In the actual implementation I want to create cone-shaped rays, which will need fewer samples to get to better results. The difficulty is the need for a mask to support elliptical blurs.
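To illustrate why cone-shaped rays reduce the sample count, here is a small sketch (the helper is hypothetical, not from the branch): a cone's footprint widens with distance, so one cone sample can cover an area that would otherwise need many zero-width rays.

```python
import math

def cone_footprint(samplesize, distance, half_angle):
    """Radius covered by a cone-shaped ray at a given distance: it
    starts at the sample size and widens with the cone's half-angle."""
    return samplesize + distance * math.tan(half_angle)

# A zero-angle cone behaves like today's width-less rays:
assert cone_footprint(1.0, 10.0, 0.0) == 1.0
# A cone opened to a half-angle of atan(0.5) covers radius 6 at distance 10,
# i.e. roughly the area of ~36 unit-footprint samples in one evaluation:
assert math.isclose(cone_footprint(1.0, 10.0, math.atan(0.5)), 6.0)
```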