
New compositor node "Sun Beams"
ClosedPublic

Authored by Campbell Barton (campbellbarton) on Jul 17 2014, 3:46 PM.

Details

Summary

This allows adding a "fake" sun beam effect, simulating crepuscular rays from light being scattered in a medium like the atmosphere or deep water. Such effects can also be created by renderers using
volumetric lighting, but the compositor feature is a lot cheaper and is independent of 3D rendering. This makes it ideally suited for motion graphics.

The implementation uses an optimized accumulation method for gathering color values along a line segment. The inner buffer loop uses fixed offset increments to avoid unnecessary multiplications and
avoids variables by using compile-time specialization (see inline comments for further details).

More optimization could be achieved by sparser sampling (quality steps) and by utilizing SSE instructions. Currently no antialiasing is done. A number of usability features could be added as well, such
as angle-based blurring (randomizing ray angles) and different blend modes.

Diff Detail

Repository
rB Blender

Event Timeline

Lukas Toenne (lukastoenne) updated this revision. Jul 17 2014, 4:52 PM

Use quadratic instead of linear falloff for rays.

This is (probably) more physically accurate, as far as accuracy is a
concern with this effect.

Eventually a customizable falloff exponent can be added as a user option.

Code-wise it seems fine, but I'm not really sure how to use it?

source/blender/compositor/operations/COM_SunBeamsOperation.cpp
93

Remove this as dead code?

142

Just TODO. XXX is only for real dirty crap which is to be fixed ASAP.

199

Love ascii art!

source/blender/makesrna/intern/rna_nodetree.c
6201

Should we restrict it to positive values only?

Lukas Toenne (lukastoenne) updated this revision. Jul 18 2014, 10:04 AM

Addressed a few minor review points:

  • Removed dead code for sin_phi
  • XXX comment is really just an optional TODO
  • ray_length is now unsigned to avoid values < 0

@Sergey Sharybin (sergey): Here is a little demo file. It's a typical interior window picture with fake sun streaks added. The light "source" gets extracted quickly with some color keying (in a more serious shot one would probably employ masking). The white window light gets a yellowish tint before creating rays. Finally the original is overlaid with the sun beams, using Add mode because it's basically additional light.

source/blender/makesrna/intern/rna_nodetree.c
6201

Yes. Negative ray length might give some interesting effects, but is not so useful.

Lukas Toenne (lukastoenne) updated this revision to Diff 2176.

Fix for the normalization factor after accumulation.

This was causing an artifact on the borders toward the source, where
the normalization has to use the actual number of samples tot instead
of the expected number num.

Note: Physically speaking the normalization factor is not necessary,
because it simulates added light from scattering in the medium. However,
for the purpose of compositing this would create overly bright images
which are not really usable without extreme brightness reduction.
The factor at the end reduces the brightness such that every pixel is
at most as bright as the brightest source pixel in the ray, which is
much more pleasant to work with.

Lukas Toenne (lukastoenne) updated this revision to Diff 2184.

Use pixel coordinates for the source vector instead of relative factors.

This makes it consistent with the ray length property, which is also in
pixels.

This looks great! Very useful thanks :)

Just a UI/UX note (I've only seen this image, haven't tried it out), I think it'd be more natural to control an Angle Offset (like the Glare node) instead of an arbitrary coordinate/vector. It'd also of course be useful to be able to control the ray length and angle using a texture, though this could be added later like the falloff mode.

I think it'd be more natural to control an Angle Offset (like the Glare node) instead of an arbitrary coordinate/vector. It'd also of course be useful to be able to control the ray length and angle using a texture, though this could be added later like the falloff mode.

I disagree. The angle offset only results in parallel rays. You could do this pretty easily just using a directional blur. The single coordinate point is an origin point the rays fan out from. Using only an angle wouldn't let you achieve images where the origin is in frame.
https://www.dropbox.com/s/5bqc4vnncoo8glt/sunbeams_screenshot_02.png

Other notes:

Is it completely out of the question to have an origin point visualized to drag around? Adjusting the sliders works, of course, but it's not nearly as ideal as simply dragging and positioning an origin point somewhere. I'm guessing there are bigger issues with the canvas and on-screen transform icons that prevent this from being possible, or else we'd already have on-screen widgets for transform tools. Not a big deal, just asking. :)

Would it be possible to have value input sockets for the X and Y values? That way we could plug in a tracker node if we wanted. It'd be easy to track some clouds in moving footage, for example, then use some math nodes to multiply the values and slide them off the canvas.

Ah right, of course, misunderstood sorry :)

I agree, hype, it would be nice to have input sockets for the origin location, so it could be controlled with an empty or lamp location. (I guess you would have to use drivers, but it would also be nice if there were an easier way to get object transforms into node setups..)

Controls/handles for geometric compositor properties in the backdrop are something I'm looking into (some initial drawing and selection code exists). But I would rather implement this independently as a general feature, and support a number of other nodes with size/rect/vector properties. The idea of a "canvas" is closely related, and some problems still need to be solved (how do we handle resolutions in node groups?).

Turning properties into input sockets is a nice idea in principle, but there are technical limitations to deal with. Using a socket generally means that a value can suddenly be defined per pixel rather than being a constant (i.e. pixel-invariant). It would still be nice to have this working for things like math nodes and for exposing values from groups, though. The source point, for example, is necessarily invariant w.r.t. pixels as a defining feature of the algorithm. The lack of invariance can be worked around with some clumsy code constructs, but eventually this requires an extension of the compositor design.

Ray Length should also get a factor input per pixel to give more control. A typical case where this would be handy is crepuscular rays (aka "god rays") from cloud patterns. These are affected by perspective and should be shorter in the distance, but currently all rays are the same length. Doing this efficiently is possible, but may require a few changes to the compositor code.

Ray Length should also get a factor input per pixel to give more control. A typical case where this would be handy is crepuscular rays (aka "god rays") from cloud patterns. These are affected by perspective and should be shorter in the distance, but currently all rays are the same length. Doing this efficiently is possible, but may require a few changes to the compositor code.

How would a factor input be able to control ray length based on perceived distance? I love the idea, but I can't imagine what you would plug in that could tell the rays where the front or back of the image is, and how long rays should be at each point and in between. Using 3d, it'd be easy with z depth pass, but using shot footage?

Yes, a Z pass would be a natural choice to generate this info. If the image is not rendered you can still generate a fake Z pass in many cases, e.g. by using a simple plane for the sky. The node could also allow for a direct Z input to make it simpler (saving the extra division/offset steps).

Shouldn't the source and ray_length values be defined in a way which works with differently scaled images?

  • Source can be a relative vector.
  • Ray length can be a % of the image.

It's quite common to do low-resolution renders, then use double the resolution or so for the final result. I know pixel values are used in some parts of the compositor, but in this case it seems reasonable to use a method that can give similar results with a scaled image?

source/blender/compositor/operations/COM_SunBeamsOperation.cpp
53

*picky* spaces around operators.

Attached a modified version of the patch where the values x/y and ray_length are relative.

Sometimes there are stray beams coming in from something not apparent in the input image.

For example, in the video I made, you can see at 00:18, in the bottom left corner, there is a glow from something.
https://vimeo.com/101670787

Starting around 1:32, you can see the glow moving along the bottom of the frame as I reposition the source point. Especially at 1:48, there's a little burst above the main window image.

It shows up a few other times throughout the video.

Any idea what that artifact is from? It's easy to roto out, but I don't think it should be there in the first place.

Updated patch to use relative input; also use a slider for ray length.

Fix for lukas - large rays