
Cycles: Add Blackman-Harris filter function

Authored by Lukas Stockner (lukasstockner97) on Aug 6 2015, 9:11 PM.



Trying out other filter functions had been on my todo list for some time, and after Blackman-Harris was mentioned on the mailing list today, I decided to give it a try.
The code is based on the Wikipedia page, but I'd consider the function well-known enough to go without a source in the code.
Multiplying the width by two is necessary to get comparable results from Blackman-Harris and Gaussian at the same width (plot follows; this is uploaded with arc).
Improvements aren't huge, but they're definitely visible in test images with a lot of high-frequency detail (those will also follow below).
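For reference, the 4-term Blackman-Harris window is simple to sketch. This is a hedged illustration, not the committed Cycles code: the function name and the remapping of the offset from [-width/2, width/2] to [0, 1] are assumptions; the coefficients are the standard 4-term values from the Wikipedia page the patch references.

```python
import math

# Standard 4-term Blackman-Harris coefficients (sum to 1.0 at the peak).
A0, A1, A2, A3 = 0.35875, 0.48829, 0.14128, 0.01168

def filter_func_blackman_harris(v, width):
    """Evaluate the window at offset v from the pixel center.

    Hypothetical sketch: the offset is remapped from
    [-width/2, width/2] to [0, 1] before evaluating the cosine series,
    so the filter peaks at 1.0 in the center and falls to ~0 at the
    borders.  Exact names and scaling in the actual patch may differ.
    """
    x = v / width + 0.5  # remap to [0, 1]
    return (A0
            - A1 * math.cos(2.0 * math.pi * x)
            + A2 * math.cos(4.0 * math.pi * x)
            - A3 * math.cos(6.0 * math.pi * x))
```

Unlike the Gaussian, the window reaches (almost exactly) zero at the filter border, so no clamping is needed.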

Diff Detail

rB Blender

Event Timeline

Lukas Stockner (lukasstockner97) retitled this revision from to Cycles: Add Blackman-Harris filter function.
Sergey Sharybin (sergey) added inline comments.

Isn't this something we can compensate in the filter function?

Here's the plot of the Gaussian (blue) and Blackman-Harris (green) filters:

The fit could be made more exact with a width factor slightly below 2, but I don't expect it to really matter.

Test renders with the filters, all at 1024 spp:

Box filter:

Gaussian filter:

Blackman-Harris filter:

Another test, original by @Matt Heimlich (m9105826) (modified for contrast):

Box filter:

Gaussian filter:

Blackman-Harris filter:


We could, but it's also used below (line 264), so it'd need an additional condition down there.

The Gaussian filter seems to be wrong, it shouldn't really be clamped. Is that something that really happens in the kernel, or is it just a plot artifact?

I just put lines 241, 242 and filter_func_gaussian into Matlab and plotted the result, so I'm not sure whether the clamping also happens in the kernel.
In the end, the Gaussian must be clamped somehow because of its infinite extent. One possibility is to subtract the value at the filter border to produce a smooth roll-off to zero, but I don't know whether Cycles does that.
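That border-subtraction idea can be sketched in a few lines. The exp(-2v²/w²) parametrization is an assumption for illustration, not necessarily Cycles' actual Gaussian:

```python
import math

def gaussian(v, width):
    # Assumed parametrization: exp(-2 v^2 / width^2).  The constant in
    # Cycles' kernel may differ; only the roll-off idea matters here.
    return math.exp(-2.0 * v * v / (width * width))

def gaussian_rolloff(v, width):
    """Truncated Gaussian with the border value subtracted, so the
    filter falls smoothly to exactly zero at |v| = width/2 instead of
    being cut off with a step."""
    r = 0.5 * width
    if abs(v) > r:
        return 0.0
    return gaussian(v, width) - gaussian(r, width)
```

The subtraction slightly lowers the whole curve, but after normalization the shape is nearly identical while the discontinuity at the border is gone.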

@Brecht Van Lommel (brecht) might correct me if I'm wrong, but usually it makes sense to scale the Gaussian bell so that the range [-3*sigma, 3*sigma] fully fits within the filter width. Such a clamped filter is more or less asking for problems.

So before making a comparison with Blackman-Harris, we should make sure a proper Gaussian bell is used.

I'm also not totally sure why you need that width multiplication. It's like you're forcing the filter width to be 2x wider?

I didn't intend to clamp the Gaussian filter, so if that's the case it should be corrected.

With filter importance sampling there isn't really a need to clamp, though it's probably still a good idea to limit the actual width. 3*sigma might be fine, could be bigger too if it helps, it shouldn't really affect performance. You'd just want some reasonable limit to avoid rare samples that are very far away from the image plane, which might be bad for things like OIIO or OpenVDB tile caches.

Also, the antialiasing in the first Blackman-Harris render looks way better to me. I'd consider making that the default pixel filter.

Tested the sponza scene out of curiosity:

400% zoom (click for full res):

Seems Blackman-Harris does pretty well; it looks a bit better than Gaussian.

At this point Blackman-Harris indeed seems to be better, but that might just be because of the clamped Gaussian bell. That being said, two things to be investigated:

  • Multiplying the width by 2 still seems weird to me; why exactly is it needed? Is it just to make the B-H bell width match the Gaussian's more closely? Other than that, the patch LGTM.
  • Is anyone looking into the clamped bell issue with the Gaussian?

There is no deep theoretical reason behind the 2. I just found that B-H with a width of 1.5 looks totally different from a Gaussian with 1.5, which is a bit weird if you just want to try it out: changing filters shouldn't drastically change the image. Probably something like 1.9 would be better (judging from the plot), but the question is whether throwing another magic number into the filter is worth it.

I can also prepare a patch for the Gaussian filter. My proposal would be to extend the interval over which the filter is evaluated internally to 3 times the current value, of course without scaling the v parameter with it. I'd also subtract the value at the border of the interval to get a smooth fall-off to zero instead of an abrupt cutoff.
However: There is a problem here, which is that the resulting lookup table will look something like the inverse error function, because for Importance Sampling we're actually inverting the integral of the filter.
Obviously, this gives problems with the currently used linear interpolation of 256 values. Actually, this problem already appears for B-H as well. Consider this plot of randomly sampled pixel offsets:

I actually considered this to be a bug at first, but the actual problem is that the whole range from -1 to ~ -0.5 is just the linear interpolation from table[0] to table[1]. Therefore, two suggestions:

  • Increase the size of the lookup table to something like 2048 entries. That's still only 8 KB, no real problem for caches.
  • Do higher-order interpolation. This is only done twice per sample, so using e.g. cubic interpolation won't really hurt performance here.
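The tabulate-and-invert scheme under discussion can be illustrated with a small sketch. Names like build_cdf_table and the trapezoidal tabulation are assumptions for illustration, not Cycles' actual filter_table code:

```python
import math

# Illustrative 4-term Blackman-Harris filter, remapped so that
# v in [-width/2, width/2] covers one period of the window.
def bh(v, width):
    x = v / width + 0.5
    return (0.35875 - 0.48829 * math.cos(2 * math.pi * x)
            + 0.14128 * math.cos(4 * math.pi * x)
            - 0.01168 * math.cos(6 * math.pi * x))

def build_cdf_table(filter_func, width, n):
    """Tabulate the normalized CDF of a filter over [-width/2, width/2]
    (hypothetical stand-in for Cycles' filter_table_cdf)."""
    xs = [-0.5 * width + width * i / (n - 1) for i in range(n)]
    cdf = [0.0]
    for i in range(1, n):  # trapezoidal running integral
        step = (xs[i] - xs[i - 1]) * 0.5 * (
            filter_func(xs[i - 1], width) + filter_func(xs[i], width))
        cdf.append(cdf[-1] + step)
    total = cdf[-1]
    return xs, [c / total for c in cdf]

def sample_offset(xs, cdf, u):
    """Invert the tabulated CDF for u in [0, 1) by binary search plus
    linear interpolation.  With a coarse table, the nearly flat tails
    of the filter make this interpolation visibly inaccurate near the
    borders -- exactly the artifact described above."""
    lo, hi = 0, len(cdf) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cdf[mid] <= u:
            lo = mid
        else:
            hi = mid
    span = cdf[hi] - cdf[lo]
    t = 0.0 if span == 0.0 else (u - cdf[lo]) / span
    return xs[lo] + t * (xs[hi] - xs[lo])
```

Raising n sharpens the inversion in the tails without touching anything else, which corresponds to the first suggestion; higher-order interpolation in sample_offset would be the second.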

Does making filter_table_cdf bigger but leaving filter_table the same size help? I expect that once the CDF has been inverted the error from linear interpolation is negligible.

Imagine you take 256x256 stratified jittered AA samples, with each sample ending up in a random position in each bin. A bigger table wouldn't make a noticeable difference even for such a large amount of samples.

Hm, that's true, the borders probably look worse than they actually are (the borders of the 256² space are only ~1.5% of the samples).
Making only filter_table_cdf bigger should already help with precision, I'll try that tomorrow.

(Edit: German Autocorrect :/ )
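The "~1.5% of the samples" estimate is easy to sanity-check: in a 256x256 stratified grid, count the bins that touch the outer edge. A back-of-the-envelope sketch (illustrative only):

```python
# Fraction of bins on the border of a 256x256 stratified sample grid.
N = 256
border_bins = N * N - (N - 2) * (N - 2)  # bins touching the outer edge
border_fraction = border_bins / (N * N)
print(border_bins, round(border_fraction * 100, 2))  # 1020 bins, ~1.56%
```

So the poorly interpolated borders affect only on the order of one in sixty-four samples, matching the ~1.5% figure.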

Any updates here? I think we can add the Blackman-Harris filter patch.

This revision was automatically updated to reflect the committed changes.