- User Since: Dec 26 2013, 7:53 PM (225 w, 6 h)
Tue, Apr 3
Okay, so it turns out that this is caused by one of the pointers in the NLM kernels not being aligned, but it's actually already fixed in the current master.
Sun, Apr 1
@Brecht Van Lommel (brecht) I can reproduce the crash using the 32-bit Windows build of 2.79b with Wine; I'll reboot to Windows and take a look.
Fri, Mar 23
Mar 17 2018
As far as this patch is concerned, I definitely like the tile approach, but the problem with it is the seam merging I mentioned as a ToDo.
Mar 16 2018
Mar 15 2018
Thanks for the quick review, should all be fixed.
Mar 14 2018
Okay, here's an updated version with arc.
Mar 12 2018
Ah, right, sorry - this patch is based on the master branch, not 2.8; I'll upload with arc once I'm home.
Mar 11 2018
Mar 10 2018
Jan 17 2018
I'm not certain which code version you are referring to; both this commit and the latest master use atomic operations in the accumulators:
Jan 14 2018
Dec 21 2017
Dec 6 2017
Dec 5 2017
@mathieu menuet (bliblubli) Doing denoising on another device might work, but GPUs are actually much better at denoising, so if anything I'd want to make it work the other way around (denoising on GPU even if you're using CPU rendering).
Dec 3 2017
Nov 30 2017
Nov 21 2017
Nov 18 2017
I think the main confusion is what one set of images consists of - in Substance, afaik all 48 images are considered one texture. However, this patch doesn't care about BaseColor etc. - its only purpose is having the 1001, 1002 etc. images in one node.
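For context, the 1001, 1002, etc. numbers follow the standard UDIM convention, which encodes a tile's position in the UV grid; here is a minimal sketch of that mapping (my own illustration, not code from this patch):

```python
def udim_tile(u, v):
    """Return the UDIM tile number for integer tile coordinates (u, v).

    Columns run 0-9 left to right, rows count upward; tile (0, 0) is 1001.
    """
    if not 0 <= u <= 9:
        raise ValueError("UDIM columns only span 10 tiles")
    return 1001 + u + 10 * v

# udim_tile(0, 0) -> 1001, udim_tile(1, 0) -> 1002, udim_tile(0, 1) -> 1011
```

So the node only needs the tile number to know where each image sits in UV space, regardless of whether it's a BaseColor, Roughness, or any other map.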
Nov 17 2017
Committed as rB119846a6bb36.
Now that we have D2810, here's a new version containing only the Mikktspace changes.
Abandoned in favor of D2921.
Forgot to register the pass in engine.py
Removed film.cpp macro and renamed the pass to Volume.
Nov 14 2017
Okay, here's the better approach - no categories needed, the PASS_ enum now just stores indices and a macro turns them into bitflags.
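The index-to-bitflag pattern described above can be sketched like this (pass names here are hypothetical, not the actual PASS_ enum):

```python
from enum import IntEnum

class Pass(IntEnum):
    # hypothetical pass indices; the real PASS_ enum stores indices the same way
    COMBINED = 0
    DEPTH = 1
    NORMAL = 2

def pass_flag(p):
    # the macro's job: turn a pass index into its bitflag
    return 1 << p

# flags for several passes can then be combined into one bitmask
used_passes = pass_flag(Pass.COMBINED) | pass_flag(Pass.NORMAL)
```

Storing indices in the enum keeps it compact, while the derived bitflags still allow cheap "is this pass enabled" mask tests.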
Nov 5 2017
Oct 29 2017
Splitting by use_light_pass makes sense, yeah. Maybe also add a separate category for debug passes? Not really needed right now, though.
Sep 20 2017
Sep 16 2017
This behaviour is expected; unfortunately, more memory is needed during rendering when you use denoising.
There are three reasons for that:
Aug 31 2017
I agree that the message is misleading, but a simple +1 is not really the correct answer. The number that's currently displayed is the number of finished tiles, so it actually makes sense that it starts as 0. Just doing a +1 is wrong since there will be multiple tiles active at once.
Doing the cleanup first will actually increase the number of kernel recompiles - due to shader optimization, you could end up with different node groups depending on mix factors etc., which means you'd get annoying rebuilds while tweaking shaders. That's also a problem with attributes and resyncing, which is why those are only evaluated after optimization when doing an F12 render, and afaik the same should already be happening for node groups.
Aug 27 2017
This is caused by kernel_volume_shadow, which incorrectly uses the FLT_MAX ray distance from the sun sampling when applying the absorption in the glass volume - and for a distance of ~1e38, it makes sense that even the slightest absorption would cause the color to be saturated.
Aug 25 2017
Yeah, that approach is definitely better - especially multithreading as much as possible is extremely important.
Should be fixed with rBf9a3d01452, thanks for the report.
Aug 24 2017
Aug 22 2017
Yes, I'm not sure why glossy indirect rays would be disabled but diffuse indirect rays aren't.
Aug 21 2017
Hm, yes, there are many cases where packet traversal algorithms could be beneficial - Bevel and SSS of course, but also possibly shadow rays with Sample All and the first bounce with branched path tracing.
It might be useful to add an option for skipping the beveling when the position is more than one radius away from any edge.
Doing so would remove bevels between objects, but probably give a significant speed boost for small bevels on hard surfaces, which is the main use case for this node I guess.
Aug 17 2017
Thanks for the info, should be fixed in rB5492d2cb6738.
Seems like this was resolved, so I'll just go ahead and close it. If there's still an issue, feel free to reopen.
Aug 15 2017
Aug 9 2017
Ah, okay, makes sense - I'll do so next time :)
The problem here is the diffuse/specular heuristic. The floor is classified as specular, so the feature passes are written based on the secondary bounce and therefore preserve reflections.
The brick on the other hand is considered diffuse, so the normal/albedo/depth passes contain no information about the white object.
At least for me, this commit breaks node previews (for example in the Image Node).
Aug 8 2017
Can't reproduce with latest master on Linux with a GTX 780; this might be a Windows driver issue.
Aug 2 2017
I didn't check the whole code yet, but there actually is a reason for the horizontal sum: The loop above it processes four pixels at a time. Therefore, after the loop, each element of feature_means contains the sum of one quarter of the pixels. So, by taking the horizontal sum, we end up with the sum of all pixels in every element.
Actually, the same argument also applies to the max(hmax(...)) that would be removed by this patch - we want the maximum of all pixels, not the maximum of each quarter.
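The four-lane accumulation described above can be sketched in NumPy (illustrative only; the actual code uses SSE vectors):

```python
import numpy as np

def total_and_max(pixels):
    # Process four pixels at a time: each lane accumulates over one
    # quarter of the pixels, just like the SIMD loop does.
    lanes = pixels.reshape(-1, 4)
    lane_sums = lanes.sum(axis=0)   # four partial sums, one per lane
    lane_maxs = lanes.max(axis=0)   # four partial maxima, one per lane
    # The horizontal reductions collapse the lanes, giving the sum/max
    # over *all* pixels rather than per quarter.
    return lane_sums.sum(), lane_maxs.max()
```

Dropping either horizontal reduction would leave per-quarter results in the lanes instead of the whole-image sum or maximum.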
Jul 27 2017
Added the dependency, the viewport updates correctly now.
Jul 26 2017
This is obsolete since denoising is in master.
Right, adding a default option is much better.
Only downside is that it's longer, so I replaced the enum buttons with a dropdown.
Both render engines now share the setting and the initial value is determined based on the DPI setting.
Jul 3 2017
Sorry for ignoring this report for so long...
These fireflies aren't really a bug; they're just a shortcoming of the stochastic evaluation used by the MultiGGX model.
Fixed with rB15fd758bd632, thanks for the report!
Jul 1 2017
Jun 28 2017
Jun 26 2017
Jun 22 2017
Jun 19 2017
And here's a quick test render, credit for the scene goes to Rhys on BlenderArtists.
Here's a simplified version of the patch that still gives the same improvement.
Jun 18 2017
Okay, some more testing clearly shows that the approximate PDF causes the issue.
Okay, so a bit of testing showed that this is actually not caused by the Principled BSDF at all, it's already a problem in 2.78: