Thu, Feb 14
Looks like this is caused by adding a vanilla "Viewport" viewlayer and making it the first in the list here
(special case in do_version_layers_to_collections() if there are layer overrides...)
/* Make it first in the list. */
BLI_remlink(&scene->view_layers, view_layer);
BLI_addhead(&scene->view_layers, view_layer);
Wed, Feb 13
Nice tests @Malte S (pandrodor). I also had issues with the Albedo pass being in the wrong range for some objects in my scenes, and they turned white. I used the same fix as you did, but isn't the Mix node unnecessary? I got the same results with just the Greater Than node and the Color Subtract.
Tue, Feb 12
OIDN does work with Animations and it also works with high sample renders. The bug I showed earlier was rendered with just 4 samples just to make the point very clear.
This setup is a simple fix for the wrong “Denoising Albedo” pass:
@derek barker (lordodin) You should try this setup. It should fix the areas where no denoising is happening and improve the texture sharpness.
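For reference, the node setup boils down to the following per-channel logic. This is a minimal sketch in plain Python, not Blender code; the function names and the 1.0 threshold are my own, based on the range issue described above:

```python
def fix_albedo_channel(value, threshold=1.0):
    """Map an albedo channel from the broken 1-2 range back to 0-1.

    Channels already in 0-1 (e.g. metal colors) are left untouched,
    which is why a Greater Than mask is used instead of an
    unconditional subtract.
    """
    mask = 1.0 if value > threshold else 0.0  # Greater Than node
    return value - mask                       # Subtract node


def fix_albedo_pixel(rgb):
    """Apply the fix to each channel of an (R, G, B) tuple."""
    return tuple(fix_albedo_channel(c) for c in rgb)
```

This also shows why the Mix node is redundant: the Greater Than output already acts as the per-channel mask, so subtracting it directly gives the same result.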
In defense of the built-in Cycles denoiser, note that it produces markedly better results on noticeably noisy images when Branched Path Tracing is used instead of regular Path Tracing (especially in areas like highlights and reflections). There have also been improvements in recent 2.8 builds, as the pass data indicates.
I've been testing denoising the light passes instead of the combined image, and it does a significantly better job of keeping the texture data.
Mon, Feb 11
Yes, I checked in 2.8; it is still an issue. The eyedropper cannot sample from a second window and always returns black.
Can you attach a simple .blend with that setup? Don't forget to pack the image.
Sat, Feb 9
I tried this on a Theory Studios build and it is amazing!
I think I found a bug in the current implementation.
The Denoising Albedo pass contains the texture data in the value range of 1-2, resulting in blurry textures.
If I manually subtract 1 from every channel, the image is much sharper.
However, metal colors appear to be stored in the range 0-1, so simply subtracting 1 may not be the final solution.
Here is an example render with 4 Samples:
Wed, Feb 6
Is this still an issue with 2.8? https://builder.blender.org/download/
The LuxCore team is experimenting with integrating Intel OIDN. Keep in mind that their previous Bayesian denoiser over-blurred everything (and was much inferior to the Cycles one), though the test shows that OIDN can do well at higher sample counts too.
Could you please also post 256-sample results, or something like that? It would be much better at showing how inferior the Cycles denoiser is to OIDN :)
Here are some comparisons at higher sample counts, with hair and motion blur.
11_01_A.lighting.flamenco, 50% resolution, 1024 AA samples.
One thing no one is bringing up: half the time the built-in Cycles denoiser isn't worth turning on, because in the time and overhead it adds you can render twice as many samples. We recently had a 4K render take over a minute longer with the denoiser on versus 4x the samples.
Tue, Feb 5
You need to be careful when looking at tests posted in various forums - naturally, OIDN shows a big quality difference with and without the normal/albedo passes. The HDR option also makes a significant difference: when it's turned off, the denoiser is much more aggressive about blurring bright areas.
Regarding integrating the Cycles denoiser into the Compositor node: I'm not sure how we would implement that - the animation denoiser patch adds support for standalone denoising, but we'd still need to call that from Blender somehow, and adding a function to the Renderer API specifically for that seems far from ideal as well.
Right, the comparisons in that thread show that OpenImageDenoise is better at low sample counts and that makes it valuable.
Agree with @Ludvik Koutny (rawalanche)
- Fixed code style.
- Cleanup in denoiser compositing node.
It's the job of the developer submitting the new code or the users to demonstrate the new denoiser is better, not the other way around. So I would love to see some test renders with this patch.
I'm not sure whether this is just an experiment, but I am not convinced by it.
Mon, Feb 4
Mon, Jan 28
For input you can set the image color space to Non-Color. For output you can write EXR files, which will not make any changes to the data. There is no support for writing non-color data to TGA files.
So what you're getting at is that it isn't possible to store different information in R, G, B, and A channels.
In terms of VFX in the real-time industry, this is very much a needed feature, since you often need textures containing non-color data.
It appears that the application you are using to view the image is ignoring the alpha channel. If the alpha channel is taken into account the image should appear correctly.
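A quick way to check this (a minimal sketch, not tied to any particular viewer; the function name is my own): composite the pixel over a background using its alpha. If the result looks correct while the raw RGB looks wrong, the viewer was ignoring alpha.

```python
def composite_over(fg_rgba, bg_rgb):
    """Straight-alpha 'over' operation for one pixel.

    Assumes straight (unassociated) alpha. A viewer that ignores the
    alpha channel effectively displays fg_rgba's RGB values as-is,
    which is what produces the wrong-looking image.
    """
    r, g, b, a = fg_rgba
    return tuple(c * a + bc * (1.0 - a) for c, bc in zip((r, g, b), bg_rgb))
```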
Actually, this might be an issue with alpha channels.
Is there any workaround I could use for now?
This is not because of the split node (at least that didn't matter in my tests).
Sun, Jan 27
Wed, Jan 23
Tue, Jan 22
@Philipp Oeser (lichtwerk), read the comment in the patch. I think it's better to solve the issue with such a workaround for now than to leave it broken. Assigning it to you so you can fully tackle it from here on.