Additive light will occlude in compositing too. It works exactly the same way! This is a good little demo test file created by someone else that shows how the additive flames bury the background emissions.
- Alpha represents degree of occlusion.
- Emissions can happen independent of occlusion.
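Those two points can be sketched numerically. A minimal Python illustration (my own helper, not Blender code), assuming associated (premultiplied) alpha, where RGB may legitimately exceed alpha for emissive pixels:

```python
def over(fg_rgb, fg_a, bg_rgb, bg_a):
    """Porter-Duff over for associated (premultiplied) alpha."""
    rgb = tuple(f + b * (1.0 - fg_a) for f, b in zip(fg_rgb, bg_rgb))
    a = fg_a + bg_a * (1.0 - fg_a)
    return rgb, a

# Purely emissive flame pixel: adds light but occludes nothing (alpha 0).
flame = ((1.0, 0.5, 0.1), 0.0)
# Partially occluding smoke pixel: occludes 60% and also emits.
smoke = ((0.2, 0.2, 0.2), 0.6)
blue_bg = ((0.0, 0.0, 1.0), 1.0)  # opaque blue background

print(over(*flame, *blue_bg))  # background fully visible, flame light added
print(over(*smoke, *blue_bg))  # background dimmed to 40%, smoke light added
```

Note that the flame pixel passes the background through untouched while still contributing light, exactly the "emission independent of occlusion" case above.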
Sat, Jul 4
Unless you persuade him that he is wrong, I don't see this problem being solved any time soon.
Fri, Jul 3
Because that issue is essentially merged here, I suppose they point to the same error in principle.
As of 2.83.0, T67110 doesn't seem to have been fixed.
Nonetheless, the Eevee bloom not writing alpha issue is quite nasty, and I'm hoping the devs will fix it soon.
This is the one that works. It really is.
Proper handling is proper handling, and it cascades upwards.
I'm finding it manages to encode the RGB correctly (if you save to EXR anyway), but I'm getting nothing / pure black in the alpha channel, where there should be something non-zero under the fire/flames.
Sat, Jun 27
What I meant is: are there any work-arounds that make the software behave like it ought to, and be capable of rendering with a proper alpha channel?
May 29 2020
I am still unsure what result you are hoping for.
You haven't scaled your emission by the gradated shape, so some of the pixels are emitting more than they are occluding.
May 26 2020
As a simple example, I need to be able to produce such a simple gradient with transparency and to export it from Blender after creating it.
May 24 2020
I wrote a patch twice. Once back in 2007, then again in 2011.
May 23 2020
You keep saying "fix" but you are pointing at the wrong thing.
There is NO ISSUE with the APPEARANCE! The image you see in both the 3D viewport and the Image viewport as the render result is completely the same! The format you decide to export this image in doesn't matter!!! You can export whatever you need and it will be correct compared to what you see on the screen!
May 21 2020
If I open such an image generated out of Blender, the Image Viewer shows the image absolutely properly, exactly as it was generated.
Ran some tests. The viewport is very close to the proper result, and again, the Image Viewer is broken and has been forever. It's a bug, but not a high priority bug apparently, given it has been around since Blender's inception.
I’ll try to help you, but first you need to explain which of the earlier light and dark images is “correct”.
In your first image, the “darker” blue in the Image Viewer is wrong; the alpha and emissions are incorrectly rendered. That is, it should be the image on the left, because the emission is partially occluding and emitting light above and beyond the occlusion in the alpha.
May 20 2020
Please try taking the output to the compositor and use an alpha over node against black.
May 19 2020
The EXRs are not correct compared to the .png export in terms of --alpha density--.
May 13 2020
They do and are. I believe the issue here is a GPU concern.
May 12 2020
Only thing I'd add to @Brecht Van Lommel (brecht)'s comment is that 8 bit textures are always woeful.
Apr 28 2020
A fixed transform to sRGB’s display linear output also breaks with any device with different colourimetry to sRGB, including Display P3 devices.
Apr 15 2020
@Richard Antalik (ISS) Maybe push this before it needs rebasing again?
Mar 30 2020
The point I would make is that all of the Adobe PDF blend modes are hacks; there are more physically correct approaches that get the job done. For example, Dodge and Burn are simple exposure adjustments.
I think there is an argument to be made that if Overlay is supported, it should be performed by converting colors to sRGB, blending, and converting back to linear.
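A sketch of that convert-blend-convert approach in Python, assuming the standard sRGB piecewise transfer functions and the Photoshop-style Overlay formula (function names are mine, illustrative only):

```python
def linear_to_srgb(x):
    """sRGB encoding (IEC 61966-2-1 piecewise curve)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_to_linear(x):
    """Inverse of the sRGB encoding."""
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

def overlay(a, b):
    """Photoshop-style Overlay, defined on nonlinear display values."""
    return 2 * a * b if a <= 0.5 else 1 - 2 * (1 - a) * (1 - b)

def overlay_linear_inputs(base_lin, blend_lin):
    """Convert linear inputs to sRGB, blend there, convert back to linear."""
    a = linear_to_srgb(base_lin)
    b = linear_to_srgb(blend_lin)
    return srgb_to_linear(overlay(a, b))
```

The point being that Overlay was designed against display-referred values, so the blend itself happens in the nonlinear domain and only the round trip keeps the working space linear.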
Mar 29 2020
There is no such thing as “exceeding RGB range” in various situations.
Jan 17 2020
Viewer is broken. Has been for nearly two decades.
Jan 16 2020
PNGs are complete garbage. Don’t use them.
Jan 15 2020
I've included the relevant colourimetric transforms required in the chain, given the different layers of transparency in the viewport. Hope someone finds this informative.
Dec 21 2019
This is excellent and important work. I sincerely hope the core developers take a peek and see if it can fit. Perhaps you can also detect 10 bit encodings and upgrade them to float buffers while this work is in progress?
It’s less about compression and more about the details of the compression. An all-I-frame codec will permit reliable seeking, depending on the decoding library.
Dec 20 2019
Strongly suspect this is related to the encoding characteristics of the codec. There are likely P and B frames causing chaos, and they always will with seeking without caching.
@Joerg Mueller (nexyon) Thoughts?
You must appreciate that there are a near infinite number of "bugs" reported in this tracker, given the large audience using Blender. Making a good report is crucial if you believe there is indeed an issue, and it will help the developers understand your issue.
Dec 12 2019
EXRs are designed to encode any type of data, which may include but is not limited to:
- scene linear colour data
- display linear colour data
- scene nonlinear colour data, typically via a log-like transform
- display nonlinear colour data
- linear or nonlinear non-colour data such as depth, normals, cryptomattes, alpha occlusion, etc.
Oct 3 2019
ACES is really tangential; it’s just a simple series of transfer functions and a reference working space that is more or less BT.2020 for use with CGI / rendering / compositing.
@Fahad Hasan (cgvirus) The Nuke Merge nodes by default assume scene referred and will only change to the display referred versions with the horribly titled “Video Colorspace” check box toggled on. OpenImageIO has the same swapped-out formulas that Nuke uses. You will notice that things like Screen etc. will work with scene referred emissions, but it is completely different from A+B - (A*B), for example.
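The difference is easy to demonstrate with made-up values; a minimal sketch (my own function names, not Nuke's API):

```python
def screen_display(a, b):
    """Display-referred Screen: only meaningful for values in [0, 1]."""
    return a + b - a * b

def add_scene(a, b):
    """Scene-referred emissions simply sum."""
    return a + b

# Within [0, 1] both stay bounded, but they disagree:
print(screen_display(0.5, 0.5))  # 0.75
print(add_scene(0.5, 0.5))       # 1.0

# With scene-referred emissions above 1.0, Screen breaks down:
print(screen_display(4.0, 2.0))  # -2.0, nonsense for light
```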
Sep 28 2019
Would this be the cause of the HSV colour picker returning crazy RGB ratios when using an alternative OCIO config?
Sep 26 2019
Agree with @Brecht Van Lommel (brecht).
Aug 22 2019
OpenEXR reading and writing should use chromaticity metadata to determine the color space of the image, and convert it to and from the working color space.
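For illustration, the standard derivation of an RGB-to-XYZ matrix from primary and white point chromaticities; a sketch with numpy (the BT.709/sRGB values are well known, the function name is mine):

```python
import numpy as np

def rgb_to_xyz_matrix(prims, white):
    """Build an RGB->XYZ matrix from xy chromaticities of primaries and white."""
    def xyz(xy):
        # Promote an xy chromaticity to XYZ with Y = 1.
        x, y = xy
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    m = np.column_stack([xyz(p) for p in prims])  # unscaled primary columns
    s = np.linalg.solve(m, xyz(white))            # scales so (1,1,1) -> white
    return m * s

# BT.709 / sRGB chromaticities with D65 white
M = rgb_to_xyz_matrix([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
                      (0.3127, 0.3290))
```

The middle row of the resulting matrix reproduces the familiar BT.709 luma weights (roughly 0.2126, 0.7152, 0.0722), which is a handy sanity check.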
Aug 5 2019
But it’s a deep rabbit hole that requires attention.
Jul 13 2019
So what to do?
Jun 14 2019
@Ivan Cappiello (icappiello) I had ordered the config originally because Blender didn’t have the colour picker role in use, and it went by order for a long time.
May 28 2019
The alpha format dictates the over operation here. Wrong over, wrong output.
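A scalar sketch of the two over variants (my own helpers), showing the dark-fringe failure when the unassociated formula is applied to associated data:

```python
def over_associated(fg, a, bg):
    """Correct for associated alpha: fg already carries its occlusion."""
    return fg + bg * (1.0 - a)

def over_unassociated(fg, a, bg):
    """Correct for unassociated alpha: fg must be weighted by alpha."""
    return fg * a + bg * (1.0 - a)

# An associated-alpha edge pixel: half-covered white over a white background.
fg, a, bg = 0.5, 0.5, 1.0
print(over_associated(fg, a, bg))    # 1.0  -- seamless white
print(over_unassociated(fg, a, bg))  # 0.75 -- darkened edge fringe
```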
May 27 2019
I rebuilt my example shader in Unity and Unreal Engine, and neither of them display dark outlines in the semi transparent areas of the image texture.
May 23 2019
I believe UE BLEND_AlphaComposite performs the canonical Porter-Duff over and assumes associated alpha.
The problem is that hardware accelerated sRGB to linear conversion appears to assume unassociated alpha.
...because the color information is lost...
May 18 2019
This serves as an ideal example of an image that has colour data in the 100% transparent pixels that you can view in Blender's image editor by isolating the "Color" channel.
May 15 2019
It is indeed sub-optimal. Promotion makes good sense I suppose in this light.
I see your vantage.
OpenColorIO handles all of the shader compression schemes, so no issue on that side. V2 elevates it further, making for a 1:1 with CPU, via the same compression architecture extended to more granular results.
Doesn’t it make more sense to simply let OCIO do the full transform and utilize the allocation format and range assigned for maximum fidelity?
May 14 2019
It’s not about optimizing so much as keeping the encoding as it is tagged. A simple example is a camera referred REC.709 OETF at 8 bit; it should come back as such, not as the sRGB OETF. Same goes for the litany of other encodings that are designed for 8 bit.
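The two curves are easy to compare; a sketch of the published formulas (BT.709 OETF per Rec. ITU-R BT.709, sRGB per IEC 61966-2-1; function names are mine):

```python
def bt709_oetf(x):
    """BT.709 camera OETF."""
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

def srgb_oetf(x):
    """sRGB encoding function."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

# The two curves visibly disagree; decoding one with the other shifts values.
x = 0.18  # mid grey
print(bt709_oetf(x), srgb_oetf(x))
```

At mid grey the two encodings already differ by several 8 bit code values, which is why round-tripping a BT.709 encoding through the sRGB formula is not a no-op.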
Yes. I see that my comment is not specific to this fix, which is required.
I believe this might be problematic on the 8 bit front, as it ends up encoding the resultant buffer to an unknown, unclassified state?
The procedure to reproduce it is pretty easy and straightforward; why do you need a GIF?
Hex values are colourspace dependent and have little use outside of other facets. They are a *very* bad idea.
“Go crazy” isn’t a very useful description. Can you provide screenshots or an animated GIF of the issue happening, along with samples of some of the values?
May 5 2019
Words like Default and Standard don’t really tell the user anything useful.
Can we keep this on topic?
We should not make any changes here for 2.80.
May 4 2019
Color management discussions are confusing enough, I can't rely on an interpretation of a Slack conversation to make design changes.
Brecht, I'm not the enemy here. I'm speaking solely on the "authority" of understanding a basic level of colorimetry and having spent quite a few years around the OpenColorIO folks. I'm not making things up here.
As an addendum, if you do intend to adhere to some Blender rules about how Blender thinks colour transforms work, you'd be wise to follow OpenColorIO's design and not use children of GroupTransforms to accomplish it. Use the View, and append the look onto the view using suitable syntax. As an example:
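(A hedged sketch; the display, view, and look names below are hypothetical, but the `looks` field on a `!<View>` is standard OCIO v1 syntax.)

```yaml
displays:
  sRGB:
    - !<View> {name: Standard, colorspace: out_srgb}
    # Look appended onto the view, rather than baked into a GroupTransform:
    - !<View> {name: Graded, colorspace: out_srgb, looks: my_look}

looks:
  - !<Look>
    name: my_look
    process_space: scene_linear
    transform: !<CDLTransform> {slope: [1.1, 1.0, 0.9]}
```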
This isn’t about your opinion beyond how you wish something to be in Blender. That's fine. It's a personal misunderstanding as to how OpenColorIO works, and pixel management in general.
now follows the OpenColorIO design better
Apr 1 2019
You can’t calculate vectorscope positions without knowing the chromaticities of your RGGB camera primaries.
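To illustrate: a vectorscope position is just (Cb, Cr) computed from luma coefficients, and those coefficients are derived from the primaries' chromaticities, so BT.709 and BT.2020 place the same RGB triplet at different points. A sketch (helper name is mine):

```python
def cbcr(r, g, b, kr, kb):
    """Cb/Cr from RGB given luma coefficients kr, kb (kg = 1 - kr - kb)."""
    y = kr * r + (1 - kr - kb) * g + kb * b
    cb = (b - y) / (2 * (1 - kb))
    cr = (r - y) / (2 * (1 - kr))
    return cb, cr

rgb = (1.0, 0.0, 0.0)  # pure red
print(cbcr(*rgb, kr=0.2126, kb=0.0722))  # BT.709 vectorscope position
print(cbcr(*rgb, kr=0.2627, kb=0.0593))  # BT.2020 position: Cb differs
```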
Feb 2 2019
The inputs would indeed need to be applied per image buffer, as that is OCIO’s design to take all imagery to the same scene referred reference space. The VSE abuses this design.
Part of this is the fact that the VSE abuses OCIO in an attempt to work around some of its limitations.