The issue here is caused by a difference in how Cycles and the rest of Blender approach color management. Cycles operates on a premultiplied buffer, and its color management looks like DisplayPixel.rgba = color(linear_rgb_to_srgb(LinearPixel.rgb), LinearPixel.a). Note that Cycles does not unpremultiply prior to color management, whereas the rest of Blender will first unpremultiply the color, perform the color space conversion, and then premultiply the buffer again.
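The two pipelines can be sketched like this (a minimal Python stand-in for illustration only, not Blender's actual code; the function names are assumptions):

```python
def linear_rgb_to_srgb(c):
    # Standard sRGB transfer function for a single channel.
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def cycles_style(rgba):
    # Cycles: convert the premultiplied RGB directly, keep alpha as-is.
    r, g, b, a = rgba
    return (linear_rgb_to_srgb(r), linear_rgb_to_srgb(g), linear_rgb_to_srgb(b), a)

def blender_style(rgba):
    # Rest of Blender: unpremultiply, convert, then premultiply again.
    r, g, b, a = rgba
    if a > 0.0:
        r, g, b = r / a, g / a, b / a
    r, g, b = (linear_rgb_to_srgb(c) for c in (r, g, b))
    return (r * a, g * a, b * a, a)

px = (0.1, 0.1, 0.1, 0.5)  # premultiplied, semi-transparent grey
print(cycles_style(px))
print(blender_style(px))
# The two results differ whenever 0 < alpha < 1, which is the
# inconsistency this report is about.
```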
@Brecht Van Lommel (brecht), why did such a divergence appear in the first place, and where is the truth here?
The inconsistency comes from passing false for predivide in engine_bind_display_space_shader, whereas the image editor works as if it's set to true.
It's impossible to do color management correctly on an image with alpha, so both are wrong. Here's a .blend with a Cycles checkerboard background to show what the correct result is:
Ideally color management should only happen after the image has been composited over a background. If both the image editor and the 3D view used GLSL shaders for drawing float buffers, this could be solved by drawing the render and the checkerboard background in a single GLSL shader, so that we can blend them first and then do color management as the last step.
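The "blend first, color manage last" order could look roughly like this (a hypothetical Python stand-in for what the single GLSL shader would do):

```python
def srgb(c):
    # Standard sRGB transfer function for a single channel.
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def draw_pixel(render_rgba, checker_rgb):
    r, g, b, a = render_rgba  # premultiplied render pixel
    # Over-operator against the opaque checkerboard background.
    blended = [c + (1.0 - a) * bg for c, bg in zip((r, g, b), checker_rgb)]
    # Color management happens only after compositing, on a result
    # that no longer has alpha, so the conversion is unambiguous.
    return tuple(srgb(c) for c in blended)

print(draw_pixel((0.1, 0.1, 0.1, 0.5), (0.8, 0.8, 0.8)))
```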
Enabling predivide for the 3D viewport would be the quick way to make it consistent. The downside would be that shaders that are fully transparent and have emission (like fire) would draw very poorly.
@Brecht Van Lommel (brecht), the issue with the GLSL side of color management is that it doesn't come for free: it implies 4x the data being transferred to the GPU on every redraw, which will eventually saturate your PCIe bus, causing lags when navigating the image. Keeping the texture on the GPU all the time would solve the performance issue, but that's not something that will come any time soon.
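The arithmetic behind the 4x figure (and the 2x figure for half floats mentioned later) is just per-pixel channel sizes; the viewport resolution here is an arbitrary example:

```python
# Bytes uploaded per redraw for an example 2560x1440 viewport (assumption).
width, height = 2560, 1440
pixels = width * height

byte_rgba  = pixels * 4 * 1   # 8-bit display buffer: 4 channels x 1 byte
float_rgba = pixels * 4 * 4   # float buffer:         4 channels x 4 bytes
half_rgba  = pixels * 4 * 2   # half-float buffer:    4 channels x 2 bytes

print(float_rgba // byte_rgba)              # -> 4
print(half_rgba // byte_rgba)               # -> 2
print(round(float_rgba / (1024 ** 2), 2))   # MiB per upload at float precision
```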
While there is no correct solution here, we should stick to one of the solutions only, at least ideally. Keep in mind that the same linear->display space conversion is needed when saving an F12 render as a PNG image.
The issue with emissive-only volumes is annoying, but it's also happening in F12 renders. So how important is that?
Should we consider switching the display buffer from straight to premultiplied alpha? (The current issues outlined here would become somewhat easier to solve, but what downsides would it bring?)
I'm fine with making Cycles viewport render consistent with the image editor, even if it looks worse for fire, and then do further improvements later.
Making the display buffer premultiplied alpha seems safe to me, there should be no precision loss. Only downside I guess is that saving a .png will give a different result, but on the other hand it will better show what you get when saving as .exr or using the render in compositing.
With half floats the transfer to the GPU would be only 2x, but that's also not trivial to implement.
Regarding luminescent pixels, I had that fixed at one point. The predivide was the problem here, and I couldn't track down where the alpha == 0 case was discarding RGB data. It could be solved by skipping the RGB values in the multiply when alpha == 0, or by setting the alpha to some extremely small value. Either approach would preserve RGB for the no occlusion / high emission cases. When tested, OpenGL ONE_MINUS blending properly blended all values, luminescent and non-luminescent alike.
The scenarios where alpha is low and RGB is greater (low occlusion / higher emission) should hold up fine, and render as emissive.
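A hypothetical sketch of the alpha == 0 problem described above: a naive predivide zeroes out RGB for pure-emission pixels, while the suggested fix of skipping the divide when alpha == 0 preserves them (illustrative only, not the actual Blender code paths):

```python
def naive_predivide(rgba):
    r, g, b, a = rgba
    if a == 0.0:
        # Division by zero is undefined, so implementations commonly
        # output black here, discarding the emission data.
        return (0.0, 0.0, 0.0, a)
    return (r / a, g / a, b / a, a)

def safe_predivide(rgba):
    r, g, b, a = rgba
    # Fix: leave RGB untouched when there is nothing to divide by.
    if a == 0.0:
        return rgba
    return (r / a, g / a, b / a, a)

fire = (0.9, 0.4, 0.05, 0.0)  # emission-only pixel, e.g. fire
print(naive_predivide(fire))  # -> (0.0, 0.0, 0.0, 0.0)
print(safe_predivide(fire))   # -> (0.9, 0.4, 0.05, 0.0)
```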
> It's impossible to do color management correctly on an image with alpha, so both are wrong.
I believe this is false if you mean that an alpha-encoded image can't be properly colour managed; colour science calculations prove it can, via the divide / multiply when alpha is not zero. Guessing you weren't inferring that. I suspect you mean that applying any colour manipulation before issuing a predivide is an error of process, which is absolutely correct. The appropriate process for any and all colour manipulation is always: disassociate, colour manipulation, then reassociate.
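A sketch of that claim: for non-zero alpha the divide / multiply round-trips exactly, so the disassociate -> manipulation -> reassociate order is well defined on alpha-encoded images (the manipulation hook here is an assumption, not any specific Blender transform):

```python
def colour_manage(rgba, manipulate):
    r, g, b, a = rgba
    assert a != 0.0, "the divide is only defined for non-zero alpha"
    r, g, b = r / a, g / a, b / a                 # disassociate
    r, g, b = (manipulate(c) for c in (r, g, b))  # colour manipulation
    return (r * a, g * a, b * a, a)               # reassociate

identity = lambda c: c
px = (0.2, 0.3, 0.1, 0.5)
out = colour_manage(px, identity)
# With no manipulation applied, disassociate/reassociate round-trips:
print(all(abs(x - y) < 1e-12 for x, y in zip(out, px)))  # -> True
```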