Eevee: strange discrepancy in handling of 32 bit floating point textures
Closed, Resolved · Public · BUG

Description

System Information
Operating system: Linux 5.4.10-arch1-1
Graphics card: GTX980Ti

Blender Version
Broken: 2.83 (01d9a2f71b56be9354ce31564e3dddb6af4a0757)
Worked: N/A

Short description of error
When using a 32-bit floating point normal map with Eevee, it looks as if Eevee resamples the image and reduces its quality:

The same image and material used with Cycles seem to work fine. Also, slightly different setups in Eevee (e.g. manually generating (some of the) same data) do not produce the same issue.
More information in this video.

Exact steps for others to reproduce the error

  1. Load the above .blend file.
  2. Observe banding on the surface in Eevee material preview.
  3. Switch to render preview (Cycles): surface will become smooth.
  4. Try the other two materials and note that they do not exhibit the same problem in Eevee.
  5. Try baking normals using the ManualTangentNM material (see video for details). When baked with no range compression, Eevee does not seem to have an issue.

Event Timeline

Richard Antalik (ISS) changed the task status from Needs Triage to Confirmed.Jan 13 2020, 10:25 PM

I can confirm this. Seems to be caused by enabling Object Data > Normals > Auto Smooth.

I don't think that's the case. Re-baking with the whole mesh smooth-shaded, without custom normals or sharp edges, still shows the issue in Eevee (but not in Cycles), although of course the normals in general would then be wrong.
E.g., with Auto Smooth disabled one can still use the 'ObjectNM' material, and Eevee previews it without banding. But after baking tangent-space normals with that material, switching back to the 'TangentNM' material, and plugging in the new texture in place of the original tangent-space normal map, the banding returns in Eevee (but not in Cycles). Even a few seams along triangulation edges show up in Eevee (on the flat sides).

It's really strange. My first guess was that it was related to the 10-bit normals precision issue (T61024). But even testing with D6614 applied did not fix the problem entirely.

Then I remembered that we only use 16-bit textures on the GPU.

This one-liner fixes it:

@@ -1060,11 +1060,11 @@ void GPU_create_gl_tex(uint *bind,
 
   /* create image */
   glGenTextures(1, (GLuint *)bind);
   glBindTexture(textarget, *bind);
 
-  GLenum internal_format = (frect) ? GL_RGBA16F : (use_srgb) ? GL_SRGB8_ALPHA8 : GL_RGBA8;
+  GLenum internal_format = (frect) ? GL_RGBA32F : (use_srgb) ? GL_SRGB8_ALPHA8 : GL_RGBA8;
 
   if (textarget == GL_TEXTURE_2D) {
     if (frect) {
       glTexImage2D(GL_TEXTURE_2D, 0, internal_format, rectw, recth, 0, GL_RGBA, GL_FLOAT, frect);
     }

However, this has a significant performance impact. So I think this should be an option added to the future "Performance/Quality" panel for EEVEE/Workbench.

Thanks for the clarification. So it's not a bug per se? I wonder if that could be made an option on the texture node itself, to allow selectively trading performance for quality.

We could add a per-image datablock option. We can debate what the best option is UX-wise. @Brecht Van Lommel (brecht) @William Reynish (billreynish), any input on this?

I'm fine with an option on the image datablock.

Clément Foucault (fclem) closed this task as Resolved.Feb 25 2020, 3:15 PM
Clément Foucault (fclem) claimed this task.

A new option was added in master for 2.83 to allow high bit depth (32-bit) textures.