
Blender GLSL viewport DoF doesn't respect world scale
Open, Normal, Public

Description

System Information
OS X 10.11.4 El Capitan and MacBook Pro with AMD Radeon R9 M370X

Blender Version
Broken: 2.77 22a2853

Short description of error
Real-time Depth of Field (DoF) in the viewport focuses at the wrong distance if <Scale> is not set to 1.0 under Properties->Scene->Units. If the scale is set to 2.0, the camera focuses at 1/2 the actual distance. This does not affect non-realtime rendering.

Exact steps for others to reproduce the error

  1. Create a cube and a camera, or use the default Blender scene
  2. Create an Empty inside the cube to be used as the camera's focal object
  3. In camera settings tab, set the focus object to be the Empty
  4. In camera settings tab, set F-stop to 2.0
  5. Enter camera view
  6. In viewport properties (N-Key) under Shading, check <Depth Of Field>
  7. The cube should be in relative focus at this point
  8. To check, translate the empty along one of the axes (G-key + Y-key). It should go in and out of focus as the empty moves closer to and further from the cube.
  9. Now, go to Properties->Scene tab->Units and check <Metric> to enable the Scale property
  10. Set Scale to 2.0
  11. The cube should now appear out of focus, even though the empty and the cube occupy the same position
  12. You will have to move the empty to twice the cube's distance from the camera in order to regain focus

Based on an attached .blend file (as simple as possible) with a minimum number of steps
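
For convenience, here is a minimal script sketch of the setup steps above (assuming the 2.77-era bpy API; the <Depth Of Field> checkbox from step 6 still has to be ticked by hand in the viewport panel):

```python
import bpy

scene = bpy.context.scene
cube = bpy.data.objects['Cube']    # default scene objects
cam = bpy.data.objects['Camera']

# Step 2: an Empty inside the cube, used as the camera's focal object.
empty = bpy.data.objects.new('FocusEmpty', None)
scene.objects.link(empty)
empty.location = cube.location

# Steps 3-4: focus on the Empty and set the viewport DoF F-stop to 2.0.
cam.data.dof_object = empty
cam.data.gpu_dof.fstop = 2.0

# Steps 9-10: metric units with Scale 2.0 are what trigger the mis-focus.
scene.unit_settings.system = 'METRIC'
scene.unit_settings.scale_length = 2.0
```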

Details

Type
Bug

Event Timeline

Alan Dennis (ripsting) raised the priority of this task to Needs Triage by Developer.
Alan Dennis (ripsting) updated the task description.
Alan Dennis (ripsting) set Type to Bug.

From what I can tell this is working as intended.
The scene scale changes the f-stop, not the focus distance. See T48157.

However, scene scale isn't meant to change how the scene displays/renders (Cycles doesn't do this, for example).
@Antony Riakiotakis (psy-fi), wonder if this should be disabled or handled differently?

Bastien Montagne (mont29) lowered the priority of this task from Needs Triage by Developer to Normal. May 5 2016, 11:10 AM

Hi @Campbell Barton (campbellbarton),

Generally, if the units were right everywhere (and from what I remember they are not...), scene scale would be applied to all lengths in the scene; the geometry would then not change and the focus would be the same. The crappy thing here is that some camera parameters are always assumed to be in one fixed unit (mm), which probably should not change with scene scale. On the other hand, the whole point of focus is to keep, well, things in focus, so this should work regardless of the scene's units.

This needs some investigation, but right now I'm a bit busy with moving house and work. It may take a while.

I don't think the viewport DoF should be taking into account scene->unit.scale_length, Cycles doesn't either.

The reason it's not needed there is that both the focal length and the sensor size are in fixed mm units that do not change with scene scale. The way the formulas work, these units end up cancelling each other out, and it's then irrelevant which unit they are in relative to the scene units.

Or maybe there is a mistake in the Cycles math, but they shouldn't work differently.
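
To make this concrete, a toy numeric sketch of the symptom (hypothetical numbers, thin-lens CoC in diameter form, with the fixed 1 BU = 1 m convention described above; this is not the actual shader code):

```python
def coc_fraction(object_dist, focus_dist, lens_mm=50.0, fstop=2.0, sensor_mm=32.0):
    """Circle of confusion as a fraction of the sensor width (thin lens).

    object_dist and focus_dist are in Blender units, assumed to map to
    meters (1 BU = 1 m) regardless of the scene's unit scale.
    """
    f = lens_mm / 1000.0              # focal length, converted once to BU
    aperture = f / fstop              # aperture diameter from the f-number
    coc = abs(aperture * f * (focus_dist - object_dist)
              / (object_dist * (focus_dist - f)))
    return coc * 1000.0 / sensor_mm   # back to mm, relative to the sensor

cube_dist = 10.0   # cube (and focus empty) 10 BU from the camera
scale = 2.0        # Properties -> Scene -> Units -> Scale

print(coc_fraction(cube_dist, cube_dist))          # 0.0: in focus, as expected
# The buggy viewport path behaves as if the focus distance were divided by
# the unit scale -- exactly the reported "1/2 the actual distance":
print(coc_fraction(cube_dist, cube_dist / scale))  # > 0: visibly out of focus
```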

Hey Brecht, the problem here is converting from scene units (the world-space depth we reconstruct from the depth buffer in the shader is in scene units) to the units used internally by the camera. More specifically, we are interested in "what percentage of the camera sensor is covered by the circle of confusion".

It looks like the circle of confusion can indeed be expressed in camera units, given that the focal length is also in world units (see eq. 1 of http://http.developer.nvidia.com/GPUGems3/gpugems3_ch28.html, and https://en.wikipedia.org/wiki/F-number for converting between focal length/aperture and f-number). The sensitive part is computing the focal length from the f-number and the aperture, which is in our weird camera units. If that works, then using it in equation 1 will obliterate the units in the numerator/denominator, and the final result will be in camera units.

Still, you need to make sure that the focal length is in scene units somehow.
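
For reference, a sketch of the two relations being combined here, under the standard thin-lens model (notation mine; the linked chapter's eq. 1 is the diameter form of this):

\[
  c \;=\; \left|\, A \cdot \frac{f\,(S - D)}{D\,(S - f)} \,\right|,
  \qquad
  N \;=\; \frac{f}{A},
\]

where \(c\) is the CoC diameter, \(A\) the aperture diameter, \(f\) the focal length, \(S\) the plane-in-focus distance, \(D\) the object distance, and \(N\) the f-number. Dividing \(c\) by the sensor width \(w\) gives the sensor-coverage fraction the shader needs: \(A/w\) is a ratio of two mm quantities, but one factor of \(f\) remains set against the scene-unit distances \(S\) and \(D\), which is exactly why the focal length has to end up in scene units.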

Made some changes to my comment because even my own brain was printing parsing errors after reading it.

Put some backwards compatibility in the mix, and this becomes a nice headache of a problem ;)

Just wanted to add a comment here.
The moment we have a camera system, scale becomes important not because the camera itself must be scaled, but quite the opposite: the camera needs to be aware of scale in order to keep itself a constant size (a 50mm lens must always be 50mm, even if the scene is now measured in miles). If you allow the camera to scale up with the model, the DoF would be identical at every scale, which is not realistic: taking a photo of a building using a camera as big as a mountain would give you the same depth of field as taking a photo of a miniature building sitting on your desk using a normal-sized camera. The problem is that in reality we do not take photos of buildings with mountain-sized cameras; when the camera body is kept constant, the subject size and distance relative to the sensor size change, and the circle of confusion registered for every pixel on the sensor changes with them.

Based on my observation (T61273), when the render engine is Cycles and the aperture type is Radius, everything works properly: it produces the correct depth of field for the given scene size at the given distance, as a real camera would. It is the F-stop value that needs some attention with regard to scale.

PS. I was a beta tester for the Maxwell engine and these topics were common discussion.
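
To make the Radius/F-stop distinction above concrete, a hypothetical sketch (the names and the 1 BU = 1 m convention are my assumptions, not Cycles' actual sync code):

```python
def aperture_radius_bu(aperture_type, lens_mm, fstop, radius_bu):
    """Aperture radius in Blender units under the two authoring conventions."""
    if aperture_type == 'RADIUS':
        # Authored directly in scene units, so it automatically tracks
        # whatever real-world size the scene is meant to represent.
        return radius_bu
    # 'FSTOP': derived from the focal length, which is fixed in millimetres,
    # so some mm -> BU convention has to be baked in (e.g. 1 BU = 1 m). This
    # is the step that would have to respect scene scale to behave physically.
    return (lens_mm / 1000.0) / (2.0 * fstop)
```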

To help explain what's happening physically, here is an illustration that explores the effect of shrinking the whole camera by a factor of 2 (this would be equivalent to scaling the whole universe up by a factor of 2 while keeping the camera the same).

You would think that scaling all parts of a camera uniformly would leave the result on the sensor identical, but it does not.
The key is that the aperture of the lens changes in relation to the world... and it is the aperture size relative to the world (not to the camera) that affects the CoC on the sensor.
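
A quick numeric check of this thought experiment (toy numbers, same thin-lens formula as in the sketch further up; all arguments share one unit):

```python
def coc_over_sensor(aperture, focal, focus, dist, sensor):
    # Thin-lens CoC diameter divided by the sensor width.
    return abs(aperture * focal * (focus - dist)
               / (dist * (focus - focal))) / sensor

# Miniature building on a desk, normal camera (units: meters).
small = coc_over_sensor(0.025, 0.05, 0.5, 0.6, 0.032)

# Real building, i.e. everything 100x bigger -- camera scaled up 100x too:
big_scaled_cam = coc_over_sensor(2.5, 5.0, 50.0, 60.0, 3.2)

# Same building, but the camera kept at its real, constant size:
big_real_cam = coc_over_sensor(0.025, 0.05, 50.0, 60.0, 0.032)

print(small, big_scaled_cam, big_real_cam)
# ~0.0145, ~0.0145, ~0.00013: scaling subject and camera together leaves the
# blur identical, while the constant-size camera sees ~100x less blur.
```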