Camera Lens

Introduction

VFX professionals need to match real camera footage with rendered VFX elements. Let's look at a recent request from users to implement something like Lentil in Blender.

This is not a new problem, and over the years this topic has been brought up a few times.

Solution space

Over the years there have been different proposed solutions:

  • Camera nodes
  • Distortion lookup table
  • Lens parametrization

While a distortion lookup table was eventually implemented in the Blender Game Engine, most proposals focused on lens parametrization.
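To make "lens parametrization" concrete, here is a minimal sketch of a polynomial radial distortion model (Brown-Conrady style, similar in spirit to the models used by tracking solvers). The function name and the `k1`–`k3` parameters are illustrative assumptions, not an existing Blender API.

```python
# Sketch of a polynomial radial distortion parametrization.
# Coordinates are normalized and centered on the optical axis;
# k1..k3 are the usual radial coefficients (illustrative names).

def distort(x, y, k1=0.0, k2=0.0, k3=0.0):
    """Map an undistorted point (x, y) to its distorted position."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    return x * factor, y * factor

# Barrel distortion (k1 < 0) pulls points toward the center:
xd, yd = distort(0.5, 0.0, k1=-0.1)  # xd = 0.4875
```

A handful of scalar coefficients like these is exactly what makes a parametric model compact to store and animate, but also what makes competing, incompatible parameter sets possible.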

Questions to be answered

Let's try answering a few of the decision tree example questions:

  • Which problem is this trying to solve?
    • Replicating the effect of real cameras for 100% CGI shots, or mixed real-footage + VFX.
  • Who is this solving for? (what is the target audience: beginners? Experienced users? Animators? Riggers?)
    • VFX artists (in particular pre-viz artists), mid to experienced.
    • This is also to say that the audience is not researchers/academics.
  • Within the context of Blender as a multi-purpose DCC, how does this feature fit into it?
    • It connects to the existing VFX user group.
  • Which areas of Blender are affected by this design?
    • Rendering, but also compositing and tracking should be taken into consideration.
  • What is the impact on existing workflows?
    • Minimal; the feature is isolated from existing workflows.
  • How does it impact the other user-groups and workflows in Blender?
    • There is an overlap with VR research (see Fisheye624) in the sense that it is another user group which may push for its own standard to be adopted.
  • Which new problems does this introduce?
    • This aggravates the feature disparity between Cycles and EEVEE.
    • In render mode this doesn't play well with Grease Pencil, modelling, or even selection.
    • Technically these are not new problems, since they exist already for panorama and fisheye lens.
    • Supporting arbitrary parametrization may be a slippery slope to support (and keep maintaining) multiple incompatible systems.

Follow ups

This is an open-ended problem at the moment, so there is no example of a final design.

That said, any design which tries to tackle this should present a solution for:

  • How does it relate to motion tracking?
    • Can the distortion models be consistent?
  • How to eventually support this with EEVEE, if possible (and if not possible explain why).
  • How to communicate in the viewport that the current render mode is not compatible with the overlay draw engine.
    • And what to do about it? Disable the draw engine in these cases? Draw it mismatching the render engine buffer?
  • How universal are these parameters, and who has adopted them already?
  • What are the known alternatives of parametrization?
  • Is this model future-compatible with a camera-nodes solution? (Can they be converted in either direction?)
  • Which user-group to prioritize (and which to frustrate), and why.
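On the convertibility question above, one direction is straightforward to illustrate: any parametric model can be baked into a per-pixel lookup table (an "ST map"), which is also the representation a node- or LUT-based design would likely consume. The following is a hedged sketch under that assumption; the function name, resolution, and coordinate conventions are illustrative, not a proposed Blender API.

```python
# Sketch: baking a parametric distortion model into a per-pixel
# lookup table (ST map). Each output pixel stores the (u, v)
# source coordinate to sample, in [0, 1] texture space.

def bake_st_map(width, height, distort_fn):
    """Return a height x width nested list of (u, v) pairs,
    sampling distort_fn at each pixel center."""
    table = []
    for j in range(height):
        row = []
        for i in range(width):
            # Pixel center mapped to normalized [-1, 1] coordinates.
            x = (2.0 * (i + 0.5) / width) - 1.0
            y = (2.0 * (j + 0.5) / height) - 1.0
            xd, yd = distort_fn(x, y)
            # Back to [0, 1] texture space.
            row.append(((xd + 1.0) * 0.5, (yd + 1.0) * 0.5))
        table.append(row)
    return table

# An identity model leaves pixel centers unchanged:
st = bake_st_map(4, 4, lambda x, y: (x, y))
```

The reverse direction (recovering parameters from an arbitrary lookup table or node graph) requires fitting and is only exact when the table was generated by the same model family, which is precisely why the convertibility question deserves an explicit answer in any design.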