Cycles panorama camera - custom lens mapping
Open, Normal, Public

Description

This patch allows for a (more or less) arbitrary lens shape, so that effects such as whirl and pinch get the actual expected resolution in the magnified areas. My objective with this was to be able to render with the non-uniform resolution of the retina. I submit it here in the hope it will be useful to more people.

An example animation with 3 force fields deforming the lens (source 'grid.py' attached):
http://youtu.be/2PeIdJNT1sI (sorry for YouTube's poor encoding quality, the original was much clearer)

High quality frame: http://www.pasteall.org/pic/51596

There is still some work to be done (e.g. what is the best way to deal with the map files, and what is the correct way to make it work on the GPU), so I'll wait for feedback to get things into their final shape.

This patch was developed with financial support from the Foundation for Science and Technology of Portugal, under the grant SFRH/BD/66452/2009.

Details

Type
Patch

Nice work. Sergey was considering implementing a similar thing in the context of VFX / motion tracking. Ideally this would integrate with movie clip distortion in the compositor somehow, so renders can be made to match recorded footage or vice versa. But I don't have a particular design for this in mind at the moment, as I'm not familiar with this subject yet.

Some changes would indeed be necessary to accept this patch:
* Relative paths can be handled in blender_camera.cpp, with the blender_absolute_path function.
* For GPU rendering, indeed no pointers can be stored like this; it should be stored in a "texture". Actually you might be able to reuse the subsurface scattering code, using lookup_table_read_2D in the kernel and scene->lookup_tables->add_table to create the table.
* float *cmap_grid in camera.h should be a vector<float> (less chance of memory leaks)
* Is there some standard file format for distortions like this, how would a user create these?

Thank you for the guidance. I fixed the problems you mentioned (new patch attached). Please check if the way I'm creating and destroying the lookup tables is the correct way to do it.

About a standard file format: before I started working on this I did a somewhat extensive search, but couldn't find anything. I tried using numpy.savetxt, but it is limited to 2D arrays. I didn't want to make things too complicated, so I just defined the format to be the row and column count followed by the (x, y) coordinates, all as text. That is easily generated in Python with:
"{0[0]} {0[1]} {1}".format(grid.shape, ' '.join(str(f) for f in grid.flat))
'grid' in this case is a numpy.array with shape [rows, cols, 2]
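
For completeness, a minimal sketch of a writer/reader pair for this format (the function names are only illustrative, they are not part of the patch):

import numpy as np

def write_grid(path, grid):
    # grid is a numpy array with shape [rows, cols, 2] holding (x, y) coordinates
    with open(path, 'w') as f:
        f.write("{0[0]} {0[1]} {1}".format(grid.shape,
                ' '.join(str(v) for v in grid.flat)))

def read_grid(path):
    # parse the row and column counts, then reshape the flat coordinate list
    values = open(path).read().split()
    rows, cols = int(values[0]), int(values[1])
    return np.array([float(v) for v in values[2:]]).reshape(rows, cols, 2)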

If there is anything else I could improve, let me know.

Nice update, some comments:
* In blender_camera.cpp, I think "char custom_map[1024];" can just be a "string custom_map"?
* It seems you used spaces instead of tabs for indentation in some places (camera.cpp, blender_camera.cpp, ..)
* If the file is invalid, maybe print an error to the console.
* Regarding "should also check file mtime", guess that would be nice but not strictly needed, we also don't do this for e.g. image textures.

Regarding the file format, I also couldn't find anything. The closest thing I found was this, but it's for mapping fisheye to something else, so it doesn't seem appropriate:
http://paulbourke.net/dome/warpingfisheye/
http://paulbourke.net/dome/meshmapper/

The format seems ok, we just need some text for the documentation to explain the file format then.

Dalai, assigned this to you in case you have some comments here as our panorama/dome expert.

That's very interesting.

I think for the design, the use case to be considered is producing images that match a real lens 1:1 (the eye here being considered one special lens case).
The Equisolid lens already does a great job in most cases, but if we really have to match real fisheye footage with rendered images, a calibration system would be nice (and yes, the clip editor could benefit from that as well).

Some comments:
Can you attach a sample of the map you created? (no numpy here) I'm trying to see if we could adapt it so we use the meshmapper format.
We are already using it for the game engine (when using domes with spherical mirrors) and it's a really straightforward format to write. It basically describes a UV lookup grid from an image to an output.

Even if we expand on the current description of the format, we can talk with Paul Bourke to think about the best way of incorporating it back into his design (right now the file format assumes a perfect 180° fisheye, which doesn't seem to be the case for your work). I'm saying that because I'm almost sure he has worked on similar problems when needing to project with a non-perfect fisheye lens.

Also notice that in meshmapper there is an "intensity" parameter to control natural vignetting and potentially for blending between multiple images. That may be interesting to incorporate as well.
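
For reference, from memory (so double-check against Paul's pages), a warp mesh file looks roughly like this:

2
4 3
-1.0 -1.0 0.0 0.0 1.0
...

The first line is the projection type of the input image, the second line is the node count of the grid in each direction, and each following line holds the output position (x, y), the input texture coordinate (u, v) and an intensity value.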

That said, I wonder if instead of a lookup mesh we shouldn't prioritize a real lens system such as:
* https://graphics.stanford.edu/wikis/cs348b-11/Assignment3
* http://www.cs.virginia.edu/~gfx/courses/2005/ImageSynthesis/assignments/camera.html

Now, in defence of your approach, this would probably allow for 3-D dome productions with this technique:
http://www.fddb.org/fulldome-3d-for-everyone-part-15/ (basically lookup shaders)

> * In blender_camera.cpp, I think "char custom_map[1024];" can just be a "string custom_map"?
I wanted to avoid manually initializing every field of BlenderCamera (we can't memset(0) a struct with a string object inside), but OK, I did it.

> * It seems you used spaces instead of tabs for indentation in some places (camera.cpp, blender_camera.cpp, ..)
Oh, sorry, nowadays I code mostly in Python, so I forgot about the tabs. It is fixed now.

> * If the file is invalid, maybe print an error to the console.
Done.

> * Regarding "should also check file mtime", guess that would be nice but not strictly needed, we also don't do this for e.g. image textures.
Ok, I removed the comment.

> The format seems ok, we just need some text for the documentation to explain the file format then.
Sure, that is easy. Where will it be inserted? I guess it could be better written knowing the context.

> Can you attach a sample of the map you created?
Sure, attached (example.txt).

The format is just a sequence of numbers, really not that much different from Bourke's.

Dalai, implementing a real lens system like that would be useful, but I think it does make sense to also allow some arbitrary mapping?

Regarding the documentation, it would go here. But let's first decide on the file format.
http://wiki.blender.org/index.php/Doc:2.6/Manual/Render/Cycles/Camera

We should ensure the chosen file format can be extended later with more features if necessary. Looking at the meshmapper format further it looks like it could actually be a good fit but I can't find a full specification. The "2" at the top apparently indicates fisheye but is there another value that indicates mapping from equirectangular as done here?

Either way it makes sense to have the first line in the file indicate some info about the type of file this is, and then have all the other data on separate lines so that more parameters can be added later.

I don't think we can support XY coordinates from the meshmapper format though, the code in this patch assumes a regular grid and I'm not sure the complexity of implementing arbitrary XY mapping is worth it.

> We should ensure the chosen file format can be extended later with more features if necessary.
I agree, the format should have a leading signature / magic number.

> Looking at the meshmapper format further it looks like it could actually be a good fit but I can't find a full specification. The "2" at the top apparently indicates fisheye but is there another value that indicates mapping from equirectangular as done here?

From http://paulbourke.net/dome/warpplayer/ :
1: planar image, perspective projection
2: fisheye projection
3: cylindrical projection
4: spherical projection
5: cubic map

Right now I'm just translating from the equirectangular projection, but it would be easy and useful to extend to the other panorama ones.

> Either way it makes sense to have the first line in the file indicate some info about the type of file this is, and then have all the other data on separate lines so that more parameters can be added later.
Agreed.

> I don't think we can support XY coordinates from the meshmapper format though, the code in this patch assumes a regular grid and I'm not sure the complexity of implementing arbitrary XY mapping is worth it.
Perhaps I misunderstood the format description, but it seems the XY component is only involved in the projection part, not the rendering one.

I'm gonna add support for this format in the next patch, then we can decide what else to do.

Ok, now the parser interprets Bourke's warpmesh format and a new (more extensible) format. Here is my test scene rendered with the standard_16x10.data deformation: http://www.pasteall.org/pic/show.php?id=52520 (note that the intensity field is ignored).

The new format is:
[magic=0xd157047] [field mask] [projection type]
[cols] [rows]
[X, if in mask] [Y] [U] [V] [I]
[... repeats cols * rows times]

The currently accepted projection types are 2 (fisheye, fixed 180° FOV) and 3 (cylindrical / equirectangular).
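
For clarity, a rough Python sketch of how such a file can be parsed (the field mask is assumed here to be a 5-bit mask over X, Y, U, V, I with X in the lowest bit, just for illustration):

import numpy as np

MAGIC = 0xd157047
FIELDS = ('X', 'Y', 'U', 'V', 'I')  # assumed bit order, lowest bit first

def read_custom_map(path):
    # the whole file is whitespace-separated text, so a single split is enough
    values = open(path).read().split()
    magic, mask, projection = int(values[0], 0), int(values[1], 0), int(values[2])
    if magic != MAGIC:
        raise ValueError("not a custom lens map file")
    cols, rows = int(values[3]), int(values[4])
    # keep only the per-node fields enabled in the mask, in declaration order
    present = [name for bit, name in enumerate(FIELDS) if mask & (1 << bit)]
    data = np.array([float(v) for v in values[5:]]).reshape(rows * cols, len(present))
    return projection, {name: data[:, i].reshape(rows, cols)
                        for i, name in enumerate(present)}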

Updated for the current version of Blender.

I can't seem to apply the patch to the latest master (6292b60a3f5b1c224ec78301c7830045560fa3b4). Is this still being worked on?

I haven't updated my Blender binary in a while, as it does everything I need it to do. But I'll check what has to be done to update it, and if it is easy I'll provide a new patch.

It was easy, so here is the new patch:


And a reference class for reading and writing the custom map format:

If you need anything else, just ask.

So even though the patch is really useful for custom panoramic cameras, it's still really limited. I've just shared a solution which is going to be more flexible: https://developer.blender.org/rB0e1f34a0c8b7f2a6ecddbca65f44bbd444c1386a

It is just an initial commit made to a dedicated branch and there is still quite a lot of stuff to be finished, but you might be interested in having a look and maybe even helping with the development.

Hi Sergey, thank you for your work. I'm curious in which way you find the custom map patch limited. As you mentioned in the camera ray nodes description, "The above thing might be solved if we bake distortion into a grid and use as a lookup table from the kernel"... that's exactly what this patch is useful for, you just have to provide it with the distortion map. Try baking the mapping from the SVM and use it with this patch, then check the performance... my guess is it will be worth it. If you need any help, please let me know.
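
Just to illustrate what I mean by baking, here is a toy sketch that writes a simple polynomial radial distortion into the text grid format from earlier in this thread (the model and coefficients are made up for the example; the real map would come from whatever distortion model the solver produces):

import numpy as np

def bake_radial_distortion(rows, cols, k1=-0.1, k2=0.02):
    # evaluate a toy radial distortion on a regular grid of output positions
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, rows),
                         np.linspace(-1.0, 1.0, cols), indexing='ij')
    r2 = xs * xs + ys * ys
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return np.dstack([xs * scale, ys * scale])  # shape [rows, cols, 2]

grid = bake_radial_distortion(32, 64)
with open("distortion_map.txt", "w") as f:
    f.write("{0[0]} {0[1]} {1}".format(grid.shape,
            ' '.join(str(v) for v in grid.flat)))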

For VFX you might need to make it so the render result is distorted in exactly the same way as the original footage. Currently your approach might work because variable focal length is not supported by the solver (yet), but it's already quite tedious work to bake distortion to an external file and link it to the renderer. Especially since after a subsequent solve your distortion model might change a bit, leading you to re-bake the file and so on.

Once variable focal length is supported, I don't really see how your approach could work in a nice way: you'd need to have a sequence of files, re-baking them every time the distortion model changes and sending these files to the render farm.

By the lookup table I only meant that the distortion grid gets baked on the CPU and sent to the render device as a lookup table. This is quite similar to what happens in the compositor. And this is only needed for ray undistortion; all the rest is probably faster to compute in the kernel. In any case you don't need to worry about keeping lookup files consistent, sending them to the farm and so on.

Also, using the nodes (not the current ones, but once it's all done) you might be able to render effects such as rolling shutter, using only camera motion as input, and keep it all working no matter what you tweak on the camera.

A rolling shutter distortion / undistortion of some sort would be excellent.