Status: Task design and engineer plan review
The goal of this project is to add code support and tools for Camera Projection Painting to Blender.
Such workflows have been used in 3D CG since its early days for matte painting, integration of 3D and video, etc.
In all of these workflows, the most logical UX is a projection preview as seen from the camera, plus a projection preview overlay on the 3D model or scene that is visible from any viewpoint.
While matte painting is one of the oldest workflows, recent years have seen huge progress in 3D scanning/photogrammetry methods, which are now one of the main parts of the 3D asset and scene creation process. At the same time, many artists use photo-modeling workflows with photo or video data as a source for textures.
Again, the most logical and fastest way to work with textures for such models is to use a 3D paint feature directly in the 3D application, instead of using external 3D painters or 2D editors to work with UV-unwrapped textures.
Who can use it
This project improves the UX from the point of view of 3D asset creators, photogrammetrists (3D scanner operators), and VFX/matte painters.
For that, we made a list of current limitations and propose improvements for a wide range of 3D CG users.
- 3D assets creators:
Probably one of the biggest groups. The standard photo-modeling workflow uses photos or images as backgrounds. The final model is UV-unwrapped in a human-readable form, and textures are made with workflows standard for 2D editors or texture painters, using layers, masks, etc.
A paint tool that can project an image or photo from a specific camera view can speed up mock-up iterations when working with clients and art/creative directors, thanks to a real-time preview of the results in 3D. At the same time, using real photos, especially for architectural models, limits the possible image sources: because all photos have lens distortion, extra attention is required to use only high-quality professional photo equipment, or undistortion must be applied to all sources before using them for texturing. And when working with video, this requires a lot of additional disk space to store that temporary data.
If the 3D assets are low-polygon, this also requires creating "proxy" objects with a higher polygon count, because the current Blender UV Project gives visible texture distortions when used on a low-poly mesh or a mesh with n-gons.
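The distortion on low-poly meshes comes from UVs being computed only at vertices and then interpolated linearly across large faces, which diverges from the true perspective projection of interior points. A minimal sketch (plain Python, not Blender code; a pinhole camera with focal length 1 is assumed for illustration):

```python
# Sketch: why per-vertex UV projection distorts on low-poly meshes.
# A pinhole camera projects a camera-space point (x, y, z) to (x/z, y/z).
# UV projection assigns UVs only at vertices; across a large face the UVs
# are interpolated linearly, which differs from the true projection.

def project(p):
    """Pinhole projection of a camera-space point (focal length = 1)."""
    x, y, z = p
    return (x / z, y / z)

def lerp(a, b, t):
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

# An edge spanning a large depth range, as on a coarse mesh.
v0 = (0.0, 0.0, 1.0)
v1 = (2.0, 0.0, 5.0)

# UVs computed only at the endpoints, then interpolated linearly
# (what projection onto a low-poly face effectively does):
uv_linear = lerp(project(v0), project(v1), 0.5)

# True projection of the 3D midpoint (what dense geometry converges to):
uv_true = project(lerp(v0, v1, 0.5))

print(uv_linear)  # (0.2, 0.0)
print(uv_true)    # (0.3333..., 0.0) -- visibly different
```

The gap between the two results is the visible texture sliding; densifying the mesh shrinks each face's depth range and makes the linear interpolation converge to the true projection, which is why "proxy" high-poly objects are currently needed.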
- 3D sculptors:
Another large group of 3D content creators. While in the early CG days these were mostly artists working with organic shapes or characters, nowadays many artists also do hard-surface modeling in 3D sculpting editors.
This group of artists also needs a way to project photo data onto textures; using a paint editor in the same application speeds up sketching and work-in-progress iterations.
Models made in these workflows can have complicated, non-optimized UVs while work is in progress, and using external 2D editors or texture painters is not practical until the model is retopologized and UV-unwrapped in a human-readable form.
Current Blender UV projection has fewer issues with such meshes, but the polygon count already has a big impact on the CPU time required. Computing a new UV projection from a new camera/view for a 5-10 million polygon mesh is already far from real-time.
This user group also needs to work with photo or video sources, and lens distortion in the source data requires the use of undistorted images.
- Photogrammetry and 3D scanning:
In recent years, one of the fastest-growing groups of 3D content creators. This group works with huge polygon count meshes, and its post-processing workflows for scan data are close to 3D sculpting, with one important difference: 3D scans are often monolithic meshes, or meshes split into chunks. In the middle of the process these meshes have completely non-optimized UVs with thousands of UV islands, produced by UV parametrization algorithms that are optimized for speed and for meshes with thousands, or even hundreds of millions, of polygons.
Photogrammetry, with all its power, has limitations in its algorithms due to the huge amount of data used when creating 3D meshes and their textures. The photogrammetry user does not always have an easy way to control which source image is chosen for the texture on a specific part of the model, and the resulting texture can use source data from an image (camera) with worse depth of field or changed environment lighting.
Many clients of 3D scan studios prefer to work with raw meshes and original textures, to be sure that all possible details from the 3D scan are used in the most efficient way for their project.
But the photogrammetrist still needs a fast way to quickly fix a raw mesh and raw texture from a specific image (specific camera) with better detail, more correct lighting, etc.
As with 3D sculptors, the mesh polygon count is more than enough to avoid UV projection deformations on textures. But a polygon count of around 10-20 million in the raw mesh is not a rare case for a high-quality, high-resolution 3D scan.
One of the biggest differences from the other groups is the number of images (cameras) used for 3D model and texture creation: 50-60 images/cameras is an average small photogrammetry scan.
Big or precise scans can easily have 1000 or even 10000 images at 8K or higher resolution.
The good news is that using all these images/cameras for the final texture is not mandatory, but a workflow that allows quickly switching between images/cameras is a big time saver.
And since the source for the 3D model and texture is real photos, lens distortion is always present. The other groups can probably work with some small distortions, but photogrammetry, due to its high precision, requires mathematically correct lens distortion models. This allows a precise copy of the source texture onto scanned objects at sub-millimeter resolution.
The image count and image sizes would require a lot of disk space for any additional temporary files such as undistorted images.
So support for lens undistortion in projection painting is a "must have" feature.
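For reference, the radial distortion model used by Blender's motion tracking solver is a polynomial in the squared radius with coefficients k1, k2, k3. A minimal sketch of the forward model and its inversion (undistortion has no closed form, so a fixed-point iteration is a common choice; the iteration count here is an assumption, adequate for mild distortion):

```python
# Sketch of the polynomial radial distortion model (k1, k2, k3 coefficients,
# as in Blender's motion tracking solver). Coordinates are normalized so
# that (0, 0) is the optical center.

def distort(x, y, k1, k2, k3):
    """Forward model: undistorted -> distorted normalized coordinates."""
    r2 = x * x + y * y
    f = 1.0 + r2 * (k1 + r2 * (k2 + r2 * k3))
    return x * f, y * f

def undistort(xd, yd, k1, k2, k3, iterations=20):
    """Invert the forward model by fixed-point iteration: repeatedly divide
    the distorted point by the distortion factor at the current estimate."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        f = 1.0 + r2 * (k1 + r2 * (k2 + r2 * k3))
        x, y = xd / f, yd / f
    return x, y

# Round trip: undistorting a distorted point recovers the original.
k1, k2, k3 = -0.1, 0.01, 0.0
xd, yd = distort(0.4, 0.3, k1, k2, k3)
xu, yu = undistort(xd, yd, k1, k2, k3)
print(round(xu, 6), round(yu, 6))  # 0.4 0.3
```

Evaluating this model on the fly during painting is what would make pre-exported undistorted copies of every source image unnecessary.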
- Matte painting/VFX:
This group can combine the workflows and issues of all the groups mentioned above. But in addition, this group often works with video sources, which means hundreds or thousands of frames that can also have real lens distortion, or even rolling shutter issues (visible as a skew deformation of the image).
The 3D models can be a combination of low-polygon models, blocks, or n-gons used to quickly build the different distance planes of a scene.
And all of this, in production or sketching time, requires quick integration with projection mapping painting.
Current Blender workflow limitations and issues
Summarizing the above, the limitations of Blender are:
Blender currently has a way to do camera projection painting, using the "Clone" brush and the "UV Project" modifier. This method is quite labor-intensive: the user has to set a lot of options even just to change the camera from which the projection acts, which negatively affects the speed of the workflow. The main disadvantage of this method is its requirement on geometry density: low-poly models (for example, after retopology) show unwanted distortions when painting. Another extra step in this process is exporting undistorted images/movie clips; datasets can be large enough that this creates a lot of unnecessary files. It is also desirable to have control over brightness/contrast, etc., which this method does not allow.
User workflow expectations
The ultimate goal of this project is to build a workflow that eliminates these shortcomings:
It is assumed that the user has a scene with mesh objects, created by themselves or imported from third-party software, as well as a number of cameras and the original images/movie clips.
- Automatically attach images/movie clips to specific cameras; they typically have matching names regardless of the software used. It should also be possible to do this manually.
- Distortion parameters can be obtained through the Movie Tracking solver, but it should also be possible to set the parameters separately.
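The automatic attachment step can be illustrated outside of Blender. Photogrammetry tools typically name the reconstructed camera after the source file (camera "IMG_0042" for file "IMG_0042.jpg"), so matching by file-name stem covers the common case, with manual assignment as the fallback. A sketch (the function name and the case-insensitive matching rule are assumptions for illustration):

```python
# Sketch of automatic camera <-> image matching by name stem.
from pathlib import Path

def match_images_to_cameras(camera_names, image_paths):
    """Return {camera_name: image_path} for every camera whose name equals
    an image file's stem (case-insensitive). Unmatched cameras are left out
    for manual assignment."""
    by_stem = {Path(p).stem.lower(): p for p in image_paths}
    return {name: by_stem[name.lower()]
            for name in camera_names if name.lower() in by_stem}

cameras = ["IMG_0042", "IMG_0043", "FreeCam"]
images = ["/shoot/IMG_0042.jpg", "/shoot/IMG_0043.jpg", "/shoot/IMG_0044.jpg"]
matches = match_images_to_cameras(cameras, images)
print(matches)
# {'IMG_0042': '/shoot/IMG_0042.jpg', 'IMG_0043': '/shoot/IMG_0043.jpg'}
```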
Canvas texture correction/painting process
It is assumed that the user is in Texture Paint mode.
- Select a camera. It is assumed that when you select a camera, the corresponding image/movie clip will also be selected for painting.
- Paint. It is assumed that texture projection will occur from the camera, with distortion correction according to the distortion parameters.
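Conceptually, the sampling step behind this looks as follows: each surface point hit by the brush is projected into the active camera, the distortion model is applied (the source image is distorted, so the forward model is used), and the result is converted to pixel coordinates for the color lookup. A sketch assuming a simple pinhole camera and a single radial coefficient `k1` for brevity (in practice this would run on the GPU with the full parameter set):

```python
# Sketch: surface point -> distorted pixel coordinates in the source image.

def camera_to_pixel(point, focal, k1, width, height):
    """Project a camera-space 3D point to pixel coordinates in the
    (distorted) source image; returns None for points behind the camera."""
    x, y, z = point
    if z <= 0.0:
        return None  # behind the camera, nothing to sample
    # Pinhole projection to normalized image coordinates.
    xn, yn = focal * x / z, focal * y / z
    # Forward radial distortion: the painting samples the distorted source
    # image, so the forward model (not undistortion) is applied here.
    r2 = xn * xn + yn * yn
    f = 1.0 + k1 * r2
    xd, yd = xn * f, yn * f
    # Normalized coordinates (-1..1 across the frame) to pixels.
    u = (xd * 0.5 + 0.5) * width
    v = (yd * 0.5 + 0.5) * height
    return u, v

# A point on the optical axis maps to the image center.
print(camera_to_pixel((0.0, 0.0, 2.0), focal=1.0, k1=-0.05,
                      width=1920, height=1080))  # (960.0, 540.0)
```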
More globally, a major refactoring of the Texture Paint module is needed (like T73935); all users expect a smooth workflow with high-polygon objects and high-resolution images/movie clips.
Also, an important part is to give the user a visual representation in the viewport of exactly how the texture will project onto the object and from which camera the projection acts (the most obvious choice is to project from the active camera of the scene).
What is described here are only suggestions; each of them can be an open question.
- For both the "Movie Tracking" and "Texture Paint" modules, the image/movie clip as well as the distortion parameters should be part of a particular camera (this should be coordinated with the Movie Tracking module). Regarding the Brush Management project: the described workflows have a tight relationship with a specific scene and the objects in it, hence from the point of view of a photogrammetrist or VFX artist, it is expected that the work done at the scene setup stage is saved specifically for this scene and its objects. Therefore, it is important to understand that the camera from which the projection acts and its distortion parameters should not be tied to a specific brush. Since the dataset of images/movie clips can be large and is also associated with specific cameras of a specific scene, it likewise cannot be tied to the brush.
- Automate the process of attaching images/movie clips to specific cameras. This is most often possible by matching the names of the camera and the image/movie clip file.
- Add a "Camera" brush texture projection mode. It should use the data (image/movie clip) from the active scene camera as the brush texture, and also to determine the current distortion parameters.
- Refactor the brush preview. At the moment it is not informative and gives a limited presentation of the end result (for example, the "3D" texture map mode has had no preview for many years). In the context of this project, the preview should be an overlay that takes the geometry of the object into account and gives an accurate representation from any view.
All of this, but more visually
Previously, we (@Vlad Kuzmin (Ssh4), @Ivan Perevala (ivpe)) implemented this as an add-on. It was more photogrammetry-oriented, so the UI/UX should differ in some cases. This short video describes the basic principles of the workflow with a large dataset of raw (distorted) images and the viewport preview. A non-commercial scene is used.
This section will proceed only after the final design is ready:
Open questions regarding the suggestions below. Everyone can have their own opinion, so we need a general solution approved by everyone. Therefore, I leave a link to each original post, as well as its summary. After the design document update, some suggestions were included in it and some are still questionable:
- Original post by @Brecht Van Lommel (brecht), original post by @Sergey Sharybin (sergey): Where should the distortion parameters of images/movie clips be stored, as well as the relationship between a specific camera and a specific image, given the current Movie Tracking design, and in the case of modifying it to be more unified?
- Original post by @Brecht Van Lommel (brecht): If the camera stores an image or movie clip as a background image, should we have an option, enabled by default, to automatically switch the brush texture image/movie? Or should that be the only way?
- Original post by @Sebastian Koenig (sebastian_k): Is a camera background image enough for the visual preview, or should it be shader-based?
- Original post by @Sebastian Koenig (sebastian_k): VFX scenes already have background movie clips set on cameras. This could be used for any workflow as well, I think. In terms of the current project, it means storing the image or movie clip relative to a specific camera. But what if there is more than one background image on a single camera? In that case the background images/movies are blended, and the old principle "you get what you see" should work here too. So how can we handle this? In my opinion, the simple way is to remove the list of background images and leave only one. As I remember, in older releases (< 2.8x) this was a list used for different view angles (left, right, top, camera). So I think this is also a question of Blender's overall design.
- Original post by @Vlad Kuzmin (Ssh4): Should camera intrinsics be animatable, to support videos captured with zoom lenses or with optical or sensor stabilization?
- Refactoring of the Texture Paint mode. Is it planned, and is it worth counting on help with its implementation?
A small survey of YouTube videos about the current limitations:
CGMatter: Blender 2.8 Panorama stitching projection painting (part 2)
CGMatter: 2D To 3D
CGMatter: Realistic 3D? EASY
IanHubert: Wild Tricks for Greenscreen in Blender