Split of projection matrix for left and right eye #29161

Closed
opened 2011-11-05 06:36:49 +01:00 by Damien Touraine · 16 comments

Currently, there is only one projection matrix inside Blender. For stereoscopic rendering, this matrix is adapted by a single eye shift.
In specific cases (CAVE Automatic Virtual Environment), the stereoscopic projection must instead be adapted to the position of the user's eyes relative to the scene. For instance, the user can walk in front of the screens and turn around.
The default stereoscopy handling in the BGE is not enough for such a case, so we suggest adding two projection matrices to a given camera: one per eye. Both matrices can be addressed through the Python API, which makes it much simpler to adapt the stereoscopy from a Python "plugin".

The patch we propose implements this feature. Along with the two matrices, it adds an attribute to the camera holding the eye currently being rendered. Every call to the camera's projection matrix then returns the matrix of the active eye. Before each render (inside KX_KetsjiEngine::RenderFrame), the renderer sets the current eye, so every rendering optimization (clipping, scene management, ...) is computed with respect to that eye.
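For illustration, a minimal Python sketch of how a game script might drive such per-eye matrices. The attribute names projection_matrix_left / projection_matrix_right follow the naming discussed later in this thread; they belong to the proposed patch and do not exist in an unpatched Blender.

```python
# Minimal sketch of the proposed per-eye API (assumed attribute names
# projection_matrix_left / projection_matrix_right from the patch; they do
# not exist in stock Blender).
import bge
from mathutils import Matrix

cam = bge.logic.getCurrentScene().active_camera

# Hypothetical per-wall, per-eye projections computed elsewhere
# (e.g. from the tracked head position and the CAVE screen geometry).
cam.projection_matrix_left = Matrix.Identity(4)    # placeholder matrices
cam.projection_matrix_right = Matrix.Identity(4)

# With the patch, KX_KetsjiEngine::RenderFrame selects the active eye before
# each pass, and cam.projection_matrix then resolves to the matching matrix.
```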


Changed status to: 'Open'


Actually, we may also have to split the modelview matrix, because, among other things, OpenGL lighting is computed from the modelview matrix, not from the projection matrix.
So if the disparity between the eyes is deliberately large (or if the two points of view are genuinely different), there may be artifacts in the lighting computation.
But, contrary to the projection matrix, the model transformation comes from KX_GameObject, and I don't know a smart way to split the modelview matrix.
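To make the dependency concrete, here is a small sketch (plain mathutils, not BGE or patch code) showing that fixed-function lighting quantities live in eye space: light positions are transformed by the modelview matrix, so two different per-eye modelviews give slightly different eye-space light positions. The offsets and positions below are arbitrary illustration values.

```python
# Sketch: fixed-function OpenGL transforms light positions by the current
# modelview matrix into eye space, so lighting depends on the modelview,
# not on the projection.  Values are arbitrary.
from mathutils import Matrix, Vector

light_world = Vector((2.0, 1.0, 3.0, 1.0))         # light position, world space

mv_left = Matrix.Translation((-0.03, 0.0, 0.0))    # per-eye modelview offsets
mv_right = Matrix.Translation((+0.03, 0.0, 0.0))   # (e.g. half the eye separation)

light_eye_left = mv_left * light_world             # eye-space light positions
light_eye_right = mv_right * light_world           # differ between the eyes
print(light_eye_left, light_eye_right)             # ('*' is the 2.7x-era operator)
```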


Assigning to myself. I will try to look at it next week.
You may be right on the model matrix issue.


Concerning the model matrix, we could add "post camera position" left and right matrices. These would normally be the identity (unless rendering in stereo). The matrix of the current eye would then be left-multiplied with the matrix generated from the camera position in KX_Camera::GetModelviewMatrix().
But this is only effective if all scene management (clipping planes, bounding spheres, and so on) relies on the KX_Camera::GetModelviewMatrix() method to optimize the rendering. I am not confident enough with Blender to be sure that this works.
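A rough mathutils sketch of that composition, under my own assumption (not the patch's) that "left-multiplied" means the per-eye offset is applied on top of the camera's own view matrix:

```python
# Sketch of the "post camera position" idea: a per-eye offset matrix is
# composed with the camera's modelview.  The composition order below is an
# assumption (offset applied after the camera transform); the patch itself
# may differ.
from mathutils import Matrix

def eye_modelview(camera_modelview, eye_offset=Matrix.Identity(4)):
    """Return the modelview for one eye.

    camera_modelview -- 4x4 view matrix built from the camera position
    eye_offset       -- identity in mono; a small translation (or a full
                        head-tracking transform) in stereo/CAVE mode
    """
    return eye_offset * camera_modelview   # 2.7x-era mathutils '*' operator

# Example: shift the left eye by half the eye separation along camera X
# (the scene moves right when the camera moves left).
left_mv = eye_modelview(Matrix.Identity(4),
                        Matrix.Translation((+0.03, 0.0, 0.0)))
```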


Following up on my previous message, I studied a smart way to manage the stereoscopic modelview update through the Python API.
The solution I found relies on moving the computation of the stereo modelview from RAS_OpenGLRasterizer to the camera.
The BGE then asks the camera for the stereoscopic point of view (the camera owns most of the data relevant to computing the stereo viewpoint). If the user has specified one through the Python API, the camera provides that; otherwise it returns a "default" modelview shift, computed the same way RAS_OpenGLRasterizer::SetViewMatrix did before.
Version 2 of the patch (see attachments) implements this.


Following up on my previous message, I studied a smart way to manage the stereoscopic modelview update through the Python API.
The solution I found relies on transferring the computation of the stereo "shift" from RAS_OpenGLRasterizer to the camera.
This requires small updates to other components (ImageRender, Dome, Light), because RAS_OpenGLRasterizer::setViewMatrix(...) is replaced by KX_Camera::GetStereoMatrix(float eyeSeparation) and RAS_OpenGLRasterizer::SetModelviewMatrix() (the two methods are a split of the two functionalities of RAS_OpenGLRasterizer::setViewMatrix(...)).

With version 2 of the patch (see attachments), the BGE asks the camera for the stereoscopic point of view (the camera owns most of the data relevant to computing the stereo viewpoint). If the user has specified one through the Python API, the camera provides that; otherwise it returns a "default" modelview shift (the code transferred from what RAS_OpenGLRasterizer::SetViewMatrix did before).
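For reference, the "default" shift for the parallel-axis case amounts to translating the view by half the eye separation along the camera's X axis. A rough Python rendering of that idea (not the patch code, and ignoring the toe-in stereo mode) could look like this:

```python
# Rough sketch of a "default" per-eye stereo shift matrix, i.e. what a
# KX_Camera::GetStereoMatrix(eyeSeparation)-style method could return when
# the user has not supplied a custom matrix.  Parallel-axis stereo only;
# the real rasterizer code also handles a toe-in mode.
from mathutils import Matrix

LEFT, RIGHT = 0, 1

def default_stereo_shift(eye_separation, eye):
    """Translation along camera X by +/- half the eye separation."""
    half = eye_separation * 0.5
    offset = +half if eye == LEFT else -half   # scene moves right for the left eye
    return Matrix.Translation((offset, 0.0, 0.0))

# The eye modelview is then this shift composed with the camera view matrix,
# exactly where setViewMatrix used to apply it (2.7x-era '*' operator):
# modelview_left = default_stereo_shift(0.06, LEFT) * camera_view_matrix
```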


Hi Damien, I have some questions about your solution and an overall impression.

I think this is a much too complicated route when it comes to the user.

Instead of adding Python calls for left/right/middle projection/position matrices, why not add a camera.frustrum_shift and do all the calculation internally? This should work even for non-stereo caves (like Jorge's), although for those caves camera.projection_matrix was already enough.

Also, why have a middle projection matrix?
The way I see it, you may need an option to set camera.frustrum_orientation too, although that could also be achieved by rotating the camera in Blender and walking 'side-ways'.

Unless you are planning to implement a stereo method different from the 'zero parallax' one, I think you don't need to bother the user with so many details. What do you think?
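To illustrate what "doing the calculation internally" from a single shift could mean, here is a sketch of an off-axis (asymmetric) frustum built from a horizontal shift of the near-plane window. camera.frustrum_shift is only the property name proposed above, not an existing attribute, and the numeric values are illustrative.

```python
# Sketch: building an asymmetric (off-axis) projection from a horizontal
# frustum shift, in the spirit of the proposed camera.frustrum_shift.
# Standard OpenGL glFrustum() layout; values are illustrative.
from mathutils import Matrix

def frustum_matrix(left, right, bottom, top, near, far):
    return Matrix(((2 * near / (right - left), 0, (right + left) / (right - left), 0),
                   (0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0),
                   (0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)),
                   (0, 0, -1, 0)))

def shifted_projection(width, height, near, far, shift):
    """Symmetric frustum of size width x height at the near plane, with its
    left/right planes shifted horizontally by `shift` (off-axis stereo)."""
    return frustum_matrix(-width / 2 + shift, width / 2 + shift,
                          -height / 2, height / 2, near, far)

left_eye_proj = shifted_projection(0.4, 0.3, 0.1, 100.0, shift=+0.005)
right_eye_proj = shifted_projection(0.4, 0.3, 0.1, 100.0, shift=-0.005)
```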


Hi Dalai,
I agree with you: it is a little bit complicated. But a CAVE rendering system is very complicated, too. Remember that in CAVE rendering the screens are contiguous and only the position and orientation of the screens (within the whole CAVE setup) are used to compute the projection and modelview matrices. We must not rely on "classical" camera parameters such as the lens. For instance, "zooming" (changing the focal distance to magnify or shrink the subject) is not possible; you must "scale" the scene instead. So if you compute the projection internally from a given stereo shift, you may introduce overlaps or holes at the junctions between screens, where the image must be continuous, because a single Blender rendering node may not be aware of the other screens.

So we have three solutions:

  • As you suggest, we add only camera.frustrum_shift and camera.frustrum_orientation and compute all the correct matrices internally. But later we may hit a specific case where this formalism is not valid, and a developer would then have to send another patch to handle their own constraints. Moreover, the multiplication order of camera.frustrum_shift and camera.frustrum_orientation could lead to mistakes.
  • We patch Blender to let a Python API developer specify their own modelview AND projection matrices however they want. The developer then has full control of the OpenGL rendering pipeline; the only restriction is to specify the "points of view" relative to the camera position.
  • We include all the calculation, including screen coordinates inside rea…

In a CAVE system, the projection screens are not all in the same plane; most CAVEs have perpendicular screens. Moreover, the users' positions are tracked so that the stereo can be adapted to the "real" position of their eyes.
My idea with the BlenderCAVE I am working on is to fix the position of the Blender camera in the CAVE reference frame (for instance, the camera may look at the "main" screen of the Virtual Environment). Each screen, which is defined in the real-world reference frame (the same as the CAVE one), can then be defined relative to the Blender camera, and for a given rendering Blender window we apply a transformation matrix to adapt the camera to the attached screen. A sketch of this per-screen computation follows this comment.

The middle matrix was there for compatibility with the behaviour before the patch is applied (so that projection_matrix is not mixed with the new projection_matrix_left/right/middle). But I agree: we may remove the middle matrices. Implicitly, when the user accesses the default matrix (stereo_position_matrix or projection_matrix), we work on the "middle" matrix (i.e. the matrix that is applied when the BGE is not in stereo).

BlenderCAVE, which will include all these features, will be released soon as open-source software, so a sample showing how to integrate a CAVE through these complex projection matrices will be provided. We may integrate it into the Blender Python API samples shipped with Blender.

Actually, I have a problem with the current Python API: with a default 2.6 Blender (not modified by our patch), whenever I try to apply a scale before the frustum matrix, the scene itself does not scale. Is there some scene management that adapts the size of the objects to "perfectly" fit inside the window? If so, is there a way to deactivate it?
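As an illustration of the per-screen computation described above, here is a minimal sketch (my own, not part of the patch or of BlenderCAVE) of the usual way to derive an off-axis frustum and view orientation from a wall defined by three corners in the CAVE reference frame and a tracked eye position. The wall corners, eye position, and near distance are arbitrary example values; the resulting bounds feed a glFrustum-style matrix such as the frustum_matrix() sketched above.

```python
# Sketch (illustration only): per-wall off-axis frustum bounds and view
# matrix from a screen defined in the CAVE reference frame and a tracked
# eye position, following the usual "generalized perspective projection"
# construction.  All numeric values are arbitrary examples.
from mathutils import Matrix, Vector

def wall_view_and_frustum(pa, pb, pc, eye, near):
    """pa, pb, pc: lower-left, lower-right, upper-left corners of the wall
    (CAVE/world coordinates); eye: tracked eye position; near: near plane."""
    vr = (pb - pa).normalized()          # screen right axis
    vu = (pc - pa).normalized()          # screen up axis
    vn = vr.cross(vu).normalized()       # screen normal, towards the viewer

    va, vb, vc = pa - eye, pb - eye, pc - eye
    dist = -va.dot(vn)                   # eye-to-screen-plane distance

    left = vr.dot(va) * near / dist      # off-axis frustum bounds at the
    right = vr.dot(vb) * near / dist     # near plane (feed these into a
    bottom = vu.dot(va) * near / dist    # glFrustum-style matrix, e.g. the
    top = vu.dot(vc) * near / dist       # frustum_matrix() sketched above)

    # View matrix: rotate the world into the screen basis, then move the eye
    # to the origin (2.7x-era mathutils '*' operator).
    rot = Matrix(((vr.x, vr.y, vr.z, 0.0),
                  (vu.x, vu.y, vu.z, 0.0),
                  (vn.x, vn.y, vn.z, 0.0),
                  (0.0, 0.0, 0.0, 1.0)))
    view = rot * Matrix.Translation(-eye)
    return (left, right, bottom, top), view

# Example: a 2 m x 2 m front wall, 1 m in front of the CAVE origin,
# seen from an eye slightly to the right of the origin.
bounds, view = wall_view_and_frustum(Vector((-1.0, -1.0, -1.0)),
                                     Vector(( 1.0, -1.0, -1.0)),
                                     Vector((-1.0,  1.0, -1.0)),
                                     Vector((0.03, 0.0, 0.0)),
                                     near=0.1)
```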


Misprint in the third solution:

  • We include a specific mode that does all the calculation, including real-world screen coordinates, inside Blender. This calculation will not rely on the focal length to compute the projection matrix.

The thing about focal length is that it is a "render" parameter 'borrowed' by the GE for the stereo calculation.

I know it doesn't make sense in the GE, but since it's there (and already used by the stereo code as the distance to the zero-parallax plane) it's not a huge problem to use. But, now that I think of it, it would be a problem for caves, since you would need different "focal lengths" for each projection/wall.

So I would say that allowing custom right/left matrices may be the way to go (no middle).
Now, would you prefer to wait until after the release of BlenderCAVE to gather some user feedback?

> Actually, I have a problem with the current Python API: with a default 2.6 Blender (not modified by our patch), whenever I try to apply a scale before the frustum matrix, the scene itself does not scale. Is there some scene management that adapts the size of the objects to "perfectly" fit inside the window? If so, is there a way to deactivate it?

Do you mean scaling the objects or the OpenGL matrices? Do you have an example I can look at?


Actually, I'm thinking of using the focal length as a parameter to scale the scene, because some scenes (for instance biological or astronomical ones) do not have a human-sized scale (a few nanometers or several light-years), whereas Virtual Environments typically have human size (about two meters high). All cameras and screens will have the same focal length, but the distance between each point of view and its projection screen will differ and will depend on all the CAVE screens. Remember that in BlenderCAVE there is one Blender instance per point of view (modulo the OpenGL quad buffer).

Yes, I think the "middle matrix" will be removed. For the moment, I will wait for feedback from users of our lab and of other Virtual Environments (Jorge Gascon, Julian Adenauer) before modifying the patch and resubmitting it to this Blender patch page.
If you want, I can send you the current version, but Jorge will update his Git server (see http://www.gmrv.es/~jgascon/BlenderCave/#download) as soon as we have a first working version (there are a few data-synchronisation problems at the moment).

Concerning the scaling problem, I made a mistake: if we want to scale a scene or an object, we first need to place the reference frame at its center.


Hi, we have updated the patch to work with Blender revision 45474, so it now works with the new matrix representation.
Although the patch applies with only small offsets on revision 47598, we haven't fully tested it there: we would like to know whether this patch would be accepted before doing further testing on recent versions of Blender.

Regards,

Damien Touraine

Added subscriber: @JulianEisel


@dfelinto, shouldn't this be closed due to your multiview work?


Changed status from 'Open' to: 'Archived'


Not really because of that, but this should be closed since it's no longer needed (now that we have bge.render.getStereoscopyEye()).
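For completeness, a minimal sketch of how a per-eye projection can be chosen with that call today. The function name follows the comment above; the LEFT_EYE constant name and the precomputed matrices are assumptions/placeholders, so check the BGE API docs of your Blender version.

```python
# Minimal sketch (camera Python controller): select a per-eye projection
# using the eye query mentioned above.  The LEFT_EYE constant and the
# precomputed matrices are assumptions/placeholders.
import bge
from mathutils import Matrix

cam = bge.logic.getCurrentScene().active_camera

left_projection = Matrix.Identity(4)    # placeholder per-eye projections
right_projection = Matrix.Identity(4)   # (e.g. per-wall CAVE matrices)

if bge.render.getStereoscopyEye() == bge.render.LEFT_EYE:
    cam.projection_matrix = left_projection
else:
    cam.projection_matrix = right_projection
```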
