Status:
Team
Commissioner: ?
Project leader:
Project members:
Big picture:
Description
Use cases:
Design:
Engineer plan: -
Work plan
Milestone 1 - Scene Inspection T71347
Time estimate: Mostly done, 2-3 weeks of work left (state: mid-February 2020)
Milestone 2 - Continuous Immersive Drawing T71348
Time estimate: ?
Later
Notes: -
The description here and in sub-tasks is based on a design document by @Julian Eisel (Severin): VR/XR Big Picture Kickoff. Please don't take it as an official document.
Goals and Overall Vision
XR enables an entirely new way to work with computer-generated 3D worlds. Instead of taking the UIs we've already implemented for traditional monitor use and making them work in XR, we should establish an experience on entirely new grounds. Otherwise, what we'll end up with is just a different, more difficult way to use Blender.
So to enter the new immersive world, we should think about which workflows could benefit the most from XR, and carefully craft a new, improved experience for them. Rather than trying to enable huge numbers of features for XR, risking a bad outcome - or even total failure - we should find out which features matter most and focus on them first.
Long term, the experience can then be enriched further, with more features added as smaller, more concise projects. Eventually, XR could allow creating 3D content with unprecedented interactivity: the entirety of Blender's content creation capabilities available in an immersive 3D experience. However, this should be the long-term vision for XR, not the goal of this initial project.
OpenXR
All VR support in Blender will be based on OpenXR, the new specification for VR, AR, MR, etc. - or, for short: XR. It is created by the Khronos Group and has a huge number of supporters from all over the VR industry. It is likely going to be the standard for VR/AR/MR/… (XR).
OpenXR splits the workload between the application (Blender) and the OpenXR runtime. The runtime handles all device-specific details, provides many common features (e.g. time warping) and can add extensions to the OpenXR API. So the devices Blender will support are defined by the available OpenXR (compatible) runtimes; see the sketch after the list below. Currently:
- Windows Mixed Reality
- Oculus (though not officially released)
- Monado (FOSS Linux OpenXR runtime)
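To illustrate the application/runtime split, here is a minimal sketch of initializing OpenXR from plain C against openxr.h. This is illustrative only, not Blender's actual implementation; the application merely describes itself, and everything device-specific is the runtime's job:

```c
#include <openxr/openxr.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
  /* The application only describes itself; the runtime (Windows Mixed
   * Reality, Oculus, Monado, ...) handles all device-specific work. */
  XrInstanceCreateInfo create_info = {XR_TYPE_INSTANCE_CREATE_INFO};
  strncpy(create_info.applicationInfo.applicationName, "Blender",
          XR_MAX_APPLICATION_NAME_SIZE);
  create_info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

  XrInstance instance = XR_NULL_HANDLE;
  if (XR_FAILED(xrCreateInstance(&create_info, &instance))) {
    /* No runtime installed, or it rejected the request. */
    fprintf(stderr, "No OpenXR runtime available\n");
    return 1;
  }

  /* Ask the runtime which system (e.g. which HMD) it drives; the
   * application never talks to the hardware directly. */
  XrSystemGetInfo system_info = {XR_TYPE_SYSTEM_GET_INFO};
  system_info.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;
  XrSystemId system_id = XR_NULL_SYSTEM_ID;
  xrGetSystem(instance, &system_info, &system_id);

  xrDestroyInstance(instance);
  return 0;
}
```

Everything behind xrCreateInstance() - device enumeration, compositing, time warping - belongs to the runtime, which is why the set of supported devices is defined by the available runtimes rather than by Blender itself.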
For information on how to test our current OpenXR-driven implementation, see https://wiki.blender.org/wiki/User:Severin/GSoC-2019/How_to_Test
State of the Project
While there are multiple initiatives related to Blender and VR (e.g. MPX and Marui's Blender XR), the first important milestone for XR support in mainline Blender is being completed through an ongoing Google Summer of Code project. It aims to bring stable, well-performing OpenXR-based VR rendering support into the core of Blender. It is not supposed to enable any interaction; it is solely focused on the foundation of HMD support. For more info, refer to T71365. Further work is being done to complete the first milestone: T71347: Virtual Reality - Milestone 1 - Scene Inspection.
The one important part that's missing is controller support, or more precisely, input and haptics support. Although we can do some experiments, it would be helpful to first get a better idea of how VR interaction should eventually work before putting much effort into supporting this. A sketch of OpenXR's action-based input API follows below.
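For reference, OpenXR exposes input and haptics through an abstract action system rather than raw button events. A hedged sketch of what declaring such actions could look like follows; the action set, the "draw" and "feedback" action names, and the helper function are hypothetical, not part of any agreed design:

```c
#include <openxr/openxr.h>
#include <string.h>

/* Hypothetical helper: declares one input and one haptic action the way
 * Blender might, once input support is tackled. */
static void create_draw_actions(XrInstance instance, XrActionSet *r_action_set,
                                XrAction *r_draw, XrAction *r_haptic)
{
  XrActionSetCreateInfo set_info = {XR_TYPE_ACTION_SET_CREATE_INFO};
  strncpy(set_info.actionSetName, "blender_xr", XR_MAX_ACTION_SET_NAME_SIZE);
  strncpy(set_info.localizedActionSetName, "Blender XR",
          XR_MAX_LOCALIZED_ACTION_SET_NAME_SIZE);
  xrCreateActionSet(instance, &set_info, r_action_set);

  /* Actions are abstract ("draw"); the runtime binds them to whatever
   * physical controls the user's hardware actually has. */
  XrActionCreateInfo action_info = {XR_TYPE_ACTION_CREATE_INFO};
  action_info.actionType = XR_ACTION_TYPE_BOOLEAN_INPUT;
  strncpy(action_info.actionName, "draw", XR_MAX_ACTION_NAME_SIZE);
  strncpy(action_info.localizedActionName, "Draw",
          XR_MAX_LOCALIZED_ACTION_NAME_SIZE);
  xrCreateAction(*r_action_set, &action_info, r_draw);

  /* Haptics are output actions on the same set. */
  action_info.actionType = XR_ACTION_TYPE_VIBRATION_OUTPUT;
  strncpy(action_info.actionName, "feedback", XR_MAX_ACTION_NAME_SIZE);
  strncpy(action_info.localizedActionName, "Feedback",
          XR_MAX_LOCALIZED_ACTION_NAME_SIZE);
  xrCreateAction(*r_action_set, &action_info, r_haptic);
}
```

Because the runtime maps abstract actions to physical controls itself, the open design question is less about supporting specific devices and more about which actions Blender's XR interaction should expose in the first place.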
This is the foundation we can build on. It should allow people with experience in the field to join efforts to bring rich immersive experiences to Blender.
Development Process
The proposal is to follow a use-case-driven development process. That means we define a number of specific use cases (say 10-20). These should be chosen wisely, based on what we think has the most potential in XR.
At first, each use case is only roughly described in terms of the requirements it puts on the system; later on, we define which tools and options are needed to get the use case done (possibly by a specific persona).
On top of that, the process should be iterative. That means we define short-term goals for an iteration, work on them, and evaluate afterwards before getting into the next iteration. Iterations may include implementation work. So rather than a waterfall-like "design first" approach, there should simply be enough design work done before the first implementations are made.
Any implementation should be driven by one or more use cases that the team agrees are drafted out well enough.