
Virtual Reality (XR/VR/AR/MR)
Confirmed, Normal · Public · To Do

Assigned To: (none)
Authored By: Dalai Felinto (dfelinto), Aug 21 2019, 4:30 PM


NOTE: This section is a WIP and will be updated shortly.



Commissioner: ?
Project leader:
Project members:
Big picture:


Use cases:


Engineer plan: -

Work plan

Milestone 1 - Scene Inspection T71347
Time estimate: Mostly done, 2-3 weeks of work left (state: mid-February 2020)

Milestone 2 - Continuous Immersive Drawing T71348
Time estimate: ?


Notes: -

The description here and in the sub-tasks is based on a design document by @Julian Eisel (Severin): VR/XR Big Picture Kickoff. Please don't take it as an official document.

Goals and Overall Vision

XR enables an entirely new way to work with computer generated 3D worlds. Instead of trying to take the UIs we’ve already implemented for traditional monitor use and make them work in XR, we should establish an experience on entirely new grounds. Otherwise, what we’ll end up with is just a different, more difficult way to use Blender.
So to enter the new immersive world, we should think about which workflows could benefit the most from XR, and carefully craft a new, improved experience for them. We shouldn't try to enable huge amounts of features for XR, risking a bad outcome - or even total failure - but find out which features matter most and focus on them first.

Long term, the experience can then be enriched further, with more features added as smaller, more concise projects. Eventually, XR would allow creating 3D content with unprecedented interactivity, with the entirety of Blender's content creation capabilities available in an immersive 3D experience. However, this should remain the long-term vision for XR, not the goal of this initial project.


All VR support in Blender will be based on OpenXR, the new specification for VR, AR, MR, etc. - or XR for short. It is created by the Khronos Group and has a huge number of supporters from across the VR industry. It's likely to become the standard for VR/AR/MR/… (XR).

OpenXR splits the workload between the application (Blender) and the OpenXR runtime. The runtime handles all the device-specific details, provides many common features (e.g. time warping) and can add extensions to the OpenXR API. So the devices Blender will support are defined by the available OpenXR (compatible) runtimes. Currently:

  • Windows Mixed Reality
  • Oculus (though, not officially released)
  • Monado (FOSS Linux OpenXR runtime)
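To illustrate the division of labor described above, here is a schematic Python sketch - not the real OpenXR API; `FakeRuntime`, `wait_frame`, and `end_frame` are hypothetical stand-ins. The application renders the scene; the runtime paces frames to the display and presents the finished layers:

```python
class FakeRuntime:
    """Stands in for an OpenXR runtime: it owns frame timing and
    receives the finished layers for presentation on the HMD."""
    def __init__(self):
        self.frame = 0
        self.presented = []

    def wait_frame(self):
        # The runtime paces the application to the display's refresh rate
        # and predicts when the frame will actually be shown.
        self.frame += 1
        return {"predicted_display_time": self.frame * 11.1}  # ~90 Hz, in ms

    def end_frame(self, layers):
        # The runtime composites the layers, applies reprojection /
        # time warping, and presents the result on the device.
        self.presented.append(layers)


def app_frame(runtime, render_scene):
    """The application's side of one frame: wait, render, submit."""
    frame_state = runtime.wait_frame()
    layers = render_scene(frame_state["predicted_display_time"])
    runtime.end_frame(layers)


runtime = FakeRuntime()
for _ in range(3):
    app_frame(runtime, lambda t: [f"eye-buffers@{t:.1f}ms"])
print(len(runtime.presented))  # → 3
```

The key point the sketch captures is that the application never decides when to display: it asks the runtime for timing every frame and hands the rendered layers back, which is what lets one Blender build work against any compliant runtime.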

For information on how to test our current OpenXR driven implementation, see

State of the Project

While there are multiple initiatives related to Blender and VR (e.g. MPX and Marui's Blender XR), the first important milestone for XR support in mainline Blender is being finished with an ongoing Google Summer of Code project. It aims to bring stable and well-performing OpenXR-based VR rendering support into the core of Blender. It's not supposed to enable any interaction; it's solely focused on the foundation of HMD support. For more info, refer to T71365. Further work is being done to complete the first milestone: T71347: Virtual Reality - Milestone 1 - Scene Inspection.

The one important part that’s missing is controller support, or more precisely, input and haptics support. Although we can do some experiments, it would be helpful to first get a better idea of how VR interaction should work eventually, before much effort is put into supporting this.

This is the foundation that we can build on. It should allow people with experience in the field to join efforts to bring rich immersive experiences to Blender.

Development Process

The proposal is to follow a use-case driven development process. That means we define a number (say 10-20) of specific use cases. These should be chosen wisely, based on what we think has the most potential with XR.
First, just vaguely describe the requirements the system has for a use case; a bit later on, define what tools and options are needed to get the use case done (possibly by a specific persona).

On top of that, the process should be iterative. That means we define short-term goals for an iteration, work on them, and evaluate afterwards before moving into the next iteration. The iterations may include implementation work. So we don't do waterfall-like "design first", but there should be enough design work done before the first implementations are made.

Any implementation should be driven by a use-case (or multiple) that the team agrees is drafted out well enough.

Event Timeline

Dalai Felinto (dfelinto) lowered the priority of this task from 90 to Normal. Aug 21 2019, 4:30 PM
Dalai Felinto (dfelinto) created this task.

Dalai and I agreed on merging T47899 into this. The old task can still be referred to, but development took new paths that are better discussed here and in the sub-tasks. We'd also like to discuss things on a broader level - the "big picture" for VR in Blender.
Will publish more info soon.

I think having collapsible 'shelves' which hold icons or folders is the way to go.

One could grab an 'icon' that is a 2D thumbnail on a shelf -> pull it into VR, where it becomes a 3D icon,
set it on a desk -> inspect a 'folder' inside it containing all its attributes
(brush radius, feather, color, etc.)

Further down the line, the actual Python running the object will be visible inside one of these 'nests' or logic noodles.

This way tools can be shared in a .blend
(some work will need to be done on infrastructure later to make more flexible 'chunks' used to build tools in Python calling compiled code)

Maybe also have 'active tool button bindings' in a folder*
(like when a tool is 'bound' to a VR handle, the tool itself has the shortcuts in it for how to interact with it)

Ben (Mr_Squarepeg) added a comment. Edited Nov 5 2019, 1:42 AM

On the topic of UI:

This has been solved by tools such as Tvori and AnimVR. I would recommend looking at what they have done to solve this issue. Things that would be helpful include the ability to pin, scale uniformly, and move panels such as the timeline and the Asset Management window in 3D space.

Now on the topic of animating rigged characters: it would be great to be able to pose and animate characters in VR using motion controllers.

Something like this example would be most welcome.

Hope this post has been helpful in some way.

@Ben (Mr_Squarepeg) the idea is to support different use cases, and different UI/UX experiments via our Python Add-on API. This way interested developer teams can maintain complete solutions without us having to pick and choose a single definitive VR user interface.

@Dalai Felinto (dfelinto) That is understandable. I just want the default UI solution for VR, at least for animation and scene layout, to be awesome! Which I have no doubt will be accomplished. Hence why I am making these suggestions early on.

@Jacob Merrill (blueprintrandom) regarding gesture input:
The link you posted is something different though: there, "gesture" means estimating a hand shape from a picture (for example, making a "peace" sign).
But in VR you usually don't have cameras tracking your hands, and gestures are more like "drawing a 3D shape".
A more closely related example would be the gesture feature in Maya / MARUI:

As far as I am aware, the Oculus Quest does not have OpenXR support yet, which is what Blender's VR support is based on. That said, it would be better to use the controllers included with the Quest and Rift S because they are more accurate at this time, use the Leap Motion, or use the Valve Index controllers when their OpenXR runtime is released.

This task is used as the parent of all other VR tasks, which are planned to be implemented by multiple teams. There's no reason to have it assigned to anybody.

Since I haven't seen it discussed, I should add that live capture of head, controllers, and additional trackers is very useful for animating bipeds. A simple IK setup and basic python script could be made by the user, so long as additional trackers are accessible within the blender XR system.

"Since I haven't seen it discussed, I should add that live capture of head, controllers, and additional trackers is very useful for animating bipeds. A simple IK setup and basic python script could be made by the user, so long as additional trackers are accessible within the blender XR system."

  • yeah, animating agents in real time may be kinda neat too - like Avatar.

Glycon is software a friend of mine makes that does this with the same tracking hardware (paid).
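The idea discussed above - driving an IK rig from live head/controller/tracker capture - amounts to copying each tracker's world pose onto an IK target every frame. Below is a minimal, Blender-agnostic sketch of that retargeting math; all names are hypothetical, and inside Blender this would run in a handler reading poses from the XR session state:

```python
def cross(a, b):
    """3D cross product of two vectors given as 3-tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    r = (x, y, z)
    # v' = v + w*t + r x t, with t = 2 * (r x v)
    t = tuple(2.0 * c for c in cross(r, v))
    u = cross(r, t)
    return tuple(v[i] + w * t[i] + u[i] for i in range(3))

def tracker_to_ik_target(tracker_pos, tracker_quat, rest_offset):
    """World-space IK target position: the tracker position plus a
    rest-pose offset expressed in the tracker's current orientation."""
    off = quat_rotate(tracker_quat, rest_offset)
    return tuple(p + o for p, o in zip(tracker_pos, off))

# Example: a head tracker at 1.7 m, with a small downward offset
# from the tracker to the rig's neck target.
identity = (1.0, 0.0, 0.0, 0.0)
target = tracker_to_ik_target((0.0, 0.0, 1.7), identity, (0.0, 0.0, -0.1))
```

With something like this, a user-made Python script only needs to evaluate `tracker_to_ik_target` per tracker per frame and write the results onto the IK target bones, exactly the kind of setup the comment describes.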