
VR design / usability


The following lists a number of higher-level, usability-related questions that still have to be figured out.

General Workflow

How will navigation in the virtual world happen?
Early on, we should decide how navigation in the VR view will work. It should probably be usable with and without handheld controllers.
Other VR engines have already implemented their solutions, which may be useful as a reference.
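One common navigation scheme in other VR applications is teleporting: the user points a controller ray at the floor and jumps to the intersection point. A minimal sketch of that idea, assuming a Z-up world (as in Blender) and a flat floor at z = 0 (the function name and conventions are invented for illustration, not an existing Blender API):

```python
def teleport_target(ray_origin, ray_dir):
    """Intersect a controller ray with the floor plane z = 0.

    Returns the (x, y, z) landing point, or None if the ray
    does not point down toward the floor.
    """
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_dir
    if dz >= 0.0 or oz <= 0.0:
        return None  # ray points up/level, or origin is at/below the floor
    t = -oz / dz  # ray parameter where z reaches 0
    return (ox + t * dx, oy + t * dy, 0.0)
```

For example, a controller at 2 m height pointing diagonally down at 45° lands 2 m ahead: `teleport_target((0, 0, 2), (1, 0, -1))` gives `(2.0, 0.0, 0.0)`. A controller-free fallback (e.g. gaze-based teleport, or navigation from the 2D UI) would still be needed for users without handheld controllers.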

How can we create workflow specific VR UIs?
It would be nice if VR UIs could be integrated well with other workflow mechanics we have in Blender. For example, there could be a simplified UI for 101-like templates, or specialized sculpting UIs. A version with a focus on accessibility might also be interesting; VR offers new approaches for users with disabilities.
Basically, we shouldn’t set one VR UI in stone, but allow people to come up with better or alternative solutions for specific use cases. Obviously, we’d still deliver a polished default experience.

This has big technical implications, as it influences how tools, operators, gizmos and other graphical elements have to be implemented.

Operators, Tools & Gizmos

How will operators, tools and gizmos work together in VR?
Will we only provide the active-tool concept, or can we allow non-modal operator execution as well? How will tools and operators be selected/invoked?

How can users customize keymap bindings of controllers?
Note: The OpenXR specification “outsources” this part to the runtime. In Blender we would listen to abstract actions (e.g. “Teleport”) and the OpenXR runtime (Oculus, Windows Mixed Reality, etc.) binds these to a device button/gesture. This makes our job easier: we don’t have to worry about device-specific keymap bindings.
So assuming we don’t need to listen to device-specific button events (which might still turn out to be necessary), OpenXR has already answered this question for us.
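The split described above can be sketched as follows. This is an illustrative model only, not the real OpenXR API (which works with `XrAction` handles and interaction profiles, not strings): Blender registers handlers for abstract action names, and the runtime side decides which physical input fires them.

```python
class ActionSet:
    """Toy model of abstract actions: Blender declares what an
    action does; the runtime decides which button triggers it."""

    def __init__(self):
        self._handlers = {}

    def register(self, action_name, handler):
        # Blender side: declare an abstract action (e.g. "teleport").
        self._handlers[action_name] = handler

    def dispatch(self, action_name, *args):
        # Runtime side: we only see the abstract name, never the
        # device-specific button or gesture that produced it.
        handler = self._handlers.get(action_name)
        if handler is not None:
            return handler(*args)
```

With this model, a keymap editor for VR controllers would live in the runtime’s own configuration UI rather than in Blender, unless we also expose device-specific bindings.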


How is the VR reference space defined?
What’s needed is a point in space that acts as the origin of the VR experience, an up direction, and an initial direction to look at (always perpendicular to the up direction?).
Note that the up direction and the origin basically define the floor plane.
The OpenXR runtime calculates the head position/rotation based on that, including eye level above the ground. It can also define the room/movement boundaries.
Other engines probably already figured this out and we may be able to copy them.
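The three inputs named above (origin, up direction, initial view direction) are enough to derive a well-defined reference frame. A sketch of that derivation, using Gram-Schmidt orthogonalization to answer the perpendicularity question: the view direction need not be given perpendicular to up, since its up-component can simply be projected out (function names are illustrative, not Blender API):

```python
import math

def reference_frame(origin, up, forward):
    """Build an orthonormal reference frame from an origin, an up
    direction, and a (possibly tilted) initial view direction."""

    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    up = norm(up)
    # Project out the component of `forward` along `up`, so the view
    # direction ends up lying in the floor plane.
    f = tuple(fc - dot(forward, up) * uc for fc, uc in zip(forward, up))
    forward = norm(f)
    # The floor plane is the plane through `origin` with normal `up`.
    return {"origin": origin, "up": up, "forward": forward}
```

The runtime would then express tracked head/controller poses relative to this frame, including eye height above the floor plane.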

How can settings be accessed from within the VR session?
There are different settings to think about: viewport settings, VR-specific settings (e.g. toggling positional tracking), operator settings, tool settings, etc. We’d probably not expose the Properties editor or user preference settings for now, although we could make it possible to show any editor in a floating 3D popup.

How can users change settings for the VR session, but outside of it? Do we allow this at all?
For example, users may want to change viewport settings from the regular 2D UI because the VR session renders too slowly. This also matters for users who don’t have controllers.

How are session defaults and startup settings defined?
This is mainly about viewport settings (display mode, lighting, shading, etc.) and VR-specific settings. You may want to set these up even before the session starts, and you may also want to define your own default settings.
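One way the defaults-plus-overrides idea could be modeled is as an immutable settings record with a built-in default, which users override before session start. This is a hypothetical sketch only; all field names are invented for illustration and do not correspond to an existing Blender API:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VRSessionSettings:
    # Viewport-related settings (hypothetical names).
    shading_type: str = "SOLID"
    clip_start: float = 0.1
    clip_end: float = 100.0
    # VR-specific settings.
    use_positional_tracking: bool = True

# Shipped defaults.
defaults = VRSessionSettings()

# A user's own startup configuration, applied before the session begins.
my_startup = replace(defaults, shading_type="MATERIAL", clip_end=500.0)
```

Keeping the defaults as a distinct record would also make it straightforward to offer a “reset to defaults” action from both the 2D UI and within the session.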