VR: Controller Based Picking
Picking in this context means finding which item (object, bone, vertex, etc.) the controller is pointing at.
How to implement picking? OpenGL selection or ray casting?
(To perform OpenGL picking we would draw the scene from the controller's perspective into an offscreen buffer and select based on the result.)
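To illustrate the idea, here is a minimal sketch (hypothetical, not Blender code): each object is rendered into an offscreen buffer with a unique ID instead of its color, and selection reads back the ID under the pixel where the controller ray hits the view plane. The function and buffer names are made up for the example.

```python
def pick_from_id_buffer(id_buffer, x, y):
    """Return the object ID stored at pixel (x, y), or None for background.

    id_buffer stands in for an offscreen render where every object was drawn
    with its ID encoded as the pixel value; 0 encodes "no object".
    """
    object_id = id_buffer[y][x]
    return object_id if object_id != 0 else None

# A tiny 4x4 "render": object with ID 7 covers the top-left quadrant.
id_buffer = [
    [7, 7, 0, 0],
    [7, 7, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
```

In practice Blender's offscreen selection additionally handles depth and overlapping items; the sketch only shows the ID-readback principle.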
A few things to consider:
- Blender has its own OpenGL selection implementation that does not use the legacy GL_SELECT.
- Picking is not only needed for objects, but also for vertices/edges/faces, bones, Bézier control points, etc. Also: gizmos.
- Gizmos currently only support OpenGL selection.
- Empties, cameras, lights and other selectable items have no real geometry that can be used for ray casting. Unless that changes, OpenGL selection gives more WYSIWYG results.
- We could in theory use different selection methods in different modes. E.g. in Edit Mode we could do BVH-accelerated selection; we have utilities ready for this.
- Should we ever get acceleration structures for other modes that we could rely on for selection, ray casting would be much preferable, because of better scalability. Nothing like this is planned short term though.
- Ray casting could be performed in a separate thread, especially to avoid blocking viewport drawing.
- It's probably too expensive to perform picking on every frame, so it should only happen when needed (e.g. on trigger press). It seems we can use some drawing tricks to visualize where the virtual controller ray intersects objects though. Needs more investigation.
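For reference, the ray-casting alternative boils down to intersecting the controller ray with scene geometry. A minimal sketch of a standard ray/triangle test (Möller-Trumbore) in pure Python, with no Blender API and vectors as plain (x, y, z) tuples:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-7):
    """Return the distance t along the ray to the triangle, or None on miss."""
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:  # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:  # outside the triangle in barycentric u
        return None
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:  # outside in barycentric v
        return None
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None  # only hits in front of the ray origin
```

A real implementation would walk a BVH and keep the closest hit across all candidate triangles; this only shows the per-triangle test that such a traversal bottoms out in.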
How is picking executed?
What happens when pressing the trigger to select, or even select and tweak an object?
- Add a dedicated operator for VR controller based selection.
- Add an invoke_3d() callback for operators. It can wrap the existing logic, but directly hand it the 3D direction vector to use, rather than projecting the cursor from 2D into 3D space first.
- Have a function to trigger picking, e.g. bpy.types.XrSessionState.pick() in Python. The result can be cached for multiple calls per frame.
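The per-frame caching idea can be sketched as follows. This is a hypothetical illustration, not the actual XrSessionState API: the expensive pick routine runs at most once per frame, and later calls within the same frame reuse the cached result.

```python
class PickCache:
    """Caches one pick result per frame (illustrative sketch only)."""

    def __init__(self, pick_func):
        self._pick_func = pick_func  # the expensive picking routine
        self._frame = None           # frame number the cached result is for
        self._result = None

    def pick(self, frame, *args):
        if frame != self._frame:     # cache is stale: recompute for this frame
            self._result = self._pick_func(*args)
            self._frame = frame
        return self._result
```

Invalidation here keys on a frame counter; an actual implementation could just as well invalidate on controller pose changes.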