HoloLens Features and Concepts Introduction

These notes capture some features and concepts of Microsoft HoloLens.

Coordinate System Concepts

Stationary frame of reference: for world-locked content in seated-scale experiences
- keeps objects stable relative to the world while still respecting changes to the user's head position and orientation
- an app typically creates one stationary frame of reference at startup and uses it throughout its lifetime; this is what the Unity engine exposes as the world origin and world coordinate system
- holograms may drift when the user walks beyond about 5 meters, because the system may determine that distances between points in the real world are shorter or longer than it previously believed, while the hologram's coordinates stay the same
- optimizes for stability near the user
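
As a minimal sketch of the startup pattern above, assuming a native C++/WinRT holographic app (in Unity the engine supplies this frame as the world origin), the Windows.Perception.Spatial API creates the frame once and exposes its coordinate system; the WorldSpace wrapper type is illustrative:

```cpp
#include <winrt/Windows.Perception.Spatial.h>

using namespace winrt::Windows::Perception::Spatial;

// Created once at app startup and reused for the app's lifetime.
struct WorldSpace
{
    SpatialStationaryFrameOfReference m_stationaryFrame{ nullptr };

    void Initialize()
    {
        // Place the world origin at the device's current location and orientation.
        SpatialLocator locator = SpatialLocator::GetDefault();
        m_stationaryFrame = locator.CreateStationaryFrameOfReferenceAtCurrentLocation();
    }

    // All world-locked holograms are expressed in this coordinate system.
    SpatialCoordinateSystem WorldCoordinateSystem() const
    {
        return m_stationaryFrame.CoordinateSystem();
    }
};
```
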
Attached frame of reference: for body-locked content in orientation-only experiences
- moves with the user as they walk around, with a fixed heading defined when the app first creates the frame
- the only coordinate system that can be used to render holograms when the headset cannot determine where it is in the world
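
A hedged C++/WinRT sketch of the same idea for body-locked content; the helper names are illustrative, and the per-frame timestamp would typically come from the current HolographicFramePrediction:

```cpp
#include <winrt/Windows.Perception.h>
#include <winrt/Windows.Perception.Spatial.h>

using namespace winrt::Windows::Perception;
using namespace winrt::Windows::Perception::Spatial;

// Created once; the heading is fixed at creation time.
SpatialLocatorAttachedFrameOfReference CreateAttachedFrame()
{
    return SpatialLocator::GetDefault().CreateAttachedFrameOfReferenceAtCurrentHeading();
}

// Called every frame: body-locked content is rendered in a coordinate system
// that follows the user's position but keeps the original heading.
SpatialCoordinateSystem GetBodyLockedCoordinateSystem(
    SpatialLocatorAttachedFrameOfReference const& attachedFrame,
    PerceptionTimestamp const& timestamp)   // e.g. HolographicFramePrediction.Timestamp()
{
    return attachedFrame.GetStationaryCoordinateSystemAtTimestamp(timestamp);
}
```
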
Stage frame of reference: for standing-scale experiences
- defines a stage origin, a spatial coordinate system centered at the user's chosen floor position and forward orientation, where they intend to use the device
- holograms placed at Y=0 appear on the floor
- optimizes for stability near its origin
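
A sketch of querying the stage in a native C++/WinRT app; the function name is illustrative:

```cpp
#include <winrt/Windows.Perception.Spatial.h>

using namespace winrt::Windows::Perception::Spatial;

// Returns the stage coordinate system, or nullptr if the user has not defined a stage.
// SpatialStageFrameOfReference::RequestNewStageAsync() can prompt the user to set one up.
SpatialCoordinateSystem TryGetStageCoordinateSystem()
{
    SpatialStageFrameOfReference stage = SpatialStageFrameOfReference::Current();
    if (!stage)
    {
        return nullptr;   // fall back to a stationary frame until a stage exists
    }
    // The stage origin sits on the floor, so a hologram placed at Y = 0
    // in this coordinate system appears on the floor.
    return stage.CoordinateSystem();
}
```
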
Stage bounds: for room-scale experiences
- provides a single fixed coordinate system within which to place floor-relative content
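
Assuming the same native C++/WinRT setup, the bounded stage's floor-level boundary polygon can be read back roughly as follows (function name illustrative):

```cpp
#include <vector>
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Perception.Spatial.h>

using namespace winrt::Windows::Foundation::Numerics;
using namespace winrt::Windows::Perception::Spatial;

// Returns the stage boundary as floor points (Y = 0), or an empty vector
// if there is no stage or the stage has no movement bounds.
std::vector<float3> GetStageBounds()
{
    SpatialStageFrameOfReference stage = SpatialStageFrameOfReference::Current();
    if (!stage || stage.MovementRange() != SpatialMovementRange::Bounded)
    {
        return {};
    }
    winrt::com_array<float3> bounds = stage.TryGetMovementBounds(stage.CoordinateSystem());
    return std::vector<float3>(bounds.begin(), bounds.end());
}
```
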
Spatial anchor: for world-scale experiences
- represents an important point in the world that the system should keep track of over time
- automatically adjusts its position as the device learns about the world, ensuring that it stays precisely where it was placed relative to the real world
- has its own coordinate system; holograms positioned relative to their spatial anchor maintain stability
- optimizes for stability near its origin
- can be saved to and loaded from the anchor store using a string key as the reference; holograms associated with the spatial anchor can also be persisted in local storage
- supports sharing across devices and platforms with the Azure Spatial Anchors service
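
A sketch of creating and persisting an anchor locally, again assuming a native C++/WinRT app; the 2 m offset and the "workbench-anchor" key are example values:

```cpp
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Perception.Spatial.h>

using namespace winrt;
using namespace winrt::Windows::Foundation::Numerics;
using namespace winrt::Windows::Perception::Spatial;

// Create an anchor 2 m in front of the given coordinate system's origin and
// save it in the local anchor store under a string key.
Windows::Foundation::IAsyncAction CreateAndSaveAnchorAsync(SpatialCoordinateSystem coordinateSystem)
{
    SpatialAnchor anchor =
        SpatialAnchor::TryCreateRelativeTo(coordinateSystem, float3{ 0.0f, 0.0f, -2.0f });
    if (!anchor)
    {
        co_return;   // the device could not locate itself right now
    }

    // Holograms placed relative to anchor.CoordinateSystem() stay stable
    // as the device refines its understanding of the world.
    SpatialAnchorStore store = co_await SpatialAnchorManager::RequestStoreAsync();
    store.TrySave(L"workbench-anchor", anchor);   // "workbench-anchor" is an example key
}
```

On a later run, SpatialAnchorStore::GetAllSavedAnchors() returns the persisted anchors keyed by the same strings; sharing across devices and platforms would go through Azure Spatial Anchors rather than this local store.
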

Scene Understanding vs Spatial Mapping

- choosing between them is a tradeoff: spatial mapping offers maximal accuracy and minimal latency, while scene understanding offers structure and simplicity
Efficiency
- Scene Understanding: higher-level processing; provides a superset of spatial mapping's functionality
- Spatial Mapping: the lowest possible latency, with raw mesh triangles that the app accesses directly

Range
- Scene Understanding: unlimited; provides all scanned spatial mapping data
- Spatial Mapping: limited; data is kept in a limited-size cached "bubble" around the user

Placement
- Scene Understanding: SceneQuads infer which areas of a quad were not scanned and invalidate them, so placement is not limited to scanned areas
- Spatial Mapping: objects are placed on spatial surfaces; a surface cannot be used for placement if it has not been scanned

Occlusion
- Scene Understanding: supports requesting spatial mapping data alongside its generated mesh, giving a "best of both worlds" scenario
- Spatial Mapping: provides occlusion in highly dynamic scenes

Physics
- Scene Understanding: watertight meshes that decompose space with semantics; ensures physics ray casts always hit; allows simpler generation of nav meshes for indoor spaces
- Spatial Mapping: perform many raycasts within a small area and use the aggregate results to derive a more reliable understanding of the surface, since the surfaces are complex

Navigation
- Scene Understanding: the simplified floor structure makes dynamic nav-mesh generation in 3D engines such as Unity attainable
- Spatial Mapping: a NavMesh cannot be prepared in advance because spatial mapping surfaces are not known until the application starts; the spatial mapping system won't provide information about surfaces very far away

Visualization
- Scene Understanding: an unlimited number of surfaces can be requested from scene understanding's version of the spatial mapping mesh, allowing mesh representations of large spaces to be captured
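
To make the spatial mapping column concrete, here is a hedged C++/WinRT sketch using the Windows.Perception.Spatial.Surfaces API; the 20 m bounding box and the 1000 triangles per cubic meter level of detail are arbitrary example values:

```cpp
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.Perception.Spatial.Surfaces.h>

using namespace winrt;
using namespace winrt::Windows::Foundation::Numerics;
using namespace winrt::Windows::Perception::Spatial;
using namespace winrt::Windows::Perception::Spatial::Surfaces;

// Observe spatial mapping surfaces in a 20 m box around the given coordinate
// system's origin and fetch the latest mesh for each observed surface.
Windows::Foundation::IAsyncAction ObserveSurfacesAsync(SpatialCoordinateSystem coordinateSystem)
{
    // Requires the "spatialPerception" capability in the app manifest.
    if (co_await SpatialSurfaceObserver::RequestAccessAsync() != SpatialPerceptionAccessStatus::Allowed)
    {
        co_return;
    }

    SpatialSurfaceObserver observer;
    SpatialBoundingBox box{ float3{ 0.0f, 0.0f, 0.0f }, float3{ 20.0f, 20.0f, 20.0f } };
    observer.SetBoundingVolume(SpatialBoundingVolume::FromBox(coordinateSystem, box));

    for (auto const& surface : observer.GetObservedSurfaces())
    {
        // 1000 triangles per cubic meter is an example level of detail.
        SpatialSurfaceMesh mesh = co_await surface.Value().TryComputeLatestMeshAsync(1000.0);
        if (mesh)
        {
            // mesh.VertexPositions() and mesh.TriangleIndices() expose the raw
            // buffers used for rendering, occlusion, or physics.
        }
    }
}
```

The scene understanding column would instead typically go through the Scene Understanding SDK's SceneObserver, which returns quads and watertight meshes with semantic labels rather than raw triangles.
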