Figure 8.6 - Server available POI
According to the MAR component classification scheme in Clause 7, this system class has the following characteristics:
Pure sensors – GNSS
Real world capturer – visual, 2D video
Recognizer – location, recognition event, local
Tracker – earth reference, spatial event, local
Space mapper – spatial
Event mapper – location, remote
Execution engine – local, 2D + t
Renderer – visual
Visual display – 2D mono
Aural display – mono
8.3 MAR Type 3DV: 3D Video Systems
8.3.1 Real Time, Local Depth Estimation, Condition-based Augmentation
The device captures multi-view video and estimates depth. This representation is used to detect conditions imposed by the content designer. Once a condition is met, the Device renders the virtual object using the scale and orientation specified by the content designer. For example, the end-user has an augmented reality experience in which a virtual object is displayed on a horizontal plane detected within a radius of 10 m. The content specified in the Information viewpoint is:
Media used for the augmentation.
The orientation and scale of the virtual object (uniform/isotropic scaling representing physical units).
The condition (e.g., a horizontal plane within a radius of 10 m).
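The condition-checking step above can be sketched as follows. This is a minimal illustration, not part of the specification: the plane representation, the tolerance value, and all function names are assumptions made for the example.

```python
# Illustrative sketch of condition-based augmentation (8.3.1).
# A plane is assumed to be reported by the Recognizer as a dict with a
# unit-ish normal vector, a distance from the viewer, and a center point.
import math

def is_horizontal(normal, tolerance_deg=10.0):
    """True if the plane's normal is within tolerance of the up vector (0, 1, 0)."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    angle = math.degrees(math.acos(abs(ny) / length))
    return angle <= tolerance_deg

def condition_met(plane, max_distance=10.0):
    """Designer-imposed condition: a horizontal plane within a 10 m radius."""
    return is_horizontal(plane["normal"]) and plane["distance"] <= max_distance

def augment(planes, virtual_object):
    """Place the virtual object on the first plane satisfying the condition,
    using the designer-specified uniform scale and orientation."""
    for plane in planes:
        if condition_met(plane):
            return {
                "media": virtual_object["media"],
                "anchor": plane["center"],
                "scale": virtual_object["scale"],          # uniform/isotropic
                "orientation": virtual_object["orientation"],
            }
    return None  # condition not met: nothing is rendered

planes = [
    {"normal": (0.0, 0.2, 0.98), "distance": 4.0, "center": (1, 0, 3)},  # wall-like
    {"normal": (0.0, 1.0, 0.05), "distance": 6.5, "center": (0, 0, 6)},  # floor-like
]
obj = {"media": "chair.glb", "scale": 1.0, "orientation": (0, 0, 0, 1)}
print(augment(planes, obj)["anchor"])  # the floor-like plane is selected
```

The wall-like plane fails the horizontality test, so the object is anchored on the floor-like plane, which is both horizontal and within the 10 m radius.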
Figure 8.7 - Real-time, local depth estimation, condition-based augmentation
According to the MAR component classification scheme in Clause 7, this system class has the following characteristics:
Pure sensors – visual, other (3D depth)
Real world capturer – visual/video, other (3D depth)
Recognizer – 3D primitives, recognition event, local
Tracker – 3D primitives, spatial event, local
Space mapper – spatial
Event mapper – location, local
Execution engine – local, 3D + t
Renderer – visual
Visual display – 3D
8.3.2 Real Time, Local Depth Estimation, Model-based Augmentation
A content designer captures offline an approximation of the real world as a 3D model, and then authors content by adding 3D virtual objects registered within this approximation of the real world. The end-user navigates the real world using a multi-view camera. The Device estimates the depth and computes the transformation matrix of the camera in the real world by matching the captured video and depth data against the 3D model approximating the real world. The augmented scene is then rendered using the resulting transformation matrix. The content specified in the Information viewpoint is:
Virtual objects and their local transformations in the MAR scene.
The 3D model approximating the real world.
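The rendering step described above can be sketched as follows. This is an illustration only: in a real system the camera pose would be recovered by registering the captured video and depth against the 3D model (e.g. via ICP); here the recovered pose is simply assumed, and all names are hypothetical.

```python
# Illustrative sketch of model-based augmentation (8.3.2): the camera pose
# recovered by matching against the offline 3D model is composed with each
# virtual object's local transformation before rendering.

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Build a 4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Camera pose assumed to result from matching video + depth to the 3D model.
camera_from_world = translation(0.0, 0.0, -2.0)

# Virtual object placed by the content designer in world coordinates.
object_local = translation(1.0, 0.0, 5.0)

# Transformation used by the Renderer: object expressed in the camera frame.
render_matrix = mat_mul(camera_from_world, object_local)
print(render_matrix[2][3])  # object depth along the camera's z axis
```

Only translations are shown for brevity; a full pose would also carry a rotation in the upper-left 3x3 block, and the composition order would be identical.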