1. Input data is merged and pre-processed. 2. A desktop GIS is used to extrude 2D polygons into a simple 3D model. 3. A 3D modelling tool is employed to calculate normals, faces, lighting information, texture information and shadows, improving the realism of the model. 4. The 3D model is converted to a format (e.g. VRML) that can be used in a mobile AR client. From Liarokapis et al. [37]
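The extrusion step of this workflow can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example rather than the authors' implementation: it assumes building footprints are available as shapely polygons with per-building heights (the coordinates and heights here are made up), uses the trimesh library for the extrusion, and exports OBJ rather than VRML for simplicity.

```python
# Minimal sketch of step 2: extruding 2D footprints into simple 3D blocks.
# Assumes shapely and trimesh are installed; footprint data is illustrative.
from shapely.geometry import Polygon
import trimesh

# Hypothetical footprint data: (2D polygon, building height in metres)
footprints = [
    (Polygon([(0, 0), (10, 0), (10, 8), (0, 8)]), 12.0),
    (Polygon([(15, 2), (22, 2), (22, 9), (15, 9)]), 6.5),
]

# Extrude each 2D footprint vertically into a simple block model.
blocks = [trimesh.creation.extrude_polygon(poly, height=h)
          for poly, h in footprints]

# Merge the blocks into one mesh and write it out; a 3D modelling tool would
# then add normals, textures, lighting and shadows (steps 3-4 above).
city = trimesh.util.concatenate(blocks)
city.export("city_blocks.obj")
```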
In a similar fashion, a research group at Edinburgh University used MasterMap to create a speech-based augmented reality system that helps people navigate city landscapes without having to peer at a map on a screen every two minutes. Key to this research was the ability to determine the user's line of sight using a 3D viewshed model. Again, OS MasterMap was used to provide building footprints, which, combined with LiDAR data provided by the Environment Agency, enabled the authors to determine which points of interest were within view; a speech interface then notified the user of visible features [41]. Several people we spoke to who work in mobile learning highlighted augmenting location-based services with line-of-sight information as an area where EDINA could help educators and researchers developing mobile learning platforms.
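A line-of-sight test of the kind described can be sketched against a LiDAR-derived digital surface model (DSM). The example below is a simplified illustration and not the system reported in [41]: it assumes the DSM is a regular grid of surface heights held in a NumPy array, and the eye height, sampling step and test data are assumed values chosen for demonstration.

```python
# Minimal sketch of a grid-based line-of-sight check against a DSM.
import numpy as np

def visible(dsm, observer_rc, target_rc, eye_height=1.6, steps=200):
    """Return True if the target cell is visible from the observer cell."""
    (r0, c0), (r1, c1) = observer_rc, target_rc
    z0 = dsm[r0, c0] + eye_height            # observer eye level
    z1 = dsm[r1, c1]                         # target surface height
    # Sample points along the sight line, skipping the two endpoints.
    for t in np.linspace(0.0, 1.0, steps)[1:-1]:
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        ray_z = z0 + t * (z1 - z0)           # height of the sight line here
        if dsm[r, c] > ray_z:                # a building/terrain blocks the ray
            return False
    return True

# Example: a flat 100 x 100 grid with one 20 m "building" block in the middle.
dsm = np.zeros((100, 100))
dsm[45:55, 45:55] = 20.0
print(visible(dsm, (10, 10), (90, 90)))   # False: the block is in the way
print(visible(dsm, (10, 10), (30, 10)))   # True: clear line of sight
```

In a real deployment, points of interest that pass such a visibility test would be the ones announced to the user through the speech interface.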