Alternative Access Project: Mobile Scoping Study Final Report

In a similar fashion, a research group at Edinburgh University employed Mastermap to create a speech-based augmented reality system that helps people navigate city landscapes without having to peer at a map on a screen every two minutes. Key to this research was the ability to determine the user’s line of sight using a 3d viewshed model. Again OS Mastermap was used to provide building footprints which, combined with LiDAR data (provided by the Environment Agency), enabled the authors to determine which points of interest were within view, with a speech interface notifying the user of visible features [41]. Augmenting location based services with line of sight information was highlighted by several people we spoke to working in the area of mobile learning as an area where EDINA could help educators and researchers developing mobile learning platforms.

Speech-based augmented reality system for navigating city landscapes [41], using a line of sight model to notify the user when a feature comes into view
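The line-of-sight determination in [41] relies on a surface model built from building footprints and LiDAR heights. The sketch below is a minimal illustration of the general technique rather than the authors’ implementation: it assumes a hypothetical height_at(easting, northing) function backed by such a merged surface model, and samples along the ray from observer to point of interest.

```python
import math

def is_visible(observer, poi, height_at, eye_height=1.6, step=2.0):
    """observer and poi are (easting, northing) tuples in metres;
    height_at(e, n) returns terrain-plus-building height in metres."""
    ox, oy = observer
    px, py = poi
    dist = math.hypot(px - ox, py - oy)
    h_obs = height_at(ox, oy) + eye_height   # observer eye level
    h_poi = height_at(px, py)                # height at the point of interest
    samples = max(1, int(dist / step))
    for i in range(1, samples):
        t = i / samples
        x, y = ox + t * (px - ox), oy + t * (py - oy)
        sight_line_h = h_obs + t * (h_poi - h_obs)  # height of the sight line here
        if height_at(x, y) > sight_line_h:
            return False                            # blocked by a building or terrain
    return True
```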
There has been a great buzz [42] around the use of augmented reality frameworks such as Layar [6] in education, but so far we have not seen any examples of these relatively accessible AR technologies being used for teaching. It is possibly too early to expect applications to have surfaced yet, but our own evaluation (see technical evaluation) suggests that there are significant technical barriers for educators who wish to publish content for AR. Our evaluation also highlighted problems with GPS accuracy in urban landscapes, where tall buildings reduce accuracy by as much as 90m in some areas. We think this problem can probably be overcome by defining vantage points that users can navigate to and serving points of interest relative to these vantage points. It might even be possible to use a vantage point to ascertain the GPS error for a given location and automatically correct future readings. Another technique for overcoming this issue may be using 3d image recognition to pinpoint the user’s location more accurately relative to building outlines [13, 39, 43].
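The vantage point idea amounts to a local calibration step: once the user confirms they are standing at a surveyed vantage point, the offset between the surveyed coordinates and the raw GPS fix can be applied to subsequent readings nearby. The class below is a hypothetical sketch of that idea only; all names are illustrative.

```python
class VantagePointCorrector:
    """Applies the offset measured at a surveyed vantage point to later GPS fixes."""

    def __init__(self):
        self.offset = (0.0, 0.0)  # (d_easting, d_northing) in metres

    def calibrate(self, surveyed, gps_fix):
        # Both arguments are (easting, northing) readings taken while the
        # user is standing at the vantage point.
        self.offset = (surveyed[0] - gps_fix[0], surveyed[1] - gps_fix[1])

    def correct(self, gps_fix):
        # Shift a subsequent raw fix by the calibrated offset.
        return (gps_fix[0] + self.offset[0], gps_fix[1] + self.offset[1])
```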
So far the research we have seen focuses on superimposing 2d and 3d models onto the reality view. Digimap data has been used to build 3d models rather than being used directly. There is a notable absence of low tech solutions using simple point of interest databases such as the OS 50k gazetteer (Unlock), BGS rock images and height point data. It seems fairly obvious to us that an augmented reality application that superimposes rock names and 2d rock images on a reality view would have educational value. So why is all the attention on extremely difficult applications involving 3d modelling and viewshed analysis, rather than simple text and 2d image augmented reality views?
Perhaps for the people who want to exploit these simple applications the technical barriers are too high, while for those who have the necessary skills the technical barriers are too low; that is, they cannot justify the low level technical work against their research programme objectives. We believe EDINA could play a role in bridging this gap by providing easy to use authoring and publishing tools which would enable educators to create augmented layers that can be consumed by AR browsers such as Layar [6] and Wikitude [7]. These tools would allow users to add their own points of interest and 2d images, and possibly 3d models. They should also be able to create their own layers from already available points of interest databases such as Unlock (e.g. place names, administrative boundaries).
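To make the shape of such a tool concrete, the sketch below shows the kind of lightweight point-of-interest feed an authoring tool might publish. The field names and the example entry are our own assumptions for illustration, not the actual schema expected by Layar or Wikitude.

```python
import json
import math

# Hypothetical seed data, e.g. drawn from the Unlock gazetteer or a
# BGS rock-image collection.
POIS = [
    {"name": "Arthur's Seat basalt", "lat": 55.9441, "lon": -3.1618,
     "image": "https://example.org/images/basalt.jpg"},
]

def nearby_pois(lat, lon, radius_m=500):
    """Return a JSON feed of points of interest within radius_m of (lat, lon)."""
    def dist_m(a_lat, a_lon, b_lat, b_lon):
        # Equirectangular approximation; adequate at sub-kilometre ranges.
        dx = math.radians(b_lon - a_lon) * math.cos(math.radians((a_lat + b_lat) / 2))
        dy = math.radians(b_lat - a_lat)
        return 6371000 * math.hypot(dx, dy)
    hits = [p for p in POIS if dist_m(lat, lon, p["lat"], p["lon"]) <= radius_m]
    return json.dumps({"pois": hits})
```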
To help those wanting more sophisticated augmentations, providing a download tool that combines Mastermap data with digital terrain data and LiDAR data sources would significantly reduce the effort for researchers wanting to use 3d models in augmented reality applications. We quizzed people who were using AR on whether such a facility should be provided as an API where 3d data could be streamed to individual devices via a cloud service. Most thought that the priority is to make the data available in a convenient format for download rather than to attempt a streaming service. An ideal solution would allow users to select a geographic extent (bounding box) and then obtain a 3d model of the area in a useful format such as VRML [44] or X3D [45], generated by automatically merging Mastermap, Digital Terrain Model and LiDAR data sources. There was more support for offering “line of sight” data as an online API, but again a download facility which cut out the cumbersome data processing steps involved in merging Mastermap with LiDAR and extruding building footprints to the correct height was seen as the most important contribution we could make.
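As a rough illustration of the “extrude building footprints to the correct height” step, the sketch below turns a 2d footprint polygon and a LiDAR-derived roof height into a simple prism. It emits Wavefront OBJ purely for brevity; a real download tool would target VRML or X3D as discussed above, and the function name and inputs are assumptions.

```python
def extrude_footprint(footprint, base_z, roof_z):
    """footprint: ordered list of (x, y) vertices of a building outline.
    Returns a small Wavefront OBJ string describing the extruded prism."""
    n = len(footprint)
    verts = [(x, y, base_z) for (x, y) in footprint]    # bottom ring
    verts += [(x, y, roof_z) for (x, y) in footprint]   # top ring
    faces = []
    for i in range(n):                                  # wall quads
        j = (i + 1) % n
        faces.append((i + 1, j + 1, n + j + 1, n + i + 1))  # OBJ indices are 1-based
    faces.append(tuple(range(n + 1, 2 * n + 1)))        # flat roof polygon
    lines = ["v %f %f %f" % v for v in verts]
    lines += ["f " + " ".join(str(i) for i in face) for face in faces]
    return "\n".join(lines)
```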
Another take on mobile, which John Traxler and others have highlighted, views the mobile device as a “virtual limb” or, as Charlie Schlick, Product Manager at Nokia, put it, “our new private parts” [46]. A good example of this “prosthetic” view of the mobile device is the work being done by Chris Kray at the University of Newcastle [47, 48, 49].
Kray’s research focuses on the mobile device as a tool for collaborative spatial interaction between individuals. Rather than using the mobile screen to display data from an external source, the screen is used as a way for users to show and share personal data they hold on their device, such as photos or calendar schedules. One approach the Newcastle team have developed employs spatial proximity regions around the mobile device on a normal table to facilitate a collaborative task, such as agreeing a future meeting [47]. A camera-projector system uses dynamic visual markers displayed on the screen of the mobile device to track where users are moving their device on the tabletop. The projector displays different regions on the tabletop into which users can push their device to share information, such as photos or calendar events. This allows convenient transfer of data from one personal device to another, or to a shared space. A key difference between this form of interaction and a shared touch screen display is that users retain ownership of their actions and their data, as it is possible to trace each user gesture to an individual device.
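Stripped of the camera-projector tracking described in [47], the core decision is simply whether a tracked device position has entered one of the projected share regions. The fragment below is a hypothetical illustration of that decision only; the region names and coordinates are made up.

```python
import math

# Hypothetical tabletop layout: region name -> (centre_x, centre_y, radius) in mm.
REGIONS = {
    "shared_photos": (300, 200, 80),
    "shared_calendar": (600, 200, 80),
}

def region_for(device_pos):
    """Return the name of the share region the tracked device sits in, if any."""
    x, y = device_pos
    for name, (cx, cy, r) in REGIONS.items():
        if math.hypot(x - cx, y - cy) <= r:
            return name
    return None
```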
This version of augmented reality has great potential for teaching and learning as it greatly facilitates pedagogic objectives such as student initiated learning and peer instruction. A relevant application to maps is a student project supervised by Kray where devices are used to display geo-tagged media (photos, a GPS trail) on a tabletop showing a map, so that an individual’s geographic footprint can be seen by a group engaged in a collaborative task. We can envisage this working well in a field trip exercise where groups meet up at the end of the day to review and share their findings.

Some screenshots of the “augmented tabletop” from Kray et al. [47]