1) Video see-through displays block out the wearer's surrounding environment, using small video cameras attached to the outside of the goggles to capture images of it. On the inside of the display, the video is played back in real time with the graphics superimposed on it. One problem with using video cameras is added lag: there is a delay in adjusting the image when the viewer moves his or her head.
2) Optical see-through displays - Most companies that have made optical see-through displays have gone out of business. Sony makes a see-through display, called the Glasstron, that some researchers use. Blair MacIntyre, director of the Augmented Environments Lab at Georgia Tech, believes that Microvision's Virtual Retinal Display holds the most promise for an augmented-reality system. This device uses light to paint images directly onto the retina by rapidly sweeping the light source across and down it. The problem with the Microvision display is that it currently costs about $10,000. MacIntyre says the retinal-scanning display is promising because it has the potential to be small; he imagines an ordinary-looking pair of glasses with a light source on the side that projects images onto the retina.
Monitor-based AR displays
We use the term monitor-based (non-immersive), or "window-on-the-world" (WoW), AR to refer to display systems where computer-generated images are either analogically or digitally overlaid onto live or stored video images. Although the technology for achieving this has been well known for some time, most notably by means of chroma-keying, a large number of useful applications present themselves when this concept is implemented stereoscopically.
Monitor Based Augmented Reality
An example of the chroma-keying technique is the way the weather forecast is presented on television. The forecaster appears to point at the changing maps, which is an augmentation of reality: the forecaster is actually standing in front of a plain green or blue screen, and the graphics we see are generated by the computer. The forecaster watches the augmented scene, with the maps, on a monitor placed in front of him or her, and so knows where to point on the blank screen.
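To make the chroma-keying idea concrete, here is a minimal sketch of green-screen compositing in Python with NumPy. The function name and the green-dominance threshold are illustrative assumptions, not part of any broadcast system.

```python
import numpy as np

def chroma_key(foreground, background, green_threshold=1.3):
    """Composite a green-screen foreground over a background image.

    Both images are H x W x 3 uint8 RGB arrays of the same size.
    A pixel is treated as part of the screen when its green channel
    dominates its red and blue channels by `green_threshold`
    (an illustrative value, not from the original text).
    """
    fg = foreground.astype(np.float32)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    is_screen = (g > green_threshold * r) & (g > green_threshold * b)
    out = foreground.copy()
    out[is_screen] = background[is_screen]  # show the weather map wherever the screen was
    return out
```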
There are advantages and disadvantages to each of these types of displays.
With both of the displays that use a video camera to view the real world, there is a forced delay of up to one frame time to perform the video merging operation. At a standard rate of 30 frames per second this means a potential delay of 33.33 milliseconds in the view seen by the user. Since everything the user sees is under system control, this delay can be compensated for by correctly timing the other paths in the system; alternatively, if the other paths are slower, the video of the real scene can be delayed to match.
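A simple way to realize this compensation is to buffer the faster path by a whole number of frames so that both paths reach the merging step together. The sketch below is illustrative only; the function names and frame rate are assumptions, not from the original text.

```python
from collections import deque

FRAME_TIME_MS = 1000.0 / 30.0  # ~33.33 ms per frame at 30 frames per second

def make_delay_line(extra_frames):
    """Return a function that delays a stream of frames by `extra_frames`
    frames, so the video-merge and graphics paths arrive in step."""
    buffer = deque(maxlen=extra_frames + 1)
    def delay(frame):
        buffer.append(frame)
        # Until the buffer fills, the oldest available frame is returned,
        # so the delay ramps up to exactly `extra_frames` frames.
        return buffer[0]
    return delay

# Example: the graphics path lags the camera by one frame (~33 ms),
# so delay the camera frames by one frame before merging.
delay_camera = make_delay_line(extra_frames=1)
```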
With an optical see-through display the view of the real world is instantaneous, so it is not possible to compensate for system delays in other areas. On the other hand, with monitor-based and video see-through displays a video camera views the real scene, and an advantage of this is that the camera image is available to the system as a source of tracking information. The U of R augmented reality system exploits this advantage. The optical see-through display does not have this additional information; the only position information available with that display is what the position sensors on the head-mounted display itself can provide.
The biggest challenge facing developers of augmented reality is the need to know where the user is located in reference to his or her surroundings. There is also the additional problem of tracking the movement of users' eyes and heads. A tracking system has to recognize these movements and project the graphics related to the real-world environment the user is seeing at any given moment. Both video see-through and optical see-through displays typically show lag in the overlaid material because of the limitations of currently available tracking technologies.
For augmented reality to reach its full potential, it must be usable both outdoors and indoors. Currently, the best tracking technology available for large open areas is the Global Positioning System (GPS). However, GPS receivers have an accuracy of about 10 to 30 meters, which is reasonable for navigation but not good enough for augmented reality, which needs accuracy measured in millimeters or smaller. An augmented-reality system would be worthless if the graphics it projected corresponded to something 10 to 30 meters away from what the user was actually looking at.
There are ways to increase tracking accuracy. For instance, the military uses multiple GPS signals. There is also differential GPS, which involves working within an area that has already been surveyed: the system uses a GPS receiver whose antenna's location is known very precisely as a reference for tracking your location within that area. This lets users determine exactly how inaccurate their GPS receivers are and adjust an augmented-reality system accordingly. Differential GPS allows for submeter accuracy. A more accurate system being developed, known as real-time kinematic GPS, can achieve centimeter-level accuracy.
Tracking is easier in small spaces than in large spaces. Researchers at the University of North Carolina-Chapel Hill have developed a very precise system that works within 500 square feet. The HiBall Tracking System is an optoelectronic tracking system made of two parts:
Six user-mounted optical sensors
Infrared-light-emitting diodes (LEDs) embedded in special ceiling panels
Global Positioning System
For position tracking, we take advantage of a high-precision version of the increasingly popular Global Positioning System receiver.
A GPS receiver determines its position by monitoring radio signals from navigation satellites. The accuracy of the inexpensive, handheld receivers that are currently available is quite coarse--the positions can be off by many meters. Users can get better results with a technique known as differential GPS. In this method, the mobile GPS receiver also monitors signals from another GPS receiver and a radio transmitter at a fixed location on the earth. This transmitter broadcasts corrections based on the difference between the stationary GPS antenna's known and computed positions. By using these signals to correct the satellite signals, differential GPS can reduce the margin of error to less than one meter. Our system is able to achieve centimeter-level accuracy by employing real-time kinematic GPS, a more sophisticated form of differential GPS that also compares the phases of the signals at the fixed and mobile receivers.
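The correction idea behind differential GPS can be sketched in a few lines. The example below works in the position domain for simplicity; real differential GPS corrects the individual satellite pseudoranges, and real-time kinematic GPS additionally compares carrier phases. All numbers and names are illustrative assumptions.

```python
import numpy as np

def differential_correction(base_known_pos, base_computed_pos, rover_computed_pos):
    """Correct a rover's GPS fix using a fixed base station (simplified sketch).

    The base station's position is surveyed exactly, so the difference between
    that known position and the position it computes from the satellites
    estimates the error shared by both receivers.  Adding this correction to
    the rover's computed position removes most of the common error.
    """
    correction = np.asarray(base_known_pos) - np.asarray(base_computed_pos)
    return np.asarray(rover_computed_pos) + correction

# Both receivers see roughly the same bias, so the corrected rover position
# ends up close to its true value of (300, 400, 50).
print(differential_correction([100.0, 200.0, 50.0],   # surveyed base position
                              [105.2, 194.9, 51.1],   # base position from satellites
                              [305.2, 394.9, 51.1]))  # rover position from satellites
```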
Unfortunately, GPS is not the ultimate answer to position tracking. The satellite signals are relatively weak and easily blocked by buildings or even foliage. This rules out useful tracking indoors or in places like midtown Manhattan, where rows of tall buildings block most of the sky. We found that GPS tracking works well in the central part of Columbia's campus, which has wide open spaces and relatively low buildings. GPS, however, provides far too few updates per second and is too inaccurate to support the precise overlaying of graphics on nearby objects.
For a wearable augmented reality system, there is still not enough computing power to create stereo 3-D graphics. So researchers are using whatever they can get out of laptops and personal computers, for now. Laptops are just now starting to be equipped with graphics processing units (GPUs).
Toshiba just added an NVidia GPU to its notebooks that is able to process more than 17 million triangles per second and 286 million pixels per second, which can enable graphics-intensive programs, such as 3-D games. Still, notebooks lag far behind: NVidia has developed a custom 300-MHz 3-D graphics processor for Microsoft's upcoming Xbox game console that can produce 150 million polygons per second, and polygons are more complicated than triangles. So you can see how far mobile graphics chips have to go before they can create smooth graphics like the ones you see on your home video-game system.
Practical portable 3-D systems won't be available until at least 2005, said MacIntyre. His research lab is currently using a ThinkPad to power its mobile augmented-reality system. The top ThinkPads use a 16-MB ATI Mobility 128 graphics chip.
As consumers and individuals, we're only beginning to see the capabilities of AR in our day-to-day lives. In industries the world over, however, AR has been in use for quite some time. Researchers working with augmented reality systems have proposed them as solutions in many domains. The proposed applications for augmented reality include the following.
1. Medical
Most of the medical applications deal with image-guided surgery. Pre-operative imaging studies of the patient, such as CT or MRI scans, provide the surgeon with the necessary view of the internal anatomy, and from these images the surgery is planned. Visualization of the path through the anatomy to the affected area, where for example a tumor must be removed, is done by first creating a 3D model from the multiple views and slices in the pre-operative study. This is most often done mentally, though some systems will create 3D volume visualizations from the image study. Augmented reality can be applied so that the surgical team can see the CT or MRI data correctly registered on the patient in the operating theater while the procedure is progressing. Figure 2 shows a view that the user might see from an augmented reality system in the medical domain. It depicts the merging and correct registration of data from a pre-operative imaging study onto the patient's head. Providing this view to a surgeon in the operating theater would enhance the surgeon's performance and possibly eliminate the need for any other calibration fixtures during the procedure.
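As an illustration of the registration step described above, the following sketch projects pre-operative model points into a tracked camera's view so they can be drawn over the live image of the patient. The rotation R, translation t, and camera intrinsics are hypothetical placeholders; a real system would obtain them from a registration procedure and camera calibration.

```python
import numpy as np

def overlay_preop_points(model_points, R, t, fx, fy, cx, cy):
    """Project pre-operative model points (e.g. a tumor surface segmented from
    a CT scan) into the camera view used by the surgeon.

    model_points : N x 3 array in the CT coordinate frame.
    R (3 x 3), t (3,) : rigid transform registering the CT frame to the camera frame.
    fx, fy, cx, cy : pinhole-camera intrinsics from calibration.
    Returns N x 2 pixel coordinates at which to draw the overlay.
    """
    pts_cam = np.asarray(model_points) @ np.asarray(R).T + np.asarray(t)
    u = fx * pts_cam[:, 0] / pts_cam[:, 2] + cx
    v = fy * pts_cam[:, 1] / pts_cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```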
Another application for augmented reality in the medical domain is in ultrasound imaging. Using an optical see-through display the ultrasound technician can view a volumetric rendered image of the fetus overlaid on the abdomen of the pregnant woman. The image appears as if it were inside of the abdomen and is correctly rendered as the user moves.
2. Entertainment
A simple form of augmented reality has been in use in the entertainment and news business for quite some time. Whenever you are watching the evening weather report the weather reporter is shown standing in front of changing weather maps. In the studio the reporter is actually standing in front of a blue or green screen. This real image is augmented with computer generated maps using a technique called chroma-keying. It is also possible to create a virtual studio environment so that the actors can appear to be positioned in a studio with computer generated decorating.
Strictly speaking, with current technology this may not be considered augmented reality, because it is not generated in real time. Princeton Electronic Billboard has developed an augmented reality system that allows broadcasters to insert advertisements into specific areas of the broadcast image. The electronic billboard requires calibration to the stadium, taking images from typical camera angles and zoom settings in order to build a map of the stadium. Using pre-specified reference points in the stadium, the system automatically determines the camera angle being used and, referring to the pre-defined stadium map, inserts the advertisement into the correct place.
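One common way to implement this kind of advertisement insertion is to warp the ad into the broadcast frame with a planar homography once the corners of the target region are known for the current camera angle. The sketch below assumes those corner positions are already supplied; in the described system they would come from the pre-built stadium map and the recognized camera setting. The function name and structure are illustrative, not the Princeton Electronic Billboard implementation.

```python
import cv2
import numpy as np

def insert_billboard(frame, ad_image, target_corners_px):
    """Warp an advertisement onto a planar region of the stadium whose four
    corner positions (ordered to match the ad's corners) are known in the
    current broadcast frame.

    frame             : H x W x 3 broadcast image.
    ad_image          : h x w x 3 advertisement image.
    target_corners_px : 4 x 2 array of pixel positions of the target region.
    """
    h, w = ad_image.shape[:2]
    ad_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(ad_corners, np.float32(target_corners_px))
    warped = cv2.warpPerspective(ad_image, H, (frame.shape[1], frame.shape[0]))
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                               (frame.shape[1], frame.shape[0]))
    out = frame.copy()
    out[mask > 0] = warped[mask > 0]  # replace the target region with the ad
    return out
```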
3. Military Training
The military has been devising uses for augmented reality for decades. In fact, the Office of Naval Research has sponsored some augmented-reality research. And the Defense Advanced Research Projects Agency (DARPA) has funded an HMD project to develop a display that can be coupled with a portable information system. The idea here is that an augmented-reality system could provide troops with vital information about their surroundings, such as showing where entrances are on the opposite end of a building, somewhat like X-ray vision. Augmented reality displays could also highlight troop movements, and give soldiers the ability to move to where the enemy can't see them.
The military has been using displays in cockpits that present information to the pilot on the windshield of the cockpit or the visor of their flight helmet. This is a form of augmented reality display.
4. Engineering Design
Imagine that a group of designers is working on the model of a complex device for their clients. The designers and clients want to do a joint design review even though they are physically separated. If each of them had a conference room equipped with an augmented reality display, this could be accomplished. The physical prototype that the designers have mocked up is imaged and displayed in the client's conference room in 3D. The clients can walk around the display, looking at different aspects of it. To hold discussions, the client can point at the prototype to highlight sections, and this will be reflected on the real model in the augmented display that the designers are using.