Toward Smart Cars with Computer Vision for Integrated Driver and Road Scene Monitoring




Lane Tracking

Despite many impressive lane-tracking results in the past, such as the work of Dickmanns on the VaMoRs project and Carnegie Mellon University's NAVLAB project, it is clear that no single cue can perform reliably in all situations. The lane tracking system presented in this paper dynamically allocates computational resources over a suite of cues to robustly track the road in a variety of situations. Bayesian theory is used to fuse the cues, while a scheduler intelligently allocates computational resources to the individual cues based on their performance. A particle filter is used to control hypothesis generation for the lane location.
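As an illustration of the Bayesian fusion step, the sketch below weights each road-state hypothesis (particle) by the product of independent cue likelihoods and then normalises. The particle states, cue values and fallback behaviour are hypothetical assumptions for illustration; the paper's actual likelihood models are not reproduced here.

```python
import numpy as np

def fuse_cues(n_particles, cue_likelihoods):
    """Fuse independent cues: per-particle weight = product of cue
    likelihoods (Bayes, assuming conditional independence), normalised."""
    weights = np.ones(n_particles)
    for lik in cue_likelihoods:
        weights *= lik
    total = weights.sum()
    if total == 0.0:
        # every cue rejected every hypothesis: fall back to uniform weights
        return np.full(n_particles, 1.0 / n_particles)
    return weights / total

# Hypothetical example: 4 particles scored by 2 cues.
w = fuse_cues(4, [np.array([0.1, 0.4, 0.4, 0.1]),
                  np.array([0.2, 0.5, 0.2, 0.1])])
```

The uniform fallback keeps the tracker alive when all cues momentarily fail (e.g. heavy shadow), at the cost of one frame of uninformed search.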


Each cue is developed to work independently of the other cues and is customised to perform well in a particular situation (e.g. edge-based lane-marker tracking, area-based road tracking, colour-based road tracking). Cues are allocated CPU time based on their performance and the computation time each cue requires. A number of metrics are used to evaluate the performance of each cue with respect to both the fused result and the cue's individual result.
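A minimal sketch of this performance-based allocation: each cue's recent performance score is divided by its per-frame cost, and the CPU budget is split in proportion. The score/cost metric, cue names and numbers here are illustrative assumptions, not the paper's actual scheduler.

```python
def allocate_cpu(cues, budget_ms):
    """Split a per-frame CPU budget across cues in proportion to
    score-per-millisecond (illustrative utility, not the paper's metric).
    cues: {name: {'score': recent performance 0..1, 'cost_ms': runtime}}"""
    utility = {name: c['score'] / c['cost_ms'] for name, c in cues.items()}
    total = sum(utility.values()) or 1.0  # avoid division by zero
    return {name: budget_ms * u / total for name, u in utility.items()}

# Hypothetical cues: a cheap, reliable edge cue and a slower colour cue.
cues = {
    'edge_marker': {'score': 0.9, 'cost_ms': 5.0},
    'colour_area': {'score': 0.6, 'cost_ms': 10.0},
}
alloc = allocate_cpu(cues, budget_ms=30.0)
```

Under this utility, a cue that is both accurate and cheap wins most of the budget, while an expensive cue is throttled rather than dropped outright.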
Additionally, the framework of the lane tracker was designed to allow the cues to run at different frequencies, enabling slow-running (but valuable) cues to run in the background. A dual-phase particle filter system is used to reduce the search space for the lane tracker. The first particle filter searches for the road width, the lateral offset of the vehicle from the centre line of the road, and the yaw of the vehicle with respect to the centre line. The second particle filter captures the horizontal and vertical road curvature in the mid- to far-field ranges.
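The dual-phase decomposition can be sketched as follows: phase one samples the near-field state (road width, lateral offset, yaw), and phase two, conditioned on a phase-one estimate, samples only the two curvature terms. All distributions, spreads and particle counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_phase1(n=200):
    """Phase 1: near-field state [road width (m), lateral offset (m),
    yaw (rad)] drawn around nominal values (assumed priors)."""
    return rng.normal([3.5, 0.0, 0.0], [0.3, 0.5, 0.05], size=(n, 3))

def sample_phase2(base_state, n=200):
    """Phase 2: fix the near-field state and sample only
    [horizontal curvature, vertical curvature] (1/m) for the mid/far field."""
    curvature = rng.normal(0.0, 1e-3, size=(n, 2))
    return np.hstack([np.tile(base_state, (n, 1)), curvature])

p1 = sample_phase1()
p2 = sample_phase2(p1[0])  # condition on one phase-1 hypothesis
```

Splitting the 5D state this way means each filter searches a far smaller space than a single joint filter over all five parameters would.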
Figure 2 shows the output of the lane tracker in several different situations using four cues: an edge-based lane-marker cue, an area-based colour detection cue, an edge-based colour detection cue and a maximum-likelihood road-width cue. This set mixes cues suited solely to marked roads, cues suited solely to unmarked roads, and cues suited to both. Green lines indicate the hypothesis with the highest probability, while red lines indicate the mean of the samples. The top-left, top-right and bottom-left panels show different situations on a marked road, including shadows and passing cars, while the bottom-right panel shows the tracker working on an unmarked road.


Figure 2
FaceLAB is a driver monitoring system developed by Seeing Machines. It uses a passive stereo pair of cameras mounted on the dashboard of the vehicle to capture 60 Hz video images of the driver's head. These images are processed in real time to determine the 3D positions of matching features on the driver's face.
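Recovering a feature's depth from a rectified stereo pair follows the standard pinhole relation Z = f·B/d. The function below is a generic sketch of that relation; the focal length, baseline and disparity values are illustrative, not FaceLAB's actual calibration.

```python
def triangulate_depth(focal_px, baseline_m, disparity_px):
    """Depth (m) of a matched feature from a rectified stereo pair:
    Z = focal length (px) * baseline (m) / disparity (px)."""
    if disparity_px <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity_px

# e.g. 700 px focal length, 10 cm baseline, 70 px disparity
depth = triangulate_depth(700.0, 0.10, 70.0)
```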

The features are then used to calculate the 3D pose of the person's face to within ±1 mm and ±1°, as well as the eye-gaze direction to within ±3°, blink rates and eye closure.
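To use the reported gaze angles downstream (e.g. to test whether the driver is looking at the road), yaw and pitch can be converted to a unit direction vector. The axis convention below (z out of the camera, x right, y up) is an assumption for illustration, not FaceLAB's documented frame.

```python
import math

def gaze_vector(yaw_deg, pitch_deg):
    """Convert eye-gaze yaw/pitch (degrees) to a unit direction vector.
    Assumed convention: z points out of the camera, x right, y up."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.sin(yaw) * math.cos(pitch),
            math.sin(pitch),
            math.cos(yaw) * math.cos(pitch))

v = gaze_vector(0.0, 0.0)  # straight ahead along the camera axis
```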





Figure 3


