Figure 9.3. Burgett et al.'s (1998) three collision-avoidance zones: the lead vehicle is stationary (top), the lead vehicle stops before the host vehicle (middle), and the host vehicle stops before the lead vehicle (bottom). The arrows illustrate where the threat of collision is evaluated. In the first two scenarios, the algorithm evaluates the threat at the moment the host vehicle stops; in the bottom scenario, however, it evaluates the threat when the host and lead speeds are equal.
Like the Burgett et al. algorithm, the CAMP algorithm made the same two assumptions listed above and calculated the minimum collision-avoidance distance based on the three collision-avoidance zones. However, the CAMP algorithm differed from the Burgett et al. algorithm in two respects: the assumption of equivalent lead and host vehicle speeds was removed, and rather than using a fixed value (0.75 g) for the assumed host vehicle deceleration response, the deceleration was a function of other parameters. The function for estimating the host vehicle deceleration response was modeled on the last-minute braking data that they collected and is as follows (with speeds in ft/s and decelerations in ft/s²):
\[
a_{\text{host}} = -5.308 + 0.685\,a_{\text{lead}} + 2.570\,[\,v_{\text{lead}} > 0\,] - 0.086\,(v_{\text{host}} - v_{\text{lead}})
\]
where \(a_{\text{host}}\) is the assumed host vehicle deceleration, \(a_{\text{lead}}\) is the lead vehicle deceleration, \(v_{\text{host}}\) and \(v_{\text{lead}}\) are the vehicle speeds, and the bracketed term contributes 2.570 only when the lead vehicle is moving.
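To make the shape of this function concrete, here is a minimal sketch in Python. The function and variable names are hypothetical conveniences; the coefficients come from the equation above, and the example reproduces the 0.32 g value discussed later in this section.

```python
G = 32.2  # standard gravity, ft/s^2

def camp_required_decel(v_host, v_lead, a_lead):
    """CAMP-style assumed host deceleration (ft/s^2).

    v_host, v_lead: vehicle speeds in ft/s; a_lead: lead vehicle
    deceleration in ft/s^2 (negative when braking). A more negative
    result means harder assumed braking and hence a later warning.
    """
    lead_moving_term = 2.570 if v_lead > 0 else 0.0
    return (-5.308
            + 0.685 * a_lead
            + lead_moving_term
            - 0.086 * (v_host - v_lead))

# A host vehicle at 40 mph (58.7 ft/s) approaching a stationary lead:
print(camp_required_decel(58.7, 0.0, 0.0) / G)  # approx. -0.32 g
```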
The estimated host vehicle deceleration becomes greater (more negative) when the lead vehicle decelerates, when the lead vehicle is stationary, and as the closing velocity becomes greater. The fact that the threshold for braking should be greater (later) when the lead vehicle is decelerating at a greater rate appears counterintuitive. However, if one assumes that drivers are relatively insensitive to deceleration, it would follow that drivers would require a larger deceleration when responding to a decelerating lead vehicle: if human visual systems are insensitive to deceleration, higher rates of lead vehicle deceleration will likely be underestimated, and the later braking that results will require a larger braking rate to compensate. An alternative explanation could be that drivers' delayed reactions to the lead vehicle produced more urgent situations at higher lead-vehicle braking rates. If either explanation is true, it may reveal a weakness in the CAMP approach. Basing an algorithm on driver perception of threat, as indicated by last-minute braking responses, will mimic the performance limitations of the driver rather than maximally enhancing the detection of threatening scenarios. In other words, just because humans cannot perceive lead vehicle deceleration does not mean an algorithm should not. Given that over half of rear-end collisions involve decelerating lead vehicles (Najm et al., 2003), earlier detection of lead vehicle deceleration may be warranted.
In order to promote further algorithm development and the open flow of communication, the National Highway Traffic Safety Administration (NHTSA) developed a public-domain algorithm for the ACAS FOT program. This algorithm is documented in a report by Brunson, Kyle, Phamdo, and Preziotti (2002). Like the other algorithms discussed in this category, the imminent level of the NHTSA algorithm starts from the same three assumptions and divides the problem into Burgett et al.'s three collision-avoidance zones. Like the previous generation of NHTSA algorithm (Burgett et al., 1998), this algorithm uses a fixed assumed host vehicle braking response (-0.55 g); however, as in the CAMP algorithm, the assumption of equivalent lead and host speeds was removed, and many implementation details, such as nuisance-alarm reduction techniques, were added.
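To illustrate how the three collision-avoidance zones translate into a warning range, the sketch below reconstructs the basic kinematics. This is a simplified reconstruction under stated assumptions, not the documented NHTSA implementation: it assumes constant decelerations, a host vehicle that is closing on the lead, an assumed host braking response stronger than the lead's, and it omits all nuisance-alarm logic. The -0.55 g default and 1.5 s reaction time follow the values cited in this section; function and parameter names are hypothetical.

```python
import math

G = 32.2  # ft/s^2

def min_avoidance_range(v_h, v_l, a_l=0.0, a_h=-0.55 * G, t_rt=1.5, miss=0.0):
    """Minimum initial range (ft) for the host to avoid the lead vehicle.

    Assumes the host is closing (v_h > v_l) and that the assumed host
    braking a_h is stronger than the lead's a_l. v_h, v_l: speeds (ft/s);
    a_l, a_h: decelerations (ft/s^2, negative); t_rt: brake reaction
    time (s); miss: desired miss distance (ft).
    """
    # Host travel while stopping: reaction time at constant speed, then braking.
    host_stop_dist = v_h * t_rt + v_h**2 / (2.0 * -a_h)

    if v_l <= 0.0:                          # Zone 1: lead already stationary
        return host_stop_dist + miss

    lead_stop_time = v_l / -a_l if a_l < 0.0 else math.inf
    # Moment at which the host and lead speeds would become equal.
    t_eq = (v_h - v_l - a_h * t_rt) / (a_l - a_h)

    if t_eq >= lead_stop_time:              # Zone 2: lead stops before the host
        return host_stop_dist - v_l**2 / (2.0 * -a_l) + miss

    # Zone 3: the host reaches the lead's speed while both are still moving,
    # so the threat is evaluated at the moment the speeds are equal.
    host_dist = v_h * t_rt + v_h * (t_eq - t_rt) + 0.5 * a_h * (t_eq - t_rt)**2
    lead_dist = v_l * t_eq + 0.5 * a_l * t_eq**2
    return host_dist - lead_dist + miss
```

An imminent alert is then warranted whenever the measured range falls below the value this function returns.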
The algorithms that are based on kinematic constraints appear to model the range of pre-crash scenarios most comprehensively. Lead vehicle deceleration rate, host vehicle braking limitations, and driver reaction time are all represented in the equations. Not only does this type of algorithm model the reality of collision avoidance with the greatest fidelity of the three classes of algorithms, it also appears to map directly onto the underlying nature of what constitutes a collision threat (Kiefer et al., 1999). As a situation requires increasing levels of host vehicle braking, it becomes increasingly threatening. When the situation requires a level of host vehicle deceleration equal to the maximal level of braking possible, it is literally the last chance for the driver to brake in order to avoid a collision. Warning the driver after this moment would no longer prevent a collision (unless the driver could steer in such a way as to avoid it), though it could potentially reduce crash energy.
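The mapping from required braking to threat level can be shown with the simplest case: a stationary lead vehicle and an instantaneous driver response. The ratio formulation below is an illustrative device, not a construct taken from the cited algorithms; the -0.75 g capability figure follows Burgett et al.

```python
def braking_threat_ratio(v_closing, rng, a_max=-0.75 * 32.2):
    """Fraction of maximum braking needed to just avoid a stationary lead.

    v_closing: closing speed (ft/s); rng: current range (ft); a_max:
    maximum achievable deceleration (ft/s^2, negative). A value of 1.0
    marks the driver's literal last chance to avoid the collision by
    braking alone; beyond it, braking can only reduce crash energy.
    """
    a_required = -v_closing**2 / (2.0 * rng)  # constant-deceleration kinematics
    return a_required / a_max
```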
One potential limitation of this category of algorithm is that, like the time-to-collision algorithms, it does not detect any level of threat until either range-rate or range-acceleration is negative. Thus, if one assumes no minimum collision-avoidance range, the algorithm may register no threat even when the vehicles are mere inches apart, provided there is no relative closure. Although this situation is not registered as a threat by this type of algorithm, it is actually quite threatening: any sudden change in the situation, such as the lead vehicle braking, could make a collision unavoidable. This type of threat is not instantaneous but probabilistic in nature. Driving beyond the constraints of human brake reaction time represents a threat in that something may happen that leaves insufficient time for the driver to respond. This limitation could be remedied in several ways. One approach would be to adopt a minimum miss distance, such as the 6.67 ft used by Burgett et al. Rather than assessing the minimum range needed to just avoid a collision, the Burgett et al. algorithm assesses the minimum range needed to avoid the lead vehicle by 6.67 ft, which ensures an alert whenever range drops below 6.67 ft. The second-generation NHTSA algorithm also used this miss distance. The GM algorithm in the ACAS FOT also represented this probabilistic threat by providing a cautionary alert level that was based on the time-to-collision and time-headway algorithms. By providing a cautionary alert when drivers come too close to the lead vehicle, drivers are warned before an imminent alert becomes necessary, minimizing the chance that they will drive beyond their brake reaction time capability.
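A cautionary level of this kind might be sketched as follows. The headway and time-to-collision thresholds here are illustrative assumptions, not the GM ACAS FOT settings; the 1.5 s headway limit simply mirrors the brake reaction time discussed in the next paragraph.

```python
def cautionary_alert(rng, v_host, range_rate, headway_limit=1.5, ttc_limit=6.0):
    """Flag probabilistic threats that an imminent-level alert would miss.

    rng: range to the lead vehicle (ft); v_host: host speed (ft/s);
    range_rate: ft/s (negative when closing). Warns when the driver is
    following inside a brake-reaction-time headway, or when closure
    leaves little time before the conflict becomes imminent.
    """
    if v_host > 0 and rng / v_host < headway_limit:
        return True   # too close even with no current closure
    if range_rate < 0 and rng / -range_rate < ttc_limit:
        return True   # time-to-collision below the cautionary threshold
    return False
```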
Most algorithms appear to adopt a brake reaction time of approximately 1.5 s; however, these algorithms can differ greatly because of different choices of assumed host-vehicle braking rate. Burgett et al.'s algorithm employs a value of -0.75 g, which seems to represent the absolute maximum rate the vehicle can attain (given that this is the average rate after braking is first applied). Adopting a value this high may represent more of an effort to reduce crash energy than an attempt to prevent all rear-end crashes. At the other extreme is the CAMP algorithm, which tends to adopt smaller values (e.g., for a host vehicle approaching a stationary lead vehicle at 40 mph, the assumed host braking response would be 0.32 g). The CAMP required-deceleration function may therefore lead to an overly sensitive algorithm that produces an excessive number of nuisance alerts. Krishnan and Colgin (2002) used a Monte Carlo simulation to compare the GM ACAS FOT algorithm with the CAMP and NHTSA ACAS FOT algorithms and demonstrated that the CAMP algorithm had a greater probability of producing early and nuisance alerts.