Session C10

Disclaimer — This paper partially fulfills a writing requirement for first year (freshman) engineering students at the University of Pittsburgh Swanson School of Engineering. This paper is a student, not a professional, paper. This paper is based on publicly available information and may not provide complete analyses of all relevant data. If this paper is used for any purpose other than these authors’ partial fulfillment of a writing requirement for first year (freshman) engineering students at the University of Pittsburgh Swanson School of Engineering, the user does so at his or her own risk.
PEDESTRIAN AND NON-MOTOR VEHICLE DETECTION SYSTEMS WITH LIDAR FOR AUTONOMOUS VEHICLES
Robert Schippers, ros82@pitt.edu, Mena 1:00, Jeffrey Socash, jms570@pitt.edu, Mahboobin 4:00



Abstract — The purpose of this paper is to examine the use of Light Detection And Ranging (LiDAR) in obstacle detection systems in autonomous vehicles (AVs). Particular attention is given to methods being developed to detect non-motorized bodies such as pedestrians and cyclists, due to the unique variables they present compared to other vehicles. Given the increasing societal and professional relevance of AV development, we examine LiDAR’s current role in the development of pedestrian detection systems in autonomous vehicles as well as its implications for sustainable development. We begin with an overview of AV development and the role of LiDAR within it. We then survey currently proposed LiDAR-based detection methods for non-motorized bodies and critically examine the technical problems associated with them. This is followed by an examination of machine-learning techniques being implemented to decrease detection errors. Waymo, formerly Google’s self-driving car project, is used as a case study. We conclude with a discussion of current technical, social, and ethical problems associated with AVs and their possible solutions, reaching a qualified position on LiDAR’s role in future AV development.
Key Words — Autonomous Vehicles, LiDAR, Machine Ethics, Machine Learning, Object Classification, Obstacle Detection, Point Cloud, Region-of-Interest Proposal
INTRODUCTION: TRENDS IN AUTONOMOUS VEHICLE DEVELOPMENT
A significant trend in mechatronics over the past decade is the increasing digitization of automobile systems. This trend of digitization has led some researchers and companies to consider developing autonomous vehicles (AVs): vehicles that utilize obstacle detection systems and computer algorithms that allow them to navigate roadways without human input.

This development is not unprecedented; ideas of self-driving vehicles emerged as early as the 1960s. The most significant catalyst for AV development, however, was the Grand Challenge series, engineering challenges sponsored by the US Defense Advanced Research Projects Agency (DARPA). Starting in 2004, these challenges were designed to demonstrate the technical feasibility of AVs [1]. Today, several companies are investing in self-driving vehicle prototypes, including Google and Uber. This invigorated interest in AVs poses potential benefits not only from an engineering standpoint, but also from safety and economic perspectives.


Potential of Vehicle Automation in Minimizing Accidents
The main benefits of AV development lie in minimizing human error on the road and increasing general safety. The National Highway Traffic Safety Administration (NHTSA) reported 5.5 million total crashes per year in the U.S. as of 2013, with 93% having a “human cause” as the primary factor and 2.2 million resulting in injury or death [1]. Furthermore, of the 32 thousand reported fatal crashes, over 40% involved some combination of alcohol, drugs, distraction, and/or fatigue [1]. These figures do not even reflect other human shortcomings such as speeding, overaggressive driving, slow reaction time, and inexperience. Hypothetically, perfected autonomous vehicles could completely prevent these types of fatal crashes while minimizing injurious and non-injurious crashes.

In addition to vehicle crashes, human driving errors endanger non-motorized road users such as pedestrians and bicyclists. The CDC estimates that 900 bicyclists were killed in the US in 2013, with an estimated 494 thousand emergency department visits due to bicycle-related injuries [2]. It also estimates that 4,735 pedestrians were killed in traffic accidents in 2013, with 150,000 pedestrians treated in emergency departments for non-fatal injuries [2]. Many of these accidents happened in urban areas with high concentrations of vehicles on the road.

In addition to minimizing the dangers posed to human life, there is an economic benefit to minimizing crashes. As of 2014, vehicle crashes were estimated to have a total cost of $277 billion, approximately 2% of the U.S. GDP [1]. There is therefore an impetus to develop AVs from both moral and economic standpoints.
Sustainability Implications of AVs
AVs are also a topic of interest in the context of sustainability. Sustainability is a broad topic in engineering, and there are several ways to define it. One such way is quality of life: sustainability defined as quality of life covers the improvement and management of the well-being of the public. For AVs, the central focus of quality of life is the safety of pedestrians as well as those inside the vehicle. This can be pursued in multiple ways, such as improving detection technologies in AVs, educating the public on the growing presence of AVs, and establishing proper ethical codes for the responsibilities of AVs. There are also considerable quality-of-life benefits in the long-term potential of saving work-hours by automating driving.

Implementation of AVs could also see a rise in productivity. According to the American Automobile Association’s (AAA) 2015 American Driving Survey, the average driver makes 2.1 driving trips per day, driving an average of 29.8 miles and spending an average of 48.4 minutes on the road. Those who drive every day make an average of 3.1 trips, driving an average of 43.2 miles over 70.2 minutes [3]. This amounts to an average of over 17.6 thousand minutes per year, equivalent to about seven 40-hour work weeks. Automating driving could free up these hours and allow for a possible increase in work performance, providing economic and quality-of-life benefits. In addition to freeing up work-hours, autonomous vehicles carry a capacity to reduce traffic congestion. Texas Transportation Institute researcher David Schrank predicts that by 2020, U.S. travelers will waste about 8.4 billion total hours in congested traffic and 4.5 billion gallons of fuel, for a total economic cost of $199 billion, accounting for both normal traffic delay and congestion caused by safety failures and crashes [1]. Furthermore, by letting the car drive and recall itself to its owner’s location, there is the potential to save thousands in annualized costs by moving parking spaces to less dense suburban areas [1]. By removing the element of human error in automobile transport, there is the potential for massive streamlining that could increase quality of life and reduce the economic waste caused by human error.


THE PROBLEM OF NON-MOTORIZED OBJECT DETECTION
One major problem that inhibits the development of sophisticated detection systems is the detection of non-motorized bodies such as pedestrians on foot and on bicycles. Detecting non-motorized bodies is significantly more difficult than detecting motorized vehicles for several reasons. First, there are significant physical dissimilarities between non-motorized bodies and motorized bodies such as cars and trucks. Furthermore, variance in size, posture, lighting, and travelling speed among non-motorized bodies complicates classification by introducing a wide variety of variables to consider.

The key assumption that must be considered is that a pedestrian is not outfitted with any technology capable of transmitting position information, unlike a motor vehicle. Therefore, collision warning systems for AVs must be developed without relying on communication between the vehicle and the obstacle.

With robust non-motorized object detection as our goal, we must first establish the technology being used to achieve it. We shall examine the implementation of LiDAR technology in the development of these detection systems.
LIDAR: THE SCANNER TECHNOLOGY BEHIND THE DETECTION SYSTEMS
One such means of vision-based detection is LiDAR. LiDAR operates on a similar principle to sonar and radar. A LiDAR unit operates by emitting pulsed beams of laser light at its surroundings. These beams are typically near-infrared, with a longer wavelength and lower frequency than visible light. The primary concept behind this method of detection is that the distance between the detector and surrounding objects can be calculated using two values: the speed of light (approximately 300,000 km/s) and the time a light pulse takes to reach the object and return to the sensor. The typical LiDAR system consists of four main components: lasers, optic scanners, a photodetector, and a navigation and positioning system [4]. The ranging itself is performed by the laser and photodetector.
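To make the arithmetic concrete, the following is a minimal sketch of the time-of-flight calculation just described. It is illustrative only; the 667-nanosecond round-trip time is a value we chose so that the result lands near 100 meters.

```python
# Minimal sketch of the time-of-flight ranging principle: the pulse travels
# to the object and back, so the one-way distance is half of
# (speed of light x round-trip time).

C = 299_792_458  # speed of light in m/s

def range_from_round_trip(round_trip_s):
    """Distance to the reflecting surface, in meters."""
    return C * round_trip_s / 2.0

# Illustrative value: a pulse returning after ~667 ns implies ~100 m.
print(range_from_round_trip(667e-9))  # ~99.98
```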

By repeating this pulse-and-measure cycle at a high enough frequency (as many as 150,000 pulses of light per second in some systems) [4], the LiDAR system can return an array of data for analysis. This data is arranged in a “point cloud” format: a 3-dimensional array of points, each containing x-, y-, and z-coordinates relative to a chosen coordinate system [4]. This allows the scanner to create a “map” of its surroundings, as seen in Figure 1. This data, as we will observe later, can be used to determine regions of interest (ROIs) and serve as a data model for intelligent machine learning.



FIGURE 1 [4]

An example of a 3D “map” generated from LiDAR data.
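The point-cloud format described above can be sketched in a few lines. The sketch below assumes, for illustration, that the sensor reports each return as a range plus beam angles, which is typical of spinning scanners but simplified here; the conversion to (x, y, z) is standard spherical-to-Cartesian geometry.

```python
import numpy as np

def return_to_point(r, azimuth, elevation):
    """Convert one return (range in meters, beam angles in radians)
    into an (x, y, z) point in the sensor's coordinate system."""
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return (x, y, z)

# A full scan becomes an N x 3 "point cloud" array, one row per return.
cloud = np.array([return_to_point(10.0, a, 0.05)
                  for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
print(cloud.shape)  # (8, 3)
```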
LiDAR is advantageous for object detection because it supplies its own laser illumination, allowing it to function outside the visible light spectrum. Recall that non-motorized object detection is complicated in part by the difficulty of identifying poorly-illuminated bodies; LiDAR sensors almost entirely mitigate this disadvantage. LiDAR provides an accurate means of discerning a three-dimensional map of surroundings, with attention given to depth. This makes LiDAR a valuable component in the development of versatile, robust detection systems in tandem with other devices such as cameras and GPS.
DETECTION SENSORS PAIRED WITH LIDAR
The main advantage of using LiDAR in conjunction with current detection sensors is to aid in pedestrian detection. Detecting pedestrians is difficult, especially because of the unpredictability of a pedestrian’s appearance due to body figure, clothing color, and contrast against the environment. According to Chinese National University of Defense Technology researchers Wang Jun, Tao Wu, and Zhongyang Zheng, “[t]he main advantage of laser scanners is the reliability of its detections and the capability of working under different lighting conditions” [5]. The researchers demonstrated that laser scanners such as LiDAR hold an advantage over other detection systems in detecting objects across an array of lighting conditions. However, LiDAR cannot be employed on its own due to decreased clarity over far distances and in poor weather conditions [5]. This is why LiDAR is being implemented alongside current boosted detection systems: the strengths of each subsystem cover the weaknesses of the other, creating a reliable, fast detection system overall.
LiDAR’s Involvement in the Detection Process
In the general layout produced by Jun, Wu, and Zheng, LiDAR is used in the first step of pedestrian detection; its role is to detect and localize the objects in its surroundings. This is performed in stages. The first stage is the construction of a 3D layout of the surroundings, achieved with a Velodyne HDL-64E sensor [Figure 2] that spins 64 laser diodes at rates of up to 15 Hz across a 360-degree horizontal field of view [5]. The layout is then taken to the next step, which evaluates the surroundings to assess whether there are any potential pedestrians in the area. Sliding-window detection is used for this step: a fixed-size window scans through the constructed layout in search of features that correspond to a pedestrian [5]. One can envision this process as running the program over a small area to see if the features the system observes match those of a pedestrian. The associated features are then processed with the Hungarian algorithm to create a frame-by-frame data association. The Hungarian algorithm solves an assignment problem in polynomial time, which in this case is matching detections across frames of the 3D layout.

FIGURE 2 [5]

Velodyne’s HDL-64E LiDAR laser scanner
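The cited paper does not publish its code, but the Hungarian algorithm itself is standard, so the kind of frame-by-frame association it enables can be sketched. The sketch below uses SciPy’s implementation of the algorithm, and the (x, y) detection centroids are synthetic values of our own, not data from [5].

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

# Synthetic (x, y) centroids of detections in two consecutive frames.
prev_frame = np.array([[2.0, 1.0], [8.0, 3.0], [5.0, 7.0]])
curr_frame = np.array([[8.3, 3.1], [2.2, 1.1], [5.1, 7.4]])

# Cost of associating detection i (previous frame) with detection j
# (current frame): Euclidean distance between their centroids.
cost = np.linalg.norm(prev_frame[:, None, :] - curr_frame[None, :, :], axis=2)

# Minimum-cost one-to-one matching, found in polynomial time.
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"track {i} -> detection {j} (distance {cost[i, j]:.2f})")
```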
Compensating for Time Inefficiency
This system of steps proves useful; however, it lacks time efficiency and is not effective for real-time use, which is a necessity. Jun, Wu, and Zheng propose a partial solution to this problem; real-time detection is addressed in more detail in a later section.

The proposed method to increase the speed of the LiDAR detection system is to assume the regions of interest are going to be on the ground [5]. In the case of pedestrian detection, it is safe to say that most pedestrians will be on the ground. The flaw in this method is that people on bikes and other non-motorized vehicles may not be accounted for or detected by the LiDAR detector. However, assuming every ROI will be on the ground saves a large amount of time.

The general process for making up the time inefficiency is to have the LiDAR subsystem scan only at ground level. In doing this, the sliding-window technique can work over a smaller area, which increases the overall speed of the subsystem. If every pedestrian were at ground level, this method would work smoothly; however, we know this is not always the case. Alternative strategies must be employed to achieve the real-time detection required to bring LiDAR technology to an autonomous vehicle on the road.
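A minimal sketch of this ground-level restriction follows. The height thresholds are illustrative values of our own, not figures from [5], and the sketch assumes z is measured from an estimated road plane.

```python
import numpy as np

def ground_band(cloud, z_min=-0.3, z_max=2.2):
    """Keep only points within a pedestrian-height band above the road.

    cloud: N x 3 array of (x, y, z) points, with z measured from an
    estimated road plane. Thresholds are illustrative, not values from [5].
    """
    z = cloud[:, 2]
    return cloud[(z >= z_min) & (z <= z_max)]

# Restricting the sliding window to this band shrinks the search space.
cloud = np.random.uniform(-1.0, 10.0, size=(10_000, 3))
print(len(ground_band(cloud)), "of", len(cloud), "points left to scan")
```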
Pairing with LiDAR Promotes Sustainability
Through the pairing of LiDAR with detection sensors, the pedestrian detection process is made more efficient and more reliable. LiDAR boosts the detection process and can therefore detect pedestrians across a wider range of lighting conditions. This improved detection promotes the safety and quality of life of pedestrians, bikers, and those inside the vehicle.

Given the skepticism about the safety of unmanned vehicles and their reliability in accident prevention, LiDAR’s ability to add accuracy to pedestrian detection is a monumental boost to the credibility of AVs’ sustainability. With the ability to locate pedestrians more accurately, this feature can be refined to operate at greater speeds and ensure safety more efficiently.


Transitioning to Real-Time Detection
These are the contributions LiDAR makes to pedestrian detection when combined with boosted detection sensors. LiDAR’s adaptability to a variety of lighting conditions greatly aids the efficiency of pedestrian detection. LiDAR’s capabilities are pushed to their limits, however, when real-time operation becomes an issue, because high speeds must be met with complete coverage of pedestrian detection.
REAL-TIME DETECTION WITH LIDAR
Now that LiDAR has an established position in pedestrian detection, real-time detection is a factor that tests the efficiency of the subsystem. According to Nanjing University of Science and Technology engineers Xiaofeng Han, Jianfeng Lu, Ying Tai, and Chunxia Zhao, factors such as the previously stated unpredictability of a pedestrian’s appearance, combined with the varying contrast of different environments, can cause problems, especially when unmanned ground vehicles (UGVs) are required to work at real-time speeds [6]. These UGVs range from military-grade machinery to the autonomous car, and the capability of detecting objects, especially pedestrians, in real time is paramount to their proper operation.
Steps of Gathering LiDAR Data
The usefulness of the LiDAR subsystem in this area is the ease of locating ROIs by grouping points in the LiDAR 3D point cloud. As previously stated, LiDAR’s 3D sensor technology is being fused with vision-based sensors to improve upon the sliding-window detection technique. In the pedestrian detection method presented by Han, Lu, Tai, and Zhao, there are several steps based around the data gathered by LiDAR. It begins with the layout gathered from LiDAR’s point cloud. Once gathered, all of the points that correspond to the ground are erased. Of the remaining points, the areas with high concentrations of points are designated as ROIs and potential pedestrians [Figure 3] [6]. By erasing the points associated with the surrounding ground, the locations of the ROIs are made clearer and easier to locate at real-time speeds.
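The paper does not specify the exact grouping method, so the following sketch stands in with a simple height threshold for the ground-erasure step and DBSCAN density clustering for the point-grouping step. All parameter values and the synthetic point cloud are our own illustrative choices.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def roi_candidates(cloud, ground_z=0.2, eps=0.5, min_points=20):
    """Erase near-ground points, then group the rest by spatial density.

    Each dense cluster of remaining points becomes an ROI / potential
    pedestrian. DBSCAN is a stand-in here; [6] does not name the exact
    clustering method.
    """
    above = cloud[cloud[:, 2] > ground_z]          # erase ground returns
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(above[:, :2])
    return [above[labels == k] for k in set(labels) if k != -1]  # -1 = noise

# Synthetic scene: one person-sized blob of points standing on flat ground.
cloud = np.vstack([
    np.random.normal([5.0, 2.0, 1.0], 0.15, size=(200, 3)),        # "person"
    np.random.uniform([-10, -10, 0], [10, 10, 0.1], (2000, 3)),    # ground
])
print(len(roi_candidates(cloud)), "candidate ROI(s)")
```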

The next step is transferring the LiDAR cloud points to an image plane and replacing the areas with no LiDAR points with a black background known as a “black mask” [6]. In doing this, the cloud points are placed onto an area that can be easily manipulated. After the points are relayed, a black mask is applied to the areas with no LiDAR points, meaning that areas that do not represent regions of interest are replaced with black to clearly identify the area of the potential pedestrian.
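A sketch of the projection and black-mask steps appears below. It assumes a 3x4 camera calibration matrix P is available from LiDAR-camera calibration, and the pixel-dilation radius is an illustrative choice of ours rather than a value from [6].

```python
import numpy as np

def project_to_image(points_xyz, P):
    """Project N x 3 LiDAR points into pixel coordinates using a 3x4
    calibration matrix P (assumed known from LiDAR-camera calibration)."""
    homog = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    uvw = homog @ P.T
    return uvw[:, :2] / uvw[:, 2:3]          # (u, v) pixel coordinates

def black_mask(image, pixels, radius=2):
    """Keep the image only where LiDAR returns project; black elsewhere."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    for u, v in pixels.astype(int):
        if 0 <= v < mask.shape[0] and 0 <= u < mask.shape[1]:
            mask[max(v - radius, 0):v + radius + 1,
                 max(u - radius, 0):u + radius + 1] = True
    return np.where(mask[..., None], image, 0)  # zeros = the "black mask"
```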

Once the process of focusing in on the ROI is complete, the portion of the process dedicated to computing the cloud points takes place. This is mainly done with a combination of the Histogram of Oriented Gradients (HOG) sliding-window method stated earlier and other LiDAR-based features [6]. These methods are then combined with a support vector machine (SVM) classifier, which involves LiDAR’s role in pairing with machine learning.

FIGURE 3 [5]

ROI of a pedestrian, produced by Jun et al.’s HOG/SVM classification system
Description of HOG and LiDAR Subsystems
The HOG subsystem starts by taking the image, breaking it up into boxes, and breaking the boxes up into cells. It then takes the cells of an individual box and connects them into a single vector, known as the HOG descriptor of the box. The same steps are taken for each box, and the box vectors are connected into one vector to form the HOG descriptor of the entire image [6]. The reason for transferring all of the data into vector form is so that it can later be processed by the SVM.
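This cells-to-boxes-to-vector pipeline can be sketched with scikit-image’s reference HOG implementation, where the paper’s “boxes” correspond to HOG blocks. The window size and parameters below are standard pedestrian-detection defaults, not necessarily the values used in [6], and the window content is random.

```python
import numpy as np
from skimage.feature import hog  # reference HOG implementation

# A 128 x 64 grayscale window (the classic pedestrian-window size).
window = np.random.rand(128, 64)

descriptor = hog(
    window,
    orientations=9,            # gradient-direction bins per cell
    pixels_per_cell=(8, 8),    # the "cells"
    cells_per_block=(2, 2),    # cells grouped into blocks ("boxes"), then
)                              # all block vectors concatenated into one
print(descriptor.shape)        # a single 1-D descriptor for the window
```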

The problem with this HOG descriptor process, and the reason for implementing LiDAR into the system, is the interference of the image’s background with the data in the vector. This is where the process of black masking comes into play. By erasing the regions of the image that do not correlate to the region of interest, the cells and boxes collected for these areas are registered in the vector as zeros. This dramatically reduces the interference of background textures on the calculation of the image. This new descriptor gathered from the LiDAR implementation is called LGHOG (LiDAR Guide HOG) [6].

On the LiDAR side of the subsystem, Han, Lu, Tai, and Zhao explain that numerous point cloud operators have been proposed for use; however, all of them require a large number of cloud points to operate, which is not beneficial in the pedestrian detection process [6]. The issue is that acquiring a large number of cloud points requires the object to occupy a large region of the scan, yet the farther a pedestrian is from the LiDAR sensor, the fewer points it returns and the more difficult it is to scan. So, for the case of pedestrian detection, a simpler version of the LiDAR subsystem is used, one that still appropriately identifies the pedestrian of interest.
Combining LGHOG and LiDAR and Processing Them in SVM
After the LGHOG descriptor vector and LiDAR features are prepared, they are combined into one vector. To do this, all the images behind the LGHOG descriptors are resized to the same pixel dimensions; the descriptors are then broken down into smaller pieces and assembled into one final vector. The vector is then in final form to be entered into the SVM and processed through machine learning.
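The final concatenate-and-classify stage can be sketched with scikit-learn’s linear SVM. The descriptor dimensions, feature values, and labels below are synthetic stand-ins, not data or parameters from the cited work.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins for the two descriptor families: one LGHOG vector and one
# LiDAR-feature vector per candidate ROI (all values synthetic).
lghog = rng.random((200, 3780))
lidar_feats = rng.random((200, 64))
labels = rng.integers(0, 2, 200)          # 1 = pedestrian, 0 = background

X = np.hstack([lghog, lidar_feats])       # "combined into one vector"
clf = LinearSVC(C=1.0).fit(X, labels)     # the SVM classification stage
print("pedestrian?", bool(clf.predict(X[:1])[0]))
```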

The result of the steps presented by Han, Lu, Tai, and Zhao is that LiDAR-infused technology can detect and compile data on surrounding pedestrians at real-time speed. This is extremely important due to the need for precise timing in locating pedestrian movements while a vehicle is in motion.


Real-Time Speeds Increase Sustainability
Through efficiency techniques applied to the LiDAR detection process, the overall speed of the subsystem increases. This increase in speed directly correlates to the safety and well-being of pedestrians and persons inside the vehicle. As reported by the NHTSA, there were 35,092 motor vehicle traffic fatalities in the United States in 2015, 2,348 more than the 32,744 deaths in 2014. This 7.2-percent increase is the largest percentage increase in nearly 50 years [7].

After a general downward trend in vehicle traffic fatalities since the early 2000s, there has been a recent spike in these deaths. In the field of AVs, LiDAR provides a solution to help suppress the threat of crashes due to faulty pedestrian detection. Through improved real-time detection speeds, the skepticism that comes with trusting an unmanned vehicle on the road can be eased, and the goal of providing a safer experience on the road can be achieved.


MACHINE LEARNING WITH LIDAR DATA
These LiDAR systems can be further enhanced through the integration of computer vision and machine learning algorithms to “teach” the detection system to identify non-motorized objects without needing to manually classify ROIs. A reliable computer vision algorithm for self-driving cars could reduce the need for intensive vehicle-to-vehicle communication systems and pre-designed routes while generally decreasing detection error rates.
Machine Learning’s Role in AV Sustainability
Implementation and perfection of machine learning algorithms in AV detection systems is a vital step toward increasing their sustainability. Currently, much of the data classification in prototype AVs is done manually. This includes both programming road maps and hazards and manually defining parameters for obstacles.

Machine learning aims to solve this problem: by exposing a learning algorithm to enough pre-defined data, the algorithm will ideally be able to classify not only the pre-defined data on its own, but also any future data it is exposed to. This is invaluable to AV navigation and non-motorized body detection. If the algorithm can adapt to changes in pedestrian trends and patterns, AVs with higher degrees of autonomy can be produced, minimizing the need for human-vehicle interaction. Furthermore, if the vehicle can adapt dynamically to its environment, the need to pre-program routes and closely monitor non-traffic conditions will decrease, increasing the vehicles’ efficiency.


Fast Regional Convolutional Neural Networks (Fast R-CNN)
University of Texas at Austin ECE researchers Taewan Kim and Joydeep Ghosh look to develop a “fast regional convolutional neural network” system to classify data with greater efficiency and smaller error rates. This type of detection system relies on the mathematical principle of convolution. In functional analysis, convolution is an operation through which two functions are combined to produce a unique third function that integrates aspects of both. Convolution is invaluable to modern signal and image processing: when applied to a large array of signals, convolution can produce a unique signal that contains approximated shared properties of the input signals [8].

It is this principle that forms the basis of convolutional neural networks (CNNs). CNNs are a special kind of neural network algorithm that takes in images and passes them to “artificial neurons”: mathematical functions designed to simulate the signal and input processing of the human brain. Convolutional neural networks take in images as 3D volumes: the first two dimensions are the image’s width and height, with the third “depth” dimension consisting of the image’s color channels (RGB, which is used in the KITTI car, has 3 channels) [9].
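To make the convolution step concrete, the sketch below slides a single filter over an H x W x 3 input volume. It is a bare-bones illustration: real CNNs learn many such filters and add nonlinearities, and, as is conventional in deep learning, the “convolution” is implemented as cross-correlation.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one kernel over an H x W x C image (valid padding, stride 1).

    image:  H x W x C array (e.g. C = 3 for an RGB input volume)
    kernel: kH x kW x C array; output is one 2-D feature map.
    """
    H, W, _ = image.shape
    kH, kW, _ = kernel.shape
    out = np.empty((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW, :] * kernel)
    return out

image = np.random.rand(32, 32, 3)        # a tiny RGB input volume
kernel = np.random.rand(5, 5, 3)         # one filter (random, not learned)
print(conv2d(image, kernel).shape)       # (28, 28) feature map
```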

The problem with a general convolutional neural network, however, is that it requires a large amount of processing power to scan an entire image. This is the problem that the R-CNN model seeks to remedy. This model first passes camera and LiDAR input through a region proposal network (RPN). In general, these networks take a full image as input and output a series of rectangular regions. Each region is assigned an “objectness” score, a measure of how likely the region is to contain an object of some class rather than background [10]. This “objectness” measurement allows an object detection algorithm to prioritize certain ROIs, enabling a divide-and-conquer approach and the quicker image recognition vital for real-time detection.
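A minimal sketch of this objectness-based prioritization follows. The boxes and scores are synthetic, and a production RPN would also apply non-maximum suppression, which is omitted here.

```python
import numpy as np

def top_proposals(boxes, objectness, k=3):
    """Rank region proposals by objectness and keep the k best.

    boxes: N x 4 array of (x1, y1, x2, y2) rectangles; objectness: N scores.
    The classifier then examines only these top regions, which is the
    divide-and-conquer speedup described above.
    """
    order = np.argsort(objectness)[::-1]     # highest score first
    return boxes[order[:k]], objectness[order[:k]]

boxes = np.array([[0, 0, 40, 80], [100, 20, 160, 140],
                  [50, 50, 70, 90], [10, 60, 90, 200]])
scores = np.array([0.12, 0.91, 0.35, 0.78])  # synthetic objectness scores
print(top_proposals(boxes, scores, k=2))
```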

Kim and Ghosh propose a model that combines LiDAR input, camera images, and RPN and CNN algorithms. The RPN extracts ROIs from LiDAR data and RGB images and projects the LiDAR data onto images’ coordinate systems, creating a hybrid, three-dimensional LiDAR-image vector system to be passed to the CNN. The CNN then assigns a value to each ROI that signifies the probability of each being classified as a pedestrian or a cyclist [11].


Thorough Benchmarking with the KITTI Dataset
A common benchmark dataset used in recognition algorithm research is the KITTI dataset, produced by the Karlsruhe Institute of Technology (KIT). KIT developed the set with attention to the following tasks of interest: stereo, optical flow, visual odometry, 3D object detection and 3D tracking [12]. KITTI proves particularly useful for non-motorized body analysis because it includes data labels for cyclists, unlike previously-used sets [11].

To provide accurate ground-truth data, the data-collection vehicle was outfitted with an internal GPS and a Velodyne HDL-64E LiDAR laser scanner. This scanner rotated at 10 frames per second, producing approximately 100 thousand points per cycle [12]. Additionally, the vehicle was outfitted with 2 grayscale cameras, 2 color cameras, and 4 varifocal lenses to match the vehicle’s scanner data with video of its surroundings.

Kim and Ghosh tested their algorithm by running it on the KITTI dataset’s pedestrian, car, and cyclist data. They were able to obtain as high as 80.01% average precision on vehicles classified as “hard” to detect by KIT, using R-CNN and LiDAR data in conjunction. However, they were only able to obtain maximum average precisions of 52.58% on “hard” cyclists and 71.49% on “hard” pedestrians [11], suggesting the need for improvement before self-learning algorithms can be implemented as a replacement for pre-programmed routes and mapping. Nonetheless, it is an important step forward in increasing the efficiency of AV development and operation.
GOOGLE/WAYMO’S SELF-DRIVING CAR: A CASE STUDY FOR LIDAR AVS

Significant strides in LiDAR obstacle detection have been made in controlled academic environments. However, the real test of performance and viability comes when these prototypes leave the lab and are tested in real-world driving environments. Waymo, an extension of Google’s self-driving vehicle project, has consistently utilized LiDAR in its prototypes since 2009, and made history when it developed the first wholly self-driving prototype to complete a fully autonomous trip on public roads in October 2015 [13]. Its progress, as well as its limitations, allows it to serve as a case study for the development of autonomous detection systems.

It is erroneous to simply use the term “self-driving car” when referring to this project; Google and Waymo have gone through several iterations of prototypes since the project’s inception in 2009. The project has been consistent, however, in its use of LiDAR as part of the detection process. Current models utilize the Velodyne HDL-64E laser scanner, the same unit used by the KITTI testing vehicle. This device utilizes 64 channels of light to capture 2.2 million points per second across a 360-degree, 120-meter range [14].

Waymo does not publish its research, so it is difficult to discuss in extensive technical detail. From videos released, it can be discerned that the vehicle can detect stationary objects such as traffic cones, motorized vehicles, cyclists, and traffic controls such as stoplights and railroad crossings along a predetermined route. Each of these bodies is divided into its own particular class; it is unknown to what extent these classes were defined manually by Waymo or determined through machine learning. However, the vehicle can stop when required, yield to motorized vehicles attempting to merge into its lane, avoid stationary obstacles, and even allow cyclists to merge when they extend an arm to signal a turn, as shown in Figure 4 [13]. Though its exact functions are unknown, Waymo stands out as an active development in LiDAR-based navigation.



FIGURE 4 [13]

A still from Waymo’s published urban navigation video displaying detection of and yielding to a signaling cyclist
CURRENT WEAKNESSES AND LIMITATIONS OF LIDAR-BASED AVS
As discussed, LiDAR is invaluable in joint-technology detection systems for its capability to assign 3D vector data to otherwise depthless camera data. It has shown promise when used in tandem with other positioning devices and algorithms. However, there are certain weaknesses and limitations associated with LiDAR in its current state that pose challenges to wide-scale implementation. Problems of chief concern include LiDAR’s susceptibility to natural and artificial interference, cost-efficiency, the current need for route pre-programming, and susceptibility to cyber-attacks.
Weaknesses of LiDAR Detection Technology
LiDAR photodetection in AVs relies on light photons being able to complete their round trip to and from their surroundings. However, LiDAR has difficulty returning accurate data when conditions such as fog and heavy weather are present [11].

In addition to natural problems, there is the need to consider the extent of human interference with these detection systems. In 2015, University College Cork computer security researcher Jonathan Petit managed to spoof an IBEO Lux LiDAR unit by recording the pulses of light generated by the unit and generating pulses with a synchronized frequency using a laser and a pulse generator that cost only $60 to assemble. Petit reported that he could spoof the Lux at distances up to 100 meters away without even needing to target the sensor directly [15]. He reported that his device only worked on the particular Lux unit, but a precedent has nevertheless been established regarding LiDAR’s vulnerability to spoofing, and systems to avoid such manipulation must be developed and implemented in the future as a precaution.


Current State of LiDAR Cost-Efficiency
The current manufacturing costs involved in the production of LiDAR photodetectors also pose a problem for the eventual commercialization of self-driving vehicles. The Velodyne HDL-64E, for instance, costs upwards of $75 thousand per unit. Velodyne announced in late 2016 the development of its VLP-32A sensor, with a target mass-production cost of $500, two orders of magnitude cheaper than the previous generation of “puck” sensors [16]. However, this goal is still far from being realized.

Velodyne has announced plans to develop solid-state LiDAR detection systems that do not require the large, expensive, rotating “pucks” traditionally used, which could potentially address the cost-efficiency problem. However, this development was only announced in December 2016, and Velodyne admits it has not been able to develop a solid-state system that offers the full 360-degree range of its previous models, so it could take years until a satisfactory, cost-effective solution is developed [17].


Need for Pre-Planning and Route Mapping
Waymo is one of the most sophisticated attempts at LiDAR-based navigation put forward by a high-profile technology company in the past few years. However, it is not without its flaws. One such flaw is the need for extensive preparation and pre-programming of road systems. Currently, its routes must be planned beforehand; it would not be able to detect, for instance, a construction zone that appeared overnight. So far, the only non-obstacle variables Waymo can account for are stoplights and intersection stops [18]. The navigation potential of AVs is severely bounded when they are limited to mapped routes. This is a problem that may be remedied in the future as computer vision and neural network algorithms improve, but that is still a distant goal, as discussed by Kim and Ghosh.
Susceptibility to Cyber-Attacks
It is important to recognize that AVs, being a combination of automobile and computer technology, are vulnerable to computer issues such as security breaches and cyber-attacks, which may pose danger in public driving scenarios. In an article published on ScienceDirect, University of Utah and University of Texas at Austin civil engineering researchers Daniel Fagnant and Kara Kockelman posit a hypothetical two-stage security breach wherein AV navigation systems are infected with malicious code over a span of weeks or months and then ultimately ordered to disobey the law, such as by suddenly accelerating to 70 mph and veering left [1]. In addition to the potential for directly malicious attacks, there is the potential that vehicle-to-vehicle communication systems could be hijacked for unauthorized access to user data.

In either case, there is a need to develop thorough, wide-scale malware defense systems for AV navigation and communication systems before fully automated vehicles can be perfected and implemented. A 2016 International Data Corporation forecast predicts worldwide revenue for the development of cybersecurity systems to reach $101.6 billion by 2020, up 38% from the $73.7 billion spent in 2016 [19]. This problem is not insurmountable. Unlike computer malware protection systems, which largely arose reactively in response to existing viruses, AV protection systems can begin development with these threats in mind and prepare accordingly. However, it is an issue that requires careful monitoring as development continues.


EMERGENCY DECISION-MAKING: THE NEED FOR AUTONOMOUS VEHICLE ETHICS
As demonstrated, LiDAR obstacle detection poses a major step forward in the development of AVs from a technical level. However, engineering does not exist in a vacuum. In addition to evaluating AV development from a technical standpoint, we must view possible inhibitions from social and ethical standpoints.

AVs still generally attract scrutiny and skepticism from the public. A March 2016 survey by the American Automobile Association states that three out of four Americans say they would feel “afraid” to ride in self-driving cars, and only one in five would trust a self-driving vehicle to safely transport them [20].

This general skepticism toward AVs opens up another discussion: the need for autonomous vehicle ethics research. What are the implications of AVs acting autonomously in emergency situations where avoiding danger is virtually impossible?
The Problem of Vehicle Decision-Making
It is nearly impossible to guarantee that autonomous vehicles will never be subject to risk of accident, particularly in the presence of non-technological, non-motorized bodies as well as human drivers. Thus, an obstacle detection system must be complemented by an algorithm that directs the vehicle when confronted with such a situation.

In their current state, AV decision-making algorithms are not entirely able to avoid accidents. Though most accidents involving Waymo’s self-driving car were the fault of external human drivers, there was an instance in February 2016 where the car’s software caused it to strike a bus while attempting to avoid a pile of sandbags in the middle of the road. Unlike in previous crashes, Waymo was forced to take direct responsibility for causing the incident [21].

This incident raises a troubling question: how much information should an AV manufacturer have to disclose about the algorithms in its vehicles’ code? Would a consumer have to wager that the vehicle will not endanger them, on the strength of an algorithm they cannot inspect? Furthermore, could a significant flaw in the algorithm surface only after a certain set of conditions outside of normal testing is met? What constitutes an “acceptable” level of risk and disclosure is subjective; how should such a standard be enforced? This is a debate that ethicists and lawmakers must engage in if AVs are ever to integrate into the public.

Currently, there is no way to know the whole extent of risk involved when an AV is on the road, and it is likely impossible for AV manufacturers to imagine the full extent of risk-bearing situations, an unfortunate truth that does little to ease the social skepticism toward AVs.


Moral Ambiguity: Balancing Law and Safety
One may assume that programming the vehicle to strictly adhere to the law would be satisfactory. However, there are variables in every situation that complicate self-driving beyond simply “follow the law”. Ethical situations abound in the average commute: even something as simple as deciding to let a cyclist pass constitutes an ethical decision, states Transportation Research Scientist Noah Goodall. These actions, Goodall claims, constitute a transfer of risk from one party to another [22]. Would it be considered ethical for a machine to impart risk on other vehicles or non-motorized bodies, simply for the sake of obeying written laws?

In its current state, legislation regarding road laws is not designed with algorithm-based self-driving vehicles in mind. Indeed, even a situation as simple as driving around a branch on the road rather than waiting for it to be cleared constitutes a deliberate human act that is not directed by law, and therefore an act an AV would not know to do from an algorithm designed under the framework of current laws [23]. Therefore, AVs face a legal barrier in addition to an ethical one, and careful attention will need to be given to ensure the AV can obey objective law without compromising the safety of its surroundings.


CONCLUSIONS: THE FUTURE OF AUTONOMOUS VEHICLE DEVELOPMENT AND LIDAR’S PLACE IN IT
As we have discussed, there are still several technical and ethical barriers that must be overcome before wide-scale implementation of self-driving vehicles can become a reality. Indeed, there is a wide array of both known issues and hypotheticals that cloud the future of AVs. However, we do not believe these barriers are insurmountable.

We have identified various technical problems associated with AVs that adopt a LiDAR-based model. Indeed, there are exploitable weaknesses that arise from a system that relies solely on LiDAR for imaging. However, as discussed, many of the prototype AVs in service today rely on a wide array of sensors and technologies in tandem, meaning LiDAR plays a particular role where its strengths are utilized and its weaknesses are mitigated.

As we found in our research, current methods of object detection without the use of technical communication are not yet close to satisfactory. Kim and Ghosh’s model, for instance, still returns low detection rates for non-motorized objects. However, as research on computer vision and neural networks increases over the next decade, we expect that these systems may one day be sophisticated enough for implementation in an AV prototype.

We believe that, from a technical standpoint, the LiDAR- and computer vision-based AV has immense development potential and is a viable option to solve the self-driving vehicle challenge. However, it is still subject to social and ethical limitations. For instance, many states have yet to implement legislation allowing the testing of AVs in urban areas, where testing would yield the most benefit for training. As AVs continue to develop, these states will eventually be required to come to decisions. We are optimistic that lawmakers, ethicists, and engineers will be able to strike a satisfactory balance as the technology continues to develop. However, this will not be a major issue until AVs begin to leave the prototyping stage and prove themselves to be truly viable, fully self-driving vehicles.


SOURCES
[1] D. Fagnant, K. Kockelman. “Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations.” Elsevier. 5.16.2015. Accessed 1.25.2017. http://ac.els-cdn.com/S0965856415000804/1-s2.0-S0965856415000804-main.pdf?_tid=2b7c5d68-e439-11e6-92c4-00000aacb362&acdnat=1485484667_72f2410a74d983c43ba39a63b043e357

[2] “Injury Prevention & Control: Motor Vehicle Safety.” Centers for Disease Control and Prevention. 4.11.2016. Accessed 2.26.2017.

[3] T. Triplett, R. Santos, S. Rosenbloom, B. Tefft. “American Driving Survey: 2014-2015.” AAA Foundation for Traffic Safety. Accessed 3.28.2017. https://www.aaafoundation.org/sites/default/files/AmericanDrivingSurvey2015.pdf

[4] “How does LiDAR work? The science behind the technology.” LiDAR-UK.com. Accessed 1.11.2017. http://www.lidar-uk.com/how-lidar-works/

[5] W. Jun, T. Wu, Z. Zheng. “LIDAR and Vision based Pedestrian Detection and Tracking System.” 2015 IEEE International Conference on Progress in Informatics and Computing. 6.13.2016. Accessed 1.22.2017. http://ieeexplore.ieee.org.pitt.idm.oclc.org/stamp/stamp.jsp?arnumber=7489821

[6] X. Han, J. Lu, Y. Tai, C. Zhao. “A Real-time LIDAR and Vision based Pedestrian Detection System for Unmanned Ground Vehicles.” 2015 3rd IAPR Asian Conference on Pattern Recognition. 4.3.2015. Accessed 1.10.2017. http://ieeexplore.ieee.org.pitt.idm.oclc.org/stamp/stamp.jsp?arnumber=7486580

[7] “2015 Motor Vehicle Crashes: Overview.” U.S. Department of Transportation. 8.2016. Accessed 3.29.2017. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812318

[8] S. Smith. “The Scientist and Engineer's Guide to Digital Signal Processing.” Accessed 3.1.2017. http://www.dspguide.com/ch6/2.htm

[9] A. Karpathy. “Convolutional Neural Networks.” Stanford University. Accessed 3.1.2017. http://cs231n.github.io/convolutional-networks/

[10] S. Ren, K. He, R. Girshick, J. Sun. “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.” Microsoft Research. 2015. Accessed 3.1.2017. https://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks.pdf

[11] T. Kim, J. Ghosh. “Robust Detection of Non-Motorized Road Users using Deep Learning on Optical and LIDAR Data.” 2016 IEEE 19th International Conference on Intelligent Transportation Systems. 11.1.2016. Accessed 1.10.2017. http://ieeexplore.ieee.org.pitt.idm.oclc.org/stamp/stamp.jsp?arnumber=7795566

[12] “The KITTI Vision Benchmark Suite.” Karlsruhe Institute of Technology. Accessed 2.25.2017. http://www.cvlibs.net/datasets/kitti/

[13] “Technology.” Waymo. 2016. Accessed 2.8.2017. https://waymo.com/tech/

[14] “HDL-64E.” Velodyne LiDAR. Accessed 2.15.2017. http://velodynelidar.com/hdl-64e.html

[15] M. Harris. “Researcher Hacks Self-Driving Car Sensors.” IEEE Spectrum. 9.4.2015. Accessed 3.1.2017. http://spectrum.ieee.org/cars-that-think/transportation/self-driving/researcher-hacks-selfdriving-car-sensors

[16] E. Ackerman. “Cheap Lidar: The Key to Making Self-Driving Cars Affordable.” IEEE Spectrum. 9.22.2016. Accessed 3.2.2017. http://spectrum.ieee.org/transportation/advanced-cars/cheap-lidar-the-key-to-making-selfdriving-cars-affordable

[17] E. Ackerman. “Velodyne Says It's Got a "Breakthrough" in Solid State Lidar Design.” IEEE Spectrum. 12.13.2016. Accessed 2.28.2017. http://spectrum.ieee.org/cars-that-think/transportation/sensors/velodyne-announces-breakthrough-in-solid-state-lidar-design

[18] L. Gomes. “Hidden Obstacles for Google’s Self-Driving Cars.” MIT Technology Review. 8.28.2014. Accessed 3.1.2017. https://www.technologyreview.com/s/530276/hidden-obstacles-for-googles-self-driving-cars/

[19] “Worldwide Revenue for Security Technology Forecast to Surpass $100 Billion in 2020.” International Data Corporation. 10.12.2016. Accessed 3.1.2017. http://www.idc.com/getdoc.jsp?containerId=prUS41851116

[20] J. Hsu. “75% of U.S. Drivers Fear Self-Driving Cars, But It's an Easy Fear to Get Over.” IEEE Spectrum. 3.7.2016. Accessed 2.26.2017. http://spectrum.ieee.org/cars-that-think/transportation/self-driving/driverless-cars-inspire-both-fear-and-hope

[21] A. Davies. “Google’s Self-Driving Car Caused Its First Crash.” Wired. 2.29.2016. Accessed 2.28.2017. https://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/

[22] N. Goodall. “Machine Ethics and Automated Vehicles.” Road Vehicle Automation, Springer, 2014, pp. 93-102. Accessed 1.12.2017.

[23] P. Lin. “The Ethics of Autonomous Cars.” The Atlantic. 10.8.2013. Accessed 1.11.2017. https://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/
ACKNOWLEDGEMENTS
We would like to thank the University of Pittsburgh for hosting this conference and allowing us the opportunity to write and present on this topic.

We also thank our Writing Instructor Nancy Koerbel for providing feedback that let us converge on an appropriate paper topic.

We would also like to thank our co-chair Ryan Schwartz for helping us through the outlining and drafting process.

We must also thank our conference chair Kevin Shaffer for answering our questions regarding the conference and helping us make the transition from ideas to a presentable conference paper.



Finally, we must thank all those who are putting forth the effort and investments to further society towards making self-driving cars a functional, ethical, and safe reality.



University of Pittsburgh Swanson School of Engineering

03.31.2017

