
Conference Session C12

Paper #110


Disclaimer — This paper partially fulfills a writing requirement for first year (freshman) engineering students at the University of Pittsburgh Swanson School of Engineering. This paper is a student, not a professional, paper. This paper is based on publicly available information and may not provide complete analyses of all relevant data. If this paper is used for any purpose other than these authors’ partial fulfillment of a writing requirement for first year (freshman) engineering students at the University of Pittsburgh Swanson School of Engineering, the user does so at his or her own risk.
THE FUTURE OF TRANSPORTATION: LIGHT DETECTION AND RANGING SYSTEMS IN AUTONOMOUS VEHICLES
Ian Arcudi, ima14@pitt.edu, Mahboobin 10:00, Nick Cerep, nac109@pitt.edu, Mena 3:00



Abstract — One of the most important advancements in recent years has been the development of autonomous cars. Light Detection and Ranging, or “LIDAR”, technology is a major component that makes these self-driving cars possible. This paper will illustrate why LIDAR is the most efficient and practical method for autonomous cars to scan their surroundings. The paper will show that it is the best solution to many crucial problems associated with self-driving cars, as it enables the car to safely navigate obstacles while obeying the laws of the road. As a result of this technological advancement, as well as the shifting economy that makes these systems more affordable, transportation has the potential to become significantly safer and more efficient in the near future. This paper will discuss how the components of LIDAR work together cohesively, explaining the role that lasers and mathematics play in modeling the continuously changing environment surrounding the vehicle. It will also explain how the system collects data and how that data is processed using built-in algorithms. The mechanisms by which this technology is programmed are clarified so that one is able to understand the ethical issue surrounding this topic. The ethical dilemma of how to program this technology in a situation where a crash is unavoidable will be thoroughly explored in this paper. We will discuss the factors that the system must consider to make the most beneficial ethical decision for society, and use our knowledge to propose possible solutions.
Key Words—Autonomous cars, electrical engineering, ethics, lasers, LIDAR
INTRODUCTION TO LIDAR IN AUTONOMOUS CARS
Light Detection and Ranging, or “LIDAR”, systems are intricate devices. They include a range of electrical subsystems that work together to create and interpret an accurate scan of the environment. One application of this technology is the role it plays in the development of autonomous, or self-driving, vehicles. Autonomous vehicles could prove to be a ground-breaking product that completely changes the way people get from place to place, and LIDAR technology is one of the key features that makes these automobiles function properly. LIDAR utilizes various programs that work in unison to record data and ultimately piece this information together. With all of the accumulated data, LIDAR creates the model that it then uses to navigate roads. By recording the angle and distance at which lasers are reflected back from the surfaces they strike, the LIDAR device gathers the data needed to build a picture of the surroundings. For example, the lasers can identify the edges of the road, which determines where the car needs to be on it. All of the information gathered by these cooperating systems is then fed into advanced algorithms that organize the data into terms the car can use to guide itself along its route. The system is also capable of detecting obstacles in the car’s way, such as pedestrians, debris, and potholes, along with detecting bends in the road ahead. However, the many benefits of LIDAR come with possible ethical dilemmas. With LIDAR powering the car and making the decisions behind the wheel, some people will naturally be wary of a computer navigating its way through potentially dangerous situations, as it lacks basic human instinct. LIDAR does, however, have built-in functions to help it manage difficult or complicated situations. These functions, along with the gathered data, are used to keep passengers, pedestrians, and society as safe as possible while the car travels to its destination. All of these components mesh into one final system, known as LIDAR.
CHANGING THE AUTOMOBILE INDUSTRY
In our modern society, safety and cost are two of the most important aspects to consider when purchasing a car. While new measures are always being taken to ensure the safety of drivers, an alternative solution of giving control to the car rather than the driver could prove to be the most effective. LIDAR technology makes this a possibility in the near future. In addition to having safety benefits, LIDAR is also rapidly becoming more affordable. These two factors make LIDAR one of the most significant recent advancements in the field of electrical engineering.

Over the past few years, major companies such as Audi, Lexus, and Google [1] have explored the possibility of using LIDAR in self-driving cars. In particular, Google has already produced its own autonomous cars and has conducted extensive testing on roads across the United States. According to data gathered by Google, “One of the analyses showed that when a human was behind the wheel, Google’s cars accelerated and braked significantly more sharply than they did when piloting themselves. Another showed that the cars’ software was much better at maintaining a safe distance from the vehicle ahead than the human drivers were” [2]. This illustrates the safety benefits of driverless cars, as they are able to react faster than humans and, unlike humans, are programmed to obey laws. Furthermore, these cars have another advantage over human drivers: they cannot become distracted.

While all of these companies have been successful in producing autonomous cars that utilize LIDAR technology, others that use different systems for navigation and detection, such as radar, have not been so fortunate. Notably, Tesla has repeatedly insisted that it will not incorporate LIDAR into any of its self-driving car models. An article comparing the cars made by Tesla and Google states, “With the recent fatality in a Tesla that was operating under Autopilot, Tesla’s choice is under attack. Assessing the competing claims requires understanding the strengths, weaknesses, and compromises inherent in the different sensor types” [3]. This statement strongly suggests that there are drawbacks to building autonomous cars that do not use LIDAR systems. This is even more apparent because there has yet to be a reported accident attributed to the fault of an autonomous car operating with LIDAR, while there have already been several fatalities associated with cars that use other technology [3]. The developments that all of these companies have made in this field have proven that there is great potential for improving the safety of roads in the near future. Likewise, LIDAR units are becoming more affordable to produce, which is another indication of the positive outlook for this technology.

Only two years ago, LIDAR units produced by a company called Velodyne sold for roughly $80,000 per unit, not including the automobile. This presented a major problem for the producers of autonomous cars, as most of them were not willing to spend that much money. As the market shifted, these systems became increasingly affordable, eventually falling to a price of only about $8,000 per unit for a lower-tier model with fewer lasers. According to the Society of Automotive Engineers, this trend is expected to continue over the next few years, allowing autonomous cars to be integrated into society and purchased by the public [4]. In fact, a smaller company called Osram Opto Semiconductors even plans to sell LIDAR units for only a few hundred dollars per unit. As a result of this shift in price, competition has grown between the companies that produce self-driving cars, as more of them are now able to thoroughly test and mass-produce their products.

The increasing affordability and safety of LIDAR in autonomous cars will change the entire landscape of the automobile industry. Providing this safer alternative to driving will lead to a decrease in car crashes. Moreover, providing this option at a reasonable price will allow a large portion of the general public to experience the benefits that come with LIDAR technology.

Although it is clear that self-driving cars are becoming a safe and cost-efficient form of transportation, the problem of how to perfect the LIDAR technology still remains. Therefore, the actions that LIDAR systems perform, such as emitting beams and detecting surfaces, must be understood in order to appreciate the impact that this technology has on society.



HOW DOES LIDAR WORK?
A Brief Overview



The essential purpose of LIDAR technology in the context of autonomous cars is to scan the surroundings, process the data collected, and classify the information by grouping it into different categories. The two main groups in which data can be classified are stationary objects and moving objects. For instance, a car approaching in the passing lane would be considered a moving object, while a curb or a cone in the road would be examples of stationary objects. Making this initial distinction is critical because of the processes that follow. Within each of these broad categories, another set of filters and classes exists to ensure that LIDAR will make the most precise and accurate classification for each surface scanned. When the data received by the scanner has gone through the initial sorting, it then follows a sequential order of steps to extract all of the information that can be obtained from it. For moving objects, the data is passed along four unique subsystems, each with a separate function [5]. Likewise, for stationary objects, the data is processed through a program that consists of several steps which analyze the data [6]. When these processes have finally been completed, the programs automatically prompt the next action that will be taken by the autonomous car.
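As a rough illustration of this first sorting step, the sketch below (Python, with hypothetical field names and a made-up motion threshold, not code from any of the cited systems) routes the points of one scan to either the stationary or the moving-object pipeline:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ScanPoint:
    x: float          # position of the laser return (meters)
    y: float
    elevation: float  # height relative to the estimated ground plane (meters)
    velocity: float   # apparent motion between successive scans (m/s)

def route_scan(points: List[ScanPoint],
               motion_threshold: float = 0.2) -> Tuple[List[ScanPoint], List[ScanPoint]]:
    """Split one LIDAR scan into stationary and moving candidates.

    Hypothetical first-pass sorter: points whose apparent motion between
    scans exceeds motion_threshold go to the moving-object pipeline,
    everything else to the stationary (road edge / curb) pipeline.
    """
    stationary = [p for p in points if abs(p.velocity) <= motion_threshold]
    moving = [p for p in points if abs(p.velocity) > motion_threshold]
    return stationary, moving
```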
Stationary Object Detection
As the LIDAR unit on the autonomous car scans the environment around it, the laser collects data points in the form of elevation signals. One of the two main functions of LIDAR is to detect stationary, or motionless, objects. These include surfaces such as road edges and curbs. Knowing the location of these surfaces is important to the self-driving car because this information allows it to stay within the boundaries of the road, so that it can safely and precisely navigate. In order to achieve this, the system works quickly, with a standard LIDAR unit completing about 200 scans per second. These scans are done by first identifying the ground surface as a reference point of zero elevation, and then comparing it to the rest of the data points, which include all of the points on the surface at different elevations. Furthermore, each of these scans produces its own set of data points, which are then sent through a process that is programmed into the system. This algorithm utilizes techniques such as pattern recognition and filtering to make distinctions between different types of road segments. All of these functions are included in the major steps of this process, as shown in the figure below [6] [Figure 1].

Figure 1

Process for classifying stationary objects or surfaces.



The first step in the sequence for object identification occurs at the instant that the elevation input signals from all points on the roads are received by the LIDAR system. This stage is called candidate selection, as its purpose is to establish a potential candidate for a road edge. Road edges are defined as points that have sharp angles to the ground plane, above a given threshold. The first action taken by the program in this step is to apply a Gaussian difference filter to the data. The purpose of the filter in this situation is to enhance the clarity of the image that is scanned. A sample LIDAR scan can be seen below for reference, as well as the graph showing the elevation data [Figure 2].

Figure 2

Example of a LIDAR scan and a graph showing the elevation along different points on the ground.
As a result of applying the Gaussian difference filter, edges and differences in road elevation are sharpened in great detail. While this makes relative maxima and minima in elevation easier to distinguish, it also creates false peaks that could be mistaken for relative extrema. To reduce this noise, another threshold is programmed into the system so that extrema whose elevation magnitudes fall below the threshold are filtered out of the candidate selection mechanism. Finally, the data points with elevations above the threshold are transmitted to the next phase of this process for further manipulation [6].
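A minimal sketch of this candidate-selection step is shown below, assuming a one-dimensional array of elevation samples and illustrative filter widths and threshold values (the actual parameters in [6] are not stated here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelextrema

def select_candidates(elevation, sigma_narrow=1.0, sigma_wide=3.0, threshold=0.05):
    """Sharpen elevation changes and keep only significant extrema.

    A difference-of-Gaussians filter enhances edges in the elevation profile,
    relative maxima and minima are located, and extrema whose filtered
    magnitude falls below the threshold are discarded as noise. The surviving
    indices are passed on as road-edge candidates.
    """
    elevation = np.asarray(elevation, dtype=float)
    filtered = (gaussian_filter1d(elevation, sigma_narrow)
                - gaussian_filter1d(elevation, sigma_wide))
    maxima = argrelextrema(filtered, np.greater)[0]
    minima = argrelextrema(filtered, np.less)[0]
    extrema = np.concatenate([maxima, minima])
    return extrema[np.abs(filtered[extrema]) >= threshold]
```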

The next step of classification is the extraction of certain features from the elevation data. From the information collected, the variance, standard deviation, and total number of elevation data points are returned. These are later used as inputs for a classifier function that ultimately determines which points are edges or curbs. This equation, as shown below [Figure 3], relates the features extracted from the data with two classifier parameters in order to obtain the target result [6]. In this equation, the product of the standard deviation and the alpha parameter is added to the quotient of the gamma parameter over the total number of data points. The two classifier parameters are preset constants that produce an objective value f, which determines the identity of the segment of data points being examined. These constants are used to filter out data that does not meet an additional elevation threshold, so that these points are not classified as a road edge.


Figure 3

This equation determines whether each segment is classified as a maximum or minimum in elevation.
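Based on the verbal description above, the classifier appears to take roughly the following form (our reconstruction of the equation in Figure 3; the exact notation in [6] may differ):

f = α·σ + γ/N

where σ is the standard deviation of the segment’s elevation values, N is the total number of data points in the segment, and α and γ are the preset classifier parameters. The resulting objective value f is then compared against the additional elevation threshold to decide whether the segment is labeled a road edge or curb.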
After the data is processed through this classifier equation, and the identity of the surface that was scanned has been determined to be an edge or curb, the false alarm mitigation step is activated. For segments that were not labeled as such surfaces by the classifier function, this step is skipped, as it is irrelevant in that particular situation. The false alarm mitigation procedure includes yet another function that the data is passed through to check for any errors that were made in previous steps. To accomplish this, the width of the road segment collected is analyzed, and it is compared with the threshold width that had already been set. The result of this function is determined by raising a false alarm if the width of the segment being analyzed is less than the distance between the two nearest relative maxima. If no false alarms are raised, the road segment can finally be classified as a road edge or a curb. This process of filtering data points by their characteristics and elevations ensures that the final result of this program will be sufficiently accurate, thus enabling the autonomous car to act accordingly and safely [6].
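Read this way, the false-alarm check reduces to a simple width comparison; a sketch is given below, with parameter names of our own choosing (the precise rule and thresholds in [6] may differ):

```python
def is_false_alarm(segment_width_m: float, nearest_maxima_gap_m: float) -> bool:
    """Flag a candidate road edge as a false alarm.

    Per our reading of [6], a false alarm is raised when the width of the
    scanned segment is less than the distance between the two nearest
    relative maxima; only candidates that pass this check are finally
    labeled as road edges or curbs.
    """
    return segment_width_m < nearest_maxima_gap_m
```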

Detection of Objects in Motion
Detecting, classifying, and tracking moving entities is a major function of LIDAR-based systems. Using a dense combination of lasers, photos, mathematics, and electrical devices, the system is capable of accomplishing this task. Through a few important subsystems, LIDAR determines the approximate class of each entity and can even predict how the entity will move and behave. The dynamic behavior and size of each entity are closely tied to classifying the entities as either pedestrians or vehicles. This is the functionality and structure of most current LIDAR systems. There have been attempts to implement other systems into autonomous cars, but each of these came with problems that eventually made the visual detection of entities ineffective. Some past attempts at detection systems include voting schemes, Markov-based chains, and multi-hypothesis methods. Voting schemes lack a consistent mathematical framework, which renders them incapable of the rigorous calculations needed to properly track entities. Markov-based chains depend on Markov state transitions, which take a considerable amount of time to compute, lessening the effectiveness of the overall calculation. The multi-hypothesis method is based on feature tracking by means of a stochastic filter, and its main fault is its incredibly high computational cost. Also, the vast complexity involved in interpreting and maintaining the multitude of hypotheses needed to complete the task eventually eliminated any chance of it being an effective solution to the detection problem. The most common connection between all of these failed systems is that none of them utilized LIDAR. LIDAR is the central part of the current linear Kalman filter (KF)-based method of detecting moving entities, and it has proven to provide an easier path toward completing the necessary segmentation and data processing steps. LIDAR laser measurements are a reliable way of obtaining information with little to no effect from weather changes, making the entire system more effective. This is quite possibly one of LIDAR’s biggest strengths: it can beat the weather conditions and provide dependable results every day. This function gives the autonomous car a range of possibilities for avoiding crashes and keeping its passengers safe with the most practical technique to date [5].

LIDAR will also determine whether there are multiple moving objects in range at once. This allows the car to detect any number of entities that could pose a risk or affect the path the vehicle takes. Vehicles and pedestrians are the main sources of concern for the car as it navigates, as both can prove to be obstacles with dynamic movements. They are also generally hard to predict, as there are many factors that can alter how the autonomous car must travel in order to stay unscathed. This information then gets sent to another subsystem of LIDAR, which uses it to begin the process of obtaining a clearer and more definite read on the moving object.

The first step in tracking moving entities is to detect them. Using lasers, the system can determine a difference in the length of the rays, which allows LIDAR to separate the entities from an estimated background. The surrounding area is segmented in the laser space. To accurately segment an entity from the background, a linear KF-based system extracts segments that are used for tracking and data association. These segments are considered to evolve in time, meaning that they are expected to move around. If an entity appears to be changing its position, the system classifies it as a moving entity. The range of segments that define an entity is essentially a cloud of range points that are said to “make up” these objects of interest. At the very center of this range of points, a characteristic point is established by LIDAR. This characteristic point is used as the reference for any kinematic behavior of the cluster [5]. Data association is done in two ways that relate segments to different pieces of gathered information: segment-to-segment and segment-to-object-tracker. Segment-to-segment association relates detected segments to all of the other non-classified objects. Segment-to-object-tracker association relates observed segments to existing objects. The first situation occurs when one or more current segments are related to a past segment, which means there is reason to merge all of the valid segments into one, or to separate them if the relation is due to a prior position of the same entity. The second data association is done with the main purpose of classifying the object. This entire process reflects how the final classification of the entity as either a vehicle or a pedestrian is closely related to the dynamic behavior and established size of the segments and clusters that encompass the entity.
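The sketch below illustrates the two ideas of a characteristic point and data association (a simplified nearest-neighbor stand-in for the KF-based association in [5]; the centroid definition and gating distance are our assumptions):

```python
import numpy as np

def characteristic_point(cluster_xy: np.ndarray) -> np.ndarray:
    """Reference point of a segmented cluster of range points.

    A simple centroid stands in for the characteristic point that the
    tracker follows between scans; [5] may define it differently.
    """
    return cluster_xy.mean(axis=0)

def associate(segment_center: np.ndarray, track_centers: list, gate_m: float = 1.5):
    """Nearest-neighbor sketch of segment-to-object-tracker association.

    The new segment is matched to the closest existing track within a
    gating distance; if none is close enough, it starts a new,
    not-yet-classified object (returned as None here).
    """
    if not track_centers:
        return None
    dists = np.linalg.norm(np.asarray(track_centers) - segment_center, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= gate_m else None
```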



Next, LIDAR accesses a camera and performs a few very complex calculations to create an accurate representation of the detected entity or entities. While a set of rigorous equations calculates new variables that represent the segments in the laser space, the system also uses scanners to read the pictures being taken and classify where in the picture these entities are located. With a clear picture of the vehicle or pedestrian, and a definite position within the image frame, the data is sent to the AdaBoost classifier, which molds the data and the picture into one cohesive tool for tracking the entities. The LIDAR system is then capable of tracking the entities at hand by following the changes in laser space and the drift of the image frame from its original position [5] [Figure 4].




Figure 4

The combination of laser and photo imaging for detection in LIDAR.
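The toy example below shows the general idea of fusing laser-space features with an image-detector score in an AdaBoost classifier; the features, training rows, and labels are invented for illustration and are not taken from [5]:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Each row pairs hypothetical laser-space features (cluster width in meters,
# height in meters, speed in m/s) with an image-detector confidence score.
# Labels: 0 = pedestrian, 1 = vehicle. Real systems train on recorded drives.
X_train = np.array([
    [0.6, 1.7, 1.2, 0.9],
    [1.8, 1.5, 9.0, 0.8],
    [0.5, 1.6, 0.8, 0.7],
    [2.0, 1.4, 12.0, 0.9],
])
y_train = np.array([0, 1, 0, 1])

clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Classify a newly tracked entity from its fused laser and image features.
print(clf.predict([[0.7, 1.8, 1.0, 0.85]]))  # expected: [0], i.e. pedestrian
```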
Having the two detection types classified separately and then combined is an incredibly effective way of determining and tracking a range of entities. A few tests have been done with the system using data sequences, and two results were observed. When only the laser-space detection was used, it was found to be quite capable of determining defining features and trackable movements of pedestrians, while the detection of vehicles was much less defined and detailed. In other words, the system was capable of defining and tracking the movements of pedestrians down to limb movement, while vehicles came out as harder-to-track blobs. On the other hand, when only the image-based detection was used, it was more effective at detecting vehicles than at defining pedestrian entities. This is because the size and shape of pedestrians vary greatly, meaning that their form is more complex to calculate from an image alone. These concepts of tracking pedestrians and other vehicles also play an important role in the ethical issue involving LIDAR. By combining the two detection processes, LIDAR becomes well qualified to track obstacles in an efficient way that will prevail over time.
SUSTAINABILITY
LIDAR technology is a main piece of the rapidly advancing autonomous vehicle field. With effective imaging and motion tracking capabilities, LIDAR is proving to be one of the most capable, if not the most capable, technologies for properly mapping the surroundings of these cars. The processes explained above are quite accurate and essential when attempting to perform the tasks that LIDAR is counted on to do. However, LIDAR is a relatively new technology, which means that it has much room to improve. A recent announcement from the automotive startup Quanergy states that the company has not only developed its own, cheaper LIDAR system, but that it is actually capable of far greater precision than any pre-existing LIDAR system to date.

Figure 5



The S3 LIDAR unit will increase the sustainability of LIDAR in autonomous cars with its affordable price and solid-state design.
This new system is called the S3, which Quanergy says costs less than 250 US dollars and will be made available for use in cars very soon. It is designed to be a more versatile and affordable source of image processing and tracking than anything currently being used. It boasts a solid-state design that contains virtually no moving parts. Thus, apart from being more affordable and more effective, the new S3 is expected to last longer inside these autonomous cars. With no moving parts, it is much more difficult for a misfire to damage or break any of the pieces that help it function. Being based almost completely on a laser pulse design in a solid-state system also allows the S3 to steer the directions in which it emits its lasers. It is capable of sending out photons approximately every microsecond, which means it can collect about a million data points per second. Since this system is solid-state, it does not spin like most LIDAR systems, and this allows it to focus its data gathering wherever it is needed most, almost like a set of human eyes scanning the road. This includes heightened peripheral vision and better all-around mapping. The new S3 unit is proof of the constantly developing and rapidly expanding world of LIDAR technology [7]. It is not just the S3 that is advancing the prominence of LIDAR, however.
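As a quick check on those figures, one return roughly every microsecond does work out to the stated rate: 1 point per 10⁻⁶ s = 10⁶ points per second, or about one million data points per second.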

Other companies are pursuing better LIDAR systems as well. Delphi’s chief technology officer, Johnny Owens, is quoted as saying, “One area of focus will be automotive-grade, that is, can we make parts that will last the life of the car in all kinds of weather and on all kinds of roads?” Owens followed up by saying that the “elimination of moving parts with solid-state technology will go a long way toward allowing LiDAR sensors to be included in consumer products like vehicles with minimal maintenance costs” [8]. Delphi and Owens are working toward a piece of LIDAR technology that they believe will be on the same level as, if not more affordable and effective than, the S3. Others, such as Velodyne, have already developed new LIDAR systems. In fact, Ford has already placed an order with Velodyne to implement its new systems into the autonomous cars that Ford has begun researching, proving that even more companies have bought into a future of autonomous cars that contain these new-age, sleeker LIDAR systems. Even if the automobile industry were to lose interest, tech companies such as Google are front runners in the development of vehicles containing LIDAR. With so much focus and research being put into this technology and the development of autonomous cars, it is going to be a major focus of the automobile industry and others for the foreseeable future. Interest from so many different companies provides a stable platform for the continued rise of LIDAR in vehicles, proving its worth and sustainability.


AN ETHICAL DILEMMA INVOLVING LIDAR
As difficult as the programming of LIDAR technology is, this task is much simpler than the ethical dilemma involved with the decision making process that LIDAR goes through. All of the engineers that are working towards a solution to this dilemma must keep the safety of society as a top priority because of the immense responsibility that is put on these autonomous cars. This is supported by the NSPE Code of Ethics for Engineers, which states, “Engineers, in the fulfillment of their professional duties, shall hold paramount the safety, health, and welfare of the public” [9]. According to Google, “There have only been 13 minor fender-benders in more than 1.8 million miles of autonomous and manual driving” by autonomous cars produced by their company [10]. This data supports the theory that self-driving cars have already reached the point of being safer than human drivers. However, the world that we live in is not perfect, so the engineers working on this problem must consider situations in which a crash involving the autonomous car is inevitable.

In order to program this decision making into a car, engineers must decide for themselves what is truly right or wrong in terms of what is most beneficial to society. While this decision is typically made in a split second based on human instinct, in LIDAR it must be predetermined how to act in every situation. The controversy associated with this dilemma stems from the fact that what counts as “good” or “bad” ethics is not agreed upon by everyone. This is a major concern for those working on this problem, because most humans are not comfortable deciding who gets injured and who does not in one of these situations. One example of this type of scenario is a car heading toward a single-lane tunnel, with a pedestrian walking out in front of the car right before it reaches the tunnel. When an autonomous car is in this position, the LIDAR unit must be programmed to decide whether to crash into the wall next to the tunnel or hit the pedestrian. Unfortunately, this means that the LIDAR system must technically be programmed to injure, or even kill. This grey area causes testing restrictions and will possibly delay the release of these products to the public. A consensus on a definite solution to this dilemma has not yet been reached, so various codes of ethics and case studies are currently being examined in order to gain a better understanding of the possible solutions to this issue.

In the field of engineering, various codes of ethics are frequently referenced to ensure that the common morals and values of society are not violated in each project. In the case of integrating LIDAR technology into autonomous cars, these ethical arguments are especially applicable because of the lack of human control and decision making. When discussing the previously mentioned crash scenario, research scientist Jean-Francois Bonnefon observed that “Participants strongly agreed that it would be more moral for AVs to sacrifice their own passengers when this sacrifice would save a greater number of lives overall” [11]. This statement presents a possible solution to the ethical dilemma, placing the safety of others above the safety of those in the autonomous vehicle. The IEEE Code of Ethics advises engineers “to treat fairly all persons and to not engage in acts of discrimination based on race, religion, gender, disability, or age”, so none of these factors could be used to influence the LIDAR unit’s decision [12]. Understanding the codes of ethics is essential in this context because it tells programmers which factors they can and cannot consider when making this critical judgement. While the opinions of the entire population cannot all be obtained, the engineering codes of ethics can aid in this process by providing a set of common values shared by most of society.

In addition to codes of ethics, case studies can also be valuable resources for gaining insight into what is considered morally right or wrong. One case study in particular, involving the issue of how to program LIDAR, is being conducted by the Massachusetts Institute of Technology. This study, referred to as “The Moral Machine,” is a completely open survey on the internet that asks users about various scenarios that would require the programming of ethics into LIDAR units. An example of one of these scenarios is shown below, with each picture representing a choice that the user must make. This type of case study is unique because it places the user in the shoes of a programmer, asking the same questions that a programmer would ask in these situations, but without requiring any actual programming. When discussing this issue on the official website for the case study, the authors state, “The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb” [13]. This quote further demonstrates the importance of this issue, and how developing a solution depends on the opinions of the population as a whole. The responses from this experiment are extremely valuable because its results give insight into the moral values of a large portion of society.

The Moral Machine is an example of a successful case study in this area of research, as a great amount of knowledge has already been gained from it. For instance, the age and number of people involved in the crash proved to be the most important factors to society in making these critical decisions. Using this data, engineers may be able to develop an algorithm that includes some of the variables discussed, in order to program the complicated set of ethics that is required to be incorporated into the LIDAR units in autonomous cars.
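Purely as an illustration of how such variables might be combined (not a proposed policy, and not code from any cited system; the weighting and threshold values are invented), a toy scoring function could look like this:

```python
def expected_harm(group_size: int, mean_age: float, age_weight: float = 0.01) -> float:
    """Toy harm score combining the two factors highlighted by the survey data.

    The number of people involved and their ages were reported as the most
    important factors to respondents [13]; this illustrative function simply
    folds both into a single comparable number. Any real implementation would
    require far broader ethical, legal, and engineering review.
    """
    return group_size * (1.0 + age_weight * max(0.0, 80.0 - mean_age))

# Compare two unavoidable-crash outcomes by their illustrative scores.
swerve_score = expected_harm(group_size=2, mean_age=30)  # 3.0
stay_score = expected_harm(group_size=1, mean_age=70)    # 1.1
print("swerve" if swerve_score < stay_score else "stay")  # prints "stay"
```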




Figure 6

An example of a question asked by the Moral Machine. Users are asked to decide which of these actions is more morally correct. In this figure, the user must decide to either crash into the barrier, killing the passengers, or swerve into the left lane, killing the pedestrians.
Situations like those discussed are not often considered before they actually occur because the split-second decisions that must be made are normally based entirely on human instinct. Also, programming LIDAR is typically considered to be an exact science, so this ethical dilemma causes serious complications for the overall production of the system. As LIDAR technology in autonomous cars continues to develop, steps are actively being taken toward a definitive solution to this problem. The puzzle of programming LIDAR systems to function in autonomous cars may finally be completed with the resolution of this ethical dilemma.
A SMARTER TRANSPORTATION SOLUTION
Autonomous cars are quickly advancing and pushing their way into the everyday life of society. With LIDAR constantly guiding these vehicles along their paths, the systems become not only safer and smarter, but also a more efficient option for improving the lifespan of each car. LIDAR-based autonomous cars have been programmed to follow the rules of the road better than human drivers [1], and they pose a credible solution to the rapidly growing problem of the distracted driver. LIDAR is also keeping a clean track record while other types of visual mapping and detection processes are doing quite the opposite. In the example of Tesla, the lack of a LIDAR system in its cars could very well be the reason for its failure to avoid multiple crashes and to keep pedestrians and passengers safe. LIDAR’s clean track record is, of course, a product of its rigorous and constant obstacle detection systems. The process of detection is broken down into two main groups. The first is stationary object detection, which finds objects in the roadway such as cones, curbs, potholes, and even debris from past crashes. The second is moving object detection, which not only finds obstacles with dynamic movements, but also identifies and tracks them. These obstacles are referred to as entities and are classified as either pedestrians or vehicles; vehicles are any other motorized form of travel, while pedestrians include humans and animals alike. The detection of both stationary and moving objects is only possible through the application of many complex algorithms and processors. Despite being treated differently, the two share many of the same ideas in terms of gathering information and putting it into terms that the system can use to stay unscathed. By utilizing various functions and techniques, LIDAR offers a safe, new way to guide the vehicle. There are, of course, ethical dilemmas that are making it difficult for this technology to make uncontested headway as the undisputed best form of travel. LIDAR has built-in functions to decide how to act in situations that can potentially end with injury or destruction of property. Naturally, some doubt that a computer-based system can ever make split-second decisions the way humans can, but with undistracted, constant attention to detail, LIDAR offers a viable option to avoid more crashes and keep safety a major concern. Overall, it seems apparent that LIDAR is a technology that must continue to be utilized, as it poses many incredible benefits for visualizing and guiding autonomous cars into the future.
SOURCES
[1] A. Iliaifar. “LIDAR, Lasers, and Logic: Anatomy of an Autonomous Vehicle”. Accessed 3/30/17. Last updated 2/6/13. http://www.digitaltrends.com/cars/lidar-lasers-and-beefed-up-computers-the-intricate-anatomy-of-an-autonomous-vehicle/

[2] T. Simonite. “Data Shows Google’s Robot Cars Are Smoother, Safer Drivers Than You or I”. MIT Technology Review. Accessed 3/30/17. Last updated 10/25/13. https://www.technologyreview.com/s/520746/data-shows-googles-robot-cars-are-smoother-safer-drivers-than-you-or-i/

[3] M. Barnard. “Tesla and Google Disagree About LIDAR: Which is Right?”. Clean Technica. Accessed 3/30/17. Last updated 7/29/16. https://cleantechnica.com/2016/07/29/tesla-google-disagree-lidar-right/

[4] B. Berman. “Lower-cost lidar is key to self-driving future”. SAE International. Accessed 3/30/17. Last updated 2/11/15. http://articles.sae.org/13899/

[5] C. Premebida. “A Lidar and Vision-based Approach for Pedestrian and Vehicle Detection and Tracking”. Accessed 3/30/17. Last updated 10/22/07. http://ieeexplore.ieee.org/document/4357637/#full-text-section.



[6] W. Zhang. “LIDAR-Based Road and Road-Edge Detection”. IEEE Xplore. Accessed 3/30/17. http://ieeexplore.ieee.org.pitt.idm.oclc.org/stamp/stamp.jsp?arnumber=5548134

[7] E. Ackerman. “Quanergy Announces $250 Solid-State LIDAR for Cars, Robots, and More”. Accessed 3/30/17. Last updated 1/7/16. http://spectrum.ieee.org/cars-that-think/transportation/sensors/quanergy-solid-state-lidar

[8] “Driverless Cars Shrink LIDAR Technology”. Accessed 3/30/17. Last updated 4/6/16. http://www.connectorsupplier.com/driverless-cars-shrink-lidar-technology/

[9] “NSPE Code of Ethics for Engineers.” Accessed 3/30/17. https://www.nspe.org/resources/ethics/code-ethics

[10] R. Montenegro. “Google's Self-Driving Cars Are Ridiculously Safe.” Accessed 3/30/17. http://bigthink.com/ideafeed/googles-self-driving-car-is-ridiculously-safe

[11] J. Bonnefon, A. Shariff, I. Rahwan. “The social dilemma of autonomous vehicles.” Accessed 3/30/17. Last updated 6/24/16. http://science.sciencemag.org/content/352/6293/1573.full

[12] “IEEE Code of Ethics.” Accessed 3/30/17. http://www.ieee.org/about/corporate/governance/p7-8.html#top

[13] “Moral Machine.” Accessed 3/30/17. http://moralmachine.mit.edu/


ADDITIONAL SOURCES
R. Dominguez. “LIDAR based perception solution for autonomous vehicles”. Accessed 3/3/17. Last Updated 1/3/12. http://ieeexplore.ieee.org/document/6121753/#full-text-section.

M. Himmelsbach. “LIDAR-based 3D Object Perception”. Accessed 3/30/17. http://www.cs.princeton.edu/courses/archive/spring11/cos598A/pdfs/Himmelsbach08.pdf



B. Schwarz. “LIDAR Mapping the World in 3D”. Accessed 3/30/17. http://www.velodynelidar.com/lidar/hdldownloads/Nature%20Photonics.pdf

ACKNOWLEDGEMENTS
We would like to thank Michelle Banas and Joshua Zelesnick for providing feedback and advice that will improve the quality of our paper.




University of Pittsburgh Swanson School of Engineering

1/26/17

