INTELLIGENT SYSTEMS ROADMAP
Topic Area: Autonomy
Ella Atkins, University of Michigan

Introduction

Autonomy can help us achieve new levels of efficiency, capability, and resilience through software-based sense-decide-act cycles. Autonomy, however, admits a wide variety of definitions and interpretations. When do today’s “automation aids,” designed to provide information for human pilot/operator decision-making, become “autonomous systems” capable of decision-making without constant human supervision? In Aerospace, we tend to think of autonomy in terms of improved situational awareness, which in turn leads to better collaborative human-machine decision-making. We program “autonomy” with explicit purposes: maintaining stable vehicle control, detecting and avoiding collisions with other aircraft and terrain, and optimizing a flight plan or science data collection and processing activities. Most autonomy infused to date has focused on achieving safe, efficient Aerospace vehicle and payload operation, in some cases enabling unmanned air and space missions that could not otherwise be achieved due to limited data throughput, communication delays, and limited human situational awareness. Yet the public lacks trust in autonomy. Some even conjure frightening images of “machine takeovers” from science fiction, even though most Aerospace engineers instead envision autonomy that keeps an aircraft from crashing while providing high-quality science and surveillance data, a capability on which we will eventually depend as much as we depend on GPS and real-time traffic maps today.


What is autonomy, and why is it important? Merriam-Webster defines autonomy as “the state of existing or acting separately from others; the power or right of a country, group, etc., to govern itself”. In contrast, automation is defined as “the state of being operated automatically”, where automatic is defined for a machine as “having controls that allow something to work or happen without being directly controlled by a person”. The distinction between autonomy and automation thus rests on the level of authority or “self-governance” granted to human operator(s) versus the machine.
Autonomy will enable new missions and holds promise to make existing missions even more efficient and safe. Commercial transport aircraft are an extremely safe means of transit, and people reasonably assume that GPS data will always be available in open areas. Malicious operator actions, such as those taken by terrorists on 9/11 and more recently by the co-pilot of Germanwings Flight 9525, motivate the infusion of refuse-to-crash autonomy with override capability into future transport aircraft. Software and hardware systems will be imperfect and potentially insecure, so any transfer of authority must be thoroughly analyzed to ensure overall risk is held constant or reduced. Autonomy has so far been resisted for space missions, primarily due to risk: unlike aviation, there are few opportunities to service spacecraft. However, the space ground system section of this roadmap provides examples of where increasing autonomy could be prudently introduced for satellite command and control ground systems.
A 2014 National Research Council (NRC) report entitled “Autonomy Research for Civil Aviation: Toward a New Era of Flight” intentionally used the term “increasingly autonomous” (IA) without explicitly defining autonomy, avoiding the inevitable debate over a single “true” definition. IA systems were viewed as a progressively sophisticated suite of capabilities with “the potential to improve safety and reliability, reduce costs, and enable new missions”, focusing attention on barriers and research needs rather than on the more controversial question of “authority shift”. The NRC report’s barriers and high-priority research projects are listed in Appendix A, with more information available in the NRC report downloadable from http://www.nap.edu/openbook.php?record_id=18815. This intelligent systems roadmap effort does not seek to replicate the NRC process. It instead presents areas of autonomy research identified by our technical committee members and by participants in an autonomy workshop breakout session held in August 2014 in Dayton, OH. Our roadmap is more specific than the NRC report in that it primarily represents the AIAA intelligent systems constituency, yet it is broader in that it extends beyond civil aviation to include government and university researchers as well as space applications.
To build an enduring roadmap for autonomy research, this report focuses on identifying autonomy challenges rather than proposing projects, since specific autonomy research projects of interest to different research groups and funding agencies would likely encompass several of the challenges below within an application-oriented framework (e.g., aircraft autonomy, spacecraft autonomy, cooperative control, or system-wide management as in air traffic control). Autonomy challenges are divided into three categories: fundamental challenges that underpin nearly any Aerospace system endowed with autonomy, systems engineering challenges, and challenges in minimizing risk and ensuring safe operation. This roadmap document closes with a discussion of autonomy infusion opportunities that might lead to successful development, testing, and acceptance of autonomy in future Aerospace systems.

Fundamental challenges


Autonomy will be embedded in complex systems that execute multiple local, vehicle, and system-wide sense-decide-act cycles. To act with authority rather than under constant backup from a human supervisor, the autonomy must achieve a level of situational awareness, adaptability, and indeed “cleverness” not yet realized in automation aids. Specific cross-cutting autonomy challenges are summarized below, followed by a minimal sketch of a sense-decide-act cycle with a fallback for unmodeled events.

  • Handling rare events: What strategies will succeed, and what tests can we perform to assure such a system?

  • Handling unmodeled events: How does an autonomous system detect an event that is not modeled, and deal with it in a manner that at minimum avoids disaster and at best still accomplishes the mission?

  • “Creative” exploration and exploitation of sensed data: Sensors such as cameras, radar, lidar, and sonar/ultrasonics augment traditional inertial and global positioning sensors with a new level of complex data. An autonomous system must be capable of interpreting and acting on incoming information, not just feeding it back to the user. This requires autonomy capable of acquiring and processing sensor data in real time to go from effective data representations to decisions.

  • New information-rich sensors: Sensors still do not provide a dataset as diverse and comprehensive as that of the human sensory system. Autonomy therefore can also benefit from new sensing mechanisms that generate data which can be transformed into knowledge.

  • Advanced knowledge representations: Autonomous systems must be capable of capturing complex environment properties with effective multidimensional knowledge representations. Once representations are formulated, knowledge engineering is required offline to endow the autonomy with a baseline capability to make accurate and “wise” (optimal) decisions. System-wide adaptation of engineered knowledge will also be essential in cases where the environment is poorly modeled or understood.

  • Intent prediction: Autonomous systems will ultimately interact with people as well as with other autonomous vehicles/agents. The autonomous system must not only act in a logical and transparent manner, it must also be capable of predicting human intent to the extent required for effective communication and co-habitation of a common workspace.

  • Tools and algorithms for multi-vehicle cooperation: Autonomous vehicles must cooperate with each other, particularly when operating in close proximity to each other in highly-dynamic, poorly-modeled, or hazardous environments. Research must extend past assumptions that platforms are homogeneous. Indeed, vehicles may have distinct and potentially non-overlapping capabilities with respect to motion (e.g., travel speeds, range/endurance), sensing, and onboard storage and processing capacity. Autonomous teams must optimize behaviors to achieve new levels of capability and efficiency in group sensing and action.
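
As a concrete illustration of a single sense-decide-act cycle and of graceful degradation on unmodeled events, consider the minimal Python sketch below. The sensor model, the residual threshold, and the contingency behavior are all illustrative assumptions, not elements of any fielded architecture.

import random

ANOMALY_THRESHOLD = 3.0   # assumed bound on the model-vs-measurement residual

def sense():
    """Stand-in sensor: an altitude measurement with occasional gross outliers."""
    measurement = 100.0 + random.gauss(0.0, 0.5)
    if random.random() < 0.05:                 # rare, unmodeled event
        measurement += random.choice([-50.0, 50.0])
    return measurement

def decide(measurement, predicted=100.0):
    """Compare the measurement against the onboard model; fall back if it departs."""
    residual = abs(measurement - predicted)
    if residual > ANOMALY_THRESHOLD:
        # Unmodeled event: degrade gracefully rather than trust the nominal planner.
        return "contingency: hold safe state, alert operator"
    return "nominal: continue flight plan"

def act(command):
    print(command)

if __name__ == "__main__":
    for _ in range(20):                        # twenty sense-decide-act cycles
        act(decide(sense()))

In practice the simple residual test would be replaced by a principled anomaly detector, and the contingency behavior itself would be among the most heavily verified elements of the system.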



Systems engineering challenges


  • Establishing a common design tool/language base: Traditional V (or “Vee”) models of systems engineering have proven difficult to apply to complex safety-critical systems such as modern aircraft and spacecraft. Model-based engineering shows promise, yet its protocols are not yet mature and accepted across the different disciplines contributing to system design. Autonomy will add to existing system complexity, in most cases because of the added need for adaptability.

  • Validation, verification, and accreditation (VV&A): V&V of complex systems with unknowns has posed substantial challenges, particularly when budget constraints are tight. Autonomy will be particularly difficult to V&V because past systems have relied on human operators, not software, to provide a “backup”, and we have been tolerant of “imperfect human response”. For autonomy, we must incorporate probabilistic or uncertain models into V&V, since the complexity of the system and its environment will prohibit absolute guarantees; the goal becomes a sufficient level of probabilistic validation and verification (a minimal probabilistic V&V sketch follows this list). To this end, future autonomy will likely need procedures analogous to the accreditation and licensing currently applied to human operators, who also cannot be comprehensively evaluated for 100% correct behavior. We also need the right rules and abstractions to make full VV&A possible.

  • Robust handling of different integrity levels in requirements specifications: Integrity levels have typically been specified manually by system designers, with levels such as those indicated by the FAA in DO-178B leading to different levels of tolerance to risk. It is costly to require that all elements of a tightly-coupled complex system obtain the highest level of integrity required by any component in the system. Automatic and robust techniques to specify and manage integrity levels are needed.

  • Systems engineering for the worst case: Nominally, automation and autonomy can be shown to function efficiently and safely. However, rare events can cascade into a worst-case scenario that produces responses far worse than anticipated in design. Research is needed to ensure autonomous systems can be guaranteed not to make a worst-case scenario worse, for example by engaging humans or constraining adaptation in a manner that reins in the probability of catastrophic failure.
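
The following Python sketch illustrates one way probabilistic models can enter V&V: Monte Carlo evaluation of a closed-loop safety requirement with a distribution-free confidence bound on the violation probability. The stand-in simulation, the 3.5-sigma requirement, and the sample count are illustrative assumptions.

import math
import random

def simulate_once():
    """Stand-in closed-loop run: True if the run satisfies the safety requirement.
    A real study would draw initial conditions, disturbances, and faults from
    validated probabilistic models."""
    tracking_error = abs(random.gauss(0.0, 1.0))
    return tracking_error < 3.5                # assumed safety requirement

def estimate_violation_probability(n_runs=100_000, confidence=0.95):
    failures = sum(not simulate_once() for _ in range(n_runs))
    p_hat = failures / n_runs
    # Hoeffding bound: with probability >= confidence, the true violation
    # probability lies below p_hat + eps.
    eps = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * n_runs))
    return p_hat, p_hat + eps

if __name__ == "__main__":
    p_hat, upper = estimate_violation_probability()
    print(f"estimated violation probability: {p_hat:.2e}"
          f" (95% upper confidence bound: {upper:.2e})")

Because the bound tightens only with the square root of the number of runs, demonstrating very small violation probabilities this way requires enormous test campaigns, which is one reason rare-event methods and formal abstractions remain active research needs.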



Safety challenges


  • Risk assessment: Calculation of risk is not straightforward in a complex system. Endowing autonomy with a high level of decision-making authority and the ability to adapt compounds the risk assessment problem. How can component, vehicle, and system-level risk be computed in a highly-autonomous system, and what is the impact of false positives and negatives on the environment and other actors?

  • Risk bound specification: The FAA has established a simple bound on “risk of safety violation per hour of flight”, but it is not clear this single number is the final word, nor is it clear the number translates to different applications such as unmanned operations, flights over populated areas, or missions of such high value that added risk is tolerated. A major safety challenge is therefore calculating, negotiating, and accepting/establishing bounds on risk/safety for different systems, platforms, and scenarios (a sketch of checking demonstrated risk against such a bound follows this list). To this end, “safe” test scenarios as well as constrained tests that exercise high-risk cases may be beneficial to consider.

  • Level of safety with rogue / hostile vehicles: While assessing autonomous system safety with a single vehicle or cooperative team is difficult, this challenge is compounded when rogue or adversarial vehicles are nearby. Safety challenges may be faced due to potential for collision with other vehicles, attack by munitions, or more generally adversarial actions that compromise targets, jam signals, etc.

  • Reliable fault (or exception) detection and handling: Fault and failure management is a challenge in any complex Aerospace system, regardless of the level of autonomy. Today’s systems, however, rely heavily on a human operator to assess the exception and dictate a recovery process. Autonomy is beginning to handle faults/failures on a case-by-case basis, but failures that have not been explicitly considered by system designers remain difficult to handle through detection and reliable/safe adaptation of models and responses. This problem is compounded for software-enabled autonomy due to the potential for computing system failures, network outages, signal spoofing, and cybersecurity violations.

  • Autonomy-Human Transitions: A major autonomy challenge is to ensure transitions of authority from autonomy-to-human (and vice versa) are unsurprising, informative, and safe. This challenge is motivated by numerous documented “mode confusion” cases in flight decks and by accidents where automation “shut down” in the most difficult high-workload scenarios without providing warning or any type of gradual authority transition. Autonomy may initiate actions to “buy time” in cases where transitions would otherwise be necessarily abrupt.
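
As a small illustration of reasoning about a per-flight-hour risk bound, the Python sketch below computes a one-sided upper confidence bound on an event rate from accumulated flight experience and compares it against a target. The 1e-7 events-per-hour target and the flight-hour figure are illustrative assumptions, not regulatory values for any particular system class.

import math

def poisson_cdf(k, mu):
    """P(X <= k) for X ~ Poisson(mu)."""
    term = math.exp(-mu)
    total = term
    for i in range(1, k + 1):
        term *= mu / i
        total += term
    return total

def rate_upper_bound(events, hours, confidence=0.95):
    """One-sided upper confidence bound on the event rate (events per hour):
    the largest rate still consistent with having observed this few events."""
    lo, hi = 0.0, (events + 50.0) / hours      # generous bisection bracket
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(events, mid * hours) > 1.0 - confidence:
            lo = mid                           # rate not yet excluded; true bound is higher
        else:
            hi = mid
    return hi

if __name__ == "__main__":
    TARGET = 1e-7        # assumed allowable safety events per flight hour
    bound = rate_upper_bound(events=0, hours=50_000_000)
    verdict = "meets" if bound <= TARGET else "does not meet"
    print(f"95% upper bound: {bound:.2e} events/hour -> {verdict} {TARGET:.0e}/hour")

With zero observed events the bound reduces to the familiar “rule of three” (roughly 3 divided by the exposure time, at 95% confidence), which makes plain how much operating experience is needed before a stringent bound can even be demonstrated statistically.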

Roadmap to Success

Autonomy research is currently “on the radar” of most major funding agencies, but setbacks and changes in leadership could compromise momentum present today. The ISTC highly encourages the Aerospace autonomy research community to heed lessons learned in other fields to enable long-term progress toward autonomy that will truly advance our Aerospace platform and system-wide capabilities. Below is a brief list of “rules” that we believe will promote a successful, collaborative community-based effort toward future Aerospace autonomy goals.

Always identify clear and tangible benefits of “new” autonomy; motivate the effort. Autonomy research must be pursued because it is of potential benefit, not just because it is “cool” for the researcher.

Be honest about challenges and tradeoffs and discover how to avoid them. Don’t just advertise autonomy by showing the “one demo that worked properly”.

Talk to people in other fields to ensure the “engineering autonomy” viewpoint grows to be more comprehensive. Autonomy has the potential to benefit business, education, and other use cases related to air and space flight systems.

Develop and capitalize on collaborations, open source, standards (including standards for representing data), grand challenges (a recurring theme), policy changes, and crowd sourcing. To this end, we recommend the community create grand challenges to motivate and evaluate autonomy, and develop benchmarks and metrics.

Remember regulatory, legal, and social challenges (public education and trust). These must be kept in mind particularly when proposing autonomous systems that will carry or otherwise interact with the public.

Education and outreach are essential elements of long-term success in developing and infusing Aerospace autonomy technology. To that end we recommend the following:

Develop Aerospace autonomy tutorials; this may be an appropriate task for the AIAA ISTC.

Educate through online interactive demos that are fun. Autonomy researchers can gain trust in the community by helping all understand how autonomy can improve both mission capabilities and safety.

Find places to “easily” transition autonomy in Aerospace to demonstrate safety improvements. Autonomy infusion opportunities include emergency autoland for civil aircraft in “simple” cases (e.g., engine-out) and maturing detect-and-avoid capabilities such as the ground collision avoidance system at AFRL.

Encourage co-design of autonomy and human factors to enable interfaces to be informative, unsurprising, and safe.



Appendix A: Summary of NRC Autonomy Research for Civil Aviation Report: Barriers and Research Agenda [1]


The NRC report is heavily cited in this roadmap because of its analogous focus on autonomy or “increasingly autonomous” (IA) systems and because it represents a consensus view among community experts. Note that the NRC report focuses on civil aviation, so our roadmap also aims to address other use cases (e.g., DoD and commercial) as well as autonomy research needs for space applications.

Barriers were divided into three groups: technology, regulation and certification, and additional. The full list is presented below for completeness. Most of these technology and regulatory barriers have unambiguous meanings. Legal and social issues focused on liability, fear/trust, as well as safety and privacy concerns associated with deploying increasingly autonomous (IA) crewed and uncrewed aircraft into public airspace over populated areas. The committee called out certification, adaptive/nondeterministic systems, trust, and validation and verification as particularly challenging barriers to overcome.


Technology Barriers:

1. Communications and data acquisition

2. Cyberphysical security

3. Decision making by adaptive/nondeterministic systems

4. Diversity of aircraft

5. Human–machine integration

6. Sensing, perception, and cognition

7. System complexity and resilience

8. Verification and validation (V&V)

Regulation and Certification Barriers:

1. Airspace access for unmanned aircraft

2. Certification process

3. Equivalent level of safety

4. Trust in adaptive/nondeterministic IA systems

Additional Barriers:

1. Legal issues

2. Social issues

The NRC committee identified eight high-priority research agenda topics for civil aviation autonomy. These were further classified into “most urgent and difficult” and “other high priority” categories. The projects are listed below with the verbatim summary description of each topic.

Most Urgent and Difficult Research Projects:

  1. Behavior of Adaptive/Nondeterministic Systems: Develop methodologies to characterize and bound the behavior of adaptive/nondeterministic systems over their complete life cycle.

  2. Operation Without Continuous Human Oversight: Develop the system architectures and technologies that would enable increasingly sophisticated IA systems and unmanned aircraft to operate for extended periods of time without real-time human cognizance and control.

  3. Modeling and Simulation: Develop the theoretical basis and methodologies for using modeling and simulation to accelerate the development and maturation of advanced IA systems and aircraft.

  4. Verification, Validation, and Certification: Develop standards and procedures for the verification, validation, and certification of IA systems and determine their implications for design.


Additional High-Priority Research Projects:

  1. Nontraditional Methodologies and Technologies: Develop methodologies for accepting technologies not traditionally used in civil aviation (e.g., open-source software and consumer electronic products) in IA systems.

  2. Role of Personnel and Systems: Determine how the roles of key personnel and systems, as well as related human-machine interfaces, should evolve to enable the operation of advanced IA systems.

  3. Safety and Efficiency: Determine how IA systems could enhance the safety and efficiency of civil aviation.

  4. Stakeholder Trust: Develop processes to engender broad stakeholder trust in IA systems in the civil aviation system.



[1] Committee on Autonomy Research for Civil Aviation, Autonomy Research for Civil Aviation: Toward a New Era of Flight, Aeronautics and Space Engineering Board, National Research Council (NRC), National Academies Press, 2014, ISBN 978-0-309-30614-0. http://www.nap.edu/catalog/18815/autonomy-research-for-civil-aviation-toward-a-new-era-of

