On using formal methods for safe and robust robot autonomy
Abstract/Contents
- Abstract
- Advances in artificial intelligence and machine learning have unlocked a new generation of robotic systems: "learning-enabled" robots designed to operate in unstructured, uncertain, and unforgiving environments, especially settings where robots must interact in close proximity with humans. However, as learning-enabled methods, especially "deep" learning, become more pervasive throughout the autonomy stack, it becomes increasingly difficult to ascertain the performance and safety of these robotic systems and to explain their behavior, necessary prerequisites for their deployment in safety-critical settings. This dissertation develops methods drawing upon techniques from the field of formal methods, namely Hamilton-Jacobi (HJ) reachability and Signal Temporal Logic (STL), to complement a learning-enabled robot autonomy stack, thereby leading to safer and more robust robot behavior.
- The first part of this dissertation investigates the problem of providing safety assurances for human-robot interactions, safety-critical settings wherein robots must reason about uncertainty in human behavior to achieve seamless interactions. Specifically, we take a two-step approach: we first develop a learning-based human behavior prediction model tailored to proactive robot planning and decision-making, which we then couple with a reachability-based safety controller that intervenes minimally whenever the robot is close to violating safety. The approach is validated through human-in-the-loop simulation as well as on an experimental vehicle platform, demonstrating clear connections between theory and practice.
- The second part of this dissertation examines the use of STL as a formal language for incorporating logical reasoning into robot learning. In particular, we develop a technique, named STLCG, that casts STL into the same computational language as deep neural networks. Consequently, by using STLCG to express designers' domain expertise in a form compatible with neural networks, we can embed domain knowledge into learned components of the autonomy stack, providing additional robustness and interpretability.
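The reachability-based safety controller described in the abstract follows a least-restrictive ("minimal intervention") pattern: the learned, nominal controller runs freely, and the safety controller takes over only when the HJ value function signals imminent safety violation. The following is a minimal toy sketch of that pattern for a 1D vehicle braking before an obstacle; the names, the analytic value function, and the threshold are illustrative assumptions, not the dissertation's actual implementation:

```python
# Least-restrictive safety filter: apply the nominal (learned) control
# unless the HJ value function indicates the state is near the boundary
# of the safe set, in which case apply the safety-preserving control.
# Toy setup: vehicle at position p < 0 with speed v > 0 approaches an
# obstacle at p = 0; maximal deceleration is A_MAX.

A_MAX = 2.0   # maximal deceleration (m/s^2)
MARGIN = 0.1  # switching threshold on the value function

def value(p: float, v: float) -> float:
    """Analytic HJ value for this toy system: distance to the obstacle
    minus the minimal stopping distance v^2 / (2 * A_MAX).
    Negative value => a collision is unavoidable."""
    return -p - v * v / (2.0 * A_MAX)

def safe_control() -> float:
    """Optimal safety-preserving control here: brake as hard as possible."""
    return -A_MAX

def filtered_control(p: float, v: float, u_nominal: float) -> float:
    """Minimally interventional filter: keep the nominal control while
    the state is comfortably inside the safe set."""
    if value(p, v) > MARGIN:
        return u_nominal
    return safe_control()
```

Far from the obstacle the nominal control passes through unchanged (`filtered_control(-10.0, 2.0, 0.5)` returns `0.5`); once the stopping-distance margin is exhausted, the filter overrides with full braking.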
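STLCG's key observation is that STL's quantitative (robustness) semantics are built from min/max operations over a signal, the same primitives that appear in neural network computation graphs, so robustness values can be computed and differentiated through. A stdlib-only illustrative sketch for the robustness of G(x > c) ("always") and F(x > c) ("eventually") over a finite trace, plus a smooth softmin surrogate of the kind used in gradient-based training; this is a reconstruction for exposition, not the STLCG library's API:

```python
import math

def rob_always_gt(trace, c):
    """Robustness of G(x > c): the minimum over time of (x_t - c).
    Positive => satisfied with that margin; negative => violated."""
    return min(x - c for x in trace)

def rob_eventually_gt(trace, c):
    """Robustness of F(x > c): the maximum over time of (x_t - c)."""
    return max(x - c for x in trace)

def soft_min(vals, temp=10.0):
    """Smooth surrogate for min: -(1/temp) * logsumexp(-temp * v).
    Differentiable everywhere, approaches the true min as temp grows;
    shifted by the true min for numerical stability."""
    m = min(vals)
    return m - math.log(sum(math.exp(-temp * (v - m)) for v in vals)) / temp
```

For the trace `[1.5, 0.8, 1.2, 2.0]` and threshold `c = 0.5`, `rob_always_gt` returns `0.3`: the trace satisfies "always above 0.5" with margin 0.3. Using `soft_min` in place of the hard `min` yields a differentiable lower bound, which is what makes such robustness values usable as training losses for neural networks.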
Description
Type of resource | text |
---|---|
Form | electronic resource; remote; computer; online resource |
Extent | 1 online resource. |
Place | California |
Place | [Stanford, California] |
Publisher | [Stanford University] |
Copyright date | ©2021 |
Publication date | 2021 |
Issuance | monographic |
Language | English |
Creators/Contributors
Author | Leung, Karen Yan Ming |
---|---|
Degree supervisor | Pavone, Marco, 1980- |
Thesis advisor | Pavone, Marco, 1980- |
Thesis advisor | Gerdes, J. Christian |
Thesis advisor | Schwager, Mac |
Degree committee member | Gerdes, J. Christian |
Degree committee member | Schwager, Mac |
Associated with | Stanford University, Department of Aeronautics and Astronautics |
Subjects
Genre | Theses |
---|---|
Genre | Text |
Bibliographic information
Statement of responsibility | Karen Yan Ming Leung. |
---|---|
Note | Submitted to the Department of Aeronautics and Astronautics. |
Thesis | Thesis (Ph.D.)--Stanford University, 2021. |
Location | https://purl.stanford.edu/kk533yj7758 |
Access conditions
- Copyright
- © 2021 by Karen Yan Ming Leung
- License
- This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).