Safety-critical machine learning : development and testing
Abstract/Contents
- Abstract
- As machine-learning systems are deployed in safety-critical domains such as medical imaging and autonomous driving, model failure becomes increasingly costly. In such applications, it is dangerous to deploy models whose robustness and failure modes we do not understand or cannot certify. This thesis presents techniques for both training and testing safety-critical machine-learning systems. In the first part of this work, we employ the lens of distributional robustness to develop safety-critical models. Rather than performing well only on a nominal training set, distributionally robust models are designed to perform well on uncertainty sets around the data-generating distribution. In regimes with limited, bounded uncertainty sets (e.g., adversarial perturbations to images), we train certifiably robust models with negligible losses in computational or statistical efficiency. In regimes with high uncertainty, we learn how to balance safety and model performance using synthetic data. We employ the latter technique in the domain of autonomous racing, demonstrating safe yet competitive racing algorithms on real 1/10th-scale vehicles. In the second part of the thesis, we frame the testing of safety-critical models through the lens of risk. In contrast to formal verification and other traditional software-testing techniques, we present a "risk-based" framework, where the goal is to calculate the probability of failure under a base distribution of environment behavior. For safety-critical algorithms, this probability is small, and the resulting technical challenge is a rare-event simulation problem. We develop a novel, provably efficient rare-event simulation method that combines exploration, exploitation, and optimization techniques to efficiently find failure modes and estimate their rate of occurrence.
We apply this technique as a tool for rapid sensitivity analysis and model comparison in a variety of applications, showcasing its usefulness in efficiently testing safety-critical autonomous systems.
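The rare-event simulation problem described in the abstract can be illustrated with a generic importance-sampling sketch. This is not the thesis's adaptive method; the failure threshold, the Gaussian base distribution, and the shifted proposal distribution below are all toy assumptions chosen so that naive Monte Carlo almost never observes a failure while a reweighted proposal estimates its probability efficiently:

```python
import math
import random

def failure(x):
    # Toy "failure" event: an environment variable exceeds a threshold.
    # Under the base distribution N(0, 1), this has probability ~3.2e-5.
    return x > 4.0

def naive_mc(n, seed=0):
    # Crude Monte Carlo under the base distribution N(0, 1):
    # with small n, the rare event is typically never observed.
    rng = random.Random(seed)
    hits = sum(failure(rng.gauss(0.0, 1.0)) for _ in range(n))
    return hits / n

def importance_sampling(n, shift=4.0, seed=0):
    # Sample from a shifted proposal N(shift, 1), which places mass
    # near the failure region, and reweight each failing sample by the
    # likelihood ratio of base density to proposal density.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if failure(x):
            # phi(x) / phi(x - shift) = exp(-x*shift + shift^2 / 2)
            total += math.exp(-x * shift + shift * shift / 2.0)
    return total / n
```

Shifting the proposal toward the failure region concentrates samples where failures occur, so the reweighted estimator attains far lower variance than crude Monte Carlo at the same sample budget; the thesis's method goes further by learning where to concentrate sampling effort.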
Description
Type of resource | text
---|---
Form | electronic resource; remote; computer; online resource
Extent | 1 online resource
Place | [Stanford, California]
Publisher | [Stanford University]
Copyright date | ©2020
Publication date | 2020
Issuance | monographic
Language | English
Creators/Contributors
Author | Sinha, Aman
---|---
Degree supervisor | Duchi, John
Thesis advisor | Duchi, John
Thesis advisor | Kochenderfer, Mykel J., 1980-
Thesis advisor | Pavone, Marco, 1980-
Thesis advisor | Sadigh, Dorsa
Thesis advisor | Tedrake, Russ
Degree committee member | Kochenderfer, Mykel J., 1980-
Degree committee member | Pavone, Marco, 1980-
Degree committee member | Sadigh, Dorsa
Degree committee member | Tedrake, Russ
Associated with | Stanford University, Department of Electrical Engineering
Subjects
Genre | Theses
---|---
Genre | Text
Bibliographic information
Statement of responsibility | Aman Sinha
---|---
Note | Submitted to the Department of Electrical Engineering
Thesis | Thesis (Ph.D.)--Stanford University, 2020
Location | electronic resource
Access conditions
- Copyright
- © 2020 by Aman Sinha
- License
- This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).