Sample-efficient uncertainty calibration for reliable autonomous systems
- As the capabilities of autonomous systems continue to improve, they are increasingly deployed in unstructured environments for safety-critical tasks where the cost of failure is high. These systems take as input observations from the external environment, rely on intermediate learning-based components, and output actions that govern system behavior. To operate safely and reliably in high-stakes environments, these systems must have a meaningful and well-calibrated notion of uncertainty over their inputs, their outputs, and their internal components. In the first part of this thesis, we address the problem of designing a meaningful and calibrated notion of uncertainty over the system inputs. Knowing when the underlying data distribution has changed and the system may be operating outside its domain of competency is a critical form of uncertainty, as such distribution shifts may lead to catastrophic failures for learning-based systems. To address this problem, we present a warning system that issues alerts when the data distribution shifts, and show that these alerts are calibrated in the sense that false alarms are rare (backed by statistical guarantees). Our system issues these alerts an order of magnitude faster than prior work. In the second part of this thesis, we address the problem of calibrating the uncertainty representations used by intermediate components, enabling improved real-time decision-making. In the third part of this thesis, we address the problem of designing a meaningful and calibrated notion of uncertainty over the system outputs. Recognizing when the outputs of the system may lead to a dangerous mistake is a critical form of uncertainty; thus, we present a warning system that issues alerts when a dangerous situation is imminent, and show that these alerts are calibrated by providing a statistical guarantee on their false negative rate. This calibration is achieved with limited data using techniques from conformal prediction.
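The sample-efficient calibration described above rests on split conformal prediction: a finite-sample quantile of held-out nonconformity scores yields an alert threshold with a distribution-free guarantee on the error rate. The sketch below illustrates that idea only; the function name, the synthetic scores, and the choice of score are illustrative assumptions, not the thesis's actual method or data.

```python
import numpy as np


def conformal_threshold(cal_scores, alpha):
    """Split-conformal threshold: if a fresh score is exchangeable with
    the calibration scores, P(score > threshold) <= alpha."""
    n = len(cal_scores)
    # Conservative finite-sample quantile level ceil((n+1)(1-alpha))/n.
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(np.asarray(cal_scores), q, method="higher")


# Toy usage with synthetic calibration scores (illustrative only).
rng = np.random.default_rng(0)
cal_scores = rng.uniform(size=1000)
tau = conformal_threshold(cal_scores, alpha=0.1)
# At deployment, raise an alert when a new input's score exceeds tau;
# the guarantee bounds the rate of threshold exceedances at alpha = 0.1.
```

The guarantee requires only exchangeability between calibration and test scores, which is why relatively little data suffices, but it degrades under the very distribution shifts the first part of the thesis is designed to detect.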
|Type of resource
|electronic resource; remote; computer; online resource
|1 online resource.
|Degree committee member
|Stanford University, School of Engineering
|Stanford University, Department of Electrical Engineering
|Statement of responsibility
|Submitted to the Department of Electrical Engineering.
|Thesis (Ph.D.)--Stanford University, 2023.
- © 2023 by Rachel Luo
- This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).