Sample-efficient uncertainty calibration for reliable autonomous systems



Abstract
As the capabilities of autonomous systems continue to improve, they are increasingly deployed in unstructured environments for safety-critical tasks where the cost of failure is high. These systems take as input observations from the external environment, rely on intermediate learning-based components, and output actions that govern system behavior. To operate safely and reliably in high-stakes environments, these systems must have a meaningful and well-calibrated notion of uncertainty over their inputs, their outputs, and their internal components. In the first part of this thesis, we address the problem of designing a meaningful and calibrated notion of uncertainty over the system inputs. Knowing when the underlying data distribution has changed, and the system may therefore be operating outside its domain of competency, is a critical form of uncertainty, as such distribution shifts may lead to catastrophic failures for learning-based systems. To address this problem, we present a warning system that issues alerts when the data distribution shifts, and show that these alerts are calibrated in the sense that false alarms are rare (a property backed by statistical guarantees). This alert system issues alerts an order of magnitude faster than prior work. In the second part of this thesis, we address the problem of calibrating the uncertainty representations used by intermediate components, enabling improved real-time decision-making. In the third part of this thesis, we address the problem of designing a meaningful and calibrated notion of uncertainty over the system outputs. Recognizing when the outputs of the system may lead to a dangerous mistake is a critical form of uncertainty; thus, we present a warning system that issues alerts when a dangerous situation is imminent, and show that these alerts are calibrated by providing a statistical guarantee on their false negative rate. This calibration is achieved with limited data using techniques from conformal prediction.
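The thesis's own calibration procedures are not reproduced in this record. As a minimal illustrative sketch of the generic split conformal recipe the abstract alludes to (the scores, data, and function names below are hypothetical, not taken from the thesis): given n held-out calibration scores, thresholding at the ceil((n+1)(1-α))/n empirical quantile guarantees that a fresh, exchangeable score exceeds the threshold with probability at most α.

```python
import math

def conformal_threshold(cal_scores, alpha):
    """Split conformal quantile.

    With n exchangeable calibration scores, returning the score of rank
    ceil((n+1) * (1 - alpha)) guarantees that a new score drawn from the
    same distribution exceeds the returned threshold with probability
    at most alpha (a finite-sample, distribution-free guarantee).
    """
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank of the conformal quantile
    if k > n:
        # Too few calibration samples for this alpha: never alert-free,
        # so return an infinite threshold as the conservative fallback.
        return float("inf")
    return sorted(cal_scores)[k - 1]

# Hypothetical usage: treat each score as a "danger" score on held-out
# calibration data, and raise an alert whenever a new score exceeds tau.
cal = [0.1, 0.3, 0.2, 0.8, 0.5, 0.4, 0.9, 0.6, 0.7, 0.35]
tau = conformal_threshold(cal, alpha=0.2)
should_alert = lambda score: score > tau
```

Because the guarantee holds for any n, this kind of calibration remains valid even with limited data, which is the sample-efficiency property the abstract highlights.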

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2023
Publication date 2023
Issuance monographic
Language English

Creators/Contributors

Author Luo, Rachel
Degree supervisor Pavone, Marco, 1980-
Degree supervisor Savarese, Silvio
Thesis advisor Pavone, Marco, 1980-
Thesis advisor Savarese, Silvio
Thesis advisor Guibas, Leonidas J
Thesis advisor Sadigh, Dorsa
Degree committee member Guibas, Leonidas J
Degree committee member Sadigh, Dorsa
Associated with Stanford University, School of Engineering
Associated with Stanford University, Department of Electrical Engineering

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Rachel Luo.
Note Submitted to the Department of Electrical Engineering.
Thesis Thesis (Ph.D.)--Stanford University, 2023.
Location https://purl.stanford.edu/hd110jx7607

Access conditions

Copyright
© 2023 by Rachel Luo
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).
