Essays on trustworthy data-driven decision making


Abstract/Contents

Abstract
Data-driven decision-making systems are deployed ubiquitously in practice, and they have been drastically changing the world and people's daily life. As more and more decisions are made by automatic data-driven systems, it becomes increasingly critical to ensure that such systems are \textit{responsible} and \textit{trustworthy}. In this thesis, I study decision-making problems in realistic contexts and build practical, reliable, and trustworthy methods for their solutions. Specifically, I will discuss the robustness, safety, and fairness issues in such systems. In the first part, we enhance the robustness of decision-making systems via distributionally robust optimization. Statistical errors and distributional shifts are two key factors that downgrade models' performance in deploying environments, even if the models perform well in the training environment. We use distributionally robust optimization (DRO) to design robust algorithms that account for statistical errors and distributional shifts. In Chapter 2, we study distributionally robust policy learning using historical observational data in the presence of distributional shifts. We first present a policy evaluation procedure that allows us to assess how well the policy does under the worst-case environment shift. We then establish a central limit theorem for this proposed policy evaluation scheme. Leveraging this evaluation scheme, we further propose a novel learning algorithm that is able to learn a policy that is robust to adversarial perturbations and unknown covariate shifts with a performance guarantee based on the theory of uniform convergence. Finally, we empirically test the effectiveness of our proposed algorithm in synthetic datasets and demonstrate that it provides the robustness that is missing using standard policy learning algorithms. We conclude the paper by providing a comprehensive application of our methods in the context of a real-world voting dataset. 
In Chapter 3, we focus on the impact of statistical errors in distributionally robust optimization. We study the asymptotic normality of distributionally robust estimators as well as the properties of an optimal confidence region induced by the Wasserstein distributionally robust optimization formulation. In the second part, we study A/B tests under a safety budget. Safety is crucial to the deployment of any new feature on an online platform, as a minor mistake can degrade the whole system. A/B tests are therefore standard practice for vetting new features before launch. However, A/B tests themselves may still be risky, as the new features are exposed to real user traffic. We formulate and study an optimal A/B testing experimental design that minimizes the probability of false selection under pre-specified safety budgets. In our formulation, based on ranking and selection, an experiment must stop immediately if its safety budget is exhausted before the experiment horizon. We apply large deviations theory to characterize optimal A/B testing policies and design associated asymptotically optimal algorithms for A/B testing with safety constraints. In the third part, we study the fairness testing problem. Algorithmic decisions may still possess biases and could be unfair to different genders and races. Testing whether a given machine learning algorithm is fair emerges as a question of first-order importance. In this part, we present a statistical testing framework to detect whether a given machine learning classifier fails to satisfy a wide range of group fairness notions. The proposed test is a flexible, interpretable, and statistically rigorous tool for auditing whether exhibited biases are intrinsic to the algorithm or due to randomness in the data.
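A toy version of a safety-budgeted A/B experiment might look as follows; this is a simplified Monte Carlo sketch, not the thesis's design. In particular, the cost model (each failed outcome consumes one unit of safety budget), the round-robin allocation, and all names are assumptions made only for illustration.

```python
import numpy as np

def run_ab(rng, means, horizon, budget):
    """One safety-constrained A/B experiment (illustrative model):
    sample Bernoulli arms round-robin; every failed outcome consumes one
    unit of safety budget; stop early once the budget is exhausted.
    Returns the index of the arm selected as best."""
    k = len(means)
    counts = np.zeros(k)
    successes = np.zeros(k)
    spent = 0
    for t in range(horizon):
        arm = t % k                          # simple round-robin allocation
        outcome = rng.random() < means[arm]  # Bernoulli(means[arm]) draw
        counts[arm] += 1
        successes[arm] += outcome
        spent += (not outcome)               # a failure costs one budget unit
        if spent >= budget:                  # safety budget exhausted: stop
            break
    return int(np.argmax(successes / np.maximum(counts, 1)))

def prob_false_selection(means, horizon, budget, trials=500, seed=0):
    """Monte Carlo estimate of the probability of false selection."""
    rng = np.random.default_rng(seed)
    best = int(np.argmax(means))
    errors = sum(run_ab(rng, means, horizon, budget) != best
                 for _ in range(trials))
    return errors / trials

pfs = prob_false_selection([0.6, 0.4], horizon=200, budget=120)
```

Tightening the budget forces earlier stopping, which is exactly the tension the chapter studies: the design must trade the probability of false selection against the pre-specified safety constraint.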
The statistical challenges, which may arise from multiple impact criteria that define group fairness and are discontinuous in the model parameters, are conveniently tackled by projecting the empirical measure onto the set of group-fair probability models using optimal transport. The test statistic is efficiently computed via linear programming, and its asymptotic distribution is obtained explicitly. The proposed framework can also be used to test composite fairness hypotheses and fairness with respect to multiple sensitive attributes. The optimal transport formulation improves interpretability by characterizing the minimal covariate perturbations that eliminate the bias observed in the audit.
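To give a flavor of the projection-plus-linear-programming idea (a deliberately simplified sketch, not the thesis's framework): for a binary sensitive attribute and binary predictions, the minimal probability mass that must be shifted between predicted labels within each group to reach statistical parity can be found with a tiny LP. Restricting perturbations to label flips within groups, and all names below, are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def parity_projection_distance(groups, labels):
    """Minimal probability mass moved between predicted labels (within
    each sensitive group) so that both groups share a common
    positive-prediction rate r -- an L1-type projection of the empirical
    distribution onto the statistical-parity set, solved as a small LP."""
    groups = np.asarray(groups)
    labels = np.asarray(labels)
    p0 = np.mean(groups == 0)                     # group-0 mass
    p1 = np.mean(groups == 1)                     # group-1 mass
    p01 = np.mean((groups == 0) & (labels == 1))  # group 0, predicted 1
    p11 = np.mean((groups == 1) & (labels == 1))  # group 1, predicted 1
    # variables (r, u0, u1): minimize u0 + u1 subject to
    # u0 >= |p0*r - p01| and u1 >= |p1*r - p11|, split into 4 inequalities
    c = [0.0, 1.0, 1.0]
    A_ub = [[ p0, -1.0,  0.0],
            [-p0, -1.0,  0.0],
            [ p1,  0.0, -1.0],
            [-p1,  0.0, -1.0]]
    b_ub = [p01, -p01, p11, -p11]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0), (0.0, None), (0.0, None)])
    return res.fun

# a biased sample: group 0 has positive rate 3/4, group 1 has rate 1/4
groups = [0, 0, 0, 0, 1, 1, 1, 1]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
dist = parity_projection_distance(groups, labels)
```

Scaling such a projection distance by the sample size yields a natural candidate test statistic; the thesis derives the actual statistic and its asymptotic distribution, which this sketch does not attempt.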

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2022; ©2022
Publication date 2022
Issuance monographic
Language English

Creators/Contributors

Author Si, Nian
Degree supervisor Blanchet, Jose H
Thesis advisor Blanchet, Jose H
Thesis advisor Glynn, Peter W
Thesis advisor Johari, Ramesh, 1976-
Degree committee member Glynn, Peter W
Degree committee member Johari, Ramesh, 1976-
Associated with Stanford University, Department of Management Science and Engineering

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Nian Si.
Note Submitted to the Department of Management Science and Engineering.
Thesis Ph.D., Stanford University, 2022.
Location https://purl.stanford.edu/wp879fn6428

Access conditions

Copyright
© 2022 by Nian Si
License
This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).
