Statistical analysis of scientific machine learning


Abstract/Contents

Abstract
Massive data collection and computational capabilities have enabled data-driven scientific discovery and control of engineering systems. However, several questions must still be answered to understand the fundamental limits of how much can be discovered from data and what the value of additional information is. For example: 1) How can we learn a physical law or economic principle purely from data? 2) How hard is this task, both computationally and statistically? 3) How does the hardness change when we add further information (e.g., more data or model information)? This thesis answers these three questions for two learning tasks. A key insight in both cases is that direct plug-in estimators can yield statistically suboptimal inference. For the first learning task, the thesis focuses on variational formulations of differential equation models, with a prototypical Poisson equation as the running example. I establish a minimax lower bound for this problem and, based on it, show that the variance of the direct plug-in estimator makes its sample complexity suboptimal. I also study the optimization dynamics of different variational forms and, building on this theory, explain the implicit acceleration obtained by using a Sobolev norm as the training objective. The second learning task the thesis discusses is (linear) operator learning, which has wide applications in causal inference, time series modeling, and conditional probability learning. I establish the first minimax lower bound for this problem. The minimax rate has a particular structure in which the more challenging parts of the input and output spaces determine the hardness of learning a linear operator. The analysis also shows that an intuitive discretization of the infinite-dimensional operator can lead to a suboptimal statistical learning rate. I then discuss how, by suitably trading off bias and variance, one can construct an estimator with the optimal learning rate for learning a linear operator between infinite-dimensional spaces. Finally, I illustrate how this theory inspires a multilevel machine learning algorithm of potential practical use.
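
For concreteness, a minimal sketch of the two formulations named in the abstract (assumptions: a domain \Omega, source term f, and zero Dirichlet boundary conditions for the Poisson example; the operator class \mathcal{A} and norms are left abstract for the operator-learning example, and the thesis's exact setting may differ):

    % Ritz (variational) energy for the Poisson equation -\Delta u = f on \Omega,
    % zero Dirichlet boundary conditions assumed:
    u^\star = \arg\min_{u \in H^1_0(\Omega)}
        \int_\Omega \Big( \tfrac{1}{2}\,|\nabla u(x)|^2 - f(x)\,u(x) \Big)\,dx

    % Minimax risk for estimating a linear operator A from n samples,
    % taken over all estimators \hat{A}_n and a model class \mathcal{A}:
    \inf_{\hat{A}_n} \sup_{A \in \mathcal{A}}
        \mathbb{E}\,\big\| \hat{A}_n - A \big\|^2

The plug-in estimators discussed in the abstract correspond to minimizing an empirical version of such objectives directly; the thesis's point is that this direct approach can be statistically suboptimal.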

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2023
Publication date 2023
Issuance monographic
Language English

Creators/Contributors

Author Lu, Yiping
Degree supervisor Blanchet Mancilla, Jose
Degree supervisor Ying, Lexing
Thesis advisor Blanchet Mancilla, Jose
Thesis advisor Ying, Lexing
Thesis advisor Ryzhik, Leonid
Degree committee member Ryzhik, Leonid
Associated with Stanford University, School of Engineering
Associated with Stanford University, Computer Science Department

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Yiping Lu.
Note Submitted to the Computer Science Department.
Thesis Ph.D., Stanford University, 2023.
Location https://purl.stanford.edu/pp190dc8926

Access conditions

Copyright
© 2023 by Yiping Lu
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).
