Improving and accelerating particle-based probabilistic inference

Abstract/Contents

Abstract
Probabilistic inference is a powerful approach for reasoning under uncertainty that goes beyond point estimation of model parameters to full estimation of the posterior distribution. However, approximating intractable posterior distributions and estimating expectations involving high-dimensional integrals pose algorithmic and computational challenges, especially for large-scale datasets. The two main families of methods are sampling-based approaches, such as Markov Chain Monte Carlo (MCMC) and Particle Filters, and optimization-based approaches, such as Variational Inference. This thesis presents research on improving and accelerating particle-based probabilistic inference in the areas of MCMC, Particle Filters, Particle-Based Variational Inference, and discrete graphical models. First, we present Sample Adaptive MCMC, a particle-based adaptive MCMC algorithm. We demonstrate that Sample Adaptive MCMC does not require any tuning of the proposal distribution, potentially automating the sampling procedure, and that it employs global proposals, potentially leading to large speedups over existing MCMC methods. Second, we present a pathwise derivative estimator for Particle Filters that includes the resampling step. The obstacle to a fully differentiable Particle Filter is the non-differentiability of the discrete particle resampling step. The key idea of our proposed method is to reformulate the Particle Filter algorithm so as to eliminate the discrete resampling step, making the reformulated Particle Filter completely continuous and fully differentiable. Third, we propose stochastic variance reduction and quasi-Newton methods for Particle-Based Variational Inference. The insight of our work is that accurate posterior inference requires highly accurate solutions to the Particle-Based Variational Inference optimization problem, so we leverage ideas from large-scale optimization.
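The non-differentiability that motivates the second contribution can be seen in a minimal resampling step. The sketch below (plain NumPy; an illustration of standard multinomial resampling, not the thesis's continuous reformulation) shows that resampled particles are selected through a discrete index draw, through which no gradient can flow back to the weights.

```python
import numpy as np

def multinomial_resample(particles, weights, rng):
    """Standard multinomial resampling: duplicate high-weight particles,
    drop low-weight ones, and reset weights to uniform.

    The index draw below is a discrete operation, which is why a particle
    filter containing this step is not differentiable end to end.
    """
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)  # discrete draw: no gradient path
    return particles[idx], np.full(n, 1.0 / n)

rng = np.random.default_rng(0)
particles = np.array([-1.0, 0.0, 2.0])
weights = np.array([0.1, 0.1, 0.8])  # most mass on the last particle
new_particles, new_weights = multinomial_resample(particles, weights, rng)
```

After resampling, every surviving particle is a copy of one of the originals and the weights are uniform; the mapping from `weights` to `new_particles` is piecewise constant, so its pathwise derivative is zero almost everywhere.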
Lastly, we introduce a meta-algorithm for probabilistic inference in discrete graphical models based on random projections. The key idea is to run approximate inference algorithms on an exponentially large number of samples obtained via random projections. The number of samples used controls the trade-off between the accuracy of the approximate inference algorithm and the variance of the estimator.
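The samples-versus-variance trade-off mentioned above follows the usual Monte Carlo scaling: averaging an estimator over m independent samples shrinks its variance roughly like 1/m. The toy sketch below (a generic Bernoulli example, not the thesis's random-projection meta-algorithm) illustrates that scaling.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate(m):
    """Toy estimator: sample mean of m draws from a Bernoulli(0.3).

    Its variance is p(1 - p) / m, so larger m gives lower variance
    at the cost of more computation per estimate.
    """
    return rng.binomial(1, 0.3, size=m).mean()

# Empirical variance of the estimator over many repetitions,
# for a small and a large sample budget.
var_small = np.var([estimate(10) for _ in range(2000)])    # ~ 0.021
var_large = np.var([estimate(1000) for _ in range(2000)])  # ~ 0.0002
```

In the meta-algorithm setting, the same lever applies: using more projected samples lowers the variance of the final estimator, while the per-sample accuracy of the underlying approximate inference algorithm bounds how good each individual sample is.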

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2021; ©2021
Publication date 2021
Issuance monographic
Language English

Creators/Contributors

Author Zhu, Michael Hongyu
Degree supervisor Bohg, Jeannette, 1981-
Degree supervisor Lai, T. L.
Thesis advisor Bohg, Jeannette, 1981-
Thesis advisor Lai, T. L.
Thesis advisor Ma, Tengyu
Degree committee member Ma, Tengyu
Associated with Stanford University, Computer Science Department

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Michael Hongyu Zhu.
Note Submitted to the Computer Science Department.
Thesis Thesis (Ph.D.)--Stanford University, 2021.
Location https://purl.stanford.edu/td650bk3494

Access conditions

Copyright
© 2021 by Michael Hongyu Zhu
