Truncation algorithms for Markov chains and processes
Abstract/Contents
- Abstract
- A well-known approach for approximating the stationary distribution of a very large or infinite state space Markov chain is to take a finite sub-matrix of the original infinite transition kernel, augment the entries of the sub-matrix to produce a stochastic matrix, and use the stationary distribution of the resulting stochastic matrix as the approximation. This algorithm is known as truncation and augmentation. We present new sufficient conditions for establishing the convergence of general truncation and augmentation schemes. In particular, we show that convergence is guaranteed when the truncation sets are chosen as sublevel sets of a certain Lyapunov function. We also generalize these results to the continuous state space and continuous time settings. These new sufficient conditions prove useful: for example, they yield, to our knowledge, the first proof of convergence of general augmentations for the toggle switch model in biology. We also extend known convergence results on "fixed state" augmentations, wherein the augmentation is performed on only a single column, to continuous state space and continuous time. We observe that the probabilistic concept of regeneration can play a key role in establishing convergence of these schemes, via a suitable probabilistic coupling and a monotone convergence argument. Finally, we present a new algorithm for approximating the stationary distribution of a very large or infinite state space Markov chain using a smaller state space. The algorithm is based on a ratio representation for equilibrium expectations in which the numerator and denominator correspond to expectations defined over paths that start and end within a given return set K. When K is a singleton, this representation is a well-known consequence of regenerative process theory.
For computational tractability, we ignore contributions to the path expectations corresponding to excursions out of a given truncation set A. This yields a truncation algorithm that is provably convergent as A grows large. Furthermore, in the presence of a suitable Lyapunov function, we can bound the path expectations, thereby providing computable and convergent error bounds for our numerical procedure. We also present a computational study that compares our algorithm to other algorithms in the literature and shows that our method achieves state-of-the-art performance on models from biology and queueing.
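The truncation-and-augmentation scheme described above can be sketched concretely. The code below is an illustration assumed for this record, not the thesis's own implementation: it forms the northwest-corner truncation of an infinite kernel and applies a "fixed state" augmentation (all truncated row mass redirected to a single column) on a reflected random walk, a chain whose geometric stationary distribution is known in closed form so the approximation can be checked. All function names and the choice of test chain are illustrative.

```python
import numpy as np

def truncate_and_augment(kernel, n, augment_col=0):
    """(n+1)x(n+1) northwest-corner truncation of an infinite transition
    kernel, with the lost row mass returned to a single fixed column
    (a "fixed state" augmentation)."""
    Q = np.array([[kernel(i, j) for j in range(n + 1)] for i in range(n + 1)])
    lost = 1.0 - Q.sum(axis=1)   # probability mass that left the truncation set
    Q[:, augment_col] += lost    # redirect it to one fixed state
    return Q

def stationary(Q):
    """Stationary distribution of a stochastic matrix via the left
    eigenvector associated with eigenvalue 1."""
    w, v = np.linalg.eig(Q.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

# Illustrative test chain: reflected random walk on {0, 1, 2, ...}
# (a discrete-time M/M/1 analogue), with up-step probability p.
p = 0.3
def kernel(i, j):
    if i == 0:
        return p if j == 1 else (1.0 - p if j == 0 else 0.0)
    return p if j == i + 1 else (1.0 - p if j == i - 1 else 0.0)

n = 40
pi_hat = stationary(truncate_and_augment(kernel, n))

# Exact stationary distribution is geometric: pi_i = (1 - rho) * rho**i
rho = p / (1.0 - p)
pi_true = (1.0 - rho) * rho ** np.arange(n + 1)
print(np.max(np.abs(pi_hat - pi_true)))  # truncation error, small for this n
```

For this chain the tail mass beyond state 40 is negligible, so the fixed-state augmentation recovers the geometric stationary distribution to near machine precision; the thesis's contribution concerns when such convergence is guaranteed as the truncation set grows, for chains where it is far less obvious.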
Description
Type of resource | text
---|---
Form | electronic resource; remote; computer; online resource
Extent | 1 online resource.
Place | California
Place | [Stanford, California]
Publisher | [Stanford University]
Copyright date | 2022; ©2022
Publication date | 2022
Issuance | monographic
Language | English
Creators/Contributors
Author | Infanger, Alexander Gerd-Dara
---|---
Degree supervisor | Glynn, Peter W
Thesis advisor | Glynn, Peter W
Thesis advisor | Blanchet, Jose H
Thesis advisor | Iaccarino, Gianluca
Degree committee member | Blanchet, Jose H
Degree committee member | Iaccarino, Gianluca
Associated with | Stanford University, Institute for Computational and Mathematical Engineering
Subjects
Genre | Theses
---|---
Genre | Text
Bibliographic information
Statement of responsibility | Alex Infanger.
---|---
Note | Submitted to the Institute for Computational and Mathematical Engineering.
Thesis | Thesis (Ph.D.)--Stanford University, 2022.
Location | https://purl.stanford.edu/qq099pf2055
Access conditions
- Copyright
- © 2022 by Alexander Gerd-Dara Infanger
- License
- This work is licensed under a Creative Commons Attribution 3.0 Unported license (CC BY).