Applications of stochastic and optimization models to healthcare research
- This dissertation studies how mathematical modeling can be used in conjunction with empirical data to provide insight into health policy and medical decision-making. We consider three specific questions. First, how should drug safety regulators implement a postmarketing drug surveillance system that accounts for multiple adverse events? Second, what is the aggregate contribution of workplace stressors to poor health outcomes and health spending in the U.S.? Third, how should rigorous cost-effectiveness analyses be conducted for medical innovations when data are scarce and unreliable? These important questions have thus far eluded definitive answers because existing data sources and models cannot be directly applied to answer them satisfactorily. We therefore address them by developing new data-driven mathematical models that draw on ideas from stochastic analysis and optimization theory.

In Chapter 1, we develop a new method for postmarketing surveillance of a drug, in order to detect adverse side effects that were not uncovered during pre-approval clinical trials. Because of the recent proliferation of electronic medical records, regulators can now observe person-level data on drug usage and adverse event incidence in a population. Potentially, they can use these data to monitor the drug and flag it as unsafe if excessive adverse side effects are observed. Two key features make this problem challenging. First, the data accumulate over time, which complicates the regulators' decision process. Second, adverse events that occur in the past can affect the risk that other adverse events occur in the future. We propose a drug surveillance method, called QNMEDS, which simultaneously addresses these two issues. QNMEDS is based on the paradigm of sequential hypothesis testing: it continuously monitors a vector-valued test-statistic process until the process crosses a stopping boundary.
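The sequential-testing paradigm behind QNMEDS can be illustrated in one dimension with the classical Wald SPRT for a Bernoulli adverse-event rate. This is only a hedged sketch: QNMEDS itself monitors a vector-valued statistic driven by a queueing-network model, and all rates and error targets below are hypothetical.

```python
import math
import random

def sprt(observations, p0=0.05, p1=0.10, alpha=0.01, beta=0.10):
    """Classical Wald SPRT comparing an elevated adverse-event rate p1
    against a baseline rate p0.

    Monitors the log-likelihood ratio of H1 vs H0 and stops the first
    time it crosses the upper or lower Wald boundary.
    Returns ("unsafe" | "safe" | "inconclusive", samples used).
    """
    upper = math.log((1 - beta) / alpha)   # crossing -> flag drug as unsafe
    lower = math.log(beta / (1 - alpha))   # crossing -> accept baseline rate
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        if x:  # adverse event observed for this patient
            llr += math.log(p1 / p0)
        else:  # no adverse event
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "unsafe", n
        if llr <= lower:
            return "safe", n
    return "inconclusive", len(observations)

# Simulated patient stream whose true adverse-event rate is elevated (0.10).
random.seed(0)
stream = [random.random() < 0.10 for _ in range(5000)]
decision, n_used = sprt(stream)
print(decision, n_used)
```

Wald's boundaries log((1-&beta;)/&alpha;) and log(&beta;/(1-&alpha;)) approximately control the false-alarm probability at &alpha; and the missed-detection probability at &beta;; the dissertation's contribution is the multivariate analogue of designing such a boundary.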
Our analysis focuses on prescribing how this boundary should be designed. We use a queueing network to model the occurrence of events in patients, which also allows us to capture the correlations between adverse events. Because exact analysis of the model is intractable, we develop an asymptotic diffusion approximation to characterize the approximate distribution of the test-statistic process. We then use mathematical optimization to design the stopping boundary so that it controls the false alarm rate below an exogenously specified value while minimizing the expected detection time. Simulations demonstrate that QNMEDS works as designed and has advantages over a heuristic based on the classical Sequential Probability Ratio Test.

In Chapter 2, we describe a model-based approach to quantifying the relationship between workplace stressors and health outcomes and costs. We considered ten stressors: unemployment, lack of health insurance, exposure to shift work, long working hours, job insecurity, work-family conflict, low job control, high job demands, low social support at work, and low organizational justice. There is widespread empirical evidence that individual stressors are associated with poor health outcomes, but the aggregate health effect of the combination of these stressors is not well understood. Our goal was to estimate the overall contribution of these stressors to (a) annual healthcare spending and (b) annual mortality in the U.S. The central difficulty in deriving these estimates is the absence of a single longitudinal dataset that records workers' exposure to the various workplace stressors as well as their health outcomes and spending. We therefore developed a model-based approach to tackle this problem.
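As a toy illustration of the kind of calculation involved, the standard (Levin) population attributable fraction combines an exposure's prevalence with its relative risk. This is only a single-exposure sketch with hypothetical numbers; the dissertation's actual model handles the joint distribution of multiple exposures and derives optimization-based upper and lower bounds.

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction (PAF) for one exposure:
    the share of an outcome's burden attributable to that exposure,
    assuming the relative risk reflects a causal effect.

    PAF = p(RR - 1) / (1 + p(RR - 1))
    """
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Hypothetical inputs: 10% of workers exposed, relative risk of 1.5.
paf = attributable_fraction(0.10, 1.5)

# Apply the PAF to a hypothetical $100B annual spend on the outcome.
attributable_spend = paf * 100e9
print(f"PAF = {paf:.3f}, attributable spend = ${attributable_spend / 1e9:.1f}B")
```

With correlated exposures, simply summing such single-exposure fractions can overstate the total burden, which is why the chapter's optimization-based bounding approach is needed.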
The model has four input parameters, which were estimated from separate data sources: (a) the joint distribution of workplace exposures in the U.S., which we estimated from the General Social Survey (GSS); (b) the relative risk of each outcome associated with each exposure, which we estimated from an extensive meta-analysis of the epidemiological literature; (c) the status-quo prevalence of each health outcome; and (d) the incremental cost of each health outcome, the last two of which were estimated using the Medical Expenditure Panel Survey (MEPS). The model separately derives optimistic and conservative estimates of the effect of multiple workplace exposures on health, and uses an optimization-based approach to calculate upper and lower bounds around each estimate that account for the correlation between exposures. We find that more than 120,000 deaths per year and approximately 5-8% of annual healthcare costs are associated with, and may be attributable to, how U.S. companies manage their workforce. Our results suggest that more attention should be paid to management practices as important contributors to health outcomes and costs in the U.S.

In Chapter 3, we study the problem of assessing the cost-effectiveness of a medical innovation when data are scarce or highly uncertain. Models based on Markov chains are typically used for medical cost-effectiveness analyses. However, when such models are used for innovations, many elements of the chain's transition matrix may be very imprecise because of data scarcity. While sensitivity analyses can assess the effect of a small number of uncertain parameters, they quickly become computationally intractable as the number of uncertainties grows. At present, only ad-hoc methods exist for performing such analyses when there are many uncertain parameters.
Our analysis focuses on an abstraction of this problem: how to calculate the best- and worst-case discounted value of a Markov chain over an infinite horizon, with respect to a vector of state-wise rewards, when many of its transition elements are known only up to an uncertainty set. We prove the following sharp result: if the uncertainty set has a row-wise property, which is a reasonable assumption for most applied problems, then these values can be tractably computed by iteratively solving certain convex optimization problems; in the absence of this row-wise property, however, evaluating these values is computationally intractable (NP-hard). We apply our method to evaluate the cost-effectiveness of a new screening strategy for colorectal cancer, annual fecal immunochemical testing (FIT) for persons over the age of 55. Our results suggest that FIT is a highly cost-effective alternative to the current guidelines, which prescribe screening by colonoscopy at 10-year intervals.
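The row-wise structure is what makes the worst-case value computable: the inner minimization decouples across rows of the transition matrix. Below is a hedged sketch of the resulting robust value iteration for the simplest row-wise sets, interval bounds on each transition probability (for which the inner problem has a greedy solution). The two-state chain, rewards, and bounds are hypothetical illustrations, not the dissertation's colorectal-cancer model.

```python
def worst_case_row(lo, hi, v):
    """Minimize p.v over probability vectors with lo <= p <= hi, sum(p) = 1.

    Greedy solution for interval sets: start every component at its lower
    bound, then push the remaining probability mass onto the states with
    the smallest continuation values v first.
    """
    p = list(lo)
    slack = 1.0 - sum(lo)
    assert slack >= -1e-12, "lower bounds must sum to at most 1"
    for j in sorted(range(len(v)), key=lambda j: v[j]):
        add = min(hi[j] - p[j], slack)
        p[j] += add
        slack -= add
    assert slack <= 1e-9, "upper bounds must sum to at least 1"
    return p

def robust_value(lo_rows, hi_rows, rewards, gamma=0.97, iters=500):
    """Worst-case discounted value of a Markov reward chain whose
    transition rows lie in independent (row-wise) interval sets.

    Robust value iteration V <- r + gamma * (worst-case P) V converges
    because the robust Bellman operator is a gamma-contraction.
    """
    n = len(rewards)
    v = [0.0] * n
    for _ in range(iters):
        v = [rewards[i] + gamma * sum(
                 p_ij * v_j
                 for p_ij, v_j in zip(worst_case_row(lo_rows[i], hi_rows[i], v), v))
             for i in range(n)]
    return v

# Hypothetical 2-state chain: state 0 "healthy" (reward 1), state 1 "sick" (reward 0).
lo = [[0.7, 0.1], [0.0, 0.6]]   # row-wise lower bounds on transition probabilities
hi = [[0.9, 0.3], [0.4, 1.0]]   # row-wise upper bounds
v = robust_value(lo, hi, rewards=[1.0, 0.0])
print(v)  # in the worst case "sick" becomes absorbing, so v[1] tends to 0
```

For general convex row-wise uncertainty sets, the greedy inner step is replaced by the convex optimization problems described above, but the outer iteration is unchanged.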
|Type of resource
|electronic; electronic resource; remote
|1 online resource.
|Stanford University, Graduate School of Business.
|Zenios, Stefanos A
|Porteus, Evan L
|Statement of responsibility
|Submitted to the Graduate School of Business.
|Thesis (Ph.D.)--Stanford University, 2014.
- © 2014 by Joel Goh
- This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).