Essays in quantitative marketing


Abstract/Contents

Abstract
The first chapter is joint work with Tilman Drerup that studies the economic consequences of over-delivering versus under-delivering and their implications for how firms design promises. Firms often need to promise a certain level of service quality to attract customers, and a central question is how to design promises that balance the trade-off between customer acquisition and customer retention. For example, most e-commerce platforms need to promise a certain delivery time. Over-promising may attract more customers in the present, but its impact on future retention depends on consumer inertia, learning, and loss aversion. Empirical analysis of this topic is challenging because realized and promised service quality are often unobserved or lack exogenous variation. To study this problem, we leverage a novel dataset from Instacart that directly observes variation in promised and actual delivery times. We apply a generalized propensity score method to nonparametrically estimate the impact of delivery time on customer retention. Consistent with reference dependence and loss aversion, we document that customers are around 92% more responsive once a delivery becomes late. Our results inform a structural model of learning and reference dependence that illustrates the importance of estimating loss aversion and of distinguishing promise-based reference points from expectation-based reference points: the company would forgo millions of dollars in revenue if it underestimated loss aversion or assumed expectation-based reference points.

The second chapter studies how to better leverage data by combining naturally occurring observational data with randomized controlled trials. Randomized controlled trials generate experimental variation that can credibly identify causal effects but often suffer from limited scale, while observational datasets are large but often violate the desired identification assumptions.
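The generalized propensity score idea behind the first chapter can be illustrated with a stylized simulation. Everything below is hypothetical (the variable names, the linear model, and the numbers are illustrative only, and the chapter's actual estimator is nonparametric); this sketch uses a simple parametric variant, where the estimated conditional density of a continuous treatment (delivery delay) given a confounder forms stabilized inverse-probability weights:

```python
import numpy as np

def norm_pdf(v, loc, scale):
    """Gaussian density, used to evaluate the (generalized) propensity score."""
    return np.exp(-0.5 * ((v - loc) / scale) ** 2) / (scale * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical confounder x (say, basket size) shifts both the realized
# delivery delay t (the continuous "treatment") and retention y.
x = rng.normal(size=n)
t = 5 + 2 * x + rng.normal(scale=3.0, size=n)                  # delay in minutes
y = 1.0 - 0.02 * t + 0.5 * x + rng.normal(scale=0.2, size=n)   # true slope: -0.02

# Naive regression of y on t is confounded (the slope even flips sign here).
X = np.column_stack([np.ones(n), t])
naive_slope = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Generalized propensity score: the estimated conditional density f(t | x).
A = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)
sigma = (t - A @ coef).std()
gps = norm_pdf(t, A @ coef, sigma)

# Stabilized inverse-probability weights (marginal density / GPS); a weighted
# regression of y on t then recovers the causal dose-response slope.
w = norm_pdf(t, t.mean(), t.std()) / gps
sw = np.sqrt(w)
gps_slope = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0][1]
print(f"naive slope: {naive_slope:+.3f}, GPS-weighted slope: {gps_slope:+.3f}")
```

In this simulation the naive slope is biased upward by the confounder, while the GPS-weighted slope is close to the true causal effect of delay on retention.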
To improve estimation efficiency, I propose a method that leverages imperfect instruments: pretreatment covariates that satisfy the relevance condition but may violate the exclusion restriction. I show that these imperfect instruments can be used to derive moment restrictions that, in combination with the experimental data, improve estimation efficiency. I outline estimators for implementing this strategy and show that my methods can reduce variance by up to 50%, so that only half of the experimental sample is required to attain the same statistical precision. I apply my method to a search-listing dataset from Expedia to estimate the causal effect of search rankings on clicks, and show that the method substantially improves precision.

The third chapter is joint work with Harikesh Nair and Fengshi Niu, in which we study how auction throttling can be used to identify the effect of online advertising. Causally identifying the effect of digital advertising is challenging because experimentation is expensive and observational data lack random variation. This chapter identifies a pervasive source of naturally occurring, quasi-experimental variation in user-level ad exposure in digital advertising campaigns and shows how ad publishers can use this variation to identify the causal effect of advertising campaigns. The variation pertains to auction throttling, a probabilistic method of budget pacing that is widely used to spread an ad campaign's budget over its deployed duration, so that the budget is not exceeded or overly concentrated in any one period. The throttling mechanism computes a participation probability from the campaign's budget spending rate and then includes the campaign in a random subset of available ad auctions each period according to this probability. We show that access to logged participation probabilities enables identification of the local average treatment effect (LATE) of the ad campaign.
We present a new estimator that leverages this identification strategy and outline a bootstrap procedure for quantifying its variability. We apply our method to real-world ad-campaign data from an e-commerce advertising platform that uses such throttling for budget pacing. Our estimate is statistically different from estimates derived using standard observational methods such as OLS and two-stage least squares. Compared to the implausible 600% conversion lift estimated by naive observational methods, our estimated conversion lift of 110% is far more plausible.
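The identification logic of the third chapter can be sketched with a stylized simulation (all quantities hypothetical; the chapter's actual estimator and bootstrap are more involved). Because the logged participation probability is a known randomization probability, auction participation acts as a randomized instrument for ad exposure, and an inverse-probability-weighted Wald ratio recovers the LATE:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # user-period observations

# The pacing system enters the campaign into each user's auction with a
# known, logged probability p (here drawn uniformly for illustration).
p = rng.uniform(0.2, 0.8, n)
Z = rng.random(n) < p                    # campaign participated in the auction
compliers = rng.random(n) < 0.6          # users who see the ad when eligible
D = Z & compliers                        # actual ad exposure (one-sided)
# Conversion: ~2% baseline, and exposure adds ~2pp for a random subset.
Y = ((rng.random(n) < 0.02) | (D & (rng.random(n) < 0.02))).astype(float)
Z, D = Z.astype(float), D.astype(float)

# IPW-Wald estimator: reweight each arm by its known inclusion probability,
# then scale the intent-to-treat effect by the induced change in exposure.
w1, w0 = Z / p, (1 - Z) / (1 - p)
itt_y = np.average(Y, weights=w1) - np.average(Y, weights=w0)
itt_d = np.average(D, weights=w1) - np.average(D, weights=w0)
late = itt_y / itt_d
print(f"first stage: {itt_d:.3f}, estimated LATE on conversion: {late:.4f}")
```

The reweighting by the logged probabilities is what makes participation comparable across user-periods with different pacing rates; without it, a naive comparison would conflate throttling intensity with user characteristics.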

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2023; ©2023
Publication date 2023
Issuance monographic
Language English

Creators/Contributors

Author Gui, Zhida
Degree supervisor Nair, Harikesh
Degree supervisor Sahni, Navdeep
Thesis advisor Nair, Harikesh
Thesis advisor Sahni, Navdeep
Thesis advisor Donkor, Kwabena
Thesis advisor Hartmann, Wesley
Degree committee member Donkor, Kwabena
Degree committee member Hartmann, Wesley
Associated with Stanford University, Graduate School of Business

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility George Zhida Gui.
Note Submitted to the Graduate School of Business.
Thesis Thesis (Ph.D.)--Stanford University, 2023.
Location https://purl.stanford.edu/cw738sd5782

Access conditions

Copyright
© 2023 by Zhida Gui
License
This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).
