Strategic and adaptive execution

Abstract
First, we consider a trader who aims to liquidate a large position in the presence of an arbitrageur who hopes to profit from the trader's activity. The arbitrageur is uncertain about the trader's position and learns from observed price fluctuations, making this a dynamic game with asymmetric information. We present an algorithm for computing perfect Bayesian equilibrium behavior and conduct numerical experiments. Our results demonstrate that the trader's strategy differs significantly from one that would be optimal in the absence of the arbitrageur. In particular, the trader must balance the conflicting desires of minimizing price impact and minimizing the information signaled through trading. Accounting for information signaling and the presence of strategic adversaries can greatly reduce execution costs.

Second, we consider a model in which a trader seeks to maximize expected risk-adjusted profit while trading a single security. In this model there is no arbitrageur, and each price change is a linear combination of observed factors, impact resulting from the trader's current and prior activity, and unpredictable random effects. The trader must learn the coefficients of a price impact model while trading. We propose a new method for simultaneous execution and learning -- the confidence-triggered regularized adaptive certainty equivalent (CTRACE) policy -- and establish a polylogarithmic finite-time expected regret bound. This bound implies that CTRACE is efficient in the sense that its (ε, δ)-convergence time is bounded by a polynomial function of 1/ε and log(1/δ) with high probability. In addition, we demonstrate via Monte Carlo simulation that CTRACE outperforms the certainty equivalent policy and a recently proposed reinforcement learning algorithm designed to explore efficiently in linear-quadratic control problems.
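The second model's price dynamics, in which each price change is a linear combination of observed factors, impact from current and prior trading, and random noise, can be sketched as follows. All coefficient values, the exponential decay of prior-trade impact, and the plain least-squares fit are illustrative assumptions for this sketch only; they are not the thesis's CTRACE policy, which triggers regularized re-estimation based on confidence conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" coefficients (illustrative, not from the thesis).
beta = np.array([0.5, -0.3])  # loadings on two observed factors
lam = 0.1                     # impact of the current trade
gamma = 0.05                  # impact of (decayed) prior trades
sigma = 0.01                  # std dev of unpredictable random effects
decay = 0.9                   # assumed decay rate of prior-trade impact

T = 1000
factors = rng.normal(size=(T, 2))  # observed factors
trades = rng.normal(size=T)        # trader's activity

# Simulate price changes: dp_t = f_t·beta + lam*u_t + gamma*d_t + noise,
# where d_t is an exponentially decayed sum of trades before time t.
d = np.zeros(T)
running = 0.0
for t in range(T):
    d[t] = running
    running = decay * running + trades[t]

dp = factors @ beta + lam * trades + gamma * d + sigma * rng.normal(size=T)

# A certainty-equivalent-style estimate: ordinary least squares on the
# regressors the trader observes (factors, own trades, decayed history).
X = np.column_stack([factors, trades, d])
theta, *_ = np.linalg.lstsq(X, dp, rcond=None)
# theta approximately recovers [beta[0], beta[1], lam, gamma]
```

With 1,000 observations and small noise, the least-squares estimate lands close to the simulated coefficients; the thesis's contribution concerns how to do such learning *while* trading, with regularization and confidence triggers controlling when estimates are updated.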

Description

Type of resource text
Form electronic; electronic resource; remote
Extent 1 online resource.
Publication date 2012
Issuance monographic
Language English

Creators/Contributors

Associated with Park, Beomsoo
Associated with Stanford University, Department of Electrical Engineering
Primary advisor Van Roy, Benjamin
Thesis advisor Giesecke, Kay
Thesis advisor Johari, Ramesh, 1976-

Subjects

Genre Theses

Bibliographic information

Statement of responsibility Beomsoo Park.
Note Submitted to the Department of Electrical Engineering.
Thesis Thesis (Ph.D.)--Stanford University, 2012.
Location electronic resource

Access conditions

Copyright
© 2012 by Beomsoo Park
