Accelerating machine learning algorithms with adaptive sampling


Abstract/Contents

Abstract
The era of big data necessitates highly efficient machine learning algorithms. Many common machine learning algorithms, however, rely on computationally intensive subroutines that are prohibitively expensive on large datasets. Oftentimes, existing techniques subsample the data or use other methods to improve computational efficiency, at the expense of incurring some approximation error. This thesis demonstrates that it is often sufficient, instead, to substitute computationally intensive subroutines with a special kind of randomized counterpart that results in almost no degradation in quality. The results in this thesis are based on techniques from the adaptive sampling literature.

Chapter 1 begins with an introduction to a specific adaptive sampling problem: that of best-arm identification in multi-armed bandits. We first provide a formal description of the setting and the best-arm identification problem. We then present a general algorithm, called successive elimination, for solving the best-arm identification problem. The techniques developed in Chapter 1 are applied to different problems in Chapters 2, 3, and 4.

In Chapter 2, we discuss how the k-medoids clustering problem can be reduced to a sequence of best-arm identification problems. We use this observation to present a new algorithm, based on successive elimination, that matches the prior state of the art in clustering quality but reaches the same solutions much faster. Our algorithm achieves an n/log n reduction in sample complexity over the prior state of the art, where n is the size of the dataset, under general assumptions on the data-generating distribution.

In Chapter 3, we analyze the problem of training tree-based models. The majority of the training time for such models is spent splitting the nodes of the tree, i.e., determining the feature and corresponding threshold at which to split each node. We show that the node-splitting subroutine can be reduced to a best-arm identification problem and present a state-of-the-art algorithm for training trees. Our algorithm depends only on the relative quality of each possible split, rather than explicitly depending on the size of the training dataset, and reduces the explicit dependence on the dataset size n from O(n), for the most commonly used prior algorithm, to O(1). Our algorithm applies generally to many tree-based models, such as Random Forests and XGBoost.

In Chapter 4, we study the Maximum Inner Product Search problem. We observe that, as with the k-medoids and node-splitting problems, the Maximum Inner Product Search problem can be reduced to a best-arm identification problem. Armed with this observation, we present a novel algorithm for the Maximum Inner Product Search problem in high dimensions. Our algorithm reduces the explicit scaling with d, the dimensionality of the dataset, from O(√d) to O(1) under reasonable assumptions on the data. Our algorithm has several advantages: it requires no preprocessing of the data, naturally handles the addition or removal of datapoints, and includes a hyperparameter to trade off accuracy and efficiency.

Chapter 5 concludes this thesis with a summary of its contributions and possible directions for future work.
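
To make the recurring best-arm identification theme concrete, the following is a minimal, illustrative Python sketch of successive elimination (not code from the thesis): every arm is sampled in rounds, a confidence interval is maintained on each arm's mean reward, and any arm whose upper confidence bound falls below the best lower confidence bound is eliminated. The `pull` callback, the confidence-radius formula, and all parameter values here are assumptions chosen for illustration; in the reductions described above, an "arm" corresponds to, e.g., a candidate medoid, a candidate feature/threshold split, or a candidate vector in Maximum Inner Product Search.

```python
import numpy as np

def successive_elimination(pull, n_arms, delta=0.05, batch_size=100, max_pulls=100_000):
    """Illustrative successive-elimination sketch for best-arm identification.

    `pull(arm, size)` is assumed to return `size` i.i.d. noisy reward samples for
    the given arm. For example, in a k-medoids reduction, an "arm" could be a
    candidate medoid and a sample its (negated) distance to a random datapoint.
    """
    active = list(range(n_arms))
    sums = np.zeros(n_arms)
    counts = np.zeros(n_arms)

    while len(active) > 1 and counts[active[0]] < max_pulls:
        # Sample every surviving arm a fixed number of additional times.
        for arm in active:
            rewards = pull(arm, batch_size)
            sums[arm] += np.sum(rewards)
            counts[arm] += batch_size

        means = sums[active] / counts[active]
        # Illustrative anytime confidence radius (sub-Gaussian rewards assumed).
        radius = np.sqrt(np.log(2 * n_arms * counts[active] / delta) / counts[active])

        # Eliminate arms whose upper bound falls below the best lower bound.
        best_lower = np.max(means - radius)
        active = [a for a, m, r in zip(active, means, radius) if m + r >= best_lower]

    # Return the empirically best surviving arm.
    return max(active, key=lambda a: sums[a] / counts[a])


# Toy usage: Gaussian reward arms with unknown means; arm 3 is best.
rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.2, 0.15, 0.5, 0.3])
best = successive_elimination(lambda a, size: rng.normal(true_means[a], 1.0, size),
                              n_arms=len(true_means))
print("identified best arm:", best)
```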

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2023
Publication date 2023
Issuance monographic
Language English

Creators/Contributors

Author Tiwari, Mohit
Degree supervisor Piech, Chris (Christopher)
Thesis advisor Piech, Chris (Christopher)
Thesis advisor Valiant, Gregory
Degree committee member Valiant, Gregory
Associated with Stanford University, School of Engineering
Associated with Stanford University, Computer Science Department

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Mohit Tiwari.
Note Submitted to the Computer Science Department.
Thesis Thesis (Ph.D.)--Stanford University, 2023.
Location https://purl.stanford.edu/vp233gb6409

Access conditions

Copyright
© 2023 by Mohit Tiwari
License
This work is licensed under a Creative Commons Attribution 3.0 Unported license (CC BY).
