Coding for computation : learning with limited information


Abstract/Contents

Abstract
The emerging paradigm of ubiquitous low-cost sensors with wireless connectivity and data processing in the cloud promises to be an enabling technology for applications such as smart cities, industrial automation, and intelligent energy or traffic management. However, adding billions of these low-cost sensors can considerably increase wireless network capacity requirements, which are already strained by current demands. To minimize communication across a network, we explore the problem of compressing or coding data from distributed sensors for transmission over a noisy communication channel, and we study algorithms for performing computation with this partial information. We first consider low-complexity randomized approaches, such as sampling or compressive coding of transmissions, to reduce communication requirements. In the scenario where sensors feed measurements from an underlying unknown time series to a fusion center, communication bottlenecks preclude the collection of all such measurements. To address this challenge, we develop algorithms that learn the parameters of the time series from randomized codings comprising subsamples or compressed samples of the measurements transmitted by the sensors. We do this by first estimating the covariance of the measurements from the partial information received and then using it to estimate the time-series parameters. We derive non-asymptotic convergence bounds for these algorithms, and we show that modifications to them yield tighter bounds under structural priors such as low rank or sparsity. This procedure is applied to the estimation and control of linear dynamical systems. We also consider a decentralized learning paradigm in which nodes on a network cooperate to solve a global optimization problem using their local information.
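Returning to the first thread above, the covariance-from-subsamples step admits a minimal sketch. Assuming a simple Bernoulli-subsampling model (each measurement entry reaches the fusion center independently with probability p), the second moments of the masked data can be rescaled into an unbiased covariance estimate; all names, dimensions, and the subsampling model here are illustrative assumptions, not the thesis's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: n measurement vectors of dimension d, each entry
# observed independently with probability p (random subsampling).
n, d, p = 5000, 4, 0.5
A = rng.standard_normal((d, d))
X = rng.standard_normal((n, d)) @ A       # correlated measurements
mask = rng.random((n, d)) < p
Y = np.where(mask, X, 0.0)                # fusion center sees only the sampled entries

# An entry (i, j) of Y^T Y / n has expectation p^2 * Sigma_ij off the diagonal
# and p * Sigma_ii on the diagonal, so rescale each part accordingly.
S = Y.T @ Y / n
Sigma_hat = S / p**2
np.fill_diagonal(Sigma_hat, np.diag(S) / p)

Sigma_full = X.T @ X / n                  # what full data would have given
err = np.linalg.norm(Sigma_hat - Sigma_full) / np.linalg.norm(Sigma_full)
```

The recovered covariance could then feed a downstream step such as fitting time-series parameters; the estimation error shrinks as more samples arrive and grows as p decreases.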
Current techniques for this problem, such as distributed gradient methods, assume that nodes can transmit arbitrarily large messages at each time instant. We use the randomized compression approaches mentioned earlier to reduce the size of the transmitted messages and adapt these distributed optimization algorithms to networks with limited communication capabilities. We show an inverse relationship between the degree of compression and the accuracy of the distributed optimization algorithm. The algorithm is also robust to the use of stochastic gradients, noisy communication links, and time-varying networks. Finally, we introduce an approach in which structured data such as text or audio is coded over learned representations produced by neural-network-based autoencoders. For the transmission of text with limited-length encodings, we show that this approach retains more of the semantic content of sentences than traditional source and channel coding techniques.
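The compression–accuracy trade-off for gradient methods can be sketched in a simplified parameter-server arrangement rather than the fully decentralized setting of the thesis: each worker transmits an unbiasedly sparsified gradient of a local quadratic, and the averaged updates still converge, with the residual variance governed by how many coordinates each message keeps. All names, the quadratic objectives, and the random-k compressor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: m workers hold local objectives f_i(x) = 0.5*||x - b_i||^2,
# whose sum is minimized at the mean of the b_i.
m, d, k, T = 6, 10, 3, 600    # workers, dimension, coords kept per message, rounds
b = rng.standard_normal((m, d))
x_opt = b.mean(axis=0)

def rand_k(v, k):
    """Unbiased random-k sparsification: keep k of len(v) coordinates,
    rescaled by len(v)/k so the expectation equals v."""
    idx = rng.choice(len(v), size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (len(v) / k)
    return out

x = np.zeros(d)
for t in range(T):
    # each worker sends only k of d gradient coordinates per round
    grads = [rand_k(x - b[i], k) for i in range(m)]
    x = x - (1.0 / (t + 5)) * np.mean(grads, axis=0)   # decaying step size

err = np.linalg.norm(x - x_opt)
```

Shrinking k cuts the per-round communication by the same factor while inflating the gradient variance, which is one concrete face of the inverse relationship between compression and accuracy noted above.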

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2019; ©2019
Publication date 2019
Issuance monographic
Language English

Creators/Contributors

Author Rao, Milind Mukesh
Degree supervisor Goldsmith, Andrea, 1964-
Thesis advisor Goldsmith, Andrea, 1964-
Thesis advisor Duchi, John
Thesis advisor Weissman, Tsachy
Degree committee member Duchi, John
Degree committee member Weissman, Tsachy
Associated with Stanford University, Department of Electrical Engineering.

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Milind Rao.
Note Submitted to the Department of Electrical Engineering.
Thesis Thesis (Ph.D.)--Stanford University, 2019.
Location electronic resource

Access conditions

Copyright
© 2019 by Milind Mukesh Rao
License
This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).
