Implementable schemes for lossy compression
- In spite of all the achievements of information theory over the last 60 years, some of its basic questions, especially in the area of source coding, have not yet found satisfying solutions. Finding an implementable scheme for universal lossy compression, which is part of the focus of this work, is a good example. Unlike universal lossless compression, for which several well-known practical algorithms exist, e.g., the famous 1978 algorithm of Lempel and Ziv, whose modified versions are still in use today, the situation for universal lossy compression is far less satisfactory. We propose two new paradigms for universal lossy compression. One takes advantage of Markov chain Monte Carlo combined with simulated annealing, and the other has the Viterbi algorithm at its core. Both algorithms approximate the solution of a universal exhaustive-search scheme of enormous computational complexity. In theory, the performance of both algorithms gets as close as desired to that of the original scheme, but we do not have a bound on the required computation as a function of the desired precision. However, our simulations on binary memoryless and first-order binary Markov sources show that the performance of both algorithms approaches the rate-distortion curve with a number of computations linear in the block size. We also consider the Wyner-Ziv (WZ) problem of lossy compression, in which the decompressor observes a noisy version of the source, whose statistics are unknown. A new family of WZ coding algorithms is proposed and their universal optimality is proven. Compression consists of sliding-window processing followed by Lempel-Ziv (LZ) compression, while the decompressor is based on a modification of the discrete universal denoiser (DUDE) algorithm that takes advantage of side information. The new algorithms not only universally attain the fundamental limits, but also suggest a paradigm for practical WZ coding.
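The MCMC-plus-simulated-annealing idea above can be sketched in a few lines. The sketch below is illustrative, not the thesis's actual implementation: it anneals over candidate reconstructions of a binary sequence, minimizing an energy of the form H_k(x̂) + slope·d(x, x̂), where H_k is the k-th order empirical conditional entropy and d is per-letter Hamming distortion. The logarithmic cooling schedule, the slope value, and the full energy re-evaluation after every bit flip are simplifying assumptions (an efficient version would update the empirical counts incrementally to stay linear in the block length).

```python
import math
import random
from collections import defaultdict


def cond_empirical_entropy(x, k):
    """k-th order empirical conditional entropy of binary list x, in bits/symbol."""
    ctx_counts = defaultdict(lambda: [0, 0])
    for i in range(k, len(x)):
        ctx_counts[tuple(x[i - k:i])][x[i]] += 1
    n, h = len(x) - k, 0.0
    for counts in ctx_counts.values():
        total = counts[0] + counts[1]
        for c in counts:
            if c:
                h -= (c / n) * math.log2(c / total)
    return h


def mcmc_lossy_compress(x, slope=4.0, k=2, sweeps=5, seed=0):
    """Simulated-annealing sketch: Metropolis bit flips on the reconstruction,
    minimizing  H_k(x_hat) + slope * (Hamming distortion)."""
    rng = random.Random(seed)
    n = len(x)
    x_hat = list(x)

    def energy(z):
        d = sum(a != b for a, b in zip(x, z)) / n
        return cond_empirical_entropy(z, k) + slope * d

    e = energy(x_hat)
    for t in range(1, sweeps + 1):
        beta = math.log(1 + t)  # cooling schedule (an assumption for this sketch)
        for i in range(n):
            x_hat[i] ^= 1  # propose flipping bit i
            e_new = energy(x_hat)
            # Metropolis accept/reject at inverse temperature beta
            if e_new <= e or rng.random() < math.exp(-beta * n * (e_new - e)):
                e = e_new
            else:
                x_hat[i] ^= 1  # reject: undo the flip
    return x_hat, e
```

The final reconstruction can then be described losslessly (e.g., with a universal lossless code) at a rate close to its empirical conditional entropy, which is what the energy term rewards.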
The effectiveness of our approach is illustrated with experiments on binary images and English text, using a low-complexity algorithm motivated by our class of universally optimal WZ codes. Finally, we investigate the problem of multiple description (MD) coding of discrete ergodic processes. We introduce the notion of MD stationary coding and characterize its relationship to conventional block MD coding. In stationary coding, in addition to the two rate constraints normally considered in the MD problem, we consider a third rate constraint, which reflects the conditional entropy of the process generated by the third decoder given the reconstructions of the two other decoders. The relationship we establish between stationary and block MD coding enables us to devise a universal algorithm for MD coding of discrete ergodic sources, based on simulated-annealing ideas recently proven useful for the standard rate-distortion problem.
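To make the decompressor side of the WZ scheme concrete, here is a sketch of the basic two-pass sliding-window (DUDE-style) denoiser for a binary symmetric channel with crossover probability delta, under Hamming loss. This is the plain DUDE, without the side-information modification the thesis develops; the window length k and the test sequence are assumptions of the sketch.

```python
from collections import defaultdict


def binary_dude(z, delta, k=1):
    """Two-pass sliding-window denoiser for a BSC(delta), Hamming loss.

    Pass 1: count how often each symbol appears in the center of every
    two-sided context of length k on each side.
    Pass 2: flip z[i] when the empirical frequency of z[i] within its
    context falls below the threshold  2*delta*(1-delta) / (delta^2 + (1-delta)^2),
    i.e., when the observation looks more like channel noise than signal.
    """
    n = len(z)
    counts = defaultdict(lambda: [0, 0])
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        counts[ctx][z[i]] += 1

    thresh = 2 * delta * (1 - delta) / (delta ** 2 + (1 - delta) ** 2)
    out = list(z)
    for i in range(k, n - k):
        ctx = (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
        c = counts[ctx]
        if c[z[i]] / (c[0] + c[1]) <= thresh:
            out[i] = 1 - z[i]  # flip: this symbol is rare in its context
    return out
```

For example, an isolated 1 inside a long run of 0s occurs rarely in its all-zero context, so the second pass flips it back; the WZ decompressor in the thesis applies the same context-counting principle, modified to exploit the compressed description as side information.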
|2009, c2010
|Stanford University, Department of Electrical Engineering
|Cover, T. M., 1938-2012
|Submitted to the Department of Electrical Engineering.
|Thesis (Ph.D.)--Stanford University, 2010.
- © 2010 by Shirin Jalali
- This work is licensed under a Creative Commons Attribution 3.0 Unported license (CC BY).