Information Theory


Selected Abstracts

Theory for High-Rate Vector Quantization
  • J. Li, N. Chaddha, R. M. Gray, ``Asymptotic performance of vector quantizers with a perceptual distortion measure,'' IEEE Transactions on Information Theory, 45(4):1082-91, May 1999.

    Abstract: Gersho's bounds on the asymptotic performance of vector quantizers are valid for vector distortions which are powers of the Euclidean norm. Yamada, Tazaki and Gray generalized the results to distortion measures that are increasing functions of the norm of their argument. In both cases, the distortion is uniquely determined by the vector quantization error, i.e., the Euclidean difference between the original vector and the codeword into which it is quantized. We generalize these asymptotic bounds to input-weighted quadratic distortion measures and measures that are approximately output-weighted quadratic when the distortion is small, a class of distortion measures often claimed to be perceptually meaningful. An approximation of the asymptotic distortion based on Gersho's conjecture is derived as well. We also consider the problem of source mismatch, where the quantizer is designed using a probability density different from the true source density. The resulting asymptotic performance in terms of distortion increase in dB is shown to be linear in the relative entropy between the true and estimated probability densities.
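
    Note (ours, not from the paper): as orientation, the classical fixed-rate result being generalized is Zador's theorem, which for a smooth density p on R^k, N codewords, and the r-th power of the Euclidean norm as distortion gives

        \[ \lim_{N\to\infty} N^{r/k}\, D_N \;=\; b_{k,r} \Bigl( \int p(x)^{k/(k+r)}\, dx \Bigr)^{(k+r)/k}, \]

    where b_{k,r} depends only on k and r and is known exactly only in special cases; Gersho's conjecture identifies it with the normalized moment of inertia of the optimal tessellating polytope. In this notation, the mismatch result above says that designing for an estimated density q in place of the true p incurs an asymptotic distortion penalty, in dB, proportional to the relative entropy D(p || q).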

  • R. M. Gray, T. Linder, J. Li, ``A Lagrangian formulation of Zador's entropy-constrained quantization theorem,'' IEEE Transactions on Information Theory, 48(3):695-707, 2002.
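
    Note (ours, not from the paper): a hedged sketch of the formulation named in the title. The entropy-constrained design problem is posed through the unconstrained Lagrangian objective

        \[ \min_{Q} \bigl[ D(Q) + \lambda H(Q) \bigr], \qquad \lambda > 0, \]

    where D(Q) and H(Q) denote the distortion and output entropy of a quantizer Q. Zador's classical entropy-constrained asymptotic, which this formulation is used to establish, has the form

        \[ D(H) \;\approx\; \beta_{k,r}\, 2^{-(r/k)\,(H - h(p))} \quad \text{as } H \to \infty, \]

    with h(p) the differential entropy of the source density; for scalars and squared error, \beta_{1,2} = 1/12 (the classical Gish--Pierce result).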

Multiscale Stochastic Image Modeling
  • Jia Li, Robert M. Gray, Richard A. Olshen, ``Multiresolution image classification by hierarchical modeling with two-dimensional hidden Markov models,'' IEEE Transactions on Information Theory, 46(5):1826-41, August 2000.

    Abstract: This paper treats a multiresolution hidden Markov model for classifying images. Each image is represented by feature vectors at several resolutions, which are statistically dependent as modeled by the underlying state process, a multiscale Markov mesh. Unknowns in the model are estimated by maximum likelihood, in particular by employing the expectation-maximization algorithm. An image is classified by finding the optimal set of states with maximum a posteriori probability. States are then mapped into classes. The multiresolution model enables multiscale information about context to be incorporated into classification. Suboptimal algorithms based on the model provide progressive classification that is much faster than the algorithm based on single-resolution hidden Markov models.
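
    Note (ours, not from the paper): as a sketch of the dependence structure described above, with states s_{i,j} on the image grid and a raster-scan causal past, a Markov mesh assumes

        \[ P\bigl( s_{i,j} \mid \text{past states} \bigr) \;=\; P\bigl( s_{i,j} \mid s_{i-1,j},\, s_{i,j-1} \bigr), \]

    feature vectors u_{i,j} are conditionally independent given their states, and classification selects the MAP state configuration \hat{s} = \arg\max_{s} P(s \mid u), after which each state is mapped to its class label.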

