Statistical Clustering


Selected Abstracts

Clustering based on multi-layer mixture models
  • Jia Li, "Clustering based on a multi-layer mixture model," Journal of Computational and Graphical Statistics, 14(3):547-568, 2005.

    Abstract: In model-based clustering, the density of each cluster is usually assumed to be a certain basic parametric distribution, e.g., the normal distribution. In practice, it is often difficult to decide which parametric distribution is suitable to characterize a cluster, especially for multivariate data. Moreover, the densities of individual clusters may be multi-modal themselves, and therefore cannot be accurately modeled by basic parametric distributions. We explore in this paper a clustering approach that models each cluster by a mixture of normals. The resulting overall model is a multi-layer mixture of normals. Algorithms to estimate the model and perform clustering are developed based on the classification maximum likelihood (CML) and mixture maximum likelihood (MML) criteria. BIC and ICL-BIC are examined for choosing the number of normal components per cluster. Experiments on both simulated and real data are presented.
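    Illustration (not from the paper): a minimal Python sketch of the idea described above, in which each cluster is modeled by its own mixture of normals and points are reassigned in a CML-style loop. It assumes scikit-learn's GaussianMixture for the per-cluster densities, k-means for initialization, and BIC (the paper also examines ICL-BIC) for choosing the number of components per cluster; the function names are hypothetical and the sketch does not guard against clusters becoming empty.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.mixture import GaussianMixture

      def fit_cluster_density(points, max_components=3):
          # Fit a mixture of normals to one cluster, choosing the number of
          # components by BIC.
          best_gmm, best_bic = None, np.inf
          for m in range(1, min(max_components, len(points)) + 1):
              gmm = GaussianMixture(n_components=m, covariance_type="full",
                                    random_state=0).fit(points)
              bic = gmm.bic(points)
              if bic < best_bic:
                  best_gmm, best_bic = gmm, bic
          return best_gmm

      def multilayer_mixture_clustering(X, n_clusters=3, n_iter=10):
          # CML-style iteration: fit a mixture of normals to each cluster,
          # then reassign every point to the cluster whose fitted mixture
          # gives it the highest log-density.
          labels = KMeans(n_clusters=n_clusters, n_init=10,
                          random_state=0).fit_predict(X)
          for _ in range(n_iter):
              densities = [fit_cluster_density(X[labels == k])
                           for k in range(n_clusters)]
              log_dens = np.column_stack([d.score_samples(X) for d in densities])
              new_labels = log_dens.argmax(axis=1)
              if np.array_equal(new_labels, labels):
                  break
              labels = new_labels
          return labels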
Classification for data with dimension larger than sample size
  • Jia Li, Hongyuan Zha, "Two-way Poisson mixture models for simultaneous document classification and word clustering," Computational Statistics and Data Analysis, 50(1):163-180, 2006.

    Abstract: An approach to simultaneous document classification and word clustering is developed using a two-way mixture model of Poisson distributions. Each document is represented by a vector with each dimension specifying the number of occurrences of a particular word in the document. As a collection of documents across several classes usually makes use of a large number of words, the document vectors are of high dimension. On the other hand, the number of distinct words in any single document is usually substantially smaller than the size of the vocabulary, leading to sparse document vectors. A mixture of Poisson distributions is used to model the multivariate distribution of the word counts in the documents within each class. To address the issues of high dimensionality and sparsity, the parameters in the mixture model are regularized by imposing a clustering structure on the set of words. An EM-style algorithm for the two-way mixture model is derived for parameter estimation, with the clustering of words being part of the estimation process. The connection of the two-way mixture model with dimension reduction is also elucidated. Experiments on newsgroup data demonstrate promising results.
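    Illustration (a toy simplification, not the paper's EM algorithm): the sketch below ties Poisson rates within word clusters obtained from a one-shot k-means on per-class word profiles, models each class by a single tied-rate Poisson vector rather than a mixture, and classifies documents by log-likelihood. The function names and the smoothing constant are assumptions for the example only.

      import numpy as np
      from scipy.stats import poisson
      from sklearn.cluster import KMeans

      def fit_two_way_poisson(X, y, n_word_clusters=20):
          # X: (documents x vocabulary) count matrix (dense here), y: class labels.
          # Words are clustered by their per-class mean counts; within each class,
          # all words of a word cluster share one (lightly smoothed) Poisson rate.
          classes = np.unique(y)
          profiles = np.column_stack([X[y == c].mean(axis=0) for c in classes])
          word_labels = KMeans(n_clusters=n_word_clusters, n_init=10,
                               random_state=0).fit_predict(profiles)
          rates = np.empty((len(classes), X.shape[1]))
          for ci, c in enumerate(classes):
              for wc in range(n_word_clusters):
                  cols = word_labels == wc
                  rates[ci, cols] = X[np.ix_(y == c, cols)].mean() + 1e-3
          return classes, rates

      def classify_documents(X, classes, rates):
          # Assign each document to the class with the highest Poisson log-likelihood.
          loglik = np.stack([poisson.logpmf(X, r).sum(axis=1) for r in rates], axis=1)
          return classes[loglik.argmax(axis=1)]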
New metric for comparing unsupervised clustering results
  • Ding Zhou, Jia Li, Hongyuan Zha, "A new Mallows distance based metric for comparing clusterings," Proc. International Conference on Machine Learning (ICML), 8pp., Bonn, Germany, August 2005.

    Abstract: Despite the large number of algorithms developed for clustering, the study on comparing clustering results is limited. In this paper, we propose a measure for comparing clustering results to tackle two issues insufficiently addressed or even overlooked by existing methods: (a) taking into account the distance between cluster representatives when assessing the similarity of clustering results; (b) constructing a unified framework for defining a distance based on either hard or soft clustering and ensuring the triangle inequality under the definition. Our measure is derived from a complete and globally optimal matching between clusters in two clustering results. It is shown that the distance is an instance of the Mallows distance, a metric between probability distributions in statistics. As a result, the defined distance inherits desirable properties from the Mallows distance. Experiments show that our clustering distance measure successfully handles cases difficult for other measures.
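    Illustration (one reading of the construction, not the authors' implementation): each hard clustering is summarized as a discrete distribution over its cluster centroids weighted by cluster proportions, and the distance between two clusterings is the optimal-transport (Mallows/Wasserstein) cost between the two distributions, solved here as a small linear program with SciPy. The function names are hypothetical, and the paper's definition also covers soft clusterings.

      import numpy as np
      from scipy.optimize import linprog

      def clustering_to_distribution(X, labels):
          # Summarize a hard clustering as (centroids, cluster proportions).
          ks = np.unique(labels)
          centroids = np.array([X[labels == k].mean(axis=0) for k in ks])
          weights = np.array([(labels == k).mean() for k in ks])
          return centroids, weights

      def mallows_clustering_distance(X, labels_a, labels_b):
          ca, wa = clustering_to_distribution(X, labels_a)
          cb, wb = clustering_to_distribution(X, labels_b)
          # Squared Euclidean cost between cluster representatives.
          cost = ((ca[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2)
          m, n = cost.shape
          # Transportation LP: minimize sum_ij T_ij * cost_ij subject to
          # row sums = wa, column sums = wb, T >= 0.
          A_eq = np.zeros((m + n, m * n))
          for i in range(m):
              A_eq[i, i * n:(i + 1) * n] = 1.0   # row-sum constraints
          for j in range(n):
              A_eq[m + j, j::n] = 1.0            # column-sum constraints
          b_eq = np.concatenate([wa, wb])
          res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                        bounds=(0, None), method="highs")
          return np.sqrt(res.fun)                # root of optimal transport cost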

