Hierarchical Mixture Models: a Probabilistic Analysis

Posted in Science on September 02, 2008


Mixture models form one of the most widely used classes of generative models for describing structured and clustered data. In this paper we develop a new approach to the analysis of hierarchical mixture models. More specifically, using a text clustering problem as motivation, we describe a natural generative process that creates a hierarchical mixture model for the data. In this process, an adversary starts with an arbitrary base distribution and then builds a topic hierarchy via some evolutionary process, where the adversary controls the parameters of the process. We prove that under our assumptions, given a subset of topics that represent generalizations of one another (such as baseball - sports - base), for any document produced by some topic in this hierarchy, we can efficiently determine the most specialized topic in this subset to which the document still belongs. The quality of the classification is independent of the total number of topics in the hierarchy, and our algorithm does not need to know the total number of topics in advance. Our approach also yields an algorithm for clustering and unsupervised topical tree reconstruction. We validate our model by showing that properties predicted by our theoretical results carry over to real data. We then apply our clustering algorithm to two different datasets: (i) "20 newsgroups" [19] and (ii) a snapshot of abstracts from arXiv [2] (15 categories, 240,000 abstracts). In both cases our algorithm performs extremely well.
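
The abstract does not spell out the evolutionary process in detail, but the following minimal sketch illustrates the kind of hierarchical generative model it describes: each child topic's word distribution is a perturbation of its parent's, so deeper topics specialize their ancestors, and a document is a bag of words drawn from a single topic in the tree. The Dirichlet-based perturbation, the parameter names, and all constants below are illustrative assumptions, not the paper's actual construction.

```python
# Illustrative sketch of a hierarchical topic generative process (assumptions only;
# the paper's adversarial evolutionary process and its parameters are not shown here).

import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 1000        # assumed vocabulary size
BRANCHING = 3            # children per topic (assumption)
DEPTH = 3                # depth of the topic hierarchy (assumption)
CONCENTRATION = 50.0     # how closely a child topic tracks its parent (assumption)


def make_topic_tree(parent_dist, depth):
    """Recursively build a topic hierarchy.

    Each child topic's word distribution is a Dirichlet perturbation of its
    parent's, so topics deeper in the tree are specializations of their ancestors.
    """
    node = {"dist": parent_dist, "children": []}
    if depth == 0:
        return node
    for _ in range(BRANCHING):
        child_dist = rng.dirichlet(CONCENTRATION * parent_dist + 1e-6)
        node["children"].append(make_topic_tree(child_dist, depth - 1))
    return node


def sample_document(topic, doc_length=100):
    """Draw a bag-of-words document i.i.d. from a topic's word distribution."""
    return rng.multinomial(doc_length, topic["dist"])


# Arbitrary base distribution at the root (uniform here for simplicity).
root = make_topic_tree(np.full(VOCAB_SIZE, 1.0 / VOCAB_SIZE), DEPTH)

# A document generated by one grandchild topic; its chain of generalizations
# is root -> child 0 -> grandchild 0, analogous to base -> sports -> baseball.
doc = sample_document(root["children"][0]["children"][0])
print(doc.sum(), doc[:10])
```

Under a model of this shape, the classification task in the abstract amounts to deciding, for a given word-count vector, which topic along such an ancestor chain most plausibly generated it.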

Author: Mark Sandler, Google

Watch Video

Tags: Science, Lectures, Computer Science, Clustering, Machine Learning, VideoLectures.Net