Title: Latent Dirichlet Allocation
# Overview
Latent Dirichlet Allocation (Blei et al., 2003) is a powerful learning
algorithm for automatically and jointly clustering words into "topics" and
documents into mixtures of topics. It has been successfully applied to
model change in scientific fields over time (Griffiths and Steyvers, 2004;
Hall et al., 2008).
A topic model is, roughly, a hierarchical Bayesian model that associates
with each document a probability distribution over "topics", which are in
turn distributions over words. For instance, a topic in a collection of
newswire articles might include words related to "sports", such as
"baseball", "home run", and "player", and a document about steroid use in
baseball might draw on the topics "sports", "drugs", and "politics". Note
that the labels "sports", "drugs", and "politics" are post-hoc labels
assigned by a human, and that the algorithm itself only associates words
with topics by assigning them probabilities. The task of parameter
estimation in these models is to learn both what the topics are and which
documents employ them in what proportions.
Another way to view a topic model is as a generalization of a mixture model
like [Dirichlet Process Clustering](http://en.wikipedia.org/wiki/Dirichlet_process).
Starting from a normal mixture model, in which we have a single global
mixture of several distributions, we instead say that _each_ document has
its own mixture distribution over the globally shared mixture components.
Operationally, in Dirichlet Process Clustering each document has its own
latent variable drawn from a global mixture that specifies which model it
belongs to, while in LDA each word in each document has its own latent
variable drawn from a document-specific mixture over topics.

The idea is that we use a probabilistic mixture of a number of models to
explain some observed data. Each observed data point is assumed to have
come from one of the models in the mixture, but we don't know which. The
way we deal with that is to use a latent variable which specifies which
model each data point came from. A sketch of the resulting generative
process for LDA is given below.
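The sketch below walks through that generative story for a single toy
document. It is an illustration only, not Mahout code: the two topics, the
document's topic mixture, and the tiny vocabulary are made up for the
example, and in the real model theta and phi would themselves be drawn
from Dirichlet priors.

    import java.util.Random;

    // Minimal sketch of LDA's generative story (illustration only, not Mahout code).
    // The topics (phi) and the document's topic mixture (theta) are hard-coded here;
    // in the real model they are drawn from Dirichlet priors with hyperparameters
    // eta and alpha respectively.
    public class LdaGenerativeSketch {

      static int sampleDiscrete(double[] probs, Random rng) {
        double u = rng.nextDouble();
        double cumulative = 0.0;
        for (int i = 0; i < probs.length; i++) {
          cumulative += probs[i];
          if (u < cumulative) {
            return i;
          }
        }
        return probs.length - 1; // guard against rounding error
      }

      public static void main(String[] args) {
        Random rng = new Random(42);
        String[] vocab = {"baseball", "player", "steroid", "senator", "vote"};

        // Two "topics": each is a distribution over the vocabulary.
        double[][] phi = {
            {0.45, 0.45, 0.05, 0.03, 0.02},   // roughly "sports"
            {0.02, 0.03, 0.25, 0.35, 0.35}    // roughly "politics"
        };

        // One document's mixture over topics: mostly sports, some politics.
        double[] theta = {0.7, 0.3};

        // Each word position gets its own latent topic assignment z,
        // drawn from the document's mixture; the word is then drawn from that topic.
        for (int position = 0; position < 10; position++) {
          int z = sampleDiscrete(theta, rng);
          int w = sampleDiscrete(phi[z], rng);
          System.out.println("topic " + z + " -> " + vocab[w]);
        }
      }
    }

Parameter estimation runs this story in reverse: given only the emitted
words, it recovers plausible values for the topic mixtures and the topics
themselves.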
# Collapsed Variational Bayes
The Collapsed Variational Bayes (CVB) algorithm implemented in Mahout for
LDA combines advantages of both regular Variational Bayes and Gibbs
Sampling. The algorithm relies on modeling the dependence of the
parameters on the latent variables, which are in turn modeled as mutually
independent. The algorithm uses two methodologies: the first is to
marginalize out the parameters when calculating the joint distribution,
and the second is to model the posterior of theta and phi given the
inputs z and x.
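For reference, writing alpha and eta for the symmetric Dirichlet
hyperparameters on the document-topic and topic-word distributions (the
notation here is chosen for illustration, not taken from Mahout's code),
marginalizing out theta and phi leaves a joint distribution over only the
topic assignments z and the observed words x:

    p(\mathbf{z}, \mathbf{x} \mid \alpha, \eta)
      = \prod_j \frac{\Gamma(K\alpha)}{\Gamma(N_j + K\alpha)}
                \prod_k \frac{\Gamma(N_{jk} + \alpha)}{\Gamma(\alpha)}
        \times
        \prod_k \frac{\Gamma(W\eta)}{\Gamma(N_k + W\eta)}
                \prod_w \frac{\Gamma(N_{kw} + \eta)}{\Gamma(\eta)}

where N_{jk} is the number of words in document j assigned to topic k,
N_{kw} is the number of times word w is assigned to topic k, N_j is the
length of document j, N_k = \sum_w N_{kw}, K is the number of topics, and
W is the vocabulary size.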
A common approach in the CVB algorithm is to compute each expectation term
using a simple Gaussian approximation, which is accurate and requires
little computational overhead. The approximation involves computing the
sum of the means and variances of the individual Bernoulli variables that
make up each topic count.
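Concretely, under the variational posterior each topic count N is a sum of
independent Bernoulli indicators. Approximating such a count as Gaussian
and applying a second-order Taylor expansion to the expectation of
log(N + c) that appears in the exact CVB update gives (in the illustrative
notation used above):

    \mathbb{E}[\log(N + c)]
      \approx \log(\mathbb{E}[N] + c)
              - \frac{\operatorname{Var}[N]}{2\,(\mathbb{E}[N] + c)^2}

where \mathbb{E}[N] is the sum of the Bernoulli means and
\operatorname{Var}[N] is the sum of their variances q(1 - q).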
CVB with the Gaussian approximation is implemented by tracking the mean and
variance of each count and, when updating a given word's assignment,
subtracting out the mean and variance of the corresponding Bernoulli
variables. The computational cost of the algorithm scales on the order of
O(K) for each update to q(z(i,j)), where K is the number of topics. In
addition, for each document/word pair only one copy of the variational
posterior over the latent variable is required.
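As an illustration of this bookkeeping, here is a simplified sketch of the
zeroth-order ("CVB0") form of the update, which keeps only the means and
drops the variance correction; the class, method, and field names are
invented for the example and are not Mahout's actual API.

    import java.util.Arrays;

    // Simplified sketch of a CVB0-style update for one document/word pair.
    // Names are illustrative, not Mahout's actual classes.
    // docTopicCounts[k]  = expected count of topic k in this document
    // wordTopicCounts[k] = expected count of topic k for this word type (corpus-wide)
    // topicTotals[k]     = expected total count of topic k (corpus-wide)
    // gamma[k]           = current variational posterior q(z = k) for this pair
    public final class Cvb0UpdateSketch {

      static void updateOnePair(double[] gamma,
                                double[] docTopicCounts,
                                double[] wordTopicCounts,
                                double[] topicTotals,
                                double wordCount,
                                double alpha,
                                double eta,
                                int numTerms) {
        int numTopics = gamma.length;

        // 1. Remove this pair's current contribution from the expected counts.
        for (int k = 0; k < numTopics; k++) {
          docTopicCounts[k]  -= wordCount * gamma[k];
          wordTopicCounts[k] -= wordCount * gamma[k];
          topicTotals[k]     -= wordCount * gamma[k];
        }

        // 2. Recompute q(z = k) from the remaining counts; this loop is O(K).
        double norm = 0.0;
        for (int k = 0; k < numTopics; k++) {
          gamma[k] = (docTopicCounts[k] + alpha)
              * (wordTopicCounts[k] + eta)
              / (topicTotals[k] + numTerms * eta);
          norm += gamma[k];
        }
        for (int k = 0; k < numTopics; k++) {
          gamma[k] /= norm;
        }

        // 3. Add the updated contribution back in; gamma is the single copy
        //    of the posterior kept for this document/word pair.
        for (int k = 0; k < numTopics; k++) {
          docTopicCounts[k]  += wordCount * gamma[k];
          wordTopicCounts[k] += wordCount * gamma[k];
          topicTotals[k]     += wordCount * gamma[k];
        }
      }

      public static void main(String[] args) {
        // Toy example: 3 topics, a 5-term vocabulary, one word occurring twice.
        double[] gamma = {1.0 / 3, 1.0 / 3, 1.0 / 3};
        double[] docTopicCounts = {4.0, 2.0, 1.0};
        double[] wordTopicCounts = {3.0, 1.0, 1.0};
        double[] topicTotals = {50.0, 40.0, 30.0};
        updateOnePair(gamma, docTopicCounts, wordTopicCounts, topicTotals,
            2.0, 0.1, 0.1, 5);
        System.out.println(Arrays.toString(gamma));
      }
    }

Because step 2 touches each topic exactly once, the per-pair cost is O(K),
matching the scaling noted above.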
# Invocation and Usage
Mahout's implementation of LDA operates on a collection of SparseVectors of
word counts. These word counts should be non-negative integers, though
things will probably work fine if you use non-negative reals. (Note that
the probabilistic model doesn't make sense if you do!) To create these
vectors, it's recommended that you follow the instructions in
[Creating Vectors From Text](../basics/creating-vectors-from-text.html),
making sure to use TF and not TFIDF as the scorer.
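For example, assuming your documents have already been converted to
sequence files (e.g. with seqdirectory), a seq2sparse run along the
following lines (the paths here are placeholders) produces term-frequency
vectors and a dictionary file:

    bin/mahout seq2sparse \
        -i /path/to/sequence-files \
        -o /path/to/tf-vectors \
        -wt tf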
Invocation takes the form:

    bin/mahout cvb \
        -i <input path for document vectors> \
        -dict <path to term-dictionary file> \
        -o <output path for topic-term distributions>