Theory - EM
The source for most of the theory here is Machine Learning: A Probabilistic Perspective by Kevin P. Murphy [Mur12].
Let us represent all the parameters generally as \(\boldsymbol \theta\).
The Need for EM
For computing the Maximum Likelihood Estimate (MLE) or the Maximum A Posteriori (MAP) estimate, one approach is to use a generic gradient-based optimizer to find a local minimum of the Negative Log Likelihood (NLL):

\[
\mathrm{NLL}(\boldsymbol \theta) = - \sum_{n=1}^{N} \log P(\boldsymbol x_n \mid \boldsymbol \theta)
\]
However, there are constraints that need to be enforced, like:

- Covariance matrices must be positive semi-definite
- Proportions (\(\pi_k\)) must sum to one
Enforcing these during gradient-based optimization can be tricky. It can be simpler to use an iterative algorithm like Expectation Maximization (EM).
EM alternates between inferring the missing values given the current parameters (the E step) and optimizing the parameters given the “filled in” data (the M step).
EM in GMM
Initialization
We first initialize our parameters. We can use the results of K-Means Clustering as a starting point.
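As a minimal sketch of this initialization, assuming the data \(\boldsymbol X\) is an \(N \times D\) NumPy array and using scikit-learn's KMeans (the helper name init_from_kmeans and the small ridge added to each covariance are choices made here, not prescribed by the text):

```python
import numpy as np
from sklearn.cluster import KMeans

def init_from_kmeans(X, K, seed=0):
    """Initialize (pi, mu, Sigma) from a K-Means clustering of X."""
    labels = KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(X)
    pi = np.array([np.mean(labels == k) for k in range(K)])         # cluster size fractions
    mu = np.stack([X[labels == k].mean(axis=0) for k in range(K)])  # cluster centroids
    # Per-cluster sample covariances; a small ridge keeps them positive definite.
    Sigma = np.stack([np.cov(X[labels == k], rowvar=False) + 1e-6 * np.eye(X.shape[1])
                      for k in range(K)])
    return pi, mu, Sigma
```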
The E Step
Consider the auxiliary function \(Q\), the expected complete data log likelihood:

\[
Q(\boldsymbol \theta', \boldsymbol \theta) = \mathbb{E}\left[\log P(\boldsymbol X, \boldsymbol Z \mid \boldsymbol \theta') \,\middle|\, \boldsymbol X, \boldsymbol \theta\right] = \sum_{\boldsymbol Z} P(\boldsymbol Z \mid \boldsymbol X, \boldsymbol \theta) \log P(\boldsymbol X, \boldsymbol Z \mid \boldsymbol \theta')
\]
Here, \(P(\boldsymbol Z | \boldsymbol X, \boldsymbol \theta)\) is given by the responsibilities \(\gamma(z_{nk})\), as seen in Equation (6). For the complete data likelihood \(P(\boldsymbol X, \boldsymbol Z | \boldsymbol \theta')\), use Equations (3) and (4).
Since each latent vector \(\boldsymbol z_n\) is one-hot (exactly one \(z_{nk}\) equals 1), only a single term survives inside each inner summation, and our final auxiliary function becomes:

\[
Q(\boldsymbol \theta', \boldsymbol \theta) = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma(z_{nk}) \left[ \log \pi'_k + \log \mathcal{N}(\boldsymbol x_n \mid \boldsymbol \mu'_k, \boldsymbol \Sigma'_k) \right]
\]
The E step is about computing the missing values. In practice, however, we only need to compute the responsibilities \(\gamma(z_{nk})\) in the E step; the auxiliary function in its full form is only needed to derive the parameter updates in the M step.
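As a minimal sketch of the E step (the helper name e_step is hypothetical; it assumes the same \((N, D)\) array \(\boldsymbol X\) as above and uses SciPy for the Gaussian log densities), the responsibilities can be computed in log space:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def e_step(X, pi, mu, Sigma):
    """Return gamma, where gamma[n, k] = P(z_nk = 1 | x_n, theta)."""
    N, K = X.shape[0], len(pi)
    log_gamma = np.empty((N, K))
    for k in range(K):
        # Unnormalized log responsibility: log pi_k + log N(x_n | mu_k, Sigma_k)
        log_gamma[:, k] = np.log(pi[k]) + multivariate_normal.logpdf(X, mean=mu[k], cov=Sigma[k])
    # Normalize each row via log-sum-exp so the K responsibilities sum to one.
    log_gamma -= logsumexp(log_gamma, axis=1, keepdims=True)
    return np.exp(log_gamma)
```

Working in log space avoids underflow when the densities \(\mathcal{N}(\boldsymbol x_n \mid \boldsymbol \mu_k, \boldsymbol \Sigma_k)\) are tiny.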
The M Step
The M step is about using the “filled in” data from the E step and the existing parameters \(\boldsymbol \theta\) to compute revised parameters \(\boldsymbol \theta^*\) such that:

\[
\boldsymbol \theta^* = \operatorname*{arg\,max}_{\boldsymbol \theta'} Q(\boldsymbol \theta', \boldsymbol \theta)
\]
For \(\boldsymbol \pi\), maximizing \(Q\) subject to the constraint \(\sum_k \pi'_k = 1\) (via a Lagrange multiplier) gives:

\[
\pi^*_k = \frac{1}{N} \sum_{n=1}^{N} \gamma(z_{nk})
\]
For the revised means and covariances, we compute the partial derivatives of \(Q\) with respect to these parameters and set them to zero. One can show that the new parameter estimates are given by:

\[
\boldsymbol \mu^*_k = \frac{\sum_{n=1}^{N} \gamma(z_{nk})\, \boldsymbol x_n}{\sum_{n=1}^{N} \gamma(z_{nk})}, \qquad
\boldsymbol \Sigma^*_k = \frac{\sum_{n=1}^{N} \gamma(z_{nk})\, (\boldsymbol x_n - \boldsymbol \mu^*_k)(\boldsymbol x_n - \boldsymbol \mu^*_k)^T}{\sum_{n=1}^{N} \gamma(z_{nk})}
\]
Computing these revised parameters constitutes the M step.
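Continuing the same hypothetical sketch (reusing e_step and init_from_kmeans from above), the M step implements exactly these update equations, and the full algorithm simply alternates the two steps:

```python
import numpy as np

def m_step(X, gamma):
    """Revised (pi, mu, Sigma) from responsibilities, per the equations above."""
    N, D = X.shape
    Nk = gamma.sum(axis=0)                       # effective number of points per component
    pi = Nk / N                                  # pi*_k = (1/N) sum_n gamma(z_nk)
    mu = (gamma.T @ X) / Nk[:, None]             # weighted means, shape (K, D)
    Sigma = np.empty((len(Nk), D, D))
    for k in range(len(Nk)):
        diff = X - mu[k]
        Sigma[k] = (gamma[:, k, None] * diff).T @ diff / Nk[k]   # weighted covariance
    return pi, mu, Sigma

def fit_gmm(X, K, n_iter=100):
    """Alternate E and M steps from a K-Means initialization."""
    pi, mu, Sigma = init_from_kmeans(X, K)       # from the Initialization sketch
    for _ in range(n_iter):
        gamma = e_step(X, pi, mu, Sigma)         # E step: fill in the latent assignments
        pi, mu, Sigma = m_step(X, gamma)         # M step: re-optimize the parameters
    return pi, mu, Sigma
```

A practical implementation would also track the log likelihood to decide convergence and guard against degenerate (collapsing) components; both are omitted here for brevity.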
References
- [Mur12] Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012. URL: http://probml.ai/.