dc.contributor.author | Jordan, Michael | en_US |
dc.contributor.author | Xu, Lei | en_US |
dc.date.accessioned | 2004-10-20T20:49:25Z | |
dc.date.available | 2004-10-20T20:49:25Z | |
dc.date.issued | 1995-04-21 | en_US |
dc.identifier.other | AIM-1520 | en_US |
dc.identifier.other | CBCL-111 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/7195 | |
dc.description.abstract | We build up the mathematical connection between the "Expectation-Maximization" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models. | en_US |
dc.format.extent | 9 p. | en_US |
dc.format.extent | 291671 bytes | |
dc.format.extent | 476864 bytes | |
dc.format.mimetype | application/postscript | |
dc.format.mimetype | application/pdf | |
dc.language.iso | en_US | |
dc.relation.ispartofseries | AIM-1520 | en_US |
dc.relation.ispartofseries | CBCL-111 | en_US |
dc.subject | learning | en_US |
dc.subject | neural networks | en_US |
dc.subject | EM algorithm | en_US |
dc.subject | clustering | en_US |
dc.subject | mixture models | en_US |
dc.subject | statistics | en_US |
dc.title | On Convergence Properties of the EM Algorithm for Gaussian Mixtures | en_US |
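
The abstract's central claim (the EM step in parameter space equals the log-likelihood gradient premultiplied by a projection matrix $P$) can be checked numerically for one block of $P$. The sketch below is an editorial illustration appended to this record, not part of the archived memo: for the mixing proportions $\pi$ of a Gaussian mixture, a projection matrix of the form $P = (1/n)(\mathrm{diag}(\pi) - \pi\pi^T)$ turns the gradient step into exactly the standard EM update, consistent with the abstract's statement. All variable names (dens, resp, pi_em, pi_pg) are ours, and the fixed means and covariances are arbitrary test values.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    n, K, d = 500, 3, 2

    # Arbitrary fixed component densities; only the mixing proportions
    # pi are updated in this check.
    means = rng.normal(size=(K, d))
    covs = [np.eye(d) for _ in range(K)]
    X = rng.normal(size=(n, d))
    pi = np.array([0.5, 0.3, 0.2])

    # Component likelihoods N(x_i; mu_j, Sigma_j), shape (n, K).
    dens = np.column_stack(
        [multivariate_normal(means[j], covs[j]).pdf(X) for j in range(K)]
    )

    mix = dens @ pi                   # marginal likelihood of each point
    resp = dens * pi / mix[:, None]   # posterior responsibilities h_ij

    # Standard EM update for the mixing proportions: pi_j = (1/n) sum_i h_ij.
    pi_em = resp.mean(axis=0)

    # Gradient of the log-likelihood w.r.t. pi, and the projection matrix
    # for the mixing-proportion block (an assumption of this sketch).
    grad = (dens / mix[:, None]).sum(axis=0)
    P = (np.diag(pi) - np.outer(pi, pi)) / n

    # The projected gradient step reproduces the EM step exactly.
    pi_pg = pi + P @ grad
    print(np.allclose(pi_em, pi_pg))  # True

Note that $P$ here is positive semidefinite on the simplex, so the EM step for $\pi$ is an ascent direction for the likelihood, which is the kind of structural property of $P$ the memo analyzes.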