Hierarchical Mixtures of Experts and the EM Algorithm
dc.contributor.author | Jordan, Michael I. | en_US |
dc.contributor.author | Jacobs, Robert A. | en_US |
dc.date.accessioned | 2004-10-20T20:49:48Z | |
dc.date.available | 2004-10-20T20:49:48Z | |
dc.date.issued | 1993-08-01 | en_US |
dc.identifier.other | AIM-1440 | en_US |
dc.identifier.other | CBCL-083 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/7206 | |
dc.description.abstract | We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. | en_US |
dc.format.extent | 29 p. | en_US |
dc.format.extent | 190144 bytes | |
dc.format.extent | 678911 bytes | |
dc.format.mimetype | application/octet-stream | |
dc.format.mimetype | application/pdf | |
dc.language.iso | en_US | |
dc.relation.ispartofseries | AIM-1440 | en_US |
dc.relation.ispartofseries | CBCL-083 | en_US |
dc.subject | supervised learning | en_US |
dc.subject | statistics | en_US |
dc.subject | decision trees | en_US |
dc.subject | neural networks | en_US |
dc.title | Hierarchical Mixtures of Experts and the EM Algorithm | en_US |
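The abstract above describes fitting a mixture-of-experts architecture by EM, with a softmax gating network and generalized-linear experts. Below is a minimal single-level sketch in numpy under the assumption of linear-Gaussian experts for scalar regression; the function name fit_moe, the learning rate, and the synthetic task are illustrative choices, not the paper's implementation, and the gate update uses a plain gradient step where the paper's M-step would run an IRLS inner loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Row-wise softmax with the usual max-shift for numerical stability.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_moe(X, y, n_experts=2, n_iters=50, gate_lr=0.5):
    # EM for a single-level mixture of experts: linear-Gaussian experts,
    # softmax (multinomial GLIM) gate. Hypothetical illustration only.
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(n_experts, d))  # expert regression weights
    V = np.zeros((n_experts, d))                    # gating-network weights
    sigma2 = np.ones(n_experts)                     # per-expert noise variance
    for _ in range(n_iters):
        # E-step: posterior responsibility h[i, j] of expert j for point i.
        g = softmax(X @ V.T)                        # prior gate probabilities
        mu = X @ W.T                                # expert means, shape (n, k)
        lik = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2) \
              / np.sqrt(2.0 * np.pi * sigma2)
        h = g * lik
        h /= h.sum(axis=1, keepdims=True)
        # M-step, experts: responsibility-weighted least squares per expert.
        for j in range(n_experts):
            Xh = X * h[:, j:j + 1]
            W[j] = np.linalg.solve(Xh.T @ X + 1e-8 * np.eye(d), Xh.T @ y)
            r = y - X @ W[j]
            sigma2[j] = (h[:, j] * r ** 2).sum() / h[:, j].sum()
        # M-step, gate: one gradient step on the responsibility-weighted
        # multinomial log-likelihood (a stand-in for the paper's IRLS fit).
        g = softmax(X @ V.T)
        V += gate_lr * (h - g).T @ X / n
    return W, V, sigma2

# Illustrative usage on synthetic data: two linear regimes split at x = 0.
x = rng.uniform(-2.0, 2.0, size=200)
X = np.column_stack([x, np.ones_like(x)])           # input plus bias term
y = np.where(x < 0, -1.5 * x, 2.0 * x) + 0.1 * rng.normal(size=200)
W, V, sigma2 = fit_moe(X, y)
```

The on-line algorithm mentioned in the abstract would replace the batch M-step with incremental per-example updates of W and V; the sketch above covers only the batch EM case, and a full hierarchy would nest gates recursively rather than using the single level shown here.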