Probabilistic Independence Networks for Hidden Markov Probability Models
Author(s)
Smyth, Padhraic; Heckerman, David; Jordan, Michael
Abstract
Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach.
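As a concrete illustration of the forward-backward recursions the abstract refers to, the following is a minimal sketch for a discrete HMM, using scaled messages to avoid numerical underflow. The function name, parameter layout, and toy numbers are illustrative assumptions for this sketch, not details taken from the paper.

    # A minimal sketch of the forward-backward (F-B) algorithm for a
    # discrete HMM. All names and toy parameters here are assumptions
    # for illustration, not taken from the paper itself.
    import numpy as np

    def forward_backward(obs, init, trans, emit):
        """Posterior state marginals P(state_t | all observations).

        obs   : sequence of observation indices, length T
        init  : initial state distribution, shape (S,)
        trans : trans[i, j] = P(state j at t+1 | state i at t), shape (S, S)
        emit  : emit[i, k] = P(observation k | state i), shape (S, K)
        """
        T, S = len(obs), len(init)
        alpha = np.zeros((T, S))  # scaled forward messages
        beta = np.zeros((T, S))   # scaled backward messages
        scale = np.zeros(T)       # per-step normalizers

        # Forward pass: recursively fold in transitions and emissions.
        alpha[0] = init * emit[:, obs[0]]
        scale[0] = alpha[0].sum()
        alpha[0] /= scale[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
            scale[t] = alpha[t].sum()
            alpha[t] /= scale[t]

        # Backward pass: propagate evidence from future observations.
        beta[T - 1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1])
            beta[t] /= scale[t + 1]

        gamma = alpha * beta
        return gamma / gamma.sum(axis=1, keepdims=True)

    # Toy usage: two hidden states, two observation symbols.
    init = np.array([0.6, 0.4])
    trans = np.array([[0.7, 0.3], [0.4, 0.6]])
    emit = np.array([[0.9, 0.1], [0.2, 0.8]])
    print(forward_backward([0, 0, 1, 0], init, trans, emit))

Viewed through the PIN framework of the paper, these two passes are the two directions of message passing on the chain-structured graph of an HMM, which is why the F-B algorithm arises as a special case of general PIN inference.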
Date issued
1996-03-13
Other identifiers
AIM-1565
CBCL-132
Series/Report no.
AIM-1565
CBCL-132
Keywords
AI, MIT, Artificial Intelligence, graphical models, Hidden Markov models, HMMs, learning, probabilistic models, speech recognition, Bayesian networks, belief networks, Markov networks, probabilistic propagation, inference, coarticulation