An HMM is a stochastic finite automaton, where each state generates an observation. The observation can be a discrete symbol or a feature vector. The parameters of the model are the initial state distribution, the transition model, and the observation model. An HMM can be represented by a state transition diagram or by a Bayesian network.
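As a concrete illustration, here is a minimal sketch (Python with NumPy; the state and observation labels are invented for the example) of the three parameter sets for a discrete-observation HMM:

```python
import numpy as np

# Hidden states: 0 = "rainy", 1 = "sunny" (hypothetical labels)
# Observations: 0 = "walk", 1 = "shop", 2 = "clean"

pi = np.array([0.6, 0.4])        # initial state distribution p(S_1)

T = np.array([[0.7, 0.3],        # transition model t(s'|s): row s, column s'
              [0.4, 0.6]])

O = np.array([[0.1, 0.4, 0.5],   # observation model p(x|s): row s, column x
              [0.6, 0.3, 0.1]])

# Each row is a probability distribution, so rows must sum to 1.
assert np.allclose(pi.sum(), 1)
assert np.allclose(T.sum(axis=1), 1)
assert np.allclose(O.sum(axis=1), 1)
```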
Any HHMM can be converted to an HMM. The resulting state-space may be smaller, because it no longer contains the abstract states; this happens when there are few shared sub-structures. Conversely, the state-space may be larger when there are shared sub-structures, because each shared sub-model must be duplicated in the flat HMM. Whether the state-space grows or shrinks depends on the ratio between the number of hidden states \(n_h^S\) in the HHMM and the number of hidden states \(n^S\) in the HMM.
In general, a dynamic Bayesian network (DBN) can be converted to an HMM if all of its hidden nodes are discrete. In that case, using the HMM inference engine can be faster than using the junction tree inference engine for small models, because the HMM algorithm has lower constant factors, but it can be exponentially slower for models with many hidden variables (e.g., more than 6 binary hidden nodes), since the flat state-space grows exponentially.
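To see where the exponential factor comes from, consider a hypothetical DBN whose hidden layer consists of \(k\) independent binary chains (a factorial-HMM-like structure, chosen here only for illustration). Flattening it into a single HMM yields \(2^k\) joint states, and for independent chains the joint transition matrix is the Kronecker product of the per-chain matrices; a sketch:

```python
import numpy as np
from functools import reduce

# Hypothetical 2x2 transition matrices for k independent binary hidden nodes.
k = 6
chains = [np.array([[0.9, 0.1],
                    [0.2, 0.8]]) for _ in range(k)]

# Flattened HMM transition matrix over the joint state space:
# for independent chains it is the Kronecker product of the factors.
T_flat = reduce(np.kron, chains)

print(T_flat.shape)  # (64, 64): 2**6 joint states, hence the exponential blow-up
```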
HMM parameter learning: EM algorithm
Given \(n\) training sequences \(x_{i,1}, \ldots, x_{i,m}\), each of length \(m\), the M-step re-estimate of the transition model is
\begin{align}
t(s'|s) = \frac{\sum_{i = 1}^{n}\sum_{j=1}^{m-1} p(S_j = s, S_{j+1} = s' \mid x_{i, 1}\ldots x_{i, m};\underline{\theta})}{\sum_{i = 1}^{n}\sum_{j=1}^{m-1} \sum_{s''} p(S_j = s, S_{j+1} = s'' \mid x_{i, 1}\ldots x_{i, m};\underline{\theta})}
\end{align}
$$\sum_{s'} t(s'|s) = 1$$
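A minimal sketch of this M-step update (Python/NumPy), assuming the pairwise posteriors \(p(S_j = s, S_{j+1} = s' \mid x_{i,1}\ldots x_{i,m};\underline{\theta})\) have already been computed by forward-backward in the E-step and stacked into an array `xi` (a hypothetical name for this example):

```python
import numpy as np

def update_transitions(xi):
    """M-step re-estimate of t(s'|s).

    xi[i, j, s, s_next] = p(S_j = s, S_{j+1} = s_next | x_{i,1:m}; theta),
    shape (n, m - 1, K, K) for n sequences of length m and K hidden states.
    """
    # Numerator: expected number of s -> s' transitions,
    # summed over sequences i and positions j.
    num = xi.sum(axis=(0, 1))               # shape (K, K)
    # Denominator: expected number of visits to s
    # (i.e., the numerator summed over s').
    denom = num.sum(axis=1, keepdims=True)  # shape (K, 1)
    T = num / denom
    assert np.allclose(T.sum(axis=1), 1)    # each row t(.|s) sums to 1
    return T
```

Dividing by the sum over \(s''\) is exactly what enforces the normalization constraint above.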