Discrete Dynamics in Nature and Society


Research Article | Open Access

Volume 2014 | Article ID 231704 | https://doi.org/10.1155/2014/231704

Fei Ye, Yifei Wang, "A Novel Method for Decoding Any High-Order Hidden Markov Model", Discrete Dynamics in Nature and Society, vol. 2014, Article ID 231704, 6 pages, 2014. https://doi.org/10.1155/2014/231704

A Novel Method for Decoding Any High-Order Hidden Markov Model

Academic Editor: Weiming Xiang
Received: 09 Aug 2014
Revised: 19 Oct 2014
Accepted: 11 Nov 2014
Published: 23 Nov 2014

Abstract

This paper proposes a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar's transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model. This method provides a unified algorithmic framework for decoding hidden Markov models, including the first-order hidden Markov model and any high-order hidden Markov model.

1. Introduction

Hidden Markov models are powerful tools for modeling and analyzing sequential data. For several decades, hidden Markov models have been used in many fields including handwriting recognition [1–3], speech recognition [4, 5], computational biology [6, 7], and longitudinal data analysis [8, 9]. Past and current developments on hidden Markov models are well documented in [10, 11]. A hidden Markov model comprises an underlying Markov chain and an observed process, where the observed process is a probabilistic function of the underlying Markov chain [12]. Given a hidden Markov model, an efficient procedure for finding the optimal state sequence is of great interest in real-world applications. In the traditional first-order hidden Markov model, the Viterbi algorithm is used to recognize the optimal state sequence [13]. Like the Kalman filter, the Viterbi algorithm tracks the optimal state sequence recursively.
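
For reference, the following is a minimal sketch of the standard first-order Viterbi recursion in log space (a NumPy-based illustration; the function and variable names are ours, not the paper's):

    import numpy as np

    def viterbi(pi, A, B, obs):
        """Most likely state path for a first-order HMM (log-space recursion).

        pi  : (N,)    initial state probabilities
        A   : (N, N)  transition probabilities, A[i, j] = P(next state j | state i)
        B   : (N, M)  emission probabilities,   B[i, k] = P(symbol k | state i)
        obs : (T,)    observation indices in {0, ..., M-1}
        """
        N, T = len(pi), len(obs)
        with np.errstate(divide="ignore"):            # log(0) = -inf is acceptable here
            log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
        delta = np.empty((T, N))                      # best log-probability ending in each state
        psi = np.zeros((T, N), dtype=int)             # back-pointers
        delta[0] = log_pi + log_B[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_A    # scores[i, j]: best path ending in i, then i -> j
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
        path = np.empty(T, dtype=int)                 # backtrack the optimal path
        path[-1] = delta[-1].argmax()
        for t in range(T - 2, -1, -1):
            path[t] = psi[t + 1, path[t + 1]]
        return path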

In recent years, the theory and applications of high-order hidden Markov models have been substantially advanced, and high-order hidden Markov models are known to be more powerful than the first-order hidden Markov model. There are two basic approaches to the algorithms of high-order hidden Markov models. The first, called the extended approach, directly extends the existing algorithms of the first-order hidden Markov model to high-order hidden Markov models [14–16]. The second, called the model reduction method, transforms a high-order hidden Markov model into an equivalent first-order hidden Markov model and then establishes the algorithms of the high-order hidden Markov model using standard techniques applicable to the first-order hidden Markov model [17–20].

In this paper, we propose a novel method for decoding any high-order hidden Markov model. First, the high-order hidden Markov model is transformed into an equivalent first-order hidden Markov model by Hadar’s transformation. Next, the optimal state sequence of the equivalent first-order hidden Markov model is recognized by the existing Viterbi algorithm of the first-order hidden Markov model. Finally, the optimal state sequence of the high-order hidden Markov model is inferred from the optimal state sequence of the equivalent first-order hidden Markov model.

2. High-Order Hidden Markov Model and Hadar’s Transformation

Initially suppose two processes $\{X_t\}$ and $\{Y_t\}$ are defined on some probability space $(\Omega, \mathcal{F}, P)$, where $t \ge 1$ is an integer index. $X_t$ takes values in a finite state set $S$ with $|S| = N$, and $Y_t$ takes values in a finite observation set $V = \{v_1, v_2, \ldots, v_M\}$. Without loss of generality, the elements of $S$ can be denoted by $0, 1, \ldots, N-1$. A high-order hidden Markov model is defined as follows.

Definition 1 (see [18, 20]). A high-order hidden Markov model is a doubly stochastic process with an underlying state process $\{X_t\}$ that is not directly observable but can be observed only through another stochastic process $\{Y_t\}$, called the observation process. The observation process is governed by the hidden state process and produces the observation sequence. The state process and the observation process, respectively, satisfy the following conditions.
(a) The hidden state process $\{X_t\}$ is a homogeneous Markov chain of order $n$; that is, it is a stochastic process that satisfies, for all $t > n$,
$$P(X_t = x_t \mid X_{t-1} = x_{t-1}, \ldots, X_1 = x_1) = P(X_t = x_t \mid X_{t-1} = x_{t-1}, \ldots, X_{t-n} = x_{t-n}).$$
(b) The observation process $\{Y_t\}$ is governed by the hidden state process according to a set of probability distributions that satisfy
$$P(Y_t = y_t \mid X_1 = x_1, \ldots, X_t = x_t, Y_1 = y_1, \ldots, Y_{t-1} = y_{t-1}) = P(Y_t = y_t \mid X_t = x_t).$$

To model the high-order hidden Markov model, the following parameters are needed.
(1) State transition probability distribution $A = \{a(j \mid i_1, i_2, \ldots, i_n)\}$, where $a(j \mid i_1, \ldots, i_n) = P(X_{t+1} = j \mid X_{t-n+1} = i_1, \ldots, X_t = i_n)$.
(2) Symbol emission probability distribution $B = \{b_j(v_k)\}$, where $b_j(v_k) = P(Y_t = v_k \mid X_t = j)$.
(3) Initial state probability distribution $\pi = \{\pi(i_1, \ldots, i_n)\}$, where $\pi(i_1, \ldots, i_n) = P(X_1 = i_1, \ldots, X_n = i_n)$,
where $i_1, \ldots, i_n, j \in S$ and $v_k \in V$. For convenience, we use the compact notation $\lambda = (A, B, \pi)$ to indicate the complete parameters of the high-order hidden Markov model.
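
For orientation, one plausible way to store these parameters in code, assuming states are coded as integers 0, ..., N-1 and each length-n state history is encoded as a single index (the array names, shapes, and random example values are our convention, not the paper's):

    import numpy as np

    N, M, n = 3, 4, 2          # number of states, number of symbols, model order (example values)

    rng = np.random.default_rng(0)
    # State transition probabilities: one row per encoded length-n state history,
    # each row a distribution over the next state.
    A_high = rng.random((N ** n, N))
    A_high /= A_high.sum(axis=1, keepdims=True)
    # Symbol emission probabilities: one row per state.
    B = rng.random((N, M))
    B /= B.sum(axis=1, keepdims=True)
    # Initial distribution over the first n states jointly (one encoded index per history).
    pi_high = rng.random(N ** n)
    pi_high /= pi_high.sum()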

Definition 2 (see [18]). Let $\varphi$ be the mapping of any base-$N$ number to its decimal value; that is, if $x = (x_1, x_2, \ldots, x_n)$ with $x_i \in \{0, 1, \ldots, N-1\}$, then
$$\varphi(x) = \sum_{i=1}^{n} x_i N^{n-i}.$$

Definition 3 (see [17]). Any two models $\lambda_1$ and $\lambda_2$ are defined as equivalent if $P(Y_1 = y_1, \ldots, Y_T = y_T \mid \lambda_1) = P(Y_1 = y_1, \ldots, Y_T = y_T \mid \lambda_2)$ for any arbitrary observation sequence $y_1, y_2, \ldots, y_T$. In other words, two models are considered equivalent only if they yield the same likelihood regardless of the specific observation sequence.

Based on Definition 2, we set $\bar{X}_t = \varphi(X_{t-n+1}, X_{t-n+2}, \ldots, X_t)$ for $t \ge n$. Since $X_{t-n+1}, X_{t-n+2}, \ldots, X_t$ take values in the set $S = \{0, 1, \ldots, N-1\}$, $\bar{X}_t$ takes values in the set $\bar{S} = \{0, 1, \ldots, N^n - 1\}$. Moreover, it is easy to see that the inverse transformation can be implemented as follows:
$$X_{t-n+i} = \lfloor \bar{X}_t / N^{n-i} \rfloor \bmod N, \quad i = 1, 2, \ldots, n.$$

Remark 4. The function $\varphi$ is a one-to-one correspondence between the set $S^n$ and the set $\bar{S}$.
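
As a concrete illustration of Definition 2 and Remark 4, the mapping and its inverse can be sketched as follows (the function names are ours; states are assumed to be coded as base-N digits 0, ..., N-1):

    def encode(states, N):
        """Definition 2: map a tuple (x_1, ..., x_n) of base-N digits to its decimal value."""
        value = 0
        for x in states:
            value = value * N + x
        return value

    def decode(value, N, n):
        """Inverse mapping: recover the n-tuple of base-N digits from its decimal value."""
        digits = []
        for _ in range(n):
            digits.append(value % N)
            value //= N
        return tuple(reversed(digits))

    # The two functions are mutually inverse, i.e. the mapping is one-to-one (Remark 4):
    assert decode(encode((2, 0, 1), N=3), N=3, n=3) == (2, 0, 1)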

Proposition 5 (see [18]). Let $\bar{X}_t = \varphi(X_{t-n+1}, \ldots, X_t)$ for any $t \ge n$, and let $u, v \in \bar{S}$. If $\lfloor v / N \rfloor \ne u \bmod N^{n-1}$, then $P(\bar{X}_{t+1} = v \mid \bar{X}_t = u) = 0$; that is, a transition from $u$ to $v$ is impossible.

Lemma 6. Let $\bar{X}_t = \varphi(X_{t-n+1}, \ldots, X_t)$ for $t \ge n$; then the process $\{\bar{X}_t\}_{t \ge n}$ forms a first-order homogeneous Markov chain.

Proof. Without loss of generality, we may assume that $\bar{X}_t = u = \varphi(x_{t-n+1}, \ldots, x_t)$ and $\bar{X}_{t+1} = v = \varphi(x_{t-n+2}, \ldots, x_{t+1})$, where $x_{t-n+1}, \ldots, x_{t+1} \in S$.
First, we consider the case that the transition from $u$ to $v$ is impossible in the sense of Proposition 5. By Proposition 5, it is easy to see that both $P(\bar{X}_{t+1} = v \mid \bar{X}_t = u, \bar{X}_{t-1}, \ldots, \bar{X}_n)$ and $P(\bar{X}_{t+1} = v \mid \bar{X}_t = u)$ equal zero, so they coincide.
Next, we consider the case that the transition from $u$ to $v$ is possible. Since $\varphi$ is one-to-one (Remark 4), conditioning on $\bar{X}_n, \ldots, \bar{X}_t$ is the same as conditioning on $X_1, \ldots, X_t$. Moreover, by the order-$n$ Markov property in Definition 1(a), we have
$$P(\bar{X}_{t+1} = v \mid \bar{X}_t = u, \bar{X}_{t-1}, \ldots, \bar{X}_n) = P(X_{t+1} = x_{t+1} \mid X_t = x_t, \ldots, X_1 = x_1) = P(X_{t+1} = x_{t+1} \mid X_t = x_t, \ldots, X_{t-n+1} = x_{t-n+1}) = P(\bar{X}_{t+1} = v \mid \bar{X}_t = u).$$
Through the above analysis, we derive that the conditional distribution of $\bar{X}_{t+1}$ depends only on $\bar{X}_t$.
Analogously, it is easy to see that this conditional distribution does not depend on $t$, since the chain $\{X_t\}$ is homogeneous.
Therefore, the process $\{\bar{X}_t\}_{t \ge n}$ forms a first-order homogeneous Markov chain.

Lemma 7. The two processes $\{\bar{X}_t\}$ and $\{Y_t\}$ form a first-order hidden Markov model.

Proof. Without loss of generality, we may assume that $\bar{X}_t = u = \varphi(x_{t-n+1}, \ldots, x_t)$ and $Y_t = y_t$, where $x_{t-n+1}, \ldots, x_t \in S$ and $y_t \in V$. Since $\varphi$ is one-to-one, the state $X_t = x_t$ is recovered from $\bar{X}_t = u$ as its most recent component, and conditioning on $\bar{X}_n, \ldots, \bar{X}_t$ is the same as conditioning on $X_1, \ldots, X_t$. Moreover, by Definition 1(b), we have
$$P(Y_t = y_t \mid \bar{X}_n, \ldots, \bar{X}_t = u, Y_n, \ldots, Y_{t-1}) = P(Y_t = y_t \mid X_t = x_t) = P(Y_t = y_t \mid \bar{X}_t = u).$$
Analogously, it is easy to see that this emission distribution does not depend on $t$.
Combining these with Lemma 6, we prove that the two processes $\{\bar{X}_t\}$ and $\{Y_t\}$ form a first-order hidden Markov model.

Remark 8. Hadar and Messer [18] also mentioned that the two processes $\{\bar{X}_t\}$ and $\{Y_t\}$ form a first-order hidden Markov model, but they did not discuss or prove this fact in detail.

To model the first-order hidden Markov model, the following parameters are needed.
(1) State transition probability distribution $\bar{A} = \{\bar{a}_{uv}\}$, where $\bar{a}_{uv} = P(\bar{X}_{t+1} = v \mid \bar{X}_t = u)$.
(2) Symbol emission probability distribution $\bar{B} = \{\bar{b}_u(v_k)\}$, where $\bar{b}_u(v_k) = P(Y_t = v_k \mid \bar{X}_t = u)$.
(3) Initial state probability distribution $\bar{\pi} = \{\bar{\pi}_u\}$, where $\bar{\pi}_u = P(\bar{X}_n = u)$,
where $u, v \in \bar{S}$ and $v_k \in V$. For convenience, we use the compact notation $\bar{\lambda} = (\bar{A}, \bar{B}, \bar{\pi})$ to indicate the complete parameters of the first-order hidden Markov model.
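
The following sketch illustrates how the first-order parameters might be assembled from the high-order ones in the spirit of Hadar's transformation, reusing the arrays A_high, B, and pi_high from the earlier sketch. The zero pattern of the composite transition matrix reflects Proposition 5, and the emission of a composite state depends only on its most recent component (the indexing conventions are ours, not the paper's):

    import numpy as np

    def hadar_transform(A_high, B, pi_high, N, n):
        """Assemble first-order parameters for the composite-state model.

        A_high  : (N**n, N)  A_high[u, j] = P(next state j | encoded history u)
        B       : (N, M)     emission probabilities of the high-order model
        pi_high : (N**n,)    joint distribution of the first n states (encoded)
        """
        S = N ** n
        A_bar = np.zeros((S, S))
        for u in range(S):
            for j in range(N):
                # The composite state advances by dropping its oldest component and
                # appending the new state j; every other transition has probability
                # zero, which is exactly the constraint of Proposition 5.
                v = (u % N ** (n - 1)) * N + j
                A_bar[u, v] = A_high[u, j]
        # The emission of a composite state depends only on its most recent
        # component, i.e. on its least significant base-N digit.
        B_bar = B[np.arange(S) % N, :]
        pi_bar = pi_high.copy()
        return pi_bar, A_bar, B_bar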

Proposition 9 (see [18]). Let $u = \varphi(x_{t-n+1}, \ldots, x_t)$ and $v = \varphi(x_{t-n+2}, \ldots, x_{t+1})$ for any $t \ge n$; then
$$\bar{a}_{uv} = a(x_{t+1} \mid x_{t-n+1}, \ldots, x_t), \quad \bar{b}_u(v_k) = b_{x_t}(v_k), \quad \bar{\pi}_{\varphi(x_1, \ldots, x_n)} = \pi(x_1, \ldots, x_n).$$

Lemma 10. Let $O = (y_1, y_2, \ldots, y_T)$ be any arbitrary observation sequence; then $P(O \mid \lambda) = P(O \mid \bar{\lambda})$. That is, the high-order hidden Markov model $\lambda$ is equivalent to the first-order hidden Markov model $\bar{\lambda}$.

Proof. For any state sequence $x_1, x_2, \ldots, x_T$, let $\bar{x}_t = \varphi(x_{t-n+1}, \ldots, x_t)$ for $n \le t \le T$.
By Proposition 9, the joint probability of the state sequence and the observation sequence is the same under $\lambda$ and $\bar{\lambda}$. Since the correspondence between high-order state sequences and composite state sequences is one-to-one (Remark 4), summing over all state sequences yields $P(O \mid \lambda) = P(O \mid \bar{\lambda})$.

Remark 11. Hadar and Messer [18] also mentioned that the high-order hidden Markov model $\lambda$ is equivalent to the first-order hidden Markov model $\bar{\lambda}$, but they did not discuss or prove this fact in detail.

3. Methodology

Theorem 12. Let $O = (y_1, \ldots, y_T)$ be any given observation sequence, and assume that $\bar{x}_t = \varphi(x_{t-n+1}, \ldots, x_t)$ for $n \le t \le T$; then
$$P(X_1 = x_1, \ldots, X_T = x_T, O \mid \lambda) = P(\bar{X}_n = \bar{x}_n, \ldots, \bar{X}_T = \bar{x}_T, O \mid \bar{\lambda}).$$

Proof. Without loss of generality, let $\bar{x}_t = \varphi(x_{t-n+1}, \ldots, x_t)$, where $n \le t \le T$. According to Proposition 9, every transition and emission probability appearing in the joint probability under $\bar{\lambda}$ equals the corresponding probability under $\lambda$, so the two joint probabilities are products of identical factors.
On the other hand, it is easy to see from Remark 4 that the state sequence $x_1, \ldots, x_T$ and the composite state sequence $\bar{x}_n, \ldots, \bar{x}_T$ determine each other uniquely.
Hence, the asserted equality holds.

Theorem 13. Let $O = (y_1, \ldots, y_T)$ be any given observation sequence, and assume that the state sequence $x_1^*, x_2^*, \ldots, x_T^*$ satisfies
$$(x_1^*, \ldots, x_T^*) = \arg\max_{x_1, \ldots, x_T} P(X_1 = x_1, \ldots, X_T = x_T, O \mid \lambda);$$
that is, the state sequence $x_1^*, \ldots, x_T^*$ is some optimal state sequence of the high-order hidden Markov model $\lambda$. Let $\bar{x}_t^* = \varphi(x_{t-n+1}^*, \ldots, x_t^*)$ for $n \le t \le T$; then the state sequence $\bar{x}_n^*, \ldots, \bar{x}_T^*$ satisfies
$$(\bar{x}_n^*, \ldots, \bar{x}_T^*) = \arg\max_{\bar{x}_n, \ldots, \bar{x}_T} P(\bar{X}_n = \bar{x}_n, \ldots, \bar{X}_T = \bar{x}_T, O \mid \bar{\lambda}).$$
That is, the state sequence $\bar{x}_n^*, \ldots, \bar{x}_T^*$ is some optimal state sequence of the first-order hidden Markov model $\bar{\lambda}$.

Proof. By Theorem 12, it is easy to see that the joint probability attained by $x_1^*, \ldots, x_T^*$ under $\lambda$ equals the joint probability attained by $\bar{x}_n^*, \ldots, \bar{x}_T^*$ under $\bar{\lambda}$. Meanwhile, by Remark 4, every composite state sequence with nonzero joint probability under $\bar{\lambda}$ arises from exactly one state sequence $x_1, \ldots, x_T$ in this way. Hence, we derive that $\bar{x}_n^*, \ldots, \bar{x}_T^*$ maximizes the joint probability under $\bar{\lambda}$.

According to Theorem 13, we know that some optimal state sequence of the high-order hidden Markov model $\lambda$ is mapped to some optimal state sequence of the first-order hidden Markov model $\bar{\lambda}$. Similarly, we can draw the following conclusion.

Theorem 14. Let $O = (y_1, \ldots, y_T)$ be any given observation sequence, and assume that the state sequence $\bar{x}_n^*, \ldots, \bar{x}_T^*$ satisfies
$$(\bar{x}_n^*, \ldots, \bar{x}_T^*) = \arg\max_{\bar{x}_n, \ldots, \bar{x}_T} P(\bar{X}_n = \bar{x}_n, \ldots, \bar{X}_T = \bar{x}_T, O \mid \bar{\lambda});$$
that is, the state sequence $\bar{x}_n^*, \ldots, \bar{x}_T^*$ is some optimal state sequence of the first-order hidden Markov model $\bar{\lambda}$. For $n \le t \le T$, let $(x_{t-n+1}^*, \ldots, x_t^*) = \varphi^{-1}(\bar{x}_t^*)$; then the state sequence $x_1^*, \ldots, x_T^*$ satisfies
$$(x_1^*, \ldots, x_T^*) = \arg\max_{x_1, \ldots, x_T} P(X_1 = x_1, \ldots, X_T = x_T, O \mid \lambda).$$
That is, the state sequence $x_1^*, \ldots, x_T^*$ is some optimal state sequence of the high-order hidden Markov model $\lambda$.

Remark 15. Combining Theorem 13 with Theorem 14, it is known that there exists a one-to-one correspondence between the optimal state sequences of the high-order hidden Markov model $\lambda$ and the optimal state sequences of the first-order hidden Markov model $\bar{\lambda}$.

To decode any high-order hidden Markov model $\lambda$, we first transform it into an equivalent first-order hidden Markov model $\bar{\lambda}$ by Hadar's transformation and then proceed as follows.

Step 1. Determine some optimal state sequence $\bar{x}_n^*, \bar{x}_{n+1}^*, \ldots, \bar{x}_T^*$ of the first-order hidden Markov model $\bar{\lambda}$ by using the Viterbi algorithm. Without loss of generality, let $\bar{x}_t^* = \varphi(x_{t-n+1}^*, \ldots, x_t^*)$, where $n \le t \le T$.

Step 2. For $t = n$, using the inverse transformation $(x_1^*, x_2^*, \ldots, x_n^*) = \varphi^{-1}(\bar{x}_n^*)$, we obtain the first $n$ states $x_1^*, x_2^*, \ldots, x_n^*$.

Step 3. For $t = n+1, n+2, \ldots, T$, using the transformation $x_t^* = \bar{x}_t^* \bmod N$ (the most recent component of $\varphi^{-1}(\bar{x}_t^*)$), we obtain the remaining states $x_{n+1}^*, \ldots, x_T^*$.

Combining Step 2 with Step 3, we obtain the state sequence $x_1^*, x_2^*, \ldots, x_T^*$.

According to Theorem 14, the above state sequence is some optimal state sequence of the high-order hidden Markov model $\lambda$.
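
Putting the three steps together, the following sketch reuses the viterbi, decode, and hadar_transform helpers and the example parameters from the earlier sketches. Folding the emissions of the first n-1 observations into the initial vector is our convention for accounting for the whole observation sequence and is not taken from the paper:

    def recover_high_order_path(bar_path, N, n):
        """Steps 2 and 3: map the optimal composite-state path back to high-order states.

        The first composite state encodes the first n states (Step 2); every later
        composite state contributes only its most recent component (Step 3).
        """
        states = list(decode(bar_path[0], N, n))
        for v in bar_path[1:]:
            states.append(v % N)
        return states

    # Step 1: Viterbi on the equivalent first-order model.
    pi_bar, A_bar, B_bar = hadar_transform(A_high, B, pi_high, N, n)
    obs = np.array([0, 2, 1, 3, 1, 0])                 # an illustrative observation sequence
    # Fold the emissions of the first n-1 observations into the initial vector so the
    # composite-state decoder accounts for the whole sequence (our convention).
    pi_eff = pi_bar.copy()
    for u in range(N ** n):
        for i, x in enumerate(decode(u, N, n)[:-1]):
            pi_eff[u] *= B[x, obs[i]]

    bar_path = viterbi(pi_eff, A_bar, B_bar, obs[n - 1:])
    x_path = recover_high_order_path(bar_path, N, n)   # optimal high-order state sequence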

4. Conclusions

In this paper, a novel method for decoding any high-order hidden Markov model is given. Based on this method, the optimal state sequence of any high-order hidden Markov model can be obtained with the existing Viterbi algorithm of the first-order hidden Markov model. The method applies to hidden Markov models of any order and thus provides a unified algorithmic framework for decoding both the first-order hidden Markov model and any high-order hidden Markov model. For instance, the Viterbi algorithm of the first-order hidden Markov model is recovered as the special case $n = 1$.

The method analyzed here is practical and valuable in its own right. Future research could apply it to handwriting recognition, speech recognition, speaker recognition, emotion recognition, and related areas.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the Major Program of the National Natural Science Foundation of China (no. 71390521), the Postdoctoral Science Foundation of China (no. 2014M551565), and the Scientific Research Foundation of Tongling University (no. 2012tlxyrc04).

References

1. J. Hu, M. K. Brown, and W. Turin, "HMM based on-line handwriting recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 10, pp. 1039–1045, 1996.
2. M. S. Khorsheed, "Recognising handwritten Arabic manuscripts using a single hidden Markov model," Pattern Recognition Letters, vol. 24, no. 14, pp. 2235–2242, 2003.
3. T. Artières, S. Marukatat, and P. Gallinari, "Online handwritten shape recognition using segmental hidden Markov models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 2, pp. 205–217, 2007.
4. B. H. Juang and L. R. Rabiner, "Hidden Markov models for speech recognition," Technometrics, vol. 33, no. 3, pp. 251–272, 1991.
5. M. Gales and S. Young, "The application of hidden Markov models in speech recognition," Foundations and Trends in Signal Processing, vol. 1, no. 3, pp. 195–304, 2008.
6. A. Löytynoja and M. C. Milinkovitch, "A hidden Markov model for progressive multiple alignment," Bioinformatics, vol. 19, no. 12, pp. 1505–1513, 2003.
7. L. Regad, F. Guyon, J. Maupetit, P. Tufféry, and A. C. Camproux, "A hidden Markov model applied to the protein 3D structure analysis," Computational Statistics and Data Analysis, vol. 52, no. 6, pp. 3198–3207, 2008.
8. R. M. Altman, "Mixed hidden Markov models: an extension of the hidden Markov model to the longitudinal data setting," Journal of the American Statistical Association, vol. 102, no. 477, pp. 201–210, 2007.
9. A. Spagnoli, R. Henderson, R. J. Boys, and J. J. Houwing-Duistermaat, "A hidden Markov model for informative dropout in longitudinal response data with crisis states," Statistics and Probability Letters, vol. 81, no. 7, pp. 730–738, 2011.
10. Y. Ephraim and N. Merhav, "Hidden Markov processes," IEEE Transactions on Information Theory, vol. 48, no. 6, pp. 1518–1569, 2002.
11. J. A. Bilmes, "What HMMs can do," IEICE Transactions on Information and Systems, vol. E89-D, no. 3, pp. 869–891, 2006.
12. L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," The Annals of Mathematical Statistics, vol. 37, no. 6, pp. 1554–1563, 1966.
13. L. R. Rabiner and B.-H. Juang, "An introduction to hidden Markov models," IEEE ASSP Magazine, vol. 3, no. 1, pp. 4–16, 1986.
14. J.-F. Mari, J.-P. Haton, and A. Kriouile, "Automatic word recognition based on second-order hidden Markov models," IEEE Transactions on Speech and Audio Processing, vol. 5, no. 1, pp. 22–25, 1997.
15. J.-F. Mari and F. Le Ber, "Temporal and spatial data mining with second-order hidden Markov models," Soft Computing, vol. 10, no. 5, pp. 406–414, 2006.
16. L.-M. Lee, "High-order hidden Markov model and application to continuous Mandarin digit recognition," Journal of Information Science and Engineering, vol. 27, no. 6, pp. 1919–1930, 2011.
17. J. A. du Preez, "Efficient training of high-order hidden Markov models using first-order representations," Computer Speech and Language, vol. 12, no. 1, pp. 23–39, 1998.
18. U. Hadar and H. Messer, "High-order hidden Markov models—estimation and implementation," in Proceedings of the 15th IEEE/SP Workshop on Statistical Signal Processing (SSP '09), pp. 249–252, Cardiff, Wales, September 2009.
19. H. A. Engelbrecht and J. A. du Preez, "Efficient backward decoding of high-order hidden Markov models," Pattern Recognition, vol. 43, no. 1, pp. 99–112, 2010.
20. F. Ye, N. Yi, and Y. F. Wang, "EM algorithm for training high-order hidden Markov model with multiple observation sequences," Journal of Information and Computational Science, vol. 8, no. 10, pp. 1761–1777, 2011.

Copyright © 2014 Fei Ye and Yifei Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

