International Journal of Stochastic Analysis

Volume 2014, Article ID 628321, 6 pages

http://dx.doi.org/10.1155/2014/628321

## Strong Law of Large Numbers for Hidden Markov Chains Indexed by an Infinite Tree with Uniformly Bounded Degrees

College of Mathematics and Information Science, Wenzhou University, Zhejiang 325035, China

Received 29 August 2014; Accepted 24 November 2014; Published 9 December 2014

Academic Editor: Lukasz Stettner

Copyright © 2014 Huilin Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We study strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We establish the strong law of large numbers for such fields and give the strong limit law of the conditional sample entropy rate.

#### 1. Introduction

The theory of hidden Markov models (HMMs) has become widespread for modeling sequences of dependent random variables during the last two decades. Hidden Markov models have many applications in a wide range of areas, such as speech recognition (Rabiner (1989)) [1], image processing [2], DNA sequence analysis (see, e.g., [3, 4]), DNA microarray time course analysis [5], and econometrics [6, 7]. For a good review of statistical and information-theoretic aspects of hidden Markov processes (HMPs), see [8]. In recent years, the work of Baum and Petrie [9] on finite-state finite-alphabet HMMs has been extended to HMMs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMMs were developed, and consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under mild conditions [9–12].

In this paper we extend tree-indexed homogeneous Markov chain fields to hidden Markov chain fields indexed by an infinite tree.

A tree is a graph which is connected and contains no loops. Given any two vertices $x$ and $y$ of a tree, let $\overline{xy}$ be the unique path connecting $x$ and $y$. Define the graph distance $d(x, y)$ to be the number of edges contained in the path $\overline{xy}$.

Let $T$ be an infinite tree with root $0$. The set of all vertices with distance $n$ from the root is called the $n$th generation of $T$ and is denoted by $L_n$. We denote by $T^{(n)}$ the union of the first $n$ generations of $T$. For each vertex $t$, there is a unique path from $0$ to $t$, and we write $|t|$ for the number of edges on this path. We denote the first predecessor of $t$ by $1_t$, the second predecessor of $t$ by $2_t$, and the $n$th predecessor of $t$ by $n_t$. The degree of a vertex is defined to be the number of its neighbors. In this paper, we mainly consider an infinite tree with uniformly bounded degrees; that is, the numbers of neighbors of all vertices in the tree are uniformly bounded. For any two vertices $s$ and $t$ of tree $T$, write $s \le t$ if $s$ is on the unique path from the root $0$ to $t$. We denote by $s \wedge t$ the vertex farthest from the root satisfying $s \wedge t \le s$ and $s \wedge t \le t$, and denote by $|A|$ the number of vertices of a subset $A$ of $T$.

When the context permits, trees of this type are all denoted simply by $T$.

*Definition 1 (homogeneous Markov chains indexed by a tree (see [13, 14])). *Let $T$ be an infinite tree with uniformly bounded degrees, let $S = \{0, 1, \ldots, N-1\}$ be a finite state space, and let $\{X_t,\ t \in T\}$ be a stochastic process defined on a probability space $(\Omega, \mathcal{F}, P)$, taking values in the finite set $S$. Let
$$\pi = (\pi(0), \pi(1), \ldots, \pi(N-1)) \tag{1}$$
be a distribution on $S$, and let
$$P = \big(P(y \mid x)\big), \quad x, y \in S, \tag{2}$$
be a transition probability matrix on $S^2$. If, for any vertex $t$,
$$P\big(X_t = y \mid X_{1_t} = x \text{ and } X_s \text{ for } s \wedge t \le 1_t\big) = P(X_t = y \mid X_{1_t} = x) = P(y \mid x), \quad x, y \in S, \tag{3}$$
then $\{X_t,\ t \in T\}$ will be called $S$-valued homogeneous Markov chains indexed by the infinite tree $T$ with the initial distribution (1) and transition probability matrix $P$ whose elements are determined by (3).
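To make Definition 1 concrete, the Markov property (3) says that a vertex's state depends on the rest of the tree only through its first predecessor's state, so a realization can be sampled generation by generation. The following minimal sketch does this on a binary tree; the function name, the specific matrices, and the level-by-level storage are illustrative assumptions, not part of the paper:

```python
import random

def sample_tree_markov_chain(depth, pi, P, branching=2, seed=0):
    """Sample a homogeneous Markov chain indexed by a rooted tree.

    The root's state is drawn from the initial distribution pi (cf. (1));
    every other vertex draws its state from the row of the transition
    matrix P indexed by its parent's state, which is the Markov
    property (3). levels[n] holds the states of the nth generation.
    """
    rng = random.Random(seed)
    states = list(range(len(pi)))
    levels = [[rng.choices(states, weights=pi)[0]]]   # generation 0: the root
    for _ in range(depth):
        nxt = []
        for parent_state in levels[-1]:
            for _ in range(branching):                # uniformly bounded degree
                nxt.append(rng.choices(states, weights=P[parent_state])[0])
        levels.append(nxt)
    return levels

levels = sample_tree_markov_chain(3, pi=[0.5, 0.5],
                                  P=[[0.7, 0.3], [0.4, 0.6]])
```

Note that on a binary tree the $n$th generation has $2^n$ vertices, so generation sizes grow geometrically; this is what distinguishes tree-indexed laws of large numbers from the classical one-dimensional case.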

*Definition 2. *Let $T$ be an infinite tree with uniformly bounded degrees and let $\{X_t,\ t \in T\}$ and $\{Y_t,\ t \in T\}$ be two stochastic processes on a probability space $(\Omega, \mathcal{F}, P)$ with finite state spaces $\mathcal{X}$ and $\mathcal{Y}$, respectively. Let $P = (P(x' \mid x))$ and $Q = (Q(y \mid x))$ be two stochastic matrices on $\mathcal{X}^2$ and $\mathcal{X} \times \mathcal{Y}$, respectively. Suppose
$$P(X_0 = x, Y_0 = y) = \pi(x)\, Q(y \mid x), \quad x \in \mathcal{X},\ y \in \mathcal{Y}. \tag{4}$$
Suppose further that, for any vertex $t \ne 0$,
$$P\big(X_t = x', Y_t = y \mid X_{1_t} = x \text{ and } (X_s, Y_s) \text{ for } s \wedge t \le 1_t\big) = P(X_t = x', Y_t = y \mid X_{1_t} = x). \tag{5}$$
Moreover, we suppose
$$P(X_t = x', Y_t = y \mid X_{1_t} = x) = P(x' \mid x)\, Q(y \mid x'), \tag{6}$$
where $x, x' \in \mathcal{X}$ and $y \in \mathcal{Y}$. Then $\{(X_t, Y_t),\ t \in T\}$ will be called an $\mathcal{X} \times \mathcal{Y}$-valued* hidden Markov chain indexed by an infinite tree *$T$, or called a* tree-indexed hidden Markov chain* taking values in the finite set $\mathcal{X} \times \mathcal{Y}$.

*Remark 3. *If we sum over $y \in \mathcal{Y}$ in (6) and take conditional expectations with respect to $X_{1_t}$ on both sides of the resulting equation, we can easily arrive at (3). In Definition 2, we can also call the processes $\{X_t,\ t \in T\}$ and $\{Y_t,\ t \in T\}$, respectively, the* state process* and the* observed process* indexed by an infinite tree.

#### 2. Two Useful Lemmas

Let $T$ be an infinite tree with uniformly bounded degrees and let $\{(X_t, Y_t),\ t \in T\}$ be an $\mathcal{X} \times \mathcal{Y}$-valued hidden Markov chain indexed by $T$. Let $\{g_t,\ t \in T\}$ be functions defined on $\mathcal{X}^2 \times \mathcal{Y}$. Let $\lambda$ be a real number, and define a stochastic sequence as follows: We first prove the following fact.

Lemma 4. *The sequence defined above is a nonnegative martingale.*

*Proof of Lemma 4. *Obviously, by (6), the process $\{(X_t, Y_t),\ t \in T\}$ is a* tree-indexed Markov chain* with state space $\mathcal{X} \times \mathcal{Y}$, so we have
Then we have
where the second equality holds because of (10). Furthermore, we have

On the other hand, we also have
Combining (13) and (14), we get
Thus we complete the proof of Lemma 4.

Lemma 5. *Let $\{(X_t, Y_t),\ t \in T\}$ be an $\mathcal{X} \times \mathcal{Y}$-valued hidden Markov chain indexed by an infinite tree $T$ with uniformly bounded degrees, and let $\{g_t,\ t \in T\}$ be the functions defined as above; denote*

*Let be a sequence of nonnegative random variables. Denote*

*Then*

*Proof. *By Lemma 4, we know that is a nonnegative martingale. According to Doob's martingale convergence theorem, we have
which implies that
Combining (9), (18), and (21), we arrive at
Let . Dividing both sides of the above equation by , we get
For the case , by using the inequalities , , and (23), we obtain
Letting in (24), combining with (16), we have
Let . Similarly to the analysis of the previous case, it follows from (22) that
Letting , we can arrive at
Combining (25) and (27), we obtain (19) directly.

*Corollary 6. Let $\{(X_t, Y_t),\ t \in T\}$ be an $\mathcal{X} \times \mathcal{Y}$-valued hidden Markov chain indexed by an infinite tree $T$ with uniformly bounded degrees. If $\{g_t,\ t \in T\}$ are the uniformly bounded functions defined on $\mathcal{X}^2 \times \mathcal{Y}$ and is a nonnegative integer, then
*

*Proof. *Letting in Lemma 5 and noticing that $\{g_t,\ t \in T\}$ are uniformly bounded, we have for all . The corollary then follows from Lemma 5 directly.

#### 3. Strong Law of Large Numbers

In the following, we always let , , , and denote
here and hereafter denotes the Kronecker delta function. The following lemma is very useful for proving our main result.

Lemma 7 (see [14]). *Let $T$ be an infinite tree with uniformly bounded degrees and let $\{X_t,\ t \in T\}$ be a tree-indexed Markov chain with finite state space $S$, determined by the initial distribution (1) and the finite transition probability matrix $P$. Suppose that the stochastic matrix $P$ is ergodic, with unique stationary distribution $\pi$; that is, $\pi P = \pi$ and $\sum_{x \in S} \pi(x) = 1$. Let be defined as in (30). Then one has*
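The stationary distribution appearing in Lemma 7 can be computed numerically by power iteration, since for an ergodic stochastic matrix the iterates $\mu P^k$ converge to $\pi$ from any starting distribution. A small sketch (the example matrix and function name are illustrative assumptions):

```python
def stationary_distribution(P, iters=200):
    """Approximate the unique stationary distribution of an ergodic
    stochastic matrix P by repeatedly applying mu <- mu * P."""
    n = len(P)
    mu = [1.0 / n] * n                      # start from the uniform distribution
    for _ in range(iters):
        mu = [sum(mu[i] * P[i][j] for i in range(n)) for j in range(n)]
    return mu

pi = stationary_distribution([[0.7, 0.3], [0.4, 0.6]])
# For this 2x2 chain the exact stationary distribution is (4/7, 3/7).
```

Convergence is geometric at the rate of the second-largest eigenvalue modulus (here 0.3), so 200 iterations are far more than enough for machine precision.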

*Theorem 8. Let $T$ be an infinite tree with uniformly bounded degrees and let $\{(X_t, Y_t),\ t \in T\}$ be an $\mathcal{X} \times \mathcal{Y}$-valued hidden Markov chain indexed by $T$. For every nonnegative integer , define the following weighted empirical measure of triples :
If the transition probability matrix $P$ of $\{X_t,\ t \in T\}$ is ergodic, one has
where $\pi$ is the stationary distribution of the ergodic matrix $P$.*

*Proof. *For any , let
Then we have
Combining (35) and Corollary 6, we obtain
By using Lemma 7, our conclusion (33) holds.
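Theorem 8 can be checked by simulation: on a large finite subtree $T^{(n)}$, the empirical frequency of each state should approach the stationary distribution of $P$. The sketch below does this for a binary tree; the matrices, depth, seed, and tolerance are illustrative assumptions for the demo, not quantities from the paper:

```python
import random
from collections import Counter

def state_frequencies(depth, pi, P, branching=2, seed=0):
    """Sample a tree-indexed Markov chain down to the given depth and
    return the empirical distribution of states over all vertices of
    the subtree T^(depth)."""
    rng = random.Random(seed)
    states = list(range(len(pi)))
    counts = Counter()
    level = [rng.choices(states, weights=pi)[0]]      # the root
    counts.update(level)
    for _ in range(depth):
        nxt = []
        for parent in level:
            for _ in range(branching):                # uniformly bounded degree
                nxt.append(rng.choices(states, weights=P[parent])[0])
        counts.update(nxt)
        level = nxt
    total = sum(counts.values())
    return {s: counts[s] / total for s in states}

freq = state_frequencies(12, pi=[0.5, 0.5], P=[[0.7, 0.3], [0.4, 0.6]])
# The stationary distribution of this P is (4/7, 3/7) ~ (0.571, 0.429),
# so freq[0] should be close to 0.571 on a deep enough tree.
```

At depth 12 the subtree has $2^{13} - 1 = 8191$ vertices, which is already enough for the empirical frequencies to sit within a few percent of the stationary values.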

*Corollary 9. Under the conditions of Theorem 8, denote ; then one has
where $\pi$ is the stationary distribution of the ergodic matrix $P$.*

For every finite , let $\{(X_t, Y_t),\ t \in T\}$ be an $\mathcal{X} \times \mathcal{Y}$-valued hidden Markov chain indexed by an infinite tree $T$ with uniformly bounded degrees. We define the offspring empirical measure as follows:

In the following, we consider the limit law of the random sequences defined above.

*Theorem 10. Let $T$ be an infinite tree with uniformly bounded degrees, and let $\{(X_t, Y_t),\ t \in T\}$ be an $\mathcal{X} \times \mathcal{Y}$-valued hidden Markov chain indexed by $T$. If the transition probability matrix $P$ of $\{X_t,\ t \in T\}$ is ergodic, then
where $\pi$ is the stationary distribution of the ergodic matrix $P$.*

*Proof. *Letting , we have
Comparing (38) with (40), it is easy to see that
Taking limits on both sides of the above equation as tends to infinity, it follows from Corollary 9 that
where the last equality holds because $\pi$ is the unique stationary distribution of the ergodic stochastic matrix $P$; that is, $\pi P = \pi$. Thus we complete the proof of Theorem 10.

From the expression (38) we can easily obtain the empirical measure of the observed chain, which we denote by :
Thus we obtain Corollary 11.

*Corollary 11. Under the same conditions as in Theorem 10, one has
*

Let be any function defined on . Denote
By simple computation, we arrive at Corollary 12.

*Corollary 12. Under the same conditions as in Theorem 10, one also has
*

Now we define the conditional entropy rate of given by
From (6), we obtain

The convergence of to a constant in some sense ($L^1$ convergence, convergence in probability, or a.e. convergence) is called the conditional version of the Shannon-McMillan theorem, the entropy theorem, or the AEP (asymptotic equipartition property) in information theory. Here, from Corollary 12, if we let
we can easily obtain the Shannon-McMillan theorem with a.e. convergence for the conditional entropy of hidden Markov chain fields on tree $T$.

*Corollary 13. Under the same conditions as in Theorem 10, one has
Here one also specifies $0 \log 0$ as zero by convention.*

#### 4. Conclusion

This paper establishes some strong limit theorems for hidden Markov chain fields indexed by an infinite tree with uniformly bounded degrees. We prove the strong law of large numbers for such fields and give the strong limit law of the conditional sample entropy rate.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

This work was supported by the National Natural Science Foundation of China (Grant no. 11201344).

#### References

1. L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," *Proceedings of the IEEE*, vol. 77, no. 2, pp. 257–286, 1989.
2. J. Li and R. M. Gray, *Image Segmentation and Compression Using Hidden Markov Models*, Kluwer Academic Publishers, 2000.
3. R. Durbin, S. Eddy, A. Krogh, and G. Mitchison, *Biological Sequence Analysis*, Cambridge University Press, Cambridge, UK, 1998.
4. T. Koski, *Hidden Markov Models for Bioinformatics*, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2001.
5. M. Yuan and C. Kendziorski, "Hidden Markov models for microarray time course data in multiple biological conditions," *Journal of the American Statistical Association*, vol. 101, no. 476, pp. 1323–1332, 2006.
6. J. D. Hamilton, "A new approach to the economic analysis of nonstationary time series and the business cycle," *Econometrica*, vol. 57, no. 2, pp. 357–384, 1989.
7. C. A. Sims and T. Zha, "Were there regime switches in U.S. monetary policy?" *The American Economic Review*, vol. 96, no. 1, pp. 54–81, 2006.
8. Y. Ephraim and N. Merhav, "Hidden Markov processes," *IEEE Transactions on Information Theory*, vol. 48, no. 6, pp. 1518–1569, 2002.
9. L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," *Annals of Mathematical Statistics*, vol. 37, pp. 1554–1563, 1966.
10. T. Petrie, "Probabilistic functions of finite state Markov chains," *Annals of Mathematical Statistics*, vol. 40, pp. 97–115, 1969.
11. B. G. Leroux, "Maximum-likelihood estimation for hidden Markov models," *Stochastic Processes and their Applications*, vol. 40, no. 1, pp. 127–143, 1992.
12. P. J. Bickel and Y. Ritov, "Inference in hidden Markov models. I. Local asymptotic normality in the stationary case," *Bernoulli*, vol. 2, no. 3, pp. 199–228, 1996.
13. I. Benjamini and Y. Peres, "Markov chains indexed by trees," *The Annals of Probability*, vol. 22, no. 1, pp. 219–243, 1994.
14. H. Huang and W. Yang, "Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree," *Science in China Series A: Mathematics*, vol. 51, no. 2, pp. 195–202, 2008.
