Abstract

We extend the idea of hidden Markov chains on a line to hidden Markov chains indexed by Cayley trees. We then study the strong law of large numbers for hidden Markov chains indexed by Cayley trees and, as a corollary, obtain the strong limit law of the conditional sample entropy rate.

1. Introduction

Recently, interest in the theory of hidden Markov models (abbreviated HMMs hereafter) has become widespread, especially in areas such as speech recognition [1], image processing [2], DNA sequence analysis [3, 4], DNA microarray time-course analysis [5], and econometrics [6, 7]. For a good review of statistical and information-theoretic aspects of hidden Markov processes (HMPs), see Ephraim and Merhav [8]. In recent years, the work of Baum and Petrie [9] on finite-state, finite-alphabet HMMs has been extended to HMMs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMMs were developed, and consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under some mild conditions [9–12].

In this paper, we extend hidden Markov chains to hidden Markov chains indexed by Cayley trees, and we prove a strong law of large numbers for the offspring empirical measure of hidden Markov chains indexed by Cayley trees.

1.1. Notations and Preliminaries

A tree is a graph which is connected and contains no loops. Given any two vertices $s \ne t$ of a tree, let $\overline{st}$ be the unique path connecting $s$ and $t$. Define the graph distance $d(s, t)$ to be the number of edges contained in the path $\overline{st}$.

Let $T$ be an infinite tree with root $o$. The set of all vertices with distance $n$ from the root is called the $n$th generation of $T$ and is denoted by $L_n$. We denote by $T^{(n)}$ the union of the first $n$ generations of $T$. For each vertex $t$, there is a unique path from $o$ to $t$, and $|t|$ denotes the number of edges on this path. We denote the first predecessor of $t$ by $1_t$. The degree of a vertex is defined to be the number of its neighbors. If every vertex of the tree has $N$ neighbors in the next generation, we call it a Cayley tree, which is denoted by $T_{C,N}$. Thus, on the Cayley tree $T_{C,N}$, every vertex has degree $N+1$ except the root $o$, which has degree $N$. For any two vertices $s$ and $t$ of the tree $T$, write $s \le t$ if $s$ is on the unique path connecting the root $o$ to $t$. For any two vertices $s$ and $t$, we denote by $s \wedge t$ the vertex farthest from $o$ satisfying $s \wedge t \le s$ and $s \wedge t \le t$. We write $X^A = \{X_t,\ t \in A\}$ and denote by $|A|$ the number of vertices of $A$.
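For readers who prefer something concrete, the following sketch (not part of the paper; the labelling scheme is an arbitrary choice) builds $T_{C,N}$ up to generation $n$ as a parent array, from which the first predecessor $1_t$, the generations $L_k$, and the counts $|L_k| = N^k$ and $|T^{(n)}|$ can be read off.

```python
def cayley_tree(N, n):
    """Build T_{C,N} up to generation n.

    Vertices are labelled 0, 1, 2, ... with 0 as the root o; parents[v] is the
    first predecessor 1_v (None for the root) and generations[k] lists L_k.
    """
    parents = [None]
    generations = [[0]]
    for k in range(1, n + 1):
        level = []
        for v in generations[k - 1]:
            for _ in range(N):              # every vertex has N children
                parents.append(v)
                level.append(len(parents) - 1)
        generations.append(level)
    return parents, generations

parents, gens = cayley_tree(N=2, n=4)
assert [len(g) for g in gens] == [1, 2, 4, 8, 16]   # |L_k| = N^k
assert len(parents) == 31                           # |T^{(4)}| = 1 + 2 + 4 + 8 + 16
```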

In the following, we always let $T$ denote the Cayley tree $T_{C,N}$.

Definition 1.1 ($T$-indexed homogeneous Markov chains (see [13, 14])). Let $T$ be an infinite Cayley tree and $\{X_t,\ t \in T\}$ a stochastic process defined on a probability space $(\Omega, \mathcal{F}, P)$ and taking values in a finite state space $G$. Let
$$p = \{p(x),\ x \in G\} \quad (1.1)$$
be a distribution on $G$, and let
$$P = \big(P(y \mid x)\big),\quad x, y \in G, \quad (1.2)$$
be a transition probability matrix on $G^2$. If for every vertex $t$ other than the root $o$ we have
$$P\big(X_t = y \mid X_{1_t} = x \text{ and } X_s \text{ for } s \wedge t \le 1_t\big) = P(X_t = y \mid X_{1_t} = x) = P(y \mid x),\quad x, y \in G, \quad (1.3)$$
then we call $\{X_t,\ t \in T\}$ a $G$-valued homogeneous Markov chain indexed by the infinite Cayley tree $T$ with initial distribution (1.1) and transition probability matrix $P$ whose elements are determined by (1.3).
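As an illustration of Definition 1.1 (a hypothetical sketch, not from the paper; the state space, $p$, and $P$ below are arbitrary choices), a tree-indexed Markov chain can be simulated by drawing the root state from the initial distribution $p$ and every other state from the row of $P$ indexed by the state of its first predecessor $1_t$:

```python
import numpy as np

N, n = 2, 8                                  # Cayley tree T_{C,2} up to generation 8
size = (N ** (n + 1) - 1) // (N - 1)         # |T^{(n)}|
parent = lambda v: (v - 1) // N              # first predecessor 1_v (root is vertex 0)

p = np.array([0.5, 0.5])                     # initial distribution on G = {0, 1}
P = np.array([[0.9, 0.1],                    # transition matrix, P(y | x)
              [0.2, 0.8]])

rng = np.random.default_rng(0)
X = np.empty(size, dtype=int)
X[0] = rng.choice(2, p=p)                    # X_o ~ p
for v in range(1, size):
    X[v] = rng.choice(2, p=P[X[parent(v)]])  # X_t given X_{1_t} = x is drawn from P(. | x)
```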

Definition 1.2. Let $T$ be an infinite Cayley tree and let $G$ and $\Theta$ be two finite state spaces. Let $\{(X_t, Y_t),\ t \in T\}$ be a stochastic process on a probability space $(\Omega, \mathcal{F}, P)$. Let $A = (A(y \mid x)),\ x, y \in G$, and $B = (B(z \mid y)),\ y \in G,\ z \in \Theta$, be two stochastic matrices on $G^2$ and $G \times \Theta$, respectively. Suppose that
$$P(X_o = x,\ Y_o = z) = p(x)\, B(z \mid x),\quad x \in G,\ z \in \Theta, \quad (1.4)$$
where $p = \{p(x),\ x \in G\}$ is a distribution on $G$. If for every vertex $t$ other than the root $o$ we have
$$P\big(X_t = y,\ Y_t = z \mid X_{1_t} = x \text{ and } (X_s, Y_s) \text{ for } s \wedge t \le 1_t\big) = P(X_t = y,\ Y_t = z \mid X_{1_t} = x), \quad (1.5)$$
and moreover we suppose that
$$P\big(X_t = y,\ Y_t = z \mid X_{1_t} = x \text{ and } (X_s, Y_s) \text{ for } s \wedge t \le 1_t\big) = A(y \mid x)\, B(z \mid y),\quad x, y \in G,\ z \in \Theta, \quad (1.6)$$
then $\{(X_t, Y_t),\ t \in T\}$ will be called a $G \times \Theta$-valued hidden Markov chain indexed by an infinite tree, or a tree-indexed hidden Markov chain taking values in the finite set $G \times \Theta$.

Remark 1.3. (i) If we sum over $z \in \Theta$ in (1.6), we get, for any $x, y \in G$,
$$P\big(X_t = y \mid X_{1_t} = x \text{ and } (X_s, Y_s) \text{ for } s \wedge t \le 1_t\big) = A(y \mid x).$$
After taking conditional expectations with respect to $\sigma\big(X_s,\ s \wedge t \le 1_t\big)$ on both sides of the above equation, we arrive at (1.3). Therefore, $\{X_t,\ t \in T\}$ is a tree-indexed Markov chain.
In Definition 1.2, we also call $\{X_t,\ t \in T\}$ and $\{Y_t,\ t \in T\}$ the state process and the observed process, respectively, indexed by an infinite tree.
(ii) Obviously, by (1.6), the process $\{(X_t, Y_t),\ t \in T\}$ is a tree-indexed Markov chain with state space $G \times \Theta$.
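Continuing in the same illustrative spirit (again hypothetical; $A$, $B$, and the alphabets are arbitrary choices), a tree-indexed hidden Markov chain can be simulated by generating the state process as above with transition matrix $A$ and then emitting, at each vertex, an observation from the row of $B$ indexed by the current state, which is consistent with the product form $A(y \mid x)B(z \mid y)$ in (1.6):

```python
import numpy as np

N, n = 2, 8
size = (N ** (n + 1) - 1) // (N - 1)
parent = lambda v: (v - 1) // N

A = np.array([[0.9, 0.1],                    # transition matrix of the state process X
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],               # emission matrix, B(z | y) = P(Y_t = z | X_t = y)
              [0.1, 0.3, 0.6]])

rng = np.random.default_rng(0)
X = np.empty(size, dtype=int)
Y = np.empty(size, dtype=int)
X[0] = rng.choice(2, p=[0.5, 0.5])           # root state from an arbitrary initial distribution
Y[0] = rng.choice(3, p=B[X[0]])
for v in range(1, size):
    X[v] = rng.choice(2, p=A[X[parent(v)]])  # state process: tree-indexed Markov chain
    Y[v] = rng.choice(3, p=B[X[v]])          # observation depends only on the current state
```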

Property 1. Suppose that $\{(X_t, Y_t),\ t \in T\}$ is a hidden Markov chain indexed by an infinite tree which takes values in $G \times \Theta$; then we have (1.9) and (1.10).

Proof. Since $\{X_t,\ t \in T\}$ is a tree-indexed Markov chain, it is easy to see that (1.11) holds. On the other hand, we have (1.12). The conclusion (1.9) is derived directly from (1.11) and (1.12).

2. Strong Law of Large Numbers

Let $\{(X_t, Y_t),\ t \in T\}$ be a $G \times \Theta$-valued hidden Markov chain indexed by an infinite Cayley tree $T$. For every finite $n$, we define the offspring empirical measure as follows:
$$S_n(y, z) = \frac{1}{|T^{(n)}|} \sum_{t \in T^{(n)} \setminus \{o\}} \delta_y(X_t)\, \delta_z(Y_t),\quad y \in G,\ z \in \Theta, \quad (2.1)$$
where here and hereafter $\delta_y(\cdot)$ denotes the Kronecker delta function, that is, $\delta_y(x) = 1$ if $x = y$ and $\delta_y(x) = 0$ otherwise. In the rest of this paper, we consider the limit law of the random sequences $\{S_n(y, z),\ n \ge 1\}$ defined above.

Theorem 2.1. Let $T$ be a Cayley tree $T_{C,N}$ and let $\{(X_t, Y_t),\ t \in T\}$ be a $G \times \Theta$-valued hidden Markov chain indexed by $T$. If the transition probability matrix $A$ of $\{X_t,\ t \in T\}$ is ergodic, then
$$\lim_{n \to \infty} S_n(y, z) = \pi(y)\, B(z \mid y) \quad \text{a.e.},\quad y \in G,\ z \in \Theta, \quad (2.2)$$
where here and hereafter $\pi = (\pi(x),\ x \in G)$ is the stationary distribution of the ergodic matrix $A$; that is, $\pi A = \pi$ and $\sum_{x \in G} \pi(x) = 1$.

We postpone the proof of Theorem 2.1 to Section 3.
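A numerical sanity check of the kind of limit asserted in Theorem 2.1 (hypothetical, not from the paper): the sketch below assumes the offspring empirical measure of (2.1) counts the pairs $(X_t, Y_t)$ over non-root vertices, normalized by $|T^{(n)}|$, and compares it with $\pi(y)B(z \mid y)$; the matrices and tree parameters are arbitrary choices.

```python
import numpy as np

N, n = 3, 9                                   # T_{C,3} up to generation 9, |T^{(9)}| = 29524
size = (N ** (n + 1) - 1) // (N - 1)
parent = lambda v: (v - 1) // N

A = np.array([[0.9, 0.1],                     # ergodic transition matrix of the state process
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],                # emission matrix B(z | y)
              [0.1, 0.3, 0.6]])

rng = np.random.default_rng(1)
X = np.empty(size, dtype=int)
Y = np.empty(size, dtype=int)
X[0] = 0
Y[0] = rng.choice(3, p=B[0])
for v in range(1, size):
    X[v] = rng.choice(2, p=A[X[parent(v)]])
    Y[v] = rng.choice(3, p=B[X[v]])

# assumed form of the offspring empirical measure S_n(y, z): non-root vertices only
S = np.zeros((2, 3))
np.add.at(S, (X[1:], Y[1:]), 1.0)
S /= size

# stationary distribution pi of A:  pi A = pi,  sum(pi) = 1  (here pi = (2/3, 1/3))
w, vecs = np.linalg.eig(A.T)
pi = np.real(vecs[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()

print(np.abs(S - pi[:, None] * B).max())      # should be small when n is large
```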

From the expression (2.1), we can easily obtain the empirical measure of the observed chain $\{Y_t,\ t \in T\}$, which is denoted by
$$S_n(z) = \sum_{y \in G} S_n(y, z) = \frac{1}{|T^{(n)}|} \sum_{t \in T^{(n)} \setminus \{o\}} \delta_z(Y_t),\quad z \in \Theta. \quad (2.3)$$
Thus, we can obtain the following Corollary 2.2.

Corollary 2.2. Under the same conditions as in Theorem 2.1, one has
$$\lim_{n \to \infty} S_n(z) = \sum_{y \in G} \pi(y)\, B(z \mid y) \quad \text{a.e.},\quad z \in \Theta. \quad (2.4)$$

Let $g$ be any function defined on $G \times \Theta$. Denote
$$H_n(\omega) = \frac{1}{|T^{(n)}|} \sum_{t \in T^{(n)}} g(X_t, Y_t). \quad (2.5)$$
By a simple computation, we arrive at the following Corollary 2.3.

Corollary 2.3. Under the same conditions as in Theorem 2.1, one also has
$$\lim_{n \to \infty} H_n(\omega) = \sum_{y \in G} \sum_{z \in \Theta} \pi(y)\, B(z \mid y)\, g(y, z) \quad \text{a.e.} \quad (2.6)$$
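A sketch of the "simple computation" behind Corollary 2.3, assuming the forms of (2.1) and (2.5) written above: expanding $g$ through the Kronecker deltas gives
$$H_n(\omega) = \frac{1}{|T^{(n)}|} \sum_{t \in T^{(n)}} g(X_t, Y_t) = \sum_{y \in G} \sum_{z \in \Theta} g(y, z) \Big( S_n(y, z) + \frac{\delta_y(X_o)\, \delta_z(Y_o)}{|T^{(n)}|} \Big) \longrightarrow \sum_{y \in G} \sum_{z \in \Theta} \pi(y)\, B(z \mid y)\, g(y, z) \quad \text{a.e.},$$
where the convergence uses Theorem 2.1, the boundedness of $g$ on the finite set $G \times \Theta$, and $|T^{(n)}| \to \infty$.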

Now, we define the conditional entropy rate of $Y^{T^{(n)}}$ given $X^{T^{(n)}}$ as follows:
$$f_n(\omega) = -\frac{1}{|T^{(n)}|} \log P\big(Y^{T^{(n)}} \mid X^{T^{(n)}}\big). \quad (2.7)$$
From (1.6), we obtain that
$$f_n(\omega) = -\frac{1}{|T^{(n)}|} \sum_{t \in T^{(n)}} \log B(Y_t \mid X_t). \quad (2.8)$$

The convergence of $f_n(\omega)$ to a constant in some sense ($L^1$ convergence, convergence in probability, a.e. convergence) is called the conditional version of the Shannon-McMillan theorem, the entropy theorem, or the AEP in information theory. Here, from Corollary 2.3, if we let $g(y, z) = -\log B(z \mid y)$, we can easily obtain the Shannon-McMillan theorem with a.e. convergence for the conditional sample entropy rate of hidden Markov chain fields on the Cayley tree $T_{C,N}$.

Corollary 2.4. Under the same conditions as in Theorem 2.1, one has
$$\lim_{n \to \infty} f_n(\omega) = -\sum_{y \in G} \pi(y) \sum_{z \in \Theta} B(z \mid y) \log B(z \mid y) \quad \text{a.e.} \quad (2.9)$$
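As a numerical illustration of Corollary 2.4 (hypothetical, not from the paper), the conditional sample entropy rate can be computed from a simulated realization, assuming the factorization of $P(Y^{T^{(n)}} \mid X^{T^{(n)}})$ over vertices used in (2.8), and compared with the conditional entropy $-\sum_{y} \pi(y) \sum_{z} B(z \mid y) \log B(z \mid y)$:

```python
import numpy as np

N, n = 3, 9
size = (N ** (n + 1) - 1) // (N - 1)
parent = lambda v: (v - 1) // N

A = np.array([[0.9, 0.1], [0.2, 0.8]])        # state transition matrix
B = np.array([[0.7, 0.2, 0.1],                # emission matrix B(z | y)
              [0.1, 0.3, 0.6]])
pi = np.array([2.0, 1.0]) / 3.0               # stationary distribution of A (pi A = pi)

rng = np.random.default_rng(2)
X = np.empty(size, dtype=int)
Y = np.empty(size, dtype=int)
X[0] = rng.choice(2, p=pi)
Y[0] = rng.choice(3, p=B[X[0]])
for v in range(1, size):
    X[v] = rng.choice(2, p=A[X[parent(v)]])
    Y[v] = rng.choice(3, p=B[X[v]])

# conditional sample entropy rate, assuming P(Y^{T^(n)} | X^{T^(n)}) factorizes
# over vertices as prod_t B(Y_t | X_t)
f_n = -np.log(B[X, Y]).mean()

# candidate limit: the conditional entropy of Y given X under pi
H = -np.sum(pi[:, None] * B * np.log(B))
print(f_n, H)                                 # the two numbers should be close
```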

3. Proof of Theorem 2.1

Let $T$ be a Cayley tree and let $\{(X_t, Y_t),\ t \in T\}$ be a $G \times \Theta$-valued hidden Markov chain indexed by $T$. Let $\{g_t,\ t \in T\}$ be functions defined on $G^2 \times \Theta$. Let $\lambda$ be a real number and let $\mathcal{F}_n = \sigma\big(X^{T^{(n)}}, Y^{T^{(n)}}\big)$; now we define a stochastic sequence $\{t_n(\lambda, \omega),\ n \ge 1\}$ as in (3.1). First, we prove the following fact.

Lemma 3.1. The sequence $\{t_n(\lambda, \omega), \mathcal{F}_n,\ n \ge 1\}$ is a nonnegative martingale.

Proof of Lemma 3.1. Obviously, we have (3.2), where the second equality holds because of (1.10). Furthermore, we have (3.3). On the other hand, we also have (3.4). Combining (3.3) and (3.4), we get (3.5). Thus, we complete the proof of Lemma 3.1.

Lemma 3.2. Let $\{(X_t, Y_t),\ t \in T\}$ be a $G \times \Theta$-valued hidden Markov chain indexed by an infinite Cayley tree $T$, let $\{g_t,\ t \in T\}$ be functions defined as above, and adopt the notation of (3.6) and (3.7). Let $\alpha > 0$ and adopt the notation of (3.8). Then (3.9) holds.

Proof. By Lemma 3.1, we know that $\{t_n(\lambda, \omega),\ n \ge 1\}$ is a nonnegative martingale. According to Doob's martingale convergence theorem, we have (3.10), so that (3.11) holds. Combining (3.1), (3.8), and (3.11), we arrive at (3.12). Dividing both sides of the above equation by $|T^{(n)}|$, we get (3.13). For the case $\lambda > 0$, combining (3.13) with the inequalities $\ln x \le x - 1$ and $0 \le e^{x} - 1 - x \le \frac{1}{2} x^{2} e^{|x|}$, it follows that (3.14) holds. Letting $\lambda \to 0^{+}$ in (3.14) and combining with (3.6), we have (3.15). For the case $\lambda < 0$, arguing as in the case $\lambda > 0$, it follows from (3.13) that (3.16) holds. Letting $\lambda \to 0^{-}$, we arrive at (3.17). Combining (3.15) and (3.17), we obtain (3.9) directly.

Now we define the empirical measure of the tree-indexed Markov chain $\{X_t,\ t \in T\}$ as follows:
$$\Lambda_n(x, y) = \frac{1}{|T^{(n)}|} \sum_{t \in T^{(n)} \setminus \{o\}} \delta_x(X_{1_t})\, \delta_y(X_t),\quad x, y \in G. \quad (3.18)$$

The following lemma is very useful for proving our main result.

Lemma 3.3 (see [14]). Let $T$ be a Cayley tree and let $\{X_t,\ t \in T\}$ be a tree-indexed Markov chain with finite state space $G$, determined by an arbitrary initial distribution (1.1) and a finite transition probability matrix $P$. Suppose that the stochastic matrix $P$ is ergodic, with unique stationary distribution $\pi$; that is, $\pi P = \pi$ and $\sum_{x \in G} \pi(x) = 1$. Let $\Lambda_n(x, y)$ be defined as in (3.18). Then we have
$$\lim_{n \to \infty} \Lambda_n(x, y) = \pi(x)\, P(y \mid x) \quad \text{a.e.},\quad x, y \in G. \quad (3.19)$$

Corollary 3.4. Let $T$ be a Cayley tree and let $\{(X_t, Y_t),\ t \in T\}$ be a $G \times \Theta$-valued hidden Markov chain indexed by $T$. Define the following empirical measure of triples:
$$\Gamma_n(x, y, z) = \frac{1}{|T^{(n)}|} \sum_{t \in T^{(n)} \setminus \{o\}} \delta_x(X_{1_t})\, \delta_y(X_t)\, \delta_z(Y_t),\quad x, y \in G,\ z \in \Theta. \quad (3.20)$$
If the transition probability matrix $A$ of $\{X_t,\ t \in T\}$ is ergodic, we have
$$\lim_{n \to \infty} \Gamma_n(x, y, z) = \pi(x)\, A(y \mid x)\, B(z \mid y) \quad \text{a.e.}, \quad (3.21)$$
where $\pi$ is the stationary distribution of the ergodic matrix $A$.

Proof. For any $x, y \in G$ and $z \in \Theta$, let the functions $g_t$ be chosen as in (3.22); then we have (3.23). Since $T$ is a Cayley tree, we have (3.24). Combining the above fact with (3.9), (3.23), (3.24), and (3.19), we can derive our conclusion (3.21) directly.
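A simulation check in the same spirit (hypothetical, not from the paper; it assumes the triple empirical measure of (3.20) counts $(X_{1_t}, X_t, Y_t)$ over non-root vertices, normalized by $|T^{(n)}|$): the empirical frequencies should approach $\pi(x)A(y \mid x)B(z \mid y)$.

```python
import numpy as np

N, n = 3, 9
size = (N ** (n + 1) - 1) // (N - 1)

A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
pi = np.array([2.0, 1.0]) / 3.0               # stationary distribution of A

rng = np.random.default_rng(3)
X = np.empty(size, dtype=int)
Y = np.empty(size, dtype=int)
X[0] = rng.choice(2, p=pi)
Y[0] = rng.choice(3, p=B[X[0]])
for v in range(1, size):
    X[v] = rng.choice(2, p=A[X[(v - 1) // N]])
    Y[v] = rng.choice(3, p=B[X[v]])

# assumed form of the triple empirical measure Gamma_n(x, y, z)
Xp = X[(np.arange(1, size) - 1) // N]         # states of the first predecessors 1_t
Gamma = np.zeros((2, 2, 3))
np.add.at(Gamma, (Xp, X[1:], Y[1:]), 1.0)
Gamma /= size

limit = pi[:, None, None] * A[:, :, None] * B[None, :, :]   # pi(x) A(y|x) B(z|y)
print(np.abs(Gamma - limit).max())            # should be small when n is large
```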

Let us conclude this section by proving our main result, Theorem 2.1.

Proof of Theorem 2.1. Comparing (2.1) and (3.20), it is easy to see that
$$S_n(y, z) = \sum_{x \in G} \Gamma_n(x, y, z).$$
Taking the limit on both sides of the above equation as $n$ tends to infinity, it follows from Corollary 3.4 that
$$\lim_{n \to \infty} S_n(y, z) = \sum_{x \in G} \pi(x)\, A(y \mid x)\, B(z \mid y) = \pi(y)\, B(z \mid y) \quad \text{a.e.},$$
where the last equality holds because $\pi$ is the unique stationary distribution of the ergodic stochastic matrix $A$; that is, $\pi A = \pi$. Thus, we complete the proof of Theorem 2.1.

Acknowledgment

This work was supported by the National Natural Science Foundation of China (Grant no. 11201344).