Abstract
We study the limit law of the offspring empirical measure for Markov chains indexed by a homogeneous tree, with almost-everywhere convergence. We then prove a Shannon-McMillan theorem with almost-everywhere convergence.
1. Introduction
A tree $T$ is a graph $T=(V,E)$ which is connected and contains no circuits, where $V$ and $E$ denote the vertex set and the edge set, respectively. Given any two distinct vertices $x,y\in V$, let $\overline{xy}$ be the unique path connecting $x$ and $y$. Define the graph distance $d(x,y)$ to be the number of edges contained in the path $\overline{xy}$.
Let $T$ be an infinite tree with root $0$. The set of all vertices with distance $n$ from the root is called the $n$th generation of $T$, which is denoted by $L_n$. We denote by $T^{(n)}$ the union of the first $n$ generations of $T$. For each vertex $t$, there is a unique path from $0$ to $t$, and $|t|$ denotes the number of edges on this path. We denote the first predecessor of $t$ by $1_t$, the second predecessor of $t$ by $2_t$, and the $n$th predecessor of $t$ by $n_t$. The degree of a vertex is defined to be the number of its neighbors. If the degree sequence of a tree is uniformly bounded, we call the tree a uniformly bounded tree. Let $N$ be a positive integer. If every vertex of the tree has $N$ neighbors in the next generation, we call it a Cayley tree, which is denoted by $T_{C,N}$. Thus on a Cayley tree, every vertex has degree $N+1$ except the root, which has degree $N$. For any two vertices $s$ and $t$ of tree $T$, write $s\le t$ if $s$ is on the unique path from the root $0$ to $t$. We denote by $s\wedge t$ the vertex farthest from $0$ satisfying $s\wedge t\le s$ and $s\wedge t\le t$. We write $X^A=\{X_t,\ t\in A\}$ and denote by $|A|$ the number of vertices of $A$.
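The vertex and predecessor bookkeeping above can be made concrete with an illustrative encoding (an assumption for exposition, not used in the paper): index each vertex of $T_{C,N}$ by the tuple of branch choices leading to it, so the root is the empty tuple and $1_t$ drops the last coordinate. A minimal Python sketch with $N=2$:

```python
# Sketch: encode vertices of the Cayley tree T_{C,N} as tuples over {1,...,N}.
# The root is the empty tuple; the first predecessor 1_t drops the last symbol.
from itertools import product

N = 2  # branching number of the Cayley tree T_{C,2}

def generation(n):
    """All vertices of the nth generation L_n, encoded as tuples."""
    return list(product(range(1, N + 1), repeat=n))

def first_predecessor(t):
    """The first predecessor 1_t of a non-root vertex t."""
    return t[:-1]

L3 = generation(3)                                   # |L_3| = N^3 = 8
T3_size = sum(len(generation(k)) for k in range(4))  # |T^{(3)}| = 1+2+4+8 = 15
```

With this encoding, $|L_n|=N^n$ and $|T^{(n)}|=1+N+\cdots+N^n$, as used repeatedly below.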
Definition 1.1 (see [1]). Let $T$ be an infinite Cayley tree, $S=\{s_1,s_2,\ldots,s_m\}$ a finite state space, and $\{X_t,\ t\in T\}$ a collection of $S$-valued random variables defined on the probability space $(\Omega,\mathcal{F},P)$. Let
$$q=\big(q(s_1),q(s_2),\ldots,q(s_m)\big) \tag{1.1}$$
be a distribution on $S$ and
$$P=\big(P(y\mid x)\big),\qquad x,y\in S, \tag{1.2}$$
be a stochastic matrix on $S^2$. If for any vertex $t$,
$$P\big(X_t=y\mid X_{1_t}=x\ \text{and}\ X_s\ \text{for}\ s\wedge t\le 1_t\big)=P\big(X_t=y\mid X_{1_t}=x\big)=P(y\mid x)\qquad \forall x,y\in S,$$
then $\{X_t,\ t\in T\}$ will be called an $S$-valued Markov chain indexed by an infinite tree $T$ with the initial distribution (1.1) and transition matrix (1.2), or a tree-indexed Markov chain with state space $S$. Furthermore, if the transition matrix $P$ is ergodic, then we call $\{X_t,\ t\in T\}$ an ergodic Markov chain indexed by the infinite tree $T$.
The above definition extends the definitions of Markov chain fields on trees (see [1, page 456] and [2]). In this paper, we always suppose that the tree-indexed Markov chain is ergodic.
The subject of tree-indexed processes is rather young. Benjamini and Peres [3] introduced the notion of tree-indexed Markov chains and studied recurrence and ray-recurrence for them. Berger and Ye [4] studied the existence of the entropy rate for some stationary random fields on a homogeneous tree. Ye and Berger (see [5, 6]), by using Pemantle's result [7] and a combinatorial approach, studied the Shannon-McMillan theorem with convergence in probability for a PPG-invariant and ergodic random field on a homogeneous tree. Yang and Liu [8] studied a strong law of large numbers for the frequency of occurrence of states for Markov chain fields on a homogeneous tree (a particular case of tree-indexed Markov chains and PPG-invariant random fields). Takacs (see [9]) studied the strong law of large numbers for univariate functions of finite Markov chains indexed by an infinite tree with uniformly bounded degree. Subsequently, Huang and Yang (see [10]) studied the Shannon-McMillan theorem for finite homogeneous Markov chains indexed by a uniformly bounded infinite tree. Dembo et al. (see [11]) showed that the large deviation principle holds for the empirical offspring measure of Markov chains on random trees and gave the explicit rate function, which is defined in terms of specific relative entropy (see [12]) and Cramér's rate function.
In this paper, we study the strong law of large numbers for the offspring empirical measure and the Shannon-McMillan theorem with a.e. convergence for Markov chain fields on the Cayley tree by using a method similar to that of [10].
2. Statements of the Results
For every vertex $t\in T$, the random vector of offspring states is defined as
$$X_{\mathbf{o}(t)}=\big(X_{t1},X_{t2},\ldots,X_{tN}\big),$$
where $t1,t2,\ldots,tN$ denote the $N$ offspring of $t$.
Let $\mathbf{x}=(x_1,x_2,\ldots,x_N)$ be an $N$-dimensional vector on $S^N$.
Now we also let the distribution (1.1) serve as the initial distribution. Define the offspring transition kernel $Q$ from $S$ to $S^N$. We define the law of a tree-indexed process by the following rules.
(i) The state of the root random variable $X_0$ is determined by distribution (1.1).
(ii) For every vertex $t$ with state $x$, the offspring states are given, independently of everything else, by the offspring law $Q(\cdot\mid x)$ on $S^N$, where
$$Q(\mathbf{x}\mid x)=P\big(X_{\mathbf{o}(t)}=\mathbf{x}\mid X_t=x\big)=\prod_{i=1}^{N}P(x_i\mid x).$$
Here the last equation holds because of the property of conditional independence.
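Rules (i) and (ii) amount to a level-by-level sampling procedure. The following sketch simulates a configuration on $T^{(n)}$ for $T_{C,2}$; the binary state space, the particular $q$ and $P$, the seed, and the tuple encoding of vertices are all illustrative assumptions, not taken from the paper:

```python
# Sketch: sample a tree-indexed Markov chain on T^{(n)} of T_{C,N} following
# rule (i) for the root and rule (ii) for the offspring of each vertex.
import random

random.seed(0)
S = (0, 1)                        # illustrative finite state space
q = [0.5, 0.5]                    # initial distribution (1.1)
P = [[0.7, 0.3], [0.4, 0.6]]      # ergodic transition matrix (1.2)
N = 2                             # branching number of T_{C,N}

def sample_tree(n):
    """Sample {vertex tuple: state} on T^{(n)} via rules (i) and (ii)."""
    X = {(): random.choices(S, weights=q)[0]}   # rule (i): the root state
    level = [()]
    for _ in range(n):
        nxt = []
        for t in level:
            for i in range(1, N + 1):
                # rule (ii): each offspring state drawn from P(.|X_t),
                # independently of everything else
                X[t + (i,)] = random.choices(S, weights=P[X[t]])[0]
                nxt.append(t + (i,))
        level = nxt
    return X

X = sample_tree(4)                # configuration on T^{(4)}: 31 vertices
```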
For every finite $n\in\mathbb{N}$, let $\{X_t,\ t\in T^{(n)}\}$ be an $S$-valued Markov chain indexed by the infinite tree $T$. Now we define the offspring empirical measure
$$L_n(x,\mathbf{x})=\frac{1}{|T^{(n-1)}|}\sum_{t\in T^{(n-1)}}\mathbf{1}_{\{X_t=x,\ X_{\mathbf{o}(t)}=\mathbf{x}\}},\qquad (x,\mathbf{x})\in S\times S^N. \tag{2.3}$$
For any state $x\in S$, $S_n(x)$ is the empirical measure, which is defined as follows:
$$S_n(x)=\sum_{t\in T^{(n)}}\mathbf{1}_{\{X_t=x\}}, \tag{2.4}$$
where $\mathbf{1}_{\{\cdot\}}$ denotes the indicator function as usual.
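A minimal sketch of how (2.3) and (2.4) are evaluated on a concrete configuration; the toy deterministic configuration (state equal to the parity of $|t|$), the tuple encoding of vertices, and the variable names are illustrative assumptions:

```python
# Sketch: evaluate the offspring empirical measure (2.3) and the occupation
# counts (2.4) on a toy configuration X over T^{(3)} of T_{C,2}.
from itertools import product
from collections import Counter

N = 2
n = 3

# Toy configuration on T^{(3)}: the state of vertex t is the parity of |t|.
vertices = [t for k in range(n + 1) for t in product((1, 2), repeat=k)]
X = {t: len(t) % 2 for t in vertices}

# Offspring empirical measure L_n(x, xvec): average over t in T^{(n-1)}.
interior = [t for t in vertices if len(t) < n]      # T^{(n-1)}, 7 vertices
L_emp = Counter()
for t in interior:
    offspring = tuple(X[t + (i,)] for i in range(1, N + 1))
    L_emp[(X[t], offspring)] += 1 / len(interior)

# Occupation counts S_n(x): number of vertices of T^{(n)} in state x.
S_count = Counter(X[t] for t in vertices)
```

For this configuration every even-generation vertex has offspring vector $(1,1)$ and every odd-generation vertex has offspring vector $(0,0)$, so `L_emp` is supported on two pairs and sums to one.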
In the rest of this paper, we consider the limit law of the random sequence $\{L_n(x,\mathbf{x}),\ n\ge 1\}$ defined above.
Theorem 2.1. Let $T$ be a Cayley tree $T_{C,N}$, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a tree-indexed Markov chain with initial distribution (1.1) and ergodic transition matrix $P$. Let $L_n(x,\mathbf{x})$ be defined as in (2.3). Then one has
$$\lim_{n\to\infty}L_n(x,\mathbf{x})=\pi(x)\prod_{i=1}^{N}P(x_i\mid x)\qquad \text{a.e.},$$
where $\pi$ is the stationary distribution of the ergodic matrix $P$, that is, $\pi P=\pi$ and $\sum_{x\in S}\pi(x)=1$.
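As an informal numerical check (not part of the proof), one can simulate a deep tree and compare $L_n(x,\mathbf{x})$ with $\pi(x)\prod_{i=1}^N P(x_i\mid x)$. The matrix $P$, the initial distribution, the seed, and the depth below are arbitrary illustrative choices:

```python
# Sketch: simulate T^{(12)} of T_{C,2} level by level and compare the
# offspring empirical measure with the limit predicted by Theorem 2.1.
import random
from collections import Counter

random.seed(1)
N, n = 2, 12
P = [[0.7, 0.3], [0.4, 0.6]]
pi = [4 / 7, 3 / 7]               # stationary distribution: pi P = pi

counts = Counter()
level = [random.choices((0, 1), weights=(0.5, 0.5))[0]]   # root state
interior = 0                      # counts |T^{(n-1)}|
for _ in range(n):
    nxt = []
    for x in level:
        offspring = tuple(random.choices((0, 1), weights=P[x])[0]
                          for _ in range(N))
        counts[(x, offspring)] += 1
        interior += 1
        nxt.extend(offspring)
    level = nxt

L_emp = {k: v / interior for k, v in counts.items()}
predicted = {(x, (a, b)): pi[x] * P[x][a] * P[x][b]
             for x in (0, 1) for a in (0, 1) for b in (0, 1)}
max_err = max(abs(L_emp.get(k, 0.0) - predicted[k]) for k in predicted)
```

Here `interior` equals $|T^{(n-1)}|=2^{12}-1=4095$, and `max_err` is small for this seed, consistent with the a.e. convergence asserted by the theorem.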
Corollary 2.2. Under the conditions of Theorem 2.1, suppose that $f(x,\mathbf{x})$ is any function defined on $S\times S^N$. Denote
$$M_n(\omega)=\frac{1}{|T^{(n-1)}|}\sum_{t\in T^{(n-1)}}f\big(X_t,X_{\mathbf{o}(t)}\big).$$
Then
$$\lim_{n\to\infty}M_n(\omega)=\sum_{x\in S}\sum_{\mathbf{x}\in S^N}\pi(x)\prod_{i=1}^{N}P(x_i\mid x)\,f(x,\mathbf{x})\qquad \text{a.e.}$$
Proof. Noting that
$$M_n(\omega)=\sum_{x\in S}\sum_{\mathbf{x}\in S^N}f(x,\mathbf{x})\,L_n(x,\mathbf{x}),$$
by using Theorem 2.1 we get
$$\lim_{n\to\infty}M_n(\omega)=\sum_{x\in S}\sum_{\mathbf{x}\in S^N}\pi(x)\prod_{i=1}^{N}P(x_i\mid x)\,f(x,\mathbf{x})\qquad \text{a.e.}$$
Let $T$ be a tree graph and let $\{X_t,\ t\in T\}$ be a stochastic process indexed by the tree $T$ with state space $S$. Denote by $\big\{(X_t,X_{\mathbf{o}(t)}),\ t\in T\big\}$ the offspring process derived from $\{X_t,\ t\in T\}$. It is easy to see that
$$P\big(X^{T^{(n)}}=x^{T^{(n)}}\big)=q(x_0)\prod_{t\in T^{(n-1)}}Q\big(\mathbf{x}_{\mathbf{o}(t)}\mid x_t\big), \tag{2.10}$$
where $x_0$ is the state of the root and $\mathbf{x}_{\mathbf{o}(t)}$ is the vector of offspring states of $t$. Let
$$f_n(\omega)=-\frac{1}{|T^{(n)}|}\log P\big(X^{T^{(n)}}\big);$$
$f_n(\omega)$ will be called the entropy density of $X^{T^{(n)}}$. If $\{X_t,\ t\in T\}$ is a tree-indexed Markov chain with state space $S$ defined by Definition 1.1, we have by (2.10)
$$f_n(\omega)=-\frac{1}{|T^{(n)}|}\Big[\log q(X_0)+\sum_{t\in T^{(n-1)}}\log Q\big(X_{\mathbf{o}(t)}\mid X_t\big)\Big].$$
The convergence of $f_n(\omega)$ to a constant in a sense ($L^1$ convergence, convergence in probability, a.e. convergence) is called the Shannon-McMillan theorem, the entropy theorem, or the AEP in information theory. Here, from Corollary 2.2, if we let
$$f(x,\mathbf{x})=-\log Q(\mathbf{x}\mid x)=-\sum_{i=1}^{N}\log P(x_i\mid x), \tag{2.12}$$
we can easily obtain the Shannon-McMillan theorem with a.e. convergence for Markov chain fields on the tree $T$.
Corollary 2.3. Under the conditions of Corollary 2.2, let $f(x,\mathbf{x})$ be defined as in (2.12). Then
$$\lim_{n\to\infty}f_n(\omega)=-\sum_{x\in S}\sum_{y\in S}\pi(x)P(y\mid x)\log P(y\mid x)\qquad \text{a.e.}$$
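An informal numerical illustration of Corollary 2.3 (not part of the paper's argument): the entropy density $f_n(\omega)$ of a simulated configuration should be close to $-\sum_{x,y}\pi(x)P(y\mid x)\log P(y\mid x)$ for a deep tree. The matrix, seed, and depth are arbitrary illustrative choices:

```python
# Sketch: accumulate log P(X^{T^{(n)}}) edge by edge while sampling a
# configuration on T^{(12)} of T_{C,2}, then compare the entropy density
# f_n = -log P(X^{T^{(n)}}) / |T^{(n)}| with the limiting entropy rate.
import math
import random

random.seed(2)
N, n = 2, 12
q = [0.5, 0.5]
P = [[0.7, 0.3], [0.4, 0.6]]
pi = [4 / 7, 3 / 7]              # stationary distribution of P

root = random.choices((0, 1), weights=q)[0]
log_p = math.log(q[root])        # accumulates log P(X^{T^{(n)}})
level = [root]
size = 1                         # counts |T^{(n)}|
for _ in range(n):
    nxt = []
    for x in level:
        for _ in range(N):
            y = random.choices((0, 1), weights=P[x])[0]
            log_p += math.log(P[x][y])
            nxt.append(y)
    size += len(nxt)
    level = nxt

f_n = -log_p / size              # entropy density of the sampled configuration
entropy_rate = -sum(pi[x] * P[x][y] * math.log(P[x][y])
                    for x in (0, 1) for y in (0, 1))
```

For this $P$ the limit is about $0.6375$ nats, and `f_n` lands near it for this seed.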
3. Proof of Theorem 2.1
Let $T$ be a Cayley tree, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a tree-indexed Markov chain with any initial distribution (1.1) and ergodic transition matrix $P$. Let $g(x,\mathbf{x})$ be a function defined on $S\times S^N$. Letting $\lambda$ be a real number and $\mathcal{F}_n=\sigma\big(X^{T^{(n)}}\big)$, we can now define a nonnegative martingale as follows:
$$t_n(\lambda,\omega)=\frac{e^{\lambda\sum_{t\in T^{(n-1)}}g(X_t,X_{\mathbf{o}(t)})}}{\prod_{t\in T^{(n-1)}}E\big[e^{\lambda g(X_t,X_{\mathbf{o}(t)})}\mid X_t\big]}. \tag{3.1}$$
At first we come to prove the above fact.
Theorem 3.1. $\big\{t_n(\lambda,\omega),\ \mathcal{F}_n,\ n\ge 1\big\}$ is a nonnegative martingale.
Proof of Theorem 3.1. Note that, by the Markov property and the property of conditional independence, we have
$$E\Big[e^{\lambda\sum_{t\in L_n}g(X_t,X_{\mathbf{o}(t)})}\ \Big|\ \mathcal{F}_n\Big]=\prod_{t\in L_n}E\big[e^{\lambda g(X_t,X_{\mathbf{o}(t)})}\mid X_t\big]. \tag{3.2}$$
On the other hand, since $T^{(n)}=T^{(n-1)}\cup L_n$, we also have
$$t_{n+1}(\lambda,\omega)=t_n(\lambda,\omega)\cdot\frac{e^{\lambda\sum_{t\in L_n}g(X_t,X_{\mathbf{o}(t)})}}{\prod_{t\in L_n}E\big[e^{\lambda g(X_t,X_{\mathbf{o}(t)})}\mid X_t\big]}. \tag{3.3}$$
Combining (3.2) and (3.3), we get
$$E\big[t_{n+1}(\lambda,\omega)\mid \mathcal{F}_n\big]=t_n(\lambda,\omega)\qquad \text{a.e.}$$
Thus we complete the proof of this theorem.
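Since $E[t_1(\lambda,\cdot)]=1$, Theorem 3.1 implies $E[t_n(\lambda,\cdot)]=1$ for every $n$; this can be checked by exact enumeration of all configurations of $T^{(2)}$ for $T_{C,2}$. The function $g$, the value of $\lambda$, and the numerical $q$ and $P$ below are arbitrary illustrative choices:

```python
# Sketch: verify E[t_2(lambda, .)] = 1 by summing over all 2^7 configurations
# of T^{(2)} for T_{C,2} (root r, children c1 c2, grandchildren g11..g22).
import math
from itertools import product

N = 2
q = [0.5, 0.5]
P = [[0.7, 0.3], [0.4, 0.6]]
lam = 0.3

def g(x, xvec):
    """An arbitrary bounded function on S x S^N."""
    return x + sum(xvec)

def denom(x):
    """E[exp(lam * g(X_t, X_o(t))) | X_t = x]."""
    return sum(math.exp(lam * g(x, xv)) * P[x][xv[0]] * P[x][xv[1]]
               for xv in product((0, 1), repeat=N))

expectation = 0.0
for states in product((0, 1), repeat=7):
    r, c1, c2, g11, g12, g21, g22 = states
    prob = (q[r] * P[r][c1] * P[r][c2]
            * P[c1][g11] * P[c1][g12] * P[c2][g21] * P[c2][g22])
    num = math.exp(lam * (g(r, (c1, c2))
                          + g(c1, (g11, g12)) + g(c2, (g21, g22))))
    expectation += prob * num / (denom(r) * denom(c1) * denom(c2))
```

Summing out the grandchildren given their parents cancels `denom(c1) * denom(c2)`, and summing out the children cancels `denom(r)`, so the total is exactly one, as the martingale property requires.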
Theorem 3.2. Let $\{X_t,\ t\in T\}$ and $g(x,\mathbf{x})$ be defined as above, and denote
$$G_n(\omega)=\sum_{t\in T^{(n-1)}}g\big(X_t,X_{\mathbf{o}(t)}\big). \tag{3.5}$$
Also denote
$$H_n(\omega)=\sum_{t\in T^{(n-1)}}E\big[g\big(X_t,X_{\mathbf{o}(t)}\big)\mid X_t\big]. \tag{3.7}$$
Then
$$\lim_{n\to\infty}\frac{G_n(\omega)-H_n(\omega)}{|T^{(n-1)}|}=0\qquad \text{a.e.} \tag{3.8}$$
Proof. By Theorem 3.1, we know that $\{t_n(\lambda,\omega),\ \mathcal{F}_n,\ n\ge 1\}$ is a nonnegative martingale. According to the Doob martingale convergence theorem, we have
$$\lim_{n\to\infty}t_n(\lambda,\omega)=t(\lambda,\omega)<\infty\qquad \text{a.e.},$$
so that
$$\limsup_{n\to\infty}\frac{\log t_n(\lambda,\omega)}{|T^{(n-1)}|}\le 0\qquad \text{a.e.} \tag{3.10}$$
Combining (3.1), (3.7), and (3.10), we arrive at
$$\limsup_{n\to\infty}\frac{1}{|T^{(n-1)}|}\Big[\lambda G_n(\omega)-\sum_{t\in T^{(n-1)}}\log E\big[e^{\lambda g(X_t,X_{\mathbf{o}(t)})}\mid X_t\big]\Big]\le 0\qquad \text{a.e.}$$
Let $\lambda>0$. Dividing both sides of the above inequality by $\lambda$, we get
$$\limsup_{n\to\infty}\frac{1}{|T^{(n-1)}|}\Big[G_n(\omega)-\sum_{t\in T^{(n-1)}}\frac{\log E\big[e^{\lambda g(X_t,X_{\mathbf{o}(t)})}\mid X_t\big]}{\lambda}\Big]\le 0\qquad \text{a.e.} \tag{3.12}$$
By (3.12) and the inequalities $\log x\le x-1$ $(x>0)$ and $e^x-1-x\le \frac{x^2}{2}e^{|x|}$, it follows that
$$\limsup_{n\to\infty}\frac{G_n(\omega)-H_n(\omega)}{|T^{(n-1)}|}\le \limsup_{n\to\infty}\frac{1}{|T^{(n-1)}|}\sum_{t\in T^{(n-1)}}\frac{E\big[e^{\lambda g(X_t,X_{\mathbf{o}(t)})}\mid X_t\big]-1-\lambda E\big[g(X_t,X_{\mathbf{o}(t)})\mid X_t\big]}{\lambda}\le \frac{\lambda}{2}b^2e^{\lambda b}\qquad \text{a.e.}, \tag{3.13}$$
where $b=\max\{|g(x,\mathbf{x})|:(x,\mathbf{x})\in S\times S^N\}$. Letting $\lambda\to 0^+$ in (3.13), by (3.5) we have
$$\limsup_{n\to\infty}\frac{G_n(\omega)-H_n(\omega)}{|T^{(n-1)}|}\le 0\qquad \text{a.e.} \tag{3.14}$$
Let $\lambda<0$. Similarly to the analysis of the case $\lambda>0$, it follows from (3.12) that
$$\liminf_{n\to\infty}\frac{G_n(\omega)-H_n(\omega)}{|T^{(n-1)}|}\ge \frac{\lambda}{2}b^2e^{-\lambda b}\qquad \text{a.e.}$$
Letting $\lambda\to 0^-$, we can arrive at
$$\liminf_{n\to\infty}\frac{G_n(\omega)-H_n(\omega)}{|T^{(n-1)}|}\ge 0\qquad \text{a.e.} \tag{3.16}$$
Combining (3.14) and (3.16), we obtain (3.8) directly.
Corollary 3.3. Under the conditions of Theorem 3.2, one has
$$\lim_{n\to\infty}\Big[L_n(x,\mathbf{x})-\frac{S_{n-1}(x)}{|T^{(n-1)}|}\prod_{i=1}^{N}P(x_i\mid x)\Big]=0\qquad \text{a.e.},$$
where $S_{n-1}(x)$ is defined as in (2.4).
Proof. For any $(x,\mathbf{x})\in S\times S^N$, let
$$g(y,\mathbf{y})=\mathbf{1}_{\{x\}\times\{\mathbf{x}\}}(y,\mathbf{y}),\qquad (y,\mathbf{y})\in S\times S^N.$$
Then we have
$$\frac{G_n(\omega)}{|T^{(n-1)}|}=L_n(x,\mathbf{x}), \tag{3.19}$$
$$\frac{H_n(\omega)}{|T^{(n-1)}|}=\frac{1}{|T^{(n-1)}|}\sum_{t\in T^{(n-1)}}\mathbf{1}_{\{X_t=x\}}\prod_{i=1}^{N}P(x_i\mid x)=\frac{S_{n-1}(x)}{|T^{(n-1)}|}\prod_{i=1}^{N}P(x_i\mid x). \tag{3.20}$$
Combining (3.19) and (3.20), we can derive our conclusion by Theorem 3.2.
In our proof, we will use Lemma 3.4.
Lemma 3.4 (see [10]). Let $T$ be a Cayley tree, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a tree-indexed Markov chain with any initial distribution (1.1) and ergodic transition matrix $P$. Let $S_n(x)$ be defined as in (2.4). Then one has
$$\lim_{n\to\infty}\frac{S_n(x)}{|T^{(n)}|}=\pi(x)\qquad \text{a.e.},$$
where $\pi$ is the stationary distribution of $P$.
Proof of Theorem 2.1. Combining Corollary 3.3 and Lemma 3.4, we arrive at our conclusion directly.
Acknowledgment
This work was supported by the National Natural Science Foundation of China (Grant no. 11071104).