ISRN Applied Mathematics
Volume 2012, Article ID 536530, 9 pages
http://dx.doi.org/10.5402/2012/536530
Research Article

Strong Law of Large Numbers of the Offspring Empirical Measure for Markov Chains Indexed by Homogeneous Tree

Huilin Huang

College of Mathematics and Information Science, Wenzhou University, Zhejiang 325035, China

Received 24 December 2011; Accepted 26 January 2012

Academic Editors: K. Karamanos and E. Yee

Copyright © 2012 Huilin Huang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We study the limit law of the offspring empirical measure for Markov chains indexed by a homogeneous tree, with almost everywhere convergence. We then prove a Shannon-McMillan theorem with almost everywhere convergence.

1. Introduction

A tree is a graph $G=\{T,E\}$ which is connected and contains no circuits, where $T$ and $E$ denote the vertex set and the edge set, respectively. Given any two vertices $\alpha,\beta\in T$, let $\overline{\alpha\beta}$ be the unique path connecting $\alpha$ and $\beta$. Define the graph distance $d(\alpha,\beta)$ to be the number of edges contained in the path $\overline{\alpha\beta}$.

Let $G$ be an infinite tree with root $0$. The set of all vertices with distance $n$ from the root is called the $n$th generation of $T$, which is denoted by $L_n$. We denote by $T^{(n)}$ the union of the first $n$ generations of $T$. For each vertex $t$, there is a unique path from $0$ to $t$, and we write $|t|$ for the number of edges on this path. We denote the first predecessor of $t$ by $1_t$, the second predecessor of $t$ by $2_t$, and the $n$th predecessor of $t$ by $n_t$. The degree of a vertex is defined to be the number of its neighbors. If the degree sequence of a tree is uniformly bounded, we call the tree uniformly bounded. Let $d$ be a positive integer. If every vertex of the tree has $d$ neighbors in the next generation, we call it a Cayley tree, which is denoted by $T_{C,d}$. Thus on a Cayley tree every vertex has degree $d+1$, except the root, which has degree $d$. For any two vertices $s$ and $t$ of the tree $T$, write $s\leq t$ if $s$ is on the unique path from the root $0$ to $t$. We denote by $s\wedge t$ the vertex farthest from $0$ satisfying $s\wedge t\leq s$ and $s\wedge t\leq t$. We write $X^A=\{X_t,\ t\in A\}$ and denote by $|A|$ the number of vertices of $A$.

Definition 1.1 (see [1]). Let $G$ be an infinite Cayley tree $T_{C,d}$, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a collection of $S$-valued random variables defined on the probability space $(\Omega,\mathcal{F},\mathbf{P})$. Let
$$p=\{p(x),\ x\in S\} \tag{1.1}$$
be a distribution on $S$ and
$$P=\big(P(y\mid x)\big),\quad x,y\in S, \tag{1.2}$$
be a stochastic matrix on $S^2$. If, for any vertex $t$,
$$\mathbf{P}\big(X_t=y\mid X_{1_t}=x\ \text{and}\ X_s\ \text{for}\ t\wedge s\leq 1_t\big)=\mathbf{P}\big(X_t=y\mid X_{1_t}=x\big)=P(y\mid x)\quad\forall x,y\in S,$$
$$\mathbf{P}\big(X_0=x\big)=p(x)\quad\forall x\in S, \tag{1.3}$$
then $\{X_t,\ t\in T\}$ will be called an $S$-valued Markov chain indexed by the infinite tree $G$ with initial distribution (1.1) and transition matrix (1.2), or a tree-indexed Markov chain with state space $S$. Furthermore, if the transition matrix $P$ is ergodic, then we call $\{X_t,\ t\in T\}$ an ergodic Markov chain indexed by the infinite tree $T$.

The above definition is an extension of the definitions of Markov chain fields on trees (see [1, page 456] and [2]). In this paper, we always suppose that the tree-indexed Markov chain is ergodic.

The subject of tree-indexed processes is rather young. Benjamini and Peres [3] introduced the notion of tree-indexed Markov chains and studied recurrence and ray-recurrence for them. Berger and Ye [4] studied the existence of the entropy rate for some stationary random fields on a homogeneous tree. Ye and Berger (see [5, 6]), by using Pemantle's result [7] and a combinatorial approach, studied the Shannon-McMillan theorem with convergence in probability for a PPG-invariant and ergodic random field on a homogeneous tree. Yang and Liu [8] studied a strong law of large numbers for the frequency of occurrence of states for Markov chain fields on a homogeneous tree (a particular case of tree-indexed Markov chains and PPG-invariant random fields). Takacs (see [9]) studied the strong law of large numbers for univariate functions of finite Markov chains indexed by an infinite tree with uniformly bounded degree. Subsequently, Huang and Yang (see [10]) studied the Shannon-McMillan theorem for finite homogeneous Markov chains indexed by a uniformly bounded infinite tree. Dembo et al. (see [11]) showed that the large deviation principle holds for the empirical offspring measure of Markov chains on random trees and gave the explicit rate function, which is defined in terms of specific relative entropy (see [12]) and Cramér's rate function.

In this paper, we study the strong law of large numbers for the offspring empirical measure and the Shannon-McMillan theorem with a.e. convergence for Markov chain fields on the tree $T_{C,d}$, by using a method similar to that of [10].

2. Statements of the Results

For every vertex $t\in T$, the random vector of offspring states is defined as
$$\mathbf{C}_t=\big(X_1(t),X_2(t),\dots,X_d(t)\big)\in S^d, \tag{2.1}$$
where $X_i(t)$ denotes the state of the $i$th offspring of $t$.

Let $\mathbf{c}=(c_1,c_2,\dots,c_d)$ be a $d$-dimensional vector in $S^d$.

Now we also let the distribution (1.1) serve as the initial distribution. Define the offspring transition kernel $Q$ from $S$ to $S^d$. We define the law $\mathbf{P}$ of a tree-indexed process $X$ by the following rules.

(i) The state of the root random variable $X_0$ is determined by the distribution (1.1).

(ii) For every vertex $t\in T$ with state $x$, the offspring states are given, independently of everything else, by the offspring law $Q(\cdot\mid x)$ on $S^d$, where
$$Q(\mathbf{c}\mid x)=Q\big(\mathbf{C}_t=(c_1,c_2,\dots,c_d)\mid X_t=x\big)=\prod_{i=1}^{d}P\big(c_i\mid x\big). \tag{2.2}$$
Here the last equality holds because of the property of conditional independence.
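Rule (ii) is straightforward to simulate. The sketch below samples $\mathbf{C}_t$ and evaluates $Q(\mathbf{c}\mid x)$ for a hypothetical two-state chain with $d=2$; the matrix $P$ is an illustrative choice, not taken from the paper.

```python
import random

# Hypothetical example: S = {0, 1}, d = 2 offspring per vertex.
S = [0, 1]
d = 2
P = {0: [0.7, 0.3],   # row P(. | 0)
     1: [0.4, 0.6]}   # row P(. | 1)

def Q(c, x):
    """Offspring law (2.2): Q(c | x) = prod_i P(c_i | x)."""
    q = 1.0
    for ci in c:
        q *= P[x][ci]
    return q

def sample_offspring(x, rng=random):
    """Draw C_t ~ Q(. | x): by conditional independence, each of the d
    offspring states is an independent draw from the row P(. | x)."""
    return tuple(rng.choices(S, weights=P[x], k=1)[0] for _ in range(d))
```

Since each factor row sums to one, $Q(\cdot\mid x)$ is automatically a probability distribution on $S^d$.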

For every finite $n\in\mathbf{N}$, let $\{X_t,\ t\in T\}$ be an $S$-valued Markov chain indexed by the infinite tree $T$. Now we define the offspring empirical measure
$$L_n(x,\mathbf{c})=\frac{\sum_{t\in T^{(n)}}\mathbf{I}\big\{(X_t,\mathbf{C}_t)=(x,\mathbf{c})\big\}}{\big|T^{(n)}\big|},\quad (x,\mathbf{c})\in S\times S^d. \tag{2.3}$$
For any state $x\in S$, $S_n(x)$ is the empirical measure, which is defined as follows:
$$S_n(x)=\frac{\sum_{t\in T^{(n)}}\mathbf{I}\{X_t=x\}}{\big|T^{(n)}\big|},\quad x\in S, \tag{2.4}$$
where $\mathbf{I}\{\cdot\}$ denotes the indicator function as usual and $\mathbf{c}=(c_1,c_2,\dots,c_d)$.

In the rest of this paper, we consider the limit law of the random sequence $\{L_n(x,\mathbf{c}),\ n\geq 1\}$ defined above.

Theorem 2.1. Let $G$ be a Cayley tree $T_{C,d}$, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a tree-indexed Markov chain with initial distribution (1.1) and ergodic transition matrix $P$. Let $L_n(x,\mathbf{c})$ be defined as in (2.3). Then one has
$$\lim_{n\to\infty}L_n(x,\mathbf{c})=\pi(x)\,Q(\mathbf{c}\mid x)\quad\text{a.e.}, \tag{2.5}$$
where $\pi$ is the stationary distribution of the ergodic matrix $P$, that is, $\pi=\pi P$ and $\sum_{x\in S}\pi(x)=1$.
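Theorem 2.1 lends itself to a numerical check. The sketch below grows the first generations of $T_{C,2}$ for a hypothetical two-state transition matrix (the matrix, seed, and depth are illustrative choices, not from the paper) and compares $L_n(x,\mathbf{c})$ with $\pi(x)Q(\mathbf{c}\mid x)$.

```python
import random
from itertools import product

random.seed(2012)
d, n = 2, 14
P = [[0.7, 0.3], [0.4, 0.6]]          # ergodic transition matrix on S = {0, 1}
pi = [4 / 7, 3 / 7]                   # stationary distribution: pi = pi P

def Q(c, x):                          # offspring law (2.2)
    q = 1.0
    for ci in c:
        q *= P[x][ci]
    return q

# Grow generations 0..n of the tree; for each vertex t in T^(n), record
# the pair (X_t, C_t) that the empirical measure (2.3) counts.
counts, total = {}, 0
level = [0]                           # states of the current generation; X_0 = 0
for _ in range(n + 1):
    next_level = []
    for x in level:
        c = tuple(random.choices([0, 1], weights=P[x])[0] for _ in range(d))
        counts[(x, c)] = counts.get((x, c), 0) + 1
        next_level.extend(c)
        total += 1
    level = next_level                # total = |T^(n)| = 2^(n+1) - 1

for x in range(2):
    for c in product(range(2), repeat=d):
        Ln = counts.get((x, c), 0) / total
        print(x, c, round(Ln, 3), round(pi[x] * Q(c, x), 3))  # close for large n
```

With $d=2$ the tree grows geometrically, so even $n=14$ already gives $|T^{(n)}|=32767$ vertices and the empirical and limiting measures agree to a few decimal places.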

Corollary 2.2. Under the conditions of Theorem 2.1, suppose that $f(x,\mathbf{c})$ is any function defined on $S\times S^d$. Denote
$$H_n(\omega)=\sum_{t\in T^{(n)}}f\big(X_t,\mathbf{C}_t\big). \tag{2.6}$$
Then
$$\lim_{n\to\infty}\frac{H_n(\omega)}{\big|T^{(n)}\big|}=\sum_{(x,\mathbf{c})\in S\times S^d}\pi(x)\,Q(\mathbf{c}\mid x)\,f(x,\mathbf{c})\quad\text{a.e.} \tag{2.7}$$

Proof. Noting that
$$H_n(\omega)=\sum_{t\in T^{(n)}}f\big(X_t,\mathbf{C}_t\big)=\sum_{(x,\mathbf{c})\in S\times S^d}\sum_{t\in T^{(n)}}\mathbf{I}\big\{(X_t,\mathbf{C}_t)=(x,\mathbf{c})\big\}f(x,\mathbf{c}), \tag{2.8}$$
by Theorem 2.1 we get
$$\lim_{n\to\infty}\frac{H_n(\omega)}{\big|T^{(n)}\big|}=\sum_{(x,\mathbf{c})\in S\times S^d}f(x,\mathbf{c})\lim_{n\to\infty}L_n(x,\mathbf{c})=\sum_{(x,\mathbf{c})\in S\times S^d}\pi(x)\,Q(\mathbf{c}\mid x)\,f(x,\mathbf{c})\quad\text{a.e.} \tag{2.9}$$
Let $G=\{T,E\}$ be a tree graph and $(X_t)_{t\in T}$ a stochastic process indexed by the tree $G$ with state space $S$. Denote by $Y_t=(X_t,\mathbf{C}_t)$ the offspring process derived from $(X_t)_{t\in T}$. It is easy to see that
$$\mathbf{P}\big(y^{T^{(n)}}\big)=\mathbf{P}\big(Y^{T^{(n)}}=y^{T^{(n)}}\big)=p\big(x_0\big)\prod_{t\in T^{(n+1)}\setminus\{0\}}P\big(x_t\mid x_{1_t}\big)=p\big(x_0\big)\prod_{t\in T^{(n)}}Q\big(\mathbf{c}_t\mid x_t\big), \tag{2.10}$$
where $\mathbf{c}_t\in S^d$. Let
$$f_n(\omega)=-\frac{1}{\big|T^{(n)}\big|}\ln\mathbf{P}\big(Y^{T^{(n)}}\big). \tag{2.11}$$
$f_n(\omega)$ will be called the entropy density of $Y^{T^{(n)}}$. If $(X_t)_{t\in T}$ is a tree-indexed Markov chain with state space $S$ defined by Definition 1.1, we have by (2.10)
$$f_n(\omega)=-\frac{1}{\big|T^{(n)}\big|}\Big[\ln p\big(X_0\big)+\sum_{t\in T^{(n)}}\ln Q\big(\mathbf{C}_t\mid X_t\big)\Big]. \tag{2.12}$$
The convergence of $f_n(\omega)$ to a constant in some sense ($L^1$ convergence, convergence in probability, a.e. convergence) is called the Shannon-McMillan theorem, the entropy theorem, or the AEP in information theory. Here, from Corollary 2.2, if we let
$$f(x,\mathbf{c})=-\ln Q(\mathbf{c}\mid x), \tag{2.13}$$
we can easily obtain the Shannon-McMillan theorem with a.e. convergence for Markov chain fields on the tree $T_{C,d}$.

Corollary 2.3. Under the conditions of Corollary 2.2, let $f_n(\omega)$ be defined as in (2.12). Then
$$\lim_{n\to\infty}f_n(\omega)=-\sum_{(x,\mathbf{c})\in S\times S^d}\pi(x)\,Q(\mathbf{c}\mid x)\ln Q(\mathbf{c}\mid x)\quad\text{a.e.} \tag{2.14}$$
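Since $Q$ factorizes as in (2.2), the limit in (2.14) equals $d$ times the classical entropy rate $-\sum_{x}\pi(x)\sum_{y}P(y\mid x)\ln P(y\mid x)$ of the underlying chain. A small numerical check, again with a hypothetical two-state matrix (illustrative values only):

```python
import math
from itertools import product

d = 2
P = [[0.7, 0.3], [0.4, 0.6]]
pi = [4 / 7, 3 / 7]                   # stationary distribution: pi = pi P

def Q(c, x):                          # offspring law (2.2)
    q = 1.0
    for ci in c:
        q *= P[x][ci]
    return q

# Right-hand side of (2.14)
H_offspring = -sum(pi[x] * Q(c, x) * math.log(Q(c, x))
                   for x in range(2) for c in product(range(2), repeat=d))

# d * ( - sum_x pi(x) sum_y P(y|x) ln P(y|x) )
H_chain = -sum(pi[x] * P[x][y] * math.log(P[x][y])
               for x in range(2) for y in range(2))

print(H_offspring, d * H_chain)       # the two quantities coincide
```

The identity follows by expanding $\ln Q(\mathbf{c}\mid x)=\sum_{i=1}^{d}\ln P(c_i\mid x)$ and summing out the remaining coordinates.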

3. Proof of Theorem 2.1

Let $T_{C,d}$ be a Cayley tree, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a tree-indexed Markov chain with arbitrary initial distribution (1.1) and ergodic transition matrix $P$. Let $g_t(X_t,\mathbf{C}_t)$ be functions defined on $S\times S^d$. Letting $\lambda$ be a real number, $L_0=\{0\}$, and $\mathcal{F}_n=\sigma\big(X^{T^{(n)}}\big)$, we can define a nonnegative martingale as follows:
$$t_n(\lambda,\omega)=\frac{e^{\lambda\sum_{t\in T^{(n-1)}}g_t(X_t,\mathbf{C}_t)}}{\prod_{t\in T^{(n-1)}}E\big[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\big]}. \tag{3.1}$$
We first prove this fact.

Theorem 3.1. $\{t_n(\lambda,\omega),\ \mathcal{F}_n,\ n\geq 1\}$ is a nonnegative martingale.

Proof of Theorem 3.1. Note that, by the Markov property and the property of conditional independence, we have
$$E\Big[e^{\lambda\sum_{t\in L_n}g_t(X_t,\mathbf{C}_t)}\,\Big|\,\mathcal{F}_n\Big]=\sum_{x^{L_{n+1}}}e^{\lambda\sum_{t\in L_n}g_t(X_t,\mathbf{c}_t)}\,\mathbf{P}\big(X^{L_{n+1}}=x^{L_{n+1}}\mid X^{T^{(n)}}\big)=\sum_{x^{L_{n+1}}}\prod_{t\in L_n}e^{\lambda g_t(X_t,\mathbf{c}_t)}Q\big(\mathbf{c}_t\mid X_t\big)=\prod_{t\in L_n}\sum_{\mathbf{c}_t\in S^d}e^{\lambda g_t(X_t,\mathbf{c}_t)}Q\big(\mathbf{c}_t\mid X_t\big)=\prod_{t\in L_n}E\big[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\big]\quad\text{a.e.} \tag{3.2}$$
On the other hand, we also have
$$t_{n+1}(\lambda,\omega)=t_n(\lambda,\omega)\,\frac{e^{\lambda\sum_{t\in L_n}g_t(X_t,\mathbf{C}_t)}}{\prod_{t\in L_n}E\big[e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\big]}. \tag{3.3}$$
Combining (3.2) and (3.3), we get
$$E\big[t_{n+1}(\lambda,\omega)\mid\mathcal{F}_n\big]=t_n(\lambda,\omega)\quad\text{a.e.} \tag{3.4}$$
Thus we complete the proof of this theorem.

Theorem 3.2. Let $(X_t)_{t\in T}$ and $\{g_t(x,\mathbf{c}),\ t\in T\}$ be defined as above, and denote
$$G_n(\omega)=\sum_{t\in T^{(n)}}E\big[g_t\big(X_t,\mathbf{C}_t\big)\mid X_t\big]. \tag{3.5}$$
Let $\alpha>0$ and denote
$$D(\alpha)=\Big\{\omega:\ \limsup_{n\to\infty}\frac{1}{\big|T^{(n)}\big|}\sum_{t\in T^{(n)}}E\Big[g_t^{2}\big(X_t,\mathbf{C}_t\big)e^{\alpha|g_t(X_t,\mathbf{C}_t)|}\mid X_t\Big]=M(\omega)<\infty\Big\}, \tag{3.6}$$
$$H_n(\omega)=\sum_{t\in T^{(n)}}g_t\big(X_t,\mathbf{C}_t\big). \tag{3.7}$$
Then
$$\lim_{n\to\infty}\frac{H_n(\omega)-G_n(\omega)}{\big|T^{(n)}\big|}=0\quad\text{a.e. on }D(\alpha). \tag{3.8}$$

Proof. By Theorem 3.1, $\{t_n(\lambda,\omega),\ \mathcal{F}_n,\ n\geq 1\}$ is a nonnegative martingale. According to the Doob martingale convergence theorem, we have
$$\lim_{n\to\infty}t_n(\lambda,\omega)=t_\infty(\lambda,\omega)<\infty\quad\text{a.e.}, \tag{3.9}$$
so that
$$\limsup_{n\to\infty}\frac{\ln t_{n+1}(\lambda,\omega)}{\big|T^{(n)}\big|}\leq 0\quad\text{a.e.} \tag{3.10}$$
Combining (3.1), (3.7), and (3.10), we arrive at
$$\limsup_{n\to\infty}\frac{1}{\big|T^{(n)}\big|}\Big[\lambda H_n(\omega)-\sum_{t\in T^{(n)}}\ln E\big(e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\big)\Big]\leq 0\quad\text{a.e.} \tag{3.11}$$
Let $\lambda>0$. Dividing both sides of the above inequality by $\lambda$, we get
$$\limsup_{n\to\infty}\frac{1}{\big|T^{(n)}\big|}\Big[H_n(\omega)-\sum_{t\in T^{(n)}}\frac{\ln E\big(e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\big)}{\lambda}\Big]\leq 0\quad\text{a.e.} \tag{3.12}$$
By (3.12) and the inequalities $\ln x\leq x-1\ (x>0)$ and $0\leq e^{x}-1-x\leq\frac{1}{2}x^{2}e^{|x|}$, for $0<\lambda\leq\alpha$ it follows that
$$\begin{aligned}
&\limsup_{n\to\infty}\frac{1}{\big|T^{(n)}\big|}\Big[H_n(\omega)-\sum_{t\in T^{(n)}}E\big(g_t\big(X_t,\mathbf{C}_t\big)\mid X_t\big)\Big]\\
&\quad\leq\limsup_{n\to\infty}\frac{1}{\big|T^{(n)}\big|}\sum_{t\in T^{(n)}}\Big[\frac{\ln E\big(e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\big)}{\lambda}-E\big(g_t\big(X_t,\mathbf{C}_t\big)\mid X_t\big)\Big]\\
&\quad\leq\limsup_{n\to\infty}\frac{1}{\big|T^{(n)}\big|}\sum_{t\in T^{(n)}}\Big[\frac{E\big(e^{\lambda g_t(X_t,\mathbf{C}_t)}\mid X_t\big)-1}{\lambda}-E\big(g_t\big(X_t,\mathbf{C}_t\big)\mid X_t\big)\Big]\\
&\quad\leq\frac{\lambda}{2}\limsup_{n\to\infty}\frac{1}{\big|T^{(n)}\big|}\sum_{t\in T^{(n)}}E\big(g_t^{2}\big(X_t,\mathbf{C}_t\big)e^{\lambda|g_t(X_t,\mathbf{C}_t)|}\mid X_t\big)\\
&\quad\leq\frac{\lambda}{2}\limsup_{n\to\infty}\frac{1}{\big|T^{(n)}\big|}\sum_{t\in T^{(n)}}E\big(g_t^{2}\big(X_t,\mathbf{C}_t\big)e^{\alpha|g_t(X_t,\mathbf{C}_t)|}\mid X_t\big)\leq\frac{\lambda}{2}M(\omega)\quad\text{a.e. on }D(\alpha). \tag{3.13}
\end{aligned}$$
Letting $\lambda\to 0^{+}$ in (3.13), by (3.5) we have
$$\limsup_{n\to\infty}\frac{H_n(\omega)-G_n(\omega)}{\big|T^{(n)}\big|}\leq 0\quad\text{a.e. on }D(\alpha). \tag{3.14}$$
Let $-\alpha\leq\lambda<0$. Similarly to the analysis of the case $0<\lambda\leq\alpha$, it follows from (3.12) that
$$\liminf_{n\to\infty}\frac{H_n(\omega)-G_n(\omega)}{\big|T^{(n)}\big|}\geq\frac{\lambda}{2}M(\omega)\quad\text{a.e. on }D(\alpha). \tag{3.15}$$
Letting $\lambda\to 0^{-}$, we arrive at
$$\liminf_{n\to\infty}\frac{H_n(\omega)-G_n(\omega)}{\big|T^{(n)}\big|}\geq 0\quad\text{a.e. on }D(\alpha). \tag{3.16}$$
Combining (3.14) and (3.16), we obtain (3.8) directly.
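The two elementary inequalities used in (3.13) are standard; for completeness, the second follows from the Taylor expansion of $e^x$:

```latex
% Claim: 0 <= e^x - 1 - x <= (1/2) x^2 e^{|x|} for all real x.
% Lower bound: e^x >= 1 + x, by convexity of e^x (tangent line at x = 0).
% Upper bound: using k! >= 2 (k-2)! for k >= 2,
\left|e^{x}-1-x\right|
  = \Bigl|\sum_{k\ge 2}\frac{x^{k}}{k!}\Bigr|
  \le \sum_{k\ge 2}\frac{|x|^{k}}{k!}
  \le \frac{x^{2}}{2}\sum_{k\ge 2}\frac{|x|^{k-2}}{(k-2)!}
  = \frac{x^{2}}{2}\,e^{|x|}.
```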

Corollary 3.3. Under the conditions of Theorem 3.2, one has
$$\lim_{n\to\infty}\big[L_n(x,\mathbf{c})-S_n(x)\,Q(\mathbf{c}\mid x)\big]=0\quad\text{a.e.} \tag{3.17}$$

Proof. For any $t\in T$, let
$$g_t\big(X_t,\mathbf{C}_t\big)=\mathbf{I}\big\{(X_t,\mathbf{C}_t)=(x,\mathbf{c})\big\}=\mathbf{I}\{X_t=x\}\,\mathbf{I}\{\mathbf{C}_t=\mathbf{c}\}. \tag{3.18}$$
Then we have
$$G_n(\omega)=\sum_{t\in T^{(n)}}E\big[g_t\big(X_t,\mathbf{C}_t\big)\mid X_t\big]=\sum_{t\in T^{(n)}}\sum_{\mathbf{c}_t\in S^d}\mathbf{I}\{X_t=x\}\,\mathbf{I}\{\mathbf{c}_t=\mathbf{c}\}\,Q\big(\mathbf{c}_t\mid X_t\big)=\sum_{t\in T^{(n)}}\mathbf{I}\{X_t=x\}\,Q(\mathbf{c}\mid x)=\big|T^{(n)}\big|S_n(x)\,Q(\mathbf{c}\mid x), \tag{3.19}$$
$$H_n(\omega)=\sum_{t\in T^{(n)}}g_t\big(X_t,\mathbf{C}_t\big)=\sum_{t\in T^{(n)}}\mathbf{I}\big\{(X_t,\mathbf{C}_t)=(x,\mathbf{c})\big\}=\big|T^{(n)}\big|L_n(x,\mathbf{c}). \tag{3.20}$$
Combining (3.19) and (3.20), we can derive our conclusion by Theorem 3.2.
In our proof of Theorem 2.1, we will also use the following lemma.

Lemma 3.4 (see [10]). Let $T_{C,d}$ be a Cayley tree, $S$ a finite state space, and $\{X_t,\ t\in T\}$ a tree-indexed Markov chain with arbitrary initial distribution (1.1) and ergodic transition matrix $P$. Let $S_n(x)$ be defined as in (2.4). Then one has
$$\lim_{n\to\infty}S_n(x)=\pi(x)\quad\text{a.e.} \tag{3.21}$$

Proof of Theorem 2.1. Combining Corollary 3.3 and Lemma 3.4, we arrive at our conclusion directly.

Acknowledgment

This work was supported by National Natural Science Foundation of China (Grant no. 11071104).

References

  1. J. G. Kemeny, J. L. Snell, and A. W. Knapp, Denumerable Markov Chains, Springer, New York, NY, USA, 2nd edition, 1976.
  2. F. Spitzer, “Markov random fields on an infinite tree,” Annals of Probability, vol. 3, no. 3, pp. 387–398, 1975.
  3. I. Benjamini and Y. Peres, “Markov chains indexed by trees,” Annals of Probability, vol. 22, no. 1, pp. 219–243, 1994.
  4. T. Berger and Z. X. Ye, “Entropic aspects of random fields on trees,” IEEE Transactions on Information Theory, vol. 36, no. 5, pp. 1006–1018, 1990.
  5. Z. Ye and T. Berger, “Ergodicity, regularity and asymptotic equipartition property of random fields on trees,” Journal of Combinatorics, Information & System Sciences, vol. 21, no. 2, pp. 157–184, 1996.
  6. Z. Ye and T. Berger, Information Measures for Discrete Random Fields, Science Press, Beijing, China, 1998.
  7. R. Pemantle, “Automorphism invariant measures on trees,” Annals of Probability, vol. 20, no. 3, pp. 1549–1566, 1992.
  8. W. Yang and W. Liu, “Strong law of large numbers for Markov chains field on a Bethe tree,” Statistics & Probability Letters, vol. 49, no. 3, pp. 245–250, 2000.
  9. C. Takacs, “Strong law of large numbers for branching Markov chains,” Markov Processes and Related Fields, vol. 8, no. 1, pp. 107–116, 2001.
  10. H. Huang and W. Yang, “Strong law of large numbers for Markov chains indexed by an infinite tree with uniformly bounded degree,” Science in China A, vol. 51, no. 2, pp. 195–202, 2008.
  11. A. Dembo, P. Mörters, and S. Sheffield, “Large deviations of Markov chains indexed by random trees,” Annales de l'Institut Henri Poincaré B, vol. 41, no. 6, pp. 971–996, 2005.
  12. A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, vol. 38 of Applications of Mathematics, Springer, New York, NY, USA, 2nd edition, 1998.