Abstract

We study the limit law of the offspring empirical measure for Markov chains indexed by a homogeneous tree, with almost-everywhere convergence. We then prove a Shannon-McMillan theorem, also with almost-everywhere convergence.

1. Introduction

A tree is a graph 𝐺={𝑇,𝐸} which is connected and contains no circuits, where 𝑇 and 𝐸 denote the vertex set and the edge set, respectively. Given any two vertices π›Όβ‰ π›½βˆˆπ‘‡, let 𝛼𝛽 be the unique path connecting 𝛼 and 𝛽. Define the graph distance 𝑑(𝛼,𝛽) to be the number of edges contained in the path 𝛼𝛽.

Let $G$ be an infinite tree with root $0$. The set of all vertices at distance $n$ from the root is called the $n$th generation of $T$ and is denoted by $L_n$. We denote by $T^{(n)}$ the union of the first $n$ generations of $T$. For each vertex $t$ there is a unique path from $0$ to $t$, and we write $|t|$ for the number of edges on this path. We denote the first predecessor of $t$ by $1_t$, the second predecessor by $2_t$, and the $n$th predecessor by $n_t$. The degree of a vertex is the number of its neighbors. If the degree sequence of a tree is uniformly bounded, we call it a uniformly bounded tree. Let $d$ be a positive integer. If every vertex of the tree has $d$ neighbors in the next generation, we call it a Cayley tree, denoted by $T_{C,d}$. Thus on a Cayley tree every vertex has degree $d+1$, except the root, which has degree $d$. For any two vertices $s$ and $t$ of the tree $T$, write $s \le t$ if $s$ is on the unique path from the root $0$ to $t$, and denote by $s \wedge t$ the vertex farthest from $0$ satisfying $s \wedge t \le s$ and $s \wedge t \le t$. We write $X^A = \{X_t, t \in A\}$ and denote by $|A|$ the number of vertices of $A$.

Definition 1.1 (see [1]). Let $G$ be an infinite Cayley tree $T_{C,d}$, $S$ a finite state space, and $\{X_t, t \in T\}$ a collection of $S$-valued random variables defined on the probability space $(\Omega, \mathcal{F}, \mathbf{P})$. Let
$$p = \{p(x), x \in S\} \tag{1.1}$$
be a distribution on $S$ and
$$P = (P(y \mid x)), \quad x, y \in S, \tag{1.2}$$
a stochastic matrix on $S^2$. If for any vertex $t$,
$$\mathbf{P}\left(X_t = y \mid X_{1_t} = x \text{ and } X_s \text{ for } t \wedge s \le 1_t\right) = \mathbf{P}\left(X_t = y \mid X_{1_t} = x\right) = P(y \mid x) \quad \forall x, y \in S,$$
$$\mathbf{P}\left(X_0 = x\right) = p(x) \quad \forall x \in S, \tag{1.3}$$
then $\{X_t, t \in T\}$ is called an $S$-valued Markov chain indexed by the infinite tree $G$ with initial distribution (1.1) and transition matrix (1.2), or a tree-indexed Markov chain with state space $S$. Furthermore, if the transition matrix $P$ is ergodic, we call $\{X_t, t \in T\}$ an ergodic Markov chain indexed by the infinite tree $T$.

The above definition extends the definitions of Markov chain fields on trees (see [1, page 456] and [2]). Throughout this paper, we assume that the tree-indexed Markov chain is ergodic.

The subject of tree-indexed processes is rather young. Benjamini and Peres [3] introduced the notion of tree-indexed Markov chains and studied their recurrence and ray-recurrence. Berger and Ye [4] studied the existence of the entropy rate for some stationary random fields on a homogeneous tree. Ye and Berger (see [5, 6]), using Pemantle's result [7] and a combinatorial approach, studied the Shannon-McMillan theorem with convergence in probability for a PPG-invariant and ergodic random field on a homogeneous tree. Yang and Liu [8] studied a strong law of large numbers for the frequency of occurrence of states for Markov chain fields on a homogeneous tree (a particular case of tree-indexed Markov chains and PPG-invariant random fields). Takacs (see [9]) studied the strong law of large numbers for univariate functions of finite Markov chains indexed by an infinite tree with uniformly bounded degree. Subsequently, Huang and Yang (see [10]) studied the Shannon-McMillan theorem for finite homogeneous Markov chains indexed by a uniformly bounded infinite tree. Dembo et al. (see [11]) showed that the large deviation principle holds for the empirical offspring measure of Markov chains on random trees and gave the explicit rate function, defined in terms of specific relative entropy (see [12]) and Cramér's rate function.

In this paper, we study the strong law of large numbers for the offspring empirical measure and the Shannon-McMillan theorem with a.e. convergence for Markov chain fields on tree 𝑇𝐢,𝑑 by using a method similar to that of [10].

2. Statements of the Results

For every vertex $t \in T$, the random vector of offspring states is defined as
$$\mathbf{C}_t = \left(X_{1(t)}, X_{2(t)}, \dots, X_{d(t)}\right) \in S^d. \tag{2.1}$$

Let $\mathbf{c} = (c_1, c_2, \dots, c_d)$ be a $d$-dimensional vector in $S^d$.

Now we also let the distribution (1.1) serve as the initial distribution, and define the offspring transition kernel $Q$ from $S$ to $S^d$. We define the law $\mathbf{P}$ of a tree-indexed process $X$ by the following rules.
(i) The state of the root random variable $X_0$ is determined by the distribution (1.1).
(ii) For every vertex $t \in T$ with state $x$, the offspring states are given, independently of everything else, by the offspring law $Q(\cdot \mid x)$ on $S^d$, where
$$Q(\mathbf{c} \mid x) := Q\left(\mathbf{C}_t = \left(c_1, c_2, \dots, c_d\right) \mid X_t = x\right) = \prod_{i=1}^{d} P\left(c_i \mid x\right). \tag{2.2}$$
Here the last equation holds because of the property of conditional independence.
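The two rules above can be sketched as a sampler. The following is a minimal illustration, assuming a hypothetical two-state space $S=\{0,1\}$, branching number $d=2$, and made-up distributions $p$ and $P$ (none of these numbers come from the paper); rule (ii) draws each of the $d$ offspring states independently from $P(\cdot \mid x)$, so that $Q(\mathbf{c}\mid x)=\prod_i P(c_i\mid x)$.

```python
import random

# Hypothetical parameters for illustration only (not from the paper):
S = [0, 1]
d = 2
p = [0.5, 0.5]                      # initial distribution (1.1)
P = {0: [0.7, 0.3], 1: [0.4, 0.6]}  # P[x][y] = P(y | x), ergodic

def sample_tree(n, rng):
    """Sample {X_t} on the first n generations of the Cayley tree T_{C,d}.

    Returns a list of generations; generation k is a list of d**k states.
    Rule (ii): given X_t = x, the d offspring states are drawn
    independently from P(. | x), which realizes Q(c | x) = prod_i P(c_i | x).
    """
    root = rng.choices(S, weights=p)[0]   # rule (i)
    gens = [[root]]
    for _ in range(n):
        nxt = []
        for x in gens[-1]:
            nxt.extend(rng.choices(S, weights=P[x], k=d))
        gens.append(nxt)
    return gens

rng = random.Random(0)
gens = sample_tree(3, rng)
print([len(g) for g in gens])  # [1, 2, 4, 8]
```

Generation $k$ has $d^k$ vertices, so the generation sizes above are deterministic even though the states are random.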

For every finite $n \in \mathbf{N}$, let $\{X_t, t \in T\}$ be an $S$-valued Markov chain indexed by an infinite tree $T$. We define the offspring empirical measure
$$L_n(x, \mathbf{c}) = \frac{\sum_{t \in T^{(n)}} \mathbf{I}\left\{\left(X_t, \mathbf{C}_t\right) = (x, \mathbf{c})\right\}}{\left|T^{(n)}\right|} \quad \forall (x, \mathbf{c}) \in S \times S^d. \tag{2.3}$$
For any state $x \in S$, the empirical measure $S_n(x)$ is defined as follows:
$$S_n(x) = \frac{\sum_{t \in T^{(n)}} \mathbf{I}\left\{X_t = x\right\}}{\left|T^{(n)}\right|} \quad \forall x \in S, \tag{2.4}$$
where $\mathbf{I}\{\cdot\}$ denotes the indicator function as usual and $\mathbf{c} = (c_1, c_2, \dots, c_d)$.

In the rest of this paper, we consider the limit law of the random sequence $\{L_n(x, \mathbf{c}), n \ge 1\}$ defined above.

Theorem 2.1. Let $G$ be a Cayley tree $T_{C,d}$, $S$ a finite state space, and $\{X_t, t \in T\}$ a tree-indexed Markov chain with initial distribution (1.1) and ergodic transition matrix $P$. Let $L_n(x, \mathbf{c})$ be defined as in (2.3). Then
$$\lim_{n \to \infty} L_n(x, \mathbf{c}) = \pi(x) Q(\mathbf{c} \mid x) \quad \text{a.e.}, \tag{2.5}$$
where $\pi$ is the stationary distribution of the ergodic matrix $P$, that is, $\pi = \pi P$ and $\sum_{x \in S} \pi(x) = 1$.
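As a numerical sketch of Theorem 2.1 (using a hypothetical two-state chain with made-up transition probabilities, whose stationary distribution $\pi = (4/7, 3/7)$ solves $\pi = \pi P$ by hand), one can simulate the chain down to generation $n+1$, form $L_n(x,\mathbf{c})$ over $T^{(n)}$, and observe that it is close to $\pi(x)Q(\mathbf{c}\mid x)$:

```python
import random

# Hypothetical two-state illustration (numbers are not from the paper):
S, d = [0, 1], 2
p = [0.5, 0.5]
P = {0: [0.7, 0.3], 1: [0.4, 0.6]}
pi = [4 / 7, 3 / 7]   # stationary distribution of P (solves pi = pi P)

rng = random.Random(1)
n = 14
gens = [[rng.choices(S, weights=p)[0]]]
for _ in range(n + 1):               # generation n+1 is needed for C_t, t in L_n
    gens.append([y for x in gens[-1] for y in rng.choices(S, weights=P[x], k=d)])

# Offspring empirical measure L_n(x, c) over T^(n) = generations 0..n.
# Children of vertex i in generation k sit at positions d*i .. d*i+d-1 of k+1.
counts, total = {}, 0
for k in range(n + 1):
    for i, x in enumerate(gens[k]):
        c = tuple(gens[k + 1][d * i: d * i + d])
        counts[(x, c)] = counts.get((x, c), 0) + 1
        total += 1

x, c = 0, (0, 0)
L = counts.get((x, c), 0) / total
limit = pi[x] * P[x][c[0]] * P[x][c[1]]   # pi(x) Q(c | x), Q a product by (2.2)
print(abs(L - limit))  # small for large n
```

With $|T^{(14)}| = 2^{15}-1 = 32767$ vertices the empirical measure is already close to its almost-everywhere limit.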

Corollary 2.2. Under the conditions of Theorem 2.1, let $f(x, \mathbf{c})$ be any function defined on $S \times S^d$, and denote
$$H_n(\omega) = \sum_{t \in T^{(n)}} f\left(X_t, \mathbf{C}_t\right). \tag{2.6}$$
Then
$$\lim_{n \to \infty} \frac{H_n(\omega)}{\left|T^{(n)}\right|} = \sum_{(x, \mathbf{c}) \in S \times S^d} \pi(x) Q(\mathbf{c} \mid x) f(x, \mathbf{c}) \quad \text{a.e.} \tag{2.7}$$

Proof. Noting that
$$H_n(\omega) = \sum_{t \in T^{(n)}} f\left(X_t, \mathbf{C}_t\right) = \sum_{(x, \mathbf{c}) \in S \times S^d} \sum_{t \in T^{(n)}} \mathbf{I}\left\{\left(X_t, \mathbf{C}_t\right) = (x, \mathbf{c})\right\} f(x, \mathbf{c}), \tag{2.8}$$
by Theorem 2.1 we get
$$\lim_{n \to \infty} \frac{H_n(\omega)}{\left|T^{(n)}\right|} = \sum_{(x, \mathbf{c}) \in S \times S^d} f(x, \mathbf{c}) \lim_{n \to \infty} L_n(x, \mathbf{c}) = \sum_{(x, \mathbf{c}) \in S \times S^d} \pi(x) Q(\mathbf{c} \mid x) f(x, \mathbf{c}) \quad \text{a.e.} \tag{2.9}$$
Let $G = \{T, E\}$ be a tree graph and $(X_t)_{t \in T}$ a stochastic process indexed by the tree $G$ with state space $S$. Denote by $Y_t = (X_t, \mathbf{C}_t)$ the offspring process derived from $(X_t)_{t \in T}$. It is easy to see that
$$\mathbf{P}\left(y^{T^{(n)}}\right) = \mathbf{P}\left(Y^{T^{(n)}} = y^{T^{(n)}}\right) = p\left(x_0\right) \prod_{t \in T^{(n+1)} \setminus \{0\}} P\left(x_t \mid x_{1_t}\right) = p\left(x_0\right) \prod_{t \in T^{(n)}} Q\left(\mathbf{c}_t \mid x_t\right), \tag{2.10}$$
where $\mathbf{c}_t \in S^d$. Let
$$f_n(\omega) = -\frac{1}{\left|T^{(n)}\right|} \ln \mathbf{P}\left(Y^{T^{(n)}}\right). \tag{2.11}$$
$f_n(\omega)$ will be called the entropy density of $Y^{T^{(n)}}$. If $(X_t)_{t \in T}$ is a tree-indexed Markov chain with state space $S$ defined by Definition 1.1, then by (2.10) we have
$$f_n(\omega) = -\frac{1}{\left|T^{(n)}\right|}\left[\ln p\left(X_0\right) + \sum_{t \in T^{(n)}} \ln Q\left(\mathbf{C}_t \mid X_t\right)\right]. \tag{2.12}$$
The convergence of $f_n(\omega)$ to a constant in some sense ($L^1$ convergence, convergence in probability, a.e. convergence) is called the Shannon-McMillan theorem, the entropy theorem, or the AEP in information theory. From Corollary 2.2, if we let
$$f(x, \mathbf{c}) = -\ln Q(\mathbf{c} \mid x), \tag{2.13}$$
we easily obtain the Shannon-McMillan theorem with a.e. convergence for Markov chain fields on the tree $T_{C,d}$.

Corollary 2.3. Under the conditions of Corollary 2.2, let $f_n(\omega)$ be defined as in (2.12). Then
$$\lim_{n \to \infty} f_n(\omega) = -\sum_{(x, \mathbf{c}) \in S \times S^d} \pi(x) Q(\mathbf{c} \mid x) \ln Q(\mathbf{c} \mid x) \quad \text{a.e.} \tag{2.14}$$
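The entropy density in (2.12) can also be checked numerically (a sketch only, again with a hypothetical two-state chain and made-up probabilities). Since $Q(\cdot\mid x)$ is the product measure $\prod_i P(\cdot\mid x)$ by (2.2), its entropy equals $d$ times the entropy of the row $P(\cdot\mid x)$, which gives the limit in (2.14) in closed form:

```python
import math
import random

# Hypothetical two-state chain (numbers are not from the paper):
S, d = [0, 1], 2
p = [0.5, 0.5]
P = {0: [0.7, 0.3], 1: [0.4, 0.6]}
pi = [4 / 7, 3 / 7]   # stationary distribution of P

rng = random.Random(2)
n = 14
gens = [[rng.choices(S, weights=p)[0]]]
for _ in range(n + 1):
    gens.append([y for x in gens[-1] for y in rng.choices(S, weights=P[x], k=d)])

# Entropy density f_n(w) = -(ln p(X_0) + sum_t ln Q(C_t | X_t)) / |T^(n)|,
# using ln Q(c | x) = sum_i ln P(c_i | x) from (2.2).
log_prob = math.log(p[gens[0][0]])
size = 0
for k in range(n + 1):
    for i, x in enumerate(gens[k]):
        for y in gens[k + 1][d * i: d * i + d]:
            log_prob += math.log(P[x][y])
        size += 1
f_n = -log_prob / size

# Limit in (2.14): entropy of the product law Q(.|x) is d * H(P(.|x)).
h = lambda row: -sum(q * math.log(q) for q in row)
limit = sum(pi[x] * d * h(P[x]) for x in S)
print(abs(f_n - limit))  # small for large n
```

The simulated entropy density approaches the constant on the right-hand side of (2.14), illustrating the AEP on the tree.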

3. Proof of Theorem 2.1

Let $T_{C,d}$ be a Cayley tree, $S$ a finite state space, and $\{X_t, t \in T\}$ a tree-indexed Markov chain with arbitrary initial distribution (1.1) and ergodic transition matrix $P$. Let $g_t(X_t, \mathbf{C}_t)$ be functions defined on $S \times S^d$. Letting $\lambda$ be a real number, $L_0 = \{0\}$, and $\mathcal{F}_n = \sigma(X^{T^{(n)}})$, we can define a nonnegative martingale as follows:
$$t_n(\lambda, \omega) = \frac{e^{\lambda \sum_{t \in T^{(n-1)}} g_t(X_t, \mathbf{C}_t)}}{\prod_{t \in T^{(n-1)}} E\left[e^{\lambda g_t(X_t, \mathbf{C}_t)} \mid X_t\right]}. \tag{3.1}$$
We first prove this fact.

Theorem 3.1. $\{t_n(\lambda, \omega), \mathcal{F}_n, n \ge 1\}$ is a nonnegative martingale.

Proof of Theorem 3.1. By the Markov property and the property of conditional independence, we have
$$E\left[e^{\lambda \sum_{t \in L_n} g_t(X_t, \mathbf{C}_t)} \mid \mathcal{F}_n\right] = \sum_{x^{L_{n+1}}} e^{\lambda \sum_{t \in L_n} g_t(X_t, \mathbf{c}_t)}\, \mathbf{P}\left(X^{L_{n+1}} = x^{L_{n+1}} \mid X^{T^{(n)}}\right) = \sum_{x^{L_{n+1}}} \prod_{t \in L_n} e^{\lambda g_t(X_t, \mathbf{c}_t)} Q\left(\mathbf{c}_t \mid X_t\right) = \prod_{t \in L_n} \sum_{\mathbf{c}_t \in S^d} e^{\lambda g_t(X_t, \mathbf{c}_t)} Q\left(\mathbf{c}_t \mid X_t\right) = \prod_{t \in L_n} E\left[e^{\lambda g_t(X_t, \mathbf{C}_t)} \mid X_t\right] \quad \text{a.e.} \tag{3.2}$$
On the other hand, we also have
$$t_{n+1}(\lambda, \omega) = t_n(\lambda, \omega)\, \frac{e^{\lambda \sum_{t \in L_n} g_t(X_t, \mathbf{C}_t)}}{\prod_{t \in L_n} E\left[e^{\lambda g_t(X_t, \mathbf{C}_t)} \mid X_t\right]}. \tag{3.3}$$
Combining (3.2) and (3.3), we get
$$E\left[t_{n+1}(\lambda, \omega) \mid \mathcal{F}_n\right] = t_n(\lambda, \omega) \quad \text{a.e.} \tag{3.4}$$
This completes the proof of the theorem.
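Since $t_1(\lambda,\omega) = e^{\lambda g_0(X_0,\mathbf{C}_0)}/E[e^{\lambda g_0(X_0,\mathbf{C}_0)}\mid X_0]$ has expectation $1$, the martingale property (3.4) gives $E[t_n(\lambda,\omega)] = 1$ for every $n$. For a small tree this can be verified exactly by exhaustive enumeration. The sketch below uses hypothetical numbers (two states, $d = 2$, $n = 2$, $\lambda = 0.5$, and an arbitrary bounded choice of $g_t$), none of which come from the paper:

```python
import itertools
import math

# Exhaustive check that E[t_2(lambda, .)] = 1 for a small Cayley tree, d = 2.
S, d, lam = [0, 1], 2, 0.5
p = [0.5, 0.5]
P = {0: [0.7, 0.3], 1: [0.4, 0.6]}
g = lambda x, c: float(x == c[0] == c[1])   # arbitrary bounded g_t(x, c)

# Vertices of T^(2) in breadth-first order: 0 | 1,2 | 3,4,5,6.
# Children of vertex i are 2i+1 and 2i+2; the sum in t_2 runs over T^(1) = {0,1,2}.
total = 0.0
for cfg in itertools.product(S, repeat=7):
    prob = p[cfg[0]]
    for i in range(3):                       # edges out of T^(1) vertices
        for ch in (2 * i + 1, 2 * i + 2):
            prob *= P[cfg[i]][cfg[ch]]
    num, den = 0.0, 1.0
    for i in range(3):                       # t in T^(1)
        c = (cfg[2 * i + 1], cfg[2 * i + 2])
        num += lam * g(cfg[i], c)
        # E[e^{lam g_t} | X_t = x] = sum_c e^{lam g(x,c)} Q(c | x)
        den *= sum(math.exp(lam * g(cfg[i], (a, b))) * P[cfg[i]][a] * P[cfg[i]][b]
                   for a in S for b in S)
    total += prob * math.exp(num) / den
print(round(total, 9))  # 1.0
```

The 128 configurations of $T^{(2)}$ are enumerated, so the identity holds up to floating-point rounding rather than Monte Carlo error.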

Theorem 3.2. Let $(X_t)_{t \in T}$ and $\{g_t(x, \mathbf{c}), t \in T\}$ be defined as above, and denote
$$G_n(\omega) = \sum_{t \in T^{(n)}} E\left[g_t\left(X_t, \mathbf{C}_t\right) \mid X_t\right]. \tag{3.5}$$
Let $\alpha > 0$, and denote
$$D(\alpha) = \left\{\omega : \limsup_{n \to \infty} \frac{1}{\left|T^{(n)}\right|} \sum_{t \in T^{(n)}} E\left[g_t^2\left(X_t, \mathbf{C}_t\right) e^{\alpha |g_t(X_t, \mathbf{C}_t)|} \mid X_t\right] = M(\omega) < \infty\right\}, \tag{3.6}$$
$$H_n(\omega) = \sum_{t \in T^{(n)}} g_t\left(X_t, \mathbf{C}_t\right). \tag{3.7}$$
Then
$$\lim_{n \to \infty} \frac{H_n(\omega) - G_n(\omega)}{\left|T^{(n)}\right|} = 0 \quad \text{a.e. on } D(\alpha). \tag{3.8}$$

Proof. By Theorem 3.1, $\{t_n(\lambda, \omega), \mathcal{F}_n, n \ge 1\}$ is a nonnegative martingale. By the Doob martingale convergence theorem, we have
$$\lim_{n} t_n(\lambda, \omega) = t(\lambda, \omega) < \infty \quad \text{a.e.}, \tag{3.9}$$
so that
$$\limsup_{n \to \infty} \frac{\ln t_{n+1}(\lambda, \omega)}{\left|T^{(n)}\right|} \le 0 \quad \text{a.e.} \tag{3.10}$$
Combining (3.1), (3.7), and (3.10), we arrive at
$$\limsup_{n \to \infty} \frac{1}{\left|T^{(n)}\right|}\left\{\lambda H_n(\omega) - \sum_{t \in T^{(n)}} \ln E\left[e^{\lambda g_t(X_t, \mathbf{C}_t)} \mid X_t\right]\right\} \le 0 \quad \text{a.e.} \tag{3.11}$$
Let $\lambda > 0$. Dividing both sides of the above inequality by $\lambda$, we get
$$\limsup_{n \to \infty} \frac{1}{\left|T^{(n)}\right|}\left\{H_n(\omega) - \sum_{t \in T^{(n)}} \frac{\ln E\left[e^{\lambda g_t(X_t, \mathbf{C}_t)} \mid X_t\right]}{\lambda}\right\} \le 0 \quad \text{a.e.} \tag{3.12}$$
By (3.12) and the inequalities $\ln x \le x - 1 \ (x > 0)$ and $0 \le e^x - 1 - x \le 2^{-1} x^2 e^{|x|}$, for $0 < \lambda \le \alpha$ it follows that
$$\begin{aligned}
&\limsup_{n \to \infty} \frac{1}{\left|T^{(n)}\right|}\left[H_n(\omega) - \sum_{t \in T^{(n)}} E\left[g_t\left(X_t, \mathbf{C}_t\right) \mid X_t\right]\right] \\
&\quad \le \limsup_{n \to \infty} \frac{1}{\left|T^{(n)}\right|} \sum_{t \in T^{(n)}}\left\{\frac{\ln E\left[e^{\lambda g_t(X_t, \mathbf{C}_t)} \mid X_t\right]}{\lambda} - E\left[g_t\left(X_t, \mathbf{C}_t\right) \mid X_t\right]\right\} \\
&\quad \le \limsup_{n \to \infty} \frac{1}{\left|T^{(n)}\right|} \sum_{t \in T^{(n)}}\left\{\frac{E\left[e^{\lambda g_t(X_t, \mathbf{C}_t)} \mid X_t\right] - 1}{\lambda} - E\left[g_t\left(X_t, \mathbf{C}_t\right) \mid X_t\right]\right\} \\
&\quad \le \frac{\lambda}{2} \limsup_{n \to \infty} \frac{1}{\left|T^{(n)}\right|} \sum_{t \in T^{(n)}} E\left[g_t^2\left(X_t, \mathbf{C}_t\right) e^{\lambda |g_t(X_t, \mathbf{C}_t)|} \mid X_t\right] \\
&\quad \le \frac{\lambda}{2} \limsup_{n \to \infty} \frac{1}{\left|T^{(n)}\right|} \sum_{t \in T^{(n)}} E\left[g_t^2\left(X_t, \mathbf{C}_t\right) e^{\alpha |g_t(X_t, \mathbf{C}_t)|} \mid X_t\right] \le \frac{\lambda}{2} M(\omega) \quad \text{a.e. } \omega \in D(\alpha).
\end{aligned} \tag{3.13}$$
Letting $\lambda \to 0^+$ in (3.13), by (3.5) we have
$$\limsup_{n \to \infty} \frac{H_n(\omega) - G_n(\omega)}{\left|T^{(n)}\right|} \le 0 \quad \text{a.e. } \omega \in D(\alpha). \tag{3.14}$$
Let $-\alpha \le \lambda < 0$. Similarly to the analysis of the case $0 < \lambda \le \alpha$, it follows from (3.12) that
$$\liminf_{n \to \infty} \frac{H_n(\omega) - G_n(\omega)}{\left|T^{(n)}\right|} \ge \frac{\lambda}{2} M(\omega) \quad \text{a.e. } \omega \in D(\alpha). \tag{3.15}$$
Letting $\lambda \to 0^-$, we arrive at
$$\liminf_{n \to \infty} \frac{H_n(\omega) - G_n(\omega)}{\left|T^{(n)}\right|} \ge 0 \quad \text{a.e. } \omega \in D(\alpha). \tag{3.16}$$
Combining (3.14) and (3.16), we obtain (3.8) directly.

Corollary 3.3. Under the conditions of Theorem 3.2, one has
$$\lim_{n \to \infty}\left[L_n(x, \mathbf{c}) - S_n(x) Q(\mathbf{c} \mid x)\right] = 0 \quad \text{a.e.} \tag{3.17}$$

Proof. For any $t \in T$, let
$$g_t\left(X_t, \mathbf{C}_t\right) = \mathbf{I}\left\{\left(X_t, \mathbf{C}_t\right) = (x, \mathbf{c})\right\} = \mathbf{I}\left\{X_t = x\right\} \cdot \mathbf{I}\left\{\mathbf{C}_t = \mathbf{c}\right\}. \tag{3.18}$$
Then we have
$$G_n(\omega) = \sum_{t \in T^{(n)}} E\left[g_t\left(X_t, \mathbf{C}_t\right) \mid X_t\right] = \sum_{t \in T^{(n)}} \sum_{\mathbf{c}_t \in S^d} \mathbf{I}\left\{X_t = x\right\} \cdot \mathbf{I}\left\{\mathbf{c}_t = \mathbf{c}\right\} Q\left(\mathbf{c}_t \mid X_t\right) = \sum_{t \in T^{(n)}} \mathbf{I}\left\{X_t = x\right\} Q(\mathbf{c} \mid x) = \left|T^{(n)}\right| \cdot S_n(x) Q(\mathbf{c} \mid x), \tag{3.19}$$
$$H_n(\omega) = \sum_{t \in T^{(n)}} g_t\left(X_t, \mathbf{C}_t\right) = \sum_{t \in T^{(n)}} \mathbf{I}\left\{\left(X_t, \mathbf{C}_t\right) = (x, \mathbf{c})\right\} = \left|T^{(n)}\right| \cdot L_n(x, \mathbf{c}). \tag{3.20}$$
Since $|g_t| \le 1$, we have $D(\alpha) = \Omega$ for any $\alpha > 0$. Combining (3.19) and (3.20), we derive our conclusion from Theorem 3.2.
In our proof, we will use Lemma 3.4.

Lemma 3.4 (see [10]). Let $T_{C,d}$ be a Cayley tree, $S$ a finite state space, and $\{X_t, t \in T\}$ a tree-indexed Markov chain with arbitrary initial distribution (1.1) and ergodic transition matrix $P$. Let $S_n(x)$ be defined as in (2.4). Then
$$\lim_{n \to \infty} S_n(x) = \pi(x) \quad \text{a.e.} \tag{3.21}$$

Proof of Theorem 2.1. Combining Corollary 3.3 and Lemma 3.4, we arrive at our conclusion directly.
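Lemma 3.4 can likewise be illustrated numerically (a sketch only, with the same kind of hypothetical two-state chain and made-up transition probabilities as used nowhere in the paper itself): the empirical frequency $S_n(x)$ over $T^{(n)}$ approaches the stationary probability $\pi(x)$.

```python
import random

# Hypothetical two-state chain (numbers are not from the paper):
S, d = [0, 1], 2
p = [0.5, 0.5]
P = {0: [0.7, 0.3], 1: [0.4, 0.6]}
pi = [4 / 7, 3 / 7]   # stationary distribution of P (solves pi = pi P)

rng = random.Random(3)
n = 14
gens = [[rng.choices(S, weights=p)[0]]]
for _ in range(n):    # generations 1..n; T^(n) = generations 0..n
    gens.append([y for x in gens[-1] for y in rng.choices(S, weights=P[x], k=d)])

# Empirical measure S_n(0) as in (2.4).
count0 = sum(x == 0 for gen in gens for x in gen)
size = sum(len(gen) for gen in gens)
S_n0 = count0 / size
print(abs(S_n0 - pi[0]))  # small for large n
```

Together with Corollary 3.3, this is exactly the combination used in the proof of Theorem 2.1 above.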

Acknowledgment

This work was supported by National Natural Science Foundation of China (Grant no. 11071104).