Research Article | Open Access

Volume 2012 | Article ID 167453 | https://doi.org/10.1155/2012/167453

Di Zhao, Hongyi Li, Donglin Su, "A Numerical Algorithm on the Computation of the Stationary Distribution of a Discrete Time Homogenous Finite Markov Chain", Mathematical Problems in Engineering, vol. 2012, Article ID 167453, 10 pages, 2012. https://doi.org/10.1155/2012/167453

# A Numerical Algorithm on the Computation of the Stationary Distribution of a Discrete Time Homogenous Finite Markov Chain

Academic Editor: Zheng-Guang Wu
Received: 08 Jan 2012
Revised: 14 Mar 2012
Accepted: 14 Mar 2012
Published: 16 Jul 2012

#### Abstract

The transition matrix, which characterizes a discrete time homogeneous Markov chain, is a stochastic matrix. A stochastic matrix is a special nonnegative matrix with each row summing up to 1. In this paper, we focus on the computation of the stationary distribution of a transition matrix from the viewpoint of the Perron vector of a nonnegative matrix, based on which an algorithm for the stationary distribution is proposed. The algorithm can also be used to compute the Perron root and the corresponding Perron vector of any nonnegative irreducible matrix. Furthermore, a numerical example is given to demonstrate the validity of the algorithm.

#### 1. Introduction and Preliminaries

Throughout this paper, the following notations and definitions are used. A matrix $A=(a_{ij})\in\mathbb{R}^{n\times n}$ is called nonnegative (positive) if $a_{ij}\ge 0$ ($a_{ij}>0$) for all $i,j$, denoted by $A\ge 0$ ($A>0$). Similarly, a vector $x=(x_1,x_2,\dots,x_n)^T$ is called nonnegative (positive) and denoted by $x\ge 0$ ($x>0$) if $x_i\ge 0$ ($x_i>0$) for all $i$. Let $A,B\in\mathbb{R}^{n\times n}$; we write $A\ge B$ ($A>B$) if $A-B\ge 0$ ($A-B>0$), that is, $a_{ij}\ge b_{ij}$ ($a_{ij}>b_{ij}$) for all $i,j$.

For a square matrix $A$ with eigenvalues $\lambda_1,\lambda_2,\dots,\lambda_n$, $\rho(A)=\max_{1\le i\le n}|\lambda_i|$ is called the spectral radius of $A$. If $A\ge 0$ is irreducible, there exists a unique (up to positive scalar multiples) eigenvector $x>0$ such that $Ax=\rho(A)x$ and $\rho(A)>0$. In this case, we say that $\rho(A)$ is the Perron root of $A$ and $x$ is the Perron vector of $A$.

We consider a discrete-time Markov chain $\{X_n\}_{n\ge 0}$ with a finite state space $S=\{1,2,\dots,n\}$. Among ergodic processes, homogeneous Markov chains with finite state space are particularly interesting examples. Such processes satisfy the Markov property, which states that their future behavior, conditional on the past and present, depends only on the present. Precisely, for all $n\ge 0$, all $i,j\in S$, and all sequences of states $i_0,i_1,\dots,i_{n-1}\in S$,
$$P(X_{n+1}=j\mid X_n=i,\ X_{n-1}=i_{n-1},\dots,X_0=i_0)=P(X_{n+1}=j\mid X_n=i).$$

The behavior of such a process is characterized by an $n\times n$ matrix $P=(p_{ij})$, called the transition matrix, where $p_{ij}=P(X_{n+1}=j\mid X_n=i)$.

Its stationary distribution $\pi$, which is also its asymptotic distribution, is a vector $\pi\ge 0$ satisfying
$$\pi^TP=\pi^T,\qquad \pi^Te=1,$$
that is, $P^T\pi=\pi$ and $e^T\pi=1$, where $e$ is the column vector of all ones.
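These defining relations are easy to verify numerically; the following minimal sketch uses a hypothetical two-state chain (the matrix is illustrative, not from this paper):

```python
import numpy as np

# A hypothetical 2-state transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# For P = [[1-a, a], [b, 1-b]], the stationary distribution is
# pi = (b, a) / (a + b); here a = 0.1 and b = 0.4, giving pi = (0.8, 0.2).
pi = np.array([0.4, 0.1]) / 0.5

assert np.allclose(pi @ P, pi)    # pi^T P = pi^T
assert np.isclose(pi.sum(), 1.0)  # pi^T e = 1
```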

It has been established that it is possible to represent all possible uses of a software system as a Markov chain [3–5]. This model is called a Markov chain usage model. In a usage model, states of use (such as the state “Document Loaded” in a model representing use of a word processing system) are represented by states in the Markov chain. Transitions between states of use (such as moving from state “Document Loaded” to “No Document Loaded” when the user closes a document in a word processing system) are represented by state transitions between the appropriate states in the Markov chain. Each transition between states of use has an associated probability, which represents the probability of making that transition. A usage model may be created based on information taken from functional specifications, usage specifications, and test objectives.

Considering the problem of software reliability, we represent a software system $S$ with $n$ states of use by a homogeneous discrete Markov chain (the corresponding transition matrix is $P$). We denote the initial state probability distribution by $\pi_0$, where $e^T\pi_0=1$. Then $\pi_k^T=\pi_0^TP^k$, where $\pi_k$ stands for the state probability distribution at time $k$. Let $f_i$ ($1\le i\le n$) be the probability that the software fails at state $i$. The reliability of $S$ at time $k$ can be defined as $R(k)=1-\sum_{i=1}^{n}\pi_k(i)f_i$. After a long running time, the state distribution of the system will tend to the stationary distribution $\pi$. Then the terminating reliability is $R=1-\sum_{i=1}^{n}\pi(i)f_i$, with which we can evaluate the quality of a software system. By decreasing the failure probability $f_i$ of the state $i$ with the largest probability in the stationary distribution $\pi$, we can also enhance the reliability of $S$ efficiently with limited resources.
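As a small sketch of this reliability evaluation (the stationary distribution and the per-state failure probabilities below are illustrative, not taken from the paper):

```python
import numpy as np

# Illustrative stationary distribution pi and per-state failure
# probabilities f_i for a hypothetical 3-state software system.
pi = np.array([0.5, 0.3, 0.2])
f = np.array([0.01, 0.02, 0.05])

# Terminating reliability: R = 1 - sum_i pi_i * f_i.
R = 1.0 - pi @ f
assert np.isclose(R, 0.979)
```

Under this measure, lowering the failure probability of the most-visited state (here state 1) gives the largest reliability gain per unit of effort.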

A nonnegative matrix $A$ is called a row-stochastic matrix (or a stochastic matrix for short) if $\sum_{j=1}^{n}a_{ij}=1$ for all $i$, that is, $Ae=e$.

From the well-known Perron-Frobenius theorem, it can be easily deduced that the Perron root of a stochastic matrix equals 1.

Obviously, the transition matrix $P$ of a discrete-time homogeneous Markov chain is a stochastic matrix. From the definition of the stationary distribution, we have $P^T\pi=\pi$. That is to say, the stationary distribution $\pi$ is also an eigenvector of $P^T$ associated with the eigenvalue 1. Since $P$ and $P^T$ have the same eigenvalues, 1 is the Perron root of $P^T$; that is, $\pi$ is the solution to $P^Tx=x$, $e^Tx=1$. As for the computational aspects of $\pi$, many approaches have been presented (e.g., see [6–10]) based on Gaussian elimination, direct projection, and so on. In this paper, from the viewpoint of the Perron root, which has not been discussed, we propose an algorithm for the stationary distribution: since the computation of $\pi$ is equivalent to the computation of the Perron vector of $P^T$, the algorithm not only computes the stationary distribution but can also be used to compute the Perron root and the corresponding Perron vector of any nonnegative irreducible matrix (noting that the stationary distribution is the Perron vector of the transpose of the transition matrix, which is a special nonnegative matrix).
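The Perron-vector viewpoint can be checked directly with a generic eigensolver; the following is a minimal sketch on an illustrative three-state chain (not the paper's example):

```python
import numpy as np

# The stationary distribution is the eigenvector of P^T for the Perron
# root 1, normalized so that its components sum to 1.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])

w, V = np.linalg.eig(P.T)
i = np.argmin(np.abs(w - 1.0))   # locate the eigenvalue 1
pi = np.real(V[:, i])
pi = pi / pi.sum()               # normalize: e^T pi = 1

assert np.allclose(pi @ P, pi)
assert np.all(pi > 0)            # Perron vector of an irreducible matrix
```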

This paper is organized as follows. In the next section, we propose some lemmas and preliminary results. In Section 3, we prove the convergence theorem and give some facts. In Section 4, we propose an algorithm for the stationary distribution together with an illustrative numerical example.

#### 2. Some Lemmas

In this section, we present some lemmas which will be used in the proof of the main results. The following facts can be found in [1, 11, 12].

Definition 2.1. Let $A$ be a nonnegative matrix. If $A^k>0$ for some integer $k\ge 1$, one says that $A$ is primitive.

It is known that any primitive matrix must be irreducible. We will use the following important facts, which can be found in [1, 12].

Theorem A (see [1, 12]). Let $A\ge 0$. Then $A$ is irreducible if and only if $(I+A)^{n-1}>0$, where $I$ is the unit matrix.

Theorem B (Perron-Frobenius (see [1, 12])). Let $A\ge 0$ be irreducible. Then, (a) $\rho(A)>0$; (b) $\rho(A)$ is an eigenvalue of $A$; (c) there exists a vector $x>0$ such that $Ax=\rho(A)x$; (d) $\rho(A)$ is a simple eigenvalue of $A$.

This theorem guarantees that the eigenspace of $\rho(A)$ is one-dimensional. That is, $Ax=\rho(A)x$ and $Ay=\rho(A)y$ imply $x=cy$ for some scalar $c$. Hence there exists a unique positive vector $x$ whose components sum to 1 such that $Ax=\rho(A)x$. This $x$ is called the Perron vector of $A$.

For the Perron root of nonnegative matrices, many algorithms and bound estimations have been proposed (see, e.g., [13, 14]). In this paper, we describe the Perron root by using the following Collatz-Wielandt functions [11, 12].

Definition 2.2 (see [11, 12]). Let $A\ge 0$ be nonnegative. For any positive vector $x$, define
$$r(x)=\min_{1\le i\le n}\frac{(Ax)_i}{x_i},\qquad R(x)=\max_{1\le i\le n}\frac{(Ax)_i}{x_i}.$$

$r(x)$ and $R(x)$ are both continuous at any $x>0$.

Lemma 2.3. Let $A$ be nonnegative and irreducible. Then, for any $x>0$, $r(x)$ and $R(x)$ satisfy the following: (1) $r(x)\le\rho(A)\le R(x)$; (2) $\max_{x>0}r(x)=\rho(A)=\min_{x>0}R(x)$; (3) $r(x)=\rho(A)$ gives $Ax=\rho(A)x$; $R(x)=\rho(A)$ gives $Ax=\rho(A)x$; (4) if $B\ge 0$ is irreducible and $AB=BA$, let $y=Bx$; then $r(y)\ge r(x)$ and $R(y)\le R(x)$.

Proof. (1)–(3) are clearly true (see [11, 12]). For (4), by $Ax\ge r(x)x$ and $B\ge 0$, it follows that $Ay=ABx=BAx\ge r(x)Bx=r(x)y$. This gives $r(y)\ge r(x)$. Similarly, from $Ax\le R(x)x$, we obtain $R(y)\le R(x)$.
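The Collatz-Wielandt bounds and the monotonicity in part (4) can be observed numerically; a sketch on an illustrative matrix (taking $B=A$, so $AB=BA$ holds trivially):

```python
import numpy as np

# Collatz-Wielandt functions for A >= 0 and x > 0:
#   r(x) = min_i (Ax)_i / x_i,  R(x) = max_i (Ax)_i / x_i,
# which bracket the Perron root: r(x) <= rho(A) <= R(x).
def cw(A, x):
    ratios = (A @ x) / x
    return ratios.min(), ratios.max()

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # irreducible, rho(A) = 3
x = np.array([1.0, 2.0])

r1, R1 = cw(A, x)                   # r1 = 2.5, R1 = 4.0
rho = max(abs(np.linalg.eigvals(A)))
assert r1 <= rho <= R1

y = A @ x                           # part (4) with B = A: y = Ax
r2, R2 = cw(A, y)                   # bounds tighten: r2 = 2.8, R2 = 3.25
assert r1 <= r2 and R2 <= R1
```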

Lemma 2.4 (see [12]). If $A$ is primitive ($A^m>0$ for some $m$), then
$$\lim_{k\to\infty}\left(\frac{A}{\rho(A)}\right)^{k}=\frac{xy^T}{y^Tx}>0,$$
where $Ax=\rho(A)x$, $A^Ty=\rho(A)y$, and $x,y>0$.
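This limit can be checked numerically; a sketch on an illustrative primitive matrix (symmetric here, so the left and right Perron vectors coincide):

```python
import numpy as np

# For a primitive A, (A / rho(A))^k converges to the positive rank-one
# matrix x y^T / (y^T x), where A x = rho x and A^T y = rho y.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])          # primitive, since A^2 > 0

w, V = np.linalg.eigh(A)            # A is symmetric, so y = x here
rho = w[-1]                         # Perron root (the golden ratio)
x = V[:, -1]                        # eigenvector for the largest eigenvalue
L = np.outer(x, x) / (x @ x)        # the predicted limit matrix

M = np.linalg.matrix_power(A / rho, 60)
assert np.all(L > 0)
assert np.allclose(M, L, atol=1e-9)
```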

#### 3. Main Results

In this section, we will present the main results.

Theorem 3.1. Let $A$ be irreducible, and let $B\ge 0$ be primitive such that $AB=BA$. Let $x_0>0$ with $e^Tx_0=1$. Define, for $k=0,1,2,\dots$,
$$x_{k+1}=\frac{Bx_k}{e^TBx_k},\qquad r_k=r(x_k)=\min_{1\le i\le n}\frac{(Ax_k)_i}{(x_k)_i},\qquad R_k=R(x_k)=\max_{1\le i\le n}\frac{(Ax_k)_i}{(x_k)_i}. \tag{3.1}$$
Then, (a) $x_k>0$ for all $k$, and $\lim_{k\to\infty}x_k=x$ with $Ax=\rho(A)x$; (b) $\lim_{k\to\infty}r_k=\lim_{k\to\infty}R_k=\rho(A)$; (c) $\{r_k\}$ is nondecreasing and $\{R_k\}$ is nonincreasing.

Proof. By (3.1), we can write $x_k=c_kB^kx_0$ (for some normalizing constant $c_k>0$). This means, for $k=1,2,\dots$,
$$x_k=\frac{B^kx_0}{e^TB^kx_0}=\frac{(B/\rho(B))^kx_0}{e^T(B/\rho(B))^kx_0}. \tag{3.2}$$
By Lemma 2.4,
$$\lim_{k\to\infty}\left(\frac{B}{\rho(B)}\right)^{k}=L=\frac{uv^T}{v^Tu}>0, \tag{3.3}$$
where $Bu=\rho(B)u$, $B^Tv=\rho(B)v$, and $u,v>0$. Equations (3.2) and (3.3) imply that $\lim_{k\to\infty}x_k=\frac{Lx_0}{e^TLx_0}$. By putting $x=\frac{Lx_0}{e^TLx_0}$, it is clear that $x>0$ (with $e^Tx=1$) and $x=\frac{(v^Tx_0)u}{(v^Tx_0)(e^Tu)}=\frac{u}{e^Tu}$. Since $AB=BA$, we get $B(Au)=A(Bu)=\rho(B)(Au)$. The Perron-Frobenius theorem (Theorem B) guarantees that $\rho(B)$ is a simple eigenvalue of $B$. So, $B(Au)=\rho(B)(Au)$ gives that $Au=\mu u$ for some scalar $\mu$, which implies that $Ax=\mu x$ and, since $A$ is irreducible with $x>0$, $\mu=\rho(A)$. This proves (a). On the other hand, by Definition 2.2, $Ax=\rho(A)x$ gives $r(x)=R(x)=\rho(A)$. By the continuity of $r$ and $R$ at $x>0$, we conclude that
$$\lim_{k\to\infty}r_k=r(x)=\rho(A)=R(x)=\lim_{k\to\infty}R_k.$$
By Lemma 2.3(4) and (3.1), since $x_{k+1}$ is a positive multiple of $Bx_k$, we have, for $k=0,1,2,\dots$,
$$r_{k+1}\ge r_k,\qquad R_{k+1}\le R_k.$$
So, $\{r_k\}$ and $\{R_k\}$ are both monotonic convergent sequences. This proves (c), completing the proof.

Remark 3.2. From the proof, we know $r_k\le\rho(A)\le R_k$ ($k=0,1,2,\dots$), and the limit $x$ is the Perron vector of both $A$ and $B$.
For an irreducible matrix $A$, since $\alpha I+A$ ($\alpha>0$) is irreducible with positive diagonal entries, the matrices $(\alpha I+A)^m$ are primitive for all integers $m\ge 1$. Clearly, $(\alpha I+A)^mA=A(\alpha I+A)^m$, so we have the following.

Corollary 3.3. Let $A$ be irreducible and let $B=(\alpha I+A)^m$ (for fixed $\alpha>0$ and integer $m\ge 1$). Let $x_0>0$. For all $k\ge 0$, define $x_{k+1}$, $r_k$, and $R_k$ as in (3.1). Then, (a) $\lim_{k\to\infty}x_k=x$ with $Ax=\rho(A)x$; (b) $\lim_{k\to\infty}r_k=\lim_{k\to\infty}R_k=\rho(A)$; (c) $\{r_k\}$ is nondecreasing and $\{R_k\}$ is nonincreasing.

For a positive matrix $A$, all the matrices $A^m$ ($m\ge 1$) are primitive. The following is obvious.

Corollary 3.4. Let $A>0$ and $B=A^m$ (for some integer $m\ge 1$). Let $x_0$ be a positive vector. Define $x_{k+1}$, $r_k$, and $R_k$ as in (3.1). Then, (a) $\lim_{k\to\infty}x_k=x$ with $Ax=\rho(A)x$; (b) $\lim_{k\to\infty}r_k=\lim_{k\to\infty}R_k=\rho(A)$; (c) $\{r_k\}$ is nondecreasing and $\{R_k\}$ is nonincreasing.

By (3.1), letting $B=A$ for a positive matrix $A$, one has $x_k=\frac{A^kx_0}{e^TA^kx_0}$ and the following.

Corollary 3.5. If $A>0$, then $\lim_{k\to\infty}\frac{e^TA^{k+1}x_0}{e^TA^kx_0}=\rho(A)$.

Proof. From Theorem 3.1, it follows that
$$\lim_{k\to\infty}\frac{e^TA^{k+1}x_0}{e^TA^kx_0}=\lim_{k\to\infty}e^TAx_k=e^TAx=\rho(A)e^Tx=\rho(A).$$

If $A$ is irreducible, it is obvious that $(I+A)^{n-1}$ is primitive (by Theorem A), and $(I+A)^{n-1}A=A(I+A)^{n-1}$. So, we have the following.

Corollary 3.6. If $A$ is irreducible, for any $x_0>0$, let $B=(I+A)^{n-1}$, using the sequences $\{r_k\}$ and $\{R_k\}$ defined in (3.1). Then $\lim_{k\to\infty}r_k=\lim_{k\to\infty}R_k=\rho(A)$.
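The iteration behind Corollary 3.6 is straightforward to sketch in code; the function and the test matrix below are illustrative, not the paper's implementation:

```python
import numpy as np

# Power-type iteration with B = (I + A)^(n-1), which is primitive when A
# is irreducible (Theorem A), tracking the Collatz-Wielandt bounds
# r_k <= rho(A) <= R_k until they agree to the desired tolerance.
def perron(A, tol=1e-10, max_iter=1000):
    n = A.shape[0]
    B = np.linalg.matrix_power(np.eye(n) + A, n - 1)
    x = np.ones(n) / n                  # any positive start vector
    r, R = 0.0, np.inf
    for _ in range(max_iter):
        ratios = (A @ x) / x
        r, R = ratios.min(), ratios.max()
        if R - r < tol:
            break
        y = B @ x
        x = y / y.sum()                 # keep components summing to 1
    return (r + R) / 2, x

A = np.array([[0.0, 2.0],
              [3.0, 0.0]])             # irreducible but not primitive
rho, x = perron(A)
assert abs(rho - np.sqrt(6.0)) < 1e-8  # rho(A) = sqrt(6)
```

Note that the plain power method would oscillate on this periodic matrix; iterating with $(I+A)^{n-1}$ instead still converges.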

#### 4. An Algorithm and a Numerical Example

In this section, we propose a numerical algorithm to compute the stationary distribution of a discrete time homogeneous finite Markov chain.

Algorithm 4.1 (to compute the stationary distribution $\pi$). Step 1. Give a transition matrix $P$ of a discrete time homogeneous finite Markov chain and a calculation precision $\varepsilon>0$. Choose parameters: a positive real number $\alpha$ and an integer $m\ge 1$, and set $B=(\alpha I+P^T)^m$. Set the initial iterative vector $x_0>0$ with $e^Tx_0=1$, and $k=0$.
Step 2. Compute $x_{k+1}$ from $x_k$:
$$y_k=Bx_k,\qquad x_{k+1}=\frac{y_k}{e^Ty_k}.$$
Step 3. Compute $r_{k+1}$ and $R_{k+1}$:
$$r_{k+1}=\min_{1\le i\le n}\frac{(P^Tx_{k+1})_i}{(x_{k+1})_i},\qquad R_{k+1}=\max_{1\le i\le n}\frac{(P^Tx_{k+1})_i}{(x_{k+1})_i}.$$
Step 4. If $R_{k+1}-r_{k+1}<\varepsilon$, go to Step 5. Otherwise, set $k=k+1$ and go back to Step 2.
Step 5. Let $\rho=\frac{1}{2}(r_{k+1}+R_{k+1})$. Then $\rho$ is the approximation of the Perron root of $P^T$ (which equals 1), and the corresponding $x_{k+1}$ is the approximation of the stationary distribution $\pi$ of $P$.

Remark 4.2. From Theorem 3.1, the convergence of Algorithm 4.1 is obvious.
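Putting the steps together, the following is a sketch of Algorithm 4.1 as we read it; the test matrix and the parameter choices $\alpha=1$, $m=3$ are illustrative:

```python
import numpy as np

# Algorithm 4.1: iterate x_{k+1} = B x_k / (e^T B x_k) with
# B = (alpha*I + P^T)^m, and stop once the Collatz-Wielandt bounds for
# P^T (whose Perron root is 1) differ by less than eps.
def stationary_distribution(P, eps=1e-6, alpha=1.0, m=3, max_iter=10000):
    n = P.shape[0]
    A = P.T                             # pi is the Perron vector of P^T
    B = np.linalg.matrix_power(alpha * np.eye(n) + A, m)
    x = np.ones(n) / n                  # Step 1: initial vector, e^T x = 1
    for _ in range(max_iter):
        y = B @ x                       # Step 2: iterate and normalize
        x = y / y.sum()
        ratios = (A @ x) / x            # Step 3: Collatz-Wielandt bounds
        r, R = ratios.min(), ratios.max()
        if R - r < eps:                 # Step 4: stopping test
            break
    return (r + R) / 2, x               # Step 5: Perron root and pi

P = np.array([[0.5, 0.5, 0.0],         # an illustrative 3-state chain
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])
rho, pi = stationary_distribution(P)
assert abs(rho - 1.0) < 1e-6
assert np.allclose(pi @ P, pi, atol=1e-5)
```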

We next give a numerical example.

Example 4.3. For a given finite Markov chain on six states with transition matrix $P$, find its approximate stationary distribution with calculation precision $\varepsilon=10^{-6}$.

By choosing an initial iterative vector $x_0>0$ and the parameters $\alpha$ and $m$, and applying Algorithm 4.1, the approximate Perron root and Perron vector are obtained after 12 iterations:
$$\rho\approx 1.0000001,\qquad x\approx(0.2115999,\ 0.0758786,\ 0.1877611,\ 0.1956869,\ 0.0958466,\ 0.2332268)^T.$$
The iteration results are listed in Table 1.

Table 1: Iteration results of Algorithm 4.1 for Example 4.3, where $y_k=Bx_{k-1}$ and $x_k=y_k/(e^Ty_k)$.

| $k$ | $(y_k)_1$ | $(x_k)_1$ | $(y_k)_2$ | $(x_k)_2$ | $(y_k)_3$ | $(x_k)_3$ | $(y_k)_4$ | $(x_k)_4$ | $(y_k)_5$ | $(x_k)_5$ | $(y_k)_6$ | $(x_k)_6$ | $r_k$ | $R_k$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 9.8480005 | 0.2051667 | 4.2060003 | 0.0876250 | 8.2880001 | 0.1726667 | 8.1380005 | 0.1695417 | 6.8360000 | 0.1424167 | 10.6840000 | 0.2225833 | 0.8266823 | 1.0792825 |
| 2 | 1.7046144 | 0.2130768 | 0.6064956 | 0.0758119 | 1.4543651 | 0.1817956 | 1.4877499 | 0.1859687 | 0.8997051 | 0.1124631 | 1.8470705 | 0.2308838 | 0.9024122 | 1.0330924 |
| 3 | 1.7026380 | 0.2128298 | 0.6010615 | 0.0751327 | 1.4877805 | 0.1859726 | 1.5420293 | 0.1927537 | 0.8042196 | 0.1005275 | 1.8622712 | 0.2327839 | 0.9654347 | 1.0107353 |
| 4 | 1.6968167 | 0.2121021 | 0.6042080 | 0.0755260 | 1.4985833 | 0.1873229 | 1.5594688 | 0.1949336 | 0.7754812 | 0.0969351 | 1.8654419 | 0.2331802 | 0.9906667 | 1.0030047 |
| 5 | 1.6940787 | 0.2117598 | 0.6060677 | 0.0757585 | 1.5014149 | 0.1876768 | 1.5642191 | 0.1955274 | 0.7683222 | 0.0960403 | 1.8658978 | 0.2332372 | 0.9979988 | 1.0010511 |
| 6 | 1.6931415 | 0.2116427 | 0.6067572 | 0.0758447 | 1.5020157 | 0.1877520 | 1.5653062 | 0.1956633 | 0.7668935 | 0.0958617 | 1.8658856 | 0.2332357 | 0.9997041 | 1.0003293 |
| 7 | 1.6928755 | 0.2116094 | 0.6069643 | 0.0758705 | 1.5021036 | 0.1877629 | 1.5654967 | 0.1956871 | 0.7667158 | 0.0958395 | 1.8658446 | 0.2332305 | 0.9999617 | 1.0000869 |
| 8 | 1.6928117 | 0.2116015 | 0.6070167 | 0.0758771 | 1.5021019 | 0.1877628 | 1.5655103 | 0.1956888 | 0.7667347 | 0.0958418 | 1.8658243 | 0.2332281 | 0.9999923 | 1.0000242 |
| 9 | 1.6927999 | 0.2116000 | 0.6070276 | 0.0758785 | 1.5020945 | 0.1877618 | 1.5655029 | 0.1956879 | 0.7667580 | 0.0958448 | 1.8658174 | 0.2332272 | 0.9999974 | 1.0000122 |
| 10 | 1.6927987 | 0.2115998 | 0.6070291 | 0.0758786 | 1.5020909 | 0.1877613 | 1.5654980 | 0.1956872 | 0.7667685 | 0.0958460 | 1.8658154 | 0.2332269 | 0.9999989 | 1.0000043 |
| 11 | 1.6927989 | 0.2115999 | 0.6070290 | 0.0758786 | 1.5020894 | 0.1877612 | 1.5654960 | 0.1956870 | 0.7667719 | 0.0958465 | 1.8658147 | 0.2332269 | 0.9999996 | 1.0000013 |
| 12 | 1.6927993 | 0.2115999 | 0.6070290 | 0.0758786 | 1.5020891 | 0.1877611 | 1.5654955 | 0.1956869 | 0.7667729 | 0.0958466 | 1.8658148 | 0.2332268 | 0.9999998 | 1.0000004 |

#### Acknowledgment

The project was supported by the National Natural Science Foundation of China (Grant no. 60831001).

1. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.
2. P. Regnault, “Estimation using plug-in of the stationary distribution and Shannon entropy of continuous time Markov processes,” Journal of Statistical Planning and Inference, vol. 141, no. 8, pp. 2711–2725, 2011.
3. J. H. Poore, H. D. Mills, and D. Mutchler, “Planning and certifying software system reliability,” IEEE Software, pp. 88–99, 1993.
4. J. H. Poore and C. J. Trammell, “Application of statistical science to testing and evaluating software intensive systems,” in Statistics, Testing, and Defense Acquisition, National Academy Press, Washington, DC, USA, 1998.
5. J. A. Whittaker and M. G. Thomason, “A Markov chain model for statistical software testing,” IEEE Transactions on Software Engineering, vol. 20, no. 10, pp. 812–824, 1994.
6. M. Benzi, “A direct projection method for Markov chains,” Linear Algebra and Its Applications, vol. 386, pp. 27–49, 2004.
7. D. P. Heyman and A. Reeves, “Numerical solution of linear equations arising in Markov chain models,” ORSA Journal on Computing, vol. 1, pp. 52–60, 1989.
8. I. Marek, “Iterative aggregation/disaggregation methods for computing some characteristics of Markov chains. II. Fast convergence,” Applied Numerical Mathematics, vol. 45, no. 1, pp. 11–28, 2003.
9. M. Neumann and J. Xu, “On the stability of the computation of the stationary probabilities of Markov chains using Perron complements,” Numerical Linear Algebra with Applications, vol. 10, no. 7, pp. 603–618, 2003.
10. C. C. Paige, G. P. H. Styan, and P. G. Wachter, “Computation of the stationary distribution of a Markov chain,” Journal of Statistical Computation and Simulation, vol. 4, pp. 173–186, 1975.
11. H. Minc, Nonnegative Matrices, Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley & Sons, New York, NY, USA, 1988.
12. R. S. Varga, Matrix Iterative Analysis, vol. 27 of Springer Series in Computational Mathematics, Springer, Berlin, Germany, 2000.
13. F. Duan and K. Zhang, “An algorithm of diagonal transformation for Perron root of nonnegative irreducible matrices,” Applied Mathematics and Computation, vol. 175, no. 1, pp. 762–772, 2006.
14. H. Y. Li, D. Zhao, and F. Dai, “On the spectral radius of a nonnegative centrosymmetric matrix,” Applied Mathematics and Computation, vol. 218, pp. 4962–4966, 2012.