Mathematical Problems in Engineering, Volume 2013 (2013), Article ID 206375, 7 pages. http://dx.doi.org/10.1155/2013/206375
Research Article

## A New GMRES($m$) Method for Markov Chains

1School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
2Department of Basic Courses, Chengdu Textile College, Chengdu 611731, China

Received 11 August 2013; Accepted 2 September 2013

Copyright © 2013 Bing-Yuan Pu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents a new class of accelerated restarted GMRES methods for calculating the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method, showing how to periodically combine the GMRES method and a vector extrapolation method into a more efficient one that improves the convergence rate in Markov chain problems. Numerical experiments are carried out to demonstrate the efficiency of our new algorithm on several typical Markov chain problems.

#### 1. Introduction

The Markov chain is a very robust tool for studying stochastic systems over time and has a wide range of applications, including queueing systems [1, 2], computer and communication systems [3], information retrieval, and Web ranking [4–7]. For both discrete and continuous Markov chains, obtaining their numerical solutions by appropriate methods has always been a central task.

In this paper, we consider a class of Krylov subspace methods, the restarted GMRES (GMRES($m$)) method, for the numerical solution of the stationary probability vector of an irreducible Markov chain. Let $A \in \mathbb{R}^{n \times n}$ be a column stochastic matrix; that is, $A \geq 0$ and $e^T A = e^T$, with $e = (1, 1, \dots, 1)^T$ being the column vector of all ones. We seek the stationary vector $x$ that satisfies
$$A x = x, \quad x \geq 0, \quad \|x\|_1 = 1. \tag{1}$$
By the Perron-Frobenius theorem [8, 9], if $A$ is irreducible, then there exists a unique solution $x$ to (1), which is strictly positive ($x > 0$).

Equation (1) is equivalent to the following singular linear problem:
$$B x = 0, \quad x \geq 0, \quad \|x\|_1 = 1, \tag{2}$$
where $B = I - A$, with $I$ being the identity matrix; $B$ is a singular $M$-matrix whose diagonal elements are the negative column sums of its off-diagonal elements.
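As a small concrete illustration of problem (1) (the 3-state chain below is hypothetical, not taken from the paper), the stationary vector can be approximated by the power method, the classical baseline that the Krylov and extrapolation methods discussed in this paper aim to outperform:

```python
import numpy as np

# A hypothetical irreducible 3-state chain; each column sums to 1.
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])

def power_method(A, tol=1e-12, maxit=10000):
    """Approximate the stationary vector: A x = x, x > 0, ||x||_1 = 1."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)          # uniform starting vector
    for _ in range(maxit):
        x_new = A @ x
        x_new /= x_new.sum()         # renormalize in the 1-norm
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x_new

x = power_method(A)
```

The convergence rate of this iteration is governed by the subdominant eigenvalue of $A$, which is exactly why acceleration becomes attractive when that eigenvalue is close to one.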

Now there are many numerical methods for solving (2), such as the traditional iterative methods, for example, the power method or the weighted Jacobi method [5, 10–12], the aggregation/disaggregation algorithms [13–16], and Krylov subspace methods like the Arnoldi method and GMRES [3, 17–21]. However, these iterative methods for calculating $x$ may converge very slowly when the subdominant eigenvalue $\lambda_2$ of $A$ satisfies $|\lambda_2| \approx 1$ [3, 11]. Thus, accelerating the calculation becomes quite necessary in Markov chain problems.

In this paper, we consider the restarted GMRES method and propose a new way to accelerate its numerical calculation by means of polynomial-type vector extrapolation methods. In fact, using vector extrapolation methods as accelerators is very popular; see [5, 12, 22] for details. In discussing this issue, we start from the mechanisms of the GMRES($m$) method and the vector extrapolation methods and then illustrate how to periodically interleave the two. In the numerical experiments, the proposed extrapolation-accelerated GMRES($m$) methods are tested on several Markov chain problems, and the experimental results show their effectiveness.

This paper is organized as follows. Section 2 briefly describes the mechanism of the GMRES methods and some accelerated variants. In Section 3, we first review the vector extrapolation methods and then consider how to periodically combine an extrapolation method with the GMRES method for the numerical solution of Markov chains. Section 4 provides experimental evidence of the effectiveness of our approach. Section 5 summarizes the paper and points out directions for our future research.

#### 2. Background

In this section, we briefly introduce the mechanism of the GMRES methods and describe some existing modifications to the standard GMRES aimed at accelerating its convergence.

The GMRES method, proposed by Saad and Schultz in [20], is a popular method for the iterative solution of sparse linear systems with an unsymmetric matrix,
$$A x = b. \tag{3}$$
If $x_0$ is an initial guess for the true solution of (3) and $r_0 = b - A x_0$ is the initial residual, we have the equivalent system
$$A z = r_0, \tag{4}$$
where $z = x - x_0$. Let $\mathcal{K}_m(A, r_0)$ be the Krylov subspace
$$\mathcal{K}_m(A, r_0) = \operatorname{span}\{r_0, A r_0, \dots, A^{m-1} r_0\}. \tag{5}$$
GMRES finds an approximate solution $x_m \in x_0 + \mathcal{K}_m(A, r_0)$ such that
$$\|b - A x_m\|_2 = \min_{x \in x_0 + \mathcal{K}_m(A, r_0)} \|b - A x\|_2. \tag{6}$$
Here, $\|\cdot\|_2$ denotes the Euclidean norm on $\mathbb{R}^n$, as well as the associated induced matrix norm.

GMRES is often referred to as an "optimal" method in the sense that it finds the approximate solution in the Krylov subspace that minimizes the residual [20]. However, the amounts of storage and computational work grow quadratically with the number of steps. So the restarted version of the algorithm is often used, as suggested in [20]. In restarted GMRES (GMRES($m$)), the method is restarted after each cycle of $m$ iteration steps, and the current approximate solution becomes the new initial guess for the next cycle. The mechanism of GMRES($m$) for Markov chains, applied to system (2), is described as follows. For more details, see [20, 23].

Algorithm 1. The Restarted GMRES (GMRES($m$)). (1) Start: choose an initial guess $x_0$, a restart number $m$, and a tolerance tol. (2) Arnoldi process: for $j = 1, 2, \dots, m$, build an orthonormal basis $V_m$ of $\mathcal{K}_m(A, r_0)$ together with the $(m+1) \times m$ Hessenberg matrix $\bar{H}_m$. (3) Form the approximate solution: $x_m = x_0 + V_m y_m$, where $y_m$ minimizes $\|\beta e_1 - \bar{H}_m y\|_2$ with $\beta = \|r_0\|_2$. (4) Restart: while $\|b - A x_m\|_2 > $ tol, set $x_0 = x_m$ and go to step 2.
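A compact sketch of Algorithm 1 might read as follows (a minimal dense-matrix GMRES($m$) assuming only NumPy; the small test system used below is invented for illustration):

```python
import numpy as np

def gmres_m(A, b, x0, m=10, tol=1e-10, max_restarts=100):
    """A minimal restarted GMRES: Arnoldi with modified Gram-Schmidt,
    a small least-squares problem, and a restart every m steps."""
    n = len(b)
    x = x0.astype(float).copy()
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:                      # converged
            return x
        V = np.zeros((n, m + 1))            # orthonormal Krylov basis
        H = np.zeros((m + 1, m))            # Hessenberg matrix
        V[:, 0] = r / beta
        for j in range(m):                  # Arnoldi process
            w = A @ V[:, j]
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] > 1e-14:
                V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H, e1, rcond=None)  # min ||beta e1 - H y||
        x = x + V[:, :m] @ y                # new initial guess for next cycle
    return x
```

Note how each restart throws away the basis `V`; this is precisely the information loss that the acceleration techniques discussed below try to compensate for.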

In general, the convergence rate depends on the restart parameter $m$ [24, 25]. Even when an appropriate restart parameter yields satisfactory convergence, the convergence behavior may not be optimal: since the iteration is restarted, the approximation space built during a cycle is discarded at each restart, and orthogonality between the approximation spaces generated at successive restarts is not preserved. For that reason, slow convergence can occur. It is therefore necessary to develop more efficient algorithms that enhance the robustness of restarted GMRES, and its improvements and modifications continue to receive considerable attention. Augmented methods are a class of acceleration techniques which seek to avoid stalling by retaining useful information in GMRES at the time of the restart. Such acceleration methods are presented by Morgan [19, 26], Saad [27], and Chapman and Saad [28]. Preconditioning techniques are also often used to accelerate the numerical calculation of GMRES; see [17, 23, 29–32] for more details. This paper undertakes similar work, that is, speeding up the process and improving the robustness of restarted GMRES.

#### 3. The Main Algorithm and Practical Implementations

Let us first present the rationale behind vector extrapolation methods. When the solution of a system of linear or nonlinear equations is computed by an iterative method, a sequence of vectors (approximate solutions) is produced. As the classical iteration process may converge slowly, an extrapolation strategy can often be applied to enhance its convergence rate. A detailed review of extrapolation methods can be found in the works of Smith et al. [22], Sidi [12], and Jbilou and Sadok [33]. There are mainly two classes of vector extrapolation methods: (1) polynomial-type methods, namely, the minimal polynomial extrapolation (MPE) of Cabay and Jackson [34], the reduced rank extrapolation (RRE) of Eddy [35] and Mešina [36], and the modified minimal polynomial extrapolation (MMPE) of Sidi et al. [37], Brezinski [38], and Pugachev [39]; and (2) epsilon algorithms, namely, the topological epsilon algorithm of Brezinski [38] and the scalar and vector epsilon algorithms of Wynn [40]. Numerical experience has shown that the polynomial-type vector extrapolation methods are in general more economical than the epsilon algorithms with respect to computing time and storage requirements. In particular, Kamvar et al. have recently proposed a new extrapolation method, quadratic extrapolation, for computing PageRank in [5], and Sidi [12] has generalized this method (the generalization is denoted GQE) and proved that the resulting generalization is very closely related to MPE. Therefore, we focus on two polynomial extrapolation methods, namely, RRE and GQE, in this paper. We now present the implementations of these two algorithms. For more details, we refer the reader to the paper [12] by Sidi.

Vector extrapolation algorithms are derived by considering vector sequences $x_0, x_1, x_2, \dots$ generated from a fixed-point iterative method of the following form:
$$x_{i+1} = T x_i + c, \quad i = 0, 1, 2, \dots, \tag{7}$$
where $T$ is a fixed $n \times n$ matrix, $c$ is a fixed $n$-dimensional vector, and $x_0$ is an initial vector.

Suppose that we have produced a sequence of iterates $x_0, x_1, x_2, \dots$, where $x_{i+1} = T x_i + c$. Then, at the $j$th outer iteration, let $X = [x_j \mid x_{j+1} \mid \dots \mid x_{j+k+1}]$ be a matrix consisting of the last $k+2$ iterates, with $x_{j+k+1}$ being the newest, where $k$ is usually called the window size.

The problem to be solved is thus transformed into obtaining a weight vector $\gamma = (\gamma_0, \gamma_1, \dots, \gamma_k)^T$ satisfying $\sum_{i=0}^{k} \gamma_i = 1$, which gives an updated probability vector
$$s = \sum_{i=0}^{k} \gamma_i \, x_{j+i},$$
a linear combination of the iterates $x_j, \dots, x_{j+k}$. Under a suitable rank assumption on the matrix of successive differences of the iterates, $\gamma$ is the unique solution of a small linear system. As illustrated in [12], efficient implementations of the vector extrapolation methods (RRE and GQE) are presented as follows.

Algorithm 2. The Reduced Rank Extrapolation Method (RRE). (For notational simplicity, set $j = 0$.) (1) Input the vectors $x_0, x_1, \dots, x_{k+1}$. (2) Compute the differences $u_i = x_{i+1} - x_i$, $i = 0, 1, \dots, k$, set $U_k = [u_0 \mid u_1 \mid \dots \mid u_k]$, and compute the QR-factorization of $U_k$, namely, $U_k = Q_k R_k$. (3) Solve the linear system $R_k^T R_k d = e$, where $e = (1, 1, \dots, 1)^T$. This amounts to solving two triangular systems: $R_k^T a = e$ for $a$ and $R_k d = a$ for $d$. Set $\lambda = (\sum_{i=0}^{k} d_i)^{-1}$, and compute $\gamma = \lambda d$, with $\gamma = (\gamma_0, \gamma_1, \dots, \gamma_k)^T$. (4) Compute $\xi = (\xi_0, \xi_1, \dots, \xi_{k-1})^T$ by $\xi_0 = 1 - \gamma_0$ and $\xi_j = \xi_{j-1} - \gamma_j$, $j = 1, \dots, k-1$. (5) Compute the RRE approximation to the solution of the underlying fixed-point system by $s = x_0 + Q_{k-1}(R_{k-1} \xi)$, where $Q_{k-1}$ consists of the first $k$ columns of $Q_k$ and $R_{k-1}$ is the leading $k \times k$ block of $R_k$.
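A sketch of Algorithm 2 in NumPy (using NumPy's built-in QR in place of MGS; the fixed-point test iteration is invented for illustration) might look like this:

```python
import numpy as np

def rre(X):
    """Reduced Rank Extrapolation. X holds the iterates x_0, ..., x_{k+1}
    as columns; returns the extrapolated approximation s."""
    U = np.diff(X, axis=1)              # differences u_i = x_{i+1} - x_i
    Q, R = np.linalg.qr(U)              # U_k = Q_k R_k
    e = np.ones(U.shape[1])
    a = np.linalg.solve(R.T, e)         # R^T a = e  (lower triangular solve)
    d = np.linalg.solve(R, a)           # R d = a    (upper triangular solve)
    gamma = d / d.sum()                 # weights with sum(gamma) = 1
    xi = 1.0 - np.cumsum(gamma[:-1])    # xi_0 = 1 - gamma_0, xi_j = xi_{j-1} - gamma_j
    k = U.shape[1] - 1
    return X[:, 0] + Q[:, :k] @ (R[:k, :k] @ xi)
```

Because $\gamma$ minimizes $\|U_k \gamma\|_2$ subject to $\sum_i \gamma_i = 1$, the residual of the extrapolated vector with respect to the fixed-point iteration is never larger than that of the last input iterate.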

Algorithm 3. The Generalization of Quadratic Extrapolation (GQE). (1) Input the vectors $x_0, x_1, \dots, x_{k+1}$. (2) Compute $u_i = x_{i+1} - x_i$, $i = 0, 1, \dots, k$, set $U_k = [u_0 \mid u_1 \mid \dots \mid u_k]$, and compute the QR-factorization of $U_k$, namely, $U_k = Q_k R_k$. (3) Solve the triangular linear system $R_{k-1} c' = -\rho_k$, where $\rho_k$ consists of the first $k$ entries of the last column of $R_k$ and $c' = (c_0, c_1, \dots, c_{k-1})^T$. (4) Set $c_k = 1$ and compute $\gamma = (\gamma_0, \dots, \gamma_k)^T$ by $\gamma_i = c_i / \sum_{j=0}^{k} c_j$ (thus, $\sum_{i=0}^{k} \gamma_i = 1$). (5) Compute $\xi$ and the approximation $s = x_0 + Q_{k-1}(R_{k-1} \xi)$ as in steps (4) and (5) of Algorithm 2.
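A corresponding sketch of Algorithm 3 (written here with MPE-style coefficients, to which [12] shows GQE is very closely related; the 3-state test iteration is invented) could be:

```python
import numpy as np

def gqe(X):
    """MPE-style extrapolation in the spirit of Algorithm 3 (GQE is
    closely related to MPE [12]). X holds x_0, ..., x_{k+1} as columns."""
    U = np.diff(X, axis=1)              # u_i = x_{i+1} - x_i
    Q, R = np.linalg.qr(U)              # U_k = Q_k R_k
    k = U.shape[1] - 1
    c = np.empty(k + 1)
    c[:k] = np.linalg.solve(R[:k, :k], -R[:k, k])   # R_{k-1} c' = -rho_k
    c[k] = 1.0
    gamma = c / c.sum()                 # normalize so sum(gamma) = 1
    xi = 1.0 - np.cumsum(gamma[:-1])
    return X[:, 0] + Q[:, :k] @ (R[:k, :k] @ xi)
```

When the window size equals the degree of the minimal polynomial of the iteration matrix with respect to the first difference, this extrapolation reproduces the fixed point exactly, which the test below exercises on a small linear iteration.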

Clearly, Algorithms 2 and 3 share a common feature: both contain a QR-factorization in step 2 for $U_k$, where $Q_k$ has orthonormal columns and $R_k$ is an upper triangular matrix with positive diagonal elements. In addition, the QR-factorization can always be carried out inexpensively by applying the modified Gram-Schmidt process (MGS) to the vectors $u_0, u_1, \dots, u_k$; see [12] for more details. The MGS algorithm is given as follows.

MGS Algorithm. (1) Compute $r_{00} = \|u_0\|_2$, and set $q_0 = u_0 / r_{00}$. (2) For $j = 1, \dots, k$, set $u_j^{(0)} = u_j$. (3) For $i = 0, 1, \dots, j-1$, compute $r_{ij} = q_i^T u_j^{(i)}$ and $u_j^{(i+1)} = u_j^{(i)} - r_{ij} q_i$. (4) Compute $r_{jj} = \|u_j^{(j)}\|_2$ and $q_j = u_j^{(j)} / r_{jj}$.
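The MGS steps above translate directly into a short routine (the test matrix is arbitrary):

```python
import numpy as np

def mgs(U):
    """Modified Gram-Schmidt QR factorization: U = Q R, with Q having
    orthonormal columns and R upper triangular with positive diagonal."""
    n, m = U.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for j in range(m):
        v = U[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ v       # project against the *updated* v (MGS)
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R
```

The key difference from classical Gram-Schmidt is that each coefficient $r_{ij}$ is computed against the already-updated vector, which makes the process markedly more stable in floating point.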

As previously mentioned, the restarting of GMRES may entail slow convergence, since the current approximation space is discarded at each restart cycle. Meanwhile, changing the restart parameter $m$ affects both the number of iterations and the execution time of GMRES. Furthermore, increasing $m$ decreases the number of iterations needed to converge while increasing the computational work and storage required per iteration, especially for large systems of equations such as large-scale Markov chains and PageRank computation in information retrieval. It is therefore very necessary to enhance the robustness of GMRES($m$), and our work falls into the category of accelerating techniques. Next, we discuss the idea and implementation of the new method.

Recall that in restarted GMRES, once the Krylov subspace reaches dimension $m$, the current approximate solution becomes the new initial guess for the next cycle. The motivation of our new method comes from these successive restarts every $m$ iterations. To be specific, we take the current approximations produced by successive restart cycles as the input vectors for the extrapolation procedure. The resulting vector produced by the extrapolation method, in turn, becomes the new improved starting vector for the next GMRES($m$) cycle. In summary, the mechanism of our new restarted GMRES method can be characterized as follows.

Algorithm 4. The Extrapolation-Accelerated GMRES($m$) (EGMRES). (1) Choose $x_0$, $m$, $k$, tol, and set $j = 0$. (2) Compute the iterates $x_{j+1}, \dots, x_{j+k+1}$ using Algorithm 1 (GMRES($m$)). (3) Set $X = [x_j \mid x_{j+1} \mid \dots \mid x_{j+k+1}]$. (4) Compute the extrapolated vector $s$ from $X$ by applying Algorithm 2 (RRE) or Algorithm 3 (GQE), respectively. (5) If $\|B s\|_1 < \|B x_{j+k+1}\|_1$, then set $x_{j+k+1} = s$. (6) Check convergence. If $\|B x_{j+k+1}\|_1 < $ tol, stop. Otherwise, set $j = j + k + 1$ and go to step 2.

Note that, in step 1, tol is the user-prescribed tolerance for the residual 1-norms in GMRES($m$), $m$ is the restart number in GMRES, and $k$ is the window size for extrapolation. From Algorithms 2 and 3, both the RRE and GQE methods require $k + 2$ vectors as input. For instance, when $k = 2$, four approximate vectors are needed as input to the extrapolation procedure. Step 5 checks the efficiency of the extrapolation strategy in the current iterations. Step 6 determines when to flip-flop between the extrapolation procedure and the GMRES procedure. The performance of our new algorithm and comparisons with the original algorithms are discussed in detail below.
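Putting the pieces together, the flip-flop structure of Algorithm 4 can be sketched in a self-contained way. The sketch below makes several simplifying assumptions: the 4-state chain is made up, the inner GMRES($m$) cycle is implemented naively via an explicit Krylov basis and a least-squares solve rather than a stable Arnoldi process, and RRE serves as the accelerator:

```python
import numpy as np

def gmres_cycle(B, x, m):
    """One simplified GMRES(m) cycle for B x = 0: minimize ||B(x + K y)||
    over an explicit Krylov basis K (a sketch, not stable Arnoldi code)."""
    r = -B @ x
    K = r.reshape(-1, 1)
    for _ in range(m - 1):
        K = np.column_stack([K, B @ K[:, -1]])
    y, *_ = np.linalg.lstsq(B @ K, r, rcond=None)
    z = x + K @ y
    return z / z.sum()                  # keep the 1-norm normalization

def rre(X):
    """Reduced Rank Extrapolation (as in Algorithm 2)."""
    U = np.diff(X, axis=1)
    Q, R = np.linalg.qr(U)
    d = np.linalg.solve(R, np.linalg.solve(R.T, np.ones(U.shape[1])))
    gamma = d / d.sum()
    xi = 1.0 - np.cumsum(gamma[:-1])
    k = U.shape[1] - 1
    return X[:, 0] + Q[:, :k] @ (R[:k, :k] @ xi)

def egmres(A, m=3, k=2, tol=1e-8, max_outer=50):
    """Flip-flop between GMRES(m) cycles and RRE, as in Algorithm 4."""
    n = A.shape[0]
    B = np.eye(n) - A                   # singular system (2): B x = 0
    res = lambda v: np.abs(B @ v).sum() # residual 1-norm
    x = np.full(n, 1.0 / n)
    for _ in range(max_outer):
        X = [x]
        for _ in range(k + 1):          # k + 2 iterates feed the extrapolation
            X.append(gmres_cycle(B, X[-1], m))
        if res(X[-1]) < tol:            # already converged inside the cycles
            x = X[-1]
            break
        s = rre(np.column_stack(X))
        s = s / s.sum()
        x = s if res(s) < res(X[-1]) else X[-1]   # step 5: keep the better vector
        if res(x) < tol:                # step 6: convergence check
            break
    return x

# A hypothetical irreducible 4-state chain (columns sum to 1).
A = np.array([[0.5, 0.3, 0.0, 0.2],
              [0.2, 0.4, 0.3, 0.0],
              [0.2, 0.2, 0.4, 0.3],
              [0.1, 0.1, 0.3, 0.5]])
x = egmres(A)
```

The guard in step 5 is important in practice: when the GMRES iterates are already nearly stationary, the difference matrix fed to RRE is nearly rank-deficient and the extrapolated vector can be poor, so it is accepted only when it actually lowers the residual.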

#### 4. Numerical Results in Markov Chain Problems

In this section, we report numerical experiments that examine the efficiency of our new accelerated algorithm in the numerical solution of the stationary probability vector of Markov chains. Three typical Markov chain problems are used in our experiments. All the numerical results are obtained with a MATLAB R2010a implementation on Windows 8 with a 2.5 GHz i5 processor and 4 GB of memory.

For fairness, the same starting vector $x^{(0)} = e/n$, with $e$ being the column vector of all ones, is used for all the algorithms. All the iterations are terminated when the residual 1-norm of the approximation $x^{(i)}$ obtained at the current iteration falls below the prescribed tolerance. For convenience, in all the tables and figures below, we abbreviate the RRE-accelerated GMRES algorithm and the GQE-accelerated GMRES algorithm as RGMRES and GGMRES, respectively. We denote by "rest" the restart number in GMRES, by "CPU" the CPU time used in seconds, by "it" the iteration count, and by "Res" the residual 1-norm.

Example 1. In this example, we compare the GMRES($m$) algorithm, the RGMRES algorithm, and the GGMRES algorithm on a Markov chain problem. This test problem is a one-dimensional (1D) Markov chain, often arising from a queueing system. The graph of this problem is displayed in Figure 1, where the transition rates are identical. Numerical results are presented in Table 1.

Table 1: Numerical results of the three algorithms for one-dimensional Markov chain with identical transition rates.
Figure 1: Graph for one-dimensional Markov chain with identical transition rates.

It is easy to see from Table 1 that the accelerated GMRES is more effective than the unaccelerated one, both in CPU time and in the number of iterations, regardless of the restart number of GMRES. In particular, the GMRES accelerated by GQE (GGMRES) performs best in most cases. For instance, when the restart number is 20, in comparison with the results of unaccelerated GMRES, both the CPU time and the iteration count are reduced considerably for RGMRES and even further for GGMRES. In Figure 2, we compare the convergence speed of the three algorithms. It is clear that the convergence rate is enhanced greatly by the accelerating technique in GMRES, and in particular GGMRES converges faster than the other algorithms, meeting the convergence tolerance quickly.

Figure 2: Convergence curves of the three algorithms for Example 1.

Example 2. This test problem is displayed in Figure 3; it is a 1D chain with uniform weights, except for two weak links of small weight in the middle of the chain. This is a typical nearly completely decomposable (NCD) Markov chain problem, which has been discussed in [41–43]. We run the GMRES algorithm and the extrapolation-accelerated GMRES algorithms, namely, RGMRES and GGMRES, on this problem for different values of the extrapolation window size. All the algorithms are stopped as soon as the residual 2-norm falls below the prescribed tolerance. The problem size is fixed in this simulation, and the experimental results are listed in Table 2.

Table 2: Numerical comparisons of the three algorithms for the uniform chain with two weak links in the middle.
Figure 3: Graph for a uniform chain with two weak links in the middle.

It is seen from Table 2 that the GMRES methods accelerated by vector extrapolation perform better than the unaccelerated GMRES method, both in iteration counts and in CPU time, regardless of the window size. Obviously, the GMRES method accelerated by GQE outperforms the other two methods. For instance, when the window size is 4, the iteration count of GGMRES is only a fraction of that of RGMRES and an even smaller fraction of that of GMRES. Especially from the viewpoint of CPU time, the GGMRES method has a clear advantage over the GMRES method and the RGMRES method, markedly reducing the convergence time compared with both.

Example 3. This problem is a 2D lattice (grid) with uniform weights, which has been discussed in [43, 44]. Aggregates of fixed size are used, and the gauge variables are defined on the lattice edges and are scalars with value one, as shown in Figure 4.
In this problem, we run GMRES, RGMRES, and GGMRES and compare their convergence rates while varying the problem size, with the restart number being 10 and the window size being 2. Numerical results are presented in Table 3. The numerical simulation reported in Table 3 for the 2D lattice problem clearly demonstrates the effectiveness of acceleration by vector extrapolation methods. It is seen that the RGMRES algorithm converges faster than the GMRES method, while the GGMRES algorithm performs best.

Table 3: Numerical results of the three algorithms with various problem sizes for uniform 2D lattice.
Figure 4: Graph for a 2D lattice with uniform weights.

#### 5. Conclusions

In this paper, we have presented a new GMRES method, accelerated by vector extrapolation techniques, for computing the stationary probability vector of an irreducible Markov chain. Experimental results on several typical Markov chain problems demonstrate that vector extrapolation is an attractive option for accelerating Markov chain computations, especially with the GQE method (proposed by Sidi in [12]) as the accelerator. We note that, as mentioned previously, preconditioning techniques may also be an appropriate strategy for improving the convergence rate in Markov chain problems; this will be one of our future research topics in this field.

#### Acknowledgments

This research is supported by Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), NSFC (61170309), Sichuan Province Sci. and Tech. Research Project (2012GZX0080), and Scientific Research Fund of Sichuan Provincial Education Department (T11008).

#### References

1. W. K. Ching, “Iterative methods for queuing systems with batch arrivals and negative customers,” BIT, vol. 43, no. 2, pp. 285–296, 2003.
2. B. Meini, “Solving M/G/1 type Markov chains: recent advances and applications,” Communications in Statistics, vol. 14, no. 1-2, pp. 479–496, 1998.
3. W. J. Stewart, An Introduction to the Numerical Solution of Markov Chains, Princeton University Press, Princeton, NJ, USA, 1994.
4. A. Z. Broder, R. Lempel, F. Maghoul, and J. Pedersen, “Efficient PageRank approximation via graph aggregation,” Information Retrieval, vol. 9, no. 2, pp. 123–138, 2006.
5. S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub, “Extrapolation methods for accelerating PageRank computations,” in Proceedings of the 12th International World Wide Web Conference, Budapest, Hungary, May 2003.
6. A. N. Langville and C. D. Meyer, “A survey of eigenvector methods for Web information retrieval,” SIAM Review, vol. 47, no. 1, pp. 135–161, 2005.
7. L. Page, S. Brin, R. Motwani, and T. Winograd, “The PageRank citation ranking: bringing order to the web,” Tech. Rep. 1999-0120, Computer Science Department, Stanford, Calif, USA, 1999.
8. A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, Pa, USA, 1987.
9. H. De Sterck, T. A. Manteuffel, S. F. McCormick, Q. Nguyen, and J. Ruge, “Multilevel adaptive aggregation for Markov chains, with application to web ranking,” SIAM Journal on Scientific Computing, vol. 30, no. 5, pp. 2235–2262, 2008.
10. S. Kamvar, T. Haveliwala, and G. Golub, “Adaptive methods for the computation of PageRank,” Linear Algebra and Its Applications, vol. 386, pp. 51–65, 2004.
11. B. Philippe, Y. Saad, and W. J. Stewart, “Numerical methods in Markov Chain modeling,” Operations Research, vol. 40, pp. 1156–1179, 1992.
12. A. Sidi, “Vector extrapolation methods with applications to solution of large systems of equations and to PageRank computations,” Computers & Mathematics with Applications, vol. 56, no. 1, pp. 1–24, 2008.
13. I. Marek and P. Mayer, “Convergence analysis of an iterative aggregation/disaggregation method for computing stationary probability vectors of stochastic matrices,” Numerical Linear Algebra with Applications, vol. 5, no. 4, pp. 253–274, 1998.
14. I. Marek and P. Mayer, “Convergence theory of some classes of iterative aggregation/disaggregation methods for computing stationary probability vectors of stochastic matrices,” Linear Algebra and Its Applications, vol. 363, pp. 177–200, 2003.
15. P. J. Schweitzer and K. W. Kindle, “An iterative aggregation-disaggregation algorithm for solving linear equations,” Applied Mathematics and Computation, vol. 18, no. 4, pp. 313–354, 1986.
16. W. J. Stewart and W. L. Cao, “Iterative aggregation/disaggregation techniques for nearly uncoupled Markov chains,” Journal of the Association for Computing Machinery, vol. 32, no. 3, pp. 702–719, 1985.
17. A. H. Baker, E. R. Jessup, and T. Manteuffel, “A technique for accelerating the convergence of restarted GMRES,” SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 4, pp. 962–984, 2005.
18. G. H. Golub and C. Greif, “An Arnoldi-type algorithm for computing PageRank,” BIT, vol. 46, no. 4, pp. 759–771, 2006.
19. R. B. Morgan, “Implicitly restarted GMRES and Arnoldi methods for nonsymmetric systems of equations,” SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1112–1135, 2000.
20. Y. Saad and M. H. Schultz, “GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems,” SIAM Journal on Scientific and Statistical Computing, vol. 7, no. 3, pp. 856–869, 1986.
21. G. Wu and Y. Wei, “An Arnoldi-extrapolation algorithm for computing PageRank,” Journal of Computational and Applied Mathematics, vol. 234, no. 11, pp. 3196–3212, 2010.
22. D. A. Smith, W. F. Ford, and A. Sidi, “Extrapolation methods for vector sequences,” SIAM Review, vol. 29, no. 2, pp. 199–233, 1987.
23. J. Baglama, D. Calvetti, G. H. Golub, and L. Reichel, “Adaptively preconditioned GMRES algorithms,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 243–269, 1999.
24. M. Embree, “The tortoise and the hare restart GMRES,” Tech. Rep. 01/22, Oxford University Computing Laboratory Numerical Analysis, Oxford, UK, 2001.
25. W. Joubert, “On the convergence behavior of the restarted GMRES algorithm for solving nonsymmetric linear systems,” Numerical Linear Algebra with Applications, vol. 1, no. 5, pp. 427–447, 1994.
26. R. B. Morgan, “A restarted GMRES method augmented with eigenvectors,” SIAM Journal on Matrix Analysis and Applications, vol. 16, no. 4, pp. 1154–1171, 1995.
27. Y. Saad, “Analysis of augmented Krylov subspace methods,” SIAM Journal on Matrix Analysis and Applications, vol. 18, no. 2, pp. 435–449, 1997.
28. A. Chapman and Y. Saad, “Deflated and augmented Krylov subspace techniques,” Numerical Linear Algebra with Applications, vol. 4, no. 1, pp. 43–66, 1997.
29. M. Benzi and B. Uçar, “Product preconditioning for Markov chain problems,” in Proceedings of the 2006 Markov Anniversary Meeting, Charleston, SC, USA, A. N. Langville and W. J. Stewart, Eds., pp. 239–256, Boson Books, Raleigh, NC, USA.
30. M. Benzi and B. Uçar, “Block triangular preconditioners for M-matrices and Markov chains,” Electronic Transactions on Numerical Analysis, vol. 26, pp. 209–227, 2007.
31. J. Erhel, K. Burrage, and B. Pohl, “Restarted GMRES preconditioned by deflation,” Journal of Computational and Applied Mathematics, vol. 69, no. 2, pp. 303–318, 1996.
32. S. A. Kharchenko and A. Yu. Yeremin, “Eigenvalue translation based preconditioners for the GMRES(k) method,” Numerical Linear Algebra with Applications, vol. 2, no. 1, pp. 51–77, 1995.
33. K. Jbilou and H. Sadok, “Vector extrapolation methods. Applications and numerical comparison,” Journal of Computational and Applied Mathematics, vol. 122, no. 1-2, pp. 149–165, 2000.
34. S. Cabay and L. W. Jackson, “A polynomial extrapolation method for finding limits and antilimits of vector sequences,” SIAM Journal on Numerical Analysis, vol. 13, no. 5, pp. 734–752, 1976.
35. R. P. Eddy, “Extrapolating to the limit of a vector sequence,” in Information Linkage between Applied Mathematics and Industry, P. C. C. Wang, Ed., pp. 387–396, Academic Press, New York, NY, USA, 1979.
36. M. Mešina, “Convergence acceleration for the iterative solution of the equations $X = AX + f$,” Computer Methods in Applied Mechanics and Engineering, vol. 10, no. 2, pp. 165–173, 1977.
37. A. Sidi, W. F. Ford, and D. A. Smith, “Acceleration of convergence of vector sequences,” SIAM Journal on Numerical Analysis, vol. 23, no. 1, pp. 178–196, 1986.
38. C. Brezinski, “Généralisations de la transformation de Shanks, de la table de Padé et de l' ϵ-algorithme,” Calcolo, vol. 11, no. 4, pp. 317–360, 1975.
39. B. P. Pugachev, “Acceleration of the convergence of iterative processes and a method of solving systems of non-linear equations,” USSR Computational Mathematics and Mathematical Physics, vol. 17, no. 5, pp. 199–207, 1978.
40. P. Wynn, “Acceleration techniques for iterated vector and matrix problems,” Mathematics of Computation, vol. 16, pp. 301–322, 1962.
41. C. Isensee and G. Horton, “A multi-level method for the steady state solution of Markov chains,” in Simulation and Visualization, SCS, Magdeburg, Germany, 2004.
42. C. Isensee and G. Horton, “A multi-level method for the steady state solution of discrete-time Markov chains,” in Proceedings of the 2nd Balkan Conference in Informatics, pp. 413–420, Ohrid, Macedonia, November 2005.
43. H. De Sterck, T. A. Manteuffel, S. F. McCormick et al., “Smoothed aggregation multigrid for Markov chains,” SIAM Journal on Scientific Computing, vol. 32, no. 1, pp. 40–61, 2010.
44. H. De Sterck, K. Miller, G. Sanders, and M. Winlaw, “Recursively accelerated multilevel aggregation for Markov chains,” SIAM Journal on Scientific Computing, vol. 32, no. 3, pp. 1652–1671, 2010.