Research Article  Open Access
Convergence of Relaxed Matrix Parallel Multisplitting Chaotic Methods for H-Matrices
Abstract
Based on the methods presented by Song and Yuan (1994), we construct relaxed matrix parallel multisplitting chaotic generalized USAOR-style methods by introducing more relaxation parameters and analyze the convergence of our methods when the coefficient matrices are H-matrices or irreducible diagonally dominant matrices. The parameters can be adjusted suitably so that the convergence property of the methods can be substantially improved. Furthermore, we derive some applicable convergence results that are convenient for carrying out numerical experiments. Finally, we give some numerical examples, which show that our convergence results are applicable and easily carried out.
1. Introduction
For solving the large sparse linear system Ax = b, where A is an n x n real nonsingular matrix and b is a real n-vector, an iterative method is usually considered. The concept of the matrix multisplitting solution of linear systems was introduced by O’Leary and White [1] and further studied by many other authors. Frommer and Mayer [2] studied extrapolated relaxed matrix multisplitting methods and the matrix multisplitting SOR method. Mas et al. [3] analyzed nonstationary extrapolated relaxed and asynchronous relaxed methods. Gu et al. [4, 5] further studied relaxed nonstationary two-stage matrix multisplitting methods and the corresponding asynchronous schemes.
As is well known, matrix multisplitting iterative methods for linear systems take two basic forms. When all of the processors wait until they are updated with the results of the current iteration, the method is synchronous. When it is asynchronous, the processors begin the next iteration more or less independently of each other and may use possibly delayed iterative values of the output of the other processors. In view of the potential time saving inherent in them, asynchronous iterative methods, or chaotic methods as they are often called, have attracted much attention since the early paper of Chazan and Miranker [6] introduced them in the context of point iterative schemes. Naturally, a number of convergence results [7–21] have been obtained. Recently, the convergence of three relaxed matrix multisplitting chaotic AOR methods has been investigated in [11, 13, 14].
A collection of triples (M_i, N_i, E_i), i = 1, ..., α, is called a multisplitting of A if A = M_i − N_i is a splitting of A for i = 1, ..., α, and the E_i's, called weighting matrices, are nonnegative diagonal matrices such that E_1 + ... + E_α = I.
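As a concrete illustration of this definition, the toy sketch below (plain Python, with an arbitrarily chosen 3 x 3 tridiagonal matrix) builds two splittings A = M_i − N_i and diagonal weighting matrices whose diagonals sum to one entrywise; all names and the example matrix are our own illustrative choices, not taken from the paper.

```python
# Illustrative 3x3 multisplitting: A = M_i - N_i with nonnegative
# diagonal weighting matrices E_i (stored as diagonals) summing to I.
def mat_sub(X, Y):
    """Entrywise matrix subtraction for lists of lists."""
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]

# Two splittings: M_1 keeps the lower triangle of A, M_2 the upper triangle.
M1 = [[4.0, 0.0, 0.0], [-1.0, 4.0, 0.0], [0.0, -1.0, 4.0]]
M2 = [[4.0, -1.0, 0.0], [0.0, 4.0, -1.0], [0.0, 0.0, 4.0]]
N1, N2 = mat_sub(M1, A), mat_sub(M2, A)  # so A = M_i - N_i

# Nonnegative diagonal weights with E_1 + E_2 = I.
E1 = [0.5, 0.5, 0.5]
E2 = [0.5, 0.5, 0.5]
assert all(abs(e1 + e2 - 1.0) < 1e-12 for e1, e2 in zip(E1, E2))
```

Each processor i would iterate with its own pair (M_i, N_i), and the weighted combination of the local results uses the E_i.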
The unsymmetric accelerated overrelaxation (USAOR) method was proposed in [22] for the case when the diagonal elements of the coefficient matrix are nonzero. Without loss of generality, let the matrix be split as A = D − L − U, where D, L, and U are the nonsingular diagonal, strictly lower triangular, and strictly upper triangular parts of A, respectively. One USAOR iteration consists of a forward AOR sweep followed by a backward AOR sweep, governed by two pairs of real relaxation parameters.
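The following sketch implements one iteration of the classical point USAOR scheme as a forward AOR sweep followed by a backward AOR sweep; the parameter names (omega, gamma, omega2, gamma2) and the small test system are our own illustrative choices, not the paper's notation.

```python
def usaor_step(A, b, x, omega, gamma, omega2, gamma2):
    """One USAOR iteration: forward AOR sweep, then backward AOR sweep.

    Uses the splitting A = D - L - U (D diagonal, L strictly lower,
    U strictly upper), so L[i][j] = -A[i][j] for j < i, etc.
    """
    n = len(A)
    # Forward sweep: (D - gamma*L) y = ((1-omega)D + (omega-gamma)L + omega*U) x + omega*b
    y = x[:]
    for i in range(n):
        s = (1.0 - omega) * A[i][i] * x[i] + omega * b[i]
        for j in range(i):
            s += gamma * (-A[i][j]) * y[j] + (omega - gamma) * (-A[i][j]) * x[j]
        for j in range(i + 1, n):
            s += omega * (-A[i][j]) * x[j]
        y[i] = s / A[i][i]
    # Backward sweep: (D - gamma2*U) z = ((1-omega2)D + (omega2-gamma2)U + omega2*L) y + omega2*b
    z = y[:]
    for i in range(n - 1, -1, -1):
        s = (1.0 - omega2) * A[i][i] * y[i] + omega2 * b[i]
        for j in range(i + 1, n):
            s += gamma2 * (-A[i][j]) * z[j] + (omega2 - gamma2) * (-A[i][j]) * y[j]
        for j in range(i):
            s += omega2 * (-A[i][j]) * y[j]
        z[i] = s / A[i][i]
    return z

# Demo: strictly diagonally dominant tridiagonal system with exact solution (1, 1, 1).
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = [0.0, 0.0, 0.0]
for _ in range(100):
    x = usaor_step(A, b, x, omega=1.0, gamma=1.0, omega2=1.0, gamma2=1.0)
```

With omega = gamma = omega2 = gamma2 = 1 the iteration reduces to a symmetric Gauss-Seidel step, one of the special cases contained in the AOR/USAOR family.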
In this paper, we extend the point USAOR iterative method to a matrix multisplitting chaotic generalized USAOR method and analyze its convergence for H-matrices and irreducible diagonally dominant matrices.
Based on matrix multisplitting chaotic generalized AOR method [23], the matrix multisplitting chaotic generalized USAOR method is given by where is an iteration matrix and , with , and , real parameters.
Remark 1. For , , the GUSAOR method reduces to the GAOR method [11]; for , , , , the GUSAOR method reduces to the USAOR method [22].
The remainder of this paper is organized as follows. In Section 2, we introduce some notation and preliminaries. In Section 3, we present relaxed matrix parallel multisplitting chaotic GUSAOR-style methods for solving large nonsingular linear systems whose coefficient matrix is an H-matrix or an irreducible diagonally dominant matrix, and we analyze the convergence of our methods. In Section 4, we derive some applicable convergence results that are convenient for carrying out numerical experiments. In Section 5, we give some numerical examples, which show that our convergence results are easily carried out. Finally, we draw some conclusions.
2. Notation and Preliminaries
We will use the following notation. Let A = (a_ij) be an n x n matrix. By diag(A) we denote the diagonal matrix coinciding in its diagonal with A. For two n x n matrices A and B, we write A ≥ B if a_ij ≥ b_ij holds for all i, j. Calling A nonnegative if A ≥ 0, we say that A ≥ B if and only if A − B ≥ 0. These definitions carry over immediately to vectors by identifying them with n x 1 matrices. By |A| = (|a_ij|) we denote the absolute value of A. We denote by <A> the comparison matrix of A, where <A>_ii = |a_ii| and <A>_ij = −|a_ij| for i ≠ j. The spectral radius of a matrix A is denoted by ρ(A). It is well known that if A ≥ 0 and there exists a positive vector x such that Ax < x, then ρ(A) < 1.
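The comparison matrix defined above is easy to form in code; a minimal sketch (the function name is our own choice):

```python
def comparison_matrix(A):
    """Comparison matrix <A>: |a_ii| on the diagonal, -|a_ij| off it."""
    n = len(A)
    return [[abs(A[i][j]) if i == j else -abs(A[i][j]) for j in range(n)]
            for i in range(n)]
```

For example, the comparison matrix of [[2, -1], [3, -5]] is [[2, -1], [-3, 5]].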
Definition 2 (see [24]). Let A = (a_ij) be an n x n matrix. It is called (1) an L-matrix if a_ii > 0 for i = 1, ..., n, and a_ij ≤ 0 for i ≠ j; (2) an M-matrix if it is a nonsingular L-matrix satisfying A^(-1) ≥ 0; (3) an H-matrix if <A> is an M-matrix.
Lemma 3 (see [13]). If A is an H-matrix, then (1) |A^(-1)| ≤ <A>^(-1); (2) there exists a diagonal matrix D whose diagonal entries are positive such that <A>D is strictly diagonally dominant by rows.
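Lemma 3 says, in effect, that an H-matrix is generalized strictly diagonally dominant: some positive scaling vector d makes <A>d entrywise positive. The check below verifies such a certificate d; the function name and inputs are illustrative, not the paper's notation.

```python
def is_h_matrix_certificate(A, d):
    """Verify the sufficient certificate <A> d > 0 (entrywise) for a
    positive vector d, i.e. generalized strict diagonal dominance."""
    n = len(A)
    assert all(di > 0 for di in d), "certificate vector must be positive"
    for i in range(n):
        row = abs(A[i][i]) * d[i] - sum(abs(A[i][j]) * d[j]
                                        for j in range(n) if j != i)
        if row <= 0:
            return False
    return True
```

For a strictly diagonally dominant matrix, d = (1, ..., 1) already works; for a general H-matrix a nontrivial d may be needed.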
Lemma 4 (see [13]). Let A be an H-matrix and let the splitting A = M − N be an H-splitting. If D is the diagonal matrix defined in Lemma 3, then, in the weighted maximum norm induced by D, the iteration matrix M^(-1)N has norm less than 1.
Finally, a sequence of sets J_k ⊆ {1, ..., α}, k = 1, 2, ..., is admissible if every integer in {1, ..., α} appears infinitely often in the sets J_k, while such an admissible sequence is regulated if there exists a positive integer T such that each of the integers 1, ..., α appears at least once in any T consecutive sets of the sequence.
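The regulated condition can be checked mechanically on any finite prefix of the index sequence; a sketch (the names num_procs and T are ours, matching the roles of α and the window length above):

```python
def is_regulated(index_sets, num_procs, T):
    """Return True if every index 1..num_procs occurs in every window
    of T consecutive sets of the sequence (the 'regulated' condition)."""
    for start in range(len(index_sets) - T + 1):
        window = set().union(*index_sets[start:start + T])
        if not window.issuperset(range(1, num_procs + 1)):
            return False
    return True
```

For instance, the sequence {1}, {2}, {1}, {2} is regulated with T = 2 for two processors, while {1}, {1}, {2}, {1} is not.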
3. Relaxed Matrix Multisplitting Parallel Chaotic Methods
Using the models given in [7, 11, 13] and (4), we describe the corresponding three algorithms of relaxed matrix multisplitting chaotic GUSAOR-style methods as follows.
Algorithm 5 (given the initial vector). For k = 0, 1, ..., until convergence, compute in parallel the following iterative scheme with , , , , and , where is the th composition of the affine mapping satisfying
Remark 6. In Algorithm 5, by introducing a suitable positive relaxation parameter, we obtain the following relaxed Algorithm 7.
Algorithm 7 (given the initial vector). For k = 0, 1, ..., until convergence, compute in parallel the following iterative scheme with , , , , , and , where is defined as in Algorithm 5.
Remark 8. In Algorithm 7, if we further assume that the index sequence is admissible and regulated, we obtain the following Algorithm 9, the relaxed chaotic GUSAOR method.
Algorithm 9 (given the initial vector). For k = 0, 1, ..., until convergence, compute in parallel the following iterative scheme with , , , , , , and , where of Algorithm 5 are replaced by and .
Remark 10. The relaxed matrix multisplitting chaotic GUSAOR-style methods introduce more relaxation factors, so our methods generalize those of [5, 12, 13]. The parameters can be adjusted suitably so that the convergence property of each method can be substantially improved.
Using Lemmas 3 and 4, we can obtain the following convergence result for Algorithm 5.
Theorem 11. Let A be an H-matrix and , , a multisplitting of A. Assume that for , we have the following. (1) are strictly lower triangular matrices and are matrices such that the equalities hold. (2) , where . Then the sequence generated by Algorithm 5 converges for any initial vector if and only if , , , where
Proof. Define the iteration matrix in Algorithm 5
where
Obviously, we have to find a constant with and some norm, which are independent of , such that for , .
We first note that the matrices and are matrices for . Thus by Lemma 3 we have the following inequalities:
From this relation, it follows that
Case 1. Let , , . Then
Define
So, for , we have the following relations
Since, for , and are both H-matrices and , , the splittings and are H-splittings of and , respectively, which are H-matrices. So, from Lemma 4, we may complete the proof as follows.
Case 2. Let , , .
Assume that , . Then we have
Similar to Case 1, we define
Then
It is clear that and are both H-matrices. Since, for , and are H-matrices, and , the splittings and are H-splittings of the H-matrices and , respectively.
Thus, for Cases 1 and 2, from Lemma 4, we have
So
which implies
If we define
Then we have
According to Algorithm 7, we can also get the following result.
Theorem 12. Let A be an H-matrix and , , a multisplitting of A. Assume that for , we have the following. (1) are strictly lower triangular matrices and are matrices such that the equalities hold. (2) , where . (3) D is the diagonal matrix defined in Lemma 3 and , , , and in Theorem 11. Then the sequence generated by Algorithm 7 converges for any initial vector if and only if , , with and , where
Proof. Define the iteration matrix in Algorithm 7, where . Obviously, similar to Theorem 11, we only need to find a constant with , which is independent of , such that . From the proof of Theorem 11, we can get . Now let denote the last term in the above inequalities. Clearly, if , , , then
Using the proofs of Theorems 11 and 12 and [13, Theorem 2.8], we have the following convergence result for Algorithm 9.
Theorem 13. Let A be an H-matrix and , , a multisplitting of A. Assume that for , we have the following. (1) are strictly lower triangular matrices and are matrices such that the equalities hold. (2) , where . (3) D is the diagonal matrix defined in Lemma 3 and , , and in Theorem 11. (4) The index sequence is admissible and regulated. Then the sequence generated by Algorithm 9 converges for any initial vector if and only if , , with and , where
Remark 14. It is known that an M-matrix or a symmetric positive definite matrix is also an H-matrix. Therefore, the convergence results in Theorems 11, 12, and 13 remain valid for these classes.
Remark 15. Since a strictly diagonally dominant or an irreducibly diagonally dominant matrix also satisfies the conditions of Theorems 11, 12, and 13, our methods are also valid for these kinds of matrices. Furthermore, for strictly or irreducibly diagonally dominant matrices, can take the place of in Theorems 11, 12, and 13.
4. Applied Convergence of Relaxed Chaotic Methods
In Theorems 12 and 13 of this paper, we find that is difficult to compute when carrying out numerical experiments. So, for irreducible diagonally dominant matrices, we also give the following applicable convergence results for Algorithms 7 and 9, respectively.
Theorem 16. Let A be an irreducible diagonally dominant matrix and , , a multisplitting of A. Assume that for , we have the following. (1) are strictly lower triangular matrices and are matrices such that the equalities hold. (2) , where . (3) D is the diagonal matrix defined in Lemma 3 and , , , and in Theorem 11. Then the sequence generated by Algorithm 7 converges for any initial vector if and only if , , with , where
Proof. We only prove ( ). Let us define the iteration matrix in Algorithm 7. From the proof of Theorem 11, we may define . At first, we need to prove . Similarly, we may also prove . Assume that . With (36), we know that is nonnegative, so, according to [24, Theorem 2.7], there exists an eigenvector , , such that holds; that is, . Multiplying by , it holds that . As , it follows by [5, Theorem 11] that . From the proof of [3, Theorem 3.1], we have , . Since the matrices , , and are irreducible, by [24, Theorem 2.1], it follows that . With (40) and (41) and being irreducible diagonally dominant by rows, we have . Notice that if , we may obtain , while if , we also have . From (42), (43), and (44), we can get . From Theorem 11, we have . From (35) and the above proof, we have , where , , .
Remark 17. Obviously, the convergence results of Theorem 16 are convenient for carrying out numerical experiments. Using the proofs of Theorems 11 and 16 and [13, Theorem 2.8], we can get the following result for Algorithm 9.
Theorem 18. Let A be an irreducible diagonally dominant matrix and , , a multisplitting of A. Assume that for , we have the following. (1) are strictly lower triangular matrices and are matrices such that the equalities hold. (2) , where . (3) D is the diagonal matrix defined in Lemma 3 and , , , and in Theorem 11. (4) The index sequence is admissible and regulated. Then the sequence generated by Algorithm 9 converges for any initial vector if and only if , , with , where
Remark 19. As a special case, for the relaxed matrix multisplitting chaotic GSSOR-style methods, we have the corresponding convergence results, where , , , with , and real parameters.
5. Numerical Examples
Example 1. Using a finite difference discretization of a partial differential equation, we can obtain the corresponding coefficient matrix of the linear system Ax = b, which is as follows:
Now we apply the results of Theorem 16 to Algorithm 7. From Algorithm 7, we can get the iterative matrix. Here, we assume that , . By direct calculation with Matlab 7.1, we have. In Table 1, we show that the convergence results of Section 4 are convenient for carrying out numerical experiments, where denotes the spectral radius of the iterative matrix:

Remark 20. Obviously, of Theorems 16 and 18 is applicable and easily calculated when carrying out numerical experiments.
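When the iteration matrix is available explicitly, as in Example 1, its spectral radius can also be estimated without Matlab by simple power iteration. This rough sketch assumes the matrix has a dominant eigenvalue (which holds, e.g., for nonnegative iteration matrices with a positive dominant eigenvector); the function name is our own.

```python
def spectral_radius_estimate(T, iters=500):
    """Power-iteration estimate of the spectral radius of T.

    Reliable only when T has a single dominant eigenvalue; otherwise
    an eigenvalue routine should be used instead.
    """
    n = len(T)
    x = [1.0] * n
    rho = 0.0
    for _ in range(iters):
        y = [sum(T[i][j] * x[j] for j in range(n)) for i in range(n)]
        rho = max(abs(v) for v in y)  # scale factor ~ |dominant eigenvalue|
        if rho == 0.0:
            return 0.0
        x = [v / rho for v in y]
    return rho
```

An estimated radius below 1 is a quick sanity check that the assembled iteration matrix is convergent.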
Example 2. Consider a matrix of the form where , is a real number, satisfying .
In Theorem 11, let , , , , ; then Algorithm 5 converges for any initial vector . In Theorems 12 and 13, let , , , , , ; then Algorithms 7 and 9 converge for any initial vector . In Theorems 16 and 18, let , , , , , ; then Algorithms 7 and 9 converge for any initial vector , where .
If we choose , in Theorem 11, let , , , , ; then Algorithm 5 converges for any initial vector . In Theorems 12 and 13, let , , , , , ; then Algorithms 7 and 9 converge for any initial vector . In Theorems 16 and 18, let , , , , , ; then Algorithms 7 and 9 converge for any initial vector , where .
6. Conclusions
In this paper, we have considered relaxed matrix parallel multisplitting chaotic GUSAOR-style methods for solving linear systems of algebraic equations Ax = b, in which the coefficient matrix is an H-matrix or an irreducible diagonally dominant matrix, and analyzed the convergence of our methods, which use more relaxation factors and generalize the methods of [11, 13, 14]. The parameters can be adjusted suitably so that the convergence property of each method can be substantially improved. Furthermore, we have derived some applicable convergence results that are convenient for carrying out numerical experiments. Finally, we have given some examples, which show that our convergence results are applicable and easily calculated.
In particular, one may discuss how to choose the set of relaxation parameters in order to truly accelerate the convergence of the considered methods; the optimal choice of this parameter set is a worthwhile subject for further study.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research is supported by NSFC Tianyuan Mathematics Youth Fund (11226337), NSFC (61203179, 61202098, 61170309, 91130024, 61033009, 61272544, and 11171039), Aeronautical Science Foundation of China (2013ZD55006), Project of Youth Backbone Teachers of Colleges and Universities in Henan Province (2013GGJS142), ZZIA Innovation Team Fund (2014TD02), Major Project of Development Foundation of Science and Technology of CAEP (2012A0202008), Basic and Advanced Technological Research Project of Henan Province (132300410373, 142300410333), and Natural Science Foundation of Henan Province (14B110023).
References
[1] D. P. O'Leary and R. E. White, “Multisplittings of matrices and parallel solution of linear systems,” SIAM Journal on Algebraic and Discrete Methods, vol. 6, no. 4, pp. 630–640, 1985.
[2] A. Frommer and G. Mayer, “Convergence of relaxed parallel multisplitting methods,” Linear Algebra and Its Applications, vol. 119, pp. 141–152, 1989.
[3] J. Mas, V. Migallón, J. Penadés, and D. B. Szyld, “Nonstationary parallel relaxed multisplitting methods,” Linear Algebra and Its Applications, vol. 241–243, pp. 733–747, 1992.
[4] T. X. Gu, X. P. Liu, and L. J. Shen, “Relaxed parallel two-stage multisplitting methods,” International Journal of Computer Mathematics, vol. 75, no. 3, pp. 351–367, 2000.
[5] T.-X. Gu and X.-P. Liu, “Parallel two-stage multisplitting iterative methods,” International Journal of Computer Mathematics, vol. 20, no. 2, pp. 153–166, 1998.
[6] D. Chazan and W. Miranker, “Chaotic relaxation,” Linear Algebra and Its Applications, vol. 2, pp. 199–222, 1969.
[7] R. Bru, L. Elsner, and M. Neumann, “Models of parallel chaotic iteration methods,” Linear Algebra and Its Applications, vol. 103, pp. 175–192, 1988.
[8] L. Elsner, M. Neumann, and B. Vemmer, “The effect of the number of processors on the convergence of the parallel block Jacobi method,” Linear Algebra and Its Applications, vol. 154–156, pp. 311–330, 1991.
[9] D. Jiang, Z. Xu, Z. Chen, Y. Han, and H. Xu, “Joint time-frequency sparse estimation of large-scale network traffic,” Computer Networks, vol. 55, no. 15, pp. 3533–3547, 2011.
[10] D.-D. Jiang, Z.-Z. Xu, H.-W. Xu, Y. Han, Z.-H. Chen, and Z. Yuan, “An approximation method of origin-destination flow traffic from link load counts,” Computers & Electrical Engineering, vol. 37, no. 6, pp. 1106–1121, 2011.
[11] P. E. Kloeden and D. J. Yuan, “Convergence of relaxed chaotic parallel iterative methods,” Bulletin of the Australian Mathematical Society, vol. 50, no. 1, pp. 167–176, 1994.
[12] S.-Q. Shen and T.-Z. Huang, “New comparison results for parallel multisplitting iterative methods,” Applied Mathematics and Computation, vol. 206, no. 2, pp. 738–747, 2008.
[13] Y. Song and D. Yuan, “On the convergence of relaxed parallel chaotic iterations for H-matrix,” International Journal of Computer Mathematics, vol. 52, no. 3-4, pp. 195–209, 1994.
[14] D. Yuan, “On the convergence of parallel multisplitting asynchronous GAOR method for H-matrix,” Applied Mathematics and Computation, vol. 160, no. 2, pp. 477–485, 2005.
[15] L. T. Zhang, T. Z. Huang, T. X. Gu, and X. L. Guo, “Convergence of relaxed multisplitting USAOR methods for H-matrices linear systems,” Applied Mathematics and Computation, vol. 202, no. 1, pp. 121–132, 2008.
[16] L. Zhang, T. Huang, T. Gu, X. Guo, and J. Yue, “Convergent improvement of SSOR multisplitting method for an H-matrix,” Journal of Computational and Applied Mathematics, vol. 225, no. 2, pp. 393–397, 2009.
[17] L.-T. Zhang, T.-Z. Huang, S.-H. Cheng, T.-X. Gu, and Y.-P. Wang, “A note on parallel multisplitting TOR method for H-matrices,” International Journal of Computer Mathematics, vol. 88, no. 3, pp. 501–507, 2011.
[18] L. T. Zhang, T. Z. Huang, S. H. Cheng, and T. X. Gu, “The weaker convergence of nonstationary matrix multisplitting methods for almost linear systems,” Taiwanese Journal of Mathematics, vol. 15, no. 4, pp. 1423–1436, 2011.
[19] L.-T. Zhang and J.-L. Li, “The weaker convergence of modulus-based synchronous multisplitting multiparameters methods for linear complementarity problems,” Computers & Mathematics with Applications, vol. 67, no. 10, pp. 1954–1959, 2014.
[20] L.-T. Zhang and X.-Y. Zuo, “Improved convergence theorems of multisplitting methods for the linear complementarity problem,” Applied Mathematics and Computation. In press.
[21] L.-T. Zhang, “A new preconditioner for generalized saddle matrices with highly singular (1,1) blocks,” International Journal of Computer Mathematics, 2013.
[22] Y. Zhang, “The USAOR iterative method for linear systems,” Numerical Mathematics, vol. 9, no. 4, pp. 354–365, 1987 (Chinese).
[23] Y. Song, “On the convergence of the generalized AOR method,” Linear Algebra and Its Applications, vol. 256, pp. 199–218, 1997.
[24] R. S. Varga, Matrix Iterative Analysis, Prentice Hall, Englewood Cliffs, NJ, USA, 1962.
Copyright
Copyright © 2014 LiTao Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.