The Scientific World Journal

Volume 2014 (2014), Article ID 273873, 10 pages

http://dx.doi.org/10.1155/2014/273873
Research Article

Convergence Results on Iteration Algorithms to Linear Systems

1School of Mathematical Science, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

2Department of Mathematics, Zhejiang Ocean University, Zhoushan, Zhejiang 316000, China

3School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China

Received 15 April 2014; Accepted 21 April 2014; Published 13 May 2014

Academic Editor: Shan Zhao

Copyright © 2014 Zhuande Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

To solve large-scale linear systems, backward and Jacobi iteration algorithms are employed, and their convergence is the central issue. In this paper, a unified backward iterative matrix is proposed, from which several well-known iterative algorithms can be deduced. The most important results concern convergence: first, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant); second, the two iterations have the same convergence behavior (they converge or diverge simultaneously). Finally, numerical experiments show that the proposed algorithms are correct and exhibit the merits of the backward methods.

1. Introduction

The primary goal of this paper is to study iterative methods for the linear system

Ax = b, (1)

where A is a given complex or real n × n matrix, b is a known vector, and x is the unknown vector.

It is well known that linear systems arise in many areas such as engineering and industrial science. For example, in the numerical solution of differential-algebraic equations (DAEs) and ordinary differential equations (ODEs) [1–3], it is very important to solve (1). In digital image and signal processing, especially in compressed sensing, Stojnic [4] noted that systems of the form (1) are the mathematical background of compressed sensing problems and studied sharp lower bounds on the allowable sparsity for any given number (proportional to the length of the unknown vector) of equations in the case of so-called block-sparse unknown vectors. In the blind source separation of signals, Congedo et al. [5] showed that it is very important to solve (1) and proposed a special method based on joint singular value decomposition. In the field of biomedical engineering, Deo et al. [6] noted that cardiac electrical activity can be described by the bidomain equations and pointed out that the numerical solution of the partial differential equations (PDEs) associated with bidomain problems often leads to (1); moreover, they proposed a novel preconditioner for the PCG method to solve (1) and a cheap iterative method such as successive overrelaxation (SOR) to further refine the solution to a desired accuracy. In 2008, Shou et al. [7] showed that the reconstruction of epicardial potentials (EPs) from body surface potentials (BSPs) can be characterized as an ill-posed inverse problem and that geometric errors in the ECG inverse problem directly affect the calculation of the transfer matrix in (1). In the field of systems and control science, Ding and Chen [8] pointed out that Sylvester equations in systems and control, especially Lyapunov equations in continuous- and discrete-time stability analysis, can be converted into equivalent equations of the form (1). In machine learning, many classification and regression problems, such as single-hidden-layer neural networks [9, 10], support vector machines, functional neural networks, and so on, can be reduced to (1). Therefore, the solution of (1) is very important in scientific computing.

The methods for solving linear systems can be roughly divided into two categories: direct methods and iterative methods. Iterative methods are more suitable than direct methods for large linear systems [11, 12]. Research on iterative algorithms is relatively mature, but adapting them to new architecture models is complicated; to gain better performance, acceleration techniques have been applied and the architecture has been taken into account [13].

In this paper, we study the iterative algorithm along these lines. To this end, the paper is organized as follows. In Section 2, we introduce the backward MPSD (backward modified preconditioned simultaneous displacement) iterative method, which is a unified form of some important backward iterations. In Section 3, we first introduce some important lemmas and then obtain convergence results relating the backward MPSD iteration to the Jacobi iteration; convergence results between some other backward iterations and the Jacobi iteration are given in the corollaries. In Section 4, examples and numerical experiments confirm the correctness of the results. In particular, we point out that the backward iteration is better than the original one in many cases, as Example 5 illustrates.

2. A Unified Framework of Iteration Matrix and Algorithm

The basic idea for solving (1) is matrix splitting. If we let

A = D − L − U, (2)

where D is a nonsingular diagonal matrix obtained from the diagonal of A and L and U are strictly lower and strictly upper triangular matrices obtained from A, then (1) becomes the equivalent form

x = D^{-1}(L + U)x + c. (3)

At this moment, c = D^{-1}b.

The Jacobi iterative matrix is

B = D^{-1}(L + U). (4)

The MPSD (modified preconditioned simultaneous displacement) iterative method is studied in [14–17].
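As a small, self-contained numerical illustration (ours, not part of the original paper), the splitting (2) and the Jacobi matrix (4) can be formed as in the following sketch; the test matrix A is an arbitrary choice, not one of the paper's examples.

```python
import numpy as np

def jacobi_matrix(A):
    """Form the splitting A = D - L - U of (2) and return the Jacobi matrix B = D^{-1}(L + U)."""
    D = np.diag(np.diag(A))        # diagonal part of A (assumed nonsingular)
    L = -np.tril(A, -1)            # strictly lower triangular part, with the sign convention of (2)
    U = -np.triu(A, 1)             # strictly upper triangular part, with the sign convention of (2)
    return np.linalg.solve(D, L + U)   # D^{-1}(L + U) without forming D^{-1} explicitly

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
B = jacobi_matrix(A)
rho = max(abs(np.linalg.eigvals(B)))   # spectral radius rho(B)
print(rho)                             # about 0.354 < 1: the Jacobi iteration converges here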

If ω is a real constant, obviously (3) is equivalent to the relaxed form

x = [(1 − ω)I + ωD^{-1}(L + U)]x + ωc. (5)

At the same time, if ω₁ and ω₂ are real constants, we can obtain a further equivalent form, (6), from (5).

It is easy to verify the matrix identity (7).

With (7), we can construct the backward MPSD iterative method as follows:

x^{(k+1)} = B(ω₁, ω₂, ω)x^{(k)} + f, (8)

where the iteration matrix B(ω₁, ω₂, ω) is given by (9); we name B(ω₁, ω₂, ω) the backward MPSD iterative matrix.

Also, we have the following algorithm.

Backward MPSD Algorithm

Step 0 (Input). Matrix A, vector b, parameters ω, ω₁, ω₂, initial guess x^{(0)}, and algorithm stop cutoff ε.

Step 1 (Initialization). Compute D, L, U, and f, and set k = 0.

Step 2. Compute the matrix B(ω₁, ω₂, ω) according to (9).

Step 3. Compute x^{(k+1)} with (8).

Step 4. If ‖x^{(k+1)} − x^{(k)}‖ < ε, then stop and accept x^{(k+1)} as the solution of (1); else set k = k + 1 and go to Step 3.
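Steps 2–4 amount to a generic stationary iteration x^{(k+1)} = Gx^{(k)} + f with a difference-based stopping test. The following sketch is our own illustration of that loop; the matrix G would be the backward MPSD matrix of (9), which is not reproduced here, so G and f are passed in as arguments.

```python
import numpy as np

def stationary_iteration(G, f, x0, eps=1e-10, max_iter=10_000):
    """Iterate x_{k+1} = G @ x_k + f until the update is smaller than eps (Steps 3-4)."""
    x = x0
    for k in range(max_iter):
        x_new = G @ x + f
        if np.linalg.norm(x_new - x, ord=np.inf) < eps:   # Step 4 stopping test
            return x_new, k + 1                           # solution and iteration count
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")
```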

Remark 1. With special values of ω, ω₁, and ω₂, the backward MPSD iterative matrix reduces to well-known iterations: (1) the Jacobi iterative method; (2) the backward JOR iterative method; (3) the backward G-S iterative method; (4) the backward SOR iterative method; (5) the backward AOR iterative method; (6) the backward SSOR iterative method; (7) the backward EMA iterative method; (8) the backward PSD iterative method; (9) the backward PJ iterative method. Closed forms for two of these special cases (items (3) and (4)) are sketched below.
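For items (3) and (4), the backward Gauss-Seidel and backward SOR iteration matrices have the standard closed forms (D − U)^{-1}L and (D − ωU)^{-1}[(1 − ω)D + ωL] under the splitting (2); these are the standard textbook forms rather than quotations from (9). A minimal NumPy sketch:

```python
import numpy as np

def backward_sor_matrix(A, omega):
    """Backward SOR iteration matrix (D - omega*U)^{-1} [(1 - omega)*D + omega*L]
    for the splitting A = D - L - U of (2); omega = 1 gives backward Gauss-Seidel."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - omega * U                   # backward sweep: the *upper* part is inverted
    N = (1.0 - omega) * D + omega * L
    return np.linalg.solve(M, N)
```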

The convergence relationship between the Gauss-Seidel iterative matrix and the Jacobi iterative matrix is studied in [12], and generalized results are given in [18]. Eigenvalue relationships between other iterative matrices and the Jacobi iterative matrix are studied for the p-cyclic case in [19–26]. Some backward iterations are studied in [27]. In the following, we consider the convergence results between the backward MPSD iterative matrix and the Jacobi iterative matrix and obtain convergence relationships between some other backward iterative matrices and the Jacobi matrix.

3. Convergence Results

In order to obtain the convergence results, we first collect some well-known results that will be used in the proof of Theorem 7.

Definition 2 (see [13]). The splitting A = M − N with M nonsingular is called a regular splitting if M^{-1} ≥ 0 and N ≥ 0. It is called a weak regular splitting if M^{-1} ≥ 0 and M^{-1}N ≥ 0.

It is obvious that a regular splitting is a weak regular splitting.

Lemma 3 (see [13]). A nonnegative matrix T is convergent, that is, ρ(T) < 1, if and only if (I − T)^{-1} exists and (I − T)^{-1} = Σ_{k=0}^{∞} T^k ≥ 0.
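Lemma 3 is easy to check numerically: for a nonnegative T with ρ(T) < 1, the Neumann series Σ T^k converges to (I − T)^{-1}, which is nonnegative. A quick illustration with an arbitrary nonnegative test matrix of our own choosing:

```python
import numpy as np

T = np.array([[0.2, 0.3],
              [0.4, 0.1]])                               # nonnegative, rho(T) = 0.5 < 1
rho = max(abs(np.linalg.eigvals(T)))                     # spectral radius
inv = np.linalg.inv(np.eye(2) - T)                       # (I - T)^{-1}
series = sum(np.linalg.matrix_power(T, k) for k in range(200))  # partial Neumann series
print(rho < 1, np.allclose(inv, series), (inv >= 0).all())      # True True True
```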

Lemma 4 (see [13]). Let A = M − N be a weak regular splitting of A and T = M^{-1}N. Then the following statements are equivalent. (1) A^{-1} ≥ 0; that is, A is inverse-positive. (2) ρ(T) < 1. (3) A^{-1}N ≥ 0, so that ρ(T) = ρ(A^{-1}N)/(1 + ρ(A^{-1}N)) < 1.

Lemma 5 (see [12]). Let A ≥ 0 be an irreducible matrix. Then (1) A has a positive real eigenvalue equal to its spectral radius ρ(A); (2) to ρ(A) there corresponds an eigenvector x > 0; (3) ρ(A) increases when any entry of A increases; (4) ρ(A) is a simple eigenvalue of A.

Lemma 6 (see [12]). Let A ≥ 0 be an irreducible matrix. Then, for any x > 0, either (Ax)_i/x_i = ρ(A) for all i, or min_i (Ax)_i/x_i < ρ(A) < max_i (Ax)_i/x_i.

By the lemmas above, we give the convergence theorem in the following.

Theorem 7. Let the coefficient matrix A of (1) be irreducible with B ≥ 0, where B is the Jacobi matrix, and let B(ω₁, ω₂, ω) be the backward MPSD iterative matrix. Then, for ω, ω₁, and ω₂ in the admissible ranges, we have the following. (1) ρ(B) > 0 and ρ(B(ω₁, ω₂, ω)) > 0. (2) One and only one of the following mutually exclusive relations is valid: (i) ρ(B) < 1 and ρ(B(ω₁, ω₂, ω)) < 1; (ii) ρ(B) = ρ(B(ω₁, ω₂, ω)) = 1; (iii) ρ(B) > 1 and ρ(B(ω₁, ω₂, ω)) > 1.

Thus, the Jacobi iterative method and the backward MPSD iterative method are either both convergent or both divergent.

Proof. Combining with Lemma 3, we have , and

Since and is irreducible, and are irreducible. By , we have and irreducible. Thus, by (12), and is irreducible. By Lemma 5, there exists and corresponding vector , such that ; namely, Let ; by calculation, that is,

Since is irreducible, by Lemma 5, . If , by , . If , then because the left side of (14) is nonnegative, Thus . By (14), If , by (16), we have ; that is,

Since and , we obtain that . Thus, . This contradicts . So, .

For mutually exclusive relations, consider the following.

(i) If , let and then

Since and , is a regular splitting:

By , and , we know that . By Lemma 4, . Combining this with the result in (1), we have .

If , by (14), we have Since , that is,

By and , there is Thus,

Combining (23) with (29), we have By Lemma 6, we obtain that .

(ii) If , by (14), we have namely, . Since , we have By Lemma 6, we obtain that .

(iii) If , by (15), we have Since , that is,

By and , there is Thus,

Combining (31) with (33), we have By Lemma 6, we obtain that .

If and , by (1), we obtain that or . Thus, by (i) and (iii), we know that or . This contradicts . So, .

If and , by (i) and (ii), we have . This contradicts . So, .

With special values of ω, ω₁, and ω₂, we have the following corollaries.

Corollary 8. Let the coefficient matrix A of (1) be irreducible, B the Jacobi matrix, and T the backward JOR iterative matrix. Then, for ω in its admissible range, we have the following. (1) ρ(B) > 0 and ρ(T) > 0. (2) One and only one of the following mutually exclusive relations is valid: (i) ρ(B) < 1 and ρ(T) < 1; (ii) ρ(B) = ρ(T) = 1; (iii) ρ(B) > 1 and ρ(T) > 1.

Thus, the Jacobi iterative method and the backward JOR iterative method are either both convergent or both divergent.

Corollary 9. Let the coefficient matrix A of (1) be irreducible, B the Jacobi matrix, and T the backward Gauss-Seidel iterative matrix. Then, we have the following. (1) ρ(B) > 0 and ρ(T) > 0. (2) One and only one of the following mutually exclusive relations is valid: (i) ρ(B) < 1 and ρ(T) < 1; (ii) ρ(B) = ρ(T) = 1; (iii) ρ(B) > 1 and ρ(T) > 1.

Thus, the Jacobi iterative method and the backward Gauss-Seidel iterative method are either both convergent or both divergent.

Corollary 10. Let the coefficient matrix A of (1) be irreducible, B the Jacobi matrix, and T the backward SOR iterative matrix. Then, for ω in its admissible range, we have the following. (1) ρ(B) > 0 and ρ(T) > 0. (2) One and only one of the following mutually exclusive relations is valid: (i) ρ(B) < 1 and ρ(T) < 1; (ii) ρ(B) = ρ(T) = 1; (iii) ρ(B) > 1 and ρ(T) > 1.

Thus, the Jacobi iterative method and the backward SOR iterative method are either both convergent or both divergent.

Corollary 11. Let the coefficient matrix A of (1) be irreducible, B the Jacobi matrix, and T the backward AOR iterative matrix. Then, for the parameters in their admissible ranges, we have the following. (1) ρ(B) > 0 and ρ(T) > 0. (2) One and only one of the following mutually exclusive relations is valid: (i) ρ(B) < 1 and ρ(T) < 1; (ii) ρ(B) = ρ(T) = 1; (iii) ρ(B) > 1 and ρ(T) > 1.

Thus, the Jacobi iterative method and the backward AOR iterative method are either both convergent or both divergent.

Corollary 12. Let the coefficient matrix A of (1) be irreducible, B the Jacobi matrix, and T the backward SSOR iterative matrix. Then, for ω in its admissible range, we have the following. (1) ρ(B) > 0 and ρ(T) > 0. (2) One and only one of the following mutually exclusive relations is valid: (i) ρ(B) < 1 and ρ(T) < 1; (ii) ρ(B) = ρ(T) = 1; (iii) ρ(B) > 1 and ρ(T) > 1.

Thus, the Jacobi iterative method and the backward SSOR iterative method are either both convergent or both divergent.

Corollary 13. Let the coefficient matrix A of (1) be irreducible, B the Jacobi matrix, and T the backward EMA iterative matrix. Then, for the parameters in their admissible ranges, we have the following. (1) ρ(B) > 0 and ρ(T) > 0. (2) One and only one of the following mutually exclusive relations is valid: (i) ρ(B) < 1 and ρ(T) < 1; (ii) ρ(B) = ρ(T) = 1; (iii) ρ(B) > 1 and ρ(T) > 1.

Thus, the Jacobi iterative method and the backward EMA iterative method are either both convergent or both divergent.

Corollary 14. Let the coefficient matrix A of (1) be irreducible, B the Jacobi matrix, and T the backward PSD iterative matrix. Then, for the parameters in their admissible ranges, we have the following. (1) ρ(B) > 0 and ρ(T) > 0. (2) One and only one of the following mutually exclusive relations is valid: (i) ρ(B) < 1 and ρ(T) < 1; (ii) ρ(B) = ρ(T) = 1; (iii) ρ(B) > 1 and ρ(T) > 1.

Thus, the Jacobi iterative method and the backward PSD iterative method are either both convergent or both divergent.

Corollary 15. Let the coefficient matrix A of (1) be irreducible, B the Jacobi matrix, and T the backward PJ iterative matrix. Then, for the parameters in their admissible ranges, we have the following. (1) ρ(B) > 0 and ρ(T) > 0. (2) One and only one of the following mutually exclusive relations is valid: (i) ρ(B) < 1 and ρ(T) < 1; (ii) ρ(B) = ρ(T) = 1; (iii) ρ(B) > 1 and ρ(T) > 1.

Thus, the Jacobi iterative method and the backward PJ iterative method are either both convergent or both divergent.

Remark 16. The convergence results between the backward MPSD and the Jacobi iterative matrix have been established, and the convergence results between some special cases of backward MPSD (such as backward JOR, backward G-S, backward EMA, and backward PSD) and the Jacobi iterative matrix have been obtained. These results cover several special iterative methods proposed in the references.

4. Numerical Examples

In this section, we present five examples. The first three examples show the convergence of the proposed iterative methods. Example 4 shows the divergence of the proposed iterative methods. Example 5 shows that the backward iterative methods are better than the original methods when the upper triangular part dominates the lower triangular part. In the following figures, the horizontal axis denotes the number of iterations and the vertical axis denotes the iteration error.

Example 1. Let the coefficient matrix A and the vector b of (1) be as follows.

The Jacobi iterative matrix is B = D^{-1}(L + U).

By calculation, we obtain ρ(B). (1) With suitable ω, ω₁, and ω₂, we obtain the backward PSD iterative matrix and its spectral radius. (2) Likewise, we obtain the backward PJ iterative matrix and its spectral radius. (3) Likewise, we obtain the backward JOR iterative matrix and its spectral radius. (4) Likewise, we obtain the backward EMA iterative matrix and its spectral radius. With these iterative methods and the presented algorithm, the solution of (1) is obtained.

Example 2. In order to obtain the numerical solution of the Laplace equation on a uniform square mesh with five-point difference approximations and the interior mesh points shown in Figure 1 [28], we obtain the linear system (1), with the matrix A and the vector b determined by the five-point stencil.

The Jacobi iterative matrix is B = D^{-1}(L + U); by calculation, we obtain ρ(B). (1) With suitable ω, ω₁, and ω₂, we obtain the backward PSD iterative matrix and its spectral radius. (2) Likewise, we obtain the backward PJ iterative matrix and its spectral radius. (3) Likewise, we obtain the backward JOR iterative matrix and its spectral radius. (4) Likewise, we obtain the backward EMA iterative matrix and its spectral radius.
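For readers who wish to reproduce an experiment of this type, the following sketch (ours) builds a five-point difference matrix on an assumed 3 × 3 interior grid — the exact mesh of Example 2 is not reproduced here — and checks that ρ(B) < 1, so that, by Theorem 7, the backward iterations converge as well.

```python
import numpy as np

def laplace_5pt(n):
    """Five-point difference matrix for the Laplace equation on an n x n interior grid
    (an assumed stand-in for the matrix of Example 2)."""
    N = n * n
    A = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0                       # diagonal of the five-point stencil
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, ii * n + jj] = -1.0    # neighbor couplings
    return A

A = laplace_5pt(3)
D = np.diag(np.diag(A))
B = np.eye(len(A)) - np.linalg.solve(D, A)      # Jacobi matrix D^{-1}(L + U) = I - D^{-1}A
print(max(abs(np.linalg.eigvals(B))))           # about 0.707 < 1: both iterations converge
```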

In Figures 2, 3, 4, and 5, the errors of the Jacobi iteration are denoted by blue circles and those of the backward MPSD-type iterations by red stars. From the figures, we see that the Jacobi iteration is better than the backward PSD, JOR, and EMA iterations and worse than the backward PJ iteration for the values of ω, ω₁, and ω₂ used in this example.

Figure 1: Uniform square mesh of five-point difference.
Figure 2: The errors of PSD and Jacobi iteration.
Figure 3: The errors of PJ and Jacobi iteration.
Figure 4: The errors of JOR and Jacobi iteration.
Figure 5: The errors of EMA and Jacobi iteration.

Example 3. Let the coefficient matrix of (1) be as follows, where the parameters are taken from [28]; here we fix particular values for them. By calculation, we obtain ρ(B). (1) With suitable ω, ω₁, and ω₂, we obtain the backward PSD iterative matrix and its spectral radius. (2) Likewise, we obtain the backward PJ iterative matrix and its spectral radius. (3) Likewise, we obtain the backward JOR iterative matrix and its spectral radius. (4) Likewise, we obtain the backward EMA iterative matrix and its spectral radius.

Example 4. Let the coefficient matrix A and the vector b of (1) be as follows. The Jacobi iterative matrix is B = D^{-1}(L + U), and by calculation, we obtain ρ(B) > 1. (1) With suitable ω, ω₁, and ω₂, we obtain the backward PSD iterative matrix and its spectral radius. (2) Likewise, we obtain the backward PJ iterative matrix and its spectral radius. (3) Likewise, we obtain the backward JOR iterative matrix and its spectral radius. (4) Likewise, we obtain the backward EMA iterative matrix and its spectral radius. In each case the spectral radius exceeds 1, which shows that the backward MPSD iteration diverges for this example.

Example 5. Let the coefficient matrix of (1) be as follows.

We can see an analogous matrix in [29]. By calculation, we obtain the following.

For the Gauss-Seidel case, we obtain the backward Gauss-Seidel iterative matrix and the Gauss-Seidel iterative matrix; comparing the two, the backward Gauss-Seidel matrix has the smaller spectral radius.

In Figure 6, the red stars denote the errors of the backward Gauss-Seidel iteration and the blue circles those of the Gauss-Seidel iteration. So, the backward iterative methods are better than the original methods under the assumption that the upper triangular part dominates the lower triangular part.
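The effect can be reproduced with a small assumed test matrix whose upper triangular part dominates its lower triangular part; the matrix below is our stand-in, chosen in the spirit of [29], and is not the matrix of Example 5.

```python
import numpy as np

A = np.array([[ 4.0, -2.0, -1.5],
              [-0.1,  4.0, -2.0],
              [-0.1, -0.1,  4.0]])   # upper part dominates lower part (assumed example)

D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)
gs  = np.linalg.solve(D - L, U)      # Gauss-Seidel matrix (D - L)^{-1} U
bgs = np.linalg.solve(D - U, L)      # backward Gauss-Seidel matrix (D - U)^{-1} L
print(max(abs(np.linalg.eigvals(gs))),    # about 0.098
      max(abs(np.linalg.eigvals(bgs))))   # about 0.046: the backward sweep wins here
```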

Figure 6: The errors of backward Gauss-Seidel and Gauss-Seidel iteration.

5. Conclusions

The Jacobi iteration is the basic iteration for linear systems, and its convergence is easier to analyze than that of other iterations. In this paper, we proposed the backward MPSD iteration and obtained convergence results between the backward MPSD iteration (including iterations such as backward JOR, backward G-S, backward EMA, and backward PSD) and the Jacobi iteration. We pointed out that the backward MPSD iteration and the Jacobi iteration are either both convergent or both divergent under the assumptions of Theorem 7. Therefore, the convergence or divergence of the backward MPSD iteration can be ascertained from the Jacobi iteration. In some cases, the backward iteration is better than the original one.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The work is partially supported by the National Natural Science Foundation of China (1117105, 11101071, and 61001200) and the Fundamental Research Funds for the Central Universities (ZYGX2009J103).

References

1. Y. Cao, S. Li, L. Petzold, and R. Serban, "Adjoint sensitivity analysis for differential-algebraic equations: the adjoint DAE system and its numerical solution," SIAM Journal on Scientific Computing, vol. 24, no. 3, pp. 1076–1089, 2003.
2. Y. Cao and L. Petzold, "A subspace error estimate for linear systems," SIAM Journal on Matrix Analysis and Applications, vol. 24, no. 3, pp. 787–801, 2003.
3. Y. Cao and L. Petzold, "A posteriori error estimation and global error control for ordinary differential equations by the adjoint method," SIAM Journal on Scientific Computing, vol. 26, no. 2, pp. 359–374, 2004.
4. M. Stojnic, "l2/l1-optimization in block-sparse compressed sensing and its strong thresholds," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 350–357, 2010.
5. M. Congedo, R. Phlypo, and D. T. Pham, "Approximate joint singular value decomposition of an asymmetric rectangular matrix set," IEEE Transactions on Signal Processing, vol. 59, no. 1, pp. 415–424, 2011.
6. M. Deo, S. Bauer, G. Plank, and E. Vigmond, "Reduced-order preconditioning for bidomain simulations," IEEE Transactions on Biomedical Engineering, vol. 54, no. 5, pp. 938–942, 2007.
7. G. Shou, L. Xia, M. Jiang, Q. Wei, F. Liu, and S. Crozier, "Truncated total least squares: a new regularization method for the solution of ECG inverse problems," IEEE Transactions on Biomedical Engineering, vol. 55, no. 4, pp. 1327–1335, 2008.
8. F. Ding and T. Chen, "Gradient based iterative algorithms for solving a class of matrix equations," IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216–1221, 2005.
9. C. Li, F. Ma, and T. Huang, "2-D analysis based iterative learning control for linear discrete-time systems with time delay," Journal of Industrial and Management Optimization, vol. 7, no. 1, pp. 175–181, 2011.
10. Y. Yuan, Y. Wang, and F. Cao, "Optimization approximation solution for regression problem based on extreme learning machine," Neurocomputing, vol. 74, no. 16, pp. 2475–2482, 2011.
11. H. Yin, "An iterative method for general variational inequalities," Journal of Industrial and Management Optimization, vol. 1, no. 2, pp. 201–209, 2005.
12. R. S. Varga, Matrix Iterative Analysis, Springer, Berlin, Germany, 2nd edition, 2000.
13. A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, NY, USA, 1979.
14. X. P. Liu, "Convergence of some iterative methods," Numerical Computing and Computer Applications, vol. 1, pp. 58–64, 1992 (Chinese).
15. Z. D. Wang, T. Z. Huang, and Z. X. Gao, "Relationship of eigenvalue between MPSD iterative method and Jacobi method," Southeast Asian Bulletin of Mathematics, vol. 33, pp. 165–178, 2009.
16. N. M. Missirlis and D. J. Evans, "The modified preconditioned simultaneous displacement (MPSD) method," Mathematics and Computers in Simulation, vol. 26, no. 3, pp. 257–262, 1984.
17. Z.-D. Wang and T.-Z. Huang, "Comparison results between Jacobi and other iterative methods," Journal of Computational and Applied Mathematics, vol. 169, no. 1, pp. 45–51, 2004.
18. W. Li, L. Elsner, and L. Lu, "Comparisons of spectral radii and the theorem of Stein-Rosenberg," Linear Algebra and Its Applications, vol. 348, no. 1–3, pp. 283–287, 2002.
19. R. M. Li, "Relationship of eigenvalues for USAOR iterative method applied to a class of p-cyclic matrices," Linear Algebra and Its Applications, vol. 362, pp. 101–108, 2003.
20. A. Hadjidimos, D. Noutsos, and M. Tzoumas, "Towards the determination of the optimal p-cyclic SSOR," Journal of Computational and Applied Mathematics, vol. 90, no. 1, pp. 1–14, 1998.
21. S. Galanis, A. Hadjidimos, and D. Noutsos, "A Young-Eidson's type algorithm for complex p-cyclic SOR spectra," Linear Algebra and Its Applications, vol. 286, no. 1–3, pp. 87–106, 1999.
22. A. Hadjidimos, D. Noutsos, and M. Tzoumas, "On the exact p-cyclic SSOR convergence domains," Linear Algebra and Its Applications, vol. 232, no. 1–3, pp. 213–236, 1996.
23. A. Hadjidimos, D. Noutsos, and M. Tzoumas, "On the convergence domains of the p-cyclic SOR," Journal of Computational and Applied Mathematics, vol. 72, no. 1, pp. 63–83, 1996.
24. S. Galanis, A. Hadjidimos, and D. Noutsos, "Optimal p-cyclic SOR for complex spectra," Linear Algebra and Its Applications, vol. 263, no. 1–3, pp. 233–260, 1997.
25. D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, NY, USA, 1971.
26. A. Hadjidimos and M. Neumann, "Superior convergence domains for the p-cyclic SSOR majorizer," Journal of Computational and Applied Mathematics, vol. 62, no. 1, pp. 27–40, 1995.
27. M. Abate and J. Raissy, "Backward iteration in strongly convex domains," Advances in Mathematics, vol. 228, no. 5, pp. 2837–2854, 2011.
28. Z.-D. Wang and T.-Z. Huang, "The upper Jacobi and upper Gauss-Seidel type iterative methods for preconditioned linear systems," Applied Mathematics Letters, vol. 19, no. 10, pp. 1029–1036, 2006.
29. Z. I. Woźnicki, "On performance of SOR method for solving nonsymmetric linear systems," Journal of Computational and Applied Mathematics, vol. 137, no. 1, pp. 145–176, 2001.