
Research Article | Open Access


H. Nasabzadeh, F. Toutounian, "Convergent Homotopy Analysis Method for Solving Linear Systems", Advances in Numerical Analysis, vol. 2013, Article ID 732032, 6 pages, 2013. https://doi.org/10.1155/2013/732032

# Convergent Homotopy Analysis Method for Solving Linear Systems

Academic Editor: Ting-Zhu Huang
Received: 16 Jun 2013
Accepted: 22 Aug 2013
Published: 08 Oct 2013

#### Abstract

By using the homotopy analysis method (HAM), we introduce an iterative method for solving linear systems. This method can be used to accelerate the convergence of the basic iterative methods. We also show that by applying the HAM to a divergent iterative scheme, it is possible to construct a convergent homotopy-series solution when the iteration matrix G of the iterative scheme has particular properties, such as being symmetric or having real eigenvalues. Numerical experiments are given to show the efficiency of the new method.

#### 1. Introduction

Computational simulation of scientific and engineering problems often depends on solving linear systems of equations. Such systems frequently arise from discrete approximations to partial differential equations. Systems of linear equations can be solved either by direct or by iterative methods. Iterative methods are ideally suited for solving large and sparse systems. For the numerical solution of a large nonsingular linear system
$$Ax = b, \tag{1}$$
where $A \in \mathbb{R}^{n \times n}$ is given, $b \in \mathbb{R}^{n}$ is known, and $x \in \mathbb{R}^{n}$ is unknown, one class of iterative methods is based on a splitting of the matrix $A$, that is,
$$A = M - N, \tag{2}$$
where $M$ is taken to be invertible and cheap to invert, which means that a linear system with coefficient matrix $M$ is much more economical to solve than (1). Based on (2), (1) can be written in the fixed-point form
$$x = Gx + c, \qquad G = M^{-1}N, \quad c = M^{-1}b, \tag{3}$$
which yields the following iterative scheme for the solution of (1):
$$x^{(k+1)} = Gx^{(k)} + c, \quad k = 0, 1, 2, \ldots. \tag{4}$$

A necessary and sufficient condition for (4) to converge to the solution of (1) is $\rho(G) < 1$, where $\rho(\cdot)$ denotes the spectral radius. Some effective splitting iterative methods and preconditioning methods have been presented for solving the linear system (1); see [1–9]. Recently, Keramati [10], Yusufoğlu [11], and Liu [12] applied the homotopy perturbation method to obtain the solution of linear systems and deduced conditions for checking the convergence of the homotopy series. In this work, we show how the homotopy analysis method may be regarded as an acceleration procedure based on the iterative method (4). We observe that it is not necessary for the basic method (4) to be convergent: when $\rho(G) \geq 1$, it is sufficient that the eigenvalues $\lambda_i$, $i = 1, \ldots, n$, of the iteration matrix $G$ satisfy $\lambda_i < 1$, $i = 1, \ldots, n$ (or $\lambda_i > 1$, $i = 1, \ldots, n$). When $\rho(G) < 1$, by applying the homotopy analysis method to the basic iterative method (4), we can improve its rate of convergence. This paper is organized as follows. In Section 2, we introduce the basic concept of HAM, derive the conditions for convergence of the homotopy series, and apply the homotopy analysis method to the Jacobi, Richardson, SSOR, and SAOR methods. In Section 3, some numerical examples are presented to show the efficiency of the method. Finally, we make some concluding remarks in Section 4.
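As a concrete sketch of the splitting scheme (4) and the spectral-radius convergence test, the snippet below builds the Jacobi splitting $M = \operatorname{diag}(A)$ for a small invented system (NumPy assumed available; the matrix and right-hand side are illustrative only):

```python
import numpy as np

# Splitting A = M - N with M = diag(A) (the Jacobi choice): M is cheap to
# invert, and scheme (4) reads x^{(k+1)} = G x^{(k)} + c with G = M^{-1} N.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

M = np.diag(np.diag(A))
N = M - A
G = np.linalg.solve(M, N)        # iteration matrix G = M^{-1} N
c = np.linalg.solve(M, b)        # c = M^{-1} b

rho = max(abs(np.linalg.eigvals(G)))   # spectral radius rho(G)
assert rho < 1                          # convergence criterion for (4)

x = np.zeros_like(b)
for _ in range(100):
    x = G @ x + c                # the basic iteration (4)

print(np.allclose(A @ x, b))     # True: iterates reach the solution of (1)
```

Here $A$ is strictly diagonally dominant, so $\rho(G) < 1$ and the basic iteration converges without any acceleration.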

#### 2. Basic Idea of HAM

The homotopy analysis method (HAM) [13, 14] was first proposed by S. J. Liao in 1992. The HAM was further developed and improved by Liao for nonlinear problems in [15].

Here, we apply the homotopy analysis method (HAM) to problem (3) for finding the solution of (1) when $\rho(G) \geq 1$. Consider (3), where $x$ is the unknown vector of (1) and $G$ is the iteration matrix of an iterative method. Let $u_0$ denote an initial guess of the exact solution $x$, and let $\hbar \neq 0$ be a convergence control parameter. Then, we can apply the homotopy analysis method and define $H(v, q)$ by
$$H(v, q) = (1 - q)(v - u_0) - q\hbar\,(v - Gv - c) \tag{5}$$
and a homotopy equation as follows:
$$H(v, q) = 0, \tag{6}$$
where $\hbar \neq 0$ and $q \in [0, 1]$ is an embedding parameter. Hence, it is obvious that
$$H(v, 0) = v - u_0, \tag{7}$$
$$H(v, 1) = -\hbar\,(v - Gv - c). \tag{8}$$
And as the embedding parameter $q$ increases from $0$ to $1$, the solution $v(q)$ of $H(v, q) = 0$ varies continuously from the initial approximation $u_0$ to the exact solution $x$ of the original equation (3). The homotopy analysis method uses the parameter $q$ as an expanding parameter (see [16–18]) to obtain
$$v = \sum_{m=0}^{\infty} u_m q^m, \tag{9}$$
and it gives an approximation to the solution of (3) as $x = \lim_{q \to 1} v = \sum_{m=0}^{\infty} u_m$.

By substituting (9) in (6) and equating the terms with identical powers of $q$, we can obtain
$$u_1 = \hbar\left[(I - G)u_0 - c\right], \qquad u_m = \left[I + \hbar(I - G)\right]u_{m-1}, \quad m \geq 2. \tag{10}$$

This implies that
$$u_m = \left[I + \hbar(I - G)\right]^{m-1} u_1, \quad m \geq 1. \tag{11}$$

Taking $q = 1$ yields
$$x = \sum_{m=0}^{\infty} u_m, \tag{12}$$
where $u_0$ is an initial guess of the exact solution $x$. Therefore,
$$x = u_0 + \sum_{m=1}^{\infty} \left[I + \hbar(I - G)\right]^{m-1} u_1. \tag{13}$$

By setting $T_\hbar = I + \hbar(I - G)$, we obtain
$$x^{(k)} = \sum_{m=0}^{k} u_m = x^{(k-1)} + u_k, \qquad u_k = T_\hbar\, u_{k-1}, \quad k \geq 2. \tag{14}$$

It is obvious that if $\rho(T_\hbar) < 1$, then the series $\sum_{m=0}^{\infty} u_m$ converges and we have
$$x = u_0 + (I - T_\hbar)^{-1} u_1 = (I - G)^{-1} c,$$
which is the exact solution of (3). A series of vectors $x^{(k)}$ can be computed by (14), and our aim is to choose the convergence control parameter $\hbar$ so that $\rho(T_\hbar) = \rho(I + \hbar(I - G)) < 1$. For improving the rate of convergence of the iterative method, we present the following theorem.
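The recursion above is easy to sketch in code. The helper `ham_partial_sum` below is a hypothetical illustration (not the authors' implementation); it accumulates the partial sums of the homotopy series, assuming the terms satisfy $u_1 = \hbar[(I-G)u_0 - c]$ and $u_m = [I + \hbar(I-G)]\,u_{m-1}$, and uses the fact that $\hbar = -1$ gives back the classical iteration matrix $G$:

```python
import numpy as np

def ham_partial_sum(G, c, u0, hbar, k):
    """Partial sum x^(k) of the homotopy series: u_1 = hbar*((I-G)u0 - c),
    u_m = T u_{m-1} with T = I + hbar*(I - G); converges when rho(T) < 1."""
    n = len(c)
    I = np.eye(n)
    T = I + hbar * (I - G)
    u = hbar * ((I - G) @ u0 - c)    # u_1
    s = u0 + u
    for _ in range(k - 1):
        u = T @ u                    # u_m = T u_{m-1}
        s = s + u                    # x^(m) = x^(m-1) + u_m
    return s

# Jacobi splitting of a small invented symmetric system.
A = np.array([[3.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 7.0])
M = np.diag(np.diag(A))
G = np.linalg.solve(M, M - A)
c = np.linalg.solve(M, b)

# hbar = -1 reproduces the classical iteration matrix: T = I - (I - G) = G.
x = ham_partial_sum(G, c, np.zeros(2), hbar=-1.0, k=200)
print(np.allclose(A @ x, b))   # True, since rho(G) = 1/3 < 1 here
```

Other choices of $\hbar$ change the matrix $T$ that drives the series, which is exactly the lever the method uses to accelerate (or rescue) convergence.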

Theorem 1. Suppose that , and let , and , be the eigenvalues of and , respectively. Let , , and let , . If and , , then (i)the quadratic equation , , has simple real roots , (ii) belongs to the interval and , (iii)for each and , the relation holds.

Proof. (i) We begin by defining two index sets and . Since and , for , we have
So, , , has simple real roots
For , we have and
So, by using the assumption , , , also has simple real roots , defined as follows:
(ii) From part (i), we observe that and . This implies that .
(iii) From part (i), we also observe that for each and . We have , and the relation holds.

The following theorem shows that by applying the HAM to an iterative scheme which is divergent, it is possible to construct convergent homotopy-series vectors when the iteration matrix $G$ has particular properties.

Theorem 2. Let $\lambda_i$ and $\mu_i$, $i = 1, \ldots, n$, be the eigenvalues of $G$ and $T_\hbar = I + \hbar(I - G)$, respectively. Let $\lambda_{\min} = \min_i \lambda_i$ and $\lambda_{\max} = \max_i \lambda_i$. Suppose that $\lambda_i$, $i = 1, \ldots, n$, are real. (i) If $\lambda_i < 1$ for $i = 1, \ldots, n$, and $\hbar \in \left(-2/(1 - \lambda_{\min}),\, 0\right)$, then $\rho(T_\hbar) < 1$. (ii) If $\lambda_i > 1$ for $i = 1, \ldots, n$, and $\hbar \in \left(0,\, 2/(\lambda_{\max} - 1)\right)$, then $\rho(T_\hbar) < 1$.

Proof. From (13), we have $x = u_0 + \sum_{m=1}^{\infty} T_\hbar^{\,m-1} u_1$. So, it is sufficient to have
$$|\mu_i| = |1 + \hbar(1 - \lambda_i)| < 1, \quad i = 1, \ldots, n. \tag{22}$$
Under the hypothesis of part (i), we have $1 - \lambda_i > 0$, $i = 1, \ldots, n$, and the relation (22) holds if $\hbar \in \left(-2/(1 - \lambda_{\min}),\, 0\right)$, and the proof of (i) is complete. A similar argument holds if $\lambda_i > 1$ for $i = 1, \ldots, n$, and part (ii) follows.
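A numerical sketch of this behavior, with an invented diagonal iteration matrix and assuming the series recursion $u_m = T_\hbar u_{m-1}$, $T_\hbar = I + \hbar(I - G)$:

```python
import numpy as np

# Iteration matrix with real eigenvalues {-1.5, 0.5}: the classical scheme
# diverges (rho(G) = 1.5 > 1), yet every eigenvalue is below 1, so a
# negative hbar in (-2/(1 - (-1.5)), 0) = (-0.8, 0) should work.
G = np.diag([-1.5, 0.5])
c = np.array([1.0, 1.0])
x_exact = np.linalg.solve(np.eye(2) - G, c)   # fixed point of x = Gx + c

hbar = -0.5
T = np.eye(2) + hbar * (np.eye(2) - G)
assert max(abs(np.linalg.eigvals(T))) < 1     # rho(T) = 0.75 here

u = hbar * ((np.eye(2) - G) @ np.zeros(2) - c)  # u_1 with u0 = 0
s = u.copy()
for _ in range(500):
    u = T @ u
    s = s + u
print(np.allclose(s, x_exact))   # True: the homotopy series converges
```

Note that $\hbar = -1$ lies outside the interval here and would just replay the divergent classical iteration; the point is that other values of $\hbar$ rescue convergence.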

When the assumption of part (i) (or part (ii)) of Theorem 2 does not hold, the following theorem shows that, for certain cases, instead of (3) we can consider the equivalent equation
$$x = \left[I + (I - G)^2\right]x - (I - G)c, \tag{23}$$
in which the iteration matrix $I + (I - G)^2$ has eigenvalues with the desired properties.

Theorem 3. Let $\lambda_i$ and $\nu_i$, $i = 1, \ldots, n$, be the eigenvalues of $G$ and $I + (I - G)^2$, respectively. If $\lambda_i > 1$ (or $\lambda_i < 1$) for $i = 1, \ldots, n$, then $\nu_i > 1$ for $i = 1, \ldots, n$.

Proof. The proof immediately follows from the fact that $\nu_i = 1 + (1 - \lambda_i)^2$.

The following corollary shows that by using the modified linear equation (23) and the homotopy analysis method with the corresponding convergence control parameter $\hbar$, we can construct a convergent homotopy series for the linear system (1).

Corollary 4. If $G$ has only real eigenvalues, then there exists $\hbar > 0$ such that the series of vectors generated by
$$u_1 = -\hbar\left[(I - G)^2 u_0 - (I - G)c\right], \qquad u_m = \left[I - \hbar (I - G)^2\right]u_{m-1}, \quad m \geq 2, \tag{24}$$
converges to the exact solution of (1).

Proof. The proof immediately follows from Theorems 2 and 3.

This corollary establishes that the series of vectors generated by (24) always converges if the iteration matrix $G$ is a symmetric matrix. When $A$ is symmetric with positive diagonal elements, (1) can be written as follows: $D^{-1/2}AD^{-1/2}\left(D^{1/2}x\right) = D^{-1/2}b$, where $D$ is the diagonal of $A$. Denoting again by $A$, $x$, and $b$ the expressions $D^{-1/2}AD^{-1/2}$, $D^{1/2}x$, and $D^{-1/2}b$, respectively, it is obvious that the new coefficient matrix is still symmetric and can therefore be written in the form $A = I - B$ with $B$ symmetric. Immediate consequences of Corollary 4 and the above discussion are the following results.
- The series of vectors generated by (24) converges when $A$ is a symmetric matrix and the iterative method is the Richardson method.
- The series of vectors generated by (24) converges when $A$ is a symmetric matrix and the iterative method is the Jacobi method.
- If $A$ is a symmetric matrix and the iterative method is the SAOR method, then the series of vectors generated by (24) converges under the corresponding parameter conditions, where $\lambda_{\min}$ and $\lambda_{\max}$ stand for the minimum and the maximum eigenvalues of $B$, respectively. This result follows from the fact that in this case the iteration matrix of the SAOR method has real eigenvalues (see Theorem 2 in [19]).
- If $A$ is a symmetric matrix and the iterative method is the SSOR method, then the series of vectors generated by (24) converges for the corresponding choice of $\omega$. This result immediately follows from the fact that the SAOR method reduces to the SSOR method for $r = \omega$.
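To illustrate the corollary, the toy sketch below (assuming the modified recursion with iteration matrix $I - \hbar(I-G)^2$ and $u_0 = 0$) takes real eigenvalues on both sides of 1, so neither case of Theorem 2 applies to $G$ directly, and still recovers the solution:

```python
import numpy as np

# Real eigenvalues {2.0, 0.5} straddle 1, but I + (I-G)^2 has eigenvalues
# 1 + (1 - lambda)^2 = {2, 1.25}, all above 1, so a positive hbar works.
G = np.diag([2.0, 0.5])
c = np.array([1.0, 1.0])
I = np.eye(2)
x_exact = np.linalg.solve(I - G, c)

hbar = 0.8                          # inside (0, 2/max(1-lambda)^2) = (0, 2)
T = I - hbar * (I - G) @ (I - G)    # eigenvalues {0.2, 0.8}
assert max(abs(np.linalg.eigvals(T))) < 1

u = -hbar * ((I - G) @ (I - G) @ np.zeros(2) - (I - G) @ c)  # u_1, u0 = 0
s = u.copy()
for _ in range(300):
    u = T @ u
    s = s + u
print(np.allclose(s, x_exact))   # True
```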

#### 3. Numerical Examples

For numerical comparison, we use some matrices from the University of Florida Sparse Matrix Collection [20]. These matrices and their properties are shown in Table 1. We determined the spectral radii of the iteration matrices of the classical SOR, AOR, SSOR, SAOR, Jacobi, and Richardson methods, as well as those of the corresponding HAM iteration matrices obtained with the experimentally computed optimal value of $\hbar$. In Tables 2–4, we list the spectral radius of the classical iteration matrix, the interval of convergence introduced in Theorems 1 and 2, the experimentally computed optimal value of $\hbar$, and the spectral radius of the iteration matrix introduced in Corollary 4.

**Table 1.** Test matrices and their properties.

| Matrix | Order | nnz | Symmetric | Positive definite | Condition number |
|---|---|---|---|---|---|
| cage5 | 37 | 233 | No | No | 15.4166 |
| pivtol | 102 | 306 | No | No | 109.607 |
| pde225 | 225 | 1065 | No | No | 39.0638 |
| Si2 | 769 | 17801 | Yes | No | 170.848 |
| bfwb782 | 782 | 5982 | Yes | No | 18.0724 |
**Table 2.** Results for the convergent classical methods.

| Matrix | Method | $\rho$ (classical) | Convergence interval | Optimal $\hbar$ | $\rho$ (HAM) |
|---|---|---|---|---|---|
| pde225 | SOR ( ) | 0.9776 | (−1, −0.0743) | −0.7423 | 0.7768 |
| pde225 | AOR ( , ) | 0.8053 | (−1, −0.8096) | −0.9400 | 0.7739 |
| pde225 | SSOR ( ) | 0.8048 | (−1.4441, −1) | −1.2941 | 0.7474 |
| pde225 | SAOR ( , ) | 0.7773 | (−1, −0.6038) | −0.8920 | 0.6673 |
| cage5 | SOR ( ) | 0.3388 | (−1.2814, −1) | −1.1591 | 0.2314 |
| cage5 | AOR ( , ) | 0.9240 | (−1, −0.0589) | −0.68 | 0.3355 |
| cage5 | SSOR ( ) | 0.6427 | (−1.7235, −1) | −1.5235 | 0.4557 |
| cage5 | SAOR ( , ) | 0.5590 | (−1.5609, −1) | −1.3756 | 0.3890 |
| pivtol | SOR ( ) | 0.9487 | (−3.9226, −1) | −2.7526 | 0.8588 |
| pivtol | AOR ( , ) | 0.9958 | (−1, −0.0128) | −0.7300 | 0.7626 |
| pivtol | SSOR ( ) | 0.8005 | (−1.4850, −1) | −1.3450 | 0.7317 |
| pivtol | SAOR ( , ) | 0.9190 | (−1, −0.2198) | −0.83 | 0.6943 |
| bfwb782 | SOR ( ) | 0.3732 | (−1.2270, −1) | −1.1470 | 0.3024 |
| bfwb782 | AOR ( , ) | 0.9942 | (−1, −0.0045) | −0.65 | 0.3988 |
| bfwb782 | SSOR ( ) | 0.5984 | (−1.6665, −1) | −1.4704 | 0.4103 |
| bfwb782 | SAOR ( , ) | 0.5123 | (−1.5123, −1) | −1.3423 | 0.3453 |
| bfwb782 | Richardson | 0.999998 | | | 0.8952 |
**Table 3.** Results for the divergent classical methods (Theorem 2).

| Matrix | Method | $\rho$ (classical) | Convergence interval | Optimal $\hbar$ | $\rho$ (HAM) |
|---|---|---|---|---|---|
| pde225 | SOR | 2.4937 | | −0.3 | 0.7980 |
| pde225 | AOR ( , ) | 1.5542 | | −0.5 | 0.7745 |
| pde225 | SSOR ( ) | 1.1112 | | 12.0540 | 0.9388 |
| pde225 | SAOR ( , ) | 4.9844 | | 0.2510 | 0.7741 |
| cage5 | SOR | 1.0150 | | −0.4 | 0.7425 |
| cage5 | AOR ( , ) | 2.0370 | | −0.43 | 0.3355 |
| cage5 | SSOR ( ) | 1.8715 | | 2.0280 | 0.7681 |
| cage5 | SAOR ( , ) | 4.0002 | | 0.458 | 0.3748 |
| pivtol | SOR | 4.3002 | | −0.33 | 0.7778 |
| pivtol | AOR ( , ) | 1.5280 | | −0.58 | 0.6766 |
| pivtol | SSOR | 2.1132 | | −0.55 | 0.7306 |
| pivtol | SAOR ( , ) | 2.6670 | | −0.47 | 0.7235 |
| bfwb782 | Jacobi | 1.0554 | | −0.81305 | 0.6794 |
| bfwb782 | AOR ( , ) | 2.1398 | | −0.4 | 0.3996 |
| bfwb782 | SSOR | 2.1411 | | 1.5550 | 0.7748 |
| bfwb782 | SAOR ( , ) | 4.0000 | | 0.45 | 0.3570 |
**Table 4.** Results for the symmetric matrix Si2 (Theorem 3 and Corollary 4).

| Matrix | Method | $\rho$ (classical) | Convergence interval | Optimal $\hbar$ | $\rho$ (HAM) |
|---|---|---|---|---|---|
| Si2 | SOR ( ) | 1.0993 | (0, 0.8415) | 0.8 | 0.9974 |
| Si2 | AOR ( , ) | 1.0799 | (0, 2.7964) | 2.7 | 0.9942 |
| Si2 | SSOR ( ) | 1.24945 | (0, 2.0007) | 1.9 | 0.9909 |
| Si2 | SAOR ( , ) | 1.1380 | (0, 2.0949) | 2 | 0.9916 |
| Si2 | Jacobi | 1.3233 | (0, 0.3705) | 0.7 | 0.9999 |
| Si2 | Richardson | 40.3813 | (0, 0.0012) | 0.0011 | 0.9999 |

In Table 2, we consider the convergent classical methods ($\rho < 1$). It is easy to verify that the numerical results are consistent with Theorem 1, and we observe that, for the convergent classical methods, choosing a suitable convergence control parameter $\hbar$ makes the rate of convergence of the HAM method faster than that of the corresponding classical method.
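The experimentally optimal $\hbar$ can be approximated by a simple grid search; the helper below is hypothetical (the paper does not describe its search procedure) and assumes the HAM iteration matrix has the form $I + \hbar(I - G)$, which reduces to the classical matrix $G$ at $\hbar = -1$:

```python
import numpy as np

def optimal_hbar(G, grid):
    """Pick hbar from `grid` minimizing the spectral radius of
    T = I + hbar*(I - G). Since hbar = -1 gives T = G, a grid containing
    -1 can never do worse than the classical method."""
    I = np.eye(G.shape[0])
    radii = [max(abs(np.linalg.eigvals(I + h * (I - G)))) for h in grid]
    k = int(np.argmin(radii))
    return float(grid[k]), float(radii[k])

# Toy iteration matrix with real eigenvalues {0.9, -0.5}: rho(G) = 0.9.
G = np.diag([0.9, -0.5])
grid = np.linspace(-2.0, 0.0, 2001)
h_opt, rho_opt = optimal_hbar(G, grid)
print(rho_opt < 0.9)   # True: the tuned hbar beats the classical rate
```

For this toy spectrum the minimizer sits near $\hbar \approx -1.25$ with $\rho \approx 0.875$, a modest but genuine improvement over $\rho(G) = 0.9$, mirroring the gains reported in Table 2.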

In Table 3, we consider the divergent classical methods whose iteration matrices have real eigenvalues $\lambda_i < 1$ (or $\lambda_i > 1$) for $i = 1, \ldots, n$. We can see that the numerical results are consistent with Theorem 2. These results show that by applying the HAM to an iterative scheme which is divergent, it is possible to construct convergent homotopy-series vectors when the iteration matrix has the aforementioned properties.

In Table 4, we report the results obtained for the symmetric matrix Si2, which has positive diagonal elements. For this example, the classical methods diverge, and there exist eigenvalues $\lambda_i < 1$ and $\lambda_j > 1$. We observe that the results are consistent with Theorem 3 and Corollary 4. The results show that the HAM method is convergent, but the rate of convergence is slow.

Finally, Tables 3 and 4 show that it is not necessary to choose the parameters of the classical methods in their convergence intervals. In the case of divergence, under the assumptions of Theorems 2 and 3, the application of the HAM can generate convergent homotopy-series vectors for the linear system (1).

#### 4. Conclusion

In this paper, we proposed applying the homotopy analysis method to the classical iterative methods for solving linear systems of equations. The theoretical results show that the HAM can be used to accelerate the convergence of the basic iterative methods. In addition, we showed that by applying the HAM to a divergent iterative scheme, it is possible to construct a convergent homotopy-series solution when the iteration matrix of the iterative scheme has particular properties. The numerical experiments confirm the theoretical results and show the efficiency of the new method.

#### References

1. R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, USA, 1962.
2. D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, NY, USA, 1971.
3. L. A. Hageman and D. M. Young, Applied Iterative Methods, Computer Science and Applied Mathematics, Academic Press, New York, NY, USA, 1981.
4. Y.-T. Li, C.-X. Li, and S.-L. Wu, “Improvements of preconditioned AOR iterative method for $L$-matrices,” Journal of Computational and Applied Mathematics, vol. 206, no. 2, pp. 656–665, 2007.
5. H. Wang and Y.-T. Li, “A new preconditioned AOR iterative method for $L$-matrices,” Journal of Computational and Applied Mathematics, vol. 229, no. 1, pp. 47–53, 2009.
6. L. Wang and Y. Song, “Preconditioned AOR iterative methods for $M$-matrices,” Journal of Computational and Applied Mathematics, vol. 226, no. 1, pp. 114–124, 2009.
7. J. H. Yun, “A note on preconditioned AOR method for $L$-matrices,” Journal of Computational and Applied Mathematics, vol. 220, no. 1-2, pp. 13–16, 2008.
8. Y. Zhang and T.-Z. Huang, “Modified iterative methods for nonnegative matrices and $M$-matrices linear systems,” Computers & Mathematics with Applications, vol. 50, no. 10–12, pp. 1587–1602, 2005.
9. T.-Z. Huang, X.-Z. Wang, and Y.-D. Fu, “Improving Jacobi methods for nonnegative $H$-matrices linear systems,” Applied Mathematics and Computation, vol. 186, no. 2, pp. 1542–1550, 2007.
10. B. Keramati, “An approach to the solution of linear system of equations by He's homotopy perturbation method,” Chaos, Solitons & Fractals, vol. 41, no. 1, pp. 152–156, 2009.
11. E. Yusufoğlu, “An improvement to homotopy perturbation method for solving system of linear equations,” Computers & Mathematics with Applications, vol. 58, no. 11-12, pp. 2231–2235, 2009.
12. H.-K. Liu, “Application of homotopy perturbation methods for solving systems of linear equations,” Applied Mathematics and Computation, vol. 217, no. 12, pp. 5259–5264, 2011.
13. S. Liao, Beyond Perturbation: Introduction to the Homotopy Analysis Method, vol. 2 of Modern Mechanics and Mathematics, Chapman & Hall/CRC Press, Boca Raton, Fla, USA, 2003.
14. S. J. Liao, The proposed homotopy analysis technique for the solution of nonlinear problems [Ph.D. thesis], Shanghai Jiao Tong University, Shanghai, China, 1992.
15. S. Liao, “On the homotopy analysis method for nonlinear problems,” Applied Mathematics and Computation, vol. 147, no. 2, pp. 499–513, 2004.
16. J.-H. He, “Homotopy perturbation technique,” Computer Methods in Applied Mechanics and Engineering, vol. 178, no. 3-4, pp. 257–262, 1999.
17. J.-H. He, “A coupling method of a homotopy technique and a perturbation technique for non-linear problems,” International Journal of Non-Linear Mechanics, vol. 35, no. 1, pp. 37–43, 2000.
18. J.-H. He, “Homotopy perturbation method: a new nonlinear analytical technique,” Applied Mathematics and Computation, vol. 135, no. 1, pp. 73–79, 2003.
19. A. Hadjidimos and A. Yeyios, “Symmetric accelerated overrelaxation (SAOR) method,” Mathematics and Computers in Simulation, vol. 24, no. 1, pp. 72–76, 1982.
20. T. A. Davis and Y. Hu, “The University of Florida sparse matrix collection,” ACM Transactions on Mathematical Software, vol. 38, no. 1, article 1, 2011.

Copyright © 2013 H. Nasabzadeh and F. Toutounian. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
