#### Abstract

By using the homotopy analysis method (HAM), we introduce an iterative method for solving linear systems. The HAM can be used to accelerate the convergence of the basic iterative methods. We also show that, by applying the HAM to a divergent iterative scheme, it is possible to construct a convergent homotopy-series solution when the iteration matrix *G* of the iterative scheme has particular properties, such as being symmetric or having real eigenvalues. Numerical experiments are given to show the efficiency of the new method.

#### 1. Introduction

Computational simulation of scientific and engineering problems often depends on solving linear systems of equations. Such systems frequently arise from discrete approximations to partial differential equations. Systems of linear equations can be solved either by direct or by iterative methods. Iterative methods are ideally suited for solving large, sparse systems. For the numerical solution of a large nonsingular linear system

$$Ax = b, \tag{1}$$

where $A \in \mathbb{R}^{n \times n}$ is given, $b \in \mathbb{R}^{n}$ is known, and $x \in \mathbb{R}^{n}$ is unknown, one class of iterative methods is based on a splitting of the matrix $A$, that is,

$$A = M - N, \tag{2}$$

where $M$ is taken to be invertible and cheap to invert, which means that a linear system with coefficient matrix $M$ is much more economical to solve than (1). Based on (2), (1) can be written in the fixed-point form

$$x = Gx + c, \qquad G = M^{-1}N, \quad c = M^{-1}b, \tag{3}$$

which yields the following iterative scheme for the solution of (1):

$$x^{(k+1)} = Gx^{(k)} + c, \qquad k = 0, 1, 2, \ldots. \tag{4}$$
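As a minimal numerical sketch of the splitting scheme (4), the following uses the Jacobi splitting $M = \operatorname{diag}(A)$ on a small illustrative matrix (not one of the paper's test matrices):

```python
import numpy as np

# Illustrative 3x3 system A x = b; the Jacobi splitting takes M = diag(A),
# which is trivially invertible, and N = M - A.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])

M = np.diag(np.diag(A))          # M: invertible and cheap to invert
N = M - A                        # splitting A = M - N
G = np.linalg.solve(M, N)        # iteration matrix G = M^{-1} N
c = np.linalg.solve(M, b)        # c = M^{-1} b

x = np.zeros(3)                  # initial guess x^{(0)}
for _ in range(200):             # scheme (4): x^{(k+1)} = G x^{(k)} + c
    x = G @ x + c

print(np.allclose(x, np.linalg.solve(A, b)))  # fixed point solves A x = b
```

Because this $A$ is strictly diagonally dominant, the Jacobi scheme converges and the fixed point of (4) coincides with the solution of (1).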

A necessary and sufficient condition for (4) to converge to the solution of (1) is $\rho(G) < 1$, where $\rho(\cdot)$ denotes the spectral radius. Some effective splitting iterative methods and preconditioning methods were presented for solving the linear system (1); see [1–9]. Recently, Keramati [10], Yusufoğlu [11], and Liu [12] applied the homotopy perturbation method to obtain the solution of linear systems and deduced conditions for checking the convergence of the homotopy series. In this work, we show how the homotopy analysis method may be regarded as an acceleration procedure based on the iterative method (4). We observe that it is not necessary that the basic method (4) be convergent. When $\rho(G) \ge 1$, it is sufficient that the eigenvalues $\lambda_i$, $i = 1, \ldots, n$, of the iteration matrix $G$ satisfy $\lambda_i < 1$, $i = 1, \ldots, n$ (or $\lambda_i > 1$, $i = 1, \ldots, n$). When $\rho(G) < 1$, by applying the homotopy analysis method to the basic iterative method (4), we can improve the rate of convergence of the iterative method (4). This paper is organized as follows. In Section 2, we introduce the basic concept of HAM, derive the conditions for convergence of the homotopy series, and apply the homotopy analysis method to the Jacobi, Richardson, SSOR, and SAOR methods. In Section 3, some numerical examples are presented to show the efficiency of the method. Finally, we make some concluding remarks in Section 4.
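The criterion $\rho(G) < 1$ is easy to check numerically. A short sketch (both matrices are illustrative assumptions, chosen only to exhibit a convergent and a divergent splitting):

```python
import numpy as np

def spectral_radius(G):
    """rho(G): largest modulus among the eigenvalues of G."""
    return max(abs(np.linalg.eigvals(G)))

# Jacobi iteration matrix of a strictly diagonally dominant matrix: rho < 1.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
M = np.diag(np.diag(A))
G = np.linalg.solve(M, M - A)
print(spectral_radius(G) < 1)    # True: the scheme (4) converges

# Reversing the dominance makes the same splitting divergent.
B = np.array([[1.0, 4.0], [5.0, 2.0]])
GB = np.linalg.solve(np.diag(np.diag(B)), np.diag(np.diag(B)) - B)
print(spectral_radius(GB) >= 1)  # True: the scheme (4) diverges
```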

#### 2. Basic Idea of HAM

The homotopy analysis method (HAM) [13, 14] was first proposed by Liao in 1992 and was further developed and improved by Liao for nonlinear problems in [15].

Here, we apply the homotopy analysis method (HAM) to problem (3) for finding the solution of (1), including the case $\rho(G) \ge 1$. Consider (3), where $x$ is the unknown vector of (1) and $G$ is the iteration matrix of an iterative method. Let $x_0$ denote an initial guess of the exact solution $x^{*}$, and $\hbar \neq 0$ a convergence control parameter. Then, we can apply the homotopy analysis method and define $N[v]$ by

$$N[v] = v - Gv - c \tag{5}$$

and a homotopy $H(v, q; \hbar)$ as follows:

$$H(v, q; \hbar) = (1 - q)(v - x_0) - q\hbar\, N[v], \tag{6}$$

where $v \in \mathbb{R}^{n}$ and $q \in [0, 1]$ is an embedding parameter. Hence, it is obvious that

$$H(v, 0; \hbar) = v - x_0, \qquad H(v, 1; \hbar) = -\hbar\,(v - Gv - c). \tag{7}$$

As the embedding parameter $q$ increases from $0$ to $1$, the solution of $H(v, q; \hbar) = 0$ varies continuously from the initial approximation $x_0$ to the exact solution $x^{*}$ of the original equation $x = Gx + c$. The homotopy analysis method uses the parameter $q$ as an expanding parameter (see [16–18]) to obtain

$$v = v_0 + \sum_{m=1}^{\infty} v_m q^{m}, \tag{8}$$

and it gives an approximation to the solution of (3) as

$$x \approx v_0 + \sum_{m=1}^{\infty} v_m. \tag{9}$$

By substituting (8) into the equation $H(v, q; \hbar) = 0$ given by (6) and equating the terms with identical powers of $q$, we can obtain

$$v_0 = x_0, \qquad v_1 = \hbar\,(v_0 - Gv_0 - c), \qquad v_m = \bigl[(1 + \hbar)I - \hbar G\bigr] v_{m-1}, \quad m \ge 2. \tag{10}$$

This implies that

$$v_m = \bigl[(1 + \hbar)I - \hbar G\bigr]^{m-1} v_1, \qquad m \ge 1. \tag{11}$$

Taking $q = 1$ in (8) yields that

$$x = v_0 + \sum_{m=1}^{\infty} v_m,$$

where $v_0 = x_0$ is an initial guess of the exact solution $x^{*}$. Therefore,

$$x = x_0 + \sum_{m=1}^{\infty} \bigl[(1 + \hbar)I - \hbar G\bigr]^{m-1}\, \hbar\,(x_0 - Gx_0 - c). \tag{12}$$

By setting $T_{\hbar} = (1 + \hbar)I - \hbar G$, we obtain

$$x = x_0 + \sum_{m=1}^{\infty} T_{\hbar}^{\,m-1} v_1, \qquad v_1 = \hbar\,(x_0 - Gx_0 - c). \tag{13}$$

It is obvious that if $\rho(T_{\hbar}) < 1$, then the series $\sum_{m=1}^{\infty} T_{\hbar}^{\,m-1} v_1$ converges and we have

$$x = x_0 + (I - T_{\hbar})^{-1} v_1,$$

which is the exact solution of (3). A series of vectors $\{x_k\}$ can be computed by

$$x_k = x_0 + \sum_{m=1}^{k} T_{\hbar}^{\,m-1} v_1, \qquad k = 1, 2, \ldots, \tag{14}$$

and our aim is to choose the convergence control parameter $\hbar$ so that $\rho(T_{\hbar}) < 1$. For improving the rate of convergence of the iterative method, we present the following theorem.
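The partial sums (14) are cheap to form: each new term multiplies the previous one by $T_{\hbar}$. A minimal sketch, assuming the recursion $v_{m+1} = T_{\hbar} v_m$ with $T_{\hbar} = (1+\hbar)I - \hbar G$ and $v_1 = \hbar(x_0 - Gx_0 - c)$ as derived above ($G$ and $c$ here are illustrative, not from the paper's test set):

```python
import numpy as np

def ham_partial_sums(G, c, x0, h, k):
    """Partial sums x_k = x0 + sum_{m=1}^{k} T^{m-1} v1 of the homotopy
    series, with T = (1+h)I - h*G and v1 = h*(x0 - G x0 - c)."""
    n = len(c)
    T = (1.0 + h) * np.eye(n) - h * G
    v = h * (x0 - G @ x0 - c)    # v_1
    x = x0.copy()
    for _ in range(k):
        x = x + v                # add the current term of the series
        v = T @ v                # v_{m+1} = T v_m
    return x

# Illustrative convergent example; note h = -1 gives T = G, so the homotopy
# series reproduces the basic scheme (4).
G = np.array([[0.0, 0.5], [0.4, 0.0]])
c = np.array([1.0, 1.0])
exact = np.linalg.solve(np.eye(2) - G, c)
x = ham_partial_sums(G, c, np.zeros(2), h=-1.0, k=400)
print(np.allclose(x, exact))     # True
```

Whenever $\rho(T_{\hbar}) < 1$, the limit is independent of $\hbar$: it is always the solution of $x = Gx + c$.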

Theorem 1. *Suppose that $\rho(G) < 1$, and let $\lambda_j$, $j = 1, \ldots, n$, and $\mu_j(\hbar)$, $j = 1, \ldots, n$, be the eigenvalues of $G$ and $T_{\hbar}$, respectively. Let $\lambda_j = a_j + \mathrm{i}\, b_j$, and let $q_j(\hbar) = |\mu_j(\hbar)|^2 - 1$, $j = 1, \ldots, n$. If $a_j < 1$ and $a_j^2 + b_j^2 < 1$, $j = 1, \ldots, n$, then *(i)*the quadratic equation $q_j(\hbar) = 0$, $j = 1, \ldots, n$, has simple real roots $\hbar_j^{-} < \hbar_j^{+} = 0$, *(ii)*$\hbar = -1$ belongs to the interval $I = \bigl(\max_j \hbar_j^{-},\, 0\bigr)$ and $I \neq \emptyset$, *(iii)*for each $\hbar \in I$ and $j = 1, \ldots, n$, the relation $|\mu_j(\hbar)| < 1$ holds. *

*Proof. *(i) We begin by defining two index sets $J_1 = \{\, j : b_j = 0 \,\}$ and $J_2 = \{\, j : b_j \neq 0 \,\}$. Since $\mu_j(\hbar) = 1 + \hbar(1 - \lambda_j)$ and hence $|\mu_j(\hbar)|^2 = \bigl(1 + \hbar(1 - a_j)\bigr)^2 + \hbar^2 b_j^2$, for $j \in J_1$, we have

$$q_j(\hbar) = (1 - a_j)^2 \hbar^2 + 2(1 - a_j)\hbar.$$

So, $q_j(\hbar) = 0$, $j \in J_1$, has simple real roots

$$\hbar_j^{-} = \frac{-2}{1 - a_j}, \qquad \hbar_j^{+} = 0.$$

For $j \in J_2$, we have $b_j \neq 0$ and

$$q_j(\hbar) = \bigl[(1 - a_j)^2 + b_j^2\bigr]\hbar^2 + 2(1 - a_j)\hbar.$$

So, by using the assumption $a_j < 1$, $q_j(\hbar) = 0$, $j \in J_2$, has also simple real roots $\hbar_j^{-}$, $\hbar_j^{+}$, defined as follows:

$$\hbar_j^{-} = \frac{-2(1 - a_j)}{(1 - a_j)^2 + b_j^2}, \qquad \hbar_j^{+} = 0.$$

(ii) From part (i), we observe that $\hbar_j^{+} = 0$ and, since $a_j^2 + b_j^2 < 1$ gives $(1 - a_j)^2 + b_j^2 < 2(1 - a_j)$, that $\hbar_j^{-} < -1$ for every $j$. This implies that $-1 \in I = \bigl(\max_j \hbar_j^{-},\, 0\bigr)$.

(iii) From part (i), we also observe that $q_j(\hbar) < 0$ for each $\hbar \in I$ and $j = 1, \ldots, n$, because each upward quadratic $q_j$ is negative strictly between its roots $\hbar_j^{-}$ and $\hbar_j^{+} = 0$. We have $|\mu_j(\hbar)|^2 = 1 + q_j(\hbar) < 1$, and the relation $\rho(T_{\hbar}) < 1$ holds.
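The interval of Theorem 1 can be computed directly from the spectrum of $G$. A sketch under the assumptions above, i.e., $\hbar_j^{-} = -2(1 - a_j)/\bigl[(1 - a_j)^2 + b_j^2\bigr]$ and $T_{\hbar} = (1+\hbar)I - \hbar G$ (the matrix $G$ is an illustrative one with complex eigenvalues):

```python
import numpy as np

def ham_interval_left(G):
    """Left endpoint max_j h_j^- of the convergence interval (max_j h_j^-, 0),
    with h_j^- = -2(1-a_j)/((1-a_j)^2 + b_j^2) for lambda_j = a_j + i b_j."""
    lam = np.linalg.eigvals(G)
    a, b = lam.real, lam.imag
    roots = -2.0 * (1.0 - a) / ((1.0 - a) ** 2 + b ** 2)
    return roots.max()

G = np.array([[0.1, 0.6], [-0.6, 0.1]])    # eigenvalues 0.1 +/- 0.6i, rho < 1
left = ham_interval_left(G)
assert left < -1 < 0                        # h = -1 lies inside the interval

# Every h in (left, 0) gives rho(T_h) < 1, as claimed in part (iii).
for h in np.linspace(left + 1e-6, -1e-6, 7):
    T = (1.0 + h) * np.eye(2) - h * G
    assert max(abs(np.linalg.eigvals(T))) < 1
print("ok")
```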

The following theorem shows that, by applying the HAM to an iterative scheme which is divergent, it is possible to construct convergent homotopy-series vectors when the iteration matrix $G$ has particular properties.

Theorem 2. *Let $\lambda_i$, $i = 1, \ldots, n$, and $\mu_i(\hbar)$, $i = 1, \ldots, n$, be the eigenvalues of $G$ and $T_{\hbar}$, respectively. Let $\lambda_{\min} = \min_i \lambda_i$ and $\lambda_{\max} = \max_i \lambda_i$. Suppose that $\lambda_i \in \mathbb{R}$, $i = 1, \ldots, n$. *(i)*If $\lambda_i < 1$, for $i = 1, \ldots, n$, and $\hbar \in \bigl(-2/(1 - \lambda_{\min}),\, 0\bigr)$, then $\rho(T_{\hbar}) < 1$.*(ii)*If $\lambda_i > 1$, for $i = 1, \ldots, n$, and $\hbar \in \bigl(0,\, 2/(\lambda_{\max} - 1)\bigr)$, then $\rho(T_{\hbar}) < 1$.*

*Proof. *From (13), we have $\mu_i(\hbar) = 1 + \hbar(1 - \lambda_i)$, $i = 1, \ldots, n$. So, it is sufficient to have

$$-2 < \hbar(1 - \lambda_i) < 0, \qquad i = 1, \ldots, n. \tag{22}$$

Under the hypothesis of part (i), we have $1 - \lambda_i > 0$, $i = 1, \ldots, n$, and the relation (22) holds if $\hbar \in \bigl(-2/(1 - \lambda_{\min}),\, 0\bigr)$, and the proof of (i) is complete. A similar argument holds if $\lambda_i > 1$, for $i = 1, \ldots, n$, and part (ii) follows.
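Part (ii) can be checked numerically on a deliberately divergent scheme. A sketch (the matrix is an illustrative symmetric $G$ with all eigenvalues above 1, and $T_{\hbar} = (1+\hbar)I - \hbar G$ is assumed as above):

```python
import numpy as np

# Divergent scheme: G symmetric with all eigenvalues > 1 (illustrative).
G = np.array([[3.0, 1.0], [1.0, 2.0]])
lam = np.linalg.eigvalsh(G)                 # real eigenvalues, ascending
assert lam.min() > 1                        # hypothesis of part (ii)
assert max(abs(lam)) >= 1                   # rho(G) >= 1: scheme (4) diverges

# Part (ii): any h in (0, 2/(lam_max - 1)) makes rho(T_h) < 1.
h = 1.0 / (lam.max() - 1.0)                 # midpoint of that interval
T = (1.0 + h) * np.eye(2) - h * G
print(max(abs(np.linalg.eigvals(T))) < 1)   # True
```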

When the assumption of part (i) (or part (ii)) of Theorem 2 does not hold, the following theorem shows that, for certain cases, instead of (3) we can consider the equivalent equation

$$x = \tilde{G}x + \tilde{c}, \qquad \tilde{G} = 2G - G^{2}, \quad \tilde{c} = (I - G)c, \tag{23}$$

obtained by applying $(I - G)$ to both sides of $(I - G)x = c$, in which the iteration matrix $\tilde{G}$ has eigenvalues with the desired properties.

Theorem 3. *Let $\lambda_i$, $i = 1, \ldots, n$, and $\tilde{\lambda}_i$, $i = 1, \ldots, n$, be the eigenvalues of $G$ and $\tilde{G}$, respectively. If $\lambda_i$ is real and $\lambda_i \neq 1$ for $i = 1, \ldots, n$, then $\tilde{\lambda}_i < 1$ for $i = 1, \ldots, n$. *

*Proof. *The proof immediately follows from the fact that $\tilde{\lambda}_i = 2\lambda_i - \lambda_i^{2} = 1 - (1 - \lambda_i)^{2}$.
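A sketch of this eigenvalue mapping, under the assumption that the modified iteration matrix is $\tilde{G} = 2G - G^{2}$ (so that $\tilde{\lambda}_i = 1 - (1 - \lambda_i)^{2}$); the spectrum chosen here is illustrative and deliberately straddles 1:

```python
import numpy as np

# G with real eigenvalues on both sides of 1, so Theorem 2 does not apply
# to G directly; the modified matrix 2G - G^2 has all eigenvalues below 1.
G = np.diag([-3.0, 0.5, 2.0])          # illustrative real spectrum
Gt = 2.0 * G - G @ G                   # eigenvalues 1 - (1 - lambda)^2
lt = np.linalg.eigvals(Gt).real
print(np.all(lt < 1))                  # True: Theorem 2(i) now applies
```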

The following corollary shows that, by using the modified linear equation (23) and the homotopy analysis method with the corresponding iteration matrix $\tilde{T}_{\hbar} = (1 + \hbar)I - \hbar\tilde{G}$, we can construct a convergent homotopy series for the linear system (1).

Corollary 4. *If $G$ has only real eigenvalues, then there exists $\hbar$ such that the series of vectors generated by

$$x_k = x_0 + \sum_{m=1}^{k} \tilde{T}_{\hbar}^{\,m-1}\, \tilde{v}_1, \qquad \tilde{v}_1 = \hbar\,(x_0 - \tilde{G}x_0 - \tilde{c}), \tag{24}$$

converges to the exact solution of (1). *

*Proof. *The proof immediately follows from Theorems 2 and 3.

This corollary establishes that the series of vectors generated by (24) always converges if the iteration matrix $G$ is a symmetric matrix. When $A$ is symmetric with positive diagonal elements, (1) can be written as follows:

$$D^{-1/2} A D^{-1/2}\,\bigl(D^{1/2}x\bigr) = D^{-1/2}b,$$

where $D$ is the diagonal of $A$. Denoting again by $A$, $x$, and $b$ the expressions $D^{-1/2} A D^{-1/2}$, $D^{1/2}x$, and $D^{-1/2}b$, respectively, it is obvious that the new coefficient matrix $A$ is still symmetric and can therefore be written in the form $A = I - L - L^{T}$, where $L$ is strictly lower triangular. An immediate consequence of Corollary 4 and the above discussion is the following results. The series of vectors generated by (24) converges when $A$ is a symmetric matrix and the iterative method is the Richardson method, whose iteration matrix $G = I - A$ is symmetric. The series of vectors generated by (24) converges when $A$ is a symmetric matrix and the iterative method is the Jacobi method, whose iteration matrix $G = I - D^{-1}A$ is similar to the symmetric matrix $I - D^{-1/2} A D^{-1/2}$ and therefore has real eigenvalues. If $A$ is a symmetric matrix and the iterative method is the SAOR method, then the series of vectors generated by (24) converges whenever the parameters $r$ and $\omega$ are chosen so that the SAOR iteration matrix has real eigenvalues; sufficient conditions, stated in terms of the minimum and the maximum eigenvalues of $L + L^{T}$, are given in Theorem 2 of [19]. If $A$ is a symmetric matrix and the iterative method is the SSOR method, then the series of vectors generated by (24) converges under the corresponding condition on $\omega$; this result immediately follows from the fact that the SAOR method reduces to the SSOR method for $r = \omega$.
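The symmetric case can be exercised end to end. A sketch under the assumptions used above, namely $\tilde{G} = 2G - G^{2}$, $\tilde{c} = (I - G)c$, and the HAM recursion with $\tilde{T}_{\hbar} = (1+\hbar)I - \hbar\tilde{G}$; the matrix $A$ is illustrative, chosen so that the plain Jacobi method diverges:

```python
import numpy as np

# Symmetric A that is not diagonally dominant: the Jacobi scheme diverges.
A = np.array([[1.0, 2.0], [2.0, 1.0]])
b = np.array([3.0, 3.0])
D = np.diag(np.diag(A))
G = np.eye(2) - np.linalg.solve(D, A)      # Jacobi iteration matrix, rho(G)=2
c = np.linalg.solve(D, b)

Gt = 2.0 * G - G @ G                       # modified equation (23)
ct = (np.eye(2) - G) @ c
lt = np.linalg.eigvals(Gt).real            # all < 1 by Theorem 3
h = -1.0 / (1.0 - lt.min())                # inside (-2/(1 - min), 0)

# Homotopy series (24) for the modified equation.
x0 = np.zeros(2)
T = (1.0 + h) * np.eye(2) - h * Gt
x, v = x0.copy(), h * (x0 - Gt @ x0 - ct)
for _ in range(2000):
    x, v = x + v, T @ v
print(np.allclose(x, np.linalg.solve(A, b)))   # True
```

Here $\rho(G) = 2$, so (4) diverges, yet the HAM series on the modified equation converges to the solution of $Ax = b$.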

#### 3. Numerical Examples

For numerical comparison, we use some matrices from the University of Florida Sparse Matrix Collection [20]. These matrices and their properties are shown in Table 1. We determined the spectral radii of the iteration matrices of the classical SOR, AOR, SSOR, SAOR, Jacobi, and Richardson methods, as well as those of the corresponding matrices $T_{\hbar}$ after the application of the HAM with the experimentally computed optimal value of $\hbar$. In Tables 2–4, we list $\rho(G)$, $\rho(T_{\hbar})$, the interval of convergence introduced in Theorems 1 and 2, the experimentally computed optimal value of $\hbar$, and the spectral radius of the iteration matrix introduced in Corollary 4.

In Table 2, we consider the convergent classical methods ($\rho(G) < 1$). It is easy to verify that the numerical results are consistent with Theorem 1, and we observe that, for the convergent classical methods, by choosing a suitable convergence control parameter $\hbar$, the rate of convergence of the HAM is faster than that of the corresponding classical method.

In Table 3, we consider the divergent classical methods for which $\lambda_i < 1$ (or $\lambda_i > 1$) for $i = 1, \ldots, n$. We can see that the numerical results are consistent with Theorem 2. These results show that, by applying the HAM to an iterative scheme which is divergent, it is possible to construct convergent homotopy-series vectors when the iteration matrix has the mentioned properties.

In Table 4, we report the results obtained for the symmetric matrix Si2, which has positive diagonal elements. For this example, the classical methods diverge, and the iteration matrix has eigenvalues on both sides of 1, so neither part of Theorem 2 applies directly. We observe that the results are consistent with Theorem 3 and Corollary 4. The results show that the HAM is convergent, but the rate of convergence is slow.

Finally, Tables 3 and 4 show that it is not necessary to choose the parameters $r$ and $\omega$ in the convergence interval of the classical methods. In the case of divergence, under the assumptions of Theorems 2 and 3, the application of the HAM can generate convergent homotopy-series vectors for the linear system (1).

#### 4. Conclusion

In this paper, we proposed applying the homotopy analysis method to the classical iterative methods for solving linear systems of equations. The theoretical results show that the HAM can be used to accelerate the convergence of the basic iterative methods. In addition, we showed that, by applying the HAM to a divergent iterative scheme, it is possible to construct a convergent homotopy-series solution when the iteration matrix of the iterative scheme has particular properties. The numerical experiments confirm the theoretical results and show the efficiency of the new method.