Abstract

We present several iterations based on the preconditioners introduced by Tarazaga and Cuellar (2009) and study the convergence of these methods for solving linear systems whose coefficient matrix is positive definite. Numerical examples show that they compete very well with the SOR iteration.

1. Introduction

For solving the large sparse linear system
\[ Ax = b, \tag{1} \]
where $A$ is a square nonsingular positive definite matrix, an iterative method is often considered. For any splitting $A = M - N$ with $M$ nonsingular, the basic iterative method for system (1) is
\[ x^{(k+1)} = M^{-1}N x^{(k)} + M^{-1}b, \qquad k = 0, 1, 2, \ldots. \tag{2} \]

To improve the convergence rate of the basic iterative method, the original system (1) is transformed into the preconditioned form
\[ PAx = Pb, \tag{3} \]
where $P$ is called the preconditioner or preconditioning matrix. Several preconditioned iterative methods have been proposed [16]. Since $P$ is nonsingular, (1) and (3) have the same solution; we consider here systems with a unique solution. It is well known that system (3) can be solved by the iteration
\[ x^{(k+1)} = (I - PA)\,x^{(k)} + Pb, \tag{4} \]
which is called the Richardson iteration for the preconditioned system (3). In this paper, we consider iteration methods of the form
\[ x^{(k+1)} = T x^{(k)} + c, \tag{5} \]
where $T$ represents the iteration matrix.
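As an illustration of iteration (4), the following is a minimal MATLAB sketch of the preconditioned Richardson iteration; the matrix $A$, the right-hand side $b$, and the diagonal preconditioner $P$ are hypothetical placeholders, not the data used later in Section 3.

```matlab
% Preconditioned Richardson iteration: x_{k+1} = (I - P*A)*x_k + P*b.
% A, b, and P are illustrative placeholders only.
n = 5;
A = diag(4*ones(n,1)) + diag(-ones(n-1,1),1) + diag(-ones(n-1,1),-1);  % SPD tridiagonal
b = ones(n,1);
P = diag(1 ./ diag(A));           % a simple diagonal preconditioner (Jacobi scaling)

x = zeros(n,1);                   % starting point x^{(0)}
T = eye(n) - P*A;                 % iteration matrix; convergence requires rho(T) < 1
for k = 1:100
    x = T*x + P*b;
end
fprintf('rho(T) = %.4f, residual = %.2e\n', max(abs(eig(T))), norm(b - A*x));
```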

Lemma 1 (see [7, 8]). For the iteration formula (5) to produce a sequence converging to the solution of (1) for any starting point $x^{(0)}$, it is necessary and sufficient that the spectral radius of the iteration matrix satisfy $\rho(T) < 1$.

We will use the following notation. A matrix $A = (a_{ij}) \in \mathbb{R}^{n \times n}$ is called row diagonally dominant if $|a_{ii}| \ge \sum_{j \ne i} |a_{ij}|$ for all $i$, and column diagonally dominant if $|a_{jj}| \ge \sum_{i \ne j} |a_{ij}|$ for all $j$. The Frobenius inner product of $A$ and $B$ is defined by $\langle A, B\rangle_F = \operatorname{trace}(B^{T}A)$, where $\operatorname{trace}(\cdot)$ denotes the trace of a matrix and $B^{T}$ stands for the transpose of $B$; the spectral radius of a matrix $T$ is denoted by $\rho(T)$. Let $A$ be decomposed as $A = D - L - U$, in which $D$ is the diagonal of $A$, and $-L$ and $-U$ are the strictly lower and strictly upper triangular parts of $A$, respectively.
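In MATLAB, the splitting $A = D - L - U$ and the diagonal dominance checks can be written as follows; this is only a sketch of the notation above, with a hypothetical test matrix.

```matlab
% Splitting A = D - L - U and diagonal dominance checks (illustrative sketch).
A = [4 -1 0; -1 4 -1; 0 -1 4];             % hypothetical test matrix
D = diag(diag(A));                          % diagonal part of A
L = -tril(A,-1);                            % -L is the strictly lower part of A
U = -triu(A, 1);                            % -U is the strictly upper part of A
assert(isequal(A, D - L - U));

offRow = sum(abs(A),2) - abs(diag(A));      % row sums of off-diagonal moduli
offCol = sum(abs(A),1)' - abs(diag(A));     % column sums of off-diagonal moduli
rowDominant = all(abs(diag(A)) >= offRow);  % row diagonal dominance
colDominant = all(abs(diag(A)) >= offCol);  % column diagonal dominance
```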

Using the Frobenius norm, the preconditioning matrix in [6] is the diagonal matrix $D_F = \operatorname{diag}(d_1, \ldots, d_n)$ with $d_i = a_{ii}/\lVert a_i \rVert_2^2$, where $a_i$ stands for the $i$th row of the matrix $A$.
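A brief MATLAB sketch of this construction is given below; the test matrix is hypothetical, and $d_i = a_{ii}/\lVert a_i\rVert_2^2$ is taken as the row-wise minimizer of the Frobenius norm of $I - D_F A$.

```matlab
% Frobenius norm diagonal preconditioner: d_i = a_ii / ||a_i||_2^2
% (row-wise minimizer of ||I - D_F*A||_F; test matrix is hypothetical).
A = [4 -1 0; -1 4 -1; 0 -1 4];
n = size(A,1);
rowNormSq = sum(A.^2, 2);                    % ||a_i||_2^2 for each row of A
DF = diag(diag(A) ./ rowNormSq);             % D_F = diag(a_ii / ||a_i||^2)
rhoF = max(abs(eig(eye(n) - DF*A)));         % spectral radius of I - D_F*A
```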

The second preconditioning matrix is $D_\infty$, which is computed to minimize the infinity norm of the iteration matrix $I - D_\infty A$.

The small gap of a matrix $A$ is defined by $\varepsilon(A) = \min_i \bigl(a_{ii} - \sum_{j \ne i} |a_{ij}|\bigr)$. Obviously, $\varepsilon(A)$ is positive for strictly diagonally dominant matrices. We can also suppose that the diagonal entries are positive; otherwise, this can be achieved by multiplying the corresponding rows by $-1$.

Then, the preconditioner obtained by minimizing the infinity norm is given by $D_\infty = \alpha I$ with $\alpha = 2/\bigl(\max_i \sum_j |a_{ij}| + \varepsilon(A)\bigr)$. We can easily see that this diagonal preconditioner is a constant multiple of the identity. Using the idea in [9], we can obtain the iterations associated with the preconditioners $D_F$ and $D_\infty$ as follows.
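The sketch below computes the small gap and this constant diagonal preconditioner under the reading $\alpha = 2/(\max_i \sum_j |a_{ij}| + \varepsilon(A))$; the test matrix is again hypothetical and is assumed to have a positive diagonal and strict row diagonal dominance.

```matlab
% Constant diagonal preconditioner minimizing ||I - alpha*A||_inf (sketch).
% Assumes a positive diagonal and strict row diagonal dominance.
A = [4 -1 0; -1 4 -1; 0 -1 4];                           % hypothetical test matrix
n = size(A,1);
gap   = min(diag(A) - (sum(abs(A),2) - abs(diag(A))));   % small gap epsilon(A)
alpha = 2 / (max(sum(abs(A),2)) + gap);                  % minimizing constant
Dinf  = alpha * eye(n);                                   % D_inf = alpha*I
rhoInf = norm(eye(n) - Dinf*A, inf);  % equals (Rmax - gap)/(Rmax + gap), Rmax = max row sum
```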

Starting from the iteration $x^{(k+1)} = (I - D_F A)\,x^{(k)} + D_F b$, the matrix $A$ is decomposed as $A = D - L - U$, and the lower triangular term is moved to the left-hand side; we obtain
\[ (I - D_F L)\, x^{(k+1)} = (I - D_F D + D_F U)\, x^{(k)} + D_F b. \]
Because $I - D_F L$ is nonsingular, we can easily get
\[ x^{(k+1)} = (I - D_F L)^{-1}(I - D_F D + D_F U)\, x^{(k)} + (I - D_F L)^{-1} D_F b. \]
This iteration is called the sequential Frobenius norm iteration.

Similarly, we build the sequential infinity norm iteration associated with $D_\infty$:
\[ (I - D_\infty L)\, x^{(k+1)} = (I - D_\infty D + D_\infty U)\, x^{(k)} + D_\infty b, \]
or, equivalently,
\[ x^{(k+1)} = (I - D_\infty L)^{-1}(I - D_\infty D + D_\infty U)\, x^{(k)} + (I - D_\infty L)^{-1} D_\infty b. \]
Now, we have two preconditioned SOR-type iterative methods which use $D_F$ and $D_\infty$ as preconditioners.
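A minimal MATLAB sketch of this sequential iteration for a diagonal preconditioner $P$ (either $D_F$ or $D_\infty$) follows; the data are illustrative placeholders, and the loop simply applies $(I - PL)\,x^{(k+1)} = (I - PD + PU)\,x^{(k)} + Pb$.

```matlab
% Sequential iteration for a diagonal preconditioner P (D_F or D_inf):
%   (I - P*L) x_{k+1} = (I - P*D + P*U) x_k + P*b,   with A = D - L - U.
% All data below are illustrative placeholders.
A = [4 -1 0; -1 4 -1; 0 -1 4];
b = ones(3,1);
n = size(A,1);
P = diag(diag(A) ./ sum(A.^2, 2));     % here: the Frobenius norm preconditioner D_F

D = diag(diag(A));  L = -tril(A,-1);  U = -triu(A,1);
M = eye(n) - P*L;                      % unit lower triangular, hence nonsingular
T = M \ (eye(n) - P*D + P*U);          % iteration matrix of the sequential method
x = zeros(n,1);
for k = 1:100
    x = M \ ((eye(n) - P*D + P*U)*x + P*b);
end
fprintf('rho = %.4f, residual = %.2e\n', max(abs(eig(T))), norm(b - A*x));
```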

In this paper, we first discuss, in Section 2, the convergence of the preconditioned SOR-type iterative methods which use $D_F$ and $D_\infty$ as preconditioners. In Section 3, we provide numerical experiments to illustrate the theoretical results obtained in Section 2; for suitable choices of the parameters, our methods have smaller spectral radii of the iteration matrices than the SOR method, which is shown through numerical examples.

2. Main Results

2.1. The Sequential Frobenius Norm Iteration

Theorem 2. Suppose that $A \in \mathbb{R}^{n \times n}$ is positive definite; then the sequential Frobenius norm iteration converges for any starting point $x^{(0)}$.

Proof. By a simple calculation, the iteration matrix can be written as $T_F = (I - D_F L)^{-1}(I - D_F D + D_F U)$. Let $\lambda$ be a corresponding eigenvalue with eigenvector $x \ne 0$; then $(I - D_F D + D_F U)\,x = \lambda (I - D_F L)\,x$. The remainder of the proof is similar to that in [9], so we omit it, and the theorem is obtained.

2.2. The Sequential Infinity Norm Iteration

Theorem 3. Suppose that $A$ is positive definite. If every diagonal entry of $A$ satisfies $\alpha\, a_{ii} < 1$, then the sequential infinity norm iteration converges for any starting point $x^{(0)}$.

Proof. The proof of this theorem is similar to the previous one. We notice that the diagonal entries of the matrix $I - D_\infty D$ are $1 - \alpha\, a_{ii}$; by the assumption $\alpha\, a_{ii} < 1$, the diagonal entries of this matrix are positive, which completes the proof.

Now, we modify the infinity norm preconditioner, which was constructed for diagonally dominant matrices. Since the eigenvalues of a positive definite matrix $A$ lie in the interval $(0, \lVert A\rVert_\infty]$, for any arbitrarily small number $\delta > 0$ the preconditioning matrix is defined by $D_\delta = \bigl(2/(\lVert A\rVert_\infty + \delta)\bigr) I$. Especially, $D_\delta = D_\infty$ if $\delta = \varepsilon(A)$.

Theorem 4. For any $\delta > 0$, if $A$ is positive definite, then the iteration associated with $D_\delta$ converges for any starting point $x^{(0)}$, where $D_\delta = \bigl(2/(\lVert A\rVert_\infty + \delta)\bigr) I$.

Proof. The proof is similar to that of Theorem 2; we only need to note the choice of $\delta$ and the corresponding diagonal entries of $I - D_\delta D$. Hence, we obtain the result.

Next, we obtain a general result covering the preceding positive definite preconditioners.

Theorem 5. Suppose that the positive definite matrices $A$ and $P$ satisfy the required condition, where $D$ is the diagonal of $A$; then the corresponding iteration converges for any starting point $x^{(0)}$.

Remark 6. In Theorem 5, the matrix $P$ is only required to satisfy the condition needed in the proof of Theorem 2; hence, this theorem is a general result covering the previous preconditioners.

3. Numerical Experiments

In this section, we provide numerical experiments to illustrate the theoretical results obtained in Section 2. All numerical experiments are carried out using MATLAB 7.1. The spectral radii of the various iteration matrices are shown in Figures 1 and 2. For simplicity of comparison, the same parameters are used in all cases. Let $\rho_{\mathrm{SOR}}$ denote the spectral radii of the SOR iteration matrices, $\rho_F$ the spectral radii of the Frobenius norm preconditioned iteration matrices, $\rho_\infty$ the spectral radii of the infinity norm preconditioned iteration matrices, and $\rho_\delta$ the spectral radii of the iteration matrices for the modified infinity norm preconditioner $D_\delta$.
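The kind of comparison reported in Figures 1 and 2 can be sketched in MATLAB as follows; the test matrix and the relaxation parameter $\omega$ are hypothetical choices, not the data behind the figures.

```matlab
% Compare spectral radii: SOR vs. the preconditioned sequential iterations.
% Test matrix and omega are hypothetical, not the data of Figures 1 and 2.
n = 20;
A = diag(4*ones(n,1)) + diag(-ones(n-1,1),1) + diag(-ones(n-1,1),-1);
D = diag(diag(A));  L = -tril(A,-1);  U = -triu(A,1);

omega = 1.2;                                        % illustrative SOR parameter
Tsor  = (D - omega*L) \ ((1-omega)*D + omega*U);    % SOR iteration matrix

seqRho = @(P) max(abs(eig((eye(n) - P*L) \ (eye(n) - P*D + P*U))));
DF    = diag(diag(A) ./ sum(A.^2, 2));              % Frobenius norm preconditioner
gap   = min(diag(A) - (sum(abs(A),2) - abs(diag(A))));
Dinf  = (2 / (max(sum(abs(A),2)) + gap)) * eye(n);  % infinity norm preconditioner

fprintf('rho_SOR = %.4f, rho_F = %.4f, rho_inf = %.4f\n', ...
        max(abs(eig(Tsor))), seqRho(DF), seqRho(Dinf));
```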

Remark 7. The numerical experiments indicate that the spectral radii of the iteration matrices with the three proposed preconditioners achieve a significant improvement over the spectral radii of the SOR iteration matrices.

Remark 8. As the proportion of negative entries increases, for random positive semidefinite matrices with half of their entries negative, the spectral radius of the iteration matrix with the infinity norm preconditioner becomes larger than 1. However, both infinity norm preconditioned iterations still have faster convergence rates than the SOR method.

Remark 9. Here, we retain both infinity norm iterations because there are cases in which one works better than the other.

Acknowledgments

The authors would like to thank the anonymous referees for their helpful comments and advice, which greatly improved the paper. This study was financially supported by the National Natural Science Foundation (nos. 11161041 and 71301111) and the Fundamental Research Funds for the Central Universities (no. 31920130005).