Mathematical Problems in Engineering
Volume 2009, Article ID 794589, 9 pages
http://dx.doi.org/10.1155/2009/794589
Research Article

A Modification of Minimal Residual Iterative Method to Solve Linear Systems

1School of Mathematics and Computer Science, Fuyang Normal College, Fuyang Anhui 236032, China
2Department of Mathematics, East China Normal University, Shanghai 200062, China

Received 25 November 2008; Revised 16 February 2009; Accepted 4 March 2009

Academic Editor: Alois Steindl

Copyright © 2009 Xingping Sheng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We give a modification of the minimal residual iteration (MR), called 1V-DSMR, to solve the linear system $Ax = b$. By analysis, we find the modified iteration to be a projection technique; moreover, the modification gives a better (at least the same) reduction of the residual error than MR. Finally, a numerical example is given to demonstrate the reduction of the residual error achieved by 1V-DSMR compared with MR.

1. Introduction

One of the important computational problems in applied science and engineering is the solution of nonsingular linear systems of equations
$$Ax = b, \qquad (1.1)$$
where $A \in \mathbb{R}^{n \times n}$ is a symmetric positive definite matrix (referred to as an SPD matrix), $b \in \mathbb{R}^{n}$ is given, and $x \in \mathbb{R}^{n}$ is unknown. Iterative methods for this problem are surveyed in the excellent papers [1, 2]. Most of the existing practical iterative techniques for solving large linear systems of the form (1.1) utilize a projection process in one way or another; see, for example, [3–9].

Projection techniques appear in different forms in many other areas of scientific computing and can be formulated in abstract Hilbert function spaces and in finite element spaces. In essence, a projection technique attempts to solve a system of equations by successively correcting an approximate solution so that its residual becomes small in some norm. The idea of a projection process is to extract an approximate solution to (1.1) from a subspace of $\mathbb{R}^{n}$. Denote by $\mathcal{K}$ and $\mathcal{L}$ the search subspace and the constraints subspace, respectively. Let $m$ be their dimension and let $x_{0}$ be an initial guess to the solution of (1.1). A projection method onto the subspace $\mathcal{K}$ and orthogonal to $\mathcal{L}$ is a process which finds an approximate solution $\tilde{x}$ to (1.1) by imposing the Petrov-Galerkin conditions that $\tilde{x}$ belongs to the affine space $x_{0} + \mathcal{K}$ and that the new residual vector is orthogonal to $\mathcal{L}$, that is,
$$\text{find } \tilde{x} \in x_{0} + \mathcal{K} \quad \text{such that} \quad b - A\tilde{x} \perp \mathcal{L}. \qquad (1.2)$$
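To make condition (1.2) concrete in the one-dimensional case used throughout this paper, the following short derivation (a standard computation, written out here for clarity rather than quoted from the paper) shows how a single search direction $v$ spanning $\mathcal{K}$ and a single constraint direction $w$ spanning $\mathcal{L}$ determine the update:
\[
\tilde{x} = x_{0} + t\,v, \qquad
0 = (b - A\tilde{x},\, w) = (r_{0}, w) - t\,(A v, w)
\;\Longrightarrow\;
t = \frac{(r_{0}, w)}{(A v, w)},
\]
so that
\[
\tilde{x} = x_{0} + \frac{(r_{0}, w)}{(A v, w)}\, v, \qquad r_{0} = b - A x_{0}.
\]
Choosing $v = r_{0}$ and $w = A r_{0}$ yields exactly the minimal residual step recalled in Section 2.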

From this point of view, the basic iterative methods for solving (1.1), such as Gauss-Seidel Iteration (GS), Steepest Descent Iteration (SD), Minimal Residual Iteration (MR), and Residual Norm Steepest Descent Iteration (RNSD), all can be viewed as a special case of the projection techniques.

In [2], Ujević obtained a new iterative method for solving (1.1), which can be considered a modification of the Gauss-Seidel method. In [10], Jing and Huang pointed out that this iterative method is also a projection process and named it the “one-dimensional double successive projection method” (referred to as 1D-DSPM). In the same paper [10], the authors obtained another iterative method, named the “two-dimensional double successive projection method” (referred to as 2D-DSPM). The theory indicates that 2D-DSPM gives a better reduction of the error than 1D-DSPM.

2. Notations and Preliminaries

In this paper, we will consider the following linear system of equations:
$$Ax = b, \qquad (2.1)$$
where $A$ is a positive definite but not necessarily symmetric matrix of order $n$, $b \in \mathbb{R}^{n}$ is a given vector, and $x \in \mathbb{R}^{n}$ is unknown. For the linear system (2.1), we can use the classical minimal residual iteration, which can be found in [11]. Here, we will give a modification of the minimal residual iteration to solve the linear system (2.1). We call the modified method one-vector double successive MR (abbreviated 1V-DSMR) and compare the reduction of the residual error at each step between the modified iteration and the original MR. We find that the modified iteration gives a better reduction of the residual error than the original MR. Throughout, $(u, v)$ denotes the inner product of the vectors $u$ and $v$, as defined below.

We define the inner products as
$$(u, v) = v^{T} u, \qquad (2.2)$$
$$\langle u, v \rangle = (A u, v) = v^{T} A u. \qquad (2.3)$$

In the remainder of this section, we recall the minimal residual iterative method and give some properties of this iteration.

For the linear system (2.1), we can use the following algorithm, called the minimal residual iteration, which can be found in [11].

Algorithm 2.1. (1) Choose an initial guess solution $x_{0}$ to (2.1), and set $k = 0$.
(2) Calculate
$$r_{0} = b - A x_{0}. \qquad (2.4)$$
(3) If $\| r_{k} \| \le \varepsilon$, then stop; else,
(4) calculate
$$\alpha_{k} = \frac{(r_{k}, A r_{k})}{(A r_{k}, A r_{k})}, \quad x_{k+1} = x_{k} + \alpha_{k} r_{k}, \quad r_{k+1} = r_{k} - \alpha_{k} A r_{k}, \quad k := k + 1. \qquad (2.5)$$
(5) Go to step (3).
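For readers who wish to experiment, a compact MATLAB sketch of Algorithm 2.1 is given below. It is an illustrative implementation only: the tolerance tol and the iteration cap maxit are parameters introduced here, not quantities fixed by the paper.

function [x, resnorms] = mr_iteration(A, b, x0, tol, maxit)
% Minimal residual iteration (Algorithm 2.1) for A*x = b,
% where A is positive definite but not necessarily symmetric.
x = x0;
r = b - A*x;                        % step (2): initial residual
resnorms = norm(r);                 % record ||r_k|| for later plotting
for k = 1:maxit
    if norm(r) <= tol, break; end   % step (3): stopping test
    Ar = A*r;
    alpha = (r'*Ar) / (Ar'*Ar);     % minimizes ||r - alpha*A*r||_2
    x = x + alpha*r;                % step (4): update the iterate
    r = r - alpha*Ar;               % update the residual cheaply
    resnorms(end+1) = norm(r);
end
end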

The minimal residual iteration can be interpreted with projection techniques. Here we represent the principle of this method in our uniform notation as follows:
$$x_{k+1} = x_{k} + \alpha_{k} r_{k}, \qquad (2.6)$$
where $\alpha_{k} = \dfrac{(r_{k}, A r_{k})}{(A r_{k}, A r_{k})}$.

If we choose $\mathcal{K} = \operatorname{span}\{r_{k}\}$ and $\mathcal{L} = \operatorname{span}\{A r_{k}\}$, then (1.2) turns to find
$$x_{k+1} \in x_{k} + \operatorname{span}\{r_{k}\} \quad \text{such that} \quad r_{k+1} \perp A r_{k}, \qquad (2.7)$$
where $r_{k+1} = b - A x_{k+1}$.

Equation (2.7) can be represented in terms of the inner product as
$$(r_{k} - \alpha_{k} A r_{k}, A r_{k}) = 0, \qquad (2.8)$$
which is
$$\alpha_{k} (A r_{k}, A r_{k}) = (r_{k}, A r_{k}), \qquad (2.9)$$
giving rise to $\alpha_{k} = \dfrac{(r_{k}, A r_{k})}{(A r_{k}, A r_{k})}$, which is the same as in (2.6).

With this special choice of $\mathcal{K}$ and $\mathcal{L}$ at every step, (2.6) is the minimal residual iteration (MR); it is now clear that MR is a special case of projection methods.

For the MR, we have the following property.

Lemma 2.2. Let $r_{k}$ and $r_{k+1}$ be generated by Algorithm 2.1. Then we have
$$(r_{k+1}, r_{k+1}) = (r_{k}, r_{k}) - \frac{\langle r_{k}, r_{k} \rangle^{2}}{\langle r_{k}, A r_{k} \rangle}, \qquad (2.10)$$
where $(\cdot, \cdot)$ and $\langle \cdot, \cdot \rangle$ are defined in (2.2) and (2.3), respectively.

Proof. Using (2.6), we obtain
$$r_{k+1} = b - A x_{k+1} = r_{k} - \alpha_{k} A r_{k}, \qquad (2.11)$$
where $\alpha_{k} = \dfrac{(r_{k}, A r_{k})}{(A r_{k}, A r_{k})}$. Hence we have
$$(r_{k+1}, r_{k+1}) = (r_{k}, r_{k}) - 2 \alpha_{k} (r_{k}, A r_{k}) + \alpha_{k}^{2} (A r_{k}, A r_{k}) = (r_{k}, r_{k}) - \frac{(r_{k}, A r_{k})^{2}}{(A r_{k}, A r_{k})}, \qquad (2.12)$$
which coincides with (2.10) since $\langle r_{k}, r_{k} \rangle = (r_{k}, A r_{k})$ and $\langle r_{k}, A r_{k} \rangle = (A r_{k}, A r_{k})$.
From Lemma 2.2, we can easily get the reduction of the residual error of MR as follows:
$$(r_{k}, r_{k}) - (r_{k+1}, r_{k+1}) = \frac{(r_{k}, A r_{k})^{2}}{(A r_{k}, A r_{k})} \ge 0. \qquad (2.13)$$
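As a quick check of (2.13), the following small worked example (with data chosen here for illustration, not taken from the paper) applies one MR step to a 2-by-2 positive definite unsymmetric system:
\[
A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}, \quad
b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad
x_{0} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\;\Longrightarrow\;
r_{0} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad
A r_{0} = \begin{pmatrix} 3 \\ 2 \end{pmatrix},
\]
\[
\alpha_{0} = \frac{(r_{0}, A r_{0})}{(A r_{0}, A r_{0})} = \frac{5}{13}, \qquad
(r_{1}, r_{1}) = (r_{0}, r_{0}) - \frac{(r_{0}, A r_{0})^{2}}{(A r_{0}, A r_{0})}
= 2 - \frac{25}{13} = \frac{1}{13},
\]
in agreement with the residual computed directly, $r_{1} = r_{0} - \alpha_{0} A r_{0} = \frac{1}{13}(-2,\, 3)^{T}$.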

3. An Interpretation of 1V-DSMR with Projection Technique

In this section, we will give the modification of the minimal residual iterative method, abbreviated 1V-DSMR. We can present this method in our uniform notation as follows:
$$x_{k+1} = x_{k} + \alpha_{k} r_{k} + \beta_{k} v, \qquad (3.1)$$
where $\alpha_{k} = \dfrac{(r_{k}, A r_{k})}{(A r_{k}, A r_{k})}$ and $\beta_{k} = \dfrac{(r_{k+1/2}, A v)}{(A v, A v)}$, with $v$ a chosen second search vector and $r_{k+1/2} = r_{k} - \alpha_{k} A r_{k}$.

We investigate 1V-DSMR in two steps.

The first step is to choose $\mathcal{K}_{1} = \operatorname{span}\{r_{k}\}$ and $\mathcal{L}_{1} = \operatorname{span}\{A r_{k}\}$; then it turns into the proceeding of MR, so we have $x_{k+1/2} = x_{k} + \alpha_{k} r_{k}$ with $\alpha_{k}$ as in (2.6).

The next step chooses, in a similar way, $\mathcal{K}_{2} = \operatorname{span}\{v\}$ and $\mathcal{L}_{2} = \operatorname{span}\{A v\}$; denote $r_{k+1/2} = b - A x_{k+1/2}$, and (1.2) turns to find
$$x_{k+1} \in x_{k+1/2} + \operatorname{span}\{v\} \quad \text{such that} \quad r_{k+1} \perp A v, \qquad (3.2)$$
where $r_{k+1} = b - A x_{k+1}$.

Equation (3.2) can be represented in terms of the inner product as
$$(r_{k+1/2} - \beta_{k} A v, A v) = 0, \qquad (3.3)$$
which is
$$\beta_{k} (A v, A v) = (r_{k+1/2}, A v). \qquad (3.4)$$
This gives rise to $\beta_{k} = \dfrac{(r_{k+1/2}, A v)}{(A v, A v)}$, which is the same as in (3.1).

If we choose a special vector $v$ in this way, then (3.1) is a modification of the minimal residual iteration, which is named 1V-DSMR; it is now clear that 1V-DSMR is also a special case of projection methods.

For 1V-DSMR, we have the following relation of residual errors.

Theorem 3.1. Let $x_{k+1}$ be generated by (3.1) and $r_{k+1} = b - A x_{k+1}$. Then we have
$$(r_{k+1}, r_{k+1}) = (r_{k}, r_{k}) - \frac{(r_{k}, A r_{k})^{2}}{(A r_{k}, A r_{k})} - \frac{(r_{k+1/2}, A v)^{2}}{(A v, A v)}, \qquad (3.5)$$
where $r_{k+1/2} = r_{k} - \alpha_{k} A r_{k}$ with $\alpha_{k}$ the same as in Lemma 2.2, and $(\cdot, \cdot)$, $\langle \cdot, \cdot \rangle$ are defined in (2.2) and (2.3).

Proof. Using (3.1), we have
$$r_{k+1} = b - A x_{k+1} = r_{k} - \alpha_{k} A r_{k} - \beta_{k} A v = r_{k+1/2} - \beta_{k} A v. \qquad (3.6)$$
By deduction, we get
$$(r_{k+1}, r_{k+1}) = (r_{k+1/2}, r_{k+1/2}) - 2 \beta_{k} (r_{k+1/2}, A v) + \beta_{k}^{2} (A v, A v). \qquad (3.7)$$
If we substitute $\alpha_{k}$ and $\beta_{k}$ into (3.7) and apply Lemma 2.2 to $(r_{k+1/2}, r_{k+1/2})$, then we obtain
$$(r_{k+1}, r_{k+1}) = (r_{k}, r_{k}) - \frac{(r_{k}, A r_{k})^{2}}{(A r_{k}, A r_{k})} - \frac{(r_{k+1/2}, A v)^{2}}{(A v, A v)}. \qquad (3.8)$$
From Theorem 3.1, we also get a reduction of the residual error of 1V-DSMR as follows:
$$(r_{k}, r_{k}) - (r_{k+1}, r_{k+1}) = \frac{(r_{k}, A r_{k})^{2}}{(A r_{k}, A r_{k})} + \frac{(r_{k+1/2}, A v)^{2}}{(A v, A v)} \ge 0. \qquad (3.9)$$

Next we compare the residual error reduction of 1V-DSMR with that of MR.

Theorem 3.2. 1V-DSMR gives a better (at least the same) reduction of the residual error than MR.

Proof. From the equalities (2.13) and (3.9), we have
$$(r_{k}, r_{k}) - (r_{k+1}, r_{k+1}) = \frac{(r_{k}, A r_{k})^{2}}{(A r_{k}, A r_{k})} + \frac{(r_{k+1/2}, A v)^{2}}{(A v, A v)} \ge \frac{(r_{k}, A r_{k})^{2}}{(A r_{k}, A r_{k})},$$
that is, the reduction achieved by one step of 1V-DSMR is at least as large as that achieved by one step of MR, which proves the assertion of Theorem 3.2.

Theorem 3.2 implies that the residual error of MR at step $k + 1$ is at least as big as that of 1V-DSMR if the residual vectors of the two methods at the $k$th iteration are equal to each other.
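Continuing the small worked example given after (2.13) (our illustrative data, with the second direction $v = (1, 0)^{T}$ also chosen here purely for illustration), one 1V-DSMR step gives a strictly larger reduction than the MR step:
\[
r_{1/2} = \frac{1}{13}\begin{pmatrix} -2 \\ 3 \end{pmatrix}, \quad
A v = \begin{pmatrix} 2 \\ 0 \end{pmatrix}, \quad
\beta_{0} = \frac{(r_{1/2}, A v)}{(A v, A v)} = \frac{-4/13}{4} = -\frac{1}{13},
\]
\[
r_{1} = r_{1/2} - \beta_{0} A v = \frac{1}{13}\begin{pmatrix} 0 \\ 3 \end{pmatrix}, \qquad
(r_{1}, r_{1}) = \frac{9}{169} < \frac{13}{169} = \frac{1}{13},
\]
where $1/13$ is the squared residual norm after the plain MR step.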

4. A Particular Method of 1V-DSMR

In this section, particular search vectors will be chosen, and an algorithm implementing 1V-DSMR is obtained.

Since 1V-DSMR is a modification of the minimal residual iteration, take the first search direction to be $r_{k}$. In general, the second vector $v$ may be chosen in different ways. Here, we choose a particular $v$; then, from (3.1), each step of 1V-DSMR is as follows:
$$x_{k+1/2} = x_{k} + \alpha_{k} r_{k}, \qquad x_{k+1} = x_{k+1/2} + \beta_{k} v, \qquad (4.1)$$
where
$$\alpha_{k} = \frac{(r_{k}, A r_{k})}{(A r_{k}, A r_{k})}, \qquad \beta_{k} = \frac{(r_{k+1/2}, A v)}{(A v, A v)}. \qquad (4.2)$$
In this case, the first step of 1V-DSMR is the same as the minimal residual iteration.

The above results provide the following algorithm of 1V-DSMR.

Algorithm 4.1 (A particular implementation of 1V-DSMR in a generalized way). (1) Choose an initial guess solution $x_{0}$ of (2.1), and set $k = 0$.
(2) Calculate
$$r_{0} = b - A x_{0}. \qquad (4.3)$$
(3) Calculate the second search direction $v$ and the product $A v$.
(4) If $\| r_{k} \| \le \varepsilon$, then stop; else,
(5) calculate
$$\alpha_{k} = \frac{(r_{k}, A r_{k})}{(A r_{k}, A r_{k})}, \quad x_{k+1/2} = x_{k} + \alpha_{k} r_{k}, \quad r_{k+1/2} = r_{k} - \alpha_{k} A r_{k},$$
$$\beta_{k} = \frac{(r_{k+1/2}, A v)}{(A v, A v)}, \quad x_{k+1} = x_{k+1/2} + \beta_{k} v, \quad r_{k+1} = r_{k+1/2} - \beta_{k} A v, \quad k := k + 1. \qquad (4.4)$$
(6) Go to step (4).
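Algorithm 4.1 admits an equally compact MATLAB sketch. Since the particular second direction is not fixed by the text above, the sketch takes v as a caller-supplied parameter; this is our assumption for illustration, and any fixed nonzero v reproduces the structure of (4.1)-(4.2).

function [x, resnorms] = one_v_dsmr(A, b, x0, v, tol, maxit)
% 1V-DSMR sketch: an MR projection along r_k followed by a
% minimal residual correction along a second direction v.
x = x0;
r = b - A*x;
Av = A*v;                            % v is fixed, so A*v is precomputed
resnorms = norm(r);
for k = 1:maxit
    if norm(r) <= tol, break; end    % stopping test of step (4)
    Ar = A*r;
    alpha = (r'*Ar) / (Ar'*Ar);      % first projection: the MR step
    x = x + alpha*r;                 % x_{k+1/2}
    r = r - alpha*Ar;                % r_{k+1/2}
    beta = (r'*Av) / (Av'*Av);       % second projection along v
    x = x + beta*v;                  % x_{k+1}
    r = r - beta*Av;                 % r_{k+1}
    resnorms(end+1) = norm(r);
end
end

For instance, one_v_dsmr(A, b, zeros(n,1), ones(n,1), 1e-6, 1000) runs the method with the simple illustrative choice $v = (1, \dots, 1)^{T}$.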

For Algorithm 4.1, we have the following basic property.

Theorem 4.2. If $A$ is a positive definite matrix, then the sequence $\{x_{k}\}$ of the iterates of Algorithm 4.1 converges to the solution of the linear system (2.1).

Proof. From the equality (3.9), we get
$$(r_{k}, r_{k}) - (r_{k+1}, r_{k+1}) = \frac{(r_{k}, A r_{k})^{2}}{(A r_{k}, A r_{k})} + \frac{(r_{k+1/2}, A v)^{2}}{(A v, A v)} \ge 0. \qquad (4.5)$$
This means that the sequence $\{(r_{k}, r_{k})\}$ is decreasing and bounded below, hence convergent, implying that the left-hand side of (4.5) tends to zero. In particular, $(r_{k}, A r_{k})^{2} / (A r_{k}, A r_{k})$ tends to zero; since $A$ is positive definite, $(r_{k}, A r_{k}) \ge \lambda_{\min} \| r_{k} \|^{2}$ and $(A r_{k}, A r_{k}) \le \| A \|^{2} \| r_{k} \|^{2}$, where $\lambda_{\min} > 0$ is the smallest eigenvalue of $(A + A^{T})/2$. Therefore $\| r_{k} \|$ tends to zero, and the proof is complete.

5. Numerical Examples

In this section, we use an example to further examine the effectiveness of 1V-DSMR and to show its advantages over MR.

We compare the numerical behavior of Algorithm 4.1 with that of Algorithm 2.1. All tests are performed in MATLAB 7.0. Because of the influence of roundoff error, we regard the residual vector as zero once its norm falls below a prescribed tolerance $\varepsilon$.

For convenience of comparison, consider two-dimensional partial differential equations on the unit square region of the form
$$- \Delta u + p(x, y)\, u_{x} + q(x, y)\, u_{y} + \sigma(x, y)\, u = f(x, y), \qquad (x, y) \in (0, 1) \times (0, 1), \qquad (5.1)$$
where $p$, $q$, $\sigma$, and $f$ are all given real-valued functions of $x$ and $y$.

Here, we use a five-point finite difference scheme to discretize the above problem on a uniform grid with mesh spacing $h = 1/(n + 1)$ in the $x$ and $y$ directions. This yields a matrix of order $n^{2}$ as $n$ varies, which is called a PDE matrix. Now, we take $n = 30$; then we get a $900 \times 900$ matrix, which is called PDE900 and denoted by $A$. It is easy to check that $A$ is real, unsymmetric, and nonsingular.
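The assembly of such a matrix can be sketched in MATLAB as follows. The coefficient functions of the paper's test problem are not reproduced in the text, so the sketch uses constant convection coefficients p0 and q0 purely for illustration; the centered first differences are what make the five-point matrix unsymmetric.

function A = pde_matrix(n, p0, q0)
% Five-point finite difference matrix of order n^2 for
%   -Laplace(u) + p0*u_x + q0*u_y = f
% on the unit square with an n-by-n grid of interior points.
h = 1/(n + 1);
e = ones(n, 1);
T = spdiags([-e 2*e -e], -1:1, n, n) / h^2;          % 1D -u'' stencil
C = spdiags([-e zeros(n,1) e], -1:1, n, n) / (2*h);  % centered u'
I = speye(n);
% Kronecker products assemble the 2D operator from 1D pieces
A = kron(I, T) + kron(T, I) + p0*kron(I, C) + q0*kron(C, I);
end

For example, pde_matrix(30, 10, 10) is a real unsymmetric nonsingular matrix of order 900, of the same flavor as PDE900 (the true PDE900 depends on the coefficient functions chosen in the paper).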

Taking the coefficient matrix of the linear system (2.1) to be $A$, together with the given right-hand side vector $b$ and initial iterate $x_{0}$, we use Algorithms 2.1 and 4.1 to solve the linear system (2.1), respectively. The comparison results between MR and 1V-DSMR are shown in Figure 1.
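A minimal MATLAB driver for this comparison might read as follows; the right-hand side b and starting vector x0 below are illustrative assumptions, since the particular choices used for Figure 1 are not recoverable from the text.

A   = pde_matrix(30, 10, 10);          % illustrative PDE-type matrix
n2  = size(A, 1);                      % 900
b   = A * ones(n2, 1);                 % assumed right-hand side
x0  = zeros(n2, 1);                    % assumed initial iterate
tol = 1e-6; maxit = 2000;
[xMR, resMR] = mr_iteration(A, b, x0, tol, maxit);
[x1V, res1V] = one_v_dsmr(A, b, x0, ones(n2,1), tol, maxit);
semilogy(0:numel(resMR)-1, resMR, '-', 0:numel(res1V)-1, res1V, '--');
legend('MR (Algorithm 2.1)', '1V-DSMR (Algorithm 4.1)');
xlabel('iteration k'); ylabel('||r_k||_2');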

Figure 1: Comparison of the convergence curves of the residual norm for linear system (2.1) between Algorithm 2.1 and Algorithm 4.1.

From Figure 1, we can see that the convergence of Algorithm 4.1 is consistently faster than that of Algorithm 2.1. In fact, when we use Algorithm 2.1 to solve the linear system (2.1), we need 814 iterations to reach the stopping criterion, while Algorithm 4.1 needs only 647 iterations.

Acknowledgment

This project was supported financially by the Shanghai Science and Technology Committee (no. 062112065), the Shanghai Priority Academic Discipline Foundation, the University Young Teacher Sciences Foundation of Anhui Province (no. 2006jql220zd), and the PhD Program Scholarship Fund of ECNU 2007.

References

  1. Y. Saad and H. A. van der Vorst, “Iterative solution of linear systems in the 20th century,” Journal of Computational and Applied Mathematics, vol. 123, no. 1-2, pp. 1–33, 2000.
  2. N. Ujević, “A new iterative method for solving linear systems,” Applied Mathematics and Computation, vol. 179, no. 2, pp. 725–730, 2006.
  3. Å. Björck and T. Elfving, “Accelerated projection methods for computing pseudoinverse solutions of systems of linear equations,” BIT, vol. 19, no. 2, pp. 145–163, 1979.
  4. R. Bramley and A. Sameh, “Row projection methods for large nonsymmetric linear systems,” SIAM Journal on Scientific and Statistical Computing, vol. 13, no. 1, pp. 168–193, 1992.
  5. C. Brezinski, Projection Methods for Systems of Equations, vol. 7 of Studies in Computational Mathematics, North-Holland, Amsterdam, The Netherlands, 1997.
  6. C. Kamath and A. Sameh, “A projection method for solving nonsymmetric linear systems on multiprocessors,” Parallel Computing, vol. 9, no. 3, pp. 291–312, 1989.
  7. L. Lopez and V. Simoncini, “Analysis of projection methods for rational function approximation to the matrix exponential,” SIAM Journal on Numerical Analysis, vol. 44, no. 2, pp. 613–635, 2006.
  8. V. Simoncini, “Variable accuracy of matrix-vector products in projection methods for eigencomputation,” SIAM Journal on Numerical Analysis, vol. 43, no. 3, pp. 1155–1174, 2005.
  9. K. Tanabe, “Projection method for solving a singular system of linear equations and its applications,” Numerische Mathematik, vol. 17, pp. 203–214, 1971.
  10. Y.-F. Jing and T.-Z. Huang, “On a new iterative method for solving linear systems and comparison results,” Journal of Computational and Applied Mathematics, vol. 220, no. 1-2, pp. 74–84, 2008.
  11. Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, Pa, USA, 2nd edition, 2003.