Abstract

Building on the work of Li et al. (2014), Li (2007), and Kincaid et al. (2000), we present further generalizations and modifications of iterative methods for solving large sparse symmetric and nonsymmetric indefinite systems of linear equations. We discuss a variety of iterative methods, such as GMRES, MGMRES, MINRES, LQ-MINRES, QR-MINRES, MMINRES, MGRES, and others.

1. Introduction

When solving large sparse linear systems of the form $Ax = b$ in which the coefficient matrix $A$ is indefinite, there are basic methods and a variety of generalizations and modifications of them. For example, basic iterative methods for symmetric indefinite linear systems are the MINRES method and the SYMMLQ method, while a basic method for nonsymmetric linear systems is the GMRES method. (See, e.g., Lanczos [1], Golub and Van Loan [2], Paige and Saunders [3], Saad [4], and Saad and Schultz [5].)

In Section 2, we review the Arnoldi process and present background material. In Sections 3 and 4, we describe the LQ-MINRES and QR-MINRES methods, respectively, and we discuss their relationship in Section 5. In Section 6, we take a closer look at the QR-MINRES method and the SYMMQR method. In Sections 7 and 8, we describe the modified MINRES (MMINRES) method and the generalized QR-MINRES method, respectively. In Section 9, we review the GMRES method. Finally, in Section 10, we discuss the differences between the modified MINRES (MMINRES) method and the modified GMRES (MGMRES) method.

2. Arnoldi Process

First, we assume that the matrix $A$ is symmetric. In [6, 7], we use a short-term recurrence to generate orthonormal vectors as follows: Here we assume that and , for all . If we let , then the subspace is equivalent to the Krylov subspace. Consequently, we have the following properties, for (, ): as well as these matrix equations, where

Example. We illustrate (6), for the case .

From (2) and (7), we have Since , we obtain So we obtain (6), with :
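To make the short-term recurrence of this section concrete, the following Python sketch implements the standard symmetric Lanczos process. The function name lanczos, the arrays V, alpha, and beta, and the normalization of the starting vector r0 are illustrative conventions, not notation taken from (1)–(7) or from [6, 7].

import numpy as np

def lanczos(A, r0, m):
    """Minimal symmetric Lanczos sketch: builds orthonormal v_1,...,v_m and the
    symmetric tridiagonal matrix T_m of the short-term recurrence."""
    n = len(r0)
    V = np.zeros((n, m + 1))
    alpha = np.zeros(m)
    beta = np.zeros(m + 1)
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j] * V[:, j - 1]        # three-term recurrence
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        beta[j + 1] = np.linalg.norm(w)
        if beta[j + 1] == 0:                  # breakdown: invariant Krylov subspace
            break
        V[:, j + 1] = w / beta[j + 1]
    T = np.diag(alpha) + np.diag(beta[1:m], 1) + np.diag(beta[1:m], -1)
    return V[:, :m], T, beta

The output T is symmetric tridiagonal, which reflects the three-term recurrence; for a nonsymmetric matrix the same orthogonalization requires the long-term (Arnoldi) recurrence of Section 7.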

3. LQ-MINRES Method

We choose such that . Hence, we have For the MINRES method [3], we let then Letting , we can minimize by solving this linear system for

First, using (6), (7), and (5), we expand the coefficient matrix on the left-hand side of linear system (18), since is symmetric. Second, we examine the right-hand side vector in linear system (18), since , , and .

Here . We obtain where Here a Givens rotation matrix is with .

Here, we repeatedly apply Givens rotations to the right-hand side of , (10), in order to zero out the diagonal above the main diagonal and change the tridiagonal matrix into a lower tridiagonal matrix .
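As a concrete illustration of this step, the sketch below applies Givens rotations to the columns of a tridiagonal matrix so that the diagonal above the main diagonal is zeroed out. The helper names givens and tridiag_to_lower, and the convention that the rotations act from the right, are assumptions made for illustration rather than the exact construction of (21)–(23).

import numpy as np

def givens(a, b):
    """Return (c, s) so that [[c, s], [-s, c]] maps the pair (a, b) to (r, 0)."""
    r = np.hypot(a, b)
    return (1.0, 0.0) if r == 0 else (a / r, b / r)

def tridiag_to_lower(T):
    """Sketch of the LQ step: rotate adjacent columns of a tridiagonal T to
    annihilate the superdiagonal, giving T = L Q with L lower triangular."""
    L = T.astype(float)
    m = L.shape[0]
    Q = np.eye(m)
    for j in range(m - 1):
        c, s = givens(L[j, j], L[j, j + 1])
        G = np.eye(m)
        G[j, j], G[j, j + 1] = c, s
        G[j + 1, j], G[j + 1, j + 1] = -s, c
        L = L @ G.T          # zeroes out entry (j, j+1)
        Q = G @ Q
    return L, Q              # T = L @ Q

Because each rotation mixes two adjacent columns, fill-in appears two places below the main diagonal, so the resulting factor has three nonzero diagonals, consistent with the lower tridiagonal structure described above.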

Then, from (22), we have Since , we obtain Since is symmetric, we have Thus, we find that the coefficient matrix (26) can be written as Consequently, we are now interested in solving this linear system

In the next step in the SYMMLQ method [6, 7], we solve

Since where then we choose Thus, we have . Recall that by (23). (For details on the SYMMLQ method, see [6, 7].)

Then, we have where Consequently, we obtain Then, we let where So we have since . Consequently, we have

Using (35) and (39), we have Since , the coefficient matrix in linear equation (28) is

From (24) and (36) we have the right-hand side vector in linear system (28) Since is nonsingular, and from (41) and (43), linear system (28) is which reduces to Thus, we obtain this equation for the th iteration of the LQ-MINRES method

4. QR-MINRES Method

Again, as in Section 3, we consider another method for solving linear system (18) for Then, instead of solving linear system (47), we now solve

First, we repeatedly apply Givens rotations to the left-hand side of where Here, we use Givens rotations applied on the left-hand side of , (10), to transform a tridiagonal matrix into an upper tridiagonal matrix , (21).
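As with the LQ step, this reduction can be sketched in a few lines; here the rotations are applied from the left to the rows of the tridiagonal matrix and, in the same pass, to a right-hand side vector, which is the usual way a QR step is organized. The function name qr_step_tridiagonal and the simultaneous right-hand side update are illustrative assumptions, not the paper's exact formulas.

import numpy as np

def qr_step_tridiagonal(T, rhs):
    """Sketch of the QR step: premultiply a tridiagonal T (and the right-hand
    side) by Givens rotations so that T becomes upper triangular with at most
    two superdiagonals."""
    R = T.astype(float)
    g = np.asarray(rhs, dtype=float).copy()
    m = R.shape[0]
    for j in range(m - 1):
        a, b = R[j, j], R[j + 1, j]
        r = np.hypot(a, b)
        c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
        rot = np.array([[c, s], [-s, c]])
        R[[j, j + 1], :] = rot @ R[[j, j + 1], :]   # zeroes out entry (j+1, j)
        g[[j, j + 1]] = rot @ g[[j, j + 1]]
    return R, g

Each rotation creates fill-in two places above the main diagonal, so the triangular factor has two superdiagonals, matching the upper tridiagonal matrix described above.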

Then, we obtain Since is symmetric, we obtain Thus, the coefficient matrix in linear system (48) can be written as Consequently, we are interested in solving this linear system

In the next step in the SYMMQR method [6, 7], we solve this linear system Since where then we choose Thus, we obtain . (For details on the SYMMQR method, see [6, 7].)

Let where We have

Then, we have where So we have since . Consequently, we obtain

Using (64), we have Since , the coefficient matrix in the linear equation (54) is

Since we have the right-hand side vector in linear system (54) Since is nonsingular, we obtain linear system (54) which reduces to Thus, we obtain the equation for the th iteration of the QR-MINRES method
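For readers who want a working baseline to compare against, SciPy's standard MINRES routine solves the same class of symmetric indefinite systems; the snippet below is only such a baseline on a made-up test matrix, not an implementation of the LQ-MINRES or QR-MINRES variants derived here.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import minres

# A small symmetric indefinite test matrix (diagonal entries of both signs);
# this is an illustrative example, not a system from the paper.
n = 200
A = diags([np.ones(n - 1), np.linspace(-1.0, 1.0, n), np.ones(n - 1)],
          offsets=[-1, 0, 1])
b = np.ones(n)

x, info = minres(A, b)
print(info, np.linalg.norm(A @ x - b))   # info == 0 signals convergence

MINRES minimizes the residual norm over the Krylov subspace, which is the same optimality property that the QR-MINRES derivation above targets.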

5. Relation between LQ-MINRES and QR-MINRES

Now, we show that the LQ-MINRES method and the QR-MINRES method are essentially the same. In the LQ-MINRES method (46), we have In the QR-MINRES method (72), we have For the LQ-MINRES method (73), we have In the QR-MINRES method (46), we have From the computation and by induction, we have the following relations between the LQ-MINRES method and the QR-MINRES method: By induction, we have Hence, we obtain Moreover, if we let then Thus, we obtain

6. A Closer Look at QR-MINRES and SYMMQR

In the SYMMQR method [6, 7], we have two estimated solutions: and .

The first estimated solution is where is the solution of this linear system with the right-hand side vector being From these equations, we have Thus, we have where is the solution of this least squares problem

The second estimated solution is where is the solution of this linear system

7. Modified MINRES (MMINRES) Method

Next, we assume that the matrix $A$ is nonsymmetric. In [6, 7], we use a long-term recurrence to generate orthonormal vectors as follows: Consequently, we obtain the following matrix equations: where Since the matrix is a full upper Hessenberg matrix, the LQ-MINRES method is not a practical procedure. Hence, we discuss only a generalization of the QR-MINRES method.
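The long-term recurrence referred to here is the standard Arnoldi process, sketched below; the function name arnoldi, the starting vector r0, and the (m+1) x m Hessenberg array H are illustrative conventions rather than the exact notation of the displayed equations.

import numpy as np

def arnoldi(A, r0, m):
    """Minimal Arnoldi sketch: a long-term (full) recurrence producing
    orthonormal v_1,...,v_m and an (m+1) x m upper Hessenberg matrix H."""
    n = len(r0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # orthogonalize against all v_i
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0:                   # breakdown: invariant subspace
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

In contrast to the symmetric case, every new vector must be orthogonalized against all previous ones, which is why the resulting matrix is full upper Hessenberg rather than tridiagonal.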

8. Generalized QR-MINRES Method

Since the matrix is nonsymmetric, to minimize an expression such as this, we choose to satisfy First, from the Arnoldi process and from the left-hand side of linear system (96), we can write the coefficient matrix of this linear system as Second, from the right-hand side vector of the linear system (96), we have Consequently, instead of solving (96), we solve this linear system

First, we repeatedly apply Givens rotations to the left-hand side of By defining we obtain so that Thus, the coefficient matrix in linear system (99) has the following form by using (103) and (104): Moreover, by using (102) and (104), the right-hand side vector in linear system (99) is where where and . Thus, we obtain . Then, we have

Let Then, we have and we obtain We have

Using , (112), and (114), we find that the coefficient matrix in linear system (104) is and the right-hand side vector is Since is nonsingular, we obtain this linear system which reduces to this linear system Thus, we obtain the th iteration of the generalized QR-MINRES method

9. GMRES Method

In the GMRES method, we let Multiplying by , we have Then To minimize , we need to solve by using Givens rotations, which is the GMRES method.
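The least-squares step just described can be sketched as follows. For brevity, the small projected problem is solved with a dense least-squares routine instead of the usual incremental Givens-rotation update, and the helper arnoldi is the sketch given after the long-term recurrence in Section 7; all names are illustrative.

import numpy as np

def gmres_step(A, b, x0, m):
    """One (restart-free) GMRES correction of size m: minimize
    || beta e1 - H y || over y and set x_m = x_0 + V_m y."""
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    V, H = arnoldi(A, r0, m)                   # Arnoldi sketch from Section 7
    e1 = np.zeros(H.shape[0])
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return x0 + V[:, :H.shape[1]] @ y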

In Saad’s book [4], there is a relation between the FOM method and the GMRES method. For the FOM method, we impose the Galerkin condition and we solve this linear system for For the ()st iteration, we solve this linear system for Between the th iteration and the ()st iteration of the FOM method, we obtain the solution of this least squares problem which is the same as in the GMRES method.
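In code, the contrast between the two methods is just the difference between a square Galerkin solve and a rectangular least-squares solve on the same Arnoldi data; the following sketch makes that explicit, with illustrative names and the same (m+1) x m Hessenberg array H as before.

import numpy as np

def fom_and_gmres_corrections(V, H, beta):
    """Sketch of the FOM/GMRES contrast, reusing the Arnoldi output (V, H)."""
    m = H.shape[1]
    rhs = np.zeros(m + 1)
    rhs[0] = beta
    # FOM (Galerkin condition): solve the square system H_m y = beta e1
    y_fom = np.linalg.solve(H[:m, :], rhs[:m])
    # GMRES: minimize || beta e1 - Hbar_m y || over y
    y_gmres, *_ = np.linalg.lstsq(H, rhs, rcond=None)
    return V[:, :m] @ y_fom, V[:, :m] @ y_gmres

Both corrections use the same basis V_m; only the small projected problem differs.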

10. Differences between MMINRES and MGMRES

In this section, we assume that matrices and are nonsingular symmetric matrices (but not necessarily positive definite). In [6, 7], we use a short-term recurrence to generate orthonormal vectors as follows: We obtain these properties, for (, ), where Consequently, we obtain these matrix equations

From (134) and (132), we obtain

Assume that ; then we have From the Galerkin condition , we have From (141) and since is symmetric, we obtain Then by (142), we have

From (139), (140), and (143), we obtain this linear system using (127), where and .

By (132) and (137), we have

Notice that is symmetric since Here we use (128), is symmetric, (129), (130), is symmetric, (131), and (133).

Let be the estimated solution corresponding to the Galerkin condition. Then where satisfies this linear system

Let be the estimated solution corresponding to the least squares problem. Then where satisfies this expression We call the th iteration of the MMINRES method.

Notice that if to matrix we add this bottom row and add this far-right column (see (132) and (138)), then we obtain the matrix .

We define these Givens rotation matrices where is given by (23). Multiplying matrix times both sides of linear system (148), we obtain this linear system where, from (137), the coefficient matrix is and the right-hand side vector is Note that is not always nonzero, so might be singular, but here we assume that is nonsingular. Then we define this matrix factorization where Moreover, we define the right-hand side vector in (156) as From (147), we have, using (157) and (159),

For the next iterate , we need to solve this linear system where, by (132) and (137), the coefficient matrix is

Multiplying times both sides of linear system (161), we obtain this coefficient matrix

The last columns in matrices (162) and (163) are related as follows: To eliminate , we compute the th Givens rotation matrix with Recall, by (23), that . Next, we multiply matrix times the coefficient matrix , in (163),

Then multiplying matrix times the right-hand side vector , we obtain Looking at the last columns in (163) and (166), we see that where Recall, by (23), .

Before forming the iterate , we consider this least squares problem (150) We have where Next, we define

Hence, is the solution of this linear system which minimizes the expression and then

If , then and is nonsingular, and we can solve linear system (174) for . We discuss the case later.
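The solve referred to in (174) is a small triangular system; as a generic illustration of this kind of step (not the paper's specific matrices), back substitution looks as follows, with made-up numerical data.

import numpy as np
from scipy.linalg import solve_triangular

# Once the Givens rotations have produced an upper-triangular matrix R and a
# transformed right-hand side g (names here are illustrative), the small
# system is solved by back substitution.
R = np.array([[2.0, 1.0, 0.5],
              [0.0, 1.5, 0.3],
              [0.0, 0.0, 1.1]])
g = np.array([1.0, 2.0, 3.0])
y = solve_triangular(R, g, lower=False)
print(np.allclose(R @ y, g))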

By (149), let using (174). Here From (177) and (179), we have where, by (172),

From (147), we obtain, by (159), (157), and (180),

We note that is the estimated solution satisfying the Galerkin condition, while where minimizes this expression

Proposition 1. If and , then is the exact solution of . Moreover, one obtains .

Proof. We consider the residual vector corresponding to the iterate . Then, by (147), we have Here satisfies (148) and we have Hence, starting from (185) and by using (134) and (187), we have since . Here is the th component of the vector .
If , then . Thus, we obtain , which is the exact solution. It follows from (165) that if , then Since the only difference between and is the entries, which are and , respectively. Let the th components of and be and , respectively. It follows that both matrices and are triangular, where and . By (169), we have Hence, we have

11. Final Notes

In the MGMRES method, we solve this linear system while in the MMINRES method, we solve this linear system

Clearly, the MMINRES method and the MGMRES method are the same when the diagonal matrix is the identity matrix .

See also Kincaid et al. [8].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.