Abstract

Recently, the accelerated successive overrelaxation- (SOR-) like (ASOR) method was proposed for saddle point problems. In this paper, we establish a generalized accelerated SOR-like (GASOR) method and a modified accelerated SOR-like (MASOR) method, which are extensions of the ASOR method, for solving both nonsingular and singular saddle point problems. The sufficient conditions of the convergence (semiconvergence) for solving nonsingular (singular) saddle point problems are derived. Finally, numerical examples are carried out, which show that the GASOR and MASOR methods have faster convergence rates than the SOR-like, generalized SOR (GSOR), modified SOR-like (MSOR-like), modified symmetric SOR (MSSOR), generalized symmetric SOR (GSSOR), generalized modified symmetric SOR (GMSSOR), and ASOR methods with optimal or experimentally found optimal parameters when the iteration parameters are suitably chosen.

1. Introduction

Consider the following large and sparse saddle point problem (1), where $A \in \mathbb{R}^{m\times m}$ is a symmetric positive definite matrix, $B \in \mathbb{R}^{m\times n}$, $f \in \mathbb{R}^{m}$, and $g \in \mathbb{R}^{n}$ ($m \ge n$). It follows that the linear system (1) is nonsingular when $B$ is of full column rank and singular when $B$ is rank deficient [1].
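Under one common sign convention (a minimal sketch of the notation only, since some papers place $B^{T}$ rather than $-B^{T}$ in the (2,1) block), the system (1) takes the block form

```latex
\begin{equation*}
  \mathcal{A} z \equiv
  \begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}
  \begin{pmatrix} x \\ y \end{pmatrix}
  =
  \begin{pmatrix} f \\ -g \end{pmatrix} \equiv b .
\end{equation*}
```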

The saddle point problem (1) is important and arises in a variety of scientific and engineering applications, such as mixed finite element approximation of elliptic partial differential equations, optimal control, computational fluid dynamics, weighted least-squares problems, electronic networks, and computer graphics; see [1–4] and references therein.

When the linear system (1) is nonsingular, a number of iterative methods and their numerical properties have been discussed in the literature for approximating the unique solution of the nonsingular saddle point problem (1), such as SOR-like methods [5–11], Uzawa-type methods [5, 12–18], Hermitian and skew-Hermitian splitting (HSS) methods and their variants [2, 19–24], restrictively preconditioned conjugate gradient (RPCG) methods [25, 26], and preconditioned Krylov subspace iteration methods [27–29].

In the case of in (1) being rank deficient, we call the linear system (1) a singular saddle point problem. For this case, various kinds of relaxation iteration methods have also been established. In [30–34], the authors applied inexact Uzawa methods to singular saddle point problems.

Recently, Njeru and Guo [35] developed an accelerated SOR-like (ASOR) method for the nonsingular saddle point problem (1), and numerical results show that the convergence rate of the ASOR method is faster than those of the SOR-like, GSSOR, and GSOR methods when the parameters are suitably chosen. It can be described as follows.

For the coefficient matrix of the saddle point problem (1), Njeru and Guo [35] made the splitting (2), where with , , and is a symmetric positive definite matrix. The iteration of the ASOR method [35] is thus described by the algorithm below.

The ASOR Method. Given initial vectors and two real relaxation factors, for k = 0, 1, 2, …, until the iteration sequence converges to the exact solution of the saddle point problem (1), compute the following. In addition, the authors in [35] obtained the experimentally found optimal parameters by trial and error in different cases, and the experimentally found optimal values for and are very close to , where and are the maximum and minimum eigenvalues of the matrix , respectively.
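Although the exact ASOR update formulas are those given in [35], the following MATLAB fragment sketches a generic two-parameter Uzawa/SOR-like iteration of this family (essentially a GSOR-type update); the preconditioning matrix Q, the parameter names omega and tau, and the sign conventions are assumptions for illustration only, not the authors' precise ASOR scheme.

```matlab
% Hedged sketch of a generic two-parameter SOR-like iteration for the system
% [A B; -B' 0][x; y] = [f; -g]; this is NOT the exact ASOR update of [35].
% Q is an SPD approximation of B'*inv(A)*B; omega, tau are relaxation factors.
function [x, y, k] = sor_like_sketch(A, B, f, g, Q, omega, tau, tol, maxit)
  [m, n] = size(B);
  x = zeros(m, 1);  y = zeros(n, 1);
  nrm0 = norm([f; -g]);
  for k = 1:maxit
      x = x + omega * (A \ (f - A*x - B*y));  % velocity-type update
      y = y + tau   * (Q \ (B'*x - g));       % pressure-type correction
      res = norm([f - A*x - B*y; B'*x - g]) / nrm0;
      if res < tol, return; end
  end
end
```

A call such as sor_like_sketch(A, B, f, g, Q, 1.0, 1.0, 1e-6, 5000) would run the sketch with unit relaxation parameters.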

In this paper, based on the ASOR method, by adding new parameters, the generalized ASOR (GASOR) method and the modified ASOR (MASOR) method are proposed to solve both the nonsingular and the singular saddle point problem (1). We discuss the convergence properties of the GASOR and MASOR methods for solving nonsingular saddle point problems and the semiconvergence properties of the GASOR and MASOR methods for solving singular saddle point problems, respectively. In addition, the choice of the relaxation parameters of the GASOR and MASOR methods is discussed in Section 4. Numerical examples are implemented to illustrate the effectiveness of the GASOR and MASOR methods with appropriate parameters for both nonsingular and singular saddle point problems.

The rest of this paper is organized as follows. In Section 2, we propose the GASOR method for solving the nonsingular and singular saddle point problem (1) and discuss the convergence (semiconvergence) properties of the GASOR method for solving nonsingular (singular) saddle point problems. The MASOR method is proposed and the convergence (semiconvergence) properties of the MASOR method for solving nonsingular (singular) saddle point problems are derived in Section 3. The analysis of the optimal convergence factors of the two new methods is presented in Section 4. In Section 5, numerical experiments are provided to examine the feasibility and effectiveness of the GASOR and MASOR methods for solving both nonsingular and singular saddle point problems. Finally, some conclusions are drawn.

2. The Generalized ASOR Method for Saddle Point Problems

2.1. The Generalized ASOR Method

In this section, we propose the generalized ASOR (GASOR) method for solving the saddle point problem (1). The GASOR method with appropriate parameters has a faster convergence rate than the SOR-like [8], GSOR [5], MSOR-like [9], and ASOR [35] methods with optimal or experimentally found optimal parameters for nonsingular saddle point problems. For the coefficient matrix of the augmented system (1), we make the splitting as in (2).

Let and be two nonzero reals, let and be the -by- and the -by- identity matrices, respectively, and

Then, we consider the following generalized ASOR iteration scheme for solving the saddle point problem (1), where is the iteration matrix of the GASOR method, whose form is given below. We then have the following algorithmic description of the GASOR method.

The GASOR Method. Given initial vectors and three real relaxation factors, for k = 0, 1, 2, …, until the iteration sequence converges to the exact solution of the saddle point problem (1), compute:

Note that the GASOR method involves three parameters , , and . Since , the matrix is invertible if and only if , by the definiteness of and .
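Viewed abstractly, the scheme above is a stationary iteration induced by a splitting of the coefficient matrix; the MATLAB lines below are a minimal sketch of this viewpoint, with M and N standing in for the (parameter-dependent) GASOR splitting matrices and rhs for the right-hand side of (1).

```matlab
% Minimal sketch of a stationary splitting iteration; M and N are placeholders
% for the GASOR splitting of the coefficient matrix (calA = M - N).
T   = M \ N;                   % iteration matrix of the scheme z <- T*z + M\rhs
c   = M \ rhs;                 % constant vector
rho = max(abs(eig(full(T))));  % the scheme converges for every start iff rho < 1
```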

Note that the GASOR method with reduces to the ASOR method proposed by Njeru and Guo [35].

2.2. Convergence of the GASOR Method

Lemma 1. Suppose that is an eigenvalue of the matrix . If satisfies (12), then is an eigenvalue of the iteration matrix . Conversely, if is an eigenvalue of such that , , , and satisfy (12), then is a nonzero eigenvalue of the matrix .

Proof. Let be an eigenvalue of and let be the corresponding eigenvector. Then we have (13). Equation (13) can be written as (14), which is equivalent to (15) and (16). Inasmuch as , we obtain (17) from (15). Substituting (17) into (16), we have (18) by the definiteness of .
If satisfies (12), then we have (19). If , then from (17) this leads to , which contradicts the fact that is an eigenvector. Thus, , and therefore is an eigenvalue of . Furthermore, let ; from (15) and (16), we have and . Thus, by and the definiteness of , it follows that . However, is positive definite, and so is , which implies that , and then and , since is of full column rank. This also contradicts the assumption that is an eigenvector. Therefore, combining (12) with the assumptions yields ; that is, is a nonzero eigenvalue of the matrix .
We can prove the second assertion by reversing the process.

One may easily show that for the special case the GASOR method reduces to the ASOR method derived in [35].

Lemma 2 (see [36]). Both roots of the real quadratic equation $x^{2} - bx + c = 0$ are less than one in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.
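As a quick numerical illustration of Lemma 2 (stated here in its classical form), the following MATLAB lines compare the root moduli of a sample quadratic with the criterion; the coefficients are arbitrary.

```matlab
% Check of the classical root criterion: both roots of x^2 - b*x + c = 0 lie
% strictly inside the unit circle iff |c| < 1 and |b| < 1 + c.
b = 0.8;  c = 0.5;                          % arbitrary example coefficients
r = roots([1, -b, c]);                      % roots of the quadratic
ok = (abs(c) < 1) && (abs(b) < 1 + c);      % the criterion of Lemma 2
fprintf('max |root| = %.4f, criterion holds: %d\n', max(abs(r)), ok);
```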

Theorem 3. Let and be symmetric positive definite, and let be of full column rank. Assume that all eigenvalues of are real and positive. Then, the GASOR method is convergent for all , , and such that (20) holds, where is the maximum eigenvalue of .

Proof. After some manipulations on Lemma 1, we have (21). Changing the above equation into the form , we find (22), where (23). By Lemma 2, if and only if (24). It follows from (24) that (25) and (26) must hold. It is obvious that (26) holds true by and . From (25), we get (27). Since , , and is a real and positive eigenvalue of the matrix , we have (28). However, (29). We obtain that, if the conditions (20) are satisfied, the GASOR method is convergent.
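For a concrete problem, the sufficient condition of Theorem 3 can be checked numerically; the helper below assumes (as is standard in this setting) that the relevant eigenvalues are those of $Q^{-1}B^{T}A^{-1}B$ and that an iteration matrix T_iter has been assembled for a chosen parameter triple, both of which are assumptions about notation rather than statements from the theorem itself.

```matlab
% Hedged check of Theorem 3: mu_max is taken to be the largest eigenvalue of
% inv(Q)*B'*inv(A)*B (an assumption on the notation), and rho is the spectral
% radius of the supplied iteration matrix T_iter.
function [mu_max, rho] = check_theorem3(A, B, Q, T_iter)
  S      = Q \ (B' * (A \ B));          % Schur-complement-type matrix
  mu_max = max(real(eig(full(S))));     % assumed real and positive (Theorem 3)
  rho    = max(abs(eig(full(T_iter)))); % convergence requires rho < 1
end
```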

From (21), we get the following corollary.

Corollary 4. Let and be symmetric positive definite, and let be of full column rank. If is an eigenvalue of the matrix , then the eigenvalues of the matrix are given by (30).

2.3. Semiconvergence of the GASOR Method for Singular Saddle Point Problems

For singular saddle point problems, Wang and Zhang [37] studied the accelerated HSS (AHSS) method, first put forward by Bai and Golub in [20], for solving singular saddle point problems; Yang and Wu [38] proposed the Uzawa-HSS method and applied it to singular saddle point problems [39]. Recently, Miao and Cao [40] studied the semiconvergence of the generalized local HSS method established by Zhu [41] for singular saddle point problems; Zhou and Zhang [42] discussed the semiconvergence of the GMSSOR method, which was derived by Zhang et al. in [43], for singular saddle point problems. In the sequel, Chen and Ma [44] presented a generalized shift-splitting preconditioner and investigated this preconditioner for singular saddle point problems [45]. For more literature on this topic, one can refer to [46, 47] and references therein.

In this section, by using the idea of [42], we assume that in (1) is rank deficient, that is, , so that the coefficient matrix of (1) is singular. In addition, we will discuss the semiconvergence region for the parameters , , and in the GASOR method for solving the singular saddle point problem (1). The GASOR method has a faster convergence rate than the GSSOR [48], MSSOR [49], and GMSSOR [42, 43] methods for singular saddle point problems when optimal or experimentally found optimal parameters are chosen for them.

We first recall some known results about stationary iterative methods for singular linear systems that will be needed later. We denote the range and the null space of the matrix by and , respectively. The smallest nonnegative integer such that is called the index of and is denoted by . For a matrix , the splitting is a nonsingular splitting if is nonsingular. Let and ; then solving the linear system is equivalent to considering the iterative scheme (31). When is singular, the semiconvergence of the iteration scheme (31) is precisely described in [50, 51].

Definition 5 (see [51]). For any initial vector , the iteration scheme (31) is semiconvergent to a solution of if and only if the iteration matrix is semiconvergent. Moreover, it holds that , where is the identity matrix of appropriate size and denotes the Drazin inverse of .

Definition 6 (see [42]). The pseudospectral radius of the matrix is defined as , where stands for the spectrum of the matrix .

Lemma 7 (see [52]). The iteration (31) is semiconvergent if and only if the following two conditions hold: (1) ; (2) .
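Definition 6 and Lemma 7 reduce the semiconvergence check to the pseudospectral radius; a small MATLAB helper computing it is sketched below, where the tolerance used to decide whether an eigenvalue equals 1 is an implementation choice rather than part of the definition.

```matlab
% Pseudospectral radius nu(T) = max{ |lambda| : lambda in sigma(T), lambda ~= 1 }.
function nu = pseudospectral_radius(T, tol)
  if nargin < 2, tol = 1e-10; end       % tolerance for detecting lambda == 1
  lam = eig(full(T));
  lam = lam(abs(lam - 1) > tol);        % discard eigenvalues equal to 1
  if isempty(lam)
      nu = 0;                           % T has no eigenvalues other than 1
  else
      nu = max(abs(lam));               % Lemma 7 additionally needs index(I - T) <= 1
  end
end
```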

Theorem 8. Let and be symmetric positive definite, and let be rank deficient. If is an eigenvalue of the matrix , then the eigenvalues of the matrix are 1, , and the remaining eigenvalues are the roots of the quadratic equation (21).

Proof. In order to find the eigenvalues of , we need to solve . If , we have . Notice that is symmetric positive definite and therefore . As is an () matrix, we have , which implies that is an eigenvalue of . If , after some elementary matrix transformations, the following holds: . Suppose that . Notice that and are symmetric positive definite, so . Then has at least eigenvalues , and we denote them by . Let be an orthogonal matrix such that , where is an upper triangular matrix whose diagonal elements are composed of the eigenvalues of . Then we have , that is, . Thus, has at least eigenvalues and eigenvalues . After some calculations, we obtain (21) by , which means that the remaining eigenvalues of are the roots of (21). The proof is completed.

Lemma 9 (see [33]). holds if and only if, for any , .

Lemma 10. Let and be symmetric positive definite, let be rank deficient, and and . Then, .

Proof. Let , where and such that (38). To prove , it is sufficient, by Lemma 9, to prove , where . Now we give the proof of by contradiction. Suppose . Let satisfy , where and . Then, (39). So we can get and . From , we obtain and . Since is positive definite, so is , which results in ; thus . Hence, (40). Then, there exists a vector such that (41), which yields (42). Making use of (38) and (42), we have . Notice that is positive definite, and so is ; then , which implies that . Hence, (43), which is a contradiction to . Thus, . The proof is completed.

By Lemma 10, we have proved that , which satisfies the condition (1) in Lemma 7. Next, we need to prove the condition (2) in Lemma 7.

By Theorem 8, we see that the eigenvalues of are , and the remaining eigenvalues are the roots of the quadratic equation (21). By Definition 6 and Lemma 7, we need to consider only the eigenvalues of other than .

Lemma 11. Let and be symmetric positive definite, and let be rank deficient. Then, for all , , and such that (44) holds, where is the maximum eigenvalue of .

Proof. Combining Lemma 2 and (21), we know that if and only if (45) holds. Using the same technique as in the proof of Theorem 3, the conditions (44) are obtained. This completes the proof of Lemma 11.

Based on Definition 6 and Lemmas 10 and 11, we can establish the sufficient conditions of the semiconvergence for the GASOR method.

Theorem 12. Let and be symmetric positive definite, and let be rank deficient. Then, the GASOR method is semiconvergent for all , , and such that (46) holds, where is the maximum nonzero eigenvalue of .

3. The Modified ASOR Method for Saddle Point Problems

3.1. The Modified ASOR Method

To obtain new iteration methods for saddle point problems, some authors have added new parameters to existing methods and obtained better methods [11, 20, 53]. Based on the preconditioned HSS (PHSS) method derived by Bai et al. [2], Bai and Golub [20] and Li et al. [53] obtained the AHSS and the parameterized preconditioned HSS (PPHSS) methods, respectively. This idea motivates us to propose the modified ASOR (MASOR) method for solving the saddle point problem (1) by making another splitting of the coefficient matrix of the saddle point problem (1) as follows: , where with and is a symmetric positive definite matrix. Let be a nonzero real number and let be a real number. Then, we consider the following modified ASOR iteration scheme for solving the saddle point problem (1), where is the iteration matrix of the MASOR method, whose form is given below. The algorithmic description of the MASOR method is then as follows.

The MASOR Method. Given initial vectors and three real relaxation factors, for k = 0, 1, 2, …, until the iteration sequence converges to the exact solution of the saddle point problem (1), compute:

Note that the MASOR method involves three parameters , , and . Since , the matrix is invertible if and only if , by the definiteness of and .

Note that the MASOR method with reduces to the ASOR method proposed by Njeru and Guo [35].

3.2. Convergence of the MASOR Method

Lemma 13. Suppose that is an eigenvalue of the matrix . If satisfies (54), then is an eigenvalue of the iteration matrix . Conversely, if is an eigenvalue of such that , , , and satisfy (54), then is a nonzero eigenvalue of the matrix .

Proof. Since is nonsingular, . If in (54), we obtain , which is a contradiction. Thus, . In a similar manner, we can prove that and .
We assume that is the eigenvector corresponding to ; thus, . Then, by (54), the following holds: (55), which yields (56). Let ; we can deduce (57), and, by setting into (56), we have (58). Combining (57) and (58) results in (59). We rewrite (57) and (59) as (60), which is equivalent to (61). This implies that , where , which means that is an eigenvalue of . We can prove the second assertion by reversing the process.

Theorem 14. Let and be symmetric positive definite, and let be of full column rank. Assume that all eigenvalues of are real and positive. Then, the MASOR method is convergent for all , , and such that (62) holds, where is the maximum eigenvalue of .

Proof. Equation (54) can be equivalently written as (63). Equation (63) can be expressed as , where (64). Applying Lemma 2 to the quadratic equation (63), we know that if and only if (65) holds. In terms of (65), we derive (66). Evidently, the first and the second inequalities in (66) hold true for all and . It follows from the third inequality in (66), , and the fact that is a real and positive eigenvalue of the matrix that (67) holds. It is not difficult to verify that (68). Therefore, the proof of this theorem is completed.

From (63), we get the following corollary.

Corollary 15. Let and be symmetric positive definite, and let be of full column rank. If is an eigenvalue of the matrix , then the eigenvalues of the matrix are given by (69).

3.3. Semiconvergence of the MASOR Method for Singular Saddle Point Problem

In this section, we assume that in (1) is rank deficient and we will discuss the semiconvergence region for parameters , , and in the MASOR method for solving the singular saddle point problem (1).

Theorem 16. Let and be symmetric positive definite, and let be rank deficient. If is an eigenvalue of the matrix , then the eigenvalues of the matrix are 1, , and the remaining eigenvalues are the roots of the quadratic equation (63).

Proof. By simple calculations, we have (70). If , in a manner similar to the proof of Theorem 8, we can prove that , which means that is an eigenvalue of . Otherwise, , after suitable manipulations, gives (71). We assume that and, with a strategy quite similar to that used in Theorem 8, we deduce (72), where are the eigenvalues of , from which one can see that has at least eigenvalues and eigenvalues . It follows from that (63) holds, which implies that the remaining eigenvalues of are the roots of (63). Thus, the theorem is proved.

Lemma 17. Let and be symmetric positive definite, let be rank deficient, and and . Then, .

Proof. We prove this lemma by Lemma 9. Let , with and such that (73). In order to prove , it remains only to prove , where . We prove this by contradiction. Assume that . Let , where , , and ; that is, (74), which results in and . From , we deduce that and . Since is positive definite, so is , which leads to ; thus . Then, we infer that (75). Therefore, there is a vector such that (76), and it follows from (76) that (77). Combining (73) with (77) gives . Since is positive definite, so is , and thus , which results in . Then, holds, which is in contradiction with (73). So . From the above analysis and Lemma 9, we have . This proves the lemma.

By Lemma 17, we have proved that , which satisfies the condition (1) in Lemma 7. In the sequel, we need to prove condition (2) in Lemma 7.

By Theorem 16, we see that the eigenvalues of are , and the remaining eigenvalues are the roots of the quadratic equation (63). By Definition 6 and Lemma 7, we only need to consider the eigenvalues of other than 1.

Lemma 18. Let and be symmetric positive definite, and let be rank deficient. Then, for all , , and such that (78) holds, where is the maximum eigenvalue of .

Together with Definition 6 and Lemmas 17 and 18, we get the following sufficient conditions for the semiconvergence of the MASOR method.

Theorem 19. Let and be symmetric positive definite, and let be rank deficient. Then, the MASOR method is semiconvergent for all , , and such that (79) holds, where is the maximum nonzero eigenvalue of .

4. The Optimal Convergence Factors of the GASOR and MASOR Methods

According to the theory of iterative methods, the optimal parameters of the GASOR method and the MASOR method are given by (80) and (81), respectively, where denotes the values of , , and for which the spectral radius of the matrix attains its minimum.

This problem has been discussed in several articles [2, 5, 8, 10, 12, 19]. Golub et al. [8] obtained the optimal parameter of the SOR-like method; Bai et al. and Li and Kong studied the optimal parameters of the GSOR-like and GSOR methods in [5, 10], respectively. Bai [19] discussed the optimal parameters of the HSS-like method, Bai et al. [2] studied the optimal parameters of the PHSS method, and so forth. To obtain the optimal values of , , and for the GASOR method and , , and for the MASOR method, we need to analyze the moduli of the eigenvalues of and , respectively. Based on the proofs of Theorems 8 and 16, the matrices and have repeated eigenvalues , and the remaining eigenvalues satisfy (21) and (63), respectively. Note that all the eigenvalues of and are available and depend on , , , and . The optimal parameters could be obtained by minimizing these eigenvalue functions. However, for most iterative methods, especially those with multiple parameters, this task is very complicated. Hence, it is very difficult to obtain the optimal parameters analytically. The parameters in this paper are therefore chosen based on prior experience and trial and error.
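The trial-and-error strategy described above can be organized as a simple grid search; the fragment below is only a sketch, and build_iteration_matrix is a hypothetical helper (not defined in this paper) that assembles the GASOR or MASOR iteration matrix for a given parameter triple.

```matlab
% Hedged sketch of a brute-force grid search for the parameter triple that
% minimizes the spectral radius of the iteration matrix.
best  = struct('rho', inf, 'p', [0, 0, 0]);
gvals = 0.1:0.1:1.9;                      % assumed search range and step size
for p1 = gvals
  for p2 = gvals
    for p3 = gvals
      T   = build_iteration_matrix(A, B, Q, p1, p2, p3);  % hypothetical helper
      rho = max(abs(eig(full(T))));
      if rho < best.rho
        best.rho = rho;  best.p = [p1, p2, p3];           % record the best triple
      end
    end
  end
end
```

For singular saddle point problems, the pseudospectral radius of Definition 6 would take the place of the spectral radius in such a search.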

5. Numerical Experiments

In this section, numerical examples illustrate the superiority of the GASOR and MASOR methods over the ASOR, SOR-like, GSOR, and MSOR-like methods when they are used for solving the nonsingular saddle point problem (1) and show the advantages of the GASOR and MASOR methods over the GSOR, GSSOR, MSSOR, and GMSSOR methods for solving the singular saddle point problem (1). All numerical procedures are carried out using Matlab 6.5 on a personal computer with an Intel(R) Pentium(R) CPU G3240T at 2.70 GHz, 2.0 GB of memory, and the Windows 7 operating system.

Example 1. Consider the Stokes flow problem [2]: find and such that (82) holds, where , is the boundary of , stands for the viscosity constant, is a vector-valued function representing the velocity, is the componentwise Laplace operator, and is a scalar function representing the pressure. By discretizing (82) with the upwind scheme [54, 55], we obtain the system , where , with

Here denotes the Kronecker product, is the discretization mesh size, and denotes a tridiagonal matrix with , , and . In addition, we choose the right-hand side vector so that the exact solution of the saddle point problem (1) is , where and . The preconditioning matrix is an approximation of matrix . Moreover, we choose in this example. We consider two cases as listed in Table 1.
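For reference, the following MATLAB lines assemble the widely used Stokes test matrices in the form popularized in [2]; whether this example uses exactly these scalings of the tridiagonal blocks is an assumption, so the fragment is a sketch rather than a verbatim reproduction of the definitions above.

```matlab
% Hedged assembly of the standard Stokes test matrices (cf. [2]); the exact
% scalings of Tq and Fq used in this example are assumptions.
q  = 16;                     % grid parameter: block sizes m = 2*q^2, n = q^2
h  = 1/(q + 1);              % discretization mesh size
nu = 1;                      % viscosity constant
e  = ones(q, 1);
Tq = (nu/h^2) * spdiags([-e, 2*e, -e], -1:1, q, q);  % tridiag(-1, 2, -1)
Fq = (1/h)    * spdiags([-e,  e], [-1, 0], q, q);    % tridiag(-1, 1, 0)
Iq = speye(q);
A  = blkdiag(kron(Iq, Tq) + kron(Tq, Iq), kron(Iq, Tq) + kron(Tq, Iq));
B  = [kron(Iq, Fq); kron(Fq, Iq)];
```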

All computations for the SOR-like, GSOR, MSOR-like, ASOR, GASOR, and MASOR methods are started from the initial vector , and the iteration is terminated once the current iterate satisfies , or once the maximum prescribed number of iterations is exceeded.
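A minimal sketch of this stopping test is given below; the 1e-6 tolerance, the iteration cap, and the exact residual formula are assumptions, and one_sweep stands for a single iteration of whichever method is being tested.

```matlab
% Hedged sketch of the stopping criterion; calA and rhs denote the coefficient
% matrix and right-hand side of (1), and one_sweep is a hypothetical function
% handle performing one iteration of the tested method.
tol = 1e-6;  maxit = 10000;                 % assumed tolerance and iteration cap
z   = zeros(size(rhs));  k = 0;  res = 1;
while res >= tol && k < maxit
    z   = one_sweep(z);                     % one sweep of the tested method
    res = norm(rhs - calA*z) / norm(rhs);   % relative residual RES
    k   = k + 1;
end
```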

In Table 2, for various problem sizes , we list the theoretical optimal iteration parameters of the SOR-like, GSOR, MSOR-like, ASOR, GASOR, and MASOR methods used in our implementations. For the GSOR, SOR-like, and MSOR-like methods, we adopt their optimal parameters as in [5, 8, 9]. In addition, the parameters taken by the ASOR method are the same as in [35], and we choose the parameters of the GASOR and MASOR methods that result in the least numbers of iterations for this numerical example. However, note that the explicit expressions of these parameters cannot be obtained, and we only choose them by trial and error. The corresponding convergence factors , , and for various problem sizes are also reported in Table 2.

In Table 3, we present the iteration numbers (IT), CPU times (CPU), and relative residual (RES) of the testing iteration methods with different problem sizes . In this table, we use the bold numbers to indicate the smallest and the second smallest CPU times and iteration numbers in each column.

As observed in Table 2, the convergence factors of the GASOR and MASOR methods are smaller than that of the SOR-like method. We can also see that the spectral radii of the GASOR and MASOR methods in Cases 1 and 2 for this example are almost the same as those of the GSOR, MSOR-like, and ASOR methods, while the GASOR and MASOR methods require much less CPU time and far fewer iteration steps to achieve the stopping criterion than the other four methods, even though the relaxation parameters of the GASOR and MASOR methods are not optimal and only lie in the convergence region of these methods. From the results in Table 3, we see that the SOR-like, GSOR, MSOR-like, ASOR, GASOR, and MASOR methods all succeed in achieving the stopping criterion within the largest admissible number of iteration steps in all cases. Moreover, the GASOR method uses the fewest iterations and the least CPU time compared with the SOR-like, GSOR, MSOR-like, ASOR, and MASOR methods, and the MASOR method is also superior to the SOR-like, GSOR, MSOR-like, and ASOR methods in both iteration numbers and CPU times, especially for large problem size . Furthermore, we can find that the performance of the GASOR and MASOR methods is almost the same.

In order to better understand the numerical results, we have presented graphs of against the number of iterations in Table 3 for , , and , respectively, in Figures 1–3. From the three figures, we note that all six methods are convergent, while the GASOR and MASOR methods converge faster than the other methods. Moreover, it can be seen that the GASOR and MASOR methods exhibit "semilocal convergence" behavior at different iteration points, and thus we can deduce that the parameters chosen for the GASOR and MASOR methods are not optimal and only lie in the convergence regions of these two new methods, which means that the GASOR and MASOR methods may perform even better with a better choice of parameters. The numerical results in this example show the feasibility and effectiveness of the GASOR and MASOR methods for solving nonsingular saddle point problems.

Example 2. Consider the singular saddle point problem (1) with the coefficient matrix given by the matrix blocks [34], where , and the right-hand side vector is chosen by , where . We choose the preconditioning matrix . All computations for the GSOR, MSSOR, GSSOR, GMSSOR, GASOR, and MASOR methods are started from the initial vector , and the iteration is terminated once the current iterate satisfies , or once the maximum prescribed number of iterations is exceeded.

In Table 4, for various problem sizes , we list the theoretical optimal iteration parameters of the iteration matrices of the GSOR, MSSOR, GSSOR, GMSSOR, GASOR, and MASOR methods for solving the singular saddle point problem. For the GSOR, MSSOR, and GSSOR methods, we take their optimal parameters as in [5, 6, 49]. For the GMSSOR method, the parameters are chosen as in [42]. Furthermore, we take the parameters of the GASOR and MASOR methods that result in the least numbers of iterations for this numerical example. As mentioned in Example 1, we only choose them by trial and error. Furthermore, the pseudospectral radii of the iteration matrices of these methods are also presented in this table.

Comparing the results in Table 4, we observe that the pseudospectral radii of the GASOR and MASOR methods are smaller than those of the MSSOR and GMSSOR methods. It can also be seen that the pseudospectral radii of the GASOR and MASOR methods for Example 2 are almost the same as those of the GSOR and GSSOR methods. However, the GASOR and MASOR methods outperform the GSOR and GSSOR methods in terms of less CPU time and fewer iteration steps. In Table 5, we present the iteration numbers (IT), CPU times (CPU), and relative residuals (RES) of the tested iteration methods with different problem sizes . In this table, we use bold numbers to indicate the smallest and the second smallest CPU times and iteration numbers in each column. The data presented in Table 5 reveal that the GSOR, MSSOR, GSSOR, GMSSOR, GASOR, and MASOR methods all succeed in producing high-quality approximate solutions in all cases. The iteration numbers and CPU times of all methods grow with the problem size. Moreover, the GASOR and MASOR methods both use the fewest iterations and the least CPU time compared with the GSOR, MSSOR, GSSOR, and GMSSOR methods. Furthermore, we can observe that the performance of the GASOR and MASOR methods is the same. It can be seen that the performance of the GASOR and MASOR methods is much better than that of the other methods, although the parameters of these two methods are not the optimal parameters, while the other methods take optimal or experimentally found optimal parameters. Hence, it is anticipated that the GASOR and MASOR methods with the optimal parameters would be much better than the other four methods.

In Figure 4, graphs of against the number of iterations in Table 5 for four different sizes are presented. They clearly show that all six methods are semiconvergent, while the GASOR and MASOR methods converge faster. However, the parameters of the GASOR and MASOR methods are not optimal and only lie in the convergence regions of these two methods.

6. Conclusions

In this paper, we propose two new methods, called the GASOR and MASOR methods, respectively, and study the convergence and semiconvergence of these two new methods for solving nonsingular and singular saddle point problems, respectively. Numerical results given in Section 5 (Tables 1–5 and Figures 1–4) show that the convergence rates of the GASOR and MASOR methods are better than those of the SOR-like, GSOR, MSOR-like, GSSOR, MSSOR, GMSSOR, and ASOR methods, even though the latter are implemented with the optimal or the experimentally found optimal parameters. The numerical results demonstrate the feasibility and effectiveness of the GASOR and MASOR methods for solving both nonsingular and singular saddle point problems.

Since the optimal parameters were not used for the GASOR and MASOR methods in the numerical experiments, it is anticipated that the GASOR and MASOR methods with the optimal parameters would be much better than the other methods. Thus, it would be desirable to find the optimal parameters for which the convergence rates of the GASOR and MASOR methods are best. Future work will include numerical or theoretical studies for finding the optimal values of , , and for the GASOR method and of , , and for the MASOR method.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

The authors would like to express their sincere gratitude to the anonymous reviewers for their valuable comments and suggestions, which have greatly improved the presentation of this paper. This work is supported by the National Natural Science Foundation of China (no. 11171273) and sponsored by the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University (no. CX201628).