Special Issue

## Inverse Problems: Theory and Application to Science and Engineering


Research Article | Open Access


Yu-Qin Bai, Ting-Zhu Huang, Miao-Miao Yu, "Convergence of a Generalized USOR Iterative Method for Augmented Systems", Mathematical Problems in Engineering, vol. 2013, Article ID 326169, 6 pages, 2013. https://doi.org/10.1155/2013/326169

# Convergence of a Generalized USOR Iterative Method for Augmented Systems

Accepted: 18 Sep 2013
Published: 04 Nov 2013

#### Abstract

Zhang and Shang (2010) presented the Uzawa-SOR (USOR) algorithm for solving augmented systems. In this paper, we establish a generalized Uzawa-SOR (GUSOR) method for solving augmented systems, which extends the USOR method. We prove the convergence of the proposed method under suitable restrictions on the iteration parameters. Finally, numerical experiments are carried out, and the results show that, with appropriate parameters, the proposed method has a faster convergence rate than the USOR method.

#### 1. Introduction

We consider the solution of systems of linear equations with the following 2-by-2 block structure:

$$\begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ -q \end{pmatrix}, \tag{1}$$

where $A \in \mathbb{R}^{m \times m}$ is a symmetric positive definite matrix, $B \in \mathbb{R}^{m \times n}$ is a matrix of full column rank with $m \geq n$, $b \in \mathbb{R}^{m}$, and $q \in \mathbb{R}^{n}$; $B^{T}$ denotes the transpose of the matrix $B$. We assume that the zero matrix and the identity matrix are of the appropriate dimensions whenever they are used in this paper. Linear systems of the form (1) appear in many applications of scientific computing, such as mixed finite element discretizations of incompressible flow problems when some form of pressure stabilization is included, constrained optimization, computational fluid dynamics, Stokes problems, constrained least squares problems, and generalized least squares problems; see [6, 7] and the references therein.
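For concreteness, a system of this block structure can be assembled and solved directly on a small instance. The random blocks below and the sign convention with $-B^{T}$ in the lower-left block are illustrative assumptions in the spirit of the SOR-like literature, not data from the paper:

```python
import numpy as np

# Hypothetical small instance of the augmented (saddle point) system (1):
# A is symmetric positive definite, B has full column rank.
rng = np.random.default_rng(0)
m, n = 6, 3
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)          # SPD by construction
B = rng.standard_normal((m, n))      # full column rank with probability 1

# Block coefficient matrix in the convention of the SOR-like literature.
K = np.block([[A, B], [-B.T, np.zeros((n, n))]])

x_true = np.ones(m)
y_true = np.ones(n)
rhs = K @ np.concatenate([x_true, y_true])   # right-hand side (b, -q)

z = np.linalg.solve(K, rhs)                  # direct solve for reference
print(np.allclose(z, np.concatenate([x_true, y_true])))
```

Since $A$ is SPD and $B$ has full column rank, the block matrix is nonsingular, so the direct solve recovers the exact solution; iterative methods such as USOR and GUSOR target the same system when a direct solve is too costly.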

A large variety of methods for solving linear systems of the form (1) can be found in the literature. Yuan and Iusem [3, 5] presented variants of the SOR method and preconditioned conjugate gradient methods. Golub et al. [8] proposed SOR-like algorithms for solving augmented systems, which were further accelerated and generalized by the GSOR method in [9]. Darvishi and Hessari [10] studied the SSOR method, and Zhang and Lu [11] studied a generalized SSOR (GSSOR) method. Recently, Zhang and Shang [12] proposed the Uzawa-SOR method and studied its convergence. Bai and Wang [13] established and studied the parameterized inexact Uzawa (PIU) method for solving the corresponding saddle point problems; convergence conditions for matrix splitting iteration methods were also discussed in [14].

The remainder of the paper is organized as follows. In Section 2, we establish the generalized Uzawa-SOR (GUSOR) method for solving augmented systems, and we analyze the convergence of the method in Section 3. Numerical results are presented in Section 4. Finally, we give some concluding remarks in Section 5.

#### 2. Generalized Uzawa-SOR (GUSOR) Method

For the sake of simplicity, we rewrite system (1) as

$$\mathcal{A}z = c, \tag{2}$$

where $\mathcal{A}$ denotes the 2-by-2 block coefficient matrix of (1), $z = (x^{T}, y^{T})^{T}$, and $c$ is the corresponding right-hand side; here $A$ is a symmetric positive definite matrix and $B$ is a matrix of full column rank. Let $A$ be decomposed as $A = D - L - U$, in which $D$ is the diagonal of $A$, $L$ is the strict lower triangular part of $A$, and $U$ is the strict upper triangular part of $A$.
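The decomposition $A = D - L - U$ described above can be sketched in a few lines of NumPy (the particular matrix below is only an example):

```python
import numpy as np

# Split A into its diagonal D, strict lower part L, and strict upper part U,
# with the sign convention A = D - L - U used in SOR-type methods.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

D = np.diag(np.diag(A))    # diagonal of A
L = -np.tril(A, k=-1)      # strict lower triangular part (note the sign)
U = -np.triu(A, k=1)       # strict upper triangular part (note the sign)

print(np.allclose(A, D - L - U))
```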

To construct the generalized USOR method, we consider a splitting $\mathcal{A} = \mathcal{M} - \mathcal{N}$, where $Q$ is a prescribed symmetric positive definite matrix. Let $\omega$ and $\tau$ be two nonzero reals, let $I_{m}$ and $I_{n}$ be the m-by-m and n-by-n identity matrices, respectively, and let the parameter matrices of the iteration be given in terms of these quantities. Then we consider the corresponding generalized SOR iteration scheme for solving the augmented linear system (2), written equivalently in the fixed-point form $z^{(k+1)} = \mathcal{T}z^{(k)} + \mathcal{M}^{-1}c$ with iteration matrix $\mathcal{T} = \mathcal{M}^{-1}\mathcal{N}$.

More precisely, we have the following algorithmic description of this GUSOR method.

Generalized USOR Method. Let $Q$ be a prescribed symmetric positive definite matrix. Given initial vectors $x^{(0)} \in \mathbb{R}^{m}$ and $y^{(0)} \in \mathbb{R}^{n}$ and the relaxation parameters $\omega$ and $\tau$ (together with the additional parameters of the generalized scheme), for $k = 0, 1, 2, \ldots$ until the iteration sequence converges, compute the updates of the scheme above.
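The paper's exact GUSOR update formulas are not reproduced here, but a two-parameter Uzawa-SOR-type iteration of the family that GUSOR generalizes can be sketched as follows. The update formulas, the choice $Q = B^{T}A^{-1}B$, and the parameter values $\omega = \tau = 1$ are assumptions for illustration, not the paper's scheme:

```python
import numpy as np

# Hedged sketch of an Uzawa-SOR-type iteration for A x + B y = b, B^T x = q:
#   x_{k+1} = (1 - omega) x_k + omega * A^{-1} (b - B y_k)
#   y_{k+1} = y_k + tau * Q^{-1} (B^T x_{k+1} - q)
rng = np.random.default_rng(1)
m, n = 8, 4
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)            # SPD block
B = rng.standard_normal((m, n))        # full column rank
Q = B.T @ np.linalg.solve(A, B)        # exact Schur complement (an assumption)

x_true, y_true = np.ones(m), np.ones(n)
b = A @ x_true + B @ y_true
q = B.T @ x_true

omega, tau = 1.0, 1.0                  # hypothetical parameter choice
x, y = np.zeros(m), np.zeros(n)
for k in range(200):
    x = (1 - omega) * x + omega * np.linalg.solve(A, b - B @ y)
    y = y + tau * np.linalg.solve(Q, B.T @ x - q)
    if np.linalg.norm(b - A @ x - B @ y) + np.linalg.norm(B.T @ x - q) < 1e-10:
        break

print(np.linalg.norm(x - x_true) < 1e-6, np.linalg.norm(y - y_true) < 1e-6)
```

With the exact Schur complement as $Q$ and $\omega = \tau = 1$ this sketch converges in two sweeps; in practice $Q$ is a cheap approximation of $B^{T}A^{-1}B$, which is exactly where the choice of relaxation parameters matters.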

Remark 1. When the relaxation parameters take the particular values used in the USOR scheme, the GUSOR method reduces to the USOR method; hence the GUSOR method is an extension of the USOR method.

#### 3. Convergence of the GUSOR Method

In this section, we analyze sufficient conditions on the iteration parameters of the generalized Uzawa-SOR (GUSOR) method for solving the augmented system (2). We will use the following notations and definitions. For a vector $u$, $u^{*}$ denotes the complex conjugate transpose of $u$; $\lambda_{\min}(H)$ and $\lambda_{\max}(H)$ denote the minimum and maximum eigenvalues of a Hermitian matrix $H$, respectively; and $\rho(\cdot)$ denotes the spectral radius. We also assume that the relaxation parameters used in this paper are positive real numbers.

Note that the iteration matrix of the proposed method is $\mathcal{T} = \mathcal{M}^{-1}\mathcal{N}$; therefore, the GUSOR method is convergent if and only if the spectral radius of the iteration matrix defined in (8) is less than one, that is, $\rho(\mathcal{T}) < 1$.
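This criterion can be checked numerically: for any stationary scheme $z^{(k+1)} = \mathcal{T}z^{(k)} + c$, the matrix $\mathcal{T}$ can be extracted column by column from the homogeneous iteration and its spectral radius computed. The sketch below does this for an assumed Uzawa-SOR-type iteration standing in for the paper's GUSOR scheme, with hypothetical parameters:

```python
import numpy as np

# Extract the iteration matrix T of a stationary scheme z_{k+1} = T z_k + c
# by applying one homogeneous sweep (zero right-hand side) to each unit vector.
rng = np.random.default_rng(2)
m, n = 6, 3
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)
B = rng.standard_normal((m, n))
Q = B.T @ np.linalg.solve(A, B)        # assumed preconditioner choice
omega, tau = 0.8, 0.5                  # hypothetical parameters

def step(x, y, b, q):
    x = (1 - omega) * x + omega * np.linalg.solve(A, b - B @ y)
    y = y + tau * np.linalg.solve(Q, B.T @ x - q)
    return x, y

N = m + n
T = np.zeros((N, N))
for j in range(N):
    e = np.zeros(N)
    e[j] = 1.0
    x1, y1 = step(e[:m], e[m:], np.zeros(m), np.zeros(n))
    T[:, j] = np.concatenate([x1, y1])

rho = np.max(np.abs(np.linalg.eigvals(T)))
print(rho < 1)   # the stationary scheme converges iff rho < 1
```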

Let $\lambda$ be an eigenvalue of the iteration matrix and let $(u^{*}, v^{*})^{*}$ be a corresponding eigenvector, partitioned conformally with the blocks of (2). Then the eigenvalue equation can be written blockwise, or equivalently, in the componentwise form (12).

Lemma 2. Let $A$ be a symmetric positive definite matrix and $B$ a matrix of full column rank. If $\lambda$ is an eigenvalue of the iteration matrix, then $\lambda \neq 1$.

Proof. If $\lambda = 1$, since the relaxation parameters are positive real numbers, then from (12) we obtain relations that, because $A$ is symmetric positive definite and $B$ has full column rank, force $u = 0$ and $v = 0$. This contradicts the assumption that $(u^{*}, v^{*})^{*}$ is an eigenvector of the iteration matrix.

Lemma 3. Let $A$ be a symmetric positive definite matrix and $B$ a matrix of full column rank. If $\lambda$ is an eigenvalue of the iteration matrix and $(u^{*}, v^{*})^{*}$ is an eigenvector of the iteration matrix corresponding to $\lambda$, then $u \neq 0$. Moreover, if $v = 0$, then $|\lambda| < 1$.

Proof. The proof is exactly the same as in [12], so we omit it here.

Lemma 4 (see [9]). Both roots of the complex quadratic equation $\lambda^{2} - \phi\lambda + \psi = 0$ have modulus less than one if and only if $|\phi - \bar{\phi}\psi| + |\psi|^{2} < 1$, where $\bar{\phi}$ denotes the complex conjugate of $\phi$.
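Lemma 4 lends itself to a quick numerical spot-check: sample random complex coefficients, compute both roots of $\lambda^{2} - \phi\lambda + \psi = 0$, and compare the root moduli against the criterion. Samples too close to the boundary are skipped to avoid floating-point ties:

```python
import cmath
import random

# Spot-check of Lemma 4: both roots of lambda^2 - phi*lambda + psi = 0 have
# modulus < 1 iff |phi - conj(phi)*psi| + |psi|^2 < 1.
random.seed(0)
ok = True
for _ in range(10000):
    phi = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    psi = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    crit = abs(phi - phi.conjugate() * psi) + abs(psi) ** 2
    disc = cmath.sqrt(phi * phi - 4 * psi)
    max_mod = max(abs((phi + disc) / 2), abs((phi - disc) / 2))
    if abs(crit - 1) < 1e-9 or abs(max_mod - 1) < 1e-9:
        continue   # too close to the boundary for a reliable float comparison
    if (max_mod < 1) != (crit < 1):
        ok = False
        break
print(ok)
```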

Now we are in a position to establish the convergence of the proposed method. The following theorem presents a sufficient condition guaranteeing the convergence of the GUSOR method.

Theorem 5. Let $A$ be a symmetric positive definite matrix, $B$ a matrix of full column rank, and let $Q$ be a symmetric positive definite matrix. If the relaxation parameters satisfy the stated restrictions, then the proposed method is convergent.

Proof. By Lemma 2, we have $\lambda \neq 1$. From (12), we obtain the relations between $\lambda$, $u$, and $v$. If $v = 0$, then from Lemma 3 we have $u \neq 0$ and $|\lambda| < 1$.
We now assume that $v \neq 0$. Substituting the first relation of (12) into the second and noting that $u \neq 0$ by Lemma 3, after some manipulations we find that $\lambda$ satisfies a quadratic equation of the form treated in Lemma 4, with coefficients $\phi$ and $\psi$ given in (18). From Lemma 4, the roots of the complex quadratic equation (18) satisfy $|\lambda| < 1$ if and only if condition (21) holds. Solving (21) for the relaxation parameters yields the restrictions of the theorem. Since the relevant matrix is Hermitian, its eigenvalues are real, and the bound obtained is an increasing function of these eigenvalues, so the restrictions can be stated in terms of the extreme eigenvalues. Hence, the theorem is proved.

#### 4. Numerical Experiments

In this section, we provide numerical experiments to examine the feasibility and effectiveness of the GUSOR method for solving the saddle point problem (1) and to compare the GUSOR method with the USOR method of [12]. We report the number of iterations (IT), the norm of the absolute residual vector (RES), the elapsed CPU time (CPU), and the spectral radius of the corresponding iteration matrix, denoted by $\rho$. Here, RES is defined as the 2-norm of the residual of (1) at the final approximate solution. We choose the right-hand side vector such that the exact solution of the augmented linear system (1) is known. All numerical experiments are carried out on a PC equipped with an Intel Core i3 2.3 GHz CPU and 2.00 GB RAM, using MATLAB R2010a.

Example 6 (see [9]). Consider the augmented system (1) in which

$$A = \begin{pmatrix} I \otimes T + T \otimes I & 0 \\ 0 & I \otimes T + T \otimes I \end{pmatrix} \in \mathbb{R}^{2p^{2} \times 2p^{2}}, \qquad B = \begin{pmatrix} I \otimes F \\ F \otimes I \end{pmatrix} \in \mathbb{R}^{2p^{2} \times p^{2}},$$

where $\otimes$ is the Kronecker product symbol, $F = \frac{1}{h}\,\mathrm{tridiag}(-1, 1, 0) \in \mathbb{R}^{p \times p}$, and $T = \frac{1}{h^{2}}\,\mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{p \times p}$ is a tridiagonal matrix, with mesh size $h = 1/(p+1)$ for appropriate $p$.

For this example, $m = 2p^{2}$ and $n = p^{2}$. Hence, the total number of variables is $m + n = 3p^{2}$. We choose the matrix $Q$ as an approximation to the Schur complement matrix $B^{T}A^{-1}B$, according to the three cases listed in Table 1.
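The test matrices can be assembled with Kronecker products. The concrete stencils $T = (1/h^{2})\,\mathrm{tridiag}(-1,2,-1)$ and $F = (1/h)\,\mathrm{tridiag}(-1,1,0)$ with $h = 1/(p+1)$ below follow the standard test problem of the cited SOR-like literature and should be read as an assumption, since the original entries are garbled in this copy:

```python
import numpy as np

# Assemble the (assumed) standard test problem behind Example 6.
p = 4
h = 1.0 / (p + 1)
T = (np.diag(2 * np.ones(p)) + np.diag(-np.ones(p - 1), -1)
     + np.diag(-np.ones(p - 1), 1)) / h**2          # tridiag(-1, 2, -1) / h^2
F = (np.eye(p) + np.diag(-np.ones(p - 1), -1)) / h  # tridiag(-1, 1, 0) / h
I = np.eye(p)

K = np.kron(I, T) + np.kron(T, I)                   # discrete Laplacian, p^2-by-p^2
A = np.block([[K, np.zeros((p**2, p**2))],
              [np.zeros((p**2, p**2)), K]])         # m = 2 p^2
B = np.vstack([np.kron(I, F), np.kron(F, I)])       # m-by-n with n = p^2

m, n = A.shape[0], B.shape[1]
print(m == 2 * p**2, n == p**2)                     # block sizes match the text
print(np.linalg.matrix_rank(B) == n)                # B has full column rank
print(np.all(np.linalg.eigvalsh(A) > 0))            # A is SPD
```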

Table 1: Choices of the matrix $Q$.

| Case number | Description |
| --- | --- |
| I | |
| II | |
| III | |

In our experiments, all runs of both the USOR method and the GUSOR method are started from the zero initial vector and are terminated once the current iterate satisfies the stopping criterion on ERR, the error measure of the current iterate.

In Tables 2, 3, and 4, we list the values of the relaxation parameters (which are the same as in [12]), IT, RES, CPU, and the spectral radii of the corresponding iteration matrices for various problem sizes. The tables clearly show that the GUSOR method is more effective than the USOR method in convergence rate, computing speed, and the spectral radii of the corresponding iteration matrices. The IT and CPU of the proposed method are nearly half those of USOR for the smaller problem sizes. However, the relaxation parameters of the GUSOR method used here are not optimal values and only lie in the convergence region of the method; the determination of optimal parameter values needs further study.

Table 2: Numerical results of the USOR and GUSOR methods.

| Method | | $m = 128$, $n = 64$ | $m = 512$, $n = 256$ | $m = 1152$, $n = 576$ |
| --- | --- | --- | --- | --- |
| USOR | $\omega$ | 1.3 | 1.4 | 1.4 |
| | $\tau$ | 0.6 | 0.5 | 0.5 |
| | IT | 102 | 248 | 346 |
| | RES | | | |
| | CPU | 0.0073 | 0.5343 | 3.1231 |
| | $\rho$ | 0.8567 | 0.9387 | 0.9548 |
| GUSOR | $\omega$ | 1.3 | 1.4 | 1.4 |
| | $\tau$ | 0.6 | 0.5 | 0.5 |
| | IT | 63 | 179 | 282 |
| | RES | | | |
| | CPU | 0.0047 | 0.3611 | 2.5255 |
| | $\rho$ | 0.7677 | 0.9146 | 0.9424 |
Table 3: Numerical results of the USOR and GUSOR methods.

| Method | | $m = 128$, $n = 64$ | $m = 512$, $n = 256$ | $m = 1152$, $n = 576$ |
| --- | --- | --- | --- | --- |
| USOR | $\omega$ | 1.3 | 1.3 | 1.3 |
| | $\tau$ | 0.5 | 0.5 | 0.5 |
| | IT | 123 | 212 | 346 |
| | RES | | | |
| | CPU | 0.0169 | 0.4651 | 3.6313 |
| | $\rho$ | 0.8817 | 0.9263 | 0.9522 |
| GUSOR | $\omega$ | 1.3 | 1.3 | 1.3 |
| | $\tau$ | 0.5 | 0.5 | 0.5 |
| | IT | 62 | 169 | 280 |
| | RES | | | |
| | CPU | 0.0081 | 0.3728 | 2.9000 |
| | $\rho$ | 0.7784 | 0.9027 | 0.9395 |
Table 4: Numerical results of the USOR and GUSOR methods.

| Method | | $m = 128$, $n = 64$ | $m = 512$, $n = 256$ | $m = 1152$, $n = 576$ |
| --- | --- | --- | --- | --- |
| USOR | $\omega$ | 1.3 | 1.4 | 1.3 |
| | $\tau$ | 0.6 | 0.5 | 0.5 |
| | IT | 120 | 265 | 364 |
| | RES | | | |
| | CPU | 0.0084 | 0.4487 | 3.2774 |
| | $\rho$ | 0.8783 | 0.9427 | 0.9549 |
| GUSOR | $\omega$ | 1.3 | 1.4 | 1.3 |
| | $\tau$ | 0.6 | 0.5 | 0.5 |
| | IT | 69 | 194 | 306 |
| | RES | | | |
| | CPU | 0.0054 | 0.3234 | 2.6347 |
| | $\rho$ | 0.7890 | 0.9207 | 0.9448 |

Remark 7. When the additional relaxation parameter takes the value used in [12], the proposed method reduces to the one in [12]. From the experimental results, we find that the optimal value of this relaxation parameter seems to be about 0.5.

#### 5. Conclusions

In this paper, we propose the GUSOR method for the solution of saddle point problems and analyze its convergence. When the relaxation parameters are chosen appropriately, the spectral radii of the iteration matrices, the iteration counts, and the CPU times of the proposed method are smaller than those of the method in [12], as the numerical experiments show. In particular, how to select the optimal set of parameters to accelerate the convergence of the method effectively remains open; the optimal choice of this set of parameters is a worthwhile question and is left for future work.

#### Acknowledgments

The authors would like to thank the anonymous referees for their helpful comments and advice, which greatly improved the paper. This study was financially supported by the National Natural Science Foundation of China (nos. 11161041 and 71301111) and the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020).

1. S. Wright, “Stability of augmented system factorizations in interior-point methods,” SIAM Journal on Matrix Analysis and Applications, vol. 18, no. 1, pp. 191–222, 1997.
2. M. Arioli, I. S. Duff, and P. P. M. de Rijk, “On the augmented system approach to sparse least-squares problems,” Numerische Mathematik, vol. 55, no. 6, pp. 667–684, 1989.
3. J. Y. Yuan and A. N. Iusem, “Preconditioned conjugate gradient method for generalized least squares problems,” Journal of Computational and Applied Mathematics, vol. 71, no. 2, pp. 287–297, 1996.
4. C. H. Santos, B. P. B. Silva, and J. Y. Yuan, “Block SOR methods for rank-deficient least-squares problems,” Journal of Computational and Applied Mathematics, vol. 100, no. 1, pp. 1–9, 1998.
5. J. Y. Yuan, “Numerical methods for generalized least squares problems,” Journal of Computational and Applied Mathematics, vol. 66, no. 1-2, pp. 571–584, 1996.
6. Z.-Z. Bai, “Structured preconditioners for nonsingular matrices of block two-by-two structures,” Mathematics of Computation, vol. 75, no. 254, pp. 791–815, 2006.
7. Z.-Z. Bai and G.-Q. Li, “Restrictively preconditioned conjugate gradient methods for systems of linear equations,” IMA Journal of Numerical Analysis, vol. 23, no. 4, pp. 561–580, 2003.
8. G. H. Golub, X. Wu, and J.-Y. Yuan, “SOR-like methods for augmented systems,” BIT Numerical Mathematics, vol. 41, no. 1, pp. 71–85, 2001.
9. Z.-Z. Bai, B. N. Parlett, and Z.-Q. Wang, “On generalized successive overrelaxation methods for augmented linear systems,” Numerische Mathematik, vol. 102, no. 1, pp. 1–38, 2005.
10. M. T. Darvishi and P. Hessari, “Symmetric SOR method for augmented systems,” Applied Mathematics and Computation, vol. 183, no. 1, pp. 409–415, 2006.
11. G.-F. Zhang and Q.-H. Lu, “On generalized symmetric SOR method for augmented systems,” Journal of Computational and Applied Mathematics, vol. 219, no. 1, pp. 51–58, 2008.
12. J. Zhang and J. Shang, “A class of Uzawa-SOR methods for saddle point problems,” Applied Mathematics and Computation, vol. 216, no. 7, pp. 2163–2168, 2010.
13. Z.-Z. Bai and Z.-Q. Wang, “On parameterized inexact Uzawa methods for generalized saddle point problems,” Linear Algebra and Its Applications, vol. 428, no. 11-12, pp. 2900–2932, 2008.
14. L. Wang and Z.-Z. Bai, “Skew-Hermitian triangular splitting iteration methods for non-Hermitian positive definite linear systems of strong skew-Hermitian parts,” BIT Numerical Mathematics, vol. 44, no. 2, pp. 363–386, 2004.
