Journal of Applied Mathematics, Volume 2014, Article ID 578102, 9 pages. http://dx.doi.org/10.1155/2014/578102
Research Article

## A Generalized HSS Iteration Method for Continuous Sylvester Equations

1School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, Gansu, China
2Department of Mathematics, Federal University of Paraná, Centro Politécnico, CP 19.081, 81531-980 Curitiba, PR, Brazil

Received 20 August 2013; Accepted 13 December 2013; Published 12 January 2014

Copyright © 2014 Xu Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Based on the Hermitian and skew-Hermitian splitting (HSS) iteration technique, we establish a generalized HSS (GHSS) iteration method for solving large sparse continuous Sylvester equations with non-Hermitian and positive definite/semidefinite matrices. The GHSS method is essentially a four-parameter iteration which not only covers the standard HSS iteration but also enables us to optimize the iterative process. An exact parameter region of convergence for the method is strictly proved and a minimum value for the upper bound of the iterative spectrum is derived. Moreover, to reduce the computational cost, we establish an inexact variant of the GHSS (IGHSS) iteration method whose convergence property is discussed. Numerical experiments illustrate the efficiency and robustness of the GHSS iteration method and its inexact variant.

#### 1. Introduction

Consider the following continuous Sylvester equation:
$$AX + XB = C, \quad (1)$$
where $A \in \mathbb{C}^{m \times m}$, $B \in \mathbb{C}^{n \times n}$, and $C \in \mathbb{C}^{m \times n}$ are given complex matrices. Assume that (i) $A$, $B$, and $C$ are large and sparse matrices; (ii) at least one of $A$ and $B$ is non-Hermitian; (iii) both $A$ and $B$ are positive semidefinite, and at least one of them is positive definite. Since under assumptions (i)-(iii) there is no common eigenvalue between $A$ and $-B$, we obtain from [1, 2] that the continuous Sylvester equation (1) has a unique solution. Obviously, the continuous Lyapunov equation is a special case of the continuous Sylvester equation (1) with $B = A^{*}$ and $C$ Hermitian, where $A^{*}$ represents the conjugate transpose of the matrix $A$. This continuous Sylvester equation arises in several areas of application. For more details about the practical backgrounds of this class of problems, we refer to [2-15] and the references therein.

Before giving its numerical scheme, we rewrite the continuous Sylvester equation (1) as the mathematically equivalent system of linear equations
$$\mathcal{A}x = c, \qquad \mathcal{A} = I \otimes A + B^{T} \otimes I, \quad (2)$$
where the vectors $x$ and $c$ contain the concatenated columns of the matrices $X$ and $C$, respectively, with $\otimes$ being the Kronecker product symbol and $B^{T}$ representing the transpose of the matrix $B$. However, this variant of the continuous Sylvester equation (1) is of much larger dimension and typically ill-conditioned, so solving it directly by an iteration method is quite expensive.
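To make the reformulation concrete, the following NumPy check (sizes and data arbitrary, not from the paper) builds the Kronecker-form coefficient matrix and verifies that the vectorized solve recovers a solution of (1):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
# Diagonally shifted random matrices so that the problem is well posed.
A = rng.standard_normal((m, m)) + m * np.eye(m)
B = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((m, n))

# Kronecker reformulation: (I_n (x) A + B^T (x) I_m) vec(X) = vec(C),
# where vec(.) stacks the columns of a matrix.
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
x = np.linalg.solve(K, C.flatten(order="F"))
X = x.reshape((m, n), order="F")

# X solves the original Sylvester equation A X + X B = C.
residual = np.linalg.norm(A @ X + X @ B - C)
```

In practice one never forms the $mn \times mn$ Kronecker matrix for large problems; this direct solve is only a correctness check of the equivalence.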

There is a large number of numerical methods for solving the continuous Sylvester equation (1). The Bartels-Stewart and the Hessenberg-Schur methods [16, 17] are direct algorithms, which can only be applied to problems of reasonably small size. When the matrices $A$ and $B$ become large and sparse, iterative methods are usually employed for efficiently and accurately solving the continuous Sylvester equation (1), for instance, Smith's method [18], the alternating direction implicit (ADI) method [19-22], and others [23-26].

Recently, Bai established the Hermitian and skew-Hermitian splitting (HSS) iteration method [4] for solving the continuous Sylvester equation (1), which is based on the Hermitian and skew-Hermitian splittings of the matrices $A$ and $B$. This HSS iteration method is a matrix variant of the original HSS iteration method first proposed by Bai et al. [27] for solving systems of linear equations; see [28-39] for more detailed descriptions of the HSS iteration method and its variants.

To further improve the convergence efficiency, in this paper we present a new generalized Hermitian and skew-Hermitian splitting (GHSS) method for solving the continuous Sylvester equation (1). It is a four-parameter iteration which enables the optimization of the iterative process, thereby achieving higher efficiency and robustness. Similar approaches that use parameterized acceleration techniques in the design of iterative methods can be found in [34-39].

In the remainder of this paper, a matrix sequence $\{X^{(k)}\}$ is said to be convergent to a matrix $X$ if the corresponding vector sequence $\{x^{(k)}\}$ is convergent to the corresponding vector $x$, where the vectors $x^{(k)}$ and $x$ contain the concatenated columns of the matrices $X^{(k)}$ and $X$, respectively. If $\{X^{(k)}\}$ is convergent, then its convergence factor and convergence rate are defined as those of $\{x^{(k)}\}$, correspondingly. In addition, we use $\operatorname{sp}(A)$, $\|A\|_{2}$, and $\|A\|_{F}$ to denote the spectrum, the spectral norm, and the Frobenius norm of the matrix $A$, respectively. Note that $\|\cdot\|_{2}$ is also used to represent the 2-norm of a vector.
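The norm identity behind this convention, $\|X\|_{F} = \|\operatorname{vec}(X)\|_{2}$, can be checked directly (a small illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 5))
x = X.flatten(order="F")  # vec(X): the concatenated columns of X

# ||X||_F equals ||vec(X)||_2, so matrix-sequence convergence in the
# Frobenius norm is exactly vector-sequence convergence in the 2-norm.
fro = np.linalg.norm(X, "fro")
two = np.linalg.norm(x, 2)
```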

The rest of this paper is organized as follows. In Section 2, we present the GHSS method for solving the continuous Sylvester equation (1), in which we use four parameters instead of two parameters in the HSS method [4]. An exact parameter region of convergence for the method is strictly proved and a minimum value for the upper bound of the iterative spectrum is derived in Section 3. In Section 4, an inexact variant of the GHSS (IGHSS) iteration method is presented and its convergence property is studied. Numerical examples are given to illustrate the theoretical results and the effectiveness of the GHSS method in Section 5. Finally, we draw our conclusions.

#### 2. The GHSS Method

Here and in the sequel, we use $H(A) = \frac{1}{2}(A + A^{*})$ and $S(A) = \frac{1}{2}(A - A^{*})$ to denote the Hermitian part and the skew-Hermitian part of the matrix $A$, respectively. Obviously, the matrix $A$ naturally possesses the Hermitian and skew-Hermitian splitting (HSS)
$$A = H(A) + S(A);$$
see [4, 27, 28].
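A quick numerical sanity check of these definitions (illustrative only; the matrix below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

H = (A + A.conj().T) / 2  # Hermitian part H(A)
S = (A - A.conj().T) / 2  # skew-Hermitian part S(A)

ok_hermitian = np.allclose(H, H.conj().T)   # H = H*
ok_skew = np.allclose(S, -S.conj().T)       # S = -S*
ok_split = np.allclose(H + S, A)            # A = H(A) + S(A)
```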

Similar to the HSS method [4], we obtain the following splittings of $A$ and $B$:
$$A = (\alpha_1 I + H(A)) - (\alpha_1 I - S(A)) = (\beta_1 I + S(A)) - (\beta_1 I - H(A)),$$
$$B = (\alpha_2 I + H(B)) - (\alpha_2 I - S(B)) = (\beta_2 I + S(B)) - (\beta_2 I - H(B)),$$
where $\alpha_1, \alpha_2$ are given nonnegative constants, $\beta_1, \beta_2$ are given positive constants, and $I$ is the identity matrix of suitable dimension. Then the continuous Sylvester equation (1) can be equivalently reformulated as the pair of fixed-point matrix equations
$$(\alpha_1 I + H(A))X + X(\alpha_2 I + H(B)) = (\alpha_1 I - S(A))X + X(\alpha_2 I - S(B)) + C,$$
$$(\beta_1 I + S(A))X + X(\beta_2 I + S(B)) = (\beta_1 I - H(A))X + X(\beta_2 I - H(B)) + C.$$

Under assumptions (i)-(iii), we observe that there is no common eigenvalue between the matrices $\alpha_1 I + H(A)$ and $-(\alpha_2 I + H(B))$, as well as between the matrices $\beta_1 I + S(A)$ and $-(\beta_2 I + S(B))$, so that the above two fixed-point matrix equations have unique solutions for all given right-hand-side matrices. This leads to the following generalized Hermitian and skew-Hermitian splitting (GHSS) iteration method for solving the continuous Sylvester equation (1).

Algorithm 1 (the GHSS iteration method). Given an initial guess $X^{(0)} \in \mathbb{C}^{m \times n}$, compute $X^{(k+1)}$ for $k = 0, 1, 2, \ldots$ using the following iteration scheme until $\{X^{(k)}\}$ satisfies the stopping criterion:
$$(\alpha_1 I + H(A))X^{(k+1/2)} + X^{(k+1/2)}(\alpha_2 I + H(B)) = (\alpha_1 I - S(A))X^{(k)} + X^{(k)}(\alpha_2 I - S(B)) + C,$$
$$(\beta_1 I + S(A))X^{(k+1)} + X^{(k+1)}(\beta_2 I + S(B)) = (\beta_1 I - H(A))X^{(k+1/2)} + X^{(k+1/2)}(\beta_2 I - H(B)) + C, \quad (6)$$
where $\alpha_1, \alpha_2$ are given nonnegative constants, $\beta_1, \beta_2$ are given positive constants, and $I$ is the identity matrix of suitable dimension.
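As a concrete illustration, the sketch below implements one GHSS sweep in the form described above: a Hermitian half-step followed by a skew-Hermitian one. The placement of the parameters $\alpha_1, \alpha_2, \beta_1, \beta_2$ is a reconstruction (the paper's displayed scheme did not survive extraction), and each half-step, itself a shifted Sylvester equation, is solved naively via the Kronecker form, so this is suitable for small demonstration sizes only:

```python
import numpy as np

def vec_solve(P, Q, R):
    # Solve P Y + Y Q = R via the Kronecker form (small sizes only).
    m, n = R.shape
    K = np.kron(np.eye(n), P) + np.kron(Q.T, np.eye(m))
    return np.linalg.solve(K, R.flatten(order="F")).reshape((m, n), order="F")

def ghss(A, B, C, a1, a2, b1, b2, tol=1e-10, maxit=500):
    # Reconstructed GHSS sweep: Hermitian half-step, then skew-Hermitian
    # half-step; reduces to HSS when a1 == b1 and a2 == b2 (an assumption).
    HA, SA = (A + A.conj().T) / 2, (A - A.conj().T) / 2
    HB, SB = (B + B.conj().T) / 2, (B - B.conj().T) / 2
    m, n = C.shape
    Im, In = np.eye(m), np.eye(n)
    X = np.zeros_like(C)
    for k in range(maxit):
        R = (a1 * Im - SA) @ X + X @ (a2 * In - SB) + C
        X = vec_solve(a1 * Im + HA, a2 * In + HB, R)
        R = (b1 * Im - HA) @ X + X @ (b2 * In - HB) + C
        X = vec_solve(b1 * Im + SA, b2 * In + SB, R)
        if np.linalg.norm(C - A @ X - X @ B) <= tol * np.linalg.norm(C):
            return X, k + 1
    return X, maxit

# A small test problem: symmetric positive definite Hermitian part (M)
# plus a skew-symmetric part (N), a structure typical of HSS test cases.
M = 2 * np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1)
N = 0.5 * (np.eye(4, k=1) - np.eye(4, k=-1))
A = B = M + N
C = np.arange(16.0).reshape(4, 4)
X, its = ghss(A, B, C, 1.0, 1.0, 1.0, 1.0)  # HSS special case
```

With $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, as in the call above, the sweep coincides with the HSS special case, for which convergence is guaranteed by the theory.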

Remark 2. The GHSS method has the same algorithmic structure as the HSS method [4], and thus the two methods have the same computational cost in each iteration step. It is easy to see that the former reduces to the latter when $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$.

Remark 3. When $B$ is a zero matrix, and $X$ and $C$ reduce to column vectors, the GHSS iteration method becomes the one for systems of linear equations; see [34-36]. In addition, when $B = A^{*}$ and $C$ is Hermitian, it leads to a GHSS iteration method for continuous Lyapunov equations.

#### 3. Convergence Analysis of the GHSS Method

Let $H(A)$, $S(A)$ and $H(B)$, $S(B)$ be the Hermitian and the skew-Hermitian parts of the matrices $A$ and $B$, respectively.

Denote
$$\mathcal{H} = I \otimes H(A) + H(B)^{T} \otimes I, \qquad \mathcal{S} = I \otimes S(A) + S(B)^{T} \otimes I, \quad (9)$$
with
$$\alpha = \alpha_1 + \alpha_2 \quad \text{and} \quad \beta = \beta_1 + \beta_2. \quad (11)$$

In addition, denote by $\lambda_{\max}$, $\lambda_{\min}$ and $\mu_{\max}$, $\mu_{\min}$ the upper and the lower bounds of the eigenvalues of the matrices $H(A)$ and $H(B)$, respectively.

By making use of Theorems 2.2 and 2.5 in [35], we can obtain the following convergence theorem about the GHSS iteration method for solving the continuous Sylvester equation (1).

Theorem 4. Assume that $H(A)$ and $H(B)$ are positive semidefinite matrices, and at least one of them is positive definite. Let $\alpha_1, \alpha_2$ be nonnegative constants and $\beta_1, \beta_2$ be positive constants, and let $\alpha = \alpha_1 + \alpha_2$ and $\beta = \beta_1 + \beta_2$. Denote by
$$M(\alpha, \beta) = (\beta I + \mathcal{S})^{-1}(\beta I - \mathcal{H})(\alpha I + \mathcal{H})^{-1}(\alpha I - \mathcal{S}) \quad (10)$$
the iteration matrix of the GHSS method. Then the convergence factor of the GHSS iteration method (6) is given by the spectral radius $\rho(M(\alpha, \beta))$, which is bounded by the quantity $\sigma(\alpha, \beta)$ given in (12). And, if the parameters $\alpha$ and $\beta$ satisfy the condition (13), then $\rho(M(\alpha, \beta)) \le \sigma(\alpha, \beta) < 1$; that is, the GHSS iteration method (6) is convergent to the exact solution $X^{*}$ of the continuous Sylvester equation (1).

Proof. By making use of the Kronecker product, we can reformulate the GHSS iteration (6) in the following matrix-vector form:
$$(\alpha I + \mathcal{H})\, x^{(k+1/2)} = (\alpha I - \mathcal{S})\, x^{(k)} + c, \qquad (\beta I + \mathcal{S})\, x^{(k+1)} = (\beta I - \mathcal{H})\, x^{(k+1/2)} + c, \quad (17)$$
with $\alpha = \alpha_1 + \alpha_2$ and $\beta = \beta_1 + \beta_2$. Evidently, the iteration scheme (17) is the GHSS iteration method for solving the system of linear equations (2), with $\mathcal{A} = I \otimes A + B^{T} \otimes I$; see [34, 35]. After concrete operations, the GHSS iteration (17) can also be expressed as the stationary iteration
$$x^{(k+1)} = M(\alpha, \beta)\, x^{(k)} + G(\alpha, \beta)\, c,$$
where $M(\alpha, \beta)$ is the iteration matrix defined in (10), with $\mathcal{H}$, $\mathcal{S}$ and $\alpha$, $\beta$ being given in (9) and (11), respectively, and $G(\alpha, \beta) = (\beta I + \mathcal{S})^{-1}\big[(\beta I - \mathcal{H})(\alpha I + \mathcal{H})^{-1} + I\big]$.
We can easily verify that $\mathcal{H}$ is a Hermitian matrix, $\mathcal{S}$ is a skew-Hermitian matrix, $\alpha$ is a nonnegative constant, and $\beta$ is a positive constant. Moreover, when either $H(A)$ or $H(B)$ is positive definite, the matrix $\mathcal{H}$ is Hermitian positive definite. The spectral radius $\rho(M(\alpha, \beta))$ of the iteration matrix clearly satisfies $\rho(M(\alpha, \beta)) \le \sigma(\alpha, \beta)$, so the bound for $\rho(M(\alpha, \beta))$ is given by (12).
Hence, by making use of Theorem 2.2 in [35] we know that $\sigma(\alpha, \beta) < 1$ whenever the condition (13) holds. Therefore we obtain that, if the parameters $\alpha$ and $\beta$ satisfy the condition (13), the GHSS iteration method (17) converges to the exact solution of the system of linear equations (2). This directly shows that the GHSS iteration method (6) is convergent to the exact solution of the continuous Sylvester equation (1) when $\alpha$ and $\beta$ satisfy the condition (13), with the convergence factor being bounded by $\sigma(\alpha, \beta)$. This completes the proof.

Remark 5. When $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, the GHSS method reduces to the HSS method, which is unconditionally convergent since in that case $\sigma(\alpha, \beta) < 1$ holds for all positive parameter values.

Theorem 4 gives the convergence conditions of the GHSS iteration method for the continuous Sylvester equation (1) by analyzing the upper bound $\sigma(\alpha, \beta)$ of the spectral radius of the iteration matrix $M(\alpha, \beta)$. Since the optimal parameters $\alpha$ and $\beta$ that minimize the spectral radius itself can hardly be obtained, we instead give, in the following corollary, the parameters $\alpha^{*}$ and $\beta^{*}$ that minimize the upper bound $\sigma(\alpha, \beta)$.

Corollary 6. The theoretical quasi-optimal parameters $\alpha^{*}$ and $\beta^{*}$ that minimize the upper bound $\sigma(\alpha, \beta)$ admit closed-form expressions, as does the corresponding upper bound $\sigma(\alpha^{*}, \beta^{*})$ of the convergence factor.

Proof. It is straightforward from Theorem 2.5 of [35].

Remark 7. We observe that when $\alpha^{*} = \beta^{*}$, GHSS with the theoretical quasi-optimal parameters reduces to HSS with the theoretical quasi-optimal parameter [4]. In other cases, the GHSS iteration is superior to the HSS iteration when both of them use the theoretical quasi-optimal parameters. This phenomenon is also illustrated by the numerical results in Section 5.

Remark 8. The actual iteration parameters $\alpha_1, \alpha_2$ and $\beta_1, \beta_2$ can be chosen as any splittings such that $\alpha_1 + \alpha_2 = \alpha^{*}$ and $\beta_1 + \beta_2 = \beta^{*}$. For example, we may take $\alpha_1 = \alpha_2 = \alpha^{*}/2$ and $\beta_1 = \beta_2 = \beta^{*}/2$.

#### 4. Inexact GHSS Iteration Methods

In the process of the GHSS iteration (6), two subproblems need to be solved exactly at each step. This is a tough task, which is costly and even impractical in actual implementations. To further improve the computational efficiency of the GHSS iteration, we develop an inexact GHSS (IGHSS) iteration, which solves the two subproblems iteratively [18-24]. We write the IGHSS iteration scheme in the following algorithm for solving the continuous Sylvester equation (1).

Algorithm 9 (the IGHSS iteration method). Given an initial guess $X^{(0)}$, this algorithm leads to the solution of the continuous Sylvester equation (1):
$k := 0$;
while (not convergent)
approximately solve the first half-step of the GHSS scheme by employing an effective iteration method, such that the residual of the inner iteration satisfies the tolerance $\varepsilon_k$;
update the intermediate iterate $X^{(k+1/2)}$;
approximately solve the second half-step of the GHSS scheme by employing an effective iteration method, such that the residual of the inner iteration satisfies the tolerance $\eta_k$;
update the iterate $X^{(k+1)}$ and set $k := k + 1$;
end.

Here, $\varepsilon_k$ and $\eta_k$ are prescribed tolerances used to control the accuracies of the inner iterations. We remark that when $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, the IGHSS method reduces to the inexact HSS (IHSS) method [4].

The convergence properties of two-step iterations of this kind have been carefully studied in [27, 31]. By making use of Theorem 3.1 in [27], we can demonstrate the following convergence result for the IGHSS iteration method.

Theorem 10. Let the conditions of Theorem 4 be satisfied. If $\{X^{(k)}\}$ is an iteration sequence generated by the IGHSS iteration method and if $X^{*}$ is the exact solution of the continuous Sylvester equation (1), then an error estimate for $\|X^{(k+1)} - X^{*}\|$ holds in a suitably weighted norm, with constants determined by the matrices $\mathcal{H}$ and $\mathcal{S}$ defined in (9) and the constants $\alpha$ and $\beta$ defined in (11). In particular, when the tolerances $\varepsilon_k$ and $\eta_k$ satisfy the condition (29), the iteration sequence $\{X^{(k)}\}$ converges to $X^{*}$.

Proof. By making use of the Kronecker product and the notation introduced in Theorem 4, we can reformulate the above-described IGHSS iteration in the matrix-vector form (30), where the residuals of the two inner solves are required to satisfy the tolerances $\varepsilon_k$ and $\eta_k$, respectively.
Evidently, the iteration scheme (30) is the inexact GHSS iteration method for solving the system of linear equations (2), with $\mathcal{A} = I \otimes A + B^{T} \otimes I$; see [34, 36]. Hence, by making use of Theorem 3.1 in [27] we can obtain the estimate (33), where the norm $\|\cdot\|$ is a weighted vector norm and, for a matrix, the correspondingly induced matrix norm. Noting that the Frobenius norm of a matrix equals the 2-norm of its vectorization, we can equivalently rewrite the estimate (33) in the matrix form stated in Theorem 10. This proves the theorem.

We remark that Theorem 10 gives the choices of the tolerances $\varepsilon_k$ and $\eta_k$ that guarantee convergence. In general, Theorem 10 shows that, in order to guarantee the convergence of the IGHSS iteration, it is not necessary for $\varepsilon_k$ and $\eta_k$ to approach zero as $k$ increases. All we need is that the condition (29) be satisfied. However, the theoretically optimal tolerances $\varepsilon_k$ and $\eta_k$ are difficult to analyze.

#### 5. Numerical Results

In this section, we perform numerical tests to exhibit the superiority of GHSS and IGHSS over HSS and IHSS when they are used to solve the continuous Sylvester equation (1), in terms of iteration numbers (denoted by IT) and CPU times (in seconds, denoted by CPU).

In our implementations, the initial guess is chosen to be the zero matrix, and the iteration is terminated once the current iterate satisfies the prescribed stopping criterion on the residual. In addition, all subproblems involved in each step of the HSS and GHSS iteration methods are solved exactly by the method in [16]. In the IHSS and IGHSS iteration methods, we prescribe the inner tolerances $\varepsilon_k$ and $\eta_k$ and use Smith's method [18] as the inner iteration scheme.
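The exact stopping tolerance did not survive extraction; a common criterion in this literature, shown here purely as an assumption about the intended rule, is the relative Frobenius-norm residual:

```python
import numpy as np

def rel_residual(A, B, C, X):
    # Relative Frobenius-norm residual of the iterate X for A X + X B = C.
    return np.linalg.norm(C - A @ X - X @ B, "fro") / np.linalg.norm(C, "fro")

# Example: manufactured problem with known solution X; the zero initial
# guess has relative residual exactly 1, the exact solution residual 0.
A = np.diag([2.0, 3.0])
B = np.diag([1.0, 4.0])
X = np.array([[1.0, 2.0], [3.0, 4.0]])
C = A @ X + X @ B
r0 = rel_residual(A, B, C, np.zeros_like(X))
rstar = rel_residual(A, B, C, X)
```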

We consider the continuous Sylvester equation (1) with the matrices $A$ and $B$ constructed from given tridiagonal matrices; see also [4-6, 27, 40, 41].

From [4] we know that the HSS iteration method considerably outperforms the SOR iteration method in both iteration steps and CPU times, so here we solve this continuous Sylvester equation only by the GHSS and the HSS iteration methods and their inexact variants.

In Table 1, numerical results for GHSS and HSS with the experimentally optimal iteration parameters are listed, together with the experimentally found optimal values of the iteration parameters used for the GHSS and the HSS iterations, respectively.

Table 1: Numerical results for GHSS and HSS with the experimental optimal iteration parameters.

In Table 2, numerical results for GHSS and HSS with the theoretical quasi-optimal iteration parameters are listed, together with the theoretical quasi-optimal iteration parameters used for the GHSS and the HSS iterations, respectively.

Table 2: Numerical results for GHSS and HSS with the theoretical quasi-optimal iteration parameters.

In Table 3, numerical results for IGHSS and IHSS are listed; for convenience, we adopt the iteration parameters of Table 1 rather than the experimentally optimal parameters for the inexact methods.

Table 3: Numerical results for IGHSS and IHSS.

From Tables 1-3 we observe that the GHSS and IGHSS methods perform better than the HSS and IHSS methods in terms of iteration numbers and CPU times. Therefore, the GHSS and IGHSS methods proposed in this work are two powerful and attractive iterative approaches for solving large sparse continuous Sylvester equations.

#### 6. Conclusions

As a strategy for accelerating the convergence of iterative methods for solving a broad class of continuous Sylvester equations, we have proposed a four-parameter generalized HSS (GHSS) method. It generalizes the classical HSS method [4], which is recovered as a special case for a particular choice of the parameters. We have demonstrated that the iterative sequence produced by the GHSS method converges to the unique solution of the continuous Sylvester equation when the parameters satisfy some moderate conditions, and we have derived a quasi-optimal upper bound for the iterative spectral radius. Moreover, to reduce the computational cost, an inexact variant of the GHSS (IGHSS) iteration method has been developed and its convergence property analyzed. Numerical results show that the new GHSS method and its inexact variant are typically more efficient and flexible than the HSS and IHSS methods.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

This work is partially supported by the National Basic Research (973) Program of China under Grant no. 2011CB706903, the National Natural Science Foundation of China under Grant no. 11271173, the Mathematical Tianyuan Foundation of China under Grant no. 11026064, and the CAPES and CNPq in Brazil.

#### References

1. P. Lancaster, “Explicit solutions of linear matrix equations,” SIAM Review, vol. 12, no. 4, pp. 544–566, 1970.
2. P. Lancaster and M. Tismenetsky, The Theory of Matrices, Computer Science and Applied Mathematics, Academic Press, Orlando, Fla, USA, 2nd edition, 1985.
3. Q.-W. Wang and Z.-H. He, “Solvability conditions and general solution for mixed Sylvester equations,” Automatica, vol. 49, no. 9, pp. 2713–2719, 2013.
4. Z.-Z. Bai, “On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations,” Journal of Computational Mathematics, vol. 29, no. 2, pp. 185–198, 2011.
5. Z.-Z. Bai, Y.-M. Huang, and M. K. Ng, “On preconditioned iterative methods for Burgers equations,” SIAM Journal on Scientific Computing, vol. 29, no. 1, pp. 415–439, 2007.
6. Z.-Z. Bai and M. K. Ng, “Preconditioners for nonsymmetric block toeplitz-like-plus-diagonal linear systems,” Numerische Mathematik, vol. 96, no. 2, pp. 197–220, 2003.
7. P. Lancaster and L. Rodman, Algebraic Riccati Equations, Oxford Science Publications, The Clarendon Press, New York, NY, USA, 1995.
8. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
9. A. Halanay and V. Răsvan, Applications of Liapunov Methods in Stability, vol. 245 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1993.
10. A.-P. Liao, Z.-Z. Bai, and Y. Lei, “Best approximate solution of matrix equation $AXB+CYD=E$,” SIAM Journal on Matrix Analysis and Applications, vol. 27, no. 3, pp. 675–688, 2005.
11. Z.-Z. Bai, X.-X. Guo, and S.-F. Xu, “Alternately linearized implicit iteration methods for the minimal nonnegative solutions of the nonsymmetric algebraic Riccati equations,” Numerical Linear Algebra with Applications, vol. 13, no. 8, pp. 655–674, 2006.
12. X.-X. Guo and Z.-Z. Bai, “On the minimal nonnegative solution of nonsymmetric algebraic Riccati equation,” Journal of Computational Mathematics, vol. 23, no. 3, pp. 305–320, 2005.
13. G.-P. Xu, M.-S. Wei, and D.-S. Zheng, “On solutions of matrix equation $AXB+CYD=F$,” Linear Algebra and Its Applications, vol. 279, no. 1–3, pp. 93–109, 1998.
14. J.-T. Zhou, R.-R. Wang, and Q. Niu, “A preconditioned iteration method for solving Sylvester equations,” Journal of Applied Mathematics, vol. 2012, Article ID 401059, 12 pages, 2012.
15. F. Yin and G.-X. Huang, “An iterative algorithm for the generalized reflexive solutions of the generalized coupled Sylvester matrix equations,” Journal of Applied Mathematics, vol. 2012, Article ID 152805, 28 pages, 2012.
16. R. H. Bartels and G. W. Stewart, “Solution of the matrix equation $AX+XB=C$ [F4],” Communications of the ACM, vol. 15, no. 9, pp. 820–826, 1972.
17. G. H. Golub, S. Nash, and C. Van Loan, “A Hessenberg-Schur method for the problem $AX+XB=C$,” IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 909–913, 1979.
18. R. A. Smith, “Matrix equation $XA+BX=C$,” SIAM Journal on Applied Mathematics, vol. 16, no. 1, pp. 198–201, 1968.
19. D. Calvetti and L. Reichel, “Application of ADI iterative methods to the restoration of noisy images,” SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 1, pp. 165–186, 1996.
20. D. Y. Hu and L. Reichel, “Krylov-subspace methods for the Sylvester equation,” Linear Algebra and Its Applications, vol. 172, pp. 283–313, 1992.
21. N. Levenberg and L. Reichel, “A generalized ADI iterative method,” Numerische Mathematik, vol. 66, no. 1, pp. 215–233, 1993.
22. E. L. Wachspress, “Iterative solution of the Lyapunov matrix equation,” Applied Mathematics Letters, vol. 1, no. 1, pp. 87–90, 1988.
23. G. Starke and W. Niethammer, “SOR for $AX-XB=C$,” Linear Algebra and Its Applications, vol. 154–156, pp. 355–375, 1991.
24. D. J. Evans and E. Galligani, “A parallel additive preconditioner for conjugate gradient method for $AX+XB=C$,” Parallel Computing, vol. 20, no. 7, pp. 1055–1064, 1994.
25. L. Jódar, “An algorithm for solving generalized algebraic Lyapunov equations in Hilbert space, applications to boundary value problems,” Proceedings of the Edinburgh Mathematical Society: Series 2, vol. 31, no. 1, pp. 99–105, 1988.
26. C.-Q. Gu and H.-Y. Xue, “A shift-splitting hierarchical identification method for solving Lyapunov matrix equations,” Linear Algebra and Its Applications, vol. 430, no. 5-6, pp. 1517–1530, 2009.
27. Z.-Z. Bai, G. H. Golub, and M. K. Ng, “Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems,” SIAM Journal on Matrix Analysis and Applications, vol. 24, no. 3, pp. 603–626, 2003.
28. Z.-Z. Bai, “Splitting iteration methods for non-Hermitian positive definite systems of linear equations,” Hokkaido Mathematical Journal, vol. 36, no. 4, pp. 801–814, 2007.
29. Z.-Z. Bai, G. H. Golub, L.-Z. Lu, and J.-F. Yin, “Block triangular and skew-Hermitian splitting methods for positive-definite linear systems,” SIAM Journal on Scientific Computing, vol. 26, no. 3, pp. 844–863, 2005.
30. Z.-Z. Bai, G. H. Golub, and M. K. Ng, “On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations,” Numerical Linear Algebra with Applications, vol. 14, no. 4, pp. 319–335, 2007.
31. Z.-Z. Bai, G. H. Golub, and M. K. Ng, “On inexact hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems,” Linear Algebra and Its Applications, vol. 428, no. 2-3, pp. 413–440, 2008.
32. Z.-Z. Bai, G. H. Golub, and J.-Y. Pan, “Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems,” Numerische Mathematik, vol. 98, no. 1, pp. 1–32, 2004.
33. M. Benzi, “A generalization of the Hermitian and skew-Hermitian splitting iteration,” SIAM Journal on Matrix Analysis and Applications, vol. 31, no. 2, pp. 360–374, 2009.
34. L. Li, T.-Z. Huang, and X.-P. Liu, “Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems,” Computers and Mathematics with Applications, vol. 54, no. 1, pp. 147–159, 2007.
35. A.-L. Yang, J. An, and Y.-J. Wu, “A generalized preconditioned HSS method for non-Hermitian positive definite linear systems,” Applied Mathematics and Computation, vol. 216, no. 6, pp. 1715–1722, 2010.
36. J.-F. Yin and Q.-Y. Dou, “Generalized preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems,” Journal of Computational Mathematics, vol. 30, no. 4, pp. 404–417, 2012.
37. Z.-Z. Bai and G. H. Golub, “Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems,” IMA Journal of Numerical Analysis, vol. 27, no. 1, pp. 1–23, 2007.
38. X. Li, A.-L. Yang, and Y.-J. Wu, “Parameterized preconditioned Hermitian and skew-Hermitian splitting iteration method for saddle-point problems,” International Journal of Computer Mathematics, 2013.
39. W. Li, Y.-P. Liu, and X.-F. Peng, “The generalized HSS method for solving singular linear systems,” Journal of Computational and Applied Mathematics, vol. 236, no. 9, pp. 2338–2353, 2012.
40. O. Axelsson, Z.-Z. Bai, and S.-X. Qiu, “A class of nested iteration schemes for linear systems with a coefficient matrix with a dominant positive definite symmetric part,” Numerical Algorithms, vol. 35, no. 2-4, pp. 351–372, 2004.
41. Z.-Z. Bai, “A class of two-stage iterative methods for systems of weakly nonlinear equations,” Numerical Algorithms, vol. 14, no. 4, pp. 295–319, 1997.