Mathematical Problems in Engineering
Volume 2017, Article ID 1624969, 8 pages
https://doi.org/10.1155/2017/1624969
Research Article

The Relaxed Gradient Based Iterative Algorithm for the Symmetric (Skew Symmetric) Solution of the Sylvester Equation
Xiaodan Zhang and Xingping Sheng

1School of Information and Computer, Anhui Agricultural University, Hefei 230036, China
2School of Mathematics and Statistics, Fuyang Normal College, Anhui 236037, China

Correspondence should be addressed to Xingping Sheng; xingpingsheng@163.com

Received 26 February 2017; Accepted 23 March 2017; Published 3 April 2017

Academic Editor: Jean Jacques Loiseau

Copyright © 2017 Xiaodan Zhang and Xingping Sheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we present two different relaxed gradient based iterative (RGI) algorithms for computing the symmetric and skew symmetric solutions of the Sylvester matrix equation $AX+XB=C$. It is proved that the iterative solutions generated by these two methods converge to the true symmetric (skew symmetric) solution under appropriate assumptions, for any initial symmetric (skew symmetric) matrix. Finally, two numerical examples are given to illustrate the efficiency of the introduced iterative algorithms.

1. Introduction

For the convenience of our statements, the following notation will be used throughout the paper: $\mathbb{R}^{m\times n}$ represents the set of $m\times n$ real matrices. For $A\in\mathbb{R}^{m\times n}$, we write $A^{T}$, $R(A)$, $\operatorname{tr}(A)$, $\rho(A)$, $\lambda_{\max}(A)$, $\lambda_{\min}(A)$, $\|A\|_{2}$, and $\|A\|$ to denote the transpose, the range space, the trace, the spectral radius, the maximal eigenvalue, the minimal eigenvalue, the spectral norm, and the Frobenius norm of a matrix $A$, respectively; that is, $\|A\|_{2}=\sqrt{\lambda_{\max}(A^{T}A)}$ and $\|A\|^{2}=\operatorname{tr}(A^{T}A)$. $\sigma_{\max}(A)$, $\sigma_{\min}(A)$ are the maximal singular value and the minimal nonzero singular value of $A$. The symbol $I_{n}$ represents an identity matrix of order $n$, $\mathbf{1}_{m\times n}$ is an $m\times n$ matrix whose elements are all 1, and $\operatorname{cond}(A)$ is the condition number of matrix $A$. The inner product in the space $\mathbb{R}^{m\times n}$ is defined as $\langle A,B\rangle=\operatorname{tr}(B^{T}A)$; particularly, $\langle A,A\rangle=\|A\|^{2}$. $\otimes$ denotes the Kronecker product, defined as $A\otimes B=(a_{ij}B)$ for $A\in\mathbb{R}^{m\times n}$, $B\in\mathbb{R}^{p\times q}$. For any matrix $X=(x_{1},x_{2},\ldots,x_{n})\in\mathbb{R}^{m\times n}$, the vector operator is defined as $\operatorname{vec}(X)=(x_{1}^{T},x_{2}^{T},\ldots,x_{n}^{T})^{T}$. Using the vector operator and the Kronecker product, we have $\operatorname{vec}(AXB)=(B^{T}\otimes A)\operatorname{vec}(X)$.
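The standard vec/Kronecker identity can be checked numerically. The following sketch uses Python/NumPy purely for illustration (the paper itself works in MATLAB); note that the column-stacking vec operator corresponds to column-major ("Fortran") flattening:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
X = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 5))

# vec(.) stacks the columns of a matrix, i.e. column-major flattening
vec = lambda M: M.flatten(order="F")

lhs = vec(A @ X @ B)              # vec(AXB)
rhs = np.kron(B.T, A) @ vec(X)    # (B^T kron A) vec(X)
print(np.allclose(lhs, rhs))      # True
```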

We consider the symmetric (skew symmetric) solution of the Sylvester matrix equation
$$AX+XB=C, \qquad (1)$$
where $A, B, C \in \mathbb{R}^{n\times n}$ are given and $X \in \mathbb{R}^{n\times n}$ is the unknown matrix.

The Sylvester matrix equation (1) has many applications in linear system theory, for example, pole/eigenstructure assignment [1–4], robust pole assignment [5–8], robust partial pole assignment [9], observer design [10], the model matching problem [11], regularization of descriptor systems [12, 13], the disturbance decoupling problem [14], and noninteracting control [15].

As is well known, (1) has a unique solution if and only if $A$ and $-B$ possess no common eigenvalues [16], and the solution can be computed by solving the linear system $(I_{n}\otimes A + B^{T}\otimes I_{n})\operatorname{vec}(X)=\operatorname{vec}(C)$. However, this approach sharply increases the computational cost and storage requirements, since the coefficient matrix is of order $n^{2}$, so it is only applicable to small sized Sylvester equations.
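For small problems the vec/Kronecker reduction can be carried out directly. The sketch below (NumPy; the function and variable names are ours, not the paper's) also makes the storage drawback visible: the coefficient matrix has $n^{4}$ entries.

```python
import numpy as np

def sylvester_via_kron(A, B, C):
    """Solve AX + XB = C through vec(AX + XB) = vec(C).

    The coefficient matrix I kron A + B^T kron I is n^2 x n^2,
    so this direct approach is practical only for small n.
    """
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))   # vec(C), column-major
    return x.reshape((n, n), order="F")

A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[2.0, 0.0], [1.0, 5.0]])
X_true = np.array([[1.0, 2.0], [2.0, -1.0]])
C = A @ X_true + X_true @ B
X = sylvester_via_kron(A, B, C)
print(np.allclose(X, X_true))   # True: A and -B share no eigenvalues here
```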

Due to these drawbacks, many other methods have appeared in the literature. The idea of transforming the coefficient matrices into Schur or Hessenberg form to solve (1) was presented in [16, 17]. When the linear matrix equation (1) is inconsistent, a finite iterative method for solving its Hermitian minimum norm solutions was presented in [18]. An efficient iterative method based on Hermitian and skew Hermitian splitting was proposed in [19]. Krylov subspace based methods were presented in [20–26] for solving Sylvester equations and generalized Sylvester equations. Recently, based on the idea of the hierarchical identification principle [27–29], some efficient gradient based iterative algorithms for solving generalized Sylvester equations and (general) coupled Sylvester equations were proposed in [27, 30–32]. Particularly, for Sylvester equations of form (1), it is illustrated in [33] that the unknown matrix to be identified can be computed by a gradient based iterative algorithm. The convergence properties of these methods are investigated in [27, 32]. Niu et al. [34] proposed a relaxed gradient based iterative algorithm for solving Sylvester equations. Wang et al. [35] proposed a modified gradient based iterative algorithm for solving Sylvester equation (1). More recently, Xie and Ma [36] gave an accelerated gradient based iterative algorithm for solving (1). In [37, 38] Xie et al. studied solutions of matrix equation (1) with special structure by iterative methods.
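The Schur-form approach of Bartels and Stewart [17] is available in standard numerical libraries; for instance, SciPy's `scipy.linalg.solve_sylvester` implements it, which gives a convenient reference solution for checking iterative methods (a usage sketch, not part of the paper):

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[2.0, 0.0], [1.0, 5.0]])
X_true = np.array([[1.0, 2.0], [2.0, -1.0]])
C = A @ X_true + X_true @ B

X = solve_sylvester(A, B, C)        # solves AX + XB = C via Schur forms
print(np.linalg.norm(X - X_true))   # small, at rounding level
```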

In this paper, inspired by [28, 34–36], we first derive a relaxed gradient based iterative (RGI) algorithm for solving the symmetric solution of matrix equation (1). Theoretical analysis shows that our method converges to the exact symmetric solution for any initial value under some appropriate assumptions. The proposed algorithm can then also be applied to the skew symmetric solution of matrix equation (1). Numerical results illustrate that the proposed method is correct and feasible. We must point out that the ideas in this paper differ in some respects from those in [28, 34–36].

The rest of the paper is organized as follows. In Section 2, some main preliminaries are provided. In Section 3, the relaxed gradient based iterative methods are studied. Finally, numerical examples are included to verify the convergence behavior of the algorithms.

2. Preliminaries

In this section, we review the ideas and principles of the gradient based iterative (GI) method, the relaxed gradient based iterative (RGI) method, and the modified gradient based iterative (MGI) method.

Let $\mu>0$ be the convergence factor or step factor. The gradient based iterative method for the linear system $Ax=b$ is as follows:
$$x(k)=x(k-1)+\mu A^{T}\bigl(b-Ax(k-1)\bigr). \qquad (2)$$

The convergence of the gradient based iterative method is stated as follows.

Lemma 1 (see [32]). Assume the matrix $A$ has full column rank and $0<\mu<2/\lambda_{\max}(A^{T}A)$; then the gradient based iterative sequence $x(k)$ in (2) converges to the solution $x^{*}$; that is, $\lim_{k\to\infty}x(k)=x^{*}$, or the error $x(k)-x^{*}$ converges to zero for any initial value $x(0)$. Moreover, the maximal convergence rate is achieved for
$$\mu_{0}=\frac{2}{\lambda_{\max}(A^{T}A)+\lambda_{\min}(A^{T}A)}.$$
In this case, the error vector satisfies
$$\|x(k)-x^{*}\|\le\left(\frac{\operatorname{cond}^{2}(A)-1}{\operatorname{cond}^{2}(A)+1}\right)^{k}\|x(0)-x^{*}\|.$$

In [28], Ding and Chen presented the following algorithm based on gradient for solving (1).

Algorithm 2 (see [28] (the gradient based iterative (GI) algorithm)).
Step 1. Input matrices $A$, $B$, $C$ and a small positive number $\varepsilon$. Choose the initial matrices $X_{1}(0)$ and $X_{2}(0)$. Compute $X(0)=\frac{X_{1}(0)+X_{2}(0)}{2}$. Set $k:=1$.
Step 2. If $\|C-AX(k-1)-X(k-1)B\|<\varepsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequences
$$X_{1}(k)=X(k-1)+\mu A^{T}\bigl(C-AX(k-1)-X(k-1)B\bigr),$$
$$X_{2}(k)=X(k-1)+\mu\bigl(C-AX(k-1)-X(k-1)B\bigr)B^{T},$$
$$X(k)=\frac{X_{1}(k)+X_{2}(k)}{2}.$$
Step 4. Set $k:=k+1$; return to Step 2.

The authors of [28] also pointed out that if the convergence factor $\mu$ is chosen in $\left(0,\frac{2}{\lambda_{\max}(AA^{T})+\lambda_{\max}(BB^{T})}\right)$, Algorithm 2 will converge to the exact solution of (1).
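A direct transcription of the GI iteration of [28] can be sketched as follows (in NumPy; the default choice of $\mu$ inside the admissible interval and the residual-based stopping rule are our own choices):

```python
import numpy as np

def gi_sylvester(A, B, C, mu=None, tol=1e-10, max_iter=50000):
    """Gradient based iterative (GI) sketch for AX + XB = C."""
    n = A.shape[0]
    if mu is None:
        # any mu in (0, 2 / (lmax(AA^T) + lmax(BB^T))) is admissible;
        # note lmax(AA^T) equals the squared spectral norm of A
        mu = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)
    X = np.zeros((n, n))
    for _ in range(max_iter):
        R = C - A @ X - X @ B                 # current residual
        if np.linalg.norm(R, "fro") < tol:
            break
        X1 = X + mu * A.T @ R                 # step treating AX = C - XB
        X2 = X + mu * R @ B.T                 # step treating XB = C - AX
        X = 0.5 * (X1 + X2)                   # average the two half-updates
    return X

A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
X_true = np.array([[1.0, -1.0], [2.0, 0.5]])
C = A @ X_true + X_true @ B
X = gi_sylvester(A, B, C)
print(np.allclose(X, X_true, atol=1e-6))   # True
```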

Niu et al. [34] gave a relaxed gradient based iterative algorithm for solving (1). When the relaxation factor $\omega$ is in $(0,1)$ and the step size is chosen suitably, the following algorithm has been proven to be convergent.

Algorithm 3 (see [34] (the relaxed gradient based iterative (RGI) algorithm)).
Step 1. Input matrices $A$, $B$, $C$, a small positive number $\varepsilon$, and an appropriate positive number $\omega\in(0,1)$. Choose the initial matrices $X_{1}(0)$ and $X_{2}(0)$. Compute $X(0)=(1-\omega)X_{1}(0)+\omega X_{2}(0)$. Set $k:=1$.
Step 2. If $\|C-AX(k-1)-X(k-1)B\|<\varepsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequences
$$X_{1}(k)=X(k-1)+\omega\mu A^{T}\bigl(C-AX(k-1)-X(k-1)B\bigr),$$
$$X_{2}(k)=X(k-1)+(1-\omega)\mu\bigl(C-AX(k-1)-X(k-1)B\bigr)B^{T},$$
$$X(k)=(1-\omega)X_{1}(k)+\omega X_{2}(k).$$
Step 4. Set $k:=k+1$; return to Step 2.

Recently, in [35] Wang et al. proposed a modified gradient based iterative (MGI) algorithm to solve (1). The main difference is that, in the step of computing $X_{2}(k)$, the latest approximate solution $X_{1}(k)$ is used fully to update the iterate.

Algorithm 4 (see [35] (the modified gradient based iterative (MGI) algorithm)).
Step 1. Input matrices $A$, $B$, $C$ and a small positive number $\varepsilon$. Choose the initial matrix $X(0)$. Compute $R(0)=C-AX(0)-X(0)B$. Set $k:=1$.
Step 2. If $\|R(k-1)\|<\varepsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequence
$$X_{1}(k)=X(k-1)+\mu A^{T}R(k-1).$$
Step 4. Compute
$$R_{1}(k)=C-AX_{1}(k)-X_{1}(k)B.$$
Step 5. Update the sequence
$$X(k)=X_{1}(k)+\mu R_{1}(k)B^{T}.$$
Step 6. Compute
$$R(k)=C-AX(k)-X(k)B.$$
Step 7. Set $k:=k+1$; return to Step 2.

More recently, Xie and Ma [36] presented the following AGBI algorithm for solving (1) based on the idea of MGI.

Algorithm 5 (see [36] (the accelerated gradient based iterative (AGBI) algorithm)).
Step 1. Input matrices $A$, $B$, $C$, a small positive number $\varepsilon$, and an appropriate positive number $\omega\in(0,1)$. Choose the initial matrix $X(0)$. Compute $R(0)=C-AX(0)-X(0)B$. Set $k:=1$.
Step 2. If $\|R(k-1)\|<\varepsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequence
$$X_{1}(k)=X(k-1)+\omega\mu A^{T}R(k-1).$$
Step 4. Compute
$$R_{1}(k)=C-AX_{1}(k)-X_{1}(k)B.$$
Step 5. Update the sequence
$$X(k)=X_{1}(k)+(1-\omega)\mu R_{1}(k)B^{T}.$$
Step 6. Compute
$$R(k)=C-AX(k)-X(k)B.$$
Step 7. Set $k:=k+1$; return to Step 2.

3. Main Results

In this section, we first study the necessary and sufficient conditions of the symmetric solution for (1). Then the relaxed gradient based iterative algorithm for the symmetric solution of equation (1) is proposed. Following the same line, the relaxed gradient based iterative algorithm for the skew symmetric solution of equation (1) is also presented.

Theorem 6. The matrix equation (1) has a unique symmetric solution $X$ if and only if the following pair of matrix equations
$$AY+YB=C,\qquad B^{T}Y+YA^{T}=C^{T} \qquad (15)$$
has a unique common solution $Y$, and $X=\frac{Y+Y^{T}}{2}$.

Proof. If $X$ is the unique symmetric solution of (1), then $AX+XB=C$ and $X^{T}=X$; further, taking the transpose we have
$$B^{T}X+XA^{T}=(AX+XB)^{T}=C^{T}.$$
This shows that $X$ is also a common solution of the pair of matrix equations (15).
Conversely, if the system of matrix equations (15) has a common solution $Y$, let us denote $X=\frac{Y+Y^{T}}{2}$; then, transposing the second equation of (15) gives $AY^{T}+Y^{T}B=C$, and we can check that
$$AX+XB=\frac{1}{2}\bigl(AY+YB\bigr)+\frac{1}{2}\bigl(AY^{T}+Y^{T}B\bigr)=\frac{1}{2}C+\frac{1}{2}C=C.$$
This implies that $X=\frac{Y+Y^{T}}{2}$ is the unique symmetric solution of (1).

According to Theorem 6, if the unique common solution $Y$ of equations (15) can be obtained, then the unique symmetric solution of (1) is $X=\frac{Y+Y^{T}}{2}$.

According to Theorem 6, we construct a relaxed gradient based iterative algorithm to solve the symmetric solution of (1).

Algorithm 7 (the relaxed gradient based iterative (RGI) algorithm for the symmetric solution of (1)).
Step 1. Input matrices $A$, $B$, $C$, a small positive number $\varepsilon$, and an appropriate positive number $\omega$ such that $0<\omega<1$. Choose any initial symmetric matrix $X(0)$. Set $k:=1$.
Step 2. If $\|C-AX(k-1)-X(k-1)B\|<\varepsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequences
$$X_{1}(k)=X(k-1)+\omega\mu A^{T}\bigl(C-AX(k-1)-X(k-1)B\bigr),$$
$$X_{2}(k)=X(k-1)+(1-\omega)\mu B\bigl(C^{T}-B^{T}X(k-1)-X(k-1)A^{T}\bigr),$$
$$X(k)=(1-\omega)X_{1}(k)+\omega X_{2}(k).$$
Step 4. Set $k:=k+1$; return to Step 2.
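To illustrate the idea of iterating toward a symmetric solution, the following NumPy sketch is our own simplified variant, not the paper's exact RGI scheme: it drops the relaxation factor $\omega$ and simply symmetrizes each gradient step, so every iterate stays symmetric when the initial guess is symmetric.

```python
import numpy as np

def gi_sylvester_sym(A, B, C, mu=None, tol=1e-10, max_iter=50000):
    """Gradient sketch for the symmetric solution of AX + XB = C.

    Simplified stand-in for a relaxed scheme: no relaxation factor,
    and each gradient step is symmetrized so that the iterates remain
    symmetric whenever the initial guess is symmetric.
    """
    n = A.shape[0]
    if mu is None:
        mu = 0.5 / (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)) ** 2
    X = np.zeros((n, n))                      # symmetric initial guess
    for _ in range(max_iter):
        R = C - A @ X - X @ B                 # residual of (1)
        if np.linalg.norm(R, "fro") < tol:
            break
        G = A.T @ R + R @ B.T                 # descent direction
        X = X + mu * 0.5 * (G + G.T)          # symmetrized step
    return X

A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
X_true = np.array([[1.0, 2.0], [2.0, -1.0]])  # symmetric target
C = A @ X_true + X_true @ B
X = gi_sylvester_sym(A, B, C)
print(np.allclose(X, X.T) and np.allclose(X, X_true, atol=1e-6))   # True
```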

In the following paragraph, we will investigate the convergence of Algorithm 7.

Theorem 8. Assume the matrix equations (15) have a unique common solution $Y$; then, for suitable choices of the step size $\mu$ and the relaxation factor $\omega$, the iterative sequence $X(k)$ generated by Algorithm 7 converges to $Y$; that is, $\lim_{k\to\infty}X(k)=Y$, or the error $X(k)-Y$ converges to zero for any initial value $X(0)$.

Further, the sequence $\frac{X(k)+X(k)^{T}}{2}$ converges to $X=\frac{Y+Y^{T}}{2}$, which is the unique symmetric solution of (1).

Proof. Define the error matrix $E(k)=X(k)-Y$. Substituting the iteration of Algorithm 7 into this definition and using the fact that $Y$ satisfies both equations of (15), one obtains a linear recursion for $E(k)$. Taking Frobenius norms and applying the stated bounds on $\mu$ and $\omega$ shows that $\|E(k)\|$ decreases geometrically, so $\|E(k)\|\to 0$ as $k\to\infty$; that is, $\lim_{k\to\infty}X(k)=Y$ for any initial value $X(0)$.

From Theorem 6 and the limit above, we have $\lim_{k\to\infty}\frac{X(k)+X(k)^{T}}{2}=\frac{Y+Y^{T}}{2}=X$, the unique symmetric solution of (1).

Following the same line, the idea of Algorithm 7 can be extended to solve the skew symmetric solution of (1). First, we need the following theorem.

Theorem 9. The matrix equation (1) has a unique skew symmetric solution $X$ if and only if the following pair of matrix equations
$$AY+YB=C,\qquad B^{T}Y+YA^{T}=-C^{T} \qquad (30)$$
has a unique common solution $Y$, and $X=\frac{Y-Y^{T}}{2}$.

The relaxed gradient based iterative algorithm for solving the skew symmetric solution of (1) can be stated as follows.

Algorithm 10 (the relaxed gradient based iterative (RGI) algorithm for the skew symmetric solution of (1)).
Step 1. Input matrices $A$, $B$, $C$, a small positive number $\varepsilon$, and an appropriate positive number $\omega$ such that $0<\omega<1$. Choose any initial skew symmetric matrix $X(0)$. Set $k:=1$.
Step 2. If $\|C-AX(k-1)-X(k-1)B\|<\varepsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequences
$$X_{1}(k)=X(k-1)+\omega\mu A^{T}\bigl(C-AX(k-1)-X(k-1)B\bigr),$$
$$X_{2}(k)=X(k-1)+(1-\omega)\mu B\bigl(-C^{T}-B^{T}X(k-1)-X(k-1)A^{T}\bigr),$$
$$X(k)=(1-\omega)X_{1}(k)+\omega X_{2}(k).$$
Step 4. Set $k:=k+1$; return to Step 2.
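The skew symmetric case can be illustrated analogously. As before, the sketch below is our own simplified construction (no relaxation factor): each gradient step is antisymmetrized, so the iterates remain skew symmetric when started from the zero matrix.

```python
import numpy as np

def gi_sylvester_skew(A, B, C, mu=None, tol=1e-10, max_iter=50000):
    """Gradient sketch for the skew symmetric solution of AX + XB = C."""
    n = A.shape[0]
    if mu is None:
        mu = 0.5 / (np.linalg.norm(A, 2) + np.linalg.norm(B, 2)) ** 2
    X = np.zeros((n, n))                      # skew symmetric start
    for _ in range(max_iter):
        R = C - A @ X - X @ B
        if np.linalg.norm(R, "fro") < tol:
            break
        G = A.T @ R + R @ B.T
        X = X + mu * 0.5 * (G - G.T)          # antisymmetrized step
    return X

A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])
X_true = np.array([[0.0, 2.0], [-2.0, 0.0]])  # skew symmetric target
C = A @ X_true + X_true @ B
X = gi_sylvester_skew(A, B, C)
print(np.allclose(X, -X.T) and np.allclose(X, X_true, atol=1e-6))   # True
```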

Similarly, we have the following theorem to ensure the convergence of Algorithm 10.

Theorem 11. Assume the matrix equations (30) have a unique common solution $Y$; then, for suitable choices of the step size $\mu$ and the relaxation factor $\omega$, the iterative sequence $X(k)$ generated by Algorithm 10 converges to $Y$; that is, $\lim_{k\to\infty}X(k)=Y$, or the error $X(k)-Y$ converges to zero for any initial value $X(0)$.

Furthermore, the sequence $\frac{X(k)-X(k)^{T}}{2}$ converges to $X=\frac{Y-Y^{T}}{2}$, which is the unique skew symmetric solution of (1).

4. Numerical Examples

In this section, two numerical examples are used to show the efficiency of the RGI method. All the computations were performed on an Intel® Core™ i7-4500U CPU (1.80 GHz, up to 2.40 GHz) using MATLAB 7.0. The iteration error is measured by the Frobenius norm of the absolute error matrix, defined as $E(k)=\|X(k)-X^{*}\|$, where $X(k)$ is the $k$th iterate of the RGI method and $X^{*}$ is the exact solution.

Example 1. In matrix equation (1), we choose

It is easy to show that the matrix equation (1) is consistent and has a unique symmetric solution. By computing, the unique symmetric solution is given as

Applying the RGI method (Algorithm 7) to compute the symmetric solution, the computed value of $\lambda_{\max}(AA^{T})+\lambda_{\max}(BB^{T})$ is 962.2175. The iterative errors $E(k)$ versus the iteration number $k$ for several choices of the parameters $\mu$ and $\omega$ are shown in Figure 1.

Figure 1: Convergence curve of the symmetric solution.

Example 2. In matrix equation (1), if we choose

it is easy to check that the matrix equation (1) is consistent and has a unique skew symmetric solution. The unique skew symmetric solution is computed as

Taking the same kind of parameter choices and using Algorithm 10, the convergence curve of the skew symmetric solution is shown in Figure 2.

Figure 2: Convergence curve of the skew symmetric solution.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This project was supported by NSF China (no. 11471122), Anhui Provincial Natural Science Foundation (no. 1508085MA12), Key Projects of Anhui Provincial University Excellent Talent Support Program (no. gxyqZD2016188), and the University Natural Science Research Key Project of Anhui Province (no. KJ2015A161).

References

1. S. P. Bhattacharyya and E. de Souza, “Pole assignment via Sylvester's equation,” Systems & Control Letters, vol. 1, no. 4, pp. 261–263, 1982.
2. G. R. Duan, “Solutions to matrix equation AV+BW=VF and their application to eigenstructure assignment in linear systems,” IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 276–280, 1993.
3. G. R. Duan, “On the solution to the Sylvester matrix equation AV+BW=EVF,” IEEE Transactions on Automatic Control, vol. 41, no. 4, pp. 612–614, 1996.
4. B. Zhou and G. R. Duan, “A new solution to the generalized Sylvester matrix equation AV-EVF=BW,” Systems & Control Letters, vol. 55, no. 3, pp. 193–198, 2006.
5. T. S. Hu, Z. L. Lin, and J. Lam, “Unified gradient approach to performance optimization under a pole assignment constraint,” Journal of Optimization Theory and Applications, vol. 121, no. 2, pp. 361–383, 2004.
6. J. Lam and W. Y. Yan, “A gradient flow approach to the robust pole-placement problem,” International Journal of Robust and Nonlinear Control, vol. 5, no. 3, pp. 175–185, 1995.
7. J. Lam and W.-Y. Yan, “Pole assignment with optimal spectral conditioning,” Systems & Control Letters, vol. 29, no. 5, pp. 241–253, 1997.
8. J. Lam, W.-Y. Yan, and T. Hu, “Pole assignment with eigenvalue and stability robustness,” International Journal of Control, vol. 72, no. 13, pp. 1165–1174, 1999.
9. B. N. Datta, W.-W. Lin, and J.-N. Wang, “Robust partial pole assignment for vibrating systems with aerodynamic effects,” IEEE Transactions on Automatic Control, vol. 51, no. 12, pp. 1979–1984, 2006.
10. B. Zhou and G. R. Duan, “Parametric approach for the normal Luenberger function observer design in second-order linear systems,” in Proceedings of the 45th IEEE Conference on Decision and Control, pp. 1423–1428, San Diego, Calif, USA, 2006.
11. D. L. Chu and P. Van Dooren, “A novel numerical method for exact model matching problem with stability,” Automatica, vol. 42, no. 10, pp. 1697–1704, 2006.
12. D. L. Chu, H. C. Chan, and D. W. C. Ho, “Regularization of singular systems by derivative and proportional output feedback,” SIAM Journal on Matrix Analysis and Applications, vol. 19, no. 1, pp. 21–38, 1998.
13. D. L. Chu, V. Mehrmann, and N. K. Nichols, “Minimum norm regularization of descriptor systems by mixed output feedback,” Linear Algebra and Its Applications, vol. 296, no. 1–3, pp. 39–77, 1999.
14. D. L. Chu and V. Mehrmann, “Disturbance decoupling for descriptor systems by state feedback,” SIAM Journal on Control and Optimization, vol. 38, no. 6, pp. 1830–1858, 2000.
15. D. L. Chu and R. C. E. Tan, “Numerically reliable computing for the row by row decoupling problem with stability,” SIAM Journal on Matrix Analysis and Applications, vol. 23, no. 4, pp. 1143–1170, 2002.
16. G. H. Golub, S. Nash, and C. Van Loan, “A Hessenberg-Schur method for the problem AX-XB=C,” IEEE Transactions on Automatic Control, vol. 24, pp. 909–913, 1979.
17. R. H. Bartels and G. W. Stewart, “Algorithm 432: Solution of the matrix equation AX-XB=C,” Communications of the ACM, vol. 15, no. 9, pp. 820–826, 1972.
18. Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, “Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations,” Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 801–823, 2006.
19. Z.-Z. Bai, “On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations,” Journal of Computational Mathematics, vol. 29, no. 2, pp. 185–198, 2011.
20. A. Bouhamidi and K. Jbilou, “A note on the numerical approximate solutions for generalized Sylvester matrix equations with applications,” Applied Mathematics and Computation, vol. 206, no. 2, pp. 687–694, 2008.
21. A. Kaabi, F. Toutounian, and A. Kerayechian, “Preconditioned Galerkin and minimal residual methods for solving Sylvester equations,” Applied Mathematics and Computation, vol. 181, no. 2, pp. 1208–1214, 2006.
22. A. Kaabi, A. Kerayechian, and F. Toutounian, “A new version of successive approximations method for solving Sylvester matrix equations,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 638–645, 2007.
23. Y.-Q. Lin, “Implicitly restarted global FOM and GMRES for nonsymmetric matrix equations and Sylvester equations,” Applied Mathematics and Computation, vol. 167, no. 2, pp. 1004–1025, 2005.
24. Y. Lin, “Minimal residual methods augmented with eigenvectors for solving Sylvester equations and generalized Sylvester equations,” Applied Mathematics and Computation, vol. 181, no. 1, pp. 487–499, 2006.
25. D. Khojasteh Salkuyeh and F. Toutounian, “New approaches for solving large Sylvester equations,” Applied Mathematics and Computation, vol. 173, no. 1, pp. 9–18, 2006.
26. J.-J. Zhang, “A note on the iterative solutions of general coupled matrix equation,” Applied Mathematics and Computation, vol. 217, no. 22, pp. 9380–9386, 2011.
27. F. Ding and T. Chen, “Hierarchical gradient-based identification of multivariable discrete-time systems,” Automatica, vol. 41, no. 2, pp. 315–325, 2005.
28. F. Ding and T. Chen, “Gradient-based iterative algorithms for solving a class of matrix equations,” IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216–1221, 2005.
29. F. Ding and T. Chen, “Hierarchical identification of lifted state-space models for general dual-rate systems,” IEEE Transactions on Circuits and Systems. I. Regular Papers, vol. 52, no. 6, pp. 1179–1187, 2005.
30. J.-P. Chehab and M. Raydan, “An implicit preconditioning strategy for large-scale generalized Sylvester equations,” Applied Mathematics and Computation, vol. 217, no. 21, pp. 8793–8803, 2011.
31. F. Ding and T. Chen, “On iterative solutions of general coupled matrix equations,” SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 2269–2284, 2006.
32. F. Ding, P. X. Liu, and J. Ding, “Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle,” Applied Mathematics and Computation, vol. 197, no. 1, pp. 41–50, 2008.
33. A.-G. Wu, X. Zeng, G.-R. Duan, and W.-J. Wu, “Iterative solutions to the extended Sylvester-conjugate matrix equations,” Applied Mathematics and Computation, vol. 217, no. 1, pp. 130–142, 2010.
34. Q. Niu, X. Wang, and L.-Z. Lu, “A relaxed gradient based algorithm for solving Sylvester equations,” Asian Journal of Control, vol. 13, no. 3, pp. 461–464, 2011.
35. X. Wang, L. Dai, and D. Liao, “A modified gradient based algorithm for solving Sylvester equations,” Applied Mathematics and Computation, vol. 218, no. 9, pp. 5620–5628, 2012.
36. Y.-J. Xie and C.-F. Ma, “The accelerated gradient based iterative algorithm for solving a class of generalized Sylvester-transpose matrix equation,” Applied Mathematics and Computation, vol. 273, pp. 1257–1269, 2016.
37. Y. Xie and C. Ma, “Iterative methods to solve the generalized coupled Sylvester-conjugate matrix equations for obtaining the centrally symmetric (centrally antisymmetric) matrix solutions,” Journal of Applied Mathematics, vol. 2014, Article ID 515816, 17 pages, 2014.
38. Y. Xie, N. Huang, and C. Ma, “Iterative method to solve the generalized coupled Sylvester-transpose linear matrix equations over reflexive or anti-reflexive matrix,” Computers & Mathematics with Applications, vol. 67, no. 11, pp. 2071–2084, 2014.