Mathematical Problems in Engineering

Volume 2017 (2017), Article ID 1624969, 8 pages

https://doi.org/10.1155/2017/1624969

## The Relaxed Gradient Based Iterative Algorithm for the Symmetric (Skew Symmetric) Solution of the Sylvester Equation

^{1}School of Information and Computer, Anhui Agricultural University, Hefei 230036, China

^{2}School of Mathematics and Statistics, Fuyang Normal College, Anhui 236037, China

Correspondence should be addressed to Xingping Sheng

Received 26 February 2017; Accepted 23 March 2017; Published 3 April 2017

Academic Editor: Jean Jacques Loiseau

Copyright © 2017 Xiaodan Zhang and Xingping Sheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In this paper, we present two different relaxed gradient based iterative (RGI) algorithms for computing the symmetric and skew symmetric solutions of the Sylvester matrix equation $AX + XB = C$. Using these two iterative methods, it is proved that the iterative solution converges to the true symmetric (skew symmetric) solution under appropriate assumptions for any initial symmetric (skew symmetric) matrix. Finally, two numerical examples are given to illustrate the efficiency of the introduced iterative algorithms.

#### 1. Introduction

For the convenience of our statements, the following notation is used throughout the paper: $\mathbb{R}^{m\times n}$ represents the set of $m\times n$ real matrices. For $A \in \mathbb{R}^{m\times n}$, we write $A^T$, $R(A)$, $\operatorname{tr}(A)$, $\rho(A)$, $\lambda_{\max}(A)$, $\lambda_{\min}(A)$, $\|A\|_2$, and $\|A\|_F$ to denote the transpose, the range space, the trace, the spectral radius, the maximal eigenvalue, the minimal eigenvalue, the spectral norm, and the Frobenius norm of a matrix $A$, respectively; that is, $\|A\|_2 = \sqrt{\lambda_{\max}(A^TA)}$ and $\|A\|_F = \sqrt{\operatorname{tr}(A^TA)}$. $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the maximal singular value and the minimal nonzero singular value of $A$. The symbol $I_n$ represents the identity matrix of order $n$, $\mathbf{1}_{m\times n}$ is an $m\times n$ matrix whose elements are all 1, and $\operatorname{cond}(A) = \sigma_{\max}(A)/\sigma_{\min}(A)$ is the condition number of the matrix $A$. The inner product in the space $\mathbb{R}^{m\times n}$ is defined as $\langle A, B\rangle = \operatorname{tr}(B^TA)$; particularly, $\|A\|_F^2 = \langle A, A\rangle$. $\otimes$ denotes the Kronecker product, defined as $A \otimes B = (a_{ij}B)$ for $A \in \mathbb{R}^{m\times n}$ and $B \in \mathbb{R}^{p\times q}$. For any matrix $X = (x_1, x_2, \ldots, x_n)$, the vector operator is defined as $\operatorname{vec}(X) = (x_1^T, x_2^T, \ldots, x_n^T)^T$. Using the vector operator and the Kronecker product, we have $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$.

Consider the symmetric (skew symmetric) solution of the Sylvester matrix equation
$$AX + XB = C, \tag{1}$$
where $A, B, C \in \mathbb{R}^{n\times n}$ are given and $X \in \mathbb{R}^{n\times n}$ is the unknown matrix.

The Sylvester matrix equation (1) has many applications in linear system theory, for example, pole/eigenstructure assignment [1–4], robust pole assignment [5–8], robust partial pole assignment [9], observer design [10], model matching problem [11], regularization of descriptor systems [12, 13], disturbance decoupling problem [14], and noninteracting control [15].

As is well known, (1) has a unique solution if and only if $A$ and $-B$ possess no common eigenvalues [16], and the solution can be computed by solving the linear system $(I_n \otimes A + B^T \otimes I_n)\operatorname{vec}(X) = \operatorname{vec}(C)$. However, this method greatly increases the computational cost and storage requirements, so the approach is applicable only to small-sized Sylvester equations.
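As a concrete illustration of this Kronecker-product formulation, the following sketch (Python with NumPy; the matrices $A$, $B$, $C$ are illustrative choices, not data from the paper) solves (1) directly:

```python
import numpy as np

# Illustrative data: A and -B share no eigenvalues, so (1) has a unique solution.
A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[2.0, 0.0], [1.0, 5.0]])
C = np.array([[10.0, 21.0], [13.0, 24.0]])

n = A.shape[0]
# vec(AX + XB) = (I ⊗ A + B^T ⊗ I) vec(X), with column-stacking vec.
K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(K, C.flatten(order="F"))  # order="F" stacks columns
X = x.reshape((n, n), order="F")

print(np.linalg.norm(A @ X + X @ B - C))  # residual is essentially zero
```

Note that this direct approach forms an $n^2 \times n^2$ coefficient matrix, so its cost and storage grow quickly with $n$; this is exactly the drawback that motivates the iterative methods reviewed next.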

Due to these drawbacks, many other methods for the solution have appeared in the literature. The idea of transforming the coefficient matrices into Schur or Hessenberg form to solve (1) has been presented in [16, 17]. When the linear matrix equation (1) is inconsistent, a finite iterative method for computing its Hermitian minimum norm solutions has been presented in [18]. An efficient iterative method based on Hermitian and skew Hermitian splitting has been proposed in [19]. Krylov subspace based methods have been presented in [20–26] for solving Sylvester equations and generalized Sylvester equations. Recently, based on the idea of a hierarchical identification principle [27–29], some efficient gradient based iterative algorithms for solving generalized Sylvester equations and coupled (general coupled) Sylvester equations have been proposed in [27, 30–32]. In particular, for Sylvester equations of form (1), it is shown in [33] that the unknown matrix to be identified can be computed by a gradient based iterative algorithm. The convergence properties of these methods are investigated in [27, 32]. Niu et al. [34] proposed a relaxed gradient based iterative algorithm for solving Sylvester equations. Wang et al. [35] proposed a modified gradient based iterative algorithm for solving Sylvester equation (1). More recently, Xie and Ma [36] gave an accelerated gradient based iterative algorithm for solving (1). In [37, 38] Xie et al. studied special structured solutions of matrix equation (1) by iterative methods.

In this paper, inspired by [28, 34–36], we first derive a relaxed gradient based iterative (RGI) algorithm for computing the symmetric solution of matrix equation (1). Theoretical analysis shows that our method converges to the exact symmetric solution for any initial value under appropriate assumptions. The proposed algorithm is then extended to the skew symmetric solution of matrix equation (1). Numerical results illustrate that the proposed method is correct and feasible. We point out that the ideas in this paper differ in several respects from those in [28, 34–36].

The rest of the paper is organized as follows. In Section 2, some preliminaries are provided. In Section 3, the relaxed gradient based iterative methods are studied. Finally, numerical examples are included to verify the superior convergence of the algorithms.

#### 2. Preliminaries

In this section, we review the ideas and principles of the gradient based iterative (GI) method, the relaxed gradient based iterative (RGI) method, and the modified gradient based iterative (MGI) method.

Let $\mu > 0$ be the convergence factor (step size). The gradient based iterative method for the matrix equation $AX = F$ is as follows:
$$X(k) = X(k-1) + \mu A^T\bigl(F - AX(k-1)\bigr). \tag{2}$$

The convergence of the gradient based iterative method is stated as follows.

Lemma 1 (see [32]). *Assume that the matrix $A$ has full column rank and $0 < \mu < 2/\lambda_{\max}(A^TA)$; then the gradient based iterative sequence $\{X(k)\}$ in (2) converges to the solution $X^*$; that is, $\lim_{k\to\infty}X(k) = X^*$, or the error $X(k)-X^*$ converges to zero for any initial value $X(0)$. Moreover, the fastest convergence is obtained for the step size $\mu_0 = 2/\bigl(\lambda_{\max}(A^TA) + \lambda_{\min}(A^TA)\bigr)$. In this case, the error satisfies $\|X(k)-X^*\|_F \le \bigl(\tfrac{\operatorname{cond}^2(A)-1}{\operatorname{cond}^2(A)+1}\bigr)^{k}\|X(0)-X^*\|_F$.*

In [28], Ding and Chen presented the following algorithm based on gradient for solving (1).

*Algorithm 2 (see [28] (the gradient based iterative (GI) algorithm)).*
*Step 1*. Input matrices $A$, $B$, and $C$; give any small positive number $\varepsilon$. Choose the initial matrices $X_1(0)$ and $X_2(0)$. Compute $X(0) = \bigl(X_1(0) + X_2(0)\bigr)/2$. Set $k := 1$.
*Step 2*. If $\|C - AX(k-1) - X(k-1)B\|_F < \varepsilon$, stop; otherwise, go to Step 3.
*Step 3*. Update the sequences
$$X_1(k) = X(k-1) + \mu A^T\bigl(C - AX(k-1) - X(k-1)B\bigr),$$
$$X_2(k) = X(k-1) + \mu\bigl(C - AX(k-1) - X(k-1)B\bigr)B^T,$$
$$X(k) = \frac{X_1(k) + X_2(k)}{2}.$$
*Step 4*. Set $k := k+1$; return to Step 2.

The authors of [28] also pointed out that if the convergence factor $\mu$ is chosen in $\bigl(0,\ 2/(\lambda_{\max}(AA^T) + \lambda_{\max}(B^TB))\bigr)$, Algorithm 2 will converge to the exact solution of (1).
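As a rough illustration of the GI idea, the two half-updates and their average can be sketched as follows (Python with NumPy; the example matrices and the conservative step size $\mu = 1/\bigl(\sigma_{\max}^2(A) + \sigma_{\max}^2(B)\bigr)$ are assumptions for the demonstration, not data from the paper):

```python
import numpy as np

def gi_sylvester(A, B, C, mu, tol=1e-8, max_iter=20000):
    """Sketch of the GI iteration for A X + X B = C."""
    X = np.zeros_like(C)
    for _ in range(max_iter):
        R = C - A @ X - X @ B       # residual of the current iterate
        X1 = X + mu * (A.T @ R)     # half-update driven by A
        X2 = X + mu * (R @ B.T)     # half-update driven by B
        X = 0.5 * (X1 + X2)         # average the two half-updates
        if np.linalg.norm(C - A @ X - X @ B) < tol:
            break
    return X

A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[2.0, 0.0], [1.0, 5.0]])
C = np.array([[10.0, 21.0], [13.0, 24.0]])
# Step size safely below the 2/(λmax(AAᵀ) + λmax(BᵀB)) bound.
mu = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)
X = gi_sylvester(A, B, C, mu)
print(np.linalg.norm(A @ X + X @ B - C))
```

In vectorized form this iteration is a Richardson (gradient descent) step on the normal equations of $(I\otimes A + B^T\otimes I)\operatorname{vec}(X) = \operatorname{vec}(C)$, which is why the step-size bound above guarantees convergence.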

Niu et al. [34] gave a relaxed gradient based iterative algorithm for solving (1). When the relaxation factor $\omega$ is in $(0,1)$, the following algorithm has been proven to be convergent.

*Algorithm 3 (see [34] (the relaxed gradient based iterative (RGI) algorithm)).*
*Step 1*. Input matrices $A$, $B$, and $C$; give any small positive number $\varepsilon$ and an appropriate relaxation factor $\omega \in (0,1)$. Choose the initial matrices $X_1(0)$ and $X_2(0)$. Compute $X(0) = \omega X_1(0) + (1-\omega)X_2(0)$. Set $k := 1$.
*Step 2*. If $\|C - AX(k-1) - X(k-1)B\|_F < \varepsilon$, stop; otherwise, go to Step 3.
*Step 3*. Update the sequences
$$X_1(k) = X(k-1) + \mu A^T\bigl(C - AX(k-1) - X(k-1)B\bigr),$$
$$X_2(k) = X(k-1) + \mu\bigl(C - AX(k-1) - X(k-1)B\bigr)B^T,$$
$$X(k) = \omega X_1(k) + (1-\omega)X_2(k).$$
*Step 4*. Set $k := k+1$; return to Step 2.

Recently, in [35] Wang et al. proposed a modified gradient based iterative (MGI) algorithm to solve (1). The main difference is that, in the step of computing $X_2(k)$, the latest approximate solution $X_1(k)$ is used fully to update $X_2(k)$.

*Algorithm 4 (see [35] (the modified gradient based iterative (MGI) algorithm)).*
*Step 1*. Input matrices $A$, $B$, and $C$; give any small positive number $\varepsilon$ and an appropriate positive number $\mu$. Choose the initial matrices $X_1(0)$ and $X_2(0)$. Compute $X(0) = \bigl(X_1(0) + X_2(0)\bigr)/2$. Set $k := 1$.
*Step 2*. If $\|C - AX(k-1) - X(k-1)B\|_F < \varepsilon$, stop; otherwise, go to Step 3.
*Step 3*. Update the sequence
$$X_1(k) = X(k-1) + \mu A^T\bigl(C - AX(k-1) - X(k-1)B\bigr).$$
*Step 4*. Compute the intermediate residual
$$R_1(k) = C - AX_1(k) - X_1(k)B.$$
*Step 5*. Update the sequence
$$X_2(k) = X_1(k) + \mu R_1(k)B^T.$$
*Step 6*. Compute
$$X(k) = \frac{X_1(k) + X_2(k)}{2}.$$
*Step 7*. Set $k := k+1$; return to Step 2.

More recently, Xie and Ma [36] presented the following AGBI algorithm for solving (1) based on the idea of MGI.

*Algorithm 5 (see [36] (the accelerated gradient based iterative (AGBI) algorithm)).*
*Step 1*. Input matrices $A$, $B$, and $C$; give any small positive number $\varepsilon$ and an appropriate relaxation factor $\omega \in (0,1)$. Choose the initial matrices $X_1(0)$ and $X_2(0)$. Compute $X(0) = \omega X_1(0) + (1-\omega)X_2(0)$. Set $k := 1$.
*Step 2*. If $\|C - AX(k-1) - X(k-1)B\|_F < \varepsilon$, stop; otherwise, go to Step 3.
*Step 3*. Update the sequence
$$X_1(k) = X(k-1) + \mu A^T\bigl(C - AX(k-1) - X(k-1)B\bigr).$$
*Step 4*. Compute the intermediate residual
$$R_1(k) = C - AX_1(k) - X_1(k)B.$$
*Step 5*. Update the sequence
$$X_2(k) = X_1(k) + \mu R_1(k)B^T.$$
*Step 6*. Compute
$$X(k) = \omega X_1(k) + (1-\omega)X_2(k).$$
*Step 7*. Set $k := k+1$; return to Step 2.

#### 3. Main Results

In this section, we first study necessary and sufficient conditions for the existence of the symmetric solution of (1). Then a relaxed gradient based iterative algorithm for the symmetric solution of (1) is proposed. Following the same line, a relaxed gradient based iterative algorithm for the skew symmetric solution of (1) is also presented.

Theorem 6. *The matrix equation (1) has a unique symmetric solution $\bar{X}$ if and only if the following pair of matrix equations
$$AX + XB = C, \qquad B^TX + XA^T = C^T \tag{15}$$
has a unique common solution $X^*$, and $\bar{X} = \bigl(X^* + (X^*)^T\bigr)/2$.*

*Proof.* If $\bar{X}$ is the unique symmetric solution of (1), then $A\bar{X} + \bar{X}B = C$ and $\bar{X} = \bar{X}^T$; further we have
$$B^T\bar{X} + \bar{X}A^T = \bigl(\bar{X}^TB + A\bar{X}^T\bigr)^T = \bigl(A\bar{X} + \bar{X}B\bigr)^T = C^T.$$
This shows that $\bar{X}$ is also a solution of the pair of matrix equations (15).

Conversely, if the system of matrix equations (15) has a common solution $X^*$, let us denote $\bar{X} = \bigl(X^* + (X^*)^T\bigr)/2$; then we can check that
$$A\bar{X} + \bar{X}B = \frac{1}{2}\bigl(AX^* + X^*B\bigr) + \frac{1}{2}\bigl(A(X^*)^T + (X^*)^TB\bigr) = \frac{1}{2}C + \frac{1}{2}\bigl(B^TX^* + X^*A^T\bigr)^T = C.$$
This implies that $\bar{X} = \bar{X}^T$ is the unique symmetric solution of (1).

According to Theorem 6, if the unique common solution $X^*$ of equations (15) can be obtained, then the unique symmetric solution of (1) is $\bar{X} = \bigl(X^* + (X^*)^T\bigr)/2$.

Based on Theorem 6, we construct a relaxed gradient based iterative algorithm to compute the symmetric solution of (1).

*Algorithm 7 (the relaxed gradient based iterative (RGI) algorithm for the symmetric solution of (1)).*
*Step 1*. Input matrices $A$, $B$, and $C$; give any small positive number $\varepsilon$ and appropriate positive numbers $\mu$ and $\omega$ such that $0 < \omega < 1$. Choose any initial matrix $X(0)$. Set $k := 1$.
*Step 2*. If $\|C - AX(k-1) - X(k-1)B\|_F < \varepsilon$, stop; otherwise, go to Step 3.
*Step 3*. Update the sequences
$$R_1(k-1) = C - AX(k-1) - X(k-1)B,$$
$$R_2(k-1) = C^T - B^TX(k-1) - X(k-1)A^T,$$
$$X_1(k) = X(k-1) + \mu\bigl(A^TR_1(k-1) + R_1(k-1)B^T\bigr),$$
$$X_2(k) = X(k-1) + \mu\bigl(BR_2(k-1) + R_2(k-1)A\bigr),$$
$$X(k) = \omega X_1(k) + (1-\omega)X_2(k).$$
*Step 4*. Set $k := k+1$; return to Step 2.
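A minimal sketch of the symmetric-solution idea, not necessarily the authors' exact update, combines a relaxed GI step with symmetrization of each iterate (Python with NumPy; the matrices, the step size, and the symmetrization step are illustrative assumptions):

```python
import numpy as np

def rgi_symmetric(A, B, C, mu, omega=0.5, tol=1e-8, max_iter=20000):
    """Illustrative sketch: relaxed GI step, then symmetrize each iterate."""
    n = A.shape[0]
    X = np.zeros((n, n))
    for _ in range(max_iter):
        R = C - A @ X - X @ B                  # residual of current iterate
        X1 = X + mu * (A.T @ R)                # A-side correction
        X2 = X + mu * (R @ B.T)                # B-side correction
        X = omega * X1 + (1.0 - omega) * X2    # relaxed combination
        X = 0.5 * (X + X.T)                    # project onto symmetric matrices
        if np.linalg.norm(C - A @ X - X @ B) < tol:
            break
    return X

# Built so the unique solution [[1,2],[2,3]] is symmetric.
A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[2.0, 0.0], [1.0, 5.0]])
C = np.array([[10.0, 21.0], [13.0, 24.0]])
mu = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)
X = rgi_symmetric(A, B, C, mu)
print(np.linalg.norm(A @ X + X @ B - C), np.linalg.norm(X - X.T))
```

With $\omega = 1/2$ the combination reduces to the GI averaging step, and the symmetrization is the orthogonal (Frobenius) projection onto the symmetric subspace, so the sketch is a projected gradient iteration that stays symmetric throughout.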

In the following paragraph, we will investigate the convergence of Algorithm 7.

Theorem 8. *Assume that the matrix equations (15) have a unique common solution $X^*$; then the iterative sequence $\{X(k)\}$ generated by Algorithm 7 converges to $X^*$ for an appropriately chosen convergence factor $\mu$; that is, $\lim_{k\to\infty}X(k) = X^*$, or the error $X(k) - X^*$ converges to zero for any initial value $X(0)$.*

Further, the sequence $\bar{X}(k) = \bigl(X(k) + X(k)^T\bigr)/2$ converges to $\bar{X}$, where $\bar{X}$ is the unique symmetric solution of (1).

*Proof.* Define the error matrix $E(k) = X(k) - X^*$. Substituting the updates of Algorithm 7 into this definition and using the fact that $X^*$ solves both equations in (15), one can express $E(k)$ in terms of $E(k-1)$ and bound its Frobenius norm. For an appropriately chosen convergence factor $\mu$, the resulting inequalities show that $\|E(k)\|_F$ is monotonically decreasing and that $\sum_{k=1}^{\infty}\|E(k)\|_F^2 < \infty$, which implies $\lim_{k\to\infty}E(k) = 0$; that is, $X(k)$ converges to $X^*$ for any initial value $X(0)$.

From Theorem 6 and the above limit, we conclude that $\bar{X}(k) = \bigl(X(k) + X(k)^T\bigr)/2$ converges to the unique symmetric solution $\bar{X}$ of (1).

Following the same line, the idea of Algorithm 7 can be extended to solve the skew symmetric solution of (1). First, we need the following theorem.

Theorem 9. *The matrix equation (1) has a unique skew symmetric solution $\tilde{X}$ if and only if the following pair of matrix equations
$$AX + XB = C, \qquad B^TX + XA^T = -C^T \tag{30}$$
has a unique common solution $X^*$, and $\tilde{X} = \bigl(X^* - (X^*)^T\bigr)/2$.*

The relaxed gradient based iterative algorithm for solving the skew symmetric solution of (1) can be stated as follows.

*Algorithm 10 (the relaxed gradient based iterative (RGI) algorithm for the skew symmetric solution of (1)).*
*Step 1*. Input matrices $A$, $B$, and $C$; give any small positive number $\varepsilon$ and appropriate positive numbers $\mu$ and $\omega$ such that $0 < \omega < 1$. Choose any initial matrix $X(0)$. Set $k := 1$.
*Step 2*. If $\|C - AX(k-1) - X(k-1)B\|_F < \varepsilon$, stop; otherwise, go to Step 3.
*Step 3*. Update the sequences
$$R_1(k-1) = C - AX(k-1) - X(k-1)B,$$
$$R_2(k-1) = -C^T - B^TX(k-1) - X(k-1)A^T,$$
$$X_1(k) = X(k-1) + \mu\bigl(A^TR_1(k-1) + R_1(k-1)B^T\bigr),$$
$$X_2(k) = X(k-1) + \mu\bigl(BR_2(k-1) + R_2(k-1)A\bigr),$$
$$X(k) = \omega X_1(k) + (1-\omega)X_2(k).$$
*Step 4*. Set $k := k+1$; return to Step 2.
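Analogously to the symmetric case, a hedged sketch for the skew symmetric case replaces the symmetrization with the skew symmetric projection $(X - X^T)/2$ (Python with NumPy; matrices and parameters are illustrative, and this is not claimed to be the authors' exact Algorithm 10):

```python
import numpy as np

def rgi_skew(A, B, C, mu, tol=1e-8, max_iter=20000):
    """Illustrative sketch: averaged GI step, then skew-symmetrize each iterate."""
    n = A.shape[0]
    X = np.zeros((n, n))
    for _ in range(max_iter):
        R = C - A @ X - X @ B                  # residual of current iterate
        X = X + 0.5 * mu * (A.T @ R + R @ B.T) # averaged gradient step
        X = 0.5 * (X - X.T)                    # project onto skew symmetric matrices
        if np.linalg.norm(C - A @ X - X @ B) < tol:
            break
    return X

# Built so the unique solution [[0,1],[-1,0]] is skew symmetric.
A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[2.0, 0.0], [1.0, 5.0]])
C = np.array([[0.0, 9.0], [-5.0, 0.0]])
mu = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)
X = rgi_skew(A, B, C, mu)
print(np.linalg.norm(A @ X + X @ B - C), np.linalg.norm(X + X.T))
```

Since the skew symmetric matrices also form a subspace, the same projected-gradient argument as in the symmetric sketch applies, and the iterate is exactly skew symmetric at every step.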

Similarly, we have the following theorem to ensure the convergence of Algorithm 10.

Theorem 11. *Assume that the matrix equations (30) have a unique common solution $X^*$; then the iterative sequence $\{X(k)\}$ generated by Algorithm 10 converges to $X^*$ for an appropriately chosen convergence factor $\mu$; that is, $\lim_{k\to\infty}X(k) = X^*$, or the error $X(k) - X^*$ converges to zero for any initial value $X(0)$.*

Furthermore, the sequence $\bigl(X(k) - X(k)^T\bigr)/2$ converges to $\tilde{X}$, where $\tilde{X}$ is the unique skew symmetric solution of (1).

#### 4. Numerical Examples

In this section, two numerical examples are used to show the efficiency of the RGI method. All computations were performed on an Intel® Core™ i7-4500U CPU (1.80 GHz, up to 2.40 GHz) using MATLAB 7.0. The Frobenius norm of the absolute error matrix is defined as $\delta(k) = \|X(k) - X^*\|_F$, where $X(k)$ is the $k$th iterate of the RGI method.

*Example 1.* In matrix equation (1), we choose the coefficient matrices $A$, $B$, and $C$ as follows.

It is easy to show that the matrix equation (1) is consistent and has a unique symmetric solution. By direct computation, the unique symmetric solution is given as follows.

Applying the RGI method (Algorithm 7) to compute the symmetric solution of (1), the sum $\lambda_{\max}(AA^T) + \lambda_{\max}(B^TB)$ is 962.2175. For three choices of the parameters $\mu$ and $\omega$, the iterative errors versus the iteration number $k$ are shown in Figure 1.