Research Article | Open Access

Yajun Xie, Changfeng Ma, "Iterative Methods to Solve the Generalized Coupled Sylvester-Conjugate Matrix Equations for Obtaining the Centrally Symmetric (Centrally Antisymmetric) Matrix Solutions", *Journal of Applied Mathematics*, vol. 2014, Article ID 515816, 17 pages, 2014. https://doi.org/10.1155/2014/515816

# Iterative Methods to Solve the Generalized Coupled Sylvester-Conjugate Matrix Equations for Obtaining the Centrally Symmetric (Centrally Antisymmetric) Matrix Solutions

**Academic Editor:** Sazzad Hossien Chowdhury

#### Abstract

An iterative method is presented for obtaining the centrally symmetric (centrally antisymmetric) matrix pair solutions of the generalized coupled Sylvester-conjugate matrix equations. On the condition that the coupled matrix equations are consistent, we show that the solution pair can be obtained within finitely many iteration steps, in the absence of round-off errors, for any given centrally symmetric (centrally antisymmetric) initial matrix. Moreover, by choosing an appropriate initial value, we can obtain the least Frobenius norm solution of the generalized coupled Sylvester-conjugate linear matrix equations. Finally, some numerical examples are given to illustrate that the proposed iterative method is quite efficient.

#### 1. Introduction

Many research papers have been devoted to systems of matrix equations ([1–33]). The following matrix equation is a special case of the coupled Sylvester linear matrix equations. In [34], an iterative algorithm was constructed to solve (1) for a skew-symmetric matrix. Navarra et al. studied a representation of the general solution of the matrix equations [35]. Using the Moore-Penrose generalized inverse, necessary and sufficient conditions for the existence of a solution, together with expressions for it, were obtained in [36]. Deng et al. gave consistency conditions and general expressions for the Hermitian solutions of (1) [37]. In addition, by extending the well-known Jacobi and Gauss-Seidel iterations, Ding et al. obtained iterative solutions of matrix equation (1) and of the generalized Sylvester matrix equation [38]. Closed-form solutions to a family of generalized Sylvester matrix equations were given by utilizing the so-called Kronecker matrix polynomials in [39]. In recent years, Dehghan and Hajarian considered the generalized coupled Sylvester matrix equations [40] and presented a modified conjugate gradient method to solve them over a generalized bisymmetric matrix pair. Liang and Liu proposed a modified conjugate gradient method to solve the following problem [41]:

In the present paper, we develop an efficient algorithm to solve the following generalized coupled Sylvester-conjugate linear matrix equations for a centrally symmetric (centrally antisymmetric) matrix pair, where the coefficient matrices are given constant matrices and the matrix pair is unknown and to be determined. For particular choices of the coefficient matrices, problem (4) reduces to the problem studied in [42], to the Yakubovich-conjugate matrix equation investigated in [43], and to the equations considered in [44], [45], and [46].

It is well known that the conjugate gradient (CG) method is among the most popular iterative methods for solving a system of linear equations (5), where the coefficient matrix and the right-hand side vector are given and the vector of unknowns is to be determined. By the definition of the Kronecker product, matrix equations can be transformed into the system (5), and CG-type iterations can then be applied to various linear matrix equations [44, 45]. Based on this idea, in this paper we propose a modified conjugate gradient method to solve the system (4) and show that a solution pair can be obtained within finitely many iteration steps, in the absence of round-off errors, for any given centrally symmetric (centrally antisymmetric) initial matrix. Furthermore, by choosing an appropriate initial matrix pair, we can obtain the least Frobenius norm solution of (4).
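The displayed equations are omitted in this copy, so as a small, hypothetical illustration of the Kronecker-product reduction described above, the following sketch vectorizes the conjugate equation A X + B X̄ = F (a simple special case of this family of equations) by splitting X into real and imaginary parts, which yields a real linear system in (vec Y, vec Z); the matrices here are randomly generated, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = 0.1 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
X_true = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
F = A @ X_true + B @ np.conj(X_true)

# Write X = Y + iZ.  Splitting A X + B conj(X) = F into real and
# imaginary parts gives the real system
#   (Re A + Re B) Y + (Im B - Im A) Z = Re F
#   (Im A + Im B) Y + (Re A - Re B) Z = Im F
# and vec(M Y) = (I kron M) vec(Y) for column-stacking vec.
I = np.eye(n)
M = np.block([
    [np.kron(I, A.real + B.real), np.kron(I, B.imag - A.imag)],
    [np.kron(I, A.imag + B.imag), np.kron(I, A.real - B.real)],
])
rhs = np.concatenate([F.real.ravel(order="F"), F.imag.ravel(order="F")])
yz = np.linalg.solve(M, rhs)
Y = yz[:n * n].reshape((n, n), order="F")
Z = yz[n * n:].reshape((n, n), order="F")
X = Y + 1j * Z
residual = np.linalg.norm(A @ X + B @ np.conj(X) - F)
```

The size of the real system is already 2n² even for this one-term equation, which is exactly the dimensionality growth the paper's matrix-form iteration is designed to avoid.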

For convenience, we collect here some notation used throughout the paper.

is the set of complex matrices and is the set of all real matrices. For a complex matrix, we write , , , , , , , and to denote the real part, the imaginary part, the conjugate, the transpose, the conjugate transpose, the inverse, the Frobenius norm, and the column space of the matrix, respectively. denotes the block diagonal matrix, where . For any , , denotes the Kronecker product defined as . For the matrix , denotes the vec operator defined as . We use to denote the identity matrix of the size implied by the context.
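As a quick numerical check of the vec/Kronecker machinery just introduced, the following snippet verifies the standard identity vec(AXB) = (Bᵀ ⊗ A) vec(X) for the column-stacking vec operator (the matrix sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

def vec(M):
    # Column-stacking vectorization (Fortran order), as in the paper.
    return M.ravel(order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
```

This identity is what turns any linear matrix equation into the vector form (5).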

*Definition 1. *Let and , where denotes the column vector whose element is 1 and whose other elements are zero. A complex matrix is said to be centrally symmetric (centrally antisymmetric) if ; the set of all centrally symmetric (centrally antisymmetric) matrices is denoted by .
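Definition 1's conditions can be tested numerically. Assuming the usual exchange matrix S (with columns e_n, …, e_1) and the standard characterizations S A S = A for centrally symmetric and S A S = −A for centrally antisymmetric matrices, a sketch:

```python
import numpy as np

n = 4
S = np.fliplr(np.eye(n))  # exchange matrix with columns e_n, ..., e_1

rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))
C = (A + S @ A @ S) / 2   # centrally symmetric part of A
K = (A - S @ A @ S) / 2   # centrally antisymmetric part of A

is_centro = np.allclose(S @ C @ S, C)       # S C S =  C
is_anticentro = np.allclose(S @ K @ S, -K)  # S K S = -K
```

Since S² = I, every matrix splits uniquely as C + K, which is why the two classes in Definition 1 can be treated by parallel versions of the same algorithm.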

The rest of this paper is organized as follows. In Section 2, we construct the modified conjugate gradient (MCG) method for solving the system (4) and show that a solution pair for (4) can be obtained by the MCG method within finitely many iteration steps, in the absence of round-off errors, for any given centrally symmetric (centrally antisymmetric) initial matrix. Furthermore, we demonstrate that the least Frobenius norm solution can be obtained by choosing a special kind of initial matrix. In Section 3, we give some numerical examples which illustrate that the proposed iterative algorithm is efficient. Conclusions are given in Section 4.

#### 2. The Iterative Method for Solving the Matrix Equations (4)

In this section, we present the modified conjugate gradient (MCG) method for solving the system (4). First, we recall the definition of the inner product introduced in [42].

The inner product in the space is defined as By Theorem 1 in [42], we know that the inner product defined by (6) satisfies the following three axioms:
(1) symmetry: ;
(2) linearity in the first argument: , where and are real constants;
(3) positive definiteness: for all .

For all real constants , by (1) and (2), we get namely, the inner product defined by (6) is linear in the second argument.

By the relation between the matrix trace and the conjugate operation, we get

The norm induced by this inner product is denoted by . Then for , we obtain

What is the relationship between this norm and the Frobenius norm? It is well known that is a Hermitian matrix, so its trace is real; hence . This shows that the induced norm coincides with the Frobenius norm. Another interesting relationship is that That is, .
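The displayed formula (6) is omitted in this copy; assuming the inner product of [42] has the common form ⟨A, B⟩ = Re tr(Bᴴ A), both the coincidence with the Frobenius norm and the symmetry axiom can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

def inner(U, V):
    # Assumed form of the inner product (6): Re tr(V^H U).
    return np.real(np.trace(V.conj().T @ U))

norm_A = np.sqrt(inner(A, A))       # norm induced by the inner product
frob_A = np.linalg.norm(A)          # Frobenius norm
sym_gap = inner(A, B) - inner(B, A) # should vanish (symmetry axiom)
```

Because tr(AᴴA) is always real and nonnegative, the induced norm and the Frobenius norm agree exactly, which is the identity used in the convergence analysis below.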

In the following, we present the algorithms. The ordinary conjugate gradient (CG) method for solving (5) is as follows [47].

*Algorithm 2 (CG method). *
Consider the following steps.
*Step 1*. Input and . Choose the initial vector and set ; calculate , .
*Step 2.* If or if and , stop; otherwise, calculate
*Step 3.* Update the sequences
*Step 4.* Set ; return to Step 2.
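Since Algorithm 2's update formulas are omitted in this copy, here is a minimal sketch of the classical CG iteration for M x = b, assuming M is symmetric positive definite; the stopping tolerance and the test matrix are illustrative choices, not from the paper.

```python
import numpy as np

def cg(M, b, tol=1e-10, maxit=1000):
    # Classical conjugate gradient for M x = b, M symmetric positive definite.
    x = np.zeros_like(b)
    r = b - M @ x          # residual
    p = r.copy()           # search direction
    rr = r @ r
    for _ in range(maxit):
        if np.sqrt(rr) < tol:
            break
        Mp = M @ p
        alpha = rr / (p @ Mp)   # exact line search step
        x += alpha * p
        r -= alpha * Mp
        rr_new = r @ r
        p = r + (rr_new / rr) * p  # conjugate direction update
        rr = rr_new
    return x

rng = np.random.default_rng(4)
n = 8
G = rng.standard_normal((n, n))
M = G @ G.T + n * np.eye(n)   # symmetric positive definite test matrix
b = rng.standard_normal(n)
x = cg(M, b)
residual = np.linalg.norm(M @ x - b)
```

In exact arithmetic, CG terminates in at most n steps, which is the finite-termination property the paper carries over to the matrix-form MCG iteration.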

It is known that the size of the linear system (5) becomes very large when (4) is transformed into (5) via the Kronecker product. Therefore, Algorithm 2 will consume considerably more computing time and memory space as the dimension of the coefficient matrix increases.

In view of these considerations, we construct the following so-called modified conjugate gradient (MCG) method to solve (4).

*Algorithm 3 (MCG method for the centrally symmetric case). *
Consider the following steps.
*Step 1.* Input matrices , , , , and , of appropriate dimensions. Choose the initial matrices and , as in Definition 1. Compute
set .
*Step 2.* If or , , stop; otherwise, go to Step 3.
*Step 3*. Update the sequences
where
*Step 4.* Set ; return to Step 2.

*Algorithm 4 (MCG method for the centrally antisymmetric case). *
Consider the following steps.
*Step 1.* Input matrices , , , , and , of appropriate dimensions. Choose the initial matrices and , as in Definition 1. Compute
set .
*Step 2.* If or , , stop; otherwise, go to Step 3.
*Step 3.* Update the sequences
where
*Step 4*. Set ; return to Step 2.
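The precise update formulas of Algorithms 3 and 4 are omitted in this copy, so the following is only a structural sketch of a CG-type iteration in matrix form: CGNR (CG applied to the normal equation L*L(X) = L*(C)) for the ordinary Sylvester equation A X + X B = C. The operator L and its adjoint are assumptions for this simplified example, not the authors' MCG updates.

```python
import numpy as np

def L(X, A, B):
    # Linear operator of the Sylvester equation A X + X B = C.
    return A @ X + X @ B

def Lt(R, A, B):
    # Adjoint of L under the trace inner product <U, V> = tr(V^T U).
    return A.T @ R + R @ B.T

def cgnr_sylvester(A, B, C, tol=1e-10, maxit=500):
    # CGNR in matrix form: CG on the normal equation L*L(X) = L*(C).
    X = np.zeros_like(C)
    R = C - L(X, A, B)     # residual in the original equation
    Z = Lt(R, A, B)        # residual of the normal equation
    P = Z.copy()           # search direction (a matrix, not a vector)
    zz = np.sum(Z * Z)
    for _ in range(maxit):
        W = L(P, A, B)
        alpha = zz / np.sum(W * W)
        X += alpha * P
        R -= alpha * W
        if np.linalg.norm(R) < tol:
            break
        Z = Lt(R, A, B)
        zz_new = np.sum(Z * Z)
        P = Z + (zz_new / zz) * P
        zz = zz_new
    return X

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + 3 * np.eye(n)  # shift for well-posedness
B = rng.standard_normal((n, n)) + 3 * np.eye(n)
X_true = rng.standard_normal((n, n))
C = A @ X_true + X_true @ B
X = cgnr_sylvester(A, B, C)
residual = np.linalg.norm(A @ X + X @ B - C)
```

The point of working in matrix form, as Algorithms 3 and 4 do, is that only matrix-matrix products of the original sizes are needed, never the n²-dimensional Kronecker system.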

Now, we will show that the sequence of matrix pairs generated by Algorithm 3 converges to a solution of (4) within finitely many iteration steps, in the absence of round-off errors, for any centrally symmetric (centrally antisymmetric) initial matrix.

Lemma 5. *Let the sequences , , , , , and be generated by Algorithm 3; then one has
*

*Proof. *By Algorithm 3 and (20), we get
In a similar way, we can get
This, together with the definition of the inner product, yields
Then by the update formulas of , , and , we obtain
which completes the proof.

Lemma 6. *Let the sequences , , , and be generated by Algorithm 3; then one has
*

*Proof. *Firstly, we prove
We use mathematical induction. For , by Lemma 5 and noting that , are generated by Algorithm 3, we get
where the second equality is from the fact
In addition, by (19), (20), and Lemma 5, we have
where the second equality is from (6) and the fact
Therefore, (27) holds for .

Suppose that (27) holds for . For , it follows from Lemma 5 and (9) that
where the fourth equality holds by the induction hypothesis. Combining (19), (20), and the above result with the induction hypothesis, we obtain
where the third equality is from Lemma 5.

For , by Lemma 5 and the induction, we have
Analogously, for , we obtain
In addition, from Lemma 5 and the induction, for , we get
So (27) holds for . By the induction principle, (27) holds for all . For , we obtain
which completes the proof.

Lemma 7. *Suppose that the system of matrix equations (4) is consistent, and let be an arbitrary solution pair of (4). Then for any initial matrices , , the sequences , , , , and generated by Algorithm 3 satisfy
*

*Proof. *The conclusion is accomplished by mathematical induction.

Firstly, we notice that the sequence of pairs , generated by Algorithm 3 consists of centrally symmetric matrices, since the initial matrix pair is centrally symmetric. Then for , it follows from Algorithm 3 that
In the same way, we can get
This shows that
That is, (38) holds for .

Assume that (38) holds for . For , it follows from the update formulas of , that
Then
On the other hand, we have
Therefore, by (20) we get