Abstract

We consider an iterative algorithm for solving a complex matrix equation with conjugate and transpose of two unknowns. With the iterative algorithm, the existence of a solution of this matrix equation can be determined automatically. When this matrix equation is consistent, a solution can be obtained for any initial matrices within a finite number of iteration steps in the absence of round-off errors. Some lemmas and theorems are stated and proved, from which the iterative solutions are derived. A numerical example is given to illustrate the effectiveness of the proposed method and to support the theoretical results of this paper.

1. Introduction

Consider the complex matrix equation (1), where the coefficient matrices are given, while two unknown matrices are to be determined. In the field of linear algebra, iterative algorithms for solving matrix equations have received much attention. Based on the iterative solutions of matrix equations, Ding and Chen presented hierarchical gradient iterative algorithms for general matrix equations [1, 2] and hierarchical least squares iterative algorithms for generalized coupled Sylvester matrix equations and general coupled matrix equations [3, 4]. The hierarchical gradient iterative algorithms [1, 2] and hierarchical least squares iterative algorithms [1, 4, 5] for solving general (coupled) matrix equations are innovative and computationally efficient numerical methods; they were proposed based on the hierarchical identification principle [3, 6], which regards the unknown matrix as the system parameter matrix to be identified. Iterative algorithms were also proposed for continuous and discrete Lyapunov matrix equations by applying the hierarchical identification principle [7]. Recently, the idea of hierarchical identification was utilized to solve the so-called extended Sylvester-conjugate matrix equations in [8]. From an optimization point of view, a gradient-based iteration was constructed in [9] to solve the general coupled matrix equation. A significant feature of the method in [9] is that a necessary and sufficient condition guaranteeing the convergence of the algorithm can be explicitly obtained.

Complex matrix equations have attracted attention from many researchers since it was shown in [10] that the consistency of such a matrix equation can be characterized by the consimilarity [11–13] of two partitioned matrices related to the coefficient matrices. By the consimilarity Jordan decomposition, explicit solutions were obtained in [10, 14]. Some explicit expressions of the solution to this kind of matrix equation were established in [15], where it was shown that the matrix equation has a unique solution if and only if the two coefficient matrices have no common eigenvalues. Solving linear matrix equations has been an active area of research for many years. For example, Navarra et al. studied a representation of the general common solution of a pair of linear matrix equations [16]; Van der Woude obtained conditions for the existence of a common solution of a family of matrix equations [17]; Bhimasankaram considered several linear matrix equations [18]. Mitra provided conditions for the existence of a solution and a representation of the general common solution of two pairs of matrix equations [19, 20]. Ramadan et al. [21] introduced a complete, general, and explicit solution to the Yakubovich matrix equation, and several important results on this matrix equation have been developed. In [22], necessary and sufficient conditions for solvability and an expression of the solution were derived by means of generalized inverses. Moreover, in [22] the least-squares solution was also obtained by using the generalized singular value decomposition. In [23], when this matrix equation is consistent, the minimum-norm solution was given by the use of the canonical correlation decomposition. In [24], based on the projection theorem in Hilbert space, an analytical expression of the least-squares solution was given by making use of the generalized singular value decomposition and the canonical correlation decomposition. In [25], by using the matrix rank method, a necessary and sufficient condition was derived for two matrix equations to have a common least-squares solution. In the aforementioned methods, the coefficient matrices of the considered equations must first be transformed into certain canonical forms. Recently, an iterative algorithm was presented in [26] to solve a related matrix equation. Different from the methods mentioned above, this algorithm works directly with the original coefficient matrices and provides a solution within a finite number of iteration steps for any initial values.

Very recently, in [27] a new operator of conjugate product for complex polynomial matrices was proposed. It was shown that an arbitrary complex polynomial matrix can be converted into the so-called Smith normal form by elementary transformations in the framework of the conjugate product. Meanwhile, the conjugate product and the Sylvester-conjugate sum were also proposed in [28]. Based on important properties of these new operators, a unified approach for solving a general class of Sylvester-polynomial-conjugate matrix equations was given, and the complete solution of the Sylvester-polynomial-conjugate matrix equation was obtained. In [29], by using a real inner product in complex matrix spaces, a solution can be obtained within a finite number of iteration steps for any initial values in the absence of round-off errors. In [30], iterative solutions to a class of complex matrix equations were given by applying the hierarchical identification principle.

This paper is organized as follows. First, in Section 2, we introduce some notation, a definition, and a theorem that will be needed to develop this work. In Section 3, we propose an iterative method to obtain a numerical solution of the complex matrix equation with conjugate and transpose of two unknowns. In Section 4, a numerical example is given to illustrate the simplicity and effectiveness of the presented method.

2. Preliminaries

The following notation, definition, and theorem will be used to develop the proposed work. We use $A^{T}$, $\overline{A}$, $A^{H}$, $\operatorname{tr}(A)$, and $\|A\|$ to denote the transpose, conjugate, conjugate transpose, trace, and Frobenius norm of a matrix $A$, respectively. We denote the set of all $m \times n$ complex matrices by $\mathbb{C}^{m \times n}$ and the real part of a complex number $\alpha$ by $\operatorname{Re}(\alpha)$.
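For readers who wish to reproduce the computations, the following short NumPy sketch illustrates this notation; the matrix used is an arbitrary illustration and is not taken from the paper.

```python
# Illustrative NumPy analogues of the notation used in this paper.
# The matrix A below is an arbitrary example, not one from the paper.
import numpy as np

A = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

A_T   = A.T                        # transpose A^T
A_bar = np.conj(A)                 # conjugate \bar{A}
A_H   = A.conj().T                 # conjugate transpose A^H
tr_A  = np.trace(A)                # trace tr(A)
nrm_A = np.linalg.norm(A, 'fro')   # Frobenius norm ||A||
re_tr = np.real(tr_A)              # real part Re(tr(A))
```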

Definition 1 (inner product [31]). A real inner product space is a vector space $V$ over the real field $\mathbb{R}$ together with an inner product, that is, a map $\langle \cdot , \cdot \rangle : V \times V \to \mathbb{R}$ satisfying the following three axioms for all vectors $x, y, z \in V$ and all scalars $a \in \mathbb{R}$: (1) symmetry: $\langle x, y \rangle = \langle y, x \rangle$; (2) linearity in the first argument: $\langle a x + z, y \rangle = a \langle x, y \rangle + \langle z, y \rangle$; (3) positive definiteness: $\langle x, x \rangle > 0$ for all $x \neq 0$.
Two vectors $x, y \in V$ are said to be orthogonal if $\langle x, y \rangle = 0$.
The following theorem defines a real inner product on the space $\mathbb{C}^{m \times n}$ over the field $\mathbb{R}$.

Theorem 2 (see [32]). In the space $\mathbb{C}^{m \times n}$ over the field $\mathbb{R}$, an inner product can be defined as $\langle A, B \rangle = \operatorname{Re}\left[\operatorname{tr}\left(B^{H} A\right)\right]$ for $A, B \in \mathbb{C}^{m \times n}$.
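As a quick sanity check, the following sketch evaluates this real inner product numerically and verifies the three axioms of Definition 1; the random matrices are purely illustrative.

```python
# Minimal check of the real inner product <A, B> = Re(tr(B^H A))
# on C^{m x n} viewed as a vector space over R.
import numpy as np

def real_inner(A, B):
    """Real inner product <A, B> = Re(tr(B^H A))."""
    return np.real(np.trace(B.conj().T @ A))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

# Axiom checks: symmetry, real-linearity in the first argument,
# and positive definiteness (the induced norm is the Frobenius norm).
assert np.isclose(real_inner(A, B), real_inner(B, A))
assert np.isclose(real_inner(2.0 * A + B, B),
                  2.0 * real_inner(A, B) + real_inner(B, B))
assert real_inner(A, A) > 0
assert np.isclose(real_inner(A, A), np.linalg.norm(A, 'fro') ** 2)
```

In particular, the norm induced by this inner product coincides with the Frobenius norm, which is why residual norms in the algorithm below can be measured with $\|\cdot\|$.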

3. The Main Result

In this section, we propose an iterative solution to the complex matrix equation with conjugate and transpose of two unknowns defined in (1), where the coefficient matrices are given, while two unknown matrices are to be determined.

The following finite iterative algorithm is presented to solve it.

Algorithm 3.
Step 1. Input the coefficient matrices of the matrix equation (1);
Step 2. Choose arbitrary initial matrices;
Step 3. Compute the initial residual and the initial search directions, and set the iteration index to one;
Step 4. If the stopping criterion is satisfied, then stop; else go to Step 5;
Step 5. Update the approximate solutions, the residual, and the search directions;
Step 6. If the stopping criterion is satisfied, then stop; else increase the iteration index by one and go to Step 5.
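To illustrate the general structure of such a finite-step, conjugate-gradient-type iteration, the following sketch solves a representative consistent equation of the same family, $A V B + C \overline{W} D = E$, with two unknowns $V$ and $W$. The representative equation, the coefficient names, and the tolerance are illustrative assumptions; the sketch is not the authors' exact Algorithm 3.

```python
# A minimal sketch of a conjugate-gradient-type iteration for a
# representative complex matrix equation  A V B + C conj(W) D = E
# with two unknown matrices V and W (illustrative, not the paper's (1)).
import numpy as np

def cg_type_solver(A, B, C, D, E, V0, W0, tol=1e-12, max_iter=500):
    """CG-type iteration for the illustrative equation A V B + C conj(W) D = E."""
    op   = lambda V, W: A @ V @ B + C @ np.conj(W) @ D
    # Adjoints of the two terms with respect to the real inner product Re(tr(.))
    adjV = lambda R: A.conj().T @ R @ B.conj().T
    adjW = lambda R: np.conj(C.conj().T @ R @ D.conj().T)

    V, W = V0.copy(), W0.copy()
    R = E - op(V, W)                          # initial residual
    P, Q = adjV(R), adjW(R)                   # initial search directions
    history = [np.linalg.norm(R, 'fro')]
    for _ in range(max_iter):
        if history[-1] < tol:                 # stopping criterion on ||R(k)||
            break
        alpha = history[-1] ** 2 / (np.linalg.norm(P, 'fro') ** 2
                                    + np.linalg.norm(Q, 'fro') ** 2)
        V, W = V + alpha * P, W + alpha * Q   # update the approximate solutions
        R_new = R - alpha * op(P, Q)          # update the residual
        beta = (np.linalg.norm(R_new, 'fro') / history[-1]) ** 2
        P = adjV(R_new) + beta * P            # update the search directions
        Q = adjW(R_new) + beta * Q
        R = R_new
        history.append(np.linalg.norm(R, 'fro'))
    return V, W, history
```

In exact arithmetic, iterations of this type terminate after finitely many steps whenever the equation is consistent, which mirrors the behavior established for Algorithm 3 in Theorem 6.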
To prove the convergence property of Algorithm 3, we first establish the following basic properties.

Lemma 4. Suppose that the matrix equation (1) is consistent and that an arbitrary pair of solution matrices of (1) is given. Then, for any initial matrices, relation (8) holds, where the sequences involved are generated by Algorithm 3.

Proof. We apply mathematical induction to prove the conclusion.
For the first iteration step, from Algorithm 3 we have the corresponding expressions. From the properties of the trace and the conjugate, and in view of the fact that the given matrices are solutions of the matrix equation (1), this relation implies that (8) holds for the first step.
Assume that (8) holds for some step. We then prove that the conclusion holds for the next step. It follows from Algorithm 3, from the properties of the trace and the conjugate, from the fact that the given matrices are solutions of the matrix equation (1), and from relation (14), that the claimed identity holds. Therefore, relation (8) holds by mathematical induction.
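The manipulations in this proof rest on elementary properties of the trace and the conjugate, namely $\operatorname{tr}(AB) = \operatorname{tr}(BA)$, $\operatorname{tr}(M^{T}) = \operatorname{tr}(M)$, and $\operatorname{tr}(\overline{M}) = \overline{\operatorname{tr}(M)}$, so that the real parts of $\operatorname{tr}(M)$, $\operatorname{tr}(\overline{M})$, and $\operatorname{tr}(M^{H})$ coincide. The sketch below checks these identities numerically on random matrices, which are illustrative and not those of the paper.

```python
# Numerical check of the trace/conjugate identities used in the proof.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
M = A @ B

assert np.isclose(np.trace(A @ B), np.trace(B @ A))        # tr(AB) = tr(BA)
assert np.isclose(np.trace(M.T), np.trace(M))              # tr(M^T) = tr(M)
assert np.isclose(np.trace(np.conj(M)), np.conj(np.trace(M)))  # tr(conj M) = conj(tr M)
# Hence the real parts of tr(M), tr(conj M), and tr(M^H) agree.
assert np.isclose(np.real(np.trace(np.conj(M))), np.real(np.trace(M)))
assert np.isclose(np.real(np.trace(M.conj().T)), np.real(np.trace(M)))
```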

Lemma 5. Suppose that the matrix equation (1) is consistent and that the sequences are generated by Algorithm 3 with arbitrary initial matrices, such that the residuals are nonzero for all the steps considered. Then relations (17) and (18) hold.

Proof. We apply mathematical induction.
Step 1. We first prove the base case. From Algorithm 3 and (19) it follows that (17) is satisfied for the first pair of indices.
Again from Algorithm 3, it follows that (18) is satisfied for the first pair of indices.
Assume that (17) and (18) hold for some index; then from Algorithm 3 we obtain that (17) holds for the next index.
Also, from Algorithm 3 we obtain that (17) and (18) hold for the next index.
Hence relations (17) and (18) hold by mathematical induction.
Step 2. We want to show that (24) holds for all admissible pairs of indices. We prove this conclusion by induction. The base case has been proven in Step 1. Now we assume that (24) holds up to a given index; the aim is to show that it also holds for the next index. First we prove relation (26): by using Algorithm 3, from (19) and the induction hypothesis, (26) holds.
From Algorithm 3 and from (19), repeating (28) and (29), one can obtain the required intermediate identities. Combining these two relations with (26) implies that (24) holds for the next index. From Steps 1 and 2, the conclusion holds by the principle of induction. With the above two lemmas, we have the following theorem.

Theorem 6 (see [32]). If the matrix equation (1) is consistent, then a solution can be obtained within a finite number of iteration steps by using Algorithm 3 for any initial matrices.

4. Numerical Example

In this section, we present a numerical example to illustrate the application of the proposed method.

Example 7. In this example, we illustrate the theoretical results of Algorithm 3 by solving an instance of the matrix equation (1). Because of the influence of round-off errors, the residual is usually not exactly zero during the iteration. We therefore regard the residual matrix as a zero matrix once its norm falls below a prescribed tolerance.
For the given coefficient matrices and the chosen initial matrices, we apply Algorithm 3 to compute the two unknown matrices.
After 42 iteration steps, we obtain approximate solutions that satisfy the matrix equation, with a correspondingly small residual. Figure 1 shows the convergence curve of the residual norm.
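A convergence curve of this kind can be recorded and plotted directly from the residual norms produced by the iteration. The sketch below reuses the hypothetical `cg_type_solver` from the sketch following Algorithm 3, assuming it has already been defined, on randomly generated consistent test data; none of the matrices are those of Example 7.

```python
# Plotting a convergence curve of the residual norm ||R(k)||,
# using the illustrative cg_type_solver sketch defined earlier.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 4
A, B, C, D = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
              for _ in range(4)]
V_true = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W_true = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
E = A @ V_true @ B + C @ np.conj(W_true) @ D      # consistent by construction

V, W, history = cg_type_solver(A, B, C, D, E,
                               np.zeros((n, n), dtype=complex),
                               np.zeros((n, n), dtype=complex))

plt.semilogy(range(1, len(history) + 1), history)
plt.xlabel('iteration step k')
plt.ylabel('residual norm ||R(k)||')
plt.title('Convergence curve of the residual norm')
plt.show()
```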

5. Conclusions

In this paper, an iterative algorithm for solving a complex matrix equation with conjugate and transpose of two unknowns is presented. With this algorithm, the existence of a solution can be determined automatically; when the equation is consistent, the algorithm converges to a solution within a finite number of iteration steps for any initial matrices in the absence of round-off errors. We stated and proved the lemmas and theorems from which the iterative solutions are obtained. The proposed method is illustrated by a numerical example, and the obtained numerical results show that the technique is simple and efficient.