#### Abstract

An iterative algorithm is constructed to solve the linear matrix equation pair $AXB=E$, $CXD=F$ over generalized reflexive matrices. When the matrix equation pair is consistent over generalized reflexive matrices, for any generalized reflexive initial iterative matrix, the generalized reflexive solution can be obtained by the iterative algorithm within finitely many iteration steps in the absence of round-off errors. The unique least-norm generalized reflexive iterative solution of the matrix equation pair can be derived when an appropriate initial iterative matrix is chosen. Furthermore, the optimal approximate solution of the pair for a given generalized reflexive matrix $X_0$ can be derived by finding the least-norm generalized reflexive solution of a new corresponding matrix equation pair with modified right-hand sides. Finally, several numerical examples are given to illustrate that our iterative algorithm is effective.

#### 1. Introduction

Let $\mathbb{R}^{m\times n}$ denote the set of all $m$-by-$n$ real matrices, and let $I_n$ denote the identity matrix of order $n$. Let $P\in\mathbb{R}^{m\times m}$ and $Q\in\mathbb{R}^{n\times n}$ be two real generalized reflection matrices, that is, $P^{T}=P$, $P^{2}=I_m$, $Q^{T}=Q$, $Q^{2}=I_n$. A matrix $X\in\mathbb{R}^{m\times n}$ is called a generalized reflexive matrix with respect to the matrix pair $(P,Q)$ if $X=PXQ$. For more properties and applications of generalized reflexive matrices, we refer to [1, 2]. The set of all $m$-by-$n$ real generalized reflexive matrices with respect to the matrix pair $(P,Q)$ is denoted by $\mathbb{R}_r^{m\times n}(P,Q)$. We denote by the superscript $T$ the transpose of a matrix. In the matrix space $\mathbb{R}^{m\times n}$, define the inner product as $\langle A,B\rangle=\operatorname{trace}(B^{T}A)$ for all $A,B\in\mathbb{R}^{m\times n}$; $\|A\|$ represents the Frobenius norm of $A$; $R(A)$ represents the column space of $A$; $\operatorname{vec}(\cdot)$ represents the vector operator, that is, $\operatorname{vec}(A)=(a_1^{T},a_2^{T},\ldots,a_n^{T})^{T}$ for the matrix $A=(a_1,a_2,\ldots,a_n)\in\mathbb{R}^{m\times n}$; $A\otimes B$ stands for the Kronecker product of the matrices $A$ and $B$.
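The notation above can be checked numerically. The following minimal sketch (all matrices randomly generated for illustration) builds generalized reflection matrices as Householder reflections, projects a matrix onto $\mathbb{R}_r^{m\times n}(P,Q)$, and verifies the Kronecker identity $\operatorname{vec}(AXB)=(B^{T}\otimes A)\operatorname{vec}(X)$:

```python
import numpy as np

# Minimal sketch of the notation (all matrices randomly generated).
rng = np.random.default_rng(0)
m, n, p, q = 4, 3, 2, 5

def reflection(k, seed):
    # A generalized reflection matrix: symmetric and involutory (P^T = P, P^2 = I),
    # built here as a Householder reflection I - 2 v v^T / (v^T v).
    v = np.random.default_rng(seed).standard_normal((k, 1))
    return np.eye(k) - 2 * (v @ v.T) / (v.T @ v)

P, Q = reflection(m, 1), reflection(n, 2)
assert np.allclose(P, P.T) and np.allclose(P @ P, np.eye(m))

# Projection onto the generalized reflexive matrices:
# X = (Y + P Y Q)/2 satisfies X = P X Q because P^2 = I and Q^2 = I.
Y = rng.standard_normal((m, n))
X = (Y + P @ Y @ Q) / 2
assert np.allclose(X, P @ X @ Q)

# The vec operator (column stacking) and the identity
# vec(A X B) = (B^T kron A) vec(X).
A, B = rng.standard_normal((p, m)), rng.standard_normal((n, q))
vec = lambda M: M.reshape(-1, order="F")
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```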

In this paper, we will consider the following two problems.

*Problem 1.* For given matrices $A$, $B$, $C$, $D$, $E$, $F$ of suitable sizes, find a matrix $X\in\mathbb{R}_r^{m\times n}(P,Q)$ such that
$$AXB=E,\qquad CXD=F. \quad (1.1)$$

*Problem 2.* When Problem 1 is consistent, let $S_E$ denote the set of the generalized reflexive solutions of Problem 1. For a given matrix $X_0\in\mathbb{R}_r^{m\times n}(P,Q)$, find $\hat{X}\in S_E$ such that
$$\|\hat{X}-X_0\|=\min_{X\in S_E}\|X-X_0\|. \quad (1.2)$$

The matrix equation pair (1.1) may arise in many areas of control and system theory. Dehghan and Hajarian [3] presented some examples to show a motivation for studying (1.1). Problem 2 occurs frequently in experiment design; see for instance [4]. In recent years, the matrix nearness problem has been studied extensively (e.g., [3, 5–19]).

Research on solving the matrix equation pair (1.1) has been actively ongoing for the last 40 or more years. For instance, Mitra [20, 21] gave conditions for the existence of a solution and a representation of the general common solution of the matrix equation pair (1.1). Shinozaki and Sibuya [22] and van der Woude [23] discussed conditions for the existence of a common solution to the matrix equation pair (1.1). Navarra et al. [11] derived necessary and sufficient conditions for the existence of a common solution to (1.1). Yuan [18] obtained an analytical expression of the least-squares solutions of (1.1) by using the generalized singular value decomposition (GSVD) of matrices. Recently, some finite iterative algorithms have also been developed to solve matrix equations. Deng et al. [24] studied the consistency conditions and the general expressions of the Hermitian solutions of certain matrix equations and designed an iterative method for their Hermitian minimum norm solutions. Li and Wu [25] gave symmetric and skew-antisymmetric solutions to certain matrix equations over the real quaternion algebra. Dehghan and Hajarian [26] proposed necessary and sufficient conditions for the solvability of several matrix equations over the reflexive or antireflexive matrices and obtained the general expression of the solutions in the solvable case. Wang [27, 28] gave the centrosymmetric solution to a system of quaternion matrix equations. Wang [29] also solved a system of matrix equations over arbitrary regular rings with identity. For more studies on iterative algorithms for coupled matrix equations, we refer to [6, 7, 15–17, 19, 30–34]. Peng et al. [13] presented iterative methods to obtain the symmetric solutions of (1.1). Sheng and Chen [14] presented a finite iterative method for the case when (1.1) is consistent. Liao and Lei [9] presented an analytical expression of the least-squares solution of (1.1) with the minimum norm together with an algorithm for computing it. Peng et al. [12] presented an efficient algorithm for the least-squares reflexive solution. Dehghan and Hajarian [3] presented an iterative algorithm for solving the pair of matrix equations (1.1) over generalized centrosymmetric matrices. Cai and Chen [35] presented an iterative algorithm for the least-squares bisymmetric solutions of the matrix equations (1.1). However, the problem of finding the generalized reflexive solutions of the matrix equation pair (1.1) has not been solved. In this paper, we construct an iterative algorithm by which the solvability of Problem 1 can be determined automatically, a solution can be obtained within finitely many iteration steps when Problem 1 is consistent, and the solution of Problem 2 can be obtained by finding the least-norm generalized reflexive solution of a corresponding matrix equation pair.

This paper is organized as follows. In Section 2, we solve Problem 1 by constructing an iterative algorithm; that is, if Problem 1 is consistent, then for an arbitrary initial matrix $X_1\in\mathbb{R}_r^{m\times n}(P,Q)$ we can obtain a solution of Problem 1 within finitely many iteration steps in the absence of round-off errors. Letting $X_1=A^{T}HB^{T}+C^{T}GD^{T}+PA^{T}HB^{T}Q+PC^{T}GD^{T}Q$, where $H$ and $G$ are arbitrary matrices, or more especially letting $X_1=0$, we can obtain the unique least norm solution of Problem 1. Then, in Section 3, we give the optimal approximate solution of Problem 2 by finding the least norm generalized reflexive solution of a corresponding new matrix equation pair. In Section 4, several numerical examples are given to illustrate the application of our iterative algorithm.

#### 2. The Solution of Problem 1

In this section, we first introduce an iterative algorithm to solve Problem 1 and then prove that it is convergent. The idea of the algorithm and its proofs in this paper is originally inspired by those in [13]. The idea of our algorithm is also inspired by that in [3]. When $P$ and $Q$ specialize to the generalized reflection matrices considered there, the results in this paper reduce to those in [3].

*Algorithm 2.1.*

*Step 1.* Input matrices $A$, $B$, $C$, $D$, $E$, $F$ and two generalized reflection matrices $P$, $Q$.

*Step 2.* Choose an arbitrary matrix $X_1\in\mathbb{R}_r^{m\times n}(P,Q)$. Compute
$$R_1=\begin{pmatrix}E-AX_1B & 0\\ 0 & F-CX_1D\end{pmatrix},$$
$$P_1=\frac{1}{2}\left[A^{T}(E-AX_1B)B^{T}+C^{T}(F-CX_1D)D^{T}+PA^{T}(E-AX_1B)B^{T}Q+PC^{T}(F-CX_1D)D^{T}Q\right],\qquad k:=1.$$

*Step 3.* If $R_k=0$, then stop. Else go to Step 4.

*Step 4.* Compute
$$X_{k+1}=X_k+\frac{\|R_k\|^{2}}{\|P_k\|^{2}}P_k,\qquad R_{k+1}=\begin{pmatrix}E-AX_{k+1}B & 0\\ 0 & F-CX_{k+1}D\end{pmatrix},$$
$$P_{k+1}=\frac{1}{2}\left[A^{T}(E-AX_{k+1}B)B^{T}+C^{T}(F-CX_{k+1}D)D^{T}+PA^{T}(E-AX_{k+1}B)B^{T}Q+PC^{T}(F-CX_{k+1}D)D^{T}Q\right]+\frac{\|R_{k+1}\|^{2}}{\|R_k\|^{2}}P_k.$$

*Step 5.* If $R_{k+1}=0$, then stop. Else, letting $k:=k+1$, go to Step 4.

Obviously, it can be seen that $X_k,P_k\in\mathbb{R}_r^{m\times n}(P,Q)$, where $k=1,2,\ldots$.
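The iteration can be sketched numerically. The function below is an assumed reconstruction following the standard conjugate-gradient-type scheme for such pairs, not the authors' MATLAB code: every search direction is kept in $\mathbb{R}_r^{m\times n}(P,Q)$ by the symmetrization $(G+PGQ)/2$, and the loop stops when the joint residual norm falls below a tolerance.

```python
import numpy as np

def solve_pair_reflexive(A, B, C, D, E, F, P, Q, X1=None, tol=1e-12, maxit=10_000):
    """CG-type iteration for the pair A X B = E, C X D = F over generalized
    reflexive matrices X = P X Q.  The update formulas are an assumed
    reconstruction of the standard scheme for this family of algorithms."""
    m, n = A.shape[1], B.shape[0]
    X = np.zeros((m, n)) if X1 is None else X1.copy()  # X1 = 0 targets the least-norm solution

    def residuals(X):
        return E - A @ X @ B, F - C @ X @ D

    def direction(R1, R2):
        G = A.T @ R1 @ B.T + C.T @ R2 @ D.T
        return (G + P @ G @ Q) / 2   # symmetrize: keep direction in R_r^{m x n}(P,Q)

    R1, R2 = residuals(X)
    r2 = np.linalg.norm(R1) ** 2 + np.linalg.norm(R2) ** 2
    Pk = direction(R1, R2)
    for _ in range(maxit):
        if np.sqrt(r2) < tol:        # residual small enough: accept X
            break
        X = X + (r2 / np.linalg.norm(Pk) ** 2) * Pk
        R1, R2 = residuals(X)
        r2_new = np.linalg.norm(R1) ** 2 + np.linalg.norm(R2) ** 2
        Pk = direction(R1, R2) + (r2_new / r2) * Pk
        r2 = r2_new
    return X
```

For consistent data the iterates stay generalized reflexive and, in exact arithmetic, the residual vanishes after finitely many steps; for inconsistent data the direction $P_k$ may vanish while the residual does not, which signals unsolvability (this case is not handled in the sketch).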

Lemma 2.2. *For the sequences $\{R_i\}$ and $\{P_i\}$ generated in Algorithm 2.1, one has
$$\operatorname{tr}\left(R_{i+1}^{T}R_{j}\right)=\operatorname{tr}\left(R_{i}^{T}R_{j}\right)-\frac{\|R_{i}\|^{2}}{\|P_{i}\|^{2}}\operatorname{tr}\left(P_{i}^{T}\left(P_{j}-\frac{\|R_{j}\|^{2}}{\|R_{j-1}\|^{2}}P_{j-1}\right)\right),\qquad i\ge 1,\ j\ge 2,$$
and $\operatorname{tr}(R_{i+1}^{T}R_{1})=\operatorname{tr}(R_{i}^{T}R_{1})-\frac{\|R_{i}\|^{2}}{\|P_{i}\|^{2}}\operatorname{tr}(P_{i}^{T}P_{1})$ for $i\ge 1$.*

* Proof. * By Algorithm 2.1, we have
This completes the proof.

Lemma 2.3. *For the sequences $\{R_i\}$ and $\{P_i\}$ generated by Algorithm 2.1, and $i,j=1,2,\ldots,k$ with $i\neq j$, one has
$$\operatorname{tr}\left(R_{i}^{T}R_{j}\right)=0,\qquad \operatorname{tr}\left(P_{i}^{T}P_{j}\right)=0. \quad (2.5)$$*

* Proof. * Since $\operatorname{tr}(R_{i}^{T}R_{j})=\operatorname{tr}(R_{j}^{T}R_{i})$ and $\operatorname{tr}(P_{i}^{T}P_{j})=\operatorname{tr}(P_{j}^{T}P_{i})$ for all $i,j$, we only need to prove the conclusion for all $1\le j<i\le k$. We prove the conclusion by induction, and two steps are required.

*Step 1.* We will show that
$$\operatorname{tr}\left(R_{i+1}^{T}R_{i}\right)=0,\qquad \operatorname{tr}\left(P_{i+1}^{T}P_{i}\right)=0,\qquad i=1,2,\ldots,k-1. \quad (2.6)$$
To prove this conclusion, we also use induction.

For $i=1$, by Algorithm 2.1 and the proof of Lemma 2.2, we have that

Assume that (2.6) holds for $i=s-1$, that is, $\operatorname{tr}(R_{s}^{T}R_{s-1})=0$ and $\operatorname{tr}(P_{s}^{T}P_{s-1})=0$. When $i=s$, by Lemma 2.2, we have that
Hence, (2.6) holds for $i=s$. Therefore, (2.6) holds by the principle of induction.

*Step 2.* Assuming that $\operatorname{tr}(R_{s}^{T}R_{j})=0$ and $\operatorname{tr}(P_{s}^{T}P_{j})=0$ for $j=1,2,\ldots,s-1$, we then show that
$$\operatorname{tr}\left(R_{s+1}^{T}R_{j}\right)=0,\qquad \operatorname{tr}\left(P_{s+1}^{T}P_{j}\right)=0,\qquad j=1,2,\ldots,s. \quad (2.9)$$
In fact, by Lemma 2.2 we have
From the previous results, we have $\operatorname{tr}(R_{s+1}^{T}R_{s})=0$ and $\operatorname{tr}(P_{s+1}^{T}P_{s})=0$. By Lemma 2.2 we have that

By the principle of induction, (2.9) holds. Note that (2.5) is implied in Steps 1 and 2 by the principle of induction. This completes the proof.

Lemma 2.4. *Suppose that $\tilde{X}$ is an arbitrary solution of Problem 1, that is, $A\tilde{X}B=E$ and $C\tilde{X}D=F$; then
$$\operatorname{tr}\left((\tilde{X}-X_{k})^{T}P_{k}\right)=\|R_{k}\|^{2},\qquad k=1,2,\ldots, \quad (2.12)$$
where the sequences $\{X_k\}$, $\{R_k\}$, and $\{P_k\}$ are generated by Algorithm 2.1.*

* Proof. * We prove the conclusion by induction.

For $k=1$, we have that

Assume that (2.12) holds for $k=s$. By Algorithm 2.1, we have that
Therefore, (2.12) holds for $k=s+1$. By the principle of induction, (2.12) holds. This completes the proof.

Theorem 2.5. *Suppose that Problem 1 is consistent; then for an arbitrary initial matrix $X_1\in\mathbb{R}_r^{m\times n}(P,Q)$, a solution of Problem 1 can be obtained within finitely many iteration steps in the absence of round-off errors.*

* Proof. * If $R_i\neq 0$, then by Lemma 2.4 we have $P_i\neq 0$ (otherwise $\|R_i\|^{2}=\operatorname{tr}((\tilde{X}-X_i)^{T}P_i)=0$), so we can continue to compute $X_{i+1}$ and $R_{i+1}$ by Algorithm 2.1.

By Lemma 2.3, the nonzero residuals $R_1,R_2,\ldots$ are mutually orthogonal. Since they lie in a finite-dimensional matrix space, only finitely many of them can be nonzero. Therefore $R_{k+1}=0$ for some finite $k$, which implies that $AX_{k+1}B=E$ and $CX_{k+1}D=F$; that is, $X_{k+1}$ is a solution of Problem 1. This completes the proof.

To show the least norm generalized reflexive solution of Problem 1, we first introduce the following result.

Lemma 2.6 (see [8, Lemma 2.4]). *Suppose that the consistent system of linear equations $My=b$ has a solution $y^{*}\in R(M^{T})$; then $y^{*}$ is the least norm solution of the system of linear equations.*
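Lemma 2.6 can be illustrated numerically: for a consistent underdetermined system, the pseudoinverse solution lies in the row space $R(M^{T})$, and any other solution, obtained by adding a null-space vector, is at least as long. A small sketch with random data:

```python
import numpy as np

# Numerical illustration of Lemma 2.6: for a consistent system M y = b,
# the pseudoinverse solution lies in R(M^T) and has least norm.
rng = np.random.default_rng(7)
M = rng.standard_normal((3, 6))              # underdetermined: many solutions
b = M @ rng.standard_normal(6)               # consistent by construction

y_star = np.linalg.pinv(M) @ b               # lies in the row space R(M^T)
z, *_ = np.linalg.lstsq(M.T, y_star, rcond=None)
assert np.allclose(M.T @ z, y_star)          # y* = M^T z for some z

# Any other solution y* + w (with M w = 0) is at least as long,
# since the row space and the null space of M are orthogonal.
w = rng.standard_normal(6)
w = w - np.linalg.pinv(M) @ (M @ w)          # project w onto the null space of M
y_other = y_star + w
assert np.allclose(M @ y_other, b)
assert np.linalg.norm(y_other) >= np.linalg.norm(y_star)
```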

By Lemma 2.6, the following result can be obtained.

Theorem 2.7. *Suppose that Problem 1 is consistent. If one chooses the initial iterative matrix $X_1=A^{T}HB^{T}+C^{T}GD^{T}+PA^{T}HB^{T}Q+PC^{T}GD^{T}Q$, where $H$ and $G$ are arbitrary matrices, or especially $X_1=0$, then one can obtain the unique least norm generalized reflexive solution of Problem 1 within finitely many iteration steps in the absence of round-off errors by using Algorithm 2.1.*

* Proof. * By Algorithm 2.1 and Theorem 2.5, if we let $X_1=A^{T}HB^{T}+C^{T}GD^{T}+PA^{T}HB^{T}Q+PC^{T}GD^{T}Q$, where $H$ and $G$ are arbitrary matrices, then we can obtain a solution $X^{*}$ of Problem 1 within finitely many iteration steps in the absence of round-off errors, and the solution can be represented in the same form, that is, $X^{*}=A^{T}\tilde{H}B^{T}+C^{T}\tilde{G}D^{T}+PA^{T}\tilde{H}B^{T}Q+PC^{T}\tilde{G}D^{T}Q$ for some matrices $\tilde{H}$ and $\tilde{G}$.

In the sequel, we will prove that $X^{*}$ is just the least norm solution of Problem 1.

Consider the following system of matrix equations:
$$AXB=E,\qquad CXD=F,\qquad A(PXQ)B=E,\qquad C(PXQ)D=F. \quad (2.17)$$

If Problem 1 has a solution $X$, then $X=PXQ$, $AXB=E$, and $CXD=F$. Thus
$$A(PXQ)B=AXB=E,\qquad C(PXQ)D=CXD=F.$$
Hence, the system of matrix equations (2.17) also has the solution $X$.

Conversely, if the system of matrix equations (2.17) has a solution $Y$, let $X=\frac{1}{2}(Y+PYQ)$; then $X=PXQ$, that is, $X\in\mathbb{R}_r^{m\times n}(P,Q)$, and
$$AXB=\tfrac{1}{2}\bigl(AYB+A(PYQ)B\bigr)=E,\qquad CXD=\tfrac{1}{2}\bigl(CYD+C(PYQ)D\bigr)=F.$$
Therefore, $X$ is a solution of Problem 1.

So the solvability of Problem 1 is equivalent to that of the system of matrix equations (2.17), and every solution of Problem 1 is also a solution of the system of matrix equations (2.17).

Let $\tilde{S}_E$ denote the set of all solutions of the system of matrix equations (2.17); then we know that $S_E\subseteq\tilde{S}_E$, where $S_E$ is the set of all solutions of Problem 1. In order to prove that $X^{*}$ is the least-norm solution of Problem 1, it is enough to prove that $X^{*}$ is the least-norm solution of the system of matrix equations (2.17). Applying the vector operator, the system of matrix equations (2.17) is equivalent to the system of linear equations
$$\begin{pmatrix}B^{T}\otimes A\\ D^{T}\otimes C\\ (QB)^{T}\otimes(AP)\\ (QD)^{T}\otimes(CP)\end{pmatrix}\operatorname{vec}(X)=\begin{pmatrix}\operatorname{vec}(E)\\ \operatorname{vec}(F)\\ \operatorname{vec}(E)\\ \operatorname{vec}(F)\end{pmatrix}. \quad (2.21)$$
Noting that
$$\operatorname{vec}\left(X^{*}\right)=\begin{pmatrix}B^{T}\otimes A\\ D^{T}\otimes C\\ (QB)^{T}\otimes(AP)\\ (QD)^{T}\otimes(CP)\end{pmatrix}^{T}\begin{pmatrix}\operatorname{vec}(\tilde{H})\\ \operatorname{vec}(\tilde{G})\\ \operatorname{vec}(\tilde{H})\\ \operatorname{vec}(\tilde{G})\end{pmatrix},$$
by Lemma 2.6 we know that $\operatorname{vec}(X^{*})$ is the least norm solution of the system of linear equations (2.21). Since the vector operator is an isomorphism, $X^{*}$ is the unique least norm solution of the system of matrix equations (2.17), and then $X^{*}$ is the unique least norm generalized reflexive solution of Problem 1.
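The vectorization argument can be checked directly: stacking the four Kronecker blocks of the enlarged system and taking the minimum-norm solution of the resulting linear system (here via the pseudoinverse rather than Algorithm 2.1) yields a solution that is automatically generalized reflexive and of least norm. All data below are randomly generated for illustration:

```python
import numpy as np

# Sketch: least-norm reflexive solution via the stacked Kronecker system.
rng = np.random.default_rng(11)
m = n = 3
v = rng.standard_normal((m, 1)); P = np.eye(m) - 2 * (v @ v.T) / (v.T @ v)
u = rng.standard_normal((n, 1)); Q = np.eye(n) - 2 * (u @ u.T) / (u.T @ u)

A = rng.standard_normal((2, m)); B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, m)); D = rng.standard_normal((n, 2))
Y = rng.standard_normal((m, n)); Xs = (Y + P @ Y @ Q) / 2   # a reflexive solution
E, F = A @ Xs @ B, C @ Xs @ D                               # consistent by construction

vec = lambda M: M.reshape(-1, order="F")
# Four blocks: AXB = E, CXD = F, A(PXQ)B = E, C(PXQ)D = F, vectorized.
M_big = np.vstack([np.kron(B.T, A),
                   np.kron(D.T, C),
                   np.kron((Q @ B).T, A @ P),
                   np.kron((Q @ D).T, C @ P)])
rhs = np.concatenate([vec(E), vec(F), vec(E), vec(F)])

X = (np.linalg.pinv(M_big) @ rhs).reshape((m, n), order="F")  # minimum-norm solution
assert np.allclose(A @ X @ B, E) and np.allclose(C @ X @ D, F)
assert np.allclose(X, P @ X @ Q)               # least-norm solution is reflexive
assert np.linalg.norm(X) <= np.linalg.norm(Xs) + 1e-8
```

The pseudoinverse route is only practical for small sizes; the point of Algorithm 2.1 is to obtain the same least-norm solution without forming the Kronecker products.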

#### 3. The Solution of Problem 2

In this section, we show that the optimal approximate solution of Problem 2 for a given generalized reflexive matrix $X_0$ can be derived by finding the least norm generalized reflexive solution of a new corresponding matrix equation pair $A\tilde{X}B=\tilde{E}$, $C\tilde{X}D=\tilde{F}$.

When Problem 1 is consistent, the set of solutions of Problem 1, denoted by $S_E$, is not empty. For a given matrix $X_0\in\mathbb{R}_r^{m\times n}(P,Q)$ and $X\in S_E$, the matrix equation pair (1.1) is equivalent to the following equation pair:
$$A\tilde{X}B=\tilde{E},\qquad C\tilde{X}D=\tilde{F}, \quad (3.1)$$
where $\tilde{X}=X-X_0$, $\tilde{E}=E-AX_0B$, and $\tilde{F}=F-CX_0D$. Then Problem 2 is equivalent to finding the least norm generalized reflexive solution $\tilde{X}^{*}$ of the matrix equation pair (3.1).

By using Algorithm 2.1, letting the initial iterative matrix $\tilde{X}_1=A^{T}HB^{T}+C^{T}GD^{T}+PA^{T}HB^{T}Q+PC^{T}GD^{T}Q$, where $H$ and $G$ are arbitrary matrices, or more especially $\tilde{X}_1=0$, we can obtain the unique least norm generalized reflexive solution $\tilde{X}^{*}$ of the matrix equation pair (3.1); then the generalized reflexive solution of Problem 2 can be represented as $\hat{X}=\tilde{X}^{*}+X_0$.
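The reduction above can be sketched as follows; for brevity the least-norm reflexive solution of the shifted pair is computed with a pseudoinverse on the vectorized system rather than with Algorithm 2.1, and all data are randomly generated:

```python
import numpy as np

# Sketch of Problem 2: shift by X0, solve the shifted pair for its
# least-norm reflexive solution Xt, then X_hat = Xt + X0.
rng = np.random.default_rng(5)
m = n = 3
v = rng.standard_normal((m, 1)); P = np.eye(m) - 2 * (v @ v.T) / (v.T @ v)
u = rng.standard_normal((n, 1)); Q = np.eye(n) - 2 * (u @ u.T) / (u.T @ u)
refl = lambda Y: (Y + P @ Y @ Q) / 2          # projection onto R_r^{m x n}(P,Q)

A = rng.standard_normal((2, m)); B = rng.standard_normal((n, 2))
C = rng.standard_normal((2, m)); D = rng.standard_normal((n, 2))
Xs = refl(rng.standard_normal((m, n)))        # some reflexive solution
E, F = A @ Xs @ B, C @ Xs @ D
X0 = refl(rng.standard_normal((m, n)))        # the given reflexive matrix

Et, Ft = E - A @ X0 @ B, F - C @ X0 @ D       # shifted right-hand sides
vec = lambda M: M.reshape(-1, order="F")
M_big = np.vstack([np.kron(B.T, A), np.kron(D.T, C),
                   np.kron((Q @ B).T, A @ P), np.kron((Q @ D).T, C @ P)])
rhs = np.concatenate([vec(Et), vec(Ft), vec(Et), vec(Ft)])
Xt = (np.linalg.pinv(M_big) @ rhs).reshape((m, n), order="F")
X_hat = Xt + X0                               # optimal approximation to X0

assert np.allclose(A @ X_hat @ B, E) and np.allclose(C @ X_hat @ D, F)
assert np.allclose(X_hat, P @ X_hat @ Q)
# X_hat is at least as close to X0 as the solution we started from:
assert np.linalg.norm(X_hat - X0) <= np.linalg.norm(Xs - X0) + 1e-8
```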

#### 4. Examples for the Iterative Methods

In this section, we will show several numerical examples to illustrate our results. All the tests are performed by MATLAB 7.8.

*Example 4.1. * Consider the generalized reflexive solution of the equation pair (1.1), where
Let

We will find the generalized reflexive solution of the matrix equation pair by using Algorithm 2.1. It can be verified that the matrix equation pair is consistent over the generalized reflexive matrices and has a solution with respect to $(P,Q)$ as follows:

Because of the influence of round-off errors, the residual $R_k$ is usually nonzero in the iteration process. For any chosen positive number $\varepsilon$, however small, whenever $\|R_k\|<\varepsilon$, we stop the iteration, and $X_k$ is regarded as a generalized reflexive solution of the matrix equation pair. Choose an initial iterative matrix $X_1\in\mathbb{R}_r^{m\times n}(P,Q)$, such as
By Algorithm 2.1, we have
So we obtain a generalized reflexive solution of the matrix equation pair as follows:
The relative error of the solution and the residual are shown in Figure 1.

Letting the initial iterative matrix be another generalized reflexive matrix, by Algorithm 2.1 we obtain a generalized reflexive solution of the matrix equation pair. The relative error of the solution and the residual are shown in Figure 2.

*Example 4.2. * Consider the least norm generalized reflexive solution of the matrix equation pair in Example 4.1. Let
By using Algorithm 2.1, we have
So we obtain the least norm generalized reflexive solution of the matrix equation pair as follows:
The relative error of the solution and the residual are shown in Figure 3.

*Example 4.3. *Let $S_E$ denote the set of all generalized reflexive solutions of the matrix equation pair in Example 4.1. For a given matrix,
we will find $\hat{X}\in S_E$ such that
$$\|\hat{X}-X_0\|=\min_{X\in S_E}\|X-X_0\|;$$
that is, we find the optimal approximate solution to the given matrix in $S_E$.

Letting $\tilde{X}=X-X_0$, by the method mentioned in Section 3, we can obtain the least norm generalized reflexive solution $\tilde{X}^{*}$ of the matrix equation pair (3.1) by choosing the initial iterative matrix $\tilde{X}_1=0$, and then the optimal approximate solution is $\hat{X}=\tilde{X}^{*}+X_0$. The relative error of the solution and the residual are shown in Figure 4.

#### Acknowledgments

The authors are very much indebted to the anonymous referees and our editors for their constructive and valuable comments and suggestions, which greatly improved the original manuscript of this paper. This work was partially supported by the Research Fund Project (Natural Science 2010XJKYL018), the Opening Fund of Geomathematics Key Laboratory of Sichuan Province (scsxdz2011005), the Natural Science Foundation of Sichuan Education Department (12ZB289), and the Key Natural Science Foundation of Sichuan Education Department (12ZA008).