Abstract

By extending the idea of the LSMR method, we present an iterative method to solve the general coupled matrix equations $\sum_{j=1}^{q} A_{ij} X_j B_{ij} = C_i$, $i = 1, 2, \ldots, p$ (including the generalized (coupled) Lyapunov and Sylvester matrix equations as special cases) over some constrained matrix groups $(X_1, X_2, \ldots, X_q)$, such as symmetric, generalized bisymmetric, and $(R, S)$-symmetric matrix groups. By this iterative method, for any initial matrix group $(X_1^{(0)}, X_2^{(0)}, \ldots, X_q^{(0)})$, a solution group $(X_1^{*}, X_2^{*}, \ldots, X_q^{*})$ can be obtained within finite iteration steps in the absence of round-off errors, and the minimum Frobenius norm solution group or the minimum Frobenius norm least-squares solution group can be derived when an appropriate initial iterative matrix group is chosen. In addition, the optimal approximation solution group to a given matrix group $(\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_q)$ in the Frobenius norm can be obtained by finding the least Frobenius norm solution group of new general coupled matrix equations. Finally, numerical examples are given to illustrate the effectiveness of the presented method.

1. Introduction

In control and system theory [1–7], we often encounter Lyapunov and Sylvester matrix equations, which play a fundamental role. Owing to the importance of these matrix equations, they have been studied in a large body of papers [8–27]. By using the hierarchical identification principle [9–11, 28–32], a gradient-based iterative (GI) method was presented to compute the solutions and the least-squares solutions of the general coupled matrix equations. In [19, 33], Zhou et al. deduced the optimal parameter of the GI method for computing the solutions and the weighted least-squares solutions of the general coupled matrix equations. Dehghan and Hajarian [34–36] introduced several iterative methods to solve various linear matrix equations.

In [12, 17], Huang et al. presented finite iterative algorithms for solving generalized coupled Sylvester systems. Li and Huang [37] proposed a matrix LSQR iterative method to compute the constrained solutions of the generalized coupled Sylvester matrix equations. Hajarian [38] presented the generalized QMRCGSTAB algorithm for solving Sylvester-transpose matrix equations. Recently, Lin and Simoncini [39] established minimal residual methods for large-scale Lyapunov equations; they explored the numerical solution of this class of linear matrix equations when a minimal residual (MR) condition is used during the projection step.

In this paper, we construct a matrix iterative method based on the LSMR algorithm [40] to compute the constrained solutions of the following problems.

Compatible matrix equations are as follows:

$\sum_{j=1}^{q} A_{ij} X_j B_{ij} = C_i, \quad i = 1, 2, \ldots, p. \qquad (1)$

Least-squares problem is as follows:

$\min_{(X_1, X_2, \ldots, X_q)} \sum_{i=1}^{p} \Big\| \sum_{j=1}^{q} A_{ij} X_j B_{ij} - C_i \Big\|^{2}. \qquad (2)$

Matrix nearness problem is as follows:

$\min_{(X_1, X_2, \ldots, X_q) \in S_E} \sum_{j=1}^{q} \big\| X_j - \bar{X}_j \big\|^{2}, \qquad (3)$

where $A_{ij}$, $B_{ij}$, and $C_i$ ($i = 1, \ldots, p$, $j = 1, \ldots, q$) are constant matrices with suitable dimensions, $X_1, X_2, \ldots, X_q$ are unknown matrices to be solved, $\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_q$ are given matrices, and $S_E$ is the solution set of (1) or problem (2).
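To make the vectorized form of (1) concrete, the following small Python sketch (hypothetical sizes and random data) builds the Kronecker-product system for $p = q = 2$ and recovers a planted solution group; it relies only on the standard identity $\operatorname{vec}(A X B) = (B^{T} \otimes A) \operatorname{vec}(X)$.

# Small numerical sketch (hypothetical data) of the vectorized form of (1):
# stacking vec(X_j) turns the coupled equations into one linear system.
import numpy as np

rng = np.random.default_rng(0)
p = q = 2
m = n = 3                                  # size of each unknown X_j
r = s = 4                                  # size of each right-hand side C_i
A_ = [[rng.standard_normal((r, m)) for _ in range(q)] for _ in range(p)]
B_ = [[rng.standard_normal((n, s)) for _ in range(q)] for _ in range(p)]
Xtrue = [rng.standard_normal((m, n)) for _ in range(q)]
C = [sum(A_[i][j] @ Xtrue[j] @ B_[i][j] for j in range(q)) for i in range(p)]

# p x q block matrix of Kronecker products and the stacked right-hand side
K = np.block([[np.kron(B_[i][j].T, A_[i][j]) for j in range(q)] for i in range(p)])
b = np.concatenate([Ci.flatten(order="F") for Ci in C])   # vec() stacks columns

x, *_ = np.linalg.lstsq(K, b, rcond=None)
X = [x[j * m * n:(j + 1) * m * n].reshape(m, n, order="F") for j in range(q)]
print(max(np.linalg.norm(X[j] - Xtrue[j]) for j in range(q)))  # ~ 1e-13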

This paper is organized as follows. In Section 2, we briefly review the LSMR algorithm for solving linear systems of equations. In Section 3, we propose the matrix LSMR iterative algorithms for solving problems (1) and (2). In Section 4, we solve problem (3) by finding the minimum Frobenius norm solution group of the corresponding new general coupled matrix equations. In Section 5, numerical examples are given to illustrate the efficiency of the proposed iterative method. Finally, we make some concluding remarks in Section 6.

The notation used in this paper can be summarized as follows. $\operatorname{tr}(A)$ represents the trace of the matrix $A$. For $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$, the notation $A \otimes B$ is the Kronecker product, and for $A, B \in \mathbb{R}^{m \times n}$, $\langle A, B \rangle = \operatorname{tr}(B^{T} A)$ is the inner product, with the associated Frobenius norm $\|A\| = \sqrt{\langle A, A \rangle}$. The use of $\operatorname{vec}(A)$ represents the vector operator defined as $\operatorname{vec}(A) = (a_1^{T}, a_2^{T}, \ldots, a_n^{T})^{T}$, where $a_i$ is the $i$th column of $A$. The generalized bisymmetric matrices, the $(R, S)$-symmetric matrices, and the symmetric orthogonal matrices can be defined as follows.

Definition 1 (see [41]). A matrix $P \in \mathbb{R}^{n \times n}$ is said to be a symmetric orthogonal matrix if $P = P^{T}$ and $P^{T} P = I$.

Definition 2 (see [42]). For given symmetric orthogonal matrices $R \in \mathbb{R}^{m \times m}$ and $S \in \mathbb{R}^{n \times n}$, we say a matrix $A \in \mathbb{R}^{m \times n}$ is $(R, S)$-symmetric if $R A S = A$.

Definition 3 (see [43]). For a given symmetric orthogonal matrix $P \in \mathbb{R}^{n \times n}$, a matrix $A \in \mathbb{R}^{n \times n}$ is said to be a generalized bisymmetric matrix if $A = A^{T}$ and $P A P = A$.
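As a quick illustration of Definitions 1-3, the sketch below builds a symmetric orthogonal matrix from a Householder reflector and checks membership in the constraint sets; the averaging formulas used here are generic orthogonal projections written for illustration, not code taken from [41-43].

# Definitions 1-3 in code: a Householder reflector P = I - 2ww^T is both
# symmetric and orthogonal, so it is a valid choice in all three definitions.
import numpy as np

rng = np.random.default_rng(1)
n = 5
w = rng.standard_normal(n); w /= np.linalg.norm(w)
P = np.eye(n) - 2.0 * np.outer(w, w)          # Definition 1: P = P^T, P^T P = I
assert np.allclose(P, P.T) and np.allclose(P @ P.T, np.eye(n))

M = rng.standard_normal((n, n))
# Definition 3: averaging over the four symmetry images of M yields a
# generalized bisymmetric matrix (A = A^T and P A P = A).
Abis = 0.25 * (M + M.T + P @ M @ P + P @ M.T @ P)
assert np.allclose(Abis, Abis.T) and np.allclose(P @ Abis @ P, Abis)

# Definition 2: with R = S = P, the average 0.5*(M + R M S) is (R, S)-symmetric.
Ars = 0.5 * (M + P @ M @ P)
assert np.allclose(P @ Ars @ P, Ars)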

2. LSMR Algorithm

In this section, we briefly review some fundamental properties of the LSMR algorithm [40], which is an iterative method for computing a solution to either of the following problems.

Compatible linear systems are as follows:

$A x = b. \qquad (5)$

Least-squares problem is as follows:

$\min_{x} \| A x - b \|_{2}, \qquad (6)$

where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{m}$. The LSMR algorithm uses an algorithm of Golub and Kahan [44], stated below as procedure Bidiag 1, to reduce $A$ to lower bidiagonal form. The procedure Bidiag 1 can be described as follows.

Bidiag 1 (starting vector $b$; reduction to lower bidiagonal form). Consider

$\beta_1 u_1 = b, \qquad \alpha_1 v_1 = A^{T} u_1,$

$\beta_{k+1} u_{k+1} = A v_k - \alpha_k u_k, \qquad \alpha_{k+1} v_{k+1} = A^{T} u_{k+1} - \beta_{k+1} v_k, \qquad k = 1, 2, \ldots.$

The scalars $\alpha_k \geq 0$ and $\beta_k \geq 0$ are chosen such that $\| u_k \|_2 = \| v_k \|_2 = 1$.
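A minimal Python sketch of the procedure (hypothetical sizes; it assumes no breakdown, that is, all $\alpha_k$ and $\beta_k$ are nonzero), which also lets one check numerically the orthonormality claimed in Property 1 below:

# Golub-Kahan bidiagonalization (procedure Bidiag 1); in exact arithmetic the
# columns of U and V are orthonormal bases of the two Krylov subspaces.
import numpy as np

def bidiag1(A, b, k):
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k + 1))
    beta = np.linalg.norm(b); U[:, 0] = b / beta
    v = A.T @ U[:, 0]; alpha = np.linalg.norm(v); V[:, 0] = v / alpha
    for j in range(k):
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta
        v = A.T @ U[:, j + 1] - beta * V[:, j]
        alpha = np.linalg.norm(v); V[:, j + 1] = v / alpha
    return U, V

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 5)); b = rng.standard_normal(8)
U, V = bidiag1(A, b, 4)
print(np.linalg.norm(U.T @ U - np.eye(5)), np.linalg.norm(V.T @ V - np.eye(5)))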
The following properties, presented in [7], show that the procedure Bidiag 1 has the finite termination property.

Property 1. Suppose that $k$ steps of the procedure Bidiag 1 have been taken; then the vectors $u_1, u_2, \ldots, u_k$ and $v_1, v_2, \ldots, v_k$ form orthonormal bases of the Krylov subspaces $\mathcal{K}_k(A A^{T}, b)$ and $\mathcal{K}_k(A^{T} A, A^{T} b)$, respectively.

Property 2. The procedure Bidiag 1 will stop at step $m$ if and only if $m = \min\{\mu, \nu\}$, where $\mu$ is the grade of $b$ with respect to $A A^{T}$ and $\nu$ is the grade of $A^{T} b$ with respect to $A^{T} A$.

By using the procedure Bidiag 1, the LSMR method constructs an approximation solution of the form $x_k = V_k y_k$, where $V_k = [v_1, v_2, \ldots, v_k]$, which solves the least-squares problem $\min_{y_k} \| A^{T} r_k \|_2$, where $r_k = b - A x_k$ is the residual for the approximate solution $x_k$. The main steps of the LSMR algorithm can be summarized as shown in Algorithm 1.

Set $\beta_1 u_1 = b$, $\alpha_1 v_1 = A^{T} u_1$, $\bar{\alpha}_1 = \alpha_1$, $\bar{\zeta}_1 = \alpha_1 \beta_1$, $\rho_0 = 1$, $\bar{\rho}_0 = 1$, $\bar{c}_0 = 1$, $\bar{s}_0 = 0$, $h_1 = v_1$, $\bar{h}_0 = 0$, $x_0 = 0$
For $k = 1, 2, \ldots$, until convergence Do:
 $\beta_{k+1} u_{k+1} = A v_k - \alpha_k u_k$, $\alpha_{k+1} v_{k+1} = A^{T} u_{k+1} - \beta_{k+1} v_k$
 $\rho_k = (\bar{\alpha}_k^{2} + \beta_{k+1}^{2})^{1/2}$, $c_k = \bar{\alpha}_k / \rho_k$, $s_k = \beta_{k+1} / \rho_k$
 $\theta_{k+1} = s_k \alpha_{k+1}$, $\bar{\alpha}_{k+1} = c_k \alpha_{k+1}$
 $\bar{\theta}_k = \bar{s}_{k-1} \rho_k$, $\bar{\rho}_k = ((\bar{c}_{k-1} \rho_k)^{2} + \theta_{k+1}^{2})^{1/2}$, $\bar{c}_k = \bar{c}_{k-1} \rho_k / \bar{\rho}_k$, $\bar{s}_k = \theta_{k+1} / \bar{\rho}_k$
 $\zeta_k = \bar{c}_k \bar{\zeta}_k$, $\bar{\zeta}_{k+1} = -\bar{s}_k \bar{\zeta}_k$
 $\bar{h}_k = h_k - (\bar{\theta}_k \rho_k / (\rho_{k-1} \bar{\rho}_{k-1})) \bar{h}_{k-1}$
 $x_k = x_{k-1} + (\zeta_k / (\rho_k \bar{\rho}_k)) \bar{h}_k$
 $h_{k+1} = v_{k+1} - (\theta_{k+1} / \rho_k) h_k$
 If $|\bar{\zeta}_{k+1}|$ is small enough then stop
End Do.

More details about the LSMR algorithm can be found in [40].
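For experimentation, SciPy ships an implementation of this algorithm; a minimal usage sketch for the least-squares problem (6) with random data follows.

# Minimal usage of SciPy's LSMR for problem (6); the printed value of
# ||A^T r|| should be close to zero at convergence.
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)

result = lsmr(A, b, atol=1e-12, btol=1e-12)
x, istop, itn = result[0], result[1], result[2]
print(istop, itn, np.linalg.norm(A.T @ (b - A @ x)))   # ||A^T r|| is tiny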

The stopping criterion $\| r_k \|_2 \leq \varepsilon$ may be used for the compatible linear systems (5) and $\| A^{T} r_k \|_2 \leq \varepsilon$ for the least-squares problem (6). Other stopping criteria can also be used and are not listed here; the reader can see [40] for details. Clearly, the sequence $\{x_k\}$ generated by the LSMR algorithm converges to the unique minimum norm solution of (5) or the unique minimum norm least-squares solution of problem (6).

3. A Matrix LSMR Iterative Method

In this section, we present our matrix iterative method based on the LSMR algorithm for solving (1) and problem (2). For the unknown matrices $X_j \in \mathbb{R}^{m_j \times n_j}$, $j = 1, 2, \ldots, q$, by using the Kronecker product, (1) and problem (2) are equivalent to (5) and problem (6), respectively, with

$A = \begin{pmatrix} B_{11}^{T} \otimes A_{11} & \cdots & B_{1q}^{T} \otimes A_{1q} \\ \vdots & \ddots & \vdots \\ B_{p1}^{T} \otimes A_{p1} & \cdots & B_{pq}^{T} \otimes A_{pq} \end{pmatrix}, \qquad x = \begin{pmatrix} \operatorname{vec}(X_1) \\ \vdots \\ \operatorname{vec}(X_q) \end{pmatrix}, \qquad b = \begin{pmatrix} \operatorname{vec}(C_1) \\ \vdots \\ \operatorname{vec}(C_p) \end{pmatrix}.$

Hence, by using the invariance of the Frobenius norm under the vec operator, it is easy to prove that the vectors $u_k$, $v_k$, $x_k$, and $h_k$ ($\bar{h}_k$) in the LSMR algorithm can be rewritten in matrix form: $u_k$ corresponds to a matrix group $(U_1^{(k)}, \ldots, U_p^{(k)})$, while $v_k$, $x_k$, and $h_k$ ($\bar{h}_k$) correspond to matrix groups of the same sizes as the unknowns, and the two matrix-vector products of the algorithm become

$A v_k: \quad \sum_{j=1}^{q} A_{ij} V_j^{(k)} B_{ij}, \quad i = 1, \ldots, p, \qquad A^{T} u_k: \quad \sum_{i=1}^{p} A_{ij}^{T} U_i^{(k)} B_{ij}^{T}, \quad j = 1, \ldots, q.$

If the unknown matrices $X_j$ are generalized bisymmetric, where $P_j$ is a given symmetric orthogonal matrix, then (1) and problem (2) are equivalent to (5) and problem (6), respectively, restricted to the corresponding constraint subspace, and the matrix form of $A^{T} u_k$ is replaced by the orthogonal projection

$\frac{1}{4} \left( M_j + M_j^{T} + P_j M_j P_j + P_j M_j^{T} P_j \right), \qquad M_j = \sum_{i=1}^{p} A_{ij}^{T} U_i^{(k)} B_{ij}^{T},$

so that all iterates $V_j^{(k)}$, $H_j^{(k)}$, and $X_j^{(k)}$ remain generalized bisymmetric. If the unknown matrices $X_j$ are $(R_j, S_j)$-symmetric, where $R_j$ and $S_j$ are given symmetric orthogonal matrices, then the matrix form of $A^{T} u_k$ is replaced by the orthogonal projection

$\frac{1}{2} \left( M_j + R_j M_j S_j \right),$

with $M_j$ as above, so that the iterates remain $(R_j, S_j)$-symmetric.
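The following sketch (toy sizes, random data) verifies numerically that the two matrix-vector products of LSMR can indeed be evaluated on matrix groups without ever forming the Kronecker matrix:

# Check that A*v and A^T*u in Kronecker form agree with the matrix-form sums,
# which is what the matrix LSMR method exploits.
import numpy as np

rng = np.random.default_rng(8)
p = q = 2; m = n = r = s = 3
A_ = [[rng.standard_normal((r, m)) for _ in range(q)] for _ in range(p)]
B_ = [[rng.standard_normal((n, s)) for _ in range(q)] for _ in range(p)]
K = np.block([[np.kron(B_[i][j].T, A_[i][j]) for j in range(q)] for i in range(p)])

V = [rng.standard_normal((m, n)) for _ in range(q)]       # a "v" as a matrix group
v = np.concatenate([Vj.flatten(order="F") for Vj in V])
AV = [sum(A_[i][j] @ V[j] @ B_[i][j] for j in range(q)) for i in range(p)]
print(np.linalg.norm(K @ v - np.concatenate([M.flatten(order="F") for M in AV])))

U = [rng.standard_normal((r, s)) for _ in range(p)]       # a "u" as a matrix group
u = np.concatenate([Ui.flatten(order="F") for Ui in U])
ATU = [sum(A_[i][j].T @ U[i] @ B_[i][j].T for i in range(p)) for j in range(q)]
print(np.linalg.norm(K.T @ u - np.concatenate([M.flatten(order="F") for M in ATU])))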

If the unknown matrices $X_j$ are symmetric, $X_j = X_j^{T}$ (the set of symmetric matrices), then (1) and problem (2) are equivalent to (5) and problem (6), respectively, restricted to the symmetric subspace, and the matrix form of $A^{T} u_k$ is replaced by the symmetrization

$\frac{1}{2} \left( M_j + M_j^{T} \right), \qquad M_j = \sum_{i=1}^{p} A_{ij}^{T} U_i^{(k)} B_{ij}^{T},$

so that the iterates $V_j^{(k)}$, $H_j^{(k)}$, and $X_j^{(k)}$ remain symmetric.
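The constraint handling above can be summarized by three small projection helpers; these are generic orthogonal projections written for illustration, not code from the paper:

# Orthogonal projections onto the three constraint sets; each maps the
# unconstrained adjoint image M_j onto the corresponding subspace.
import numpy as np

def proj_symmetric(M):
    """Nearest symmetric matrix to M in the Frobenius norm."""
    return 0.5 * (M + M.T)

def proj_rs_symmetric(M, R, S):
    """Projection onto {X : R X S = X} for symmetric orthogonal R, S."""
    return 0.5 * (M + R @ M @ S)

def proj_gen_bisymmetric(M, P):
    """Projection onto {X : X = X^T, P X P = X} for symmetric orthogonal P."""
    return 0.25 * (M + M.T + P @ M @ P + P @ M.T @ P)

In the matrix-form algorithm, the chosen projection is applied to $M_j = \sum_{i} A_{ij}^{T} U_i B_{ij}^{T}$ before the normalization step.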

From the above results, we can obtain the matrix-form iteration of the LSMR algorithm for solving the constrained solution group of (1) and problem (2). When the unknown matrices are symmetric, the matrix-form iterative method is given as shown in Algorithm 2.

Set $\beta_1 = \left( \sum_{i=1}^{p} \| C_i \|^{2} \right)^{1/2}$, $U_i^{(1)} = C_i / \beta_1$, $i = 1, \ldots, p$,
 $\widetilde{V}_j = \frac{1}{2} \sum_{i=1}^{p} \left( A_{ij}^{T} U_i^{(1)} B_{ij}^{T} + B_{ij} (U_i^{(1)})^{T} A_{ij} \right)$, $j = 1, \ldots, q$,
 $\alpha_1 = \left( \sum_{j=1}^{q} \| \widetilde{V}_j \|^{2} \right)^{1/2}$, $V_j^{(1)} = \widetilde{V}_j / \alpha_1$,
Set $\bar{\alpha}_1 = \alpha_1$, $\bar{\zeta}_1 = \alpha_1 \beta_1$, $\rho_0 = 1$, $\bar{\rho}_0 = 1$, $\bar{c}_0 = 1$, $\bar{s}_0 = 0$, $H_j^{(1)} = V_j^{(1)}$, $\bar{H}_j^{(0)} = 0$, $X_j^{(0)} = 0$
For $k = 1, 2, \ldots$, until convergence Do:
 $\widetilde{U}_i = \sum_{j=1}^{q} A_{ij} V_j^{(k)} B_{ij} - \alpha_k U_i^{(k)}$, $\beta_{k+1} = \left( \sum_{i=1}^{p} \| \widetilde{U}_i \|^{2} \right)^{1/2}$, $U_i^{(k+1)} = \widetilde{U}_i / \beta_{k+1}$
 $\widetilde{V}_j = \frac{1}{2} \sum_{i=1}^{p} \left( A_{ij}^{T} U_i^{(k+1)} B_{ij}^{T} + B_{ij} (U_i^{(k+1)})^{T} A_{ij} \right) - \beta_{k+1} V_j^{(k)}$, $\alpha_{k+1} = \left( \sum_{j=1}^{q} \| \widetilde{V}_j \|^{2} \right)^{1/2}$, $V_j^{(k+1)} = \widetilde{V}_j / \alpha_{k+1}$
 $\rho_k = (\bar{\alpha}_k^{2} + \beta_{k+1}^{2})^{1/2}$, $c_k = \bar{\alpha}_k / \rho_k$, $s_k = \beta_{k+1} / \rho_k$, $\theta_{k+1} = s_k \alpha_{k+1}$, $\bar{\alpha}_{k+1} = c_k \alpha_{k+1}$
 $\bar{\theta}_k = \bar{s}_{k-1} \rho_k$, $\bar{\rho}_k = ((\bar{c}_{k-1} \rho_k)^{2} + \theta_{k+1}^{2})^{1/2}$, $\bar{c}_k = \bar{c}_{k-1} \rho_k / \bar{\rho}_k$, $\bar{s}_k = \theta_{k+1} / \bar{\rho}_k$, $\zeta_k = \bar{c}_k \bar{\zeta}_k$, $\bar{\zeta}_{k+1} = -\bar{s}_k \bar{\zeta}_k$
 $\bar{H}_j^{(k)} = H_j^{(k)} - (\bar{\theta}_k \rho_k / (\rho_{k-1} \bar{\rho}_{k-1})) \bar{H}_j^{(k-1)}$, $X_j^{(k)} = X_j^{(k-1)} + (\zeta_k / (\rho_k \bar{\rho}_k)) \bar{H}_j^{(k)}$, $H_j^{(k+1)} = V_j^{(k+1)} - (\theta_{k+1} / \rho_k) H_j^{(k)}$
 If $|\bar{\zeta}_{k+1}|$ is small enough then stop
End Do.
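An equivalent way to realize the computation performed by Algorithm 2, shown here as a sketch rather than the authors' implementation, is to wrap the coupled operator and its symmetrized adjoint in a SciPy LinearOperator and call the built-in lsmr; all names and sizes below are hypothetical.

# Sketch (not the authors' code): the symmetrized adjoint keeps every iterate
# in the symmetric subspace, so lsmr returns a symmetric solution group.
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsmr

rng = np.random.default_rng(4)
p = q = 2
n = 4                                              # each X_j is n x n symmetric
A_ = [[rng.standard_normal((n, n)) for _ in range(q)] for _ in range(p)]
B_ = [[rng.standard_normal((n, n)) for _ in range(q)] for _ in range(p)]
Xs = [0.5 * (M + M.T) for M in [rng.standard_normal((n, n)) for _ in range(q)]]
C = [sum(A_[i][j] @ Xs[j] @ B_[i][j] for j in range(q)) for i in range(p)]

def matvec(x):
    # action of the coupled operator on a vectorized matrix group
    X = [x[j * n * n:(j + 1) * n * n].reshape(n, n) for j in range(q)]
    return np.concatenate(
        [sum(A_[i][j] @ X[j] @ B_[i][j] for j in range(q)).ravel() for i in range(p)])

def rmatvec(y):
    # symmetrized adjoint: the 0.5 * (M + M^T) step of Algorithm 2
    U = [y[i * n * n:(i + 1) * n * n].reshape(n, n) for i in range(p)]
    out = []
    for j in range(q):
        M = sum(A_[i][j].T @ U[i] @ B_[i][j].T for i in range(p))
        out.append((0.5 * (M + M.T)).ravel())
    return np.concatenate(out)

op = LinearOperator((p * n * n, q * n * n), matvec=matvec, rmatvec=rmatvec, dtype=float)
x = lsmr(op, np.concatenate([Ci.ravel() for Ci in C]), atol=1e-14, btol=1e-14)[0]
Xsol = [x[j * n * n:(j + 1) * n * n].reshape(n, n) for j in range(q)]
print(max(np.linalg.norm(Xsol[j] - Xs[j]) for j in range(q)))  # small

Because every adjoint application is symmetrized, all iterates stay in the symmetric subspace, which is exactly why the method returns a symmetric solution group.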

4. The Solution Group of Problem (3)

Now, we consider the solution group of the matrix nearness problem (3) for a given matrix group $(\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_q)$, where the $\bar{X}_j$ are given symmetric matrices of the same sizes as the unknowns. If $X_j = \widetilde{X}_j + \bar{X}_j$, it is easy to prove that

$\sum_{j=1}^{q} \big\| X_j - \bar{X}_j \big\|^{2} = \sum_{j=1}^{q} \big\| \widetilde{X}_j \big\|^{2}.$

Let

$\widetilde{C}_i = C_i - \sum_{j=1}^{q} A_{ij} \bar{X}_j B_{ij}, \quad i = 1, \ldots, p;$

then problem (3) is equivalent to finding the minimum Frobenius norm symmetric solution group or the minimum Frobenius norm least-squares symmetric solution group of the following problems, respectively.

Compatible matrix equations are as follows:

$\sum_{j=1}^{q} A_{ij} \widetilde{X}_j B_{ij} = \widetilde{C}_i, \quad i = 1, 2, \ldots, p. \qquad (19)$

Least-squares problem is as follows:

$\min_{(\widetilde{X}_1, \ldots, \widetilde{X}_q)} \sum_{i=1}^{p} \Big\| \sum_{j=1}^{q} A_{ij} \widetilde{X}_j B_{ij} - \widetilde{C}_i \Big\|^{2}. \qquad (20)$

By the LSMR_SR_M method, we can get the minimum Frobenius norm symmetric solution group $(\widetilde{X}_1^{*}, \ldots, \widetilde{X}_q^{*})$ of (19) (or the minimum Frobenius norm least-squares symmetric solution group of problem (20)). Then, the optimal approximate solution group of problem (3) can be obtained as $\hat{X}_j = \widetilde{X}_j^{*} + \bar{X}_j$, $j = 1, \ldots, q$.
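A small sketch of this reduction for $p = q = 1$ with hypothetical data: the minimum-norm solution of the shifted system, added back to $\bar{X}$, solves the nearness problem.

# Nearness problem via the shifted system; A1 is short and wide so that the
# equation has many solutions and the minimum-norm choice matters.
import numpy as np

rng = np.random.default_rng(5)
A1 = rng.standard_normal((3, 4))
B1 = rng.standard_normal((4, 4))
Xbar = rng.standard_normal((4, 4))
C1 = A1 @ rng.standard_normal((4, 4)) @ B1       # a consistent right-hand side

K = np.kron(B1.T, A1)
ct = (C1 - A1 @ Xbar @ B1).flatten(order="F")    # shifted right-hand side
xt = np.linalg.pinv(K) @ ct                      # minimum-norm solution of K x = ct
Xhat = xt.reshape(4, 4, order="F") + Xbar        # optimal approximation to Xbar
print(np.linalg.norm(A1 @ Xhat @ B1 - C1))       # ~ 0: Xhat solves the equation

Among all solutions, Xhat is the closest to Xbar in the Frobenius norm precisely because the shifted solution has minimum norm.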

5. Numerical Examples

To compare the behavior of the proposed matrix method discussed in the previous section with the CGNE method [43] and the matrix LSQR iterative method (LSQR_M) [37], we present in this section numerical results for three examples. All the numerical computations are performed in MATLAB 7.

Example 1. Suppose that the coefficient matrices $A_{ij}$ and $B_{ij}$ of (1) are given. The right-hand side matrices $C_i$ are chosen such that the exact solution group is known in advance and is expressed in terms of $I$ and $E$, where $I$ and $E$ are the identity matrix and the matrix whose entries are all one, respectively.

In Figure 1, we display the convergence curves of the function

$r_k = \left( \sum_{i=1}^{p} \| R_i^{(k)} \|^{2} \right)^{1/2},$

where $R_i^{(k)}$, $i = 1, \ldots, p$, is the residual matrix of the $i$th equation at the $k$th iteration. The initial iterative matrices in all the iterative methods are chosen as zero matrices of suitable size. Figure 1 confirms that the proposed algorithm has a faster convergence rate and higher accuracy than the CGNE method and behavior similar to that of the matrix LSQR iterative method.

Example 2. Suppose that the coefficient matrices $A_{ij}$ and $B_{ij}$ of (1) are given. As in Example 1, the right-hand side matrices are chosen such that the exact solution group is known in terms of $I$ and $E$, and the initial iterative matrices in all the iterative methods are chosen as zero matrices of suitable size. In Figure 2, as in Figure 1, we display the convergence curves of the function $r_k$. This figure shows that the LSMR method outperforms the CGNE and LSQR methods.

Example 3 (see [45]). Consider the convection-diffusion equation with the Dirichlet boundary conditions

$- \Delta u + \alpha_1 \frac{\partial u}{\partial x} + \alpha_2 \frac{\partial u}{\partial y} = f \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial \Omega.$

Here $\Omega$ is the unit square $(0, 1) \times (0, 1)$. The operator was discretized using central finite differences on $\Omega$, with mesh size $h_1$ in the "$x$" direction and $h_2$ in the "$y$" direction. This yields a linear system of algebraic equations that can be written as a Sylvester matrix equation

$A X + X B = C \qquad (25)$

(a particular case of (1)), where the tridiagonal matrices $A$ and $B$ carry the centered-difference stencils in the "$x$" and "$y$" directions, respectively, and the right-hand side matrix $C$ is obtained from the source term $f$ and the boundary conditions. In this example, the functions were chosen such that the exact solution $u$ is known in closed form on the domain $\Omega$. In addition, we used the symmetric successive overrelaxation (SSOR) preconditioner for the matrix equation (25) to increase the convergence rate. It is easy to prove that the matrix equation (25) is equivalent to the linear system

$\mathcal{M} x = c, \qquad \mathcal{M} = I \otimes A + B^{T} \otimes I,$

where $x = \operatorname{vec}(X)$ and $c = \operatorname{vec}(C)$.
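A sketch of this setup with assumed stencil coefficients (the exact discretization constants of [45] are not reproduced here); a small instance can be validated against SciPy's dense Sylvester solver.

# Build tridiagonal 1D operators for -u'' + 2a u' with centered differences
# (assumed stencil) and check a small Sylvester instance with SciPy.
import numpy as np
from scipy.linalg import solve_sylvester

def tridiag(n, h, a):
    """Centered-difference 1D operator -u'' + 2a u' on n interior points."""
    main = np.full(n, 2.0 / h**2)
    lower = np.full(n - 1, -1.0 / h**2 - a / h)
    upper = np.full(n - 1, -1.0 / h**2 + a / h)
    return np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)

n1, n2, a1, a2 = 16, 16, 1.0, 2.0
h1, h2 = 1.0 / (n1 + 1), 1.0 / (n2 + 1)
A = tridiag(n1, h1, a1)
B = tridiag(n2, h2, a2).T          # one common convention: the y-operator enters transposed
C = np.ones((n1, n2))              # placeholder right-hand side

X = solve_sylvester(A, B, C)       # solves A X + X B = C
print(np.linalg.norm(A @ X + X @ B - C))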
The matrices $A$ and $B$ can be written as

$A = D_A - L_A - U_A, \qquad B = D_B - L_B - U_B,$

where $D_A$ is the diagonal of $A$ and $L_A$ and $U_A$ are the strict lower and upper parts of $A$, respectively (and similarly for $B$). Then the splitting of the matrix $\mathcal{M}$ is given as

$\mathcal{M} = \mathcal{D} - \mathcal{L} - \mathcal{U},$

with

$\mathcal{D} = I \otimes D_A + D_B \otimes I, \qquad \mathcal{L} = I \otimes L_A + U_B^{T} \otimes I, \qquad \mathcal{U} = I \otimes U_A + L_B^{T} \otimes I.$

Now instead of solving the matrix equation (25), we will apply the LSMR-M algorithm to the preconditioned system

$\mathcal{M} P^{-1} y = c, \qquad x = P^{-1} y,$

where $P$ is a preconditioner. As said, we use the SSOR preconditioner defined by

$P = \frac{1}{\omega (2 - \omega)} (\mathcal{D} - \omega \mathcal{L}) \mathcal{D}^{-1} (\mathcal{D} - \omega \mathcal{U}).$

We note that the matrix $\mathcal{M}$ is not used explicitly. We only use the action of the linear operator on a matrix $Z$, defined by $\mathcal{M}(Z) = A Z + Z B$. In addition, we use only matrix-by-vector products; then when using the SSOR preconditioner we have to compute, for a given $S$, the matrix $Z$ such that

$P \operatorname{vec}(Z) = \operatorname{vec}(S) \qquad (35)$

or

$P^{T} \operatorname{vec}(Z) = \operatorname{vec}(S). \qquad (36)$

With setting

$\operatorname{vec}(W) = \mathcal{D}^{-1} (\mathcal{D} - \omega \mathcal{U}) \operatorname{vec}(Z),$

the linear system (35) is equivalent to

$(\mathcal{D} - \omega \mathcal{L}) \operatorname{vec}(W) = \omega (2 - \omega) \operatorname{vec}(S), \qquad (\mathcal{D} - \omega \mathcal{U}) \operatorname{vec}(Z) = \mathcal{D} \operatorname{vec}(W).$

For computing $Z$ such that (35) holds, we have to solve the following matrix equations:

$(D_A - \omega L_A) W + W (D_B - \omega U_B) = \omega (2 - \omega) S, \qquad (39)$

$(D_A - \omega U_A) Z + Z (D_B - \omega L_B) = D_A W + W D_B. \qquad (41)$

The matrix equations (39) and (41) are also Sylvester matrix equations. But, as was stated in [45], since the matrices involved in these equations are triangular, they are solved easily. In (39), the matrix $W$ can be computed from left to right and from top to bottom in each column; this corresponds to forward substitution. Equation (41) is solved in the opposite sense, and this corresponds to backward substitution. Now, to complete the matrix-vector product in (35), it is sufficient to apply the operator $\mathcal{M}$ to the matrix $Z$, that is, to form $A Z + Z B$.

To compute $Z$ in (36), first, we use the action of the transposed operator on a matrix, defined by $\mathcal{M}^{T}(Z) = A^{T} Z + Z B^{T}$. Then, by setting

$\operatorname{vec}(W) = \mathcal{D}^{-1} (\mathcal{D} - \omega \mathcal{L})^{T} \operatorname{vec}(Z),$

the linear system (36) is equivalent to

$(\mathcal{D} - \omega \mathcal{U})^{T} \operatorname{vec}(W) = \omega (2 - \omega) \operatorname{vec}(S), \qquad (\mathcal{D} - \omega \mathcal{L})^{T} \operatorname{vec}(Z) = \mathcal{D} \operatorname{vec}(W).$

Therefore, $Z$ can be obtained by solving the following matrix equations:

$(D_A - \omega U_A)^{T} W + W (D_B - \omega L_B)^{T} = \omega (2 - \omega) S, \qquad (44)$

$(D_A - \omega L_A)^{T} Z + Z (D_B - \omega U_B)^{T} = D_A W + W D_B. \qquad (46)$

Similarly, the matrix equations (44) and (46) are also Sylvester matrix equations, and since the matrices involved in these equations are triangular, they too are solved easily. In (44), the matrix $W$ can be computed from left to right and from top to bottom in each column, which again corresponds to forward substitution, and (46) is solved in the opposite sense, corresponding to backward substitution.
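The column-sweep idea for a triangular Sylvester equation of the form of (39) can be sketched as follows (generic illustration, assuming well-conditioned diagonal shifts):

# Column-by-column solve of T1 W + W T2 = S with T1 lower triangular and
# T2 upper triangular: each column needs one triangular (forward) solve.
import numpy as np
from scipy.linalg import solve_triangular

def tri_sylvester(T1, T2, S):
    n = S.shape[1]
    W = np.zeros_like(S)
    for j in range(n):                        # sweep the columns left to right
        rhs = S[:, j] - W[:, :j] @ T2[:j, j]  # couple in columns already known
        # (T1 + T2[j, j] I) is lower triangular: forward substitution
        W[:, j] = solve_triangular(T1 + T2[j, j] * np.eye(T1.shape[0]),
                                   rhs, lower=True)
    return W

rng = np.random.default_rng(6)
T1 = np.tril(rng.standard_normal((5, 5))) + 5 * np.eye(5)
T2 = np.triu(rng.standard_normal((5, 5))) + 5 * np.eye(5)
S = rng.standard_normal((5, 5))
W = tri_sylvester(T1, T2, S)
print(np.linalg.norm(T1 @ W + W @ T2 - S))    # ~ 1e-14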

In Figure 3, we exhibit the function

$r_k = \| C - A X^{(k)} - X^{(k)} B \|$

versus the number of iterations for LSMR-M and SSOR-LSMR-M. Furthermore, we note that for computing the quantity $\| \mathcal{M}^{T} r_k \|$ ($r_k$ is the residual in the $k$th iteration) we used the pseudocode stated in [40]. These results were obtained for fixed choices of the mesh sizes $h_1$ and $h_2$, the convection coefficients, and the SSOR parameter $\omega$. The initial iterative matrix was chosen as a zero matrix of suitable size. As we observe, by using the SSOR preconditioner the convergence rate of the LSMR-M algorithm increases effectively.

6. Conclusion

Solving linear matrix equations is an attractive area of research. By extending the idea of the LSMR method, we have proposed Algorithm 2 to solve the coupled matrix equations (1) or the least-squares problem (2) over generalized symmetric matrices. With this new iterative method, by the selection of a special initial matrix group, we obtain the minimum Frobenius norm solutions or the minimum Frobenius norm least-squares solutions over generalized symmetric matrices. All the presented results show that the matrix LSMR iterative method is efficient for computing the solution group of the general coupled matrix equations.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors would like to thank the referees for their valuable remarks and helpful suggestions.