LSMR Iterative Method for General Coupled Matrix Equations

F. Toutounian, D. Khojasteh Salkuyeh, and M. Mojarrab

Journal of Applied Mathematics, vol. 2015, Article ID 562529, 12 pages, 2015. https://doi.org/10.1155/2015/562529

Academic Editor: D. R. Sahu
Received: 18 Mar 2014; Accepted: 28 Jul 2014; Published: 23 Mar 2015

Abstract

By extending the idea of the LSMR method, we present an iterative method to solve the general coupled matrix equations (including the generalized (coupled) Lyapunov and Sylvester matrix equations as special cases) over some constrained matrix groups, such as the symmetric, generalized bisymmetric, and (R, S)-symmetric matrix groups. By this iterative method, for any initial matrix group, a solution group can be obtained within a finite number of iteration steps in the absence of round-off errors, and the minimum Frobenius norm solution group or the minimum Frobenius norm least-squares solution group can be derived when an appropriate initial iterative matrix group is chosen. In addition, the optimal approximation solution group to a given matrix group in the Frobenius norm can be obtained by finding the least Frobenius norm solution group of new general coupled matrix equations. Finally, numerical examples are given to illustrate the effectiveness of the presented method.

1. Introduction

Lyapunov and Sylvester matrix equations play a fundamental role in control and system theory [1-7]. Owing to their importance, these matrix equations have been studied in a large body of papers [8-27]. By using the hierarchical identification principle [9-11, 28-32], a gradient-based iterative (GI) method was presented for computing the solutions and the least-squares solutions of the general coupled matrix equations. In [19, 33], Zhou et al. derived the optimal parameter of the GI method for computing the solutions and the weighted least-squares solutions of the general coupled matrix equations. Dehghan and Hajarian [34-36] introduced several iterative methods for solving various linear matrix equations.

In [12, 17], Huang et al. presented finite iterative algorithms for solving generalized coupled Sylvester systems. Li and Huang [37] proposed a matrix LSQR iterative method to compute the constrained solutions of the generalized coupled Sylvester matrix equations. Hajarian [38] presented the generalized QMRCGSTAB algorithm for solving Sylvester-transpose matrix equations. Recently, Lin and Simoncini [39] established minimal residual methods for large-scale Lyapunov equations; they explored the numerical solution of this class of linear matrix equations when a minimal residual (MR) condition is used during the projection step.

In this paper, we construct a matrix iterative method based on the LSMR algorithm [40] to compute the constrained solutions of the following problems.

Compatible matrix equations are as follows:

(1)

The least-squares problem is as follows:

(2)

The matrix nearness problem is as follows:

(3)

Here the coefficient matrices and right-hand side matrices are constant matrices with suitable dimensions, the matrices appearing in (1) and (2) are the unknown matrices to be determined, the matrices appearing in (3) are given, and the constraint set in (3) is the solution set of (1) or of problem (2).
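For orientation, a commonly used formulation of these three problems is sketched below in LaTeX; the symbols A_{ij}, B_{ij}, C_i, X_j, \bar{X}_j and the index ranges are representative choices and need not match the paper's exact notation.

\text{(compatible equations)} \quad
  \sum_{j=1}^{q} A_{ij} X_j B_{ij} = C_i, \qquad i = 1, \ldots, p,

\text{(least-squares problem)} \quad
  \min_{X_1,\ldots,X_q} \ \sum_{i=1}^{p} \Bigl\| \sum_{j=1}^{q} A_{ij} X_j B_{ij} - C_i \Bigr\|_F^{2},

\text{(matrix nearness problem)} \quad
  \min_{(X_1,\ldots,X_q) \in S} \ \sum_{j=1}^{q} \bigl\| X_j - \bar{X}_j \bigr\|_F^{2},

where S denotes the solution set of the first or of the second problem and \bar{X}_1, \ldots, \bar{X}_q are the given target matrices.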

This paper is organized as follows. In Section 2, we briefly review the LSMR algorithm for solving linear systems of equations. In Section 3, we propose the matrix LSMR iterative algorithms for solving problems (1) and (2). In Section 4, we solve problem (3) by finding the minimum Frobenius norm solution group of the corresponding new general coupled matrix equations. In Section 5, numerical examples are given to illustrate the efficiency of the proposed iterative method. Finally, we make some concluding remarks in Section 6.

The notation used in this paper can be summarized as follows. tr(·) represents the trace of a matrix. The symbol ⊗ denotes the Kronecker product, and the Frobenius inner product of two matrices of the same size induces the Frobenius norm ||·||_F. The vector operator vec(·) stacks the columns of a matrix into a single long column vector. The generalized bisymmetric matrices, the (R, S)-symmetric matrices, and the symmetric orthogonal matrices can be defined as follows.
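In LaTeX form, the standard definitions behind this notation, together with the Kronecker-product identity for the vec operator that Section 3 relies on, read as follows (the letters A, B, X, and x_k are generic):

\langle A, B \rangle = \operatorname{tr}(B^{T} A), \qquad
\|A\|_F = \sqrt{\operatorname{tr}(A^{T} A)} = \sqrt{\langle A, A \rangle},

\operatorname{vec}(X) = \bigl(x_1^{T}, x_2^{T}, \ldots, x_n^{T}\bigr)^{T}
\quad (x_k \text{ the } k\text{th column of } X), \qquad
\operatorname{vec}(A X B) = (B^{T} \otimes A)\, \operatorname{vec}(X).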

Definition 1 (see [41]). A real square matrix P is said to be a symmetric orthogonal matrix if P^T = P and P^T P = I.

Definition 2 (see [42]). For given symmetric orthogonal matrices R and S of suitable sizes, a matrix A is said to be (R, S)-symmetric if RAS = A.

Definition 3 (see [43]). For a given symmetric orthogonal matrix P, a matrix A is said to be a generalized bisymmetric matrix (with respect to P) if A^T = A and PAP = A.
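As a concrete illustration (our example, not the paper's): the exchange matrix J is symmetric orthogonal, and taking P = J makes the generalized bisymmetric matrices coincide with the classical bisymmetric (symmetric and centrosymmetric) matrices:

J = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \qquad J^{T} = J, \qquad J^{2} = I,

and, for a square matrix A, the conditions A^{T} = A and JAJ = A together say exactly that A is bisymmetric.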

2. LSMR Algorithm

In this section, we briefly review some fundamental properties of the LSMR algorithm [40], which is an iterative method for computing a solution to either of the following problems.

Compatible linear systems are as follows:

(5) Ax = b.

The least-squares problem is as follows:

(6) min_x ||Ax - b||_2,

where A is a real m × n matrix and b is a real vector of length m. The LSMR algorithm uses an algorithm of Golub and Kahan [44], referred to as procedure Bidiag 4, to reduce A to lower bidiagonal form. The procedure Bidiag 4 can be described as follows.

Bidiag 4 (starting vector b; reduction to lower bidiagonal form). The scalars α_i ≥ 0 and β_i ≥ 0 are chosen such that the generated vectors u_i and v_i have unit 2-norm.
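For reference, the lower-bidiagonalization recurrences of Golub and Kahan that underlie this procedure can be written in LaTeX as follows (standard notation, with A and b taken from (5) and (6)):

\beta_1 u_1 = b, \qquad \alpha_1 v_1 = A^{T} u_1,

\beta_{i+1} u_{i+1} = A v_i - \alpha_i u_i, \qquad
\alpha_{i+1} v_{i+1} = A^{T} u_{i+1} - \beta_{i+1} v_i, \qquad i = 1, 2, \ldots,

where each \alpha_i, \beta_i \ge 0 is chosen so that \|u_i\|_2 = \|v_i\|_2 = 1.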
The following properties, presented in [7], show that the procedure Bidiag 4 has the finite termination property.

Property 1. Suppose that k steps of the procedure Bidiag 4 have been taken; then the vectors u_1, ..., u_k and v_1, ..., v_k form orthonormal bases of two Krylov subspaces, respectively (identified below).
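In the illustrative notation of the recurrences above, these are the standard Krylov subspaces generated by the Golub-Kahan process:

\operatorname{span}\{u_1,\dots,u_k\} = \mathcal{K}_k\left(AA^{T},\, b\right), \qquad
\operatorname{span}\{v_1,\dots,v_k\} = \mathcal{K}_k\left(A^{T}A,\, A^{T}b\right).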

Property 2. The procedure Bidiag 4 will stop at step if and only if is , where is the grade of with respect to and is the grade of with respect to .
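The notion of grade used in Property 2 is the standard one; in LaTeX form, the grade of a vector w with respect to a matrix M is

\operatorname{grade}(M, w) = \min\{\, k \ :\ \mathcal{K}_{k+1}(M, w) = \mathcal{K}_{k}(M, w) \,\},

that is, the step at which the Krylov subspace stops growing; the procedure therefore terminates after a number of steps equal to the smaller of the two grades associated with AA^{T} and A^{T}A (in the illustrative notation above).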

By using the procedure Bidiag 4, the LSMR method constructs an approximate solution of the form x_k = V_k y_k, where V_k = [v_1, ..., v_k], which solves a least-squares problem posed in terms of the residual r_k = b - A x_k of the approximate solution. The main steps of the LSMR algorithm are summarized in Algorithm 1.

Algorithm 1 (LSMR).
Set the initial vectors and scalars of the Golub and Kahan bidiagonalization and of the LSMR recurrences.
For k = 1, 2, ..., until convergence Do:
 If the stopping quantity is small enough then stop
End Do.
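In compact form, and in the illustrative notation used above (which need not match the paper's), the iterate produced by this loop is the standard LSMR iterate

x_k = V_k y_k, \qquad V_k = [\,v_1, \dots, v_k\,], \qquad
y_k = \arg\min_{y \in \mathbb{R}^{k}} \bigl\| A^{T} (b - A V_k y) \bigr\|_2,

that is, x_k minimizes \|A^{T} r\|_2 over the Krylov subspace \mathcal{K}_k(A^{T}A, A^{T}b).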

More details about the LSMR algorithm can be found in [40].

The stopping criterion ||r_k||_2 ≤ ε may be used for the compatible linear systems (5) and ||A^T r_k||_2 ≤ ε for the least-squares problem (6), where r_k is the residual of the kth iterate and ε is a prescribed tolerance. Other stopping criteria can also be used and are not listed here; the reader can see [40] for details. Clearly, the sequence generated by the LSMR algorithm converges to the unique minimum norm solution of (5) or the unique minimum norm least-squares solution of problem (6).
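As a small, self-contained illustration (ours, not the paper's), the following Python sketch uses the LSMR implementation in SciPy, scipy.sparse.linalg.lsmr, on a random overdetermined problem and checks that the quantity ||A^T r||_2 monitored by the least-squares stopping test is indeed small at the computed solution.

# A minimal sketch (not from the paper): running LSMR on a dense least-squares
# problem via SciPy's implementation and checking the normal-equations residual.
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))   # overdetermined coefficient matrix
b = rng.standard_normal(100)         # generally incompatible right-hand side

# Solve min_x ||A x - b||_2; atol and btol control the stopping tests.
x, istop, itn, normr, normar, *_ = lsmr(A, b, atol=1e-12, btol=1e-12)

r = b - A @ x
print("iterations  :", itn)
print("||r||_2     :", np.linalg.norm(r))        # nonzero: the system is incompatible
print("||A^T r||_2 :", np.linalg.norm(A.T @ r))  # close to zero at a least-squares solution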

3. A Matrix LSMR Iterative Method

In this section, we present our matrix iterative method, based on the LSMR algorithm, for solving (1) and problem (2). For the unknown matrices, by using the Kronecker product, (1) and problem (2) are equivalent to (5) and problem (6), respectively, with

Hence, by using the invariance of the Frobenius norm under unitary transformations, it is easy to prove that the vector quantities appearing in the LSMR algorithm can be rewritten in matrix form.

If the unknown matrices belong to the first of the constrained matrix groups considered, then (1) and problem (2) are again equivalent to (5) and problem (6), respectively, with suitably modified coefficient data; hence, the vector quantities of the LSMR algorithm can be rewritten in matrix form.

If the unknown matrices belong to the second of the constrained matrix groups considered, then (1) and problem (2) are likewise equivalent to (5) and problem (6), respectively, and the vector quantities of the LSMR algorithm can be rewritten in matrix form.

If the unknown matrices belong to the set of symmetric matrices, then (1) and problem (2) are equivalent to (5) and problem (6), respectively, with suitably modified coefficient data; hence, the vector quantities of the LSMR algorithm can be rewritten in matrix form, as illustrated by the sketch below.
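The following Python sketch (ours, not the paper's Algorithm 2) makes the vectorization argument concrete for the representative unconstrained form sum_j A_ij X_j B_ij = C_i introduced earlier: it assembles the Kronecker-product coefficient matrix via vec(AXB) = (B^T ⊗ A) vec(X) and solves the resulting linear least-squares problem with SciPy's LSMR.

# A minimal sketch (not the paper's matrix-form algorithm): solving a small coupled
# matrix equation  sum_j A[i][j] X[j] B[i][j] = C[i]  through the vec/Kronecker
# reformulation and SciPy's LSMR.  All data below are random and purely illustrative.
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(1)
p, q, n = 2, 2, 4                      # p equations, q unknown n-by-n matrices
A = [[rng.standard_normal((n, n)) for _ in range(q)] for _ in range(p)]
B = [[rng.standard_normal((n, n)) for _ in range(q)] for _ in range(p)]
X_true = [rng.standard_normal((n, n)) for _ in range(q)]
C = [sum(A[i][j] @ X_true[j] @ B[i][j] for j in range(q)) for i in range(p)]

# vec(A X B) = (B^T kron A) vec(X): build the block coefficient matrix and stack the C_i.
M = np.block([[np.kron(B[i][j].T, A[i][j]) for j in range(q)] for i in range(p)])
c = np.concatenate([C[i].reshape(-1, order="F") for i in range(p)])

x, *_ = lsmr(M, c, atol=1e-12, btol=1e-12)
X = [x[j * n * n:(j + 1) * n * n].reshape((n, n), order="F") for j in range(q)]

# Check the residual of the recovered solution group.
residual = sum(np.linalg.norm(sum(A[i][j] @ X[j] @ B[i][j] for j in range(q)) - C[i])
               for i in range(p))
print("residual of the coupled equations:", residual)   # close to zero

The matrix-form iteration described in this section achieves the same effect without forming the Kronecker matrices explicitly, which is the practical advantage of the matrix formulation.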

From the above results, we can obtain the matrix-form iterative method of the LSMR algorithm for computing the constrained solution group of (1) and problem (2). When the unknown matrices belong to such a constrained matrix group, the matrix-form iterative method is given in Algorithm 2.

Algorithm 2 (matrix-form LSMR).
Set the initial matrix group and compute the corresponding initial quantities of the matrix-form bidiagonalization.
Set the initial scalars of the LSMR recurrences.
For k = 1, 2, ..., until convergence Do: