Journal of Applied Mathematics

Volume 2015, Article ID 562529, 12 pages

http://dx.doi.org/10.1155/2015/562529

## LSMR Iterative Method for General Coupled Matrix Equations

^{1}Department of Applied Mathematics, School of Mathematical Sciences, Ferdowsi University of Mashhad, Iran

^{2}The Center of Excellence on Modelling and Control Systems, Ferdowsi University of Mashhad, Iran

^{3}Faculty of Mathematical Sciences, University of Guilan, Iran

Received 18 March 2014; Accepted 28 July 2014

Academic Editor: D. R. Sahu

Copyright © 2015 F. Toutounian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

By extending the idea of the LSMR method, we present an iterative method to solve the general coupled matrix equations $\sum_{j=1}^{q} A_{ij} X_j B_{ij} = C_i$, $i = 1, 2, \ldots, p$ (including the generalized (coupled) Lyapunov and Sylvester matrix equations as special cases) over some constrained matrix groups $(X_1, X_2, \ldots, X_q)$, such as symmetric, generalized bisymmetric, and $(R, S)$-symmetric matrix groups. By this iterative method, for any initial matrix group, a solution group can be obtained within a finite number of iteration steps in the absence of round-off errors, and the minimum Frobenius norm solution or the minimum Frobenius norm least-squares solution group can be derived when an appropriate initial iterative matrix group is chosen. In addition, the optimal approximation solution group to a given matrix group $(\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_q)$ in the Frobenius norm can be obtained by finding the least Frobenius norm solution group of new general coupled matrix equations. Finally, numerical examples are given to illustrate the effectiveness of the presented method.

#### 1. Introduction

In control and system theory [1–7], Lyapunov and Sylvester matrix equations play a fundamental role. Owing to this importance, these matrix equations have been studied in a large body of papers [8–27]. By using the hierarchical identification principle [9–11, 28–32], a gradient-based iterative (GI) method was presented to compute the solutions and the least-squares solutions of the general coupled matrix equations. In [19, 33], Zhou et al. deduced the optimal parameter of the GI method for computing the solutions and the weighted least-squares solutions of the general coupled matrix equations. Dehghan and Hajarian [34–36] introduced several iterative methods to solve various linear matrix equations.

In [12, 17], Huang et al. presented finite iterative algorithms for solving generalized coupled Sylvester systems. Li and Huang [37] proposed a matrix LSQR iterative method to solve the constrained solutions of the generalized coupled Sylvester matrix equations. Hajarian [38] presented the generalized QMRCGSTAB algorithm for solving Sylvester-transpose matrix equations. Recently, Lin and Simoncini [39] established minimal residual methods for large scale Lyapunov equations. They explored the numerical solution of this class of linear matrix equations when a minimal residual (MR) condition is used during the projection step.

In this paper, we construct a matrix iterative method based on the LSMR algorithm [40] to compute the constrained solutions of the following problems.

Compatible matrix equations are as follows:
$$\sum_{j=1}^{q} A_{ij} X_j B_{ij} = C_i, \quad i = 1, 2, \ldots, p. \tag{1}$$
Least-squares problem is as follows:
$$\min_{X_1, \ldots, X_q} \sum_{i=1}^{p} \Bigl\| \sum_{j=1}^{q} A_{ij} X_j B_{ij} - C_i \Bigr\|^2. \tag{2}$$
Matrix nearness problem is as follows:
$$\min_{(X_1, \ldots, X_q) \in S_E} \sum_{j=1}^{q} \bigl\| X_j - \bar{X}_j \bigr\|^2, \tag{3}$$
where $A_{ij}$, $B_{ij}$, and $C_i$ ($i = 1, 2, \ldots, p$, $j = 1, 2, \ldots, q$) are constant matrices with suitable dimensions, $X_1, X_2, \ldots, X_q$ are unknown matrices to be solved for, $\bar{X}_1, \bar{X}_2, \ldots, \bar{X}_q$ are given matrices, and $S_E$ is the solution set of (1) or problem (2).
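As an illustration, a special case of family (1) with a single unknown, $AXB = C$, can be vectorized via the Kronecker identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$ and then handed to an ordinary least-squares solver. The NumPy sketch below uses an arbitrary small instance (not taken from the paper) to illustrate this reduction:

```python
import numpy as np

# Hypothetical small instance of a single equation A X B = C from family (1):
# vectorizing with vec(A X B) = (B^T kron A) vec(X) turns it into an ordinary
# linear system that a least-squares solver can handle.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X_true = rng.standard_normal((3, 3))
C = A @ X_true @ B                      # consistent right-hand side

K = np.kron(B.T, A)                     # coefficient matrix of the vectorized system
x, *_ = np.linalg.lstsq(K, C.flatten(order="F"), rcond=None)
X = x.reshape(3, 3, order="F")          # fold vec(X) back into matrix form

print(np.allclose(A @ X @ B, C))        # the recovered X solves the equation
```

For coupled equations with several unknowns, the same identity stacks the vectorized blocks into one large system; the matrix methods of this paper avoid forming that Kronecker matrix explicitly.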

This paper is organized as follows. In Section 2, we will briefly review the LSMR algorithm for solving linear systems of equations. In Section 3, we propose the matrix LSMR iterative algorithms for solving the problems (1)-(2). In Section 4, we solve the problem (3) by finding the minimum Frobenius norm solution group of the corresponding new general coupled matrix equations. In Section 5, numerical examples are given to illustrate the efficiency of the proposed iterative method. Finally, we make some concluding remarks in Section 6.

The notations used in this paper can be summarized as follows. $\operatorname{tr}(A)$ represents the trace of the matrix $A$. For $A, B \in \mathbb{R}^{m \times n}$, the notation $A \otimes B$ is the Kronecker product and $\langle A, B \rangle = \operatorname{tr}(B^T A)$ is the inner product, with the Frobenius norm $\|A\| = \langle A, A \rangle^{1/2}$. The use of $\operatorname{vec}(A)$ represents the vector operator defined as $\operatorname{vec}(A) = (a_1^T, a_2^T, \ldots, a_n^T)^T$, where $a_i$ is the $i$th column of $A$. The generalized bisymmetric matrices, the $(R, S)$-symmetric matrices, and the symmetric orthogonal matrices can be defined as follows.
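As a quick numerical check of this notation, the following NumPy snippet (using arbitrary small matrices, not tied to any example in the paper) verifies the trace inner product, the induced Frobenius norm, and the column-stacking vec operator:

```python
import numpy as np

# Numerical check of the notation: <A, B> = tr(B^T A), the induced Frobenius
# norm ||A|| = <A, A>^(1/2), and vec(A) stacking the columns of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 3))

inner = np.trace(B.T @ A)                # <A, B> = tr(B^T A)
print(np.isclose(inner, np.sum(A * B)))  # equals the entrywise sum of A .* B

fro = np.sqrt(np.trace(A.T @ A))         # ||A|| derived from the inner product
print(np.isclose(fro, np.linalg.norm(A, "fro")))

vecA = A.flatten(order="F")              # vec(A): columns a_1, ..., a_n stacked
print(np.allclose(vecA[:4], A[:, 0]))    # the first block is the first column
```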

*Definition 1 (see [41]). *A matrix $P \in \mathbb{R}^{n \times n}$ is said to be a symmetric orthogonal matrix if $P^T = P$ and $P^T P = I$.

*Definition 2 (see [42]). *For given symmetric orthogonal matrices $R \in \mathbb{R}^{m \times m}$ and $S \in \mathbb{R}^{n \times n}$, we say a matrix $A \in \mathbb{R}^{m \times n}$ is $(R, S)$-symmetric if $RAS = A$.

*Definition 3 (see [43]). *For a given symmetric orthogonal matrix $P \in \mathbb{R}^{n \times n}$, a matrix $A \in \mathbb{R}^{n \times n}$ is said to be a generalized bisymmetric matrix if $A^T = A$ and $PAP = A$.
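The three definitions translate directly into numerical predicates. The sketch below checks the defining conditions up to a floating-point tolerance; the signed-diagonal choice of $P$ and the test matrix $A$ are hypothetical illustrations, not examples from the paper:

```python
import numpy as np

# Predicates for the constraint classes of Definitions 1-3.
def is_symmetric_orthogonal(P, tol=1e-12):
    # Definition 1: P^T = P and P^T P = I
    return np.allclose(P, P.T, atol=tol) and np.allclose(P.T @ P, np.eye(len(P)), atol=tol)

def is_RS_symmetric(A, R, S, tol=1e-12):
    # Definition 2: R A S = A for symmetric orthogonal R, S
    return np.allclose(R @ A @ S, A, atol=tol)

def is_generalized_bisymmetric(A, P, tol=1e-12):
    # Definition 3: A^T = A and P A P = A
    return np.allclose(A, A.T, atol=tol) and np.allclose(P @ A @ P, A, atol=tol)

P = np.diag([1.0, -1.0, 1.0])            # a symmetric orthogonal matrix
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 4.0]])          # symmetric, and P A P = A here
print(is_symmetric_orthogonal(P), is_generalized_bisymmetric(A, P))
```

Any signed permutation matrix is symmetric orthogonal when it equals its own transpose, which makes such matrices convenient test cases for these constraint sets.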

#### 2. LSMR Algorithm

In this section, we briefly review some fundamental properties of the LSMR algorithm [40], which is an iterative method for computing a solution to either of the following problems.

Compatible linear systems are as follows:
$$Ax = b.$$
Least-squares problem is as follows:
$$\min_x \|Ax - b\|_2,$$
where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. The LSMR algorithm uses an algorithm of Golub and Kahan [44], which is stated as procedure Bidiag 4, to reduce $A$ to lower bidiagonal form. The procedure Bidiag 4 can be described as follows.

*Bidiag 4 (starting vector $b$; reduction to lower bidiagonal form). *Consider
$$\beta_1 u_1 = b, \qquad \alpha_1 v_1 = A^T u_1,$$
$$\beta_{k+1} u_{k+1} = A v_k - \alpha_k u_k, \qquad \alpha_{k+1} v_{k+1} = A^T u_{k+1} - \beta_{k+1} v_k, \qquad k = 1, 2, \ldots.$$
The scalars $\alpha_k \geq 0$ and $\beta_k \geq 0$ are chosen such that $\|u_k\|_2 = \|v_k\|_2 = 1$.
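These recurrences amount to the classical Golub–Kahan bidiagonalization. The NumPy sketch below implements them directly (the matrix, starting vector, and step count are arbitrary illustrations, not the paper's notation) and checks the orthonormality of the generated vectors:

```python
import numpy as np

# A sketch of the Golub-Kahan recurrences above:
#   beta_1 u_1 = b,  alpha_1 v_1 = A^T u_1,
#   beta_{k+1} u_{k+1} = A v_k - alpha_k u_k,
#   alpha_{k+1} v_{k+1} = A^T u_{k+1} - beta_{k+1} v_k,
# with alpha_k, beta_k >= 0 chosen so that ||u_k|| = ||v_k|| = 1.
def golub_kahan(A, b, steps):
    m, n = A.shape
    U = np.zeros((m, steps + 1))
    V = np.zeros((n, steps))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    alpha = np.linalg.norm(A.T @ U[:, 0])
    V[:, 0] = A.T @ U[:, 0] / alpha
    for k in range(steps):
        u = A @ V[:, k] - alpha * U[:, k]
        beta = np.linalg.norm(u)
        U[:, k + 1] = u / beta
        if k + 1 < steps:
            v = A.T @ U[:, k + 1] - beta * V[:, k]
            alpha = np.linalg.norm(v)
            V[:, k + 1] = v / alpha
    return U, V

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
U, V = golub_kahan(A, rng.standard_normal(6), 3)
print(np.allclose(U.T @ U, np.eye(4)), np.allclose(V.T @ V, np.eye(3)))
```

The orthonormality of the columns of $U$ and $V$ observed here is exactly what Property 1 below asserts in exact arithmetic.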

The following properties, presented in [7], illustrate that the procedure Bidiag 4 has the finite termination property.

*Property 1. *Suppose that $k$ steps of the procedure Bidiag 4 have been taken; then the vectors $u_1, u_2, \ldots, u_k$ and $v_1, v_2, \ldots, v_k$ are orthonormal bases of the Krylov subspaces $\mathcal{K}_k(AA^T, u_1)$ and $\mathcal{K}_k(A^T A, v_1)$, respectively.

*Property 2. *The procedure Bidiag 4 will stop at step $k$ if and only if $k$ is $\min\{\mu, \nu\}$, where $\mu$ is the grade of $u_1$ with respect to $AA^T$ and $\nu$ is the grade of $v_1$ with respect to $A^T A$.

By using the procedure Bidiag 4, the LSMR method constructs an approximate solution of the form $x_k = V_k y_k$, where $V_k = [v_1, v_2, \ldots, v_k]$, which solves the least-squares problem $\min_{y_k} \|A^T r_k\|_2$, where $r_k = b - A x_k$ is the residual for the approximate solution $x_k$. The main steps of the LSMR algorithm can be summarized as shown in Algorithm 1.
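A reference implementation of LSMR [40] is available as `scipy.sparse.linalg.lsmr`. The sketch below (an arbitrary overdetermined system, not an example from the paper) illustrates the minimization property just stated: at convergence the residual $r_k = b - Ax_k$ satisfies $A^T r_k \approx 0$, the normal-equations condition that LSMR drives to zero:

```python
import numpy as np
from scipy.sparse.linalg import lsmr

# LSMR minimizes ||A^T r_k||_2 over a Krylov subspace, so it applies equally
# to compatible and inconsistent systems.
rng = np.random.default_rng(3)
A = rng.standard_normal((8, 5))
b = rng.standard_normal(8)               # generically inconsistent (m > n)

x, istop, itn, normr, normar, *_ = lsmr(A, b, atol=1e-12, btol=1e-12)

# At a least-squares solution the residual is orthogonal to range(A),
# i.e. A^T (b - A x) = 0 -- exactly the quantity LSMR drives to zero.
print(np.allclose(A.T @ (b - A @ x), 0, atol=1e-8))
```

This orthogonality test, rather than a small residual norm, is the appropriate convergence check for inconsistent systems, which is the motivation for LSMR's choice of objective.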