Journal of Applied Mathematics


Research Article | Open Access

Volume 2012 | Article ID 492951 | 20 pages | https://doi.org/10.1155/2012/492951

An Iterative Algorithm for the Generalized Reflexive Solution of the Matrix Equations AXB = E, CXD = F

Academic Editor: Jinyun Yuan
Received: 21 Dec 2011
Revised: 30 Apr 2012
Accepted: 16 May 2012
Published: 15 Jul 2012

Abstract

An iterative algorithm is constructed to solve the linear matrix equation pair $AXB=E$, $CXD=F$ over generalized reflexive matrices $X$. When the matrix equation pair $AXB=E$, $CXD=F$ is consistent over a generalized reflexive matrix $X$, then for any generalized reflexive initial iterative matrix $X_1$ the generalized reflexive solution can be obtained by the iterative algorithm within finitely many iteration steps in the absence of round-off errors. The unique least-norm generalized reflexive iterative solution of the matrix equation pair can be derived when an appropriate initial iterative matrix is chosen. Furthermore, the optimal approximate solution of $AXB=E$, $CXD=F$ for a given generalized reflexive matrix $X_0$ can be derived by finding the least-norm generalized reflexive solution of a new corresponding matrix equation pair $A\widetilde{X}B=\widetilde{E}$, $C\widetilde{X}D=\widetilde{F}$ with $\widetilde{E}=E-AX_0B$, $\widetilde{F}=F-CX_0D$. Finally, several numerical examples are given to illustrate that our iterative algorithm is effective.

1. Introduction

Let $\mathcal{R}^{m\times n}$ denote the set of all $m\times n$ real matrices, and let $I_n$ denote the identity matrix of order $n$. Let $P\in\mathcal{R}^{m\times m}$ and $Q\in\mathcal{R}^{n\times n}$ be two real generalized reflection matrices, that is, $P^T=P$, $P^2=I_m$, $Q^T=Q$, $Q^2=I_n$. A matrix $A\in\mathcal{R}^{m\times n}$ is called a generalized reflexive matrix with respect to the matrix pair $(P,Q)$ if $PAQ=A$. For more properties and applications of generalized reflexive matrices, we refer to [1, 2]. The set of all $m\times n$ real generalized reflexive matrices with respect to the matrix pair $(P,Q)$ is denoted by $\mathcal{R}_r^{m\times n}(P,Q)$. The superscript $T$ denotes the transpose of a matrix. In the matrix space $\mathcal{R}^{m\times n}$, we define the inner product $\langle A,B\rangle=\mathrm{tr}(B^TA)$ for all $A,B\in\mathcal{R}^{m\times n}$; $\|A\|$ represents the Frobenius norm of $A$; $\mathcal{R}(A)$ represents the column space of $A$; $\mathrm{vec}(\cdot)$ represents the vec operator, that is, $\mathrm{vec}(A)=(a_1^T,a_2^T,\ldots,a_n^T)^T\in\mathcal{R}^{mn}$ for the matrix $A=(a_1,a_2,\ldots,a_n)\in\mathcal{R}^{m\times n}$, $a_i\in\mathcal{R}^m$, $i=1,2,\ldots,n$; and $A\otimes B$ stands for the Kronecker product of the matrices $A$ and $B$.
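The generalized reflexive structure above is easy to handle numerically: because $P^2=I_m$ and $Q^2=I_n$, the map $M\mapsto(M+PMQ)/2$ projects any matrix onto $\mathcal{R}_r^{m\times n}(P,Q)$. The following minimal sketch (our illustration, not part of the paper, assuming NumPy; the function names are ours) records this check and projection.

```python
# A minimal sketch (not from the paper) of the generalized reflexive structure
# defined above, assuming NumPy; the function names are our own.
import numpy as np

def is_generalized_reflection(P, tol=1e-12):
    """Return True if P is a generalized reflection matrix: P^T = P and P^2 = I."""
    n = P.shape[0]
    return np.allclose(P, P.T, atol=tol) and np.allclose(P @ P, np.eye(n), atol=tol)

def is_generalized_reflexive(A, P, Q, tol=1e-12):
    """Return True if A belongs to R_r^{m x n}(P, Q), i.e. P A Q = A."""
    return np.allclose(P @ A @ Q, A, atol=tol)

def reflexive_part(M, P, Q):
    """Project an arbitrary m x n matrix M onto R_r^{m x n}(P, Q).
    Because P^2 = I and Q^2 = I, the matrix (M + P M Q)/2 satisfies P(.)Q = (.)."""
    return 0.5 * (M + P @ M @ Q)
```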

In this paper, we will consider the following two problems.

Problem 1. For given matrices $A\in\mathcal{R}^{p\times m}$, $B\in\mathcal{R}^{n\times q}$, $C\in\mathcal{R}^{s\times m}$, $D\in\mathcal{R}^{n\times t}$, $E\in\mathcal{R}^{p\times q}$, $F\in\mathcal{R}^{s\times t}$, find a matrix $X\in\mathcal{R}_r^{m\times n}(P,Q)$ such that
\[ AXB=E,\qquad CXD=F. \tag{1.1} \]

Problem 2. When Problem 1 is consistent, let $S_E$ denote the set of generalized reflexive solutions of Problem 1. For a given matrix $X_0\in\mathcal{R}_r^{m\times n}(P,Q)$, find $\widehat{X}\in S_E$ such that
\[ \|\widehat{X}-X_0\|=\min_{X\in S_E}\|X-X_0\|. \tag{1.2} \]

The matrix equation pair (1.1) may arise in many areas of control and system theory. Dehghan and Hajarian [3] presented some examples to show a motivation for studying (1.1). Problem 2 occurs frequently in experimental design; see, for instance, [4]. In recent years, the matrix nearness problem has been studied extensively (e.g., [3, 5-19]).

Research on solving the matrix equation pair (1.1) has been actively ongoing for the last 40 years or more. For instance, Mitra [20, 21] gave conditions for the existence of a solution and a representation of the general common solution of the matrix equation pair (1.1). Shinozaki and Sibuya [22] and van der Woude [23] discussed conditions for the existence of a common solution to the matrix equation pair (1.1). Navarra et al. [11] derived necessary and sufficient conditions for the existence of a common solution to (1.1). Yuan [18] obtained an analytical expression for the least-squares solutions of (1.1) by using the generalized singular value decomposition (GSVD) of matrices. Recently, some finite iterative algorithms have also been developed to solve matrix equations. Deng et al. [24] studied the consistency conditions and the general expressions of the Hermitian solutions of the matrix equations $(AX,XB)=(C,D)$ and designed an iterative method for their Hermitian minimum-norm solutions. Li and Wu [25] gave symmetric and skew-antisymmetric solutions to certain matrix equations $A_1X=C_1$, $XB_3=C_3$ over the real quaternion algebra $H$. Dehghan and Hajarian [26] proposed necessary and sufficient conditions for the solvability of the matrix equations $A_1XB_1=D_1$, $A_1X=C_1$, $XB_2=C_2$ and $A_1X=C_1$, $XB_2=C_2$, $A_3X=C_3$, $XB_4=C_4$ over a reflexive or antireflexive matrix $X$ and obtained the general expression of the solutions for the solvable case. Wang [27, 28] gave the centrosymmetric solution to the system of quaternion matrix equations $A_1X=C_1$, $A_3XB_3=C_3$. Wang [29] also solved a system of matrix equations over arbitrary regular rings with identity. For more studies on iterative algorithms for coupled matrix equations, we refer to [6, 7, 15-17, 19, 30-34]. Peng et al. [13] presented iterative methods to obtain the symmetric solutions of (1.1). Sheng and Chen [14] presented a finite iterative method for (1.1) when it is consistent. Liao and Lei [9] presented an analytical expression of the minimum-norm least-squares solution of (1.1) and an algorithm for computing it. Peng et al. [12] presented an efficient algorithm for the least-squares reflexive solution. Dehghan and Hajarian [3] presented an iterative algorithm for solving the pair of matrix equations (1.1) over generalized centrosymmetric matrices. Cai and Chen [35] presented an iterative algorithm for the least-squares bisymmetric solutions of the matrix equations (1.1). However, the problem of finding the generalized reflexive solutions of the matrix equation pair (1.1) has not yet been solved. In this paper, we construct an iterative algorithm by which the solvability of Problem 1 can be determined automatically, a solution can be obtained within finitely many iteration steps when Problem 1 is consistent, and the solution of Problem 2 can be obtained by finding the least-norm generalized reflexive solution of a corresponding matrix equation pair.

This paper is organized as follows. In Section 2, we solve Problem 1 by constructing an iterative algorithm; that is, if Problem 1 is consistent, then for an arbitrary initial matrix $X_1\in\mathcal{R}_r^{m\times n}(P,Q)$ we can obtain a solution $\overline{X}\in\mathcal{R}_r^{m\times n}(P,Q)$ of Problem 1 within finitely many iteration steps in the absence of round-off errors. If we let $X_1=A^THB^T+C^T\widehat{H}D^T+PA^THB^TQ+PC^T\widehat{H}D^TQ$, where $H\in\mathcal{R}^{p\times q}$ and $\widehat{H}\in\mathcal{R}^{s\times t}$ are arbitrary matrices, or more especially $X_1=\mathbf{0}\in\mathcal{R}_r^{m\times n}(P,Q)$, then we obtain the unique least-norm solution of Problem 1. In Section 3, we give the optimal approximate solution of Problem 2 by finding the least-norm generalized reflexive solution of a corresponding new matrix equation pair. In Section 4, several numerical examples are given to illustrate the application of our iterative algorithm.

2. The Solution of Problem 1

In this section, we first introduce an iterative algorithm to solve Problem 1 and then prove that it is convergent. The idea of the algorithm and its proof in this paper are originally inspired by those in [13]. The idea of our algorithm is also inspired by those in [3]. When $P=Q$, $R=S$, $X^T=X$, and $Y^T=Y$, the results in this paper reduce to those in [3].

Algorithm 2.1.
Step 1. Input matrices $A\in\mathcal{R}^{p\times m}$, $B\in\mathcal{R}^{n\times q}$, $C\in\mathcal{R}^{s\times m}$, $D\in\mathcal{R}^{n\times t}$, $E\in\mathcal{R}^{p\times q}$, $F\in\mathcal{R}^{s\times t}$, and two generalized reflection matrices $P\in\mathcal{R}^{m\times m}$, $Q\in\mathcal{R}^{n\times n}$.
Step 2. Choose an arbitrary matrix $X_1\in\mathcal{R}_r^{m\times n}(P,Q)$. Compute
\[ R_1=\begin{pmatrix} E-AX_1B & 0\\ 0 & F-CX_1D \end{pmatrix}, \]
\[ P_1=\frac{1}{2}\bigl(A^T(E-AX_1B)B^T+C^T(F-CX_1D)D^T+PA^T(E-AX_1B)B^TQ+PC^T(F-CX_1D)D^TQ\bigr), \tag{2.1} \]
and set $k:=1$.
Step 3. If $R_1=\mathbf{0}$, then stop. Else go to Step 4.
Step 4. Compute
\[ X_{k+1}=X_k+\frac{\|R_k\|^2}{\|P_k\|^2}P_k, \]
\[ R_{k+1}=\begin{pmatrix} E-AX_{k+1}B & 0\\ 0 & F-CX_{k+1}D \end{pmatrix}=R_k-\frac{\|R_k\|^2}{\|P_k\|^2}\begin{pmatrix} AP_kB & 0\\ 0 & CP_kD \end{pmatrix}, \]
\[ P_{k+1}=\frac{1}{2}\bigl(A^T(E-AX_{k+1}B)B^T+C^T(F-CX_{k+1}D)D^T+PA^T(E-AX_{k+1}B)B^TQ+PC^T(F-CX_{k+1}D)D^TQ\bigr)+\frac{\|R_{k+1}\|^2}{\|R_k\|^2}P_k. \tag{2.2} \]
Step 5. If $R_{k+1}=\mathbf{0}$, then stop. Else, let $k:=k+1$ and go to Step 4.

Obviously, it can be seen that $P_i\in\mathcal{R}_r^{m\times n}(P,Q)$ and $X_i\in\mathcal{R}_r^{m\times n}(P,Q)$ for $i=1,2,\ldots$.
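For readers who want to experiment with Algorithm 2.1, the following sketch implements Steps 1-5 above (assuming NumPy; the routine names, the stopping tolerance, and the iteration cap are our own choices, since the paper works in exact arithmetic and stops when $R_k=\mathbf{0}$). The block-diagonal residual $R_k$ is kept as its two blocks $E-AX_kB$ and $F-CX_kD$.

```python
# A sketch of Algorithm 2.1, assuming NumPy; the routine name, the stopping
# tolerance, and the iteration cap are our own choices.
import numpy as np

def reflexive_direction(A, B, C, D, RA, RC, P, Q):
    """P_k of (2.1)/(2.2): (1/2)[A^T RA B^T + C^T RC D^T + P A^T RA B^T Q + P C^T RC D^T Q]."""
    M = A.T @ RA @ B.T + C.T @ RC @ D.T
    return 0.5 * (M + P @ M @ Q)

def solve_pair_reflexive(A, B, C, D, E, F, P, Q, X1=None, tol=1e-10, maxit=None):
    """Iteratively solve A X B = E, C X D = F over X in R_r^{m x n}(P, Q).
    X1, if given, must itself be generalized reflexive."""
    m, n = A.shape[1], B.shape[0]
    X = np.zeros((m, n)) if X1 is None else X1.astype(float).copy()
    RA, RC = E - A @ X @ B, F - C @ X @ D        # the two blocks of R_k
    Pk = reflexive_direction(A, B, C, D, RA, RC, P, Q)
    rk2 = np.sum(RA**2) + np.sum(RC**2)          # ||R_k||_F^2
    # in exact arithmetic at most pq + st steps are needed (Theorem 2.5)
    maxit = maxit if maxit is not None else E.size + F.size
    for _ in range(maxit):
        if np.sqrt(rk2) < tol:                   # R_k is (numerically) zero
            break
        alpha = rk2 / np.sum(Pk**2)              # ||R_k||^2 / ||P_k||^2
        X = X + alpha * Pk
        RA, RC = E - A @ X @ B, F - C @ X @ D
        rk2_new = np.sum(RA**2) + np.sum(RC**2)
        Pk = reflexive_direction(A, B, C, D, RA, RC, P, Q) + (rk2_new / rk2) * Pk
        rk2 = rk2_new
    return X
```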

Lemma 2.2. For the sequences $\{R_i\}$ and $\{P_i\}$ generated by Algorithm 2.1, one has
\[ \mathrm{tr}\bigl(R_{i+1}^TR_j\bigr)=\mathrm{tr}\bigl(R_i^TR_j\bigr)-\frac{\|R_i\|^2}{\|P_i\|^2}\,\mathrm{tr}\bigl(P_i^TP_j\bigr)+\frac{\|R_i\|^2\|R_j\|^2}{\|P_i\|^2\|R_{j-1}\|^2}\,\mathrm{tr}\bigl(P_i^TP_{j-1}\bigr), \]
\[ \mathrm{tr}\bigl(P_{i+1}^TP_j\bigr)=\frac{\|P_j\|^2}{\|R_j\|^2}\Bigl(\mathrm{tr}\bigl(R_{i+1}^TR_j\bigr)-\mathrm{tr}\bigl(R_{i+1}^TR_{j+1}\bigr)\Bigr)+\frac{\|R_{i+1}\|^2}{\|R_i\|^2}\,\mathrm{tr}\bigl(P_i^TP_j\bigr). \tag{2.3} \]

Proof. By Algorithm 2.1, we have
\[ \mathrm{tr}\bigl(R_{i+1}^TR_j\bigr)=\mathrm{tr}\Biggl(\biggl(R_i-\frac{\|R_i\|^2}{\|P_i\|^2}\begin{pmatrix} AP_iB & 0\\ 0 & CP_iD \end{pmatrix}\biggr)^{\!T}R_j\Biggr) \]
\[ =\mathrm{tr}\bigl(R_i^TR_j\bigr)-\frac{\|R_i\|^2}{\|P_i\|^2}\,\mathrm{tr}\Biggl(\begin{pmatrix} B^TP_i^TA^T & 0\\ 0 & D^TP_i^TC^T \end{pmatrix}\begin{pmatrix} E-AX_jB & 0\\ 0 & F-CX_jD \end{pmatrix}\Biggr) \]
\[ =\mathrm{tr}\bigl(R_i^TR_j\bigr)-\frac{\|R_i\|^2}{\|P_i\|^2}\,\mathrm{tr}\Bigl(B^TP_i^TA^T(E-AX_jB)+D^TP_i^TC^T(F-CX_jD)\Bigr) \]
\[ =\mathrm{tr}\bigl(R_i^TR_j\bigr)-\frac{\|R_i\|^2}{\|P_i\|^2}\,\mathrm{tr}\Bigl(P_i^T\bigl(A^T(E-AX_jB)B^T+C^T(F-CX_jD)D^T\bigr)\Bigr). \]
Since $P_i\in\mathcal{R}_r^{m\times n}(P,Q)$, we have $\mathrm{tr}(P_i^TM)=\mathrm{tr}((PP_iQ)^TM)=\mathrm{tr}(P_i^TPMQ)$ for any $M$, so only the generalized reflexive part of the last factor contributes; hence
\[ \mathrm{tr}\bigl(R_{i+1}^TR_j\bigr)=\mathrm{tr}\bigl(R_i^TR_j\bigr)-\frac{\|R_i\|^2}{\|P_i\|^2}\,\mathrm{tr}\biggl(P_i^T\,\frac{A^T(E-AX_jB)B^T+C^T(F-CX_jD)D^T+PA^T(E-AX_jB)B^TQ+PC^T(F-CX_jD)D^TQ}{2}\biggr) \]
\[ =\mathrm{tr}\bigl(R_i^TR_j\bigr)-\frac{\|R_i\|^2}{\|P_i\|^2}\,\mathrm{tr}\biggl(P_i^T\Bigl(P_j-\frac{\|R_j\|^2}{\|R_{j-1}\|^2}P_{j-1}\Bigr)\biggr) \]
\[ =\mathrm{tr}\bigl(R_i^TR_j\bigr)-\frac{\|R_i\|^2}{\|P_i\|^2}\,\mathrm{tr}\bigl(P_i^TP_j\bigr)+\frac{\|R_i\|^2\|R_j\|^2}{\|P_i\|^2\|R_{j-1}\|^2}\,\mathrm{tr}\bigl(P_i^TP_{j-1}\bigr). \]
Similarly,
\[ \mathrm{tr}\bigl(P_{i+1}^TP_j\bigr)=\mathrm{tr}\Biggl(\biggl(\frac{A^T(E-AX_{i+1}B)B^T+C^T(F-CX_{i+1}D)D^T+PA^T(E-AX_{i+1}B)B^TQ+PC^T(F-CX_{i+1}D)D^TQ}{2}+\frac{\|R_{i+1}\|^2}{\|R_i\|^2}P_i\biggr)^{\!T}P_j\Biggr) \]
\[ =\mathrm{tr}\Bigl(P_j^T\bigl(A^T(E-AX_{i+1}B)B^T+C^T(F-CX_{i+1}D)D^T\bigr)\Bigr)+\frac{\|R_{i+1}\|^2}{\|R_i\|^2}\,\mathrm{tr}\bigl(P_i^TP_j\bigr) \]
(using $PP_jQ=P_j$ again)
\[ =\mathrm{tr}\Bigl((E-AX_{i+1}B)^TAP_jB+(F-CX_{i+1}D)^TCP_jD\Bigr)+\frac{\|R_{i+1}\|^2}{\|R_i\|^2}\,\mathrm{tr}\bigl(P_i^TP_j\bigr) \]
\[ =\mathrm{tr}\Biggl(\begin{pmatrix} (E-AX_{i+1}B)^T & 0\\ 0 & (F-CX_{i+1}D)^T \end{pmatrix}\begin{pmatrix} AP_jB & 0\\ 0 & CP_jD \end{pmatrix}\Biggr)+\frac{\|R_{i+1}\|^2}{\|R_i\|^2}\,\mathrm{tr}\bigl(P_i^TP_j\bigr) \]
\[ =\frac{\|P_j\|^2}{\|R_j\|^2}\,\mathrm{tr}\bigl(R_{i+1}^T(R_j-R_{j+1})\bigr)+\frac{\|R_{i+1}\|^2}{\|R_i\|^2}\,\mathrm{tr}\bigl(P_i^TP_j\bigr) \]
\[ =\frac{\|P_j\|^2}{\|R_j\|^2}\Bigl(\mathrm{tr}\bigl(R_{i+1}^TR_j\bigr)-\mathrm{tr}\bigl(R_{i+1}^TR_{j+1}\bigr)\Bigr)+\frac{\|R_{i+1}\|^2}{\|R_i\|^2}\,\mathrm{tr}\bigl(P_i^TP_j\bigr). \tag{2.4} \]
This completes the proof.

Lemma 2.3. For the sequences $\{R_i\}$ and $\{P_i\}$ generated by Algorithm 2.1 and for $k\geq 2$, one has
\[ \mathrm{tr}\bigl(R_i^TR_j\bigr)=0,\qquad \mathrm{tr}\bigl(P_i^TP_j\bigr)=0,\qquad i,j=1,2,\ldots,k,\ i\neq j. \tag{2.5} \]

Proof. Since $\mathrm{tr}(R_i^TR_j)=\mathrm{tr}(R_j^TR_i)$ and $\mathrm{tr}(P_i^TP_j)=\mathrm{tr}(P_j^TP_i)$ for all $i,j=1,2,\ldots,k$, we only need to prove that $\mathrm{tr}(R_i^TR_j)=0$ and $\mathrm{tr}(P_i^TP_j)=0$ for all $1\leq j<i\leq k$. We prove the conclusion by induction, in two steps.
Step 1. We show that
\[ \mathrm{tr}\bigl(R_{i+1}^TR_i\bigr)=0,\qquad \mathrm{tr}\bigl(P_{i+1}^TP_i\bigr)=0,\qquad i=1,2,\ldots,k-1. \tag{2.6} \]
To prove this conclusion, we also use induction.
For $i=1$, by Algorithm 2.1 and the proof of Lemma 2.2, we have
\[ \mathrm{tr}\bigl(R_2^TR_1\bigr)=\mathrm{tr}\Biggl(\biggl(R_1-\frac{\|R_1\|^2}{\|P_1\|^2}\begin{pmatrix} AP_1B & 0\\ 0 & CP_1D \end{pmatrix}\biggr)^{\!T}R_1\Biggr) \]
\[ =\mathrm{tr}\bigl(R_1^TR_1\bigr)-\frac{\|R_1\|^2}{\|P_1\|^2}\,\mathrm{tr}\biggl(P_1^T\,\frac{A^T(E-AX_1B)B^T+C^T(F-CX_1D)D^T+PA^T(E-AX_1B)B^TQ+PC^T(F-CX_1D)D^TQ}{2}\biggr) \]
\[ =\|R_1\|^2-\frac{\|R_1\|^2}{\|P_1\|^2}\,\mathrm{tr}\bigl(P_1^TP_1\bigr)=0, \]
\[ \mathrm{tr}\bigl(P_2^TP_1\bigr)=\frac{\|P_1\|^2}{\|R_1\|^2}\Bigl(\mathrm{tr}\bigl(R_2^TR_1\bigr)-\mathrm{tr}\bigl(R_2^TR_2\bigr)\Bigr)+\frac{\|R_2\|^2}{\|R_1\|^2}\|P_1\|^2=-\frac{\|P_1\|^2}{\|R_1\|^2}\|R_2\|^2+\frac{\|R_2\|^2}{\|R_1\|^2}\|P_1\|^2=0. \tag{2.7} \]
Assume that (2.6) holds for $i=s-1$, that is, $\mathrm{tr}(R_s^TR_{s-1})=0$ and $\mathrm{tr}(P_s^TP_{s-1})=0$. When $i=s$, by Lemma 2.2 we have
\[ \mathrm{tr}\bigl(R_{s+1}^TR_s\bigr)=\mathrm{tr}\bigl(R_s^TR_s\bigr)-\frac{\|R_s\|^2}{\|P_s\|^2}\,\mathrm{tr}\bigl(P_s^TP_s\bigr)+\frac{\|R_s\|^4}{\|P_s\|^2\|R_{s-1}\|^2}\,\mathrm{tr}\bigl(P_s^TP_{s-1}\bigr)=\|R_s\|^2-\|R_s\|^2+0=0, \]
\[ \mathrm{tr}\bigl(P_{s+1}^TP_s\bigr)=\frac{\|P_s\|^2}{\|R_s\|^2}\Bigl(\mathrm{tr}\bigl(R_{s+1}^TR_s\bigr)-\mathrm{tr}\bigl(R_{s+1}^TR_{s+1}\bigr)\Bigr)+\frac{\|R_{s+1}\|^2}{\|R_s\|^2}\,\mathrm{tr}\bigl(P_s^TP_s\bigr)=-\frac{\|P_s\|^2}{\|R_s\|^2}\|R_{s+1}\|^2+\frac{\|R_{s+1}\|^2}{\|R_s\|^2}\|P_s\|^2=0. \tag{2.8} \]
Hence, (2.6) holds for $i=s$. Therefore, (2.6) holds by the principle of induction.
Step 2. Assume that $\mathrm{tr}(R_s^TR_j)=0$ and $\mathrm{tr}(P_s^TP_j)=0$ for $j=1,2,\ldots,s-1$. We then show that
\[ \mathrm{tr}\bigl(R_{s+1}^TR_j\bigr)=0,\qquad \mathrm{tr}\bigl(P_{s+1}^TP_j\bigr)=0,\qquad j=1,2,\ldots,s. \tag{2.9} \]
In fact, by Lemma 2.2 we have
\[ \mathrm{tr}\bigl(R_{s+1}^TR_j\bigr)=\mathrm{tr}\bigl(R_s^TR_j\bigr)-\frac{\|R_s\|^2}{\|P_s\|^2}\,\mathrm{tr}\bigl(P_s^TP_j\bigr)+\frac{\|R_s\|^2\|R_j\|^2}{\|P_s\|^2\|R_{j-1}\|^2}\,\mathrm{tr}\bigl(P_s^TP_{j-1}\bigr)=0. \tag{2.10} \]
From the previous results, we also have $\mathrm{tr}(R_{s+1}^TR_{j+1})=0$. By Lemma 2.2 it follows that
\[ \mathrm{tr}\bigl(P_{s+1}^TP_j\bigr)=\frac{\|P_j\|^2}{\|R_j\|^2}\Bigl(\mathrm{tr}\bigl(R_{s+1}^TR_j\bigr)-\mathrm{tr}\bigl(R_{s+1}^TR_{j+1}\bigr)\Bigr)+\frac{\|R_{s+1}\|^2}{\|R_s\|^2}\,\mathrm{tr}\bigl(P_s^TP_j\bigr)=\frac{\|P_j\|^2}{\|R_j\|^2}\bigl(0-0\bigr)=0. \tag{2.11} \]
By the principle of induction, (2.9) holds. Note that (2.5) follows from Steps 1 and 2 by the principle of induction. This completes the proof.

Lemma 2.4. Suppose that $\overline{X}$ is an arbitrary solution of Problem 1, that is, $A\overline{X}B=E$ and $C\overline{X}D=F$. Then
\[ \mathrm{tr}\Bigl(\bigl(\overline{X}-X_k\bigr)^TP_k\Bigr)=\|R_k\|^2,\qquad k=1,2,\ldots, \tag{2.12} \]
where the sequences $\{X_k\}$, $\{R_k\}$, and $\{P_k\}$ are generated by Algorithm 2.1.

Proof. We prove the conclusion by induction.
For $k=1$, we have
\[ \mathrm{tr}\Bigl(\bigl(\overline{X}-X_1\bigr)^TP_1\Bigr)=\mathrm{tr}\biggl(\bigl(\overline{X}-X_1\bigr)^T\,\frac{A^T(E-AX_1B)B^T+C^T(F-CX_1D)D^T+PA^T(E-AX_1B)B^TQ+PC^T(F-CX_1D)D^TQ}{2}\biggr) \]
\[ =\mathrm{tr}\Bigl(\bigl(\overline{X}-X_1\bigr)^T\bigl(A^T(E-AX_1B)B^T+C^T(F-CX_1D)D^T\bigr)\Bigr) \]
(since $P(\overline{X}-X_1)Q=\overline{X}-X_1$)
\[ =\mathrm{tr}\Bigl((E-AX_1B)^TA\bigl(\overline{X}-X_1\bigr)B+(F-CX_1D)^TC\bigl(\overline{X}-X_1\bigr)D\Bigr) \]
\[ =\mathrm{tr}\Biggl(\begin{pmatrix} (E-AX_1B)^T & 0\\ 0 & (F-CX_1D)^T \end{pmatrix}\begin{pmatrix} A(\overline{X}-X_1)B & 0\\ 0 & C(\overline{X}-X_1)D \end{pmatrix}\Biggr) \]
\[ =\mathrm{tr}\Biggl(\begin{pmatrix} (E-AX_1B)^T & 0\\ 0 & (F-CX_1D)^T \end{pmatrix}\begin{pmatrix} E-AX_1B & 0\\ 0 & F-CX_1D \end{pmatrix}\Biggr)=\mathrm{tr}\bigl(R_1^TR_1\bigr)=\|R_1\|^2. \tag{2.13} \]
Assume that (2.12) holds for $k=s$. By Algorithm 2.1, we have
\[ \mathrm{tr}\Bigl(\bigl(\overline{X}-X_{s+1}\bigr)^TP_{s+1}\Bigr)=\mathrm{tr}\Biggl(\bigl(\overline{X}-X_{s+1}\bigr)^T\biggl(\frac{A^T(E-AX_{s+1}B)B^T+C^T(F-CX_{s+1}D)D^T+PA^T(E-AX_{s+1}B)B^TQ+PC^T(F-CX_{s+1}D)D^TQ}{2}+\frac{\|R_{s+1}\|^2}{\|R_s\|^2}P_s\biggr)\Biggr) \]
\[ =\mathrm{tr}\Bigl(\bigl(\overline{X}-X_{s+1}\bigr)^T\bigl(A^T(E-AX_{s+1}B)B^T+C^T(F-CX_{s+1}D)D^T\bigr)\Bigr)+\frac{\|R_{s+1}\|^2}{\|R_s\|^2}\,\mathrm{tr}\Bigl(\bigl(\overline{X}-X_{s+1}\bigr)^TP_s\Bigr) \]
\[ =\mathrm{tr}\Biggl(\begin{pmatrix} (E-AX_{s+1}B)^T & 0\\ 0 & (F-CX_{s+1}D)^T \end{pmatrix}\begin{pmatrix} E-AX_{s+1}B & 0\\ 0 & F-CX_{s+1}D \end{pmatrix}\Biggr)+\frac{\|R_{s+1}\|^2}{\|R_s\|^2}\,\mathrm{tr}\Bigl(\bigl(\overline{X}-X_{s+1}\bigr)^TP_s\Bigr) \]
\[ =\mathrm{tr}\bigl(R_{s+1}^TR_{s+1}\bigr)+\frac{\|R_{s+1}\|^2}{\|R_s\|^2}\biggl(\mathrm{tr}\Bigl(\bigl(\overline{X}-X_s\bigr)^TP_s\Bigr)-\frac{\|R_s\|^2}{\|P_s\|^2}\,\mathrm{tr}\bigl(P_s^TP_s\bigr)\biggr) \]
\[ =\|R_{s+1}\|^2+\frac{\|R_{s+1}\|^2}{\|R_s\|^2}\|R_s\|^2-\frac{\|R_{s+1}\|^2}{\|R_s\|^2}\cdot\frac{\|R_s\|^2}{\|P_s\|^2}\|P_s\|^2=\|R_{s+1}\|^2. \tag{2.14} \]
Therefore, (2.12) holds for $k=s+1$. By the principle of induction, (2.12) holds. This completes the proof.

Theorem 2.5. Suppose that Problem 1 is consistent. Then, for an arbitrary initial matrix $X_1\in\mathcal{R}_r^{m\times n}(P,Q)$, a solution of Problem 1 can be obtained within finitely many iteration steps in the absence of round-off errors.

Proof. If $R_i\neq\mathbf{0}$ for $i=1,2,\ldots,pq+st$, then by Lemma 2.4 we have $P_i\neq\mathbf{0}$ for $i=1,2,\ldots,pq+st$, and we can compute $X_{pq+st+1}$ and $R_{pq+st+1}$ by Algorithm 2.1.
By Lemma 2.3, we have
\[ \mathrm{tr}\bigl(R_{pq+st+1}^TR_i\bigr)=0,\quad i=1,2,\ldots,pq+st,\qquad \mathrm{tr}\bigl(R_i^TR_j\bigr)=0,\quad i,j=1,2,\ldots,pq+st,\ i\neq j. \tag{2.15} \]
Therefore, $R_1,R_2,\ldots,R_{pq+st}$ form an orthogonal basis of the matrix space
\[ S=\Biggl\{W\;\Bigl|\;W=\begin{pmatrix} W_1 & 0\\ 0 & W_4 \end{pmatrix},\ W_1\in\mathcal{R}^{p\times q},\ W_4\in\mathcal{R}^{s\times t}\Biggr\}, \tag{2.16} \]
which implies that $R_{pq+st+1}=\mathbf{0}$; that is, $X_{pq+st+1}$ is a solution of Problem 1. This completes the proof.

To obtain the least-norm generalized reflexive solution of Problem 1, we first introduce the following result.

Lemma 2.6 (see [8, Lemma 2.4]). Suppose that the consistent system of linear equations $My=b$ has a solution $y_0\in\mathcal{R}(M^T)$. Then $y_0$ is the least-norm solution of the system of linear equations.

By Lemma 2.6, the following result can be obtained.

Theorem 2.7. Suppose that Problem 1 is consistent. If one chooses the initial iterative matrix $X_1=A^THB^T+C^T\widehat{H}D^T+PA^THB^TQ+PC^T\widehat{H}D^TQ$, where $H\in\mathcal{R}^{p\times q}$ and $\widehat{H}\in\mathcal{R}^{s\times t}$ are arbitrary matrices, and especially if one lets $X_1=\mathbf{0}\in\mathcal{R}_r^{m\times n}(P,Q)$, then one obtains the unique least-norm generalized reflexive solution of Problem 1 within finitely many iteration steps in the absence of round-off errors by using Algorithm 2.1.

Proof. By Algorithm 2.1 and Theorem 2.5, if we let $X_1=A^THB^T+C^T\widehat{H}D^T+PA^THB^TQ+PC^T\widehat{H}D^TQ$, where $H\in\mathcal{R}^{p\times q}$ and $\widehat{H}\in\mathcal{R}^{s\times t}$ are arbitrary matrices, then we can obtain a solution $X^*$ of Problem 1 within finitely many iteration steps in the absence of round-off errors, and this solution can be represented in the form $X^*=A^TGB^T+C^T\widehat{G}D^T+PA^TGB^TQ+PC^T\widehat{G}D^TQ$.
In the sequel, we will prove that $X^*$ is just the least-norm solution of Problem 1.
Consider the following system of matrix equations:
\[ AXB=E,\qquad CXD=F,\qquad APXQB=E,\qquad CPXQD=F. \tag{2.17} \]
If Problem 1 has a solution $X_0\in\mathcal{R}_r^{m\times n}(P,Q)$, then
\[ PX_0Q=X_0,\qquad AX_0B=E,\qquad CX_0D=F, \tag{2.18} \]
and thus
\[ APX_0QB=E,\qquad CPX_0QD=F. \tag{2.19} \]
Hence, the system of matrix equations (2.17) also has the solution $X_0$.
Conversely, if the system of matrix equations (2.17) has a solution $X\in\mathcal{R}^{m\times n}$, let $X_0=(X+PXQ)/2$. Then $X_0\in\mathcal{R}_r^{m\times n}(P,Q)$ and
\[ AX_0B=\frac{1}{2}A(X+PXQ)B=\frac{1}{2}(AXB+APXQB)=\frac{1}{2}(E+E)=E, \]
\[ CX_0D=\frac{1}{2}C(X+PXQ)D=\frac{1}{2}(CXD+CPXQD)=\frac{1}{2}(F+F)=F. \tag{2.20} \]
Therefore, $X_0$ is a solution of Problem 1.
So the solvability of Problem 1 is equivalent to that of the system of matrix equations (2.17), and every solution of Problem 1 is also a solution of the system of matrix equations (2.17).
Let $S'_E$ denote the set of all solutions of the system of matrix equations (2.17); then $S_E\subset S'_E$, where $S_E$ is the set of all solutions of Problem 1. In order to prove that $X^*$ is the least-norm solution of Problem 1, it is enough to prove that $X^*$ is the least-norm solution of the system of matrix equations (2.17). Denote $\mathrm{vec}(X)=x$, $\mathrm{vec}(X^*)=x^*$, $\mathrm{vec}(G)=g_1$, $\mathrm{vec}(\widehat{G})=g_2$, $\mathrm{vec}(E)=e$, $\mathrm{vec}(F)=f$. Then the system of matrix equations (2.17) is equivalent to the system of linear equations
\[ \begin{pmatrix} B^T\otimes A\\ D^T\otimes C\\ (QB)^T\otimes AP\\ (QD)^T\otimes CP \end{pmatrix}x=\begin{pmatrix} e\\ f\\ e\\ f \end{pmatrix}. \tag{2.21} \]
Noting that
\[ x^*=\mathrm{vec}\bigl(A^TGB^T+C^T\widehat{G}D^T+PA^TGB^TQ+PC^T\widehat{G}D^TQ\bigr)=\bigl(B\otimes A^T\bigr)g_1+\bigl(D\otimes C^T\bigr)g_2+\bigl(QB\otimes PA^T\bigr)g_1+\bigl(QD\otimes PC^T\bigr)g_2 \]
\[ =\begin{pmatrix} B\otimes A^T & D\otimes C^T & QB\otimes PA^T & QD\otimes PC^T \end{pmatrix}\begin{pmatrix} g_1\\ g_2\\ g_1\\ g_2 \end{pmatrix}=\begin{pmatrix} B^T\otimes A\\ D^T\otimes C\\ (QB)^T\otimes AP\\ (QD)^T\otimes CP \end{pmatrix}^{\!T}\begin{pmatrix} g_1\\ g_2\\ g_1\\ g_2 \end{pmatrix}\in\mathcal{R}\Biggl(\begin{pmatrix} B^T\otimes A\\ D^T\otimes C\\ (QB)^T\otimes AP\\ (QD)^T\otimes CP \end{pmatrix}^{\!T}\Biggr), \tag{2.22} \]
by Lemma 2.6 we know that $x^*$ is the least-norm solution of the system of linear equations (2.21). Since the vec operator is an isomorphism and $X^*$ is the unique least-norm solution of the system of matrix equations (2.17), $X^*$ is the unique least-norm generalized reflexive solution of Problem 1.
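The reduction to (2.21) is easy to check numerically. The following small script is our own illustration (assuming NumPy; the dimensions and random data are ours): it builds the stacked Kronecker system for a consistent pair, computes its least-norm solution with the pseudoinverse, and confirms that this solution is generalized reflexive, as Theorem 2.7 asserts.

```python
# A small numerical check of the vec/Kronecker reformulation (2.21), assuming
# NumPy; the dimensions and the random data below are our own and only serve
# to illustrate Theorem 2.7.
import numpy as np

rng = np.random.default_rng(0)
p, q, s, t, m, n = 2, 3, 2, 2, 3, 3
A, B = rng.standard_normal((p, m)), rng.standard_normal((n, q))
C, D = rng.standard_normal((s, m)), rng.standard_normal((n, t))
P, Q = np.diag([1.0, -1.0, 1.0]), np.diag([-1.0, 1.0, 1.0])   # reflection matrices
M0 = rng.standard_normal((m, n))
X_true = 0.5 * (M0 + P @ M0 @ Q)                 # a generalized reflexive solution
E, F = A @ X_true @ B, C @ X_true @ D            # consistent right-hand sides

# Coefficient matrix of (2.21); recall vec(AXB) = (B^T kron A) vec(X) with
# column-stacking vec, i.e. order="F" in NumPy.
M = np.vstack([np.kron(B.T, A), np.kron(D.T, C),
               np.kron((Q @ B).T, A @ P), np.kron((Q @ D).T, C @ P)])
b = np.concatenate([E.ravel(order="F"), F.ravel(order="F"),
                    E.ravel(order="F"), F.ravel(order="F")])
x_min = np.linalg.pinv(M) @ b                    # least-norm solution of M x = b
X_min = x_min.reshape((m, n), order="F")
print(np.allclose(P @ X_min @ Q, X_min))         # True: the least-norm solution is reflexive
```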

3. The Solution of Problem 2

In this section, we will show that the optimal approximate solution of Problem 2 for a given generalized reflexive matrix can be derived by finding the least-norm generalized reflexive solution of a new corresponding matrix equation pair $A\widetilde{X}B=\widetilde{E}$, $C\widetilde{X}D=\widetilde{F}$.

When Problem 1 is consistent, the set of solutions of Problem 1, denoted by $S_E$, is not empty. For a given matrix $X_0\in\mathcal{R}_r^{m\times n}(P,Q)$ and any $X\in S_E$, the matrix equation pair (1.1) is equivalent to the following matrix equation pair:
\[ A\widetilde{X}B=\widetilde{E},\qquad C\widetilde{X}D=\widetilde{F}, \tag{3.1} \]
where $\widetilde{X}=X-X_0$, $\widetilde{E}=E-AX_0B$, and $\widetilde{F}=F-CX_0D$. Then Problem 2 is equivalent to finding the least-norm generalized reflexive solution $\widetilde{X}^*$ of the matrix equation pair (3.1).

By using Algorithm 2.1 with the initial iterative matrix $\widetilde{X}_1=A^THB^T+C^T\widehat{H}D^T+PA^THB^TQ+PC^T\widehat{H}D^TQ$, or more especially $\widetilde{X}_1=\mathbf{0}\in\mathcal{R}_r^{m\times n}(P,Q)$, we can obtain the unique least-norm generalized reflexive solution $\widetilde{X}^*$ of the matrix equation pair (3.1); then the generalized reflexive solution $\widehat{X}$ of Problem 2 is given by $\widehat{X}=\widetilde{X}^*+X_0$.
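In code, this reduction amounts to shifting the right-hand sides and reusing the Problem 1 solver. A sketch follows; it assumes the solve_pair_reflexive routine from the sketch in Section 2 and is our own illustration, not the paper's code.

```python
# A sketch of the Section 3 reduction, assuming the solve_pair_reflexive
# routine from the sketch in Section 2; the function name is our own.
def optimal_approximation(A, B, C, D, E, F, P, Q, X0, tol=1e-10):
    """Return the solution of Problem 2 for a given generalized reflexive X0."""
    E_t = E - A @ X0 @ B                     # E-tilde
    F_t = F - C @ X0 @ D                     # F-tilde
    # the zero initial matrix yields the least-norm reflexive solution of (3.1)
    X_tilde = solve_pair_reflexive(A, B, C, D, E_t, F_t, P, Q, X1=None, tol=tol)
    return X0 + X_tilde                      # X-hat = X-tilde* + X0
```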

4. Examples for the Iterative Methods

In this section, we present several numerical examples to illustrate our results. All tests are performed with MATLAB 7.8.

Example 4.1. Consider the generalized reflexive solution of the matrix equation pair (1.1), where

A = 13−57−92046−10−296−836227−13−55−22−1−1184−6−9−19,
B = 408−54−150−234−10250392−6−27−8111,
C = 632−57−921046−119−1293−8136427−15−515−22−13−1129−6−9−19,
D = 718−614−450−233−1208251694−6−58−2917,
E = 592−11911216−244−13313054311234−518221814−4071668−11765371434−1794083−1374−808242−3150−13621104−2848423−29091441−182−3326,
F = −288228302992291−48494096701090−783−7933363−1262979−385124626321734553−3709−100−1774−4534−45481256−6896864−2512−1136−1633−5412.  (4.1)

Let
\[ P=\begin{pmatrix} 0&0&0&1&0\\ 0&0&0&0&1\\ 0&0&-1&0&0\\ 1&0&0&0&0\\ 0&1&0&0&0 \end{pmatrix},\qquad Q=\begin{pmatrix} 0&0&0&0&-1\\ 0&0&0&1&0\\ 0&0&-1&0&0\\ 0&1&0&0&0\\ -1&0&0&0&0 \end{pmatrix}. \tag{4.2} \]
We will find the generalized reflexive solution of the matrix equation pair $AXB=E$, $CXD=F$ by using Algorithm 2.1. It can be verified that the matrix equation pair is consistent over generalized reflexive matrices and has the following solution with respect to $P$, $Q$:
\[ X^*=\begin{pmatrix} 5&3&-6&12&-5\\ -11&8&-1&9&7\\ 13&-4&-8&4&13\\ 5&12&6&3&-5\\ -7&9&1&8&11 \end{pmatrix}\in\mathcal{R}_r^{5\times 5}(P,Q). \tag{4.3} \]
Because of the influence of rounding errors, the residual $R_i$ is usually nonzero during the iteration, $i=1,2,\ldots$. For any chosen small positive number $\varepsilon$, for example $\varepsilon=1.0000\mathrm{e}{-}010$, we stop the iteration whenever $\|R_k\|<\varepsilon$, and $X_k$ is then regarded as a generalized reflexive solution of the matrix equation pair $AXB=E$, $CXD=F$. Choose an initial iterative matrix $X_1\in\mathcal{R}_r^{5\times 5}(P,Q)$, such as
\[ X_1=\begin{pmatrix} 1&10&-6&12&-5\\ -6&8&-1&14&9\\ 13&-4&-8&4&13\\ 5&12&6&10&-1\\ -9&14&1&8&6 \end{pmatrix}. \tag{4.4} \]
By Algorithm 2.1, we have
\[ X_{17}=\begin{pmatrix} 5.0000&3.0000&-6.0000&12.0000&-5.0000\\ -11.0000&8.0000&-1.0000&9.0000&7.0000\\ 13.0000&-4.0000&-8.0000&4.0000&13.0000\\ 5.0000&12.0000&6.0000&3.0000&-5.0000\\ -7.0000&9.0000&1.0000&8.0000&11.0000 \end{pmatrix},\qquad \|R_{17}\|=3.2286\mathrm{e}{-}011<\varepsilon. \tag{4.5} \]
So we obtain a generalized reflexive solution of the matrix equation pair $AXB=E$, $CXD=F$ as follows:
\[ \overline{X}=\begin{pmatrix} 5.0000&3.0000&-6.0000&12.0000&-5.0000\\ -11.0000&8.0000&-1.0000&9.0000&7.0000\\ 13.0000&-4.0000&-8.0000&4.0000&13.0000\\ 5.0000&12.0000&6.0000&3.0000&-5.0000\\ -7.0000&9.0000&1.0000&8.0000&11.0000 \end{pmatrix}. \tag{4.6} \]
The relative error of the solution and the residual are shown in Figure 1, where the relative error is $\mathrm{re}_k=\|X_k-X^*\|/\|X^*\|$ and the residual is $r_k=\|R_k\|$.

Letting instead
\[ X_1=\mathbf{0}\in\mathcal{R}_r^{5\times 5}(P,Q), \tag{4.7} \]
by Algorithm 2.1 we have
\[ X_{17}=\begin{pmatrix} 5.0000&3.0000&-6.0000&12.0000&-5.0000\\ -11.0000&8.0000&-1.0000&9.0000&7.0000\\ 13.0000&-4.0000&-8.0000&4.0000&13.0000\\ 5.0000&12.0000&6.0000&3.0000&-5.0000\\ -7.0000&9.0000&1.0000&8.0000&11.0000 \end{pmatrix},\qquad \|R_{17}\|=3.1999\mathrm{e}{-}011<\varepsilon. \tag{4.8} \]
So we again obtain a generalized reflexive solution of the matrix equation pair $AXB=E$, $CXD=F$ as follows:
\[ \overline{X}=\begin{pmatrix} 5.0000&3.0000&-6.0000&12.0000&-5.0000\\ -11.0000&8.0000&-1.0000&9.0000&7.0000\\ 13.0000&-4.0000&-8.0000&4.0000&13.0000\\ 5.0000&12.0000&6.0000&3.0000&-5.0000\\ -7.0000&9.0000&1.0000&8.0000&11.0000 \end{pmatrix}. \tag{4.9} \]
The relative error of the solution and the residual are shown in Figure 2.
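As a quick check of the sketch given after Algorithm 2.1 in Section 2, one can run it with the reflection matrices $P$ and $Q$ of this example and a synthetic consistent pair. The data below are randomly generated (they are not the paper's $A,\ldots,F$), so the run only illustrates the behaviour of the algorithm; it assumes NumPy and the solve_pair_reflexive routine defined earlier.

```python
# A usage sketch for the Algorithm 2.1 routine given in Section 2, assuming
# NumPy.  It uses the reflection matrices P and Q of this example, but the
# coefficient matrices are random integers (not the paper's data), so the run
# only illustrates the behaviour of the algorithm on a consistent pair.
import numpy as np

P = np.array([[0, 0, 0, 1, 0], [0, 0, 0, 0, 1], [0, 0, -1, 0, 0],
              [1, 0, 0, 0, 0], [0, 1, 0, 0, 0]], dtype=float)
Q = np.array([[0, 0, 0, 0, -1], [0, 0, 0, 1, 0], [0, 0, -1, 0, 0],
              [0, 1, 0, 0, 0], [-1, 0, 0, 0, 0]], dtype=float)
rng = np.random.default_rng(1)
A = rng.integers(-9, 10, (4, 5)).astype(float)
B = rng.integers(-9, 10, (5, 4)).astype(float)
C = rng.integers(-9, 10, (4, 5)).astype(float)
D = rng.integers(-9, 10, (5, 4)).astype(float)
M0 = rng.integers(-9, 10, (5, 5)).astype(float)
X_exact = 0.5 * (M0 + P @ M0 @ Q)            # a generalized reflexive test solution
E, F = A @ X_exact @ B, C @ X_exact @ D      # consistent right-hand sides
X = solve_pair_reflexive(A, B, C, D, E, F, P, Q, tol=1e-10, maxit=200)
print(np.linalg.norm(E - A @ X @ B), np.linalg.norm(F - C @ X @ D))   # both ~ 0
print(np.allclose(P @ X @ Q, X))             # True: the computed solution is reflexive
```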

Example 4.2. Consider the least-norm generalized reflexive solution of the matrix equation pair in Example 4.1. Let

H = 101020−10101−10012010−301210−10−2−10,
Ĥ = −11−100010−131−10−202010−301210−10−2121,

and take the initial iterative matrix
\[ X_1=A^THB^T+C^T\widehat{H}D^T+PA^THB^TQ+PC^T\widehat{H}D^TQ. \tag{4.10} \]
By using Algorithm 2.1, we have
\[ X_{19}=\begin{pmatrix} 5.0000&3.0000&-6.0000&12.0000&-5.0000\\ -11.0000&8.0000&-1.0000&9.0000&7.0000\\ 13.0000&-4.0000&-8.0000&4.0000&13.0000\\ 5.0000&12.0000&6.0000&3.0000&-5.0000\\ -7.0000&9.0000&1.0000&8.0000&11.0000 \end{pmatrix},\qquad \|R_{19}\|=6.3115\mathrm{e}{-}011<\varepsilon. \tag{4.11} \]
So we obtain the least-norm generalized reflexive solution of the matrix equation pair $AXB=E$, $CXD=F$ as follows:
\[ X^*=\begin{pmatrix} 5.0000&3.0000&-6.0000&12.0000&-5.0000\\ -11.0000&8.0000&-1.0000&9.0000&7.0000\\ 13.0000&-4.0000&-8.0000&4.0000&13.0000\\ 5.0000&12.0000&6.0000&3.0000&-5.0000\\ -7.0000&9.0000&1.0000&8.0000&11.0000 \end{pmatrix}. \tag{4.12} \]
The relative error of the solution and the residual are shown in Figure 3.

Example 4.3. Let $S_E$ denote the set of all generalized reflexive solutions of the matrix equation pair in Example 4.1. For the given matrix
\[ X_0=\begin{pmatrix} -3&3&1&1&1\\ 0&-7&1&6&10\\ 10&-9&0&9&10\\ -1&1&-1&3&3\\ -10&6&-1&-7&0 \end{pmatrix}\in\mathcal{R}_r^{5\times 5}(P,Q), \tag{4.13} \]
we will find $\widehat{X}\in S_E$ such that
\[ \|\widehat{X}-X_0\|=\min_{X\in S_E}\|X-X_0\|; \tag{4.14} \]
that is, we will find the optimal approximate solution to the matrix $X_0$ in $S_E$.

Letting $\widetilde{X}=X-X_0$, $\widetilde{E}=E-AX_0B$, $\widetilde{F}=F-CX_0D$, by the method described in Section 3 we can obtain the least-norm generalized reflexive solution $\widetilde{X}^*$ of the matrix equation pair $A\widetilde{X}B=\widetilde{E}$, $C\widetilde{X}D=\widetilde{F}$ by choosing the initial iterative matrix $\widetilde{X}_1=\mathbf{0}$. We obtain
\[ \widetilde{X}^*_{17}=\begin{pmatrix} 8.0000&-0.0000&-7.0000&11.0000&-6.0000\\ -11.0000&15.0000&-2.0000&3.0000&-3.0000\\ 3.0000&5.0000&-8.0000&-5.0000&3.0000\\ 6.0000&11.0000&7.0000&-0.0000&-8.0000\\ 3.0000&3.0000&2.0000&15.0000&11.0000 \end{pmatrix},\qquad \|R_{17}\|=3.0690\mathrm{e}{-}011<\varepsilon=1.0000\mathrm{e}{-}010, \]
\[ \widehat{X}=\widetilde{X}^*_{17}+X_0=\begin{pmatrix} 5.0000&3.0000&-6.0000&12.0000&-5.0000\\ -11.0000&8.0000&-1.0000&9.0000&7.0000\\ 13.0000&-4.0000&-8.0000&4.0000&13.0000\\ 5.0000&12.0000&6.0000&3.0000&-5.0000\\ -7.0000&9.0000&1.0000&8.0000&11.0000 \end{pmatrix}. \tag{4.15} \]
The relative error of the solution and the residual are shown in Figure 4, where the relative error is $\mathrm{re}_k=\|\widetilde{X}_k+X_0-X^*\|/\|X^*\|$ and the residual is $r_k=\|R_k\|$.

Acknowledgments

The authors are very much indebted to the anonymous referees and our editors for their constructive and valuable comments and suggestions which greatly improved the original manuscript of this paper. This work was partially supported by the Research Fund Project (Natural Science 2010XJKYL018), Opening Fund of Geomathematics Key Laboratory of Sichuan Province (scsxdz2011005), Natural Science Foundation of Sichuan Education Department (12ZB289) and Key Natural Science Foundation of Sichuan Education Department (12ZA008).

References

1. H.-C. Chen, "Generalized reflexive matrices: special properties and applications," SIAM Journal on Matrix Analysis and Applications, vol. 19, no. 1, pp. 140-153, 1998.
2. J. L. Chen and X. H. Chen, Special Matrices, Tsing Hua University Press, 2001.
3. M. Dehghan and M. Hajarian, "An iterative algorithm for solving a pair of matrix equations AYB=E, CYD=F over generalized centro-symmetric matrices," Computers & Mathematics with Applications, vol. 56, no. 12, pp. 3246-3260, 2008.
4. T. Meng, "Experimental design and decision support," in Expert Systems, The Technology of Knowledge Management and Decision Making for the 21st Century, Leondes, Ed., vol. 1, Academic Press, 2001.
5. A. L. Andrew, "Solution of equations involving centrosymmetric matrices," Technometrics, vol. 15, pp. 405-407, 1973.
6. M. Dehghan and M. Hajarian, "An iterative algorithm for the reflexive solutions of the generalized coupled Sylvester matrix equations and its optimal approximation," Applied Mathematics and Computation, vol. 202, no. 2, pp. 571-588, 2008.
7. M. Dehghan and M. Hajarian, "An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices," Applied Mathematical Modelling, vol. 34, no. 3, pp. 639-654, 2010.
8. G.-X. Huang, F. Yin, and K. Guo, "An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB=C," Journal of Computational and Applied Mathematics, vol. 212, no. 2, pp. 231-244, 2008.
9. A.-P. Liao and Y. Lei, "Least-squares solution with the minimum-norm for the matrix equation (AXB,GXH)=(C,D)," Computers & Mathematics with Applications, vol. 50, no. 3-4, pp. 539-549, 2005.
10. F. Li, X. Hu, and L. Zhang, "The generalized reflexive solution for a class of matrix equations (AX=B, XC=D)," Acta Mathematica Scientia B, vol. 28, no. 1, pp. 185-193, 2008.
11. A. Navarra, P. L. Odell, and D. M. Young, "A representation of the general common solution to the matrix equations A1XB1=C1 and A2XB2=C2 with applications," Computers & Mathematics with Applications, vol. 41, no. 7-8, pp. 929-935, 2001.
12. Z.-H. Peng, X.-Y. Hu, and L. Zhang, "An efficient algorithm for the least-squares reflexive solution of the matrix equation A1XB1=C1, A2XB2=C2," Applied Mathematics and Computation, vol. 181, no. 2, pp. 988-999, 2006.
13. Y.-X. Peng, X.-Y. Hu, and L. Zhang, "An iterative method for symmetric solutions and optimal approximation solution of the system of matrix equations A1XB1=C1, A2XB2=C2," Applied Mathematics and Computation, vol. 183, no. 2, pp. 1127-1137, 2006.
14. X. Sheng and G. Chen, "A finite iterative method for solving a pair of linear matrix equations (AXB,CXD)=(E,F)," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1350-1358, 2007.
15. A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, "Finite iterative solutions to a class of complex matrix equations with conjugate and transpose of the unknowns," Mathematical and Computer Modelling, vol. 52, no. 9-10, pp. 1463-1478, 2010.
16. A.-G. Wu, G. Feng, G.-R. Duan, and W.-J. Wu, "Iterative solutions to coupled Sylvester-conjugate matrix equations," Computers & Mathematics with Applications, vol. 60, no. 1, pp. 54-66, 2010.
17. A.-G. Wu, B. Li, Y. Zhang, and G.-R. Duan, "Finite iterative solutions to coupled Sylvester-conjugate matrix equations," Applied Mathematical Modelling, vol. 35, no. 3, pp. 1065-1080, 2011.
18. Y. X. Yuan, "Least squares solutions of matrix equation AXB=E, CXD=F," Journal of East China Shipbuilding Institute, vol. 18, no. 3, pp. 29-31, 2004.
19. B. Zhou, Z.-Y. Li, G.-R. Duan, and Y. Wang, "Weighted least squares solutions to general coupled Sylvester matrix equations," Journal of Computational and Applied Mathematics, vol. 224, no. 2, pp. 759-776, 2009.
20. S. K. Mitra, "Common solutions to a pair of linear matrix equations A1XB1=C1 and A2XB2=C2," Proceedings of the Cambridge Philosophical Society, vol. 74, pp. 213-216, 1973.
21. S. K. Mitra, "A pair of simultaneous linear matrix equations A1XB1=C1, A2XB2=C2 and a matrix programming problem," Linear Algebra and its Applications, vol. 131, pp. 97-123, 1990.
22. N. Shinozaki and M. Sibuya, "Consistency of a pair of matrix equations with an application," Keio Science and Technology Reports, vol. 27, no. 10, pp. 141-146, 1974.
23. J. W. van der Woude, Feedback decoupling and stabilization for linear systems with multiple exogenous variables [Ph.D. thesis], 1987.
24. Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, "Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations," Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 801-823, 2006.
25. Y.-T. Li and W.-J. Wu, "Symmetric and skew-antisymmetric solutions to systems of real quaternion matrix equations," Computers & Mathematics with Applications, vol. 55, no. 6, pp. 1142-1147, 2008.
26. M. Dehghan and M. Hajarian, "The reflexive and anti-reflexive solutions of a linear matrix equation and systems of matrix equations," The Rocky Mountain Journal of Mathematics, vol. 40, no. 3, pp. 825-848, 2010.
27. Q.-W. Wang, J.-H. Sun, and S.-Z. Li, "Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra," Linear Algebra and Its Applications, vol. 353, pp. 169-182, 2002.
28. Q.-W. Wang, "Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations," Computers & Mathematics with Applications, vol. 49, no. 5-6, pp. 641-650, 2005.
29. Q.-W. Wang, "A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity," Linear Algebra and Its Applications, vol. 384, pp. 43-54, 2004.
30. M. Dehghan and M. Hajarian, "An efficient algorithm for solving general coupled matrix equations and its application," Mathematical and Computer Modelling, vol. 51, no. 9-10, pp. 1118-1134, 2010.
31. M. Dehghan and M. Hajarian, "On the reflexive and anti-reflexive solutions of the generalised coupled Sylvester matrix equations," International Journal of Systems Science, vol. 41, no. 6, pp. 607-625, 2010.
32. M. Dehghan and M. Hajarian, "The general coupled matrix equations over generalized bisymmetric matrices," Linear Algebra and Its Applications, vol. 432, no. 6, pp. 1531-1552, 2010.
33. I. Jonsson and B. Kågström, "Recursive blocked algorithm for solving triangular systems. I. One-sided and coupled Sylvester-type matrix equations," ACM Transactions on Mathematical Software, vol. 28, no. 4, pp. 392-415, 2002.
34. I. Jonsson and B. Kågström, "Recursive blocked algorithm for solving triangular systems. II. Two-sided and generalized Sylvester and Lyapunov matrix equations," ACM Transactions on Mathematical Software, vol. 28, no. 4, pp. 416-435, 2002.
35. J. Cai and G. Chen, "An iterative algorithm for the least squares bisymmetric solutions of the matrix equations A1XB1=C1, A2XB2=C2," Mathematical and Computer Modelling, vol. 50, no. 7-8, pp. 1237-1244, 2009.

Copyright © 2012 Deqin Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

