Abstract

We investigate determinantal representations obtained by exploiting the limiting expression for the generalized inverse 𝐴(2)𝑇,𝑆. We show the equivalence between the existence and limiting expression of 𝐴(2)𝑇,𝑆 and certain limiting processes of matrices, and we deduce new determinantal representations of 𝐴(2)𝑇,𝑆, based on an analog of the classical adjoint matrix. Using this analog of the classical adjoint matrix, we present Cramer rules for the restricted matrix equation 𝐴𝑋𝐵=𝐷, ℛ(𝑋)⊂𝑇, 𝒩(𝑋)⊃𝑆̃.

1. Introduction

Throughout this paper ℂ𝑚×𝑛 denotes the set of 𝑚×𝑛 matrices over the complex number field ℂ, and ℂ𝑟𝑚×𝑛 denotes its subset in which every matrix has rank 𝑟. 𝐼 stands for the identity matrix of appropriate order (dimension).

Let 𝐴∈ℂ𝑚×𝑛, and let 𝑀 and 𝑁 be Hermitian positive definite matrices of orders 𝑚 and 𝑛, respectively. Consider the following equations:

$$AXA=A,\tag{1}$$
$$XAX=X,\tag{2}$$
$$(AX)^{*}=AX,\tag{3}$$
$$(MAX)^{*}=MAX,\tag{3M}$$
$$(XA)^{*}=XA,\tag{4}$$
$$(NXA)^{*}=NXA.\tag{4N}$$

𝑋 is called a {2}- (or outer) inverse of 𝐴 if it satisfies (2), and it is denoted by 𝐴(2). 𝑋 is called the Moore-Penrose inverse of 𝐴 if it satisfies (1), (2), (3), and (4), and it is denoted by 𝐴†. 𝑋 is called the weighted Moore-Penrose inverse of 𝐴 (with respect to 𝑀, 𝑁) if it satisfies (1), (2), (3M), and (4N), and it is denoted by 𝐴+𝑀,𝑁 (see, e.g., [1, 2]).
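The defining equations above are easy to check numerically. A minimal sketch (the test matrix is ours), verifying that `numpy.linalg.pinv` returns a matrix satisfying (1)-(4):

```python
import numpy as np

# Sketch: check the Penrose equations (1)-(4) for X = pinv(A).
# A is an arbitrary example matrix of ours.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
X = np.linalg.pinv(A)

print(np.allclose(A @ X @ A, A))             # (1) AXA = A
print(np.allclose(X @ A @ X, X))             # (2) XAX = X
print(np.allclose((A @ X).conj().T, A @ X))  # (3) (AX)* = AX
print(np.allclose((X @ A).conj().T, X @ A))  # (4) (XA)* = XA
```

Replacing the transposes by 𝑀- and 𝑁-weighted ones gives the analogous check for (3M) and (4N).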

Let 𝐴∈ℂ𝑛×𝑛. Then a matrix 𝑋 satisfying

$$A^{k}XA=A^{k},\tag{1^k}$$
$$XAX=X,\tag{2}$$
$$AX=XA,\tag{5}$$

where 𝑘 is some positive integer, is called the Drazin inverse of 𝐴 and denoted by 𝐴𝑑. The smallest positive integer 𝑘 such that 𝑋 and 𝐴 satisfy (1^k), (2), and (5) is called the Drazin index and denoted by 𝑘=Ind(𝐴). It is clear that Ind(𝐴) is the smallest positive integer 𝑘 satisfying rank(𝐴𝑘)=rank(𝐴𝑘+1) (see [3]). If 𝑘=1, then 𝑋 is called the group inverse of 𝐴 and denoted by 𝐴𝑔. As is well known, 𝐴𝑔 exists if and only if rank(𝐴)=rank(𝐴2). The generalized inverses, and in particular the Moore-Penrose, group, and Drazin inverses, have also been studied in the context of semigroups, rings, and Banach and 𝐶∗-algebras (see [4–8]).
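The rank characterization of Ind(𝐴) translates directly into a small numerical routine. A sketch (the function name is ours), assuming `numpy.linalg.matrix_rank` recovers the exact ranks:

```python
import numpy as np

# Sketch: Ind(A) is the smallest positive integer k with
# rank(A^k) == rank(A^{k+1}).
def drazin_index(A):
    Ak = A.copy()
    for k in range(1, A.shape[0] + 1):
        if np.linalg.matrix_rank(Ak) == np.linalg.matrix_rank(Ak @ A):
            return k
        Ak = Ak @ A
    return A.shape[0]  # the rank sequence always stabilizes by k = n

print(drazin_index(np.array([[0.0, 1.0], [0.0, 0.0]])))  # nilpotent: index 2
print(drazin_index(np.eye(3)))                           # invertible: index 1
```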

In addition, if a matrix 𝑋 satisfies (1) and (5), then it is called a {1,5}-inverse of 𝐴 and is denoted by 𝐴(1,5).

Let 𝐴∈ℂ𝑚×𝑛 and 𝑊∈ℂ𝑛×𝑚. Then the matrix 𝑋∈ℂ𝑚×𝑛 satisfying

$$(AW)^{k+1}XW=(AW)^{k},\tag{1^k_W}$$
$$XWAWX=X,\tag{2_W}$$
$$AWX=XWA,\tag{5_W}$$

where 𝑘 is some nonnegative integer, is called the 𝑊-weighted Drazin inverse of 𝐴 and is denoted by 𝑋=𝐴𝑑,𝑊 (see [9]). Clearly, when 𝑚=𝑛 and 𝑊=𝐼𝑛, 𝑋 reduces to the Drazin inverse of 𝐴.

Lemma 1.1 (see [1, Theorem  2.14]). Let 𝐴∈ℂ𝑟𝑚×𝑛, and let 𝑇 and 𝑆 be subspaces of ℂ𝑛 and ℂ𝑚, respectively, with dim𝑇=dim𝑆⟂=𝑡≤𝑟. Then A has a {2}-inverse 𝑋 such that ℛ(𝑋)=𝑇 and 𝒩(𝑋)=𝑆 if and only if 𝐴𝑇⊕𝑆=ℂ𝑚(1.1) in which case 𝑋 is unique and denoted by 𝐴(2)𝑇,𝑆.

If 𝐴(2)𝑇,𝑆 exists and there exists a matrix 𝐺 such that ℛ(𝐺)=𝑇 and 𝒩(𝐺)=𝑆, then 𝐺𝐴𝐴(2)𝑇,𝑆=𝐺 and 𝐴(2)𝑇,𝑆𝐴𝐺=𝐺.

It is well known that several important generalized inverses, such as the Moore-Penrose inverse 𝐴†, the weighted Moore-Penrose inverse 𝐴+𝑀,𝑁, the Drazin inverse 𝐴𝑑, and the group inverse 𝐴𝑔, are all outer inverses 𝐴(2)𝑇,𝑆, that is, {2}- (or outer) inverses of 𝐴 with the prescribed range 𝑇 and null space 𝑆, for specific choices of 𝑇 and 𝑆 (see [2, 10] in the context of complex matrices and [11] in the context of semigroups).

Determinantal representations of the generalized inverse 𝐴(2)𝑇,𝑆 were studied in [12, 13]. We investigate such representations further by exploiting the limiting expression for 𝐴(2)𝑇,𝑆. The paper is organized as follows. In Section 2, we establish the equivalence between the existence of 𝐴(2)𝑇,𝑆 and the existence of the limits lim𝜆→0𝐺(𝐴𝐺+𝜆𝐼)−1 or lim𝜆→0(𝐺𝐴+𝜆𝐼)−1𝐺, and we deduce new determinantal representations of 𝐴(2)𝑇,𝑆, based on an analog of the classical adjoint matrix, by exploiting the limiting expression. In Section 3, using the analog of the classical adjoint matrix from Section 2, we present Cramer rules for the restricted matrix equation 𝐴𝑋𝐵=𝐷, ℛ(𝑋)⊂𝑇, 𝒩(𝑋)⊃𝑆̃. In Section 4, we give an example of solving a restricted matrix equation by means of our expressions. We introduce the following notation.

For 1≤𝑘≤𝑛, the symbol 𝒬𝑘,𝑛 denotes the set {𝛼∶𝛼=(𝛼1,…,𝛼𝑘), 1≤𝛼1<⋯<𝛼𝑘≤𝑛, where 𝛼𝑖, 𝑖=1,…,𝑘, are integers}, and 𝒬𝑘,𝑛{𝑗}∶={𝛽∶𝛽∈𝒬𝑘,𝑛, 𝑗∈𝛽}, where 𝑗∈{1,…,𝑛}.

Let 𝐴=(ğ‘Žğ‘–ğ‘—)∈ℂ𝑚×𝑛. The symbols ğ‘Ž.𝑗 and ğ‘Žğ‘–. stand for the 𝑗th column and the 𝑖th row of 𝐴, respectively. In the same way, denote by ğ‘Žâˆ—.𝑗 and ğ‘Žâˆ—ğ‘–. the 𝑗th column and the 𝑖th row of Hermitian adjoint matrix 𝐴∗. The symbol 𝐴.𝑗(𝑏) (or 𝐴𝑗.(𝑏)) denotes the matrix obtained from 𝐴 by replacing its 𝑗th column (or row) with some vector 𝑏 (or 𝑏𝑇). We write the range of 𝐴 by ℛ(𝐴)={𝐴𝑥∶𝑥∈ℂ𝑛} and the null space of 𝐴 by 𝒩(𝐴)={𝑥∈ℂ𝑛∶𝐴𝑥=0}. Let ğµâˆˆâ„‚ğ‘Ã—ğ‘ž. We define the range of a pair of 𝐴 and 𝐵 as ℛ(𝐴,𝐵)={𝐴𝑊𝐵∶𝑊∈ℂ𝑛×𝑝}.

Let 𝛼∈𝒬𝑘,𝑚 and 𝛽∈𝒬𝑘,𝑛, where 1≤𝑘≤min{𝑚,𝑛}. Then |𝐴𝛼𝛽| denotes the minor of 𝐴 determined by the rows indexed by 𝛼 and the columns indexed by 𝛽. When 𝑚=𝑛, the cofactor of 𝑎𝑖𝑗 in 𝐴 is denoted by 𝜕|𝐴|/𝜕𝑎𝑖𝑗.

2. Analogs of the Adjugate Matrix for 𝐴(2)𝑇,𝑆

We start with the following theorem, which reveals the intrinsic relation between the existence of 𝐴(2)𝑇,𝑆 and that of lim𝜆→0𝐺(𝐴𝐺+𝜆𝐼)−1 or lim𝜆→0(𝐺𝐴+𝜆𝐼)−1𝐺. Here 𝜆→0 means that 𝜆 tends to 0 through any neighborhood of 0 in ℂ which excludes the nonzero eigenvalues of the relevant square matrix. In [14], Wei pointed out that the existence of 𝐴(2)𝑇,𝑆 implies the existence of lim𝜆→0𝐺(𝐴𝐺+𝜆𝐼)−1 or lim𝜆→0(𝐺𝐴+𝜆𝐼)−1𝐺. The following result shows that the converse is true under some condition.

Theorem 2.1. Let 𝐴∈ℂ𝑟𝑚×𝑛, and let 𝑇 and 𝑆 be subspaces of ℂ𝑛 and ℂ𝑚, respectively, with dim𝑇=dim𝑆⟂=𝑡≤𝑟. Let 𝐺∈ℂ𝑡𝑛×𝑚 with ℛ(𝐺)=𝑇 and 𝒩(𝐺)=𝑆. Then the following statements are equivalent:
(i) 𝐴(2)𝑇,𝑆 exists;
(ii) lim𝜆→0𝐺(𝐴𝐺+𝜆𝐼)−1 exists and rank(𝐴𝐺)=rank(𝐺);
(iii) lim𝜆→0(𝐺𝐴+𝜆𝐼)−1𝐺 exists and rank(𝐺𝐴)=rank(𝐺).
In this case, 𝐴(2)𝑇,𝑆=lim𝜆→0(𝐺𝐴+𝜆𝐼)−1𝐺=lim𝜆→0𝐺(𝐴𝐺+𝜆𝐼)−1.(2.1)

Proof. (i)⇔(ii) Assume that 𝐴(2)𝑇,𝑆 exists. By [14, Theorem  2.4], lim𝜆→0𝐺(𝐴𝐺+𝜆𝐼)−1 exists. Since 𝐺=𝐴(2)𝑇,𝑆𝐴𝐺, rank(𝐴𝐺)=rank(𝐺).
Conversely, assume that lim𝜆→0𝐺(𝐴𝐺+𝜆𝐼)−1 exists and rank(𝐴𝐺)=rank(𝐺). So lim𝜆→0(𝐴𝐺+𝜆𝐼)−1𝐴𝐺=lim𝜆→0𝐴𝐺(𝐴𝐺+𝜆𝐼)−1(2.2) exists. By [15, Theorem], (𝐴𝐺)𝑔 exists. So (𝐴𝐺)(1,5) exists, and then, by [13, Theorem  2], 𝐴(2)𝑇,𝑆 exists.
Similarly, we can show that (i)⇔(iii). Equation (2.1) comes from [14, equation (2.16)].
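The limit (2.1) can be approximated numerically by taking a small 𝜆. A sketch (toy data ours) that illustrates this on a pair 𝐴, 𝐺 and checks that the result is an outer inverse:

```python
import numpy as np

# Sketch of (2.1): A^{(2)}_{T,S} ~ (GA + lam*I)^{-1} G for small lam,
# where R(G) = T and N(G) = S.  A and G are toy data of ours.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])
G = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
lam = 1e-10
X = np.linalg.solve(G @ A + lam * np.eye(2), G)

print(np.allclose(X @ A @ X, X))  # X is a {2}-inverse of A
```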

Lemma 2.2. Let 𝐴=(𝑎𝑖𝑗)∈ℂ𝑚×𝑛 and 𝐺=(𝑔𝑖𝑗)∈ℂ𝑡𝑛×𝑚. Then rank (𝐺𝐴).𝑖(𝑔.𝑗)≤𝑡, where 1≤𝑖≤𝑛, 1≤𝑗≤𝑚, and rank (𝐴𝐺)𝑖.(𝑔𝑗.)≤𝑡, where 1≤𝑖≤𝑚, 1≤𝑗≤𝑛.

Proof. Let 𝑃𝑖𝑘(𝑎) be the 𝑛×𝑛 elementary matrix with 𝑎 in the (𝑖,𝑘) entry, 1 in all diagonal entries, and 0 elsewhere. Right-multiplication by 𝑃𝑖𝑘(−𝑎𝑗𝑘) for all 𝑘≠𝑖 subtracts 𝑎𝑗𝑘 times the 𝑖th column from the 𝑘th column, so that

$$(GA)_{.i}(g_{.j})\prod_{k\neq i}P_{ik}(-a_{jk})=
\begin{pmatrix}
\sum_{k\neq j}g_{1k}a_{k1} & \cdots & g_{1j} & \cdots & \sum_{k\neq j}g_{1k}a_{kn}\\
\vdots & & \vdots & & \vdots\\
\sum_{k\neq j}g_{nk}a_{k1} & \cdots & g_{nj} & \cdots & \sum_{k\neq j}g_{nk}a_{kn}
\end{pmatrix}
=G\widehat{A},\tag{2.3}$$

where the 𝑖th column of the middle matrix is 𝑔.𝑗 and $\widehat{A}$ is obtained from 𝐴 by replacing its 𝑗th row with the 𝑖th unit row vector and its 𝑖th column with the 𝑗th unit column vector. It follows from the invertibility of 𝑃𝑖𝑘(𝑎), 𝑖≠𝑘, that rank (𝐺𝐴).𝑖(𝑔.𝑗)=rank $G\widehat{A}$ ≤ rank(𝐺)=𝑡.
Analogously, the inequality rank (𝐴𝐺)𝑖.(𝑔𝑗.)≤𝑡 can be proved. This completes the proof.

Recall that if $f_{A}(\lambda)=\det(\lambda I+A)=\lambda^{n}+d_{1}\lambda^{n-1}+\cdots+d_{n-1}\lambda+d_{n}$ for an 𝑛×𝑛 matrix 𝐴 over ℂ, then 𝑑𝑖 is the sum of all 𝑖×𝑖 principal minors of 𝐴, where 𝑖=1,…,𝑛 (see, e.g., [16]).

Theorem 2.3. Let 𝐴, 𝑇, 𝑆, and 𝐺 be the same as in Theorem 2.1. Write 𝐺=(𝑔𝑖𝑗). Suppose that the generalized inverse 𝐴(2)𝑇,𝑆 of 𝐴 exists. Then 𝐴(2)𝑇,𝑆 can be represented as follows:
$$A^{(2)}_{T,S}=\left(\frac{x_{ij}}{d_{t}(GA)}\right)_{n\times m},\tag{2.4}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\left|\left((GA)_{.i}(g_{.j})\right)^{\beta}_{\beta}\right|,\qquad d_{t}(GA)=\sum_{\beta\in\mathcal{Q}_{t,n}}\left|(GA)^{\beta}_{\beta}\right|,\tag{2.5}$$
or
$$A^{(2)}_{T,S}=\left(\frac{y_{ij}}{d_{t}(AG)}\right)_{n\times m},\tag{2.6}$$
where
$$y_{ij}=\sum_{\alpha\in\mathcal{Q}_{t,m}\{j\}}\left|\left((AG)_{j.}(g_{i.})\right)^{\alpha}_{\alpha}\right|,\qquad d_{t}(AG)=\sum_{\alpha\in\mathcal{Q}_{t,m}}\left|(AG)^{\alpha}_{\alpha}\right|.\tag{2.7}$$

Proof. We will only show the representation (2.5), since the proof of (2.7) is similar. If −𝜆 is not an eigenvalue of 𝐺𝐴, then the matrix 𝜆𝐼+𝐺𝐴 is invertible, and
$$(\lambda I+GA)^{-1}=\frac{1}{\det(\lambda I+GA)}\begin{pmatrix}X_{11}&X_{21}&\cdots&X_{n1}\\X_{12}&X_{22}&\cdots&X_{n2}\\\vdots&\vdots&&\vdots\\X_{1n}&X_{2n}&\cdots&X_{nn}\end{pmatrix},\tag{2.8}$$
where 𝑋𝑖𝑗, 𝑖,𝑗=1,…,𝑛, are the cofactors of 𝜆𝐼+𝐺𝐴. It is easy to see that
$$\sum_{l=1}^{n}X_{li}\,g_{lj}=\det\left((\lambda I+GA)_{.i}(g_{.j})\right).\tag{2.9}$$
So, by (2.1),
$$A^{(2)}_{T,S}=\lim_{\lambda\to 0}\left(\frac{\det\left((\lambda I+GA)_{.i}(g_{.j})\right)}{\det(\lambda I+GA)}\right)_{n\times m}.\tag{2.10}$$
The characteristic polynomial of 𝐺𝐴 is
$$f_{GA}(\lambda)=\det(\lambda I+GA)=\lambda^{n}+d_{1}\lambda^{n-1}+d_{2}\lambda^{n-2}+\cdots+d_{n},\tag{2.11}$$
where 𝑑𝑖 (1≤𝑖≤𝑛) is the sum of the 𝑖×𝑖 principal minors of 𝐺𝐴. Since rank(𝐺𝐴)≤rank(𝐺)=𝑡, we have 𝑑𝑛=𝑑𝑛−1=⋯=𝑑𝑡+1=0 and
$$\det(\lambda I+GA)=\lambda^{n}+d_{1}\lambda^{n-1}+d_{2}\lambda^{n-2}+\cdots+d_{t}\lambda^{n-t}.\tag{2.12}$$
Expanding det((𝜆𝐼+𝐺𝐴).𝑖(𝑔.𝑗)), we have
$$\det\left((\lambda I+GA)_{.i}(g_{.j})\right)=x^{(ij)}_{1}\lambda^{n-1}+x^{(ij)}_{2}\lambda^{n-2}+\cdots+x^{(ij)}_{n},\tag{2.13}$$
where $x^{(ij)}_{k}=\sum_{\beta\in\mathcal{Q}_{k,n}\{i\}}\left|\left((GA)_{.i}(g_{.j})\right)^{\beta}_{\beta}\right|$, 1≤𝑘≤𝑛, for 1≤𝑖≤𝑛 and 1≤𝑗≤𝑚.
By Lemma 2.2, rank (𝐺𝐴).𝑖(𝑔.𝑗)≤𝑡, and so $\left|\left((GA)_{.i}(g_{.j})\right)^{\beta}_{\beta}\right|=0$ for 𝑘>𝑡 and 𝛽∈𝒬𝑘,𝑛{𝑖}, for all 𝑖,𝑗. Therefore, $x^{(ij)}_{k}=0$ for 𝑡<𝑘≤𝑛, for all 𝑖,𝑗. Consequently,
$$\det\left((\lambda I+GA)_{.i}(g_{.j})\right)=x^{(ij)}_{1}\lambda^{n-1}+x^{(ij)}_{2}\lambda^{n-2}+\cdots+x^{(ij)}_{t}\lambda^{n-t}.\tag{2.14}$$
Substituting (2.12) and (2.14) into (2.10) yields
$$A^{(2)}_{T,S}=\lim_{\lambda\to 0}\left(\frac{x^{(ij)}_{1}\lambda^{n-1}+\cdots+x^{(ij)}_{t}\lambda^{n-t}}{\lambda^{n}+d_{1}\lambda^{n-1}+\cdots+d_{t}\lambda^{n-t}}\right)_{n\times m}=\left(\frac{x^{(ij)}_{t}}{d_{t}}\right)_{n\times m}.\tag{2.15}$$
Writing 𝑥𝑖𝑗 for $x^{(ij)}_{t}$ in the above equation, we reach (2.5).
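A direct transcription of (2.4)-(2.5) (helper names ours) can be checked against the limiting expression (2.1); the minors run over all index sets 𝛽 of size 𝑡 that contain 𝑖:

```python
import numpy as np
from itertools import combinations

# Sketch of (2.4)-(2.5): entry (i, j) sums the t x t principal minors of
# GA with column i replaced by g_{.j}, over index sets beta containing i.
def minor(M, idx):
    return np.linalg.det(M[np.ix_(idx, idx)])

def a2ts(A, G):
    GA = G @ A
    n, m = G.shape
    t = np.linalg.matrix_rank(G)
    d_t = sum(minor(GA, b) for b in combinations(range(n), t))
    X = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            M = GA.copy()
            M[:, i] = G[:, j]                    # (GA)_{.i}(g_{.j})
            X[i, j] = sum(minor(M, b)
                          for b in combinations(range(n), t) if i in b)
    return X / d_t

A = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])
G = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
X = a2ts(A, G)
# agrees with the limiting expression (2.1):
print(np.allclose(X, np.linalg.solve(G @ A + 1e-10 * np.eye(2), G), atol=1e-6))
```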

Remark 2.4. The proofs of Lemma 2.2 and Theorem 2.3 follow the general techniques and methods developed in [17].

Remark 2.5. (i) By using (2.5), we can obtain (2.17) in [12, Theorem 2.3]. In fact, 𝑢=𝑑𝑡(𝐺𝐴) and, by the Binet-Cauchy formula,
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\left|\left((GA)_{.i}(g_{.j})\right)^{\beta}_{\beta}\right|
=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\sum_{k}g_{kj}\frac{\partial\left|(GA)^{\beta}_{\beta}\right|}{\partial s_{ki}}
=\sum_{\beta\in\mathcal{Q}_{t,n},\,\alpha\in\mathcal{Q}_{t,m}}\sum_{k}g_{kj}\frac{\partial\left|G^{\alpha}_{\beta}\right|}{\partial g_{kj}}\frac{\partial\left|A^{\beta}_{\alpha}\right|}{\partial a_{ji}}
=\sum_{\alpha\in\mathcal{Q}_{t,m},\,\beta\in\mathcal{Q}_{t,n}}\det G^{\alpha}_{\beta}\,\frac{\partial\left|A^{\beta}_{\alpha}\right|}{\partial a_{ji}},\tag{2.16}$$
where 𝑠𝑘𝑗=(𝐺𝐴)𝑘𝑗. Note that $\partial|A^{\beta}_{\alpha}|/\partial a_{ij}=0$ if 𝑖∉𝛼 or 𝑗∉𝛽. In addition, using the symbols in [13], we can rewrite (2.5) as [13, equation (13)] over ℂ.
(ii) This method is especially efficient when 𝐺𝐴 or 𝐴𝐺 is given (comparing with [12, Theorem  2]).

Specializing Theorem 2.3 to 𝐺=(𝑔𝑖𝑗)=𝑁−1𝐴∗𝑀, where 𝑀 and 𝑁 are Hermitian positive definite matrices, we obtain the following corollary, in which 𝑔.𝑗 and 𝑔𝑖. denote the 𝑗th column and the 𝑖th row of 𝑁−1𝐴∗𝑀.

Corollary 2.6. Let 𝐴∈ℂ𝑟𝑚×𝑛 and 𝐺=𝑁−1𝐴∗𝑀, where 𝑀 and 𝑁 are Hermitian positive definite matrices of orders 𝑚 and 𝑛, respectively. Then
$$A^{\dagger}_{M,N}=\left(\frac{x_{ij}}{d_{r}(GA)}\right)_{n\times m},\tag{2.17}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{r,n}\{i\}}\left|\left((GA)_{.i}(g_{.j})\right)^{\beta}_{\beta}\right|,\qquad d_{r}(GA)=\sum_{\beta\in\mathcal{Q}_{r,n}}\left|(GA)^{\beta}_{\beta}\right|,\tag{2.18}$$
or
$$A^{\dagger}_{M,N}=\left(\frac{y_{ij}}{d_{r}(AG)}\right)_{n\times m},\tag{2.19}$$
where
$$y_{ij}=\sum_{\alpha\in\mathcal{Q}_{r,m}\{j\}}\left|\left((AG)_{j.}(g_{i.})\right)^{\alpha}_{\alpha}\right|,\qquad d_{r}(AG)=\sum_{\alpha\in\mathcal{Q}_{r,m}}\left|(AG)^{\alpha}_{\alpha}\right|.\tag{2.20}$$

If 𝑀 and 𝑁 are identity matrices, then we can obtain the following result.

Corollary 2.7 (see [17, Theorem 2.2]). The Moore-Penrose inverse 𝐴† of 𝐴=(𝑎𝑖𝑗)∈ℂ𝑟𝑚×𝑛 can be represented as follows:
$$A^{\dagger}=\left(\frac{x_{ij}}{d_{r}(A^{*}A)}\right)_{n\times m},\tag{2.21}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{r,n}\{i\}}\left|\left((A^{*}A)_{.i}(a^{*}_{.j})\right)^{\beta}_{\beta}\right|,\qquad d_{r}(A^{*}A)=\sum_{\beta\in\mathcal{Q}_{r,n}}\left|(A^{*}A)^{\beta}_{\beta}\right|,\tag{2.22}$$
or
$$A^{\dagger}=\left(\frac{y_{ij}}{d_{r}(AA^{*})}\right)_{n\times m},\tag{2.23}$$
where
$$y_{ij}=\sum_{\alpha\in\mathcal{Q}_{r,m}\{j\}}\left|\left((AA^{*})_{j.}(a^{*}_{i.})\right)^{\alpha}_{\alpha}\right|,\qquad d_{r}(AA^{*})=\sum_{\alpha\in\mathcal{Q}_{r,m}}\left|(AA^{*})^{\alpha}_{\alpha}\right|.\tag{2.24}$$

Note that 𝐴𝑑,𝑊=(𝑊𝐴𝑊)(2)ℛ((𝐴𝑊)𝑘𝐴),𝒩((𝐴𝑊)𝑘𝐴). Therefore, when 𝐺=(𝐴𝑊)𝑘𝐴 in Theorem 2.3, we have the following corollary.

Corollary 2.8. Let 𝐴∈ℂ𝑚×𝑛, 𝑊∈ℂ𝑛×𝑚, and 𝑘=max{Ind(𝐴𝑊),Ind(𝑊𝐴)}. If rank(𝐴𝑊)𝑘=𝑡, rank(𝑊𝐴)𝑘=𝑟, and (𝐴𝑊)𝑘𝐴=(𝑐𝑖𝑗)𝑚×𝑛, then
$$A_{d,W}=\left(\frac{x_{ij}}{d_{t}\left((AW)^{k+2}\right)}\right)_{m\times n},\tag{2.25}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{t,m}\{i\}}\left|\left(\left((AW)^{k+2}\right)_{.i}(c_{.j})\right)^{\beta}_{\beta}\right|,\qquad d_{t}\left((AW)^{k+2}\right)=\sum_{\beta\in\mathcal{Q}_{t,m}}\left|\left((AW)^{k+2}\right)^{\beta}_{\beta}\right|,\tag{2.26}$$
or
$$A_{d,W}=\left(\frac{y_{ij}}{d_{r}\left((WA)^{k+2}\right)}\right)_{m\times n},\tag{2.27}$$
where
$$y_{ij}=\sum_{\alpha\in\mathcal{Q}_{r,n}\{j\}}\left|\left(\left((WA)^{k+2}\right)_{j.}(c_{i.})\right)^{\alpha}_{\alpha}\right|,\qquad d_{r}\left((WA)^{k+2}\right)=\sum_{\alpha\in\mathcal{Q}_{r,n}}\left|\left((WA)^{k+2}\right)^{\alpha}_{\alpha}\right|.\tag{2.28}$$

When 𝐺=𝐴𝑘 with 𝑘=Ind(𝐴) in Theorem 2.3, we have the following corollary.

Corollary 2.9 (see [17, Theorem 3.3]). Let 𝐴∈ℂ𝑛×𝑛 with Ind(𝐴)=𝑘 and rank(𝐴𝑘)=𝑟, and let 𝐴𝑘=(𝑎(𝑘)𝑖𝑗)𝑛×𝑛. Then
$$A_{d}=\left(\frac{x_{ij}}{d_{r}(A^{k+1})}\right)_{n\times n},\tag{2.29}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{r,n}\{i\}}\left|\left((A^{k+1})_{.i}(a^{(k)}_{.j})\right)^{\beta}_{\beta}\right|,\qquad d_{r}(A^{k+1})=\sum_{\beta\in\mathcal{Q}_{r,n}}\left|(A^{k+1})^{\beta}_{\beta}\right|.\tag{2.30}$$

Finally, we turn our attention to the two projectors 𝐴(2)𝑇,𝑆𝐴 and 𝐴𝐴(2)𝑇,𝑆. The limiting expressions for 𝐴(2)𝑇,𝑆 in (2.1) give
$$A^{(2)}_{T,S}A=\lim_{\lambda\to 0}(GA+\lambda I)^{-1}GA,\qquad AA^{(2)}_{T,S}=\lim_{\lambda\to 0}AG(AG+\lambda I)^{-1}.\tag{2.31}$$

Corollary 2.10. Let 𝐴, 𝑇, 𝑆, and 𝐺 be the same as in Theorem 2.1. Write 𝐺𝐴=(𝑠𝑖𝑗) and 𝐴𝐺=(ℎ𝑖𝑗). Suppose that 𝐴(2)𝑇,𝑆 exists. Then 𝐴(2)𝑇,𝑆𝐴 and 𝐴𝐴(2)𝑇,𝑆 can be represented as follows:
$$A^{(2)}_{T,S}A=\left(\frac{x_{ij}}{d_{t}(GA)}\right)_{n\times n},\tag{2.32}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\left|\left((GA)_{.i}(s_{.j})\right)^{\beta}_{\beta}\right|,\qquad d_{t}(GA)=\sum_{\beta\in\mathcal{Q}_{t,n}}\left|(GA)^{\beta}_{\beta}\right|,$$
and
$$AA^{(2)}_{T,S}=\left(\frac{y_{ij}}{d_{t}(AG)}\right)_{m\times m},\tag{2.33}$$
where
$$y_{ij}=\sum_{\alpha\in\mathcal{Q}_{t,m}\{j\}}\left|\left((AG)_{j.}(h_{i.})\right)^{\alpha}_{\alpha}\right|,\qquad d_{t}(AG)=\sum_{\alpha\in\mathcal{Q}_{t,m}}\left|(AG)^{\alpha}_{\alpha}\right|.\tag{2.34}$$

3. Cramer Rules for the Solution of the Restricted Matrix Equation

The restricted matrix equation problem is to find solutions of a matrix equation, or a system of matrix equations, within a set of matrices satisfying some constraint conditions. Such problems play an important role in structural design, system identification, principal component analysis, exploration, remote sensing, biology, electricity, molecular spectroscopy, automatic control theory, vibration theory, finite elements, circuit theory, linear optimal control, and so on. For example, the finite-element static model correction problem can be transformed into solving, under some constraint conditions, the matrix equation 𝐴𝑋=𝐵 and finding its best approximation; the undamped finite-element dynamic model correction problem can likewise be reduced to solving, under constraints, the matrix equation 𝐴𝑇𝑋𝐴=𝐵. These applications have motivated the gradual development of the theory of restricted matrix equations in recent years (see [18–27]).

In this section, we consider the restricted matrix equation
$$AXB=D,\qquad \mathcal{R}(X)\subset T,\quad \mathcal{N}(X)\supset\tilde S,\tag{3.1}$$
where $A\in\mathbb{C}^{m\times n}_{r}$, $B\in\mathbb{C}^{p\times q}_{\tilde r}$, $D\in\mathbb{C}^{m\times q}$, $T\subset\mathbb{C}^{n}$, $S\subset\mathbb{C}^{m}$, $\tilde T\subset\mathbb{C}^{q}$, and $\tilde S\subset\mathbb{C}^{p}$ satisfy
$$\dim(T)=\dim(S^{\perp})=t\le r,\qquad \dim(\tilde T)=\dim(\tilde S^{\perp})=\tilde t\le\tilde r.\tag{3.2}$$
Assume that there exist matrices $G\in\mathbb{C}^{n\times m}$ and $\tilde G\in\mathbb{C}^{q\times p}$ satisfying
$$\mathcal{R}(G)=T,\quad \mathcal{N}(G)=S,\qquad \mathcal{R}(\tilde G)=\tilde T,\quad \mathcal{N}(\tilde G)=\tilde S.\tag{3.3}$$
If $A^{(2)}_{T,S}$ and $B^{(2)}_{\tilde T,\tilde S}$ exist and $D\in\mathcal{R}(AG,\tilde GB)$, then the restricted matrix equation (3.1) has the unique solution
$$X=A^{(2)}_{T,S}\,D\,B^{(2)}_{\tilde T,\tilde S}\tag{3.4}$$
(see [2, Theorem 3.3.3] for the proof).

In particular, when 𝐷 is a vector 𝑏 and $B=\tilde G=I_{1}$, the restricted matrix equation (3.1) becomes the restricted linear equation
$$Ax=b,\qquad x\in T.\tag{3.5}$$
If $b\in A\mathcal{R}(G)$, then $x=A^{(2)}_{T,S}b$ is the unique solution of the restricted linear equation (3.5) (see also [10, Theorem 2.1]).
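A numerical sketch (toy data ours) of this special case: choosing 𝑏∈𝐴ℛ(𝐺) guarantees solvability, and $x=A^{(2)}_{T,S}b$ solves 𝐴𝑥=𝑏, with $A^{(2)}_{T,S}$ approximated via the limit (2.1):

```python
import numpy as np

# Sketch: solve Ax = b, x in T, via x = A^{(2)}_{T,S} b.
A = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]])
G = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])    # R(G) = T, N(G) = S
A2 = np.linalg.solve(G @ A + 1e-10 * np.eye(2), G)  # ~ A^{(2)}_{T,S}
b = A @ G @ np.array([1.0, 2.0, 3.0])               # ensures b in A R(G)
x = A2 @ b

print(np.allclose(A @ x, b))  # x solves the restricted equation
```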

Theorem 3.1. Let 𝐴, 𝐵, 𝐷=(𝑑𝑖𝑗), 𝐺=(𝑔𝑖𝑗), $\tilde G=(\tilde g_{ij})$, 𝑇, 𝑆, $\tilde T$, and $\tilde S$ be as above. If $A^{(2)}_{T,S}$ and $B^{(2)}_{\tilde T,\tilde S}$ exist and $D\in\mathcal{R}(AG,\tilde GB)$, then $X=A^{(2)}_{T,S}DB^{(2)}_{\tilde T,\tilde S}$ is the unique solution of the restricted matrix equation (3.1), and it can be represented as
$$x_{ij}=\frac{\sum_{k=1}^{m}\sum_{\beta\in\mathcal{Q}_{t,n}\{i\},\,\alpha\in\mathcal{Q}_{\tilde t,p}\{j\}}\left|\left((GA)_{.i}(g_{.k})\right)^{\beta}_{\beta}\right|\left|\left((B\tilde G)_{j.}(f_{k.})\right)^{\alpha}_{\alpha}\right|}{d_{t}(GA)\,d_{\tilde t}(B\tilde G)}\tag{3.6}$$
or
$$x_{ij}=\frac{\sum_{k=1}^{q}\sum_{\beta\in\mathcal{Q}_{t,n}\{i\},\,\alpha\in\mathcal{Q}_{\tilde t,p}\{j\}}\left|\left((GA)_{.i}(f_{.k})\right)^{\beta}_{\beta}\right|\left|\left((B\tilde G)_{j.}(\tilde g_{k.})\right)^{\alpha}_{\alpha}\right|}{d_{t}(GA)\,d_{\tilde t}(B\tilde G)},\tag{3.7}$$
where $f_{k.}=d_{k.}\tilde G$ and $f_{.k}=Gd_{.k}$, 𝑖=1,…,𝑛, and 𝑗=1,…,𝑝.

Proof. By the argument above, $X=A^{(2)}_{T,S}DB^{(2)}_{\tilde T,\tilde S}$ is the unique solution of the restricted matrix equation (3.1). Setting $Y=DB^{(2)}_{\tilde T,\tilde S}$ and using (2.7), we get
$$y_{kj}=\sum_{h=1}^{q}d_{kh}\left(B^{(2)}_{\tilde T,\tilde S}\right)_{hj}
=\sum_{h=1}^{q}d_{kh}\,\frac{\sum_{\alpha\in\mathcal{Q}_{\tilde t,p}\{j\}}\left|\left((B\tilde G)_{j.}(\tilde g_{h.})\right)^{\alpha}_{\alpha}\right|}{d_{\tilde t}(B\tilde G)}
=\frac{\sum_{\alpha\in\mathcal{Q}_{\tilde t,p}\{j\}}\left|\left((B\tilde G)_{j.}\left(\sum_{h=1}^{q}d_{kh}\tilde g_{h.}\right)\right)^{\alpha}_{\alpha}\right|}{d_{\tilde t}(B\tilde G)}
=\frac{\sum_{\alpha\in\mathcal{Q}_{\tilde t,p}\{j\}}\left|\left((B\tilde G)_{j.}(f_{k.})\right)^{\alpha}_{\alpha}\right|}{d_{\tilde t}(B\tilde G)},\tag{3.8}$$
where $f_{k.}=d_{k.}\tilde G$. Since $X=A^{(2)}_{T,S}Y$, by (2.5),
$$x_{ij}=\sum_{k=1}^{m}\left(A^{(2)}_{T,S}\right)_{ik}y_{kj}
=\sum_{k=1}^{m}\frac{\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\left|\left((GA)_{.i}(g_{.k})\right)^{\beta}_{\beta}\right|}{d_{t}(GA)}\cdot\frac{\sum_{\alpha\in\mathcal{Q}_{\tilde t,p}\{j\}}\left|\left((B\tilde G)_{j.}(f_{k.})\right)^{\alpha}_{\alpha}\right|}{d_{\tilde t}(B\tilde G)}.\tag{3.9}$$
Hence we have (3.6).
We can obtain (3.7) in the same way.

In particular, when 𝐷 is a vector 𝑏 and $B=\tilde G=I_{1}$ in the above theorem, we obtain the following result from (3.7).

Theorem 3.2. Let 𝐴, 𝐺, 𝑇, and 𝑆 be as above. If $b\in A\mathcal{R}(G)$, then $x=A^{(2)}_{T,S}b$ is the unique solution of the restricted linear equation 𝐴𝑥=𝑏, 𝑥∈𝑇, and it can be represented as
$$x_{i}=\frac{\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\left|\left((GA)_{.i}(f)\right)^{\beta}_{\beta}\right|}{d_{t}(GA)},\qquad i=1,\ldots,n,\tag{3.10}$$
where $f=Gb$.

Remark 3.3. Using the symbols in [13], we can rewrite (3.10) as [13, equation (27)].

4. Example

Let
$$A=\begin{pmatrix}1&2&2&2\\0&2&1&1\\0&0&4&2\\0&0&2&1\\0&0&0&0\end{pmatrix},\quad
B=\begin{pmatrix}2&1\\1&0\\0&1\end{pmatrix},\quad
D=\begin{pmatrix}0&1\\-1&0\\0&0\\0&0\\0&0\end{pmatrix},\quad
G=\begin{pmatrix}1&-1&2&0&0\\0&2&1&0&0\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix},\quad
\tilde G=\begin{pmatrix}1&0&0\\0&2&0\end{pmatrix}.\tag{4.1}$$
Obviously, rank(𝐴)=3, dim(𝐴𝑇)=dim(𝑇)=2, and
$$T=\mathcal{R}(G)\subset\mathbb{C}^{4},\quad S=\mathcal{N}(G)\subset\mathbb{C}^{5},\quad \tilde T=\mathcal{R}(\tilde G)\subset\mathbb{C}^{2},\quad \tilde S=\mathcal{N}(\tilde G)\subset\mathbb{C}^{3}.\tag{4.2}$$
It is easy to verify that $AT\oplus S=\mathbb{C}^{5}$ and $B\tilde T\oplus\tilde S=\mathbb{C}^{3}$. Thus, $A^{(2)}_{T,S}$ and $B^{(2)}_{\tilde T,\tilde S}$ exist by Lemma 1.1.

Now consider the restricted matrix equation
$$AXB=D,\qquad \mathcal{R}(X)\subset T,\quad \mathcal{N}(X)\supset\tilde S.\tag{4.3}$$
Clearly,
$$AG=\begin{pmatrix}1&3&4&0&0\\0&4&2&0&0\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\end{pmatrix},\qquad
\tilde GB=\begin{pmatrix}2&1\\2&0\end{pmatrix},\tag{4.4}$$
and it is easy to verify that $\mathcal{R}(D)\subset\mathcal{R}(AG)$ and $\mathcal{N}(D)\supset\mathcal{N}(\tilde GB)$ hold.

Note that $\mathcal{R}(D)\subset\mathcal{R}(AG)$ and $\mathcal{N}(D)\supset\mathcal{N}(\tilde GB)$ if and only if $D\in\mathcal{R}(AG,\tilde GB)$. So, by Theorem 3.1, the unique solution of (4.3) exists.

Computing
$$GA=\begin{pmatrix}1&0&9&5\\0&4&6&4\\0&0&0&0\\0&0&0&0\end{pmatrix},\qquad
B\tilde G=\begin{pmatrix}2&2&0\\1&0&0\\0&2&0\end{pmatrix},$$
$$d_{2}(GA)=\begin{vmatrix}1&0\\0&4\end{vmatrix}+\begin{vmatrix}1&9\\0&0\end{vmatrix}+\begin{vmatrix}1&5\\0&0\end{vmatrix}+\begin{vmatrix}4&6\\0&0\end{vmatrix}+\begin{vmatrix}4&4\\0&0\end{vmatrix}+\begin{vmatrix}0&0\\0&0\end{vmatrix}=4,$$
$$d_{2}(B\tilde G)=\begin{vmatrix}2&2\\1&0\end{vmatrix}+\begin{vmatrix}2&0\\0&0\end{vmatrix}+\begin{vmatrix}0&0\\2&0\end{vmatrix}=-2,\qquad
f=GD=\begin{pmatrix}1&1\\-2&0\\0&0\\0&0\end{pmatrix},\tag{4.5}$$
and setting $y_{ik}=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\left|\left((GA)_{.i}(f_{.k})\right)^{\beta}_{\beta}\right|$, we have Table 1.

Similarly, setting $z_{kj}=\sum_{\alpha\in\mathcal{Q}_{\tilde t,p}\{j\}}\left|\left((B\tilde G)_{j.}(\tilde g_{k.})\right)^{\alpha}_{\alpha}\right|$, we have Table 2.

So, by (3.7), we have
$$X=\left(x_{ij}\right)=\begin{pmatrix}1&-1&0\\0&-\tfrac{1}{2}&0\\0&0&0\\0&0&0\end{pmatrix}.\tag{4.6}$$
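As a sanity check, the data of (4.1) and the solution (4.6), transcribed below, can be verified numerically:

```python
import numpy as np

# Check that X from (4.6) solves AXB = D for the data in (4.1).
A = np.array([[1, 2, 2, 2], [0, 2, 1, 1], [0, 0, 4, 2],
              [0, 0, 2, 1], [0, 0, 0, 0]], dtype=float)
B = np.array([[2, 1], [1, 0], [0, 1]], dtype=float)
D = np.array([[0, 1], [-1, 0], [0, 0], [0, 0], [0, 0]], dtype=float)
X = np.array([[1, -1, 0], [0, -0.5, 0], [0, 0, 0], [0, 0, 0]])

print(np.allclose(A @ X @ B, D))
```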

Acknowledgments

The authors would like to thank the referees for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China under Grant 11061005, the Ministry of Education Science and Technology Key Project under Grant 210164, and the Open Fund of the Guangxi Key Laboratory of Hybrid Computational and IC Design Analysis under Grant HCIC201103.