Abstract

We investigate determinantal representations obtained by exploiting the limiting expression for the generalized inverse $A^{(2)}_{T,S}$. We establish the equivalence between the existence and limiting expression of $A^{(2)}_{T,S}$ and certain limiting processes of matrices, and we deduce new determinantal representations of $A^{(2)}_{T,S}$ based on an analog of the classical adjoint matrix. Using this analog of the classical adjoint matrix, we present Cramer rules for the restricted matrix equation $AXB=D$, $\mathcal{R}(X)\subseteq T$, $\mathcal{N}(X)\supseteq\tilde{S}$.

1. Introduction

Throughout this paper, $\mathbb{C}^{m\times n}$ denotes the set of $m\times n$ matrices over the complex field $\mathbb{C}$, and $\mathbb{C}^{m\times n}_{r}$ denotes its subset of matrices of rank $r$. $I$ stands for the identity matrix of appropriate order.

Let $A\in\mathbb{C}^{m\times n}$, and let $M$ and $N$ be Hermitian positive definite matrices of orders $m$ and $n$, respectively. Consider the following equations:
$$AXA=A,\quad(1)\qquad XAX=X,\quad(2)\qquad (AX)^{*}=AX,\quad(3)$$
$$(MAX)^{*}=MAX,\quad(4)\qquad (XA)^{*}=XA,\quad(5)\qquad (NXA)^{*}=NXA.\quad(6)$$
$X$ is called a $\{2\}$- (or outer) inverse of $A$ if it satisfies (2), and it is denoted by $A^{(2)}$. $X$ is called the Moore-Penrose inverse of $A$ if it satisfies (1), (2), (3), and (5), and it is denoted by $A^{\dagger}$. $X$ is called the weighted Moore-Penrose inverse of $A$ (with respect to $M$, $N$) if it satisfies (1), (2), (4), and (6), and it is denoted by $A^{\dagger}_{MN}$ (see, e.g., [1, 2]).

Let $A\in\mathbb{C}^{n\times n}$. Then a matrix $X$ satisfying
$$A^{k}XA=A^{k},\quad(7)\qquad XAX=X,\quad(8)\qquad AX=XA,\quad(9)$$
where $k$ is some positive integer, is called the Drazin inverse of $A$ and is denoted by $A_{d}$. The smallest positive integer $k$ such that $X$ and $A$ satisfy (7), (8), and (9) is called the Drazin index and is denoted by $k=\mathrm{Ind}(A)$. It is clear that $\mathrm{Ind}(A)$ is the smallest positive integer $k$ satisfying $\mathrm{rank}(A^{k})=\mathrm{rank}(A^{k+1})$ (see [3]). If $k=1$, then $X$ is called the group inverse of $A$ and is denoted by $A_{g}$. As is well known, $A_{g}$ exists if and only if $\mathrm{rank}\,A=\mathrm{rank}\,A^{2}$. Generalized inverses, in particular the Moore-Penrose, group, and Drazin inverses, have also been studied in the context of semigroups, rings, Banach algebras, and $C^{*}$-algebras (see [4–8]).

In addition, if a matrix $X$ satisfies $AXA=A$ and $AX=XA$, then it is called a $\{1,5\}$-inverse of $A$ and is denoted by $A^{(1,5)}$.

Let $A\in\mathbb{C}^{m\times n}$ and $W\in\mathbb{C}^{n\times m}$. Then the matrix $X\in\mathbb{C}^{m\times n}$ satisfying
$$(AW)^{k+1}XW=(AW)^{k},\qquad XWAWX=X,\qquad AWX=XWA,$$
where $k$ is some nonnegative integer, is called the $W$-weighted Drazin inverse of $A$ and is denoted by $X=A_{d,W}$ (see [9]). When $m=n$ and $W=I_{n}$, $X$ reduces to the Drazin inverse of $A$.

Lemma 1.1 (see [1, Theorem 2.14]). Let $A\in\mathbb{C}^{m\times n}_{r}$, and let $T$ and $S$ be subspaces of $\mathbb{C}^{n}$ and $\mathbb{C}^{m}$, respectively, with $\dim T=\dim S^{\perp}=t\le r$. Then $A$ has a $\{2\}$-inverse $X$ such that $\mathcal{R}(X)=T$ and $\mathcal{N}(X)=S$ if and only if
$$AT\oplus S=\mathbb{C}^{m}, \tag{1.1}$$
in which case $X$ is unique and is denoted by $A^{(2)}_{T,S}$.

If $A^{(2)}_{T,S}$ exists and there exists a matrix $G$ such that $\mathcal{R}(G)=T$ and $\mathcal{N}(G)=S$, then $GAA^{(2)}_{T,S}=G$ and $A^{(2)}_{T,S}AG=G$.

It is well known that several important generalized inverses, such as the Moore-Penrose inverse $A^{\dagger}$, the weighted Moore-Penrose inverse $A^{\dagger}_{M,N}$, the Drazin inverse $A_{d}$, and the group inverse $A_{g}$, are outer inverses $A^{(2)}_{T,S}$, that is, $\{2\}$-inverses of $A$ with prescribed range $T$ and null space $S$, for specific choices of $T$ and $S$ (see [2, 10] in the context of complex matrices and [11] in the context of semigroups).

Determinantal representations of the generalized inverse $A^{(2)}_{T,S}$ were studied in [12, 13]. We investigate such representations further by exploiting the limiting expression for $A^{(2)}_{T,S}$. The paper is organized as follows. In Section 2, we establish the equivalence between the existence of $A^{(2)}_{T,S}$ and the existence of the limits $\lim_{\lambda\to 0}G(AG+\lambda I)^{-1}$ or $\lim_{\lambda\to 0}(GA+\lambda I)^{-1}G$, and we deduce new determinantal representations of $A^{(2)}_{T,S}$, based on an analog of the classical adjoint matrix, by exploiting the limiting expression. In Section 3, using the analog of the classical adjoint matrix from Section 2, we present Cramer rules for the restricted matrix equation $AXB=D$, $\mathcal{R}(X)\subseteq T$, $\mathcal{N}(X)\supseteq\tilde{S}$. In Section 4, we give an example of solving a restricted matrix equation by our expressions. We introduce the following notation.

For $1\le k\le n$, the symbol $\mathcal{Q}_{k,n}$ denotes the set $\{\alpha:\alpha=(\alpha_{1},\ldots,\alpha_{k}),\ 1\le\alpha_{1}<\cdots<\alpha_{k}\le n,\ \text{where }\alpha_{i},\ i=1,\ldots,k,\ \text{are integers}\}$, and $\mathcal{Q}_{k,n}\{j\}=\{\beta:\beta\in\mathcal{Q}_{k,n},\ j\in\beta\}$, where $j\in\{1,\ldots,n\}$.

Let $A=(a_{ij})\in\mathbb{C}^{m\times n}$. The symbols $a_{.j}$ and $a_{i.}$ stand for the $j$th column and the $i$th row of $A$, respectively. In the same way, $a^{*}_{.j}$ and $a^{*}_{i.}$ denote the $j$th column and the $i$th row of the Hermitian adjoint matrix $A^{*}$. The symbol $A_{.j}(b)$ (or $A_{j.}(b)$) denotes the matrix obtained from $A$ by replacing its $j$th column (or row) with the vector $b$ (or $b^{T}$). We write the range of $A$ as $\mathcal{R}(A)=\{Ax:x\in\mathbb{C}^{n}\}$ and the null space of $A$ as $\mathcal{N}(A)=\{x\in\mathbb{C}^{n}:Ax=0\}$. Let $B\in\mathbb{C}^{p\times q}$. We define the range of the pair $A$ and $B$ as $\mathcal{R}(A,B)=\{AWB:W\in\mathbb{C}^{n\times p}\}$.

Let $\alpha\in\mathcal{Q}_{k,m}$ and $\beta\in\mathcal{Q}_{k,n}$, where $1\le k\le\min\{m,n\}$. Then $|A^{\alpha}_{\beta}|$ denotes the minor of $A$ determined by the rows indexed by $\alpha$ and the columns indexed by $\beta$. When $m=n$, the cofactor of $a_{ij}$ in $A$ is denoted by $\partial|A|/\partial a_{ij}$.

2. Analogs of the Adjugate Matrix for $A^{(2)}_{T,S}$

We start with the following theorem, which reveals the intrinsic relation between the existence of $A^{(2)}_{T,S}$ and that of $\lim_{\lambda\to 0}G(AG+\lambda I)^{-1}$ or $\lim_{\lambda\to 0}(GA+\lambda I)^{-1}G$. Here $\lambda\to 0$ means that $\lambda$ tends to $0$ through any neighborhood of $0$ in $\mathbb{C}$ that excludes the nonzero eigenvalues of the square matrix involved. In [14], Wei pointed out that the existence of $A^{(2)}_{T,S}$ implies the existence of $\lim_{\lambda\to 0}G(AG+\lambda I)^{-1}$ or $\lim_{\lambda\to 0}(GA+\lambda I)^{-1}G$. The following result shows that the converse is true under some conditions.

Theorem 2.1. Let $A\in\mathbb{C}^{m\times n}_{r}$, and let $T$ and $S$ be subspaces of $\mathbb{C}^{n}$ and $\mathbb{C}^{m}$, respectively, with $\dim T=\dim S^{\perp}=t\le r$. Let $G\in\mathbb{C}^{n\times m}_{t}$ with $\mathcal{R}(G)=T$ and $\mathcal{N}(G)=S$. Then the following statements are equivalent:
(i) $A^{(2)}_{T,S}$ exists;
(ii) $\lim_{\lambda\to 0}G(AG+\lambda I)^{-1}$ exists and $\mathrm{rank}(AG)=\mathrm{rank}(G)$;
(iii) $\lim_{\lambda\to 0}(GA+\lambda I)^{-1}G$ exists and $\mathrm{rank}(GA)=\mathrm{rank}(G)$.
In this case,
$$A^{(2)}_{T,S}=\lim_{\lambda\to 0}(GA+\lambda I)^{-1}G=\lim_{\lambda\to 0}G(AG+\lambda I)^{-1}. \tag{2.1}$$

Proof. (i)⇒(ii): Assume that $A^{(2)}_{T,S}$ exists. By [14, Theorem 2.4], $\lim_{\lambda\to 0}G(AG+\lambda I)^{-1}$ exists. Since $G=A^{(2)}_{T,S}AG$, we have $\mathrm{rank}(AG)=\mathrm{rank}(G)$.
Conversely, assume that $\lim_{\lambda\to 0}G(AG+\lambda I)^{-1}$ exists and $\mathrm{rank}(AG)=\mathrm{rank}(G)$. Then
$$\lim_{\lambda\to 0}(AG+\lambda I)^{-1}AG=\lim_{\lambda\to 0}AG(AG+\lambda I)^{-1} \tag{2.2}$$
exists. By [15, Theorem], $(AG)_{g}$ exists. So $(AG)^{(1,5)}$ exists, and then, by [13, Theorem 2], $A^{(2)}_{T,S}$ exists.
Similarly, we can show that (i)⇔(iii). Equation (2.1) comes from [14, equation (2.16)].
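As a quick numerical check of Theorem 2.1 (an illustration of ours, not part of the paper), the two limits in (2.1) can be approximated by taking a small $\lambda$. In the Moore-Penrose case $G=A^{*}$, both limits should agree with $A^{\dagger}$; the matrix $A$ below is an arbitrary choice:

```python
import numpy as np

# Moore-Penrose case of Theorem 2.1: with G = A*, the outer inverse
# A^(2)_{T,S} is A^+, and both limits in (2.1) should agree with pinv(A).
# The matrix A is an arbitrary illustrative choice.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
G = A.conj().T          # R(G) = R(A*), N(G) = N(A*)

lam = 1e-8              # small lambda stands in for the limit lambda -> 0
X_left = np.linalg.solve(G @ A + lam * np.eye(3), G)      # (GA + lam I)^{-1} G
X_right = G @ np.linalg.inv(A @ G + lam * np.eye(2))      # G (AG + lam I)^{-1}

print(np.allclose(X_left, X_right, atol=1e-5))            # True
print(np.allclose(X_left, np.linalg.pinv(A), atol=1e-5))  # True
```

The agreement of the two limits illustrates the equivalence of (ii) and (iii) in this case.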

Lemma 2.2. Let $A=(a_{ij})\in\mathbb{C}^{m\times n}$ and $G=(g_{ij})\in\mathbb{C}^{n\times m}_{t}$. Then $\mathrm{rank}\,(GA)_{.i}(g_{.j})\le t$, where $1\le i\le n$, $1\le j\le m$, and $\mathrm{rank}\,(AG)_{i.}(g_{j.})\le t$, where $1\le i\le m$, $1\le j\le n$.

Proof. Let $P_{ik}(a)$ denote the $n\times n$ elementary matrix with $a$ in the $(i,k)$ entry, $1$ in all diagonal entries, and $0$ elsewhere. Right-multiplying $(GA)_{.i}(g_{.j})$ by the elementary matrices $P_{ik}(-a_{jk})$, $k\ne i$, leaves the $i$th column $g_{.j}$ unchanged and replaces each remaining $(l,s)$ entry, $s\ne i$, by $\sum_{k\ne j}g_{lk}a_{ks}$; the resulting matrix factors as
$$(GA)_{.i}(g_{.j})\prod_{k\ne i}P_{ik}(-a_{jk})=G\widetilde{A}, \tag{2.3}$$
where $\widetilde{A}$ is obtained from $A$ by replacing its $j$th row and its $i$th column with zeros and then setting the $(j,i)$ entry to $1$. It follows from the invertibility of the $P_{ik}(-a_{jk})$, $k\ne i$, that $\mathrm{rank}\,(GA)_{.i}(g_{.j})=\mathrm{rank}(G\widetilde{A})\le\mathrm{rank}(G)=t$.
Analogously, the inequality $\mathrm{rank}\,(AG)_{i.}(g_{j.})\le t$ can be proved. So the proof is complete.

Recall that if $f_{A}(\lambda)=\det(\lambda I+A)=\lambda^{n}+d_{1}\lambda^{n-1}+\cdots+d_{n-1}\lambda+d_{n}$ is the characteristic polynomial of an $n\times n$ matrix $A$ over $\mathbb{C}$, then $d_{i}$ is the sum of all $i\times i$ principal minors of $A$, where $i=1,\ldots,n$ (see, e.g., [16]).

Theorem 2.3. Let $A$, $T$, $S$, and $G$ be the same as in Theorem 2.1, and write $G=(g_{ij})$. Suppose that the generalized inverse $A^{(2)}_{T,S}$ of $A$ exists. Then $A^{(2)}_{T,S}$ can be represented as follows:
$$A^{(2)}_{T,S}=\left(\frac{x_{ij}}{d_{t}(GA)}\right)_{n\times m}, \tag{2.4}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\Big|\big((GA)_{.i}(g_{.j})\big)^{\beta}_{\beta}\Big|,\qquad d_{t}(GA)=\sum_{\beta\in\mathcal{Q}_{t,n}}\big|(GA)^{\beta}_{\beta}\big|, \tag{2.5}$$
or
$$A^{(2)}_{T,S}=\left(\frac{y_{ij}}{d_{t}(AG)}\right)_{n\times m}, \tag{2.6}$$
where
$$y_{ij}=\sum_{\alpha\in\mathcal{Q}_{t,m}\{j\}}\big|\big((AG)_{j.}(g_{i.})\big)^{\alpha}_{\alpha}\big|,\qquad d_{t}(AG)=\sum_{\alpha\in\mathcal{Q}_{t,m}}\big|(AG)^{\alpha}_{\alpha}\big|. \tag{2.7}$$

Proof. We will only show the representation (2.4)–(2.5), since the proof of (2.6)–(2.7) is similar. If $-\lambda$ is not an eigenvalue of $GA$, then the matrix $\lambda I+GA$ is invertible and
$$(\lambda I+GA)^{-1}=\frac{1}{\det(\lambda I+GA)}\begin{pmatrix}X_{11}&X_{21}&\cdots&X_{n1}\\X_{12}&X_{22}&\cdots&X_{n2}\\\vdots&\vdots&&\vdots\\X_{1n}&X_{2n}&\cdots&X_{nn}\end{pmatrix}, \tag{2.8}$$
where $X_{ij}$, $i,j=1,\ldots,n$, are the cofactors of $\lambda I+GA$. It is easy to see that
$$\sum_{l=1}^{n}X_{il}g_{lj}=\det\big((\lambda I+GA)_{.i}(g_{.j})\big). \tag{2.9}$$
So, by (2.1),
$$A^{(2)}_{T,S}=\lim_{\lambda\to 0}\left(\frac{\det\big((\lambda I+GA)_{.i}(g_{.j})\big)}{\det(\lambda I+GA)}\right)_{n\times m}. \tag{2.10}$$
The characteristic polynomial of $GA$ is
$$f_{GA}(\lambda)=\det(\lambda I+GA)=\lambda^{n}+d_{1}\lambda^{n-1}+d_{2}\lambda^{n-2}+\cdots+d_{n}, \tag{2.11}$$
where $d_{i}$ ($1\le i\le n$) is the sum of the $i\times i$ principal minors of $GA$. Since $\mathrm{rank}(GA)\le\mathrm{rank}(G)=t$, we have $d_{n}=d_{n-1}=\cdots=d_{t+1}=0$ and
$$\det(\lambda I+GA)=\lambda^{n}+d_{1}\lambda^{n-1}+d_{2}\lambda^{n-2}+\cdots+d_{t}\lambda^{n-t}. \tag{2.12}$$
Expanding $\det\big((\lambda I+GA)_{.i}(g_{.j})\big)$, we have
$$\det\big((\lambda I+GA)_{.i}(g_{.j})\big)=x^{(ij)}_{1}\lambda^{n-1}+x^{(ij)}_{2}\lambda^{n-2}+\cdots+x^{(ij)}_{n}, \tag{2.13}$$
where $x^{(ij)}_{k}=\sum_{\beta\in\mathcal{Q}_{k,n}\{i\}}|((GA)_{.i}(g_{.j}))^{\beta}_{\beta}|$, $1\le k\le n$, for $1\le i\le n$ and $1\le j\le m$.
By Lemma 2.2, $\mathrm{rank}\,(GA)_{.i}(g_{.j})\le t$, and so $|((GA)_{.i}(g_{.j}))^{\beta}_{\beta}|=0$ for $k>t$ and $\beta\in\mathcal{Q}_{k,n}\{i\}$, for all $i,j$. Therefore, $x^{(ij)}_{k}=0$ for $k>t$ and all $i,j$. Consequently,
$$\det\big((\lambda I+GA)_{.i}(g_{.j})\big)=x^{(ij)}_{1}\lambda^{n-1}+x^{(ij)}_{2}\lambda^{n-2}+\cdots+x^{(ij)}_{t}\lambda^{n-t}. \tag{2.14}$$
Substituting (2.12) and (2.14) into (2.10) yields
$$A^{(2)}_{T,S}=\lim_{\lambda\to 0}\left(\frac{x^{(ij)}_{1}\lambda^{n-1}+\cdots+x^{(ij)}_{t}\lambda^{n-t}}{\lambda^{n}+d_{1}\lambda^{n-1}+\cdots+d_{t}\lambda^{n-t}}\right)_{n\times m}=\left(\frac{x^{(ij)}_{t}}{d_{t}}\right)_{n\times m}. \tag{2.15}$$
Writing $x_{ij}$ for $x^{(ij)}_{t}$ in the above equation, we reach (2.5).
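The representation (2.4)–(2.5) translates directly into code: each entry of $A^{(2)}_{T,S}$ is a ratio of sums of $t\times t$ minors. The following is a minimal sketch of ours (function names are our own); the check uses $G=A^{T}$ for a real $A$, in which case the result coincides with the Moore-Penrose inverse:

```python
import numpy as np
from itertools import combinations

# Sketch of the determinantal representation (2.4)-(2.5): the (i, j)
# entry of A^(2)_{T,S} is a sum of t x t minors of GA with column i
# replaced by the j-th column of G, over the index sets containing i,
# divided by d_t(GA), the sum of all t x t principal minors of GA.

def minor(M, idx):
    """Principal minor of M on the index set idx."""
    return np.linalg.det(M[np.ix_(idx, idx)])

def outer_inverse(A, G, t):
    n, m = G.shape
    GA = G @ A
    d_t = sum(minor(GA, b) for b in combinations(range(n), t))
    X = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            GA_ij = GA.copy()
            GA_ij[:, i] = G[:, j]              # (GA)_{.i}(g_{.j})
            X[i, j] = sum(minor(GA_ij, b)      # minors with i in beta
                          for b in combinations(range(n), t)
                          if i in b) / d_t
    return X

# Check against the Moore-Penrose case (G = A^T for real A, t = rank A):
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(np.allclose(outer_inverse(A, A.T, 2), np.linalg.pinv(A)))  # True
```

This brute-force evaluation enumerates $\binom{n}{t}$ minors per entry, so it is practical only for small dimensions; its purpose is to make the formula concrete.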

Remark 2.4. The proofs of Lemma 2.2 and Theorem 2.3 are based on general techniques and methods developed previously in [17].

Remark 2.5. (i) Using (2.5), we can obtain (2.17) in [12, Theorem 2.3]. In fact, $u=d_{t}(GA)$ and, by the Binet-Cauchy formula,
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\Big|\big((GA)_{.i}(g_{.j})\big)^{\beta}_{\beta}\Big|=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\sum_{k}g_{kj}\frac{\partial\big|(GA)^{\beta}_{\beta}\big|}{\partial s_{ki}}=\sum_{\alpha\in\mathcal{Q}_{t,m},\,\beta\in\mathcal{Q}_{t,n}}\det G^{\beta}_{\alpha}\,\frac{\partial\big|A^{\alpha}_{\beta}\big|}{\partial a_{ji}}, \tag{2.16}$$
where $s_{kj}=(GA)_{kj}$. Note that $\partial|A^{\alpha}_{\beta}|/\partial a_{ji}=0$ if $j\notin\alpha$ or $i\notin\beta$. In addition, using the symbols of [13], we can rewrite (2.5) as [13, equation (13)] over $\mathbb{C}$.
(ii) This method is especially efficient when $GA$ or $AG$ is given (compare with [12, Theorem 2]).

Observing the particular case of Theorem 2.3 in which $G=(g_{ij})=N^{-1}A^{*}M$, where $M$ and $N$ are Hermitian positive definite matrices, we obtain the following corollary, in which $g_{.j}$ and $g_{i.}$ denote the $j$th column and the $i$th row of $G$.

Corollary 2.6. Let $A\in\mathbb{C}^{m\times n}_{r}$ and $G=N^{-1}A^{*}M$, where $M$ and $N$ are Hermitian positive definite matrices of orders $m$ and $n$, respectively. Then
$$A^{\dagger}_{MN}=\left(\frac{x_{ij}}{d_{r}(GA)}\right)_{n\times m}, \tag{2.17}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{r,n}\{i\}}\Big|\big((GA)_{.i}(g_{.j})\big)^{\beta}_{\beta}\Big|,\qquad d_{r}(GA)=\sum_{\beta\in\mathcal{Q}_{r,n}}\big|(GA)^{\beta}_{\beta}\big|, \tag{2.18}$$
or
$$A^{\dagger}_{MN}=\left(\frac{y_{ij}}{d_{r}(AG)}\right)_{n\times m}, \tag{2.19}$$
where
$$y_{ij}=\sum_{\alpha\in\mathcal{Q}_{r,m}\{j\}}\big|\big((AG)_{j.}(g_{i.})\big)^{\alpha}_{\alpha}\big|,\qquad d_{r}(AG)=\sum_{\alpha\in\mathcal{Q}_{r,m}}\big|(AG)^{\alpha}_{\alpha}\big|. \tag{2.20}$$
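Corollary 2.6's setting can be checked numerically against the classical factorization $A^{\dagger}_{MN}=N^{-1/2}(M^{1/2}AN^{-1/2})^{\dagger}M^{1/2}$. The sketch below (ours, not the paper's) computes $A^{\dagger}_{MN}$ via the limit (2.1) with $G=N^{-1}A^{*}M$ rather than via the minors, using illustrative diagonal weights:

```python
import numpy as np

# Weighted Moore-Penrose case: G = N^{-1} A* M (Corollary 2.6's setting),
# computed via the limit (2.1) and compared with the classical identity
# A+_{MN} = N^{-1/2} (M^{1/2} A N^{-1/2})^+ M^{1/2}.
# A and the diagonal positive definite weights M, N are illustrative.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
M = np.diag([1.0, 4.0])          # m x m weight
N = np.diag([1.0, 2.0, 3.0])     # n x n weight
G = np.linalg.inv(N) @ A.conj().T @ M

lam = 1e-8
X = np.linalg.solve(G @ A + lam * np.eye(3), G)   # limit (2.1)

Mh, Nh = np.sqrt(M), np.sqrt(N)  # matrix square roots (diagonal case)
X_ref = np.linalg.inv(Nh) @ np.linalg.pinv(Mh @ A @ np.linalg.inv(Nh)) @ Mh
print(np.allclose(X, X_ref, atol=1e-5))  # True
```

For non-diagonal weights one would need a genuine Hermitian square root; the diagonal choice keeps the sketch self-contained.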

If 𝑀 and 𝑁 are identity matrices, then we can obtain the following result.

Corollary 2.7 (see [17, Theorem 2.2]). The Moore-Penrose inverse $A^{\dagger}$ of $A=(a_{ij})\in\mathbb{C}^{m\times n}_{r}$ can be represented as follows:
$$A^{\dagger}=\left(\frac{x_{ij}}{d_{r}(A^{*}A)}\right)_{n\times m}, \tag{2.21}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{r,n}\{i\}}\Big|\big((A^{*}A)_{.i}(a^{*}_{.j})\big)^{\beta}_{\beta}\Big|,\qquad d_{r}(A^{*}A)=\sum_{\beta\in\mathcal{Q}_{r,n}}\big|(A^{*}A)^{\beta}_{\beta}\big|, \tag{2.22}$$
or
$$A^{\dagger}=\left(\frac{y_{ij}}{d_{r}(AA^{*})}\right)_{n\times m}, \tag{2.23}$$
where
$$y_{ij}=\sum_{\alpha\in\mathcal{Q}_{r,m}\{j\}}\big|\big((AA^{*})_{j.}(a^{*}_{i.})\big)^{\alpha}_{\alpha}\big|,\qquad d_{r}(AA^{*})=\sum_{\alpha\in\mathcal{Q}_{r,m}}\big|(AA^{*})^{\alpha}_{\alpha}\big|. \tag{2.24}$$

Note that $A_{d,W}=(WAW)^{(2)}_{\mathcal{R}((AW)^{k}A),\,\mathcal{N}((AW)^{k}A)}$. Therefore, taking $G=(AW)^{k}A$ in Theorem 2.3, we have the following corollary.

Corollary 2.8. Let $A\in\mathbb{C}^{m\times n}$, $W\in\mathbb{C}^{n\times m}$, and $k=\max\{\mathrm{Ind}(AW),\mathrm{Ind}(WA)\}$. If $\mathrm{rank}(AW)^{k}=t$, $\mathrm{rank}(WA)^{k}=r$, and $(AW)^{k}A=(c_{ij})\in\mathbb{C}^{m\times n}$, then
$$A_{d,W}=\left(\frac{x_{ij}}{d_{t}\big((AW)^{k+2}\big)}\right)_{m\times n}, \tag{2.25}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{t,m}\{i\}}\Big|\Big(\big((AW)^{k+2}\big)_{.i}(c_{.j})\Big)^{\beta}_{\beta}\Big|,\qquad d_{t}\big((AW)^{k+2}\big)=\sum_{\beta\in\mathcal{Q}_{t,m}}\Big|\big((AW)^{k+2}\big)^{\beta}_{\beta}\Big|, \tag{2.26}$$
or
$$A_{d,W}=\left(\frac{y_{ij}}{d_{r}\big((WA)^{k+2}\big)}\right)_{m\times n}, \tag{2.27}$$
where
$$y_{ij}=\sum_{\alpha\in\mathcal{Q}_{r,n}\{j\}}\Big|\Big(\big((WA)^{k+2}\big)_{j.}(c_{i.})\Big)^{\alpha}_{\alpha}\Big|,\qquad d_{r}\big((WA)^{k+2}\big)=\sum_{\alpha\in\mathcal{Q}_{r,n}}\Big|\big((WA)^{k+2}\big)^{\alpha}_{\alpha}\Big|. \tag{2.28}$$

Taking $G=A^{k}$ with $k=\mathrm{Ind}(A)$ in Theorem 2.3, we have the following corollary.

Corollary 2.9 (see [17, Theorem 3.3]). Let $A\in\mathbb{C}^{n\times n}$ with $\mathrm{Ind}\,A=k$ and $\mathrm{rank}\,A^{k}=r$, and let $A^{k}=(a^{(k)}_{ij})\in\mathbb{C}^{n\times n}$. Then
$$A_{d}=\left(\frac{x_{ij}}{d_{r}(A^{k+1})}\right)_{n\times n}, \tag{2.29}$$
where
$$x_{ij}=\sum_{\beta\in\mathcal{Q}_{r,n}\{i\}}\Big|\big((A^{k+1})_{.i}(a^{(k)}_{.j})\big)^{\beta}_{\beta}\Big|,\qquad d_{r}(A^{k+1})=\sum_{\beta\in\mathcal{Q}_{r,n}}\big|(A^{k+1})^{\beta}_{\beta}\big|. \tag{2.30}$$
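With $G=A^{k}$ as in Corollary 2.9, Theorem 2.1 gives $A_{d}=\lim_{\lambda\to 0}A^{k}(A^{k+1}+\lambda I)^{-1}$, which is easy to test numerically. A short sketch of ours, on an illustrative matrix with $\mathrm{Ind}(A)=2$:

```python
import numpy as np

# Drazin inverse via Theorem 2.1 with G = A^k, k = Ind(A):
#   A_d = lim_{lam -> 0} A^k (A^{k+1} + lam I)^{-1}.
# The test matrix is an illustrative choice with Ind(A) = 2
# (rank A = 2, rank A^2 = rank A^3 = 1).
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
k = 2
Ak = np.linalg.matrix_power(A, k)
lam = 1e-8
AD = Ak @ np.linalg.inv(np.linalg.matrix_power(A, k + 1) + lam * np.eye(3))

# Defining equations of the Drazin inverse:
print(np.allclose(A @ AD, AD @ A, atol=1e-6))    # AX = XA      -> True
print(np.allclose(AD @ A @ AD, AD, atol=1e-6))   # XAX = X      -> True
print(np.allclose(Ak @ AD @ A, Ak, atol=1e-6))   # A^k X A = A^k -> True
```

For this matrix the limit evaluates to $A_{d}$ with first row $(1/2,\,0,\,1/4)$ and zeros elsewhere, and all three defining equations hold up to the $O(\lambda)$ truncation.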

Finally, we turn our attention to the two projectors $A^{(2)}_{T,S}A$ and $AA^{(2)}_{T,S}$. The limiting expressions for $A^{(2)}_{T,S}$ in (2.1) yield
$$A^{(2)}_{T,S}A=\lim_{\lambda\to 0}(GA+\lambda I)^{-1}GA,\qquad AA^{(2)}_{T,S}=\lim_{\lambda\to 0}AG(AG+\lambda I)^{-1}. \tag{2.31}$$

Corollary 2.10. Let $A$, $T$, $S$, and $G$ be the same as in Theorem 2.1. Write $GA=(s_{ij})$ and $AG=(h_{ij})$. Suppose that $A^{(2)}_{T,S}$ exists. Then $A^{(2)}_{T,S}A$ and $AA^{(2)}_{T,S}$ can be represented as follows:
$$A^{(2)}_{T,S}A=\left(\frac{x_{ij}}{d_{t}(GA)}\right)_{n\times n}, \tag{2.32}$$
where $x_{ij}=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\big|\big((GA)_{.i}(s_{.j})\big)^{\beta}_{\beta}\big|$ and $d_{t}(GA)=\sum_{\beta\in\mathcal{Q}_{t,n}}\big|(GA)^{\beta}_{\beta}\big|$, and
$$AA^{(2)}_{T,S}=\left(\frac{y_{ij}}{d_{t}(AG)}\right)_{m\times m}, \tag{2.33}$$
where $y_{ij}=\sum_{\alpha\in\mathcal{Q}_{t,m}\{j\}}\big|\big((AG)_{j.}(h_{i.})\big)^{\alpha}_{\alpha}\big|$ and $d_{t}(AG)=\sum_{\alpha\in\mathcal{Q}_{t,m}}\big|(AG)^{\alpha}_{\alpha}\big|$. \tag{2.34}

3. Cramer Rules for the Solution of the Restricted Matrix Equation

The restricted matrix equation problem is mainly to find the solution of a matrix equation, or a system of matrix equations, within a set of matrices satisfying some constraint conditions. Such problems play an important role in applications to structural design, system identification, principal component analysis, exploration, remote sensing, biology, electricity, molecular spectroscopy, automatic control theory, vibration theory, finite elements, circuit theory, linear optimal control, and so on. For example, the finite-element static model correction problem can be transformed into finding a constrained solution, and its best approximation, of the matrix equation $AX=B$, and the undamped finite-element dynamic model correction problem can be reduced to finding a constrained solution, and its best approximation, of the matrix equation $A^{T}XA=B$. These applications have motivated the gradual development of the theory of restricted matrix equations in recent years (see [18–27]).

In this section, we consider the restricted matrix equation
$$AXB=D,\qquad \mathcal{R}(X)\subseteq T,\qquad \mathcal{N}(X)\supseteq\tilde{S}, \tag{3.1}$$
where $A\in\mathbb{C}^{m\times n}_{r}$, $B\in\mathbb{C}^{p\times q}_{\tilde{r}}$, $D\in\mathbb{C}^{m\times q}$, $T\subseteq\mathbb{C}^{n}$, $S\subseteq\mathbb{C}^{m}$, $\tilde{T}\subseteq\mathbb{C}^{q}$, and $\tilde{S}\subseteq\mathbb{C}^{p}$ satisfy
$$\dim T=\dim S^{\perp}=t\le r,\qquad \dim\tilde{T}=\dim\tilde{S}^{\perp}=\tilde{t}\le\tilde{r}. \tag{3.2}$$
Assume that there exist matrices $G\in\mathbb{C}^{n\times m}$ and $\tilde{G}\in\mathbb{C}^{q\times p}$ satisfying
$$\mathcal{R}(G)=T,\quad \mathcal{N}(G)=S,\qquad \mathcal{R}(\tilde{G})=\tilde{T},\quad \mathcal{N}(\tilde{G})=\tilde{S}. \tag{3.3}$$
If $A^{(2)}_{T,S}$ and $B^{(2)}_{\tilde{T},\tilde{S}}$ exist and $D\in\mathcal{R}(AG,\tilde{G}B)$, then the restricted matrix equation (3.1) has the unique solution
$$X=A^{(2)}_{T,S}DB^{(2)}_{\tilde{T},\tilde{S}} \tag{3.4}$$
(see [2, Theorem 3.3.3] for the proof).

In particular, when $D$ is a vector $b$ and $B=\tilde{G}=I_{1}$, the restricted matrix equation (3.1) becomes the restricted linear equation
$$Ax=b,\qquad x\in T. \tag{3.5}$$
If $b\in A\mathcal{R}(G)$, then $x=A^{(2)}_{T,S}b$ is the unique solution of the restricted linear equation (3.5) (see also [10, Theorem 2.1]).

Theorem 3.1. Let $A$, $B$, $D=(d_{ij})$, $G=(g_{ij})$, $\tilde{G}=(\tilde{g}_{ij})$, $T$, $S$, $\tilde{T}$, and $\tilde{S}$ be as above. If $A^{(2)}_{T,S}$ and $B^{(2)}_{\tilde{T},\tilde{S}}$ exist and $D\in\mathcal{R}(AG,\tilde{G}B)$, then $X=A^{(2)}_{T,S}DB^{(2)}_{\tilde{T},\tilde{S}}$ is the unique solution of the restricted matrix equation (3.1), and it can be represented as
$$x_{ij}=\frac{\sum_{k=1}^{m}\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\big|\big((GA)_{.i}(g_{.k})\big)^{\beta}_{\beta}\big|\sum_{\alpha\in\mathcal{Q}_{\tilde{t},p}\{j\}}\big|\big((B\tilde{G})_{j.}(f_{k.})\big)^{\alpha}_{\alpha}\big|}{d_{t}(GA)\,d_{\tilde{t}}(B\tilde{G})} \tag{3.6}$$
or
$$x_{ij}=\frac{\sum_{k=1}^{q}\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\big|\big((GA)_{.i}(f_{.k})\big)^{\beta}_{\beta}\big|\sum_{\alpha\in\mathcal{Q}_{\tilde{t},p}\{j\}}\big|\big((B\tilde{G})_{j.}(\tilde{g}_{k.})\big)^{\alpha}_{\alpha}\big|}{d_{t}(GA)\,d_{\tilde{t}}(B\tilde{G})}, \tag{3.7}$$
where $f_{k.}=d_{k.}\tilde{G}$ and $f_{.k}=Gd_{.k}$, $i=1,\ldots,n$, and $j=1,\ldots,p$.

Proof. By the argument above, $X=A^{(2)}_{T,S}DB^{(2)}_{\tilde{T},\tilde{S}}$ is the unique solution of the restricted matrix equation (3.1). Setting $Y=DB^{(2)}_{\tilde{T},\tilde{S}}$ and using (2.7), we get
$$y_{kj}=\sum_{l=1}^{q}d_{kl}\big(B^{(2)}_{\tilde{T},\tilde{S}}\big)_{lj}=\sum_{l=1}^{q}d_{kl}\frac{\sum_{\alpha\in\mathcal{Q}_{\tilde{t},p}\{j\}}\big|\big((B\tilde{G})_{j.}(\tilde{g}_{l.})\big)^{\alpha}_{\alpha}\big|}{d_{\tilde{t}}(B\tilde{G})}=\frac{\sum_{\alpha\in\mathcal{Q}_{\tilde{t},p}\{j\}}\Big|\Big((B\tilde{G})_{j.}\big(\sum_{l=1}^{q}d_{kl}\tilde{g}_{l.}\big)\Big)^{\alpha}_{\alpha}\Big|}{d_{\tilde{t}}(B\tilde{G})}=\frac{\sum_{\alpha\in\mathcal{Q}_{\tilde{t},p}\{j\}}\big|\big((B\tilde{G})_{j.}(f_{k.})\big)^{\alpha}_{\alpha}\big|}{d_{\tilde{t}}(B\tilde{G})}, \tag{3.8}$$
where $f_{k.}=d_{k.}\tilde{G}$. Since $X=A^{(2)}_{T,S}Y$, by (2.5),
$$x_{ij}=\sum_{k=1}^{m}\big(A^{(2)}_{T,S}\big)_{ik}y_{kj}=\sum_{k=1}^{m}\frac{\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\big|\big((GA)_{.i}(g_{.k})\big)^{\beta}_{\beta}\big|}{d_{t}(GA)}\cdot\frac{\sum_{\alpha\in\mathcal{Q}_{\tilde{t},p}\{j\}}\big|\big((B\tilde{G})_{j.}(f_{k.})\big)^{\alpha}_{\alpha}\big|}{d_{\tilde{t}}(B\tilde{G})}. \tag{3.9}$$
Hence, we have (3.6).
We can obtain (3.7) in the same way.
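The solution formula $X=A^{(2)}_{T,S}DB^{(2)}_{\tilde{T},\tilde{S}}$ can be exercised numerically by computing both outer inverses from the limits (2.1) instead of the minors. The data below are illustrative choices of ours (with $G=A^{T}$ and $\tilde{G}=B^{T}$, the outer inverses are Moore-Penrose inverses), and $D$ is constructed as $AGW\tilde{G}B$ so that the consistency condition $D\in\mathcal{R}(AG,\tilde{G}B)$ holds:

```python
import numpy as np

# Solve the restricted equation AXB = D via X = A^(2) D B^(2)
# (formula (3.4)), with the outer inverses obtained from the
# limits (2.1).  All data are illustrative.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # m x n = 3 x 2
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # p x q = 3 x 2
G = A.T                    # then A^(2)_{T,S} is the Moore-Penrose inverse
Gt = B.T                   # then B^(2)_{T~,S~} is the Moore-Penrose inverse
W = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 0.0]])
D = A @ G @ W @ Gt @ B     # D in R(AG, G~B) by construction

lam = 1e-10
A2 = np.linalg.solve(G @ A + lam * np.eye(2), G)     # (GA + lam I)^{-1} G
B2 = np.linalg.solve(Gt @ B + lam * np.eye(2), Gt)   # (G~B + lam I)^{-1} G~
X = A2 @ D @ B2

print(np.allclose(A @ X @ B, D, atol=1e-5))          # X solves AXB = D: True
```

The auxiliary matrix $W$ plays the role of the arbitrary factor in the definition of $\mathcal{R}(AG,\tilde{G}B)$.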

In particular, when $D$ is a vector $b$ and $B=\tilde{G}=I_{1}$ in the above theorem, we obtain the following result from (3.7).

Theorem 3.2. Let $A$, $G$, $T$, and $S$ be as above. If $b\in A\mathcal{R}(G)$, then $x=A^{(2)}_{T,S}b$ is the unique solution of the restricted linear equation $Ax=b$, $x\in T$, and it can be represented as
$$x_{i}=\frac{\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}\big|\big((GA)_{.i}(f)\big)^{\beta}_{\beta}\big|}{d_{t}(GA)},\qquad i=1,\ldots,n, \tag{3.10}$$
where $f=Gb$.
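The Cramer rule (3.10) is straightforward to implement: with $f=Gb$, the component $x_{i}$ is a sum of $t\times t$ minors of $GA$ with column $i$ replaced by $f$, divided by $d_{t}(GA)$. A small sketch of ours with $G=A^{T}$, in which case $x$ is the minimum-norm solution $A^{\dagger}b$:

```python
import numpy as np
from itertools import combinations

# Cramer rule (3.10) for the restricted linear equation Ax = b, x in T:
# x_i = sum_{beta containing i} |((GA)_{.i}(f))^beta_beta| / d_t(GA),
# where f = G b.  Here G = A^T and t = rank(A) = 2, so x = pinv(A) b.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
G, t = A.T, 2
b = np.array([1.0, 2.0])         # b lies in R(A), so (3.5) is solvable

GA, f = G @ A, G @ b

def minor_sum(M, i=None):
    """Sum of t x t principal minors of M (restricted to sets containing i)."""
    return sum(np.linalg.det(M[np.ix_(c, c)])
               for c in combinations(range(3), t)
               if i is None or i in c)

x = np.empty(3)
for i in range(3):
    M = GA.copy()
    M[:, i] = f                  # (GA)_{.i}(f)
    x[i] = minor_sum(M, i) / minor_sum(GA)

print(np.round(x, 6))            # [0. 1. 1.]
print(np.allclose(A @ x, b))     # True
```

For this data, $d_{2}(GA)=3$ and the minor sums give $x=(0,1,1)$, which is exactly $A^{\dagger}b$.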

Remark 3.3. Using the symbols in [13], we can rewrite (3.10) as [13, equation (27)].

4. Example

Let
A = 12220211004200210000, B = 211001, D = 0110000000, G = 11200021000000000000, G̃ = 100020. (4.1)
Obviously, $\mathrm{rank}\,A=3$ and $\dim T=\dim\tilde{T}=2$, with
$$T=\mathcal{R}(G)\subseteq\mathbb{C}^{4},\quad S=\mathcal{N}(G)\subseteq\mathbb{C}^{5},\quad \tilde{T}=\mathcal{R}(\tilde{G})\subseteq\mathbb{C}^{2},\quad \tilde{S}=\mathcal{N}(\tilde{G})\subseteq\mathbb{C}^{3}. \tag{4.2}$$
It is easy to verify that $AT\oplus S=\mathbb{C}^{5}$ and $B\tilde{T}\oplus\tilde{S}=\mathbb{C}^{3}$. Thus, $A^{(2)}_{T,S}$ and $B^{(2)}_{\tilde{T},\tilde{S}}$ exist by Lemma 1.1.

Now consider the restricted matrix equation
$$AXB=D,\qquad \mathcal{R}(X)\subseteq T,\qquad \mathcal{N}(X)\supseteq\tilde{S}. \tag{4.3}$$
Clearly, AG = 1340004200000000000000000 and G̃B = 2120, (4.4) and it is easy to verify that $\mathcal{R}(D)\subseteq\mathcal{R}(AG)$ and $\mathcal{N}(D)\supseteq\mathcal{N}(\tilde{G}B)$ hold.

Note that $\mathcal{R}(D)\subseteq\mathcal{R}(AG)$ and $\mathcal{N}(D)\supseteq\mathcal{N}(\tilde{G}B)$ if and only if $D\in\mathcal{R}(AG,\tilde{G}B)$. So, by Theorem 3.1, the unique solution of (4.3) exists.

Computing $GA$, $B\tilde{G}$, and $f_{.k}=Gd_{.k}$, and expanding the sums of $2\times 2$ principal minors, we obtain $d_{2}(GA)=4$ and $d_{2}(B\tilde{G})=2$ as in (4.5). Setting $y_{ik}=\sum_{\beta\in\mathcal{Q}_{t,n}\{i\}}|((GA)_{.i}(f_{.k}))^{\beta}_{\beta}|$, we have Table 1.

Similarly, setting $z_{kj}=\sum_{\alpha\in\mathcal{Q}_{\tilde{t},p}\{j\}}|((B\tilde{G})_{j.}(\tilde{g}_{k.}))^{\alpha}_{\alpha}|$, we have Table 2.

So, by (3.7), we have X = (x_{ij}) = 1110020.000000. (4.6)

Acknowledgments

The authors would like to thank the referees for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China under Grant 11061005, the Ministry of Education Science and Technology Key Project under Grant 210164, and Grant HCIC201103 of the Guangxi Key Laboratory of Hybrid Computational and IC Design Analysis Open Fund.