Journal of Applied Mathematics

Volume 2013, Article ID 680975, 7 pages

http://dx.doi.org/10.1155/2013/680975

## On Nonnegative Moore-Penrose Inverses of Perturbed Matrices

Shani Jose and K. C. Sivakumar

Department of Mathematics, Indian Institute of Technology Madras, Chennai, Tamil Nadu 600036, India

Received 22 April 2013; Accepted 7 May 2013

Academic Editor: Yang Zhang

Copyright © 2013 Shani Jose and K. C. Sivakumar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Nonnegativity of the Moore-Penrose inverse of a perturbation of the form $A - XGY^T$ is considered when $A^{\dagger} \geq 0$. Using a generalized version of the Sherman-Morrison-Woodbury formula, conditions for $(A - XGY^T)^{\dagger}$ to be nonnegative are derived. Applications of the results are presented briefly. Iterative versions of the results are also studied.

#### 1. Introduction

We consider the problem of characterizing nonnegativity of the Moore-Penrose inverse for matrix perturbations of the type $A - XGY^T$, when the Moore-Penrose inverse of $A$ is nonnegative. Here, we say that a matrix $B = (b_{ij})$ is nonnegative, and denote it by $B \geq 0$, if $b_{ij} \geq 0$ for all $i, j$. This problem was motivated by the results in [1], where the authors consider an $M$-matrix $A$ and find sufficient conditions for the perturbed matrix $A - XY^T$ to be an $M$-matrix. Let us recall that a matrix $B = (b_{ij})$ is said to be a $Z$-matrix if $b_{ij} \leq 0$ for all $i \neq j$. An $M$-matrix is a nonsingular $Z$-matrix with nonnegative inverse. The authors in [1] use the well-known Sherman-Morrison-Woodbury (SMW) formula as one of the important tools to prove their main result. The SMW formula gives an expression for the inverse of $A - XY^T$ in terms of the inverse of $A$, when it exists. When $A$ is nonsingular, $A - XY^T$ is nonsingular if and only if $I - Y^TA^{-1}X$ is nonsingular. In that case,
$$(A - XY^T)^{-1} = A^{-1} + A^{-1}X(I - Y^TA^{-1}X)^{-1}Y^TA^{-1}.$$
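
As a quick numerical sanity check (not from the paper; the matrices below are randomly generated and purely illustrative), the nonsingular SMW identity can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally shifted, nonsingular
X = rng.standard_normal((n, k))
Y = rng.standard_normal((n, k))

# SMW: (A - X Y^T)^{-1} = A^{-1} + A^{-1} X (I - Y^T A^{-1} X)^{-1} Y^T A^{-1}
Ainv = np.linalg.inv(A)
cap = np.linalg.inv(np.eye(k) - Y.T @ Ainv @ X)  # the k-by-k "capacitance" matrix
smw = Ainv + Ainv @ X @ cap @ Y.T @ Ainv

direct = np.linalg.inv(A - X @ Y.T)
print(np.allclose(smw, direct))  # True
```

The practical point of the formula: when $k \ll n$, updating $A^{-1}$ after a rank-$k$ perturbation costs only a $k \times k$ inversion instead of a fresh $n \times n$ one.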

The main objective of the present work is to study certain structured perturbations $A - XGY^T$ of a matrix $A$ such that the Moore-Penrose inverse of the perturbation is nonnegative whenever the Moore-Penrose inverse of $A$ is nonnegative. Clearly, this class of matrices includes the class of matrices that have nonnegative inverses, especially $M$-matrices. In our approach, extensions of the SMW formula for singular matrices play a crucial role. Let us mention that this problem has been studied in the literature (see, for instance, [2] for matrices and [3] for operators over Hilbert spaces). We refer the reader to the references in the latter for other recent extensions.

In this paper, we first present alternative proofs of generalizations of the SMW formula for the cases of the Moore-Penrose inverse (Theorem 5) and the group inverse (Theorem 6) in Section 3. In Section 4, we characterize the nonnegativity of $(A - XGY^T)^{\dagger}$. This is done in Theorem 9, which is one of the main results of the present work. As a consequence, we present a result for $M$-matrices which seems new. We present a couple of applications of the main result in Theorems 13 and 15. In the concluding section, we study iterative versions of the results of Section 4. We prove two characterizations for the Moore-Penrose inverse of an iteratively perturbed matrix to be nonnegative in Theorems 18 and 21.

Before concluding this introductory section, let us give a motivation for the work undertaken here. It is a well-documented fact that $M$-matrices arise quite often in solving sparse systems of linear equations. An extensive theory of $M$-matrices has been developed relative to their role in numerical analysis, involving the notion of splitting in iterative methods and discretization of differential equations, in the mathematical modeling of an economy, in optimization, and in Markov chains [4, 5]. Specifically, the inspiration for the present study comes from the work of [1], where the authors consider a system of linear inequalities arising out of a problem in third-generation wireless communication systems. The matrix defining the inequalities there is an $M$-matrix. In the likelihood that the matrix of this problem is singular (due to truncation or round-off errors), the earlier method becomes inapplicable. Our endeavour is to extend the applicability of these results to more general matrices, for instance, matrices with nonnegative Moore-Penrose inverses. Finally, as mentioned earlier, since matrices with nonnegative generalized inverses include $M$-matrices in particular, our results are expected to enlarge the applicability of the methods presently available for $M$-matrices, even in a very general framework, including the specific problem mentioned above.

#### 2. Preliminaries

Let $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^{m \times n}$ denote the set of all real numbers, the $n$-dimensional real Euclidean space, and the set of all $m \times n$ matrices over $\mathbb{R}$, respectively. For $A \in \mathbb{R}^{m \times n}$, let $R(A)$, $N(A)$, $R(A)^{\perp}$, and $A^T$ denote the range space, the null space, the orthogonal complement of the range space, and the transpose of the matrix $A$, respectively. For $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$, we say that $x$ is nonnegative, that is, $x \geq 0$, if and only if $x_i \geq 0$ for all $i$. As mentioned earlier, for a matrix $A$ we use $A \geq 0$ to denote that all the entries of $A$ are nonnegative. Also, we write $A \geq B$ if $A - B \geq 0$.

Let $\rho(A)$ denote the spectral radius of the matrix $A \in \mathbb{R}^{n \times n}$. If $\rho(A) < 1$, then $I - A$ is invertible. The next result gives a necessary and sufficient condition for the nonnegativity of $(I - A)^{-1}$. This will be one of the results that will be used in proving the first main result.

Lemma 1 (see [5, Lemma 2.1, Chapter 6]). *Let $A \in \mathbb{R}^{n \times n}$ be nonnegative. Then, $\rho(A) < 1$ if and only if $(I - A)^{-1}$ exists and
$$(I - A)^{-1} = \sum_{k=0}^{\infty} A^k \geq 0.$$*
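
Lemma 1 is easy to check numerically. The sketch below uses a small hypothetical nonnegative matrix with spectral radius less than 1 and compares $(I - A)^{-1}$ against a truncated Neumann series:

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])                 # nonnegative; eigenvalues are 0.5 and 0.1
rho = max(abs(np.linalg.eigvals(A)))
print(rho < 1)  # True

# (I - A)^{-1} equals the Neumann series sum_{k>=0} A^k, entrywise nonnegative
inv = np.linalg.inv(np.eye(2) - A)
series = sum(np.linalg.matrix_power(A, k) for k in range(200))
print(np.allclose(inv, series), (inv >= 0).all())  # True True
```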

More generally, matrices having nonnegative inverses are characterized using a property called monotonicity. The notion of monotonicity was introduced by Collatz [6]. A real matrix $A$ is called *monotone* if $Ax \geq 0 \Rightarrow x \geq 0$. It was proved by Collatz [6] that $A$ is monotone if and only if $A^{-1}$ exists and $A^{-1} \geq 0$.

One of the frequently used tools in studying monotone matrices is the notion of a regular splitting. We only refer the reader to the book [5] for more details on the relationship between these concepts.

The notion of monotonicity has been extended in a variety of ways to singular matrices using generalized inverses. First, let us briefly review two important generalized inverses.

For $A \in \mathbb{R}^{m \times n}$, the Moore-Penrose inverse is the unique $X \in \mathbb{R}^{n \times m}$ satisfying the Penrose equations: $AXA = A$, $XAX = X$, $(AX)^T = AX$, and $(XA)^T = XA$. The unique Moore-Penrose inverse of $A$ is denoted by $A^{\dagger}$, and it coincides with $A^{-1}$ when $A$ is invertible.
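
The four Penrose equations are straightforward to verify computationally. The following illustrative sketch checks them for a random rank-deficient matrix, using NumPy's built-in pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # 4x5, rank <= 3
X = np.linalg.pinv(A)

# The four Penrose equations characterizing X = A^+
print(np.allclose(A @ X @ A, A))      # A X A = A
print(np.allclose(X @ A @ X, X))      # X A X = X
print(np.allclose((A @ X).T, A @ X))  # (A X)^T = A X
print(np.allclose((X @ A).T, X @ A))  # (X A)^T = X A
```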

The following theorem by Desoer and Whalen, which is used in the sequel, gives an equivalent definition for the Moore-Penrose inverse. Let us mention that this result was proved for operators between Hilbert spaces.

Theorem 2 (see [7]). *Let $A \in \mathbb{R}^{m \times n}$. Then $A^{\dagger}$ is the unique matrix $X \in \mathbb{R}^{n \times m}$ satisfying *(i)* $XAx = x$ for all $x \in R(A^T)$, *(ii)* $Xy = 0$ for all $y \in N(A^T)$.*

Now, for $A \in \mathbb{R}^{n \times n}$, any $X \in \mathbb{R}^{n \times n}$ satisfying the equations $AXA = A$, $XAX = X$, and $AX = XA$ is called the group inverse of $A$, denoted by $A^{\#}$. The group inverse does not exist for every matrix. But whenever it exists, it is unique. A necessary and sufficient condition for the existence of the group inverse of $A$ is that the index of $A$ is 1, where the index of a matrix is the smallest positive integer $k$ such that $\operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k)$.
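
For intuition, when $A = PQ$ is a full-rank factorization and $QP$ is nonsingular (equivalently, $A$ has index 1), the group inverse is given by the standard formula $A^{\#} = P(QP)^{-2}Q$. A minimal hypothetical example:

```python
import numpy as np

# Full-rank factorization A = P Q of a rank-one matrix
P = np.array([[1.0], [2.0]])   # 2x1, full column rank
Q = np.array([[1.0, 1.0]])     # 1x2, full row rank
A = P @ Q                      # index 1, since Q P = [[3]] is nonsingular

QP = Q @ P
X = P @ np.linalg.inv(QP @ QP) @ Q   # A# = P (QP)^{-2} Q

# The three defining equations of the group inverse
print(np.allclose(A @ X @ A, A), np.allclose(X @ A @ X, X), np.allclose(A @ X, X @ A))
```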

Some of the well-known properties of the Moore-Penrose inverse and the group inverse are given as follows: $R(A^{\dagger}) = R(A^T)$, $N(A^{\dagger}) = N(A^T)$, $A^{\dagger}A = P_{R(A^T)}$, and $AA^{\dagger} = P_{R(A)}$. In particular, $A^{\dagger}Ax = x$ if and only if $x \in R(A^T)$. Also, $R(A^{\#}) = R(A)$, $N(A^{\#}) = N(A)$, and $AA^{\#} = A^{\#}A = P_{R(A), N(A)}$. Here, for complementary subspaces $L$ and $M$ of $\mathbb{R}^n$, $P_{L,M}$ denotes the projection of $\mathbb{R}^n$ onto $L$ along $M$; $P_L$ denotes $P_{L,M}$ if $M = L^{\perp}$. For details, we refer the reader to the book [8].

In matrix analysis, a decomposition (splitting) of a matrix is considered in order to study the convergence of iterative schemes that are used in the solution of linear systems of algebraic equations. As mentioned earlier, regular splittings are useful in characterizing matrices with nonnegative inverses, whereas proper splittings are used for studying singular systems of linear equations. Let us next recall this notion. For a matrix $A \in \mathbb{R}^{m \times n}$, a decomposition $A = U - V$ is called a proper splitting [9] if $R(A) = R(U)$ and $N(A) = N(U)$. It is rather well known that a proper splitting exists for every matrix and that it can be obtained using a full-rank factorization of the matrix. For details, we refer to [10]. Certain properties of a proper splitting are collected in the next result.
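
A proper splitting always exists; the simplest (if artificial) example is $U = 2A$, $V = A$, for which $R(U) = R(A)$ and $N(U) = N(A)$ hold trivially. The illustrative sketch below checks the two defining equalities via matrix ranks:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # singular, rank 1
U, V = 2 * A, A              # a (trivially) proper splitting A = U - V

# rank(U) = rank(A) together with rank([A U]) = rank(A) gives R(U) = R(A);
# rank of the row-stacked matrix staying at rank(A) gives N(U) = N(A).
rankA = np.linalg.matrix_rank(A)
same_rank = np.linalg.matrix_rank(U) == rankA
same_range = np.linalg.matrix_rank(np.hstack([A, U])) == rankA
same_nulls = np.linalg.matrix_rank(np.vstack([A, U])) == rankA
print(same_rank and same_range and same_nulls)  # True
```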

Theorem 3 (see [9, Theorem 1]). *Let $A = U - V$ be a proper splitting of $A \in \mathbb{R}^{m \times n}$. Then, *(a)* $A = U(I - U^{\dagger}V)$, *(b)* $I - U^{\dagger}V$ is nonsingular, and *(c)* $A^{\dagger} = (I - U^{\dagger}V)^{-1}U^{\dagger}$.*

The following result by Berman and Plemmons [9] gives a characterization for $A^{\dagger}$ to be nonnegative when $A$ has a proper splitting. This result will be used in proving our first main result.

Theorem 4 (see [9, Corollary 4]). *Let $A = U - V$ be a proper splitting of $A \in \mathbb{R}^{m \times n}$, where $U^{\dagger} \geq 0$ and $U^{\dagger}V \geq 0$. Then $A^{\dagger} \geq 0$ if and only if $\rho(U^{\dagger}V) < 1$.*
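
Theorem 4 can be illustrated numerically. In this hypothetical sketch, $A$ is singular with a nonnegative Moore-Penrose inverse, and the trivial proper splitting $U = 2A$, $V = A$ satisfies the hypotheses:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])    # singular; its pseudoinverse is A/4 >= 0
U, V = 2 * A, A               # proper splitting A = U - V

Upinv = np.linalg.pinv(U)     # equals A/8, nonnegative
H = Upinv @ V                 # equals A/4, nonnegative
print((Upinv >= -1e-12).all(), (H >= -1e-12).all())   # hypotheses hold

rho = max(abs(np.linalg.eigvals(H)))                  # spectral radius 0.5
Apinv_nonneg = (np.linalg.pinv(A) >= -1e-12).all()
print(rho < 1, Apinv_nonneg)  # both True, consistent with the theorem
```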

#### 3. Extensions of the SMW Formula for Generalized Inverses

The primary objects of consideration in this paper are generalized inverses of perturbations of certain types of a matrix $A$. Naturally, extensions of the SMW formula for generalized inverses are relevant in the proofs. In what follows, we present two generalizations of the SMW formula for matrices. We would like to emphasize that our proofs also carry over to infinite dimensional spaces (the proof of the first result carries over verbatim, and the proof of the second holds with slight modifications, applicable to range spaces instead of ranks of the operators concerned). However, we confine our attention to the case of matrices. Let us also add that these results have been proved in [3] for operators over infinite dimensional spaces. We have chosen to include them here since our proofs are different from the ones in [3] and since our intention is to provide a self-contained treatment.

Theorem 5 (see [3, Theorem 2.1]). *Let $A \in \mathbb{R}^{m \times n}$, $X \in \mathbb{R}^{m \times k}$, and $Y \in \mathbb{R}^{n \times k}$ be such that
$$R(X) \subseteq R(A), \qquad R(Y) \subseteq R(A^T).$$
Let $G \in \mathbb{R}^{k \times k}$, and $\Omega = A + XGY^T$. If
$$R(X^T) \subseteq R(G), \qquad R(Y^T) \subseteq R(G^T),$$
then
$$\Omega^{\dagger} = A^{\dagger} - A^{\dagger}X(G^{\dagger} + Y^TA^{\dagger}X)^{\dagger}Y^TA^{\dagger}.$$*

*Proof. *Set , where . From the conditions (3) and (4), it follows that , , , , , and .

Now,
Thus, . Since , it follows that , .

Let . Then, so that . Substituting and simplifying it, we get . Also, and so . Thus, for . Hence, by Theorem 2, .

The result for the group inverse follows.

Theorem 6. *Let be such that exists. Let , and be nonsingular. Assume that , and is nonsingular. Suppose that . Then, exists and the following formula holds:
**
Conversely, if exists, then the formula above holds, and we have . *

*Proof. *Since exists, and are complementary subspaces of .

Suppose that . As , it follows that . Thus . By the rank-nullity theorem, the nullity of both and are the same. Again, since , it follows that . Thus, and are complementary subspaces. This guarantees the existence of the group inverse of .

Conversely, suppose that exists. It can be verified by direct computation that is the group inverse of . Also, we have , so that , and hence the .

We conclude this section with a fairly old result [2] as a consequence of Theorem 5.

Theorem 7 (see [2, Theorem 15]). *Let $A \in \mathbb{R}^{m \times n}$, $X \in \mathbb{R}^{m \times k}$, and $Y \in \mathbb{R}^{n \times k}$. Let $G$ be a $k \times k$ nonsingular matrix. Assume that $R(X) \subseteq R(A)$, $R(Y) \subseteq R(A^T)$, and $G^{-1} + Y^TA^{\dagger}X$ is nonsingular. Let $\Omega = A + XGY^T$. Then
$$\Omega^{\dagger} = A^{\dagger} - A^{\dagger}X(G^{-1} + Y^TA^{\dagger}X)^{-1}Y^TA^{\dagger}.$$*

#### 4. Nonnegativity of $(A - XGY^T)^{\dagger}$

In this section, we consider perturbations of the form $A - XGY^T$ and derive characterizations for $(A - XGY^T)^{\dagger}$ to be nonnegative when $A^{\dagger} \geq 0$, $X, Y \geq 0$, and $G \geq 0$. In order to motivate the first main result of this paper, let us recall the following well-known characterization of $M$-matrices [5].

Theorem 8. *Let $A$ be a $Z$-matrix with the representation $A = sI - B$, where $B \geq 0$ and $s > 0$. Then the following statements are equivalent: *(a)* $A^{-1}$ exists and $A^{-1} \geq 0$. *(b)* There exists $x > 0$ such that $Ax > 0$. *(c)* $\rho(B) < s$.*
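
The equivalent conditions of this $M$-matrix characterization can be sanity-checked on a small hypothetical example:

```python
import numpy as np

# A = sI - B with B >= 0 and s greater than rho(B) = sqrt(2)
B = np.array([[0.0, 1.0],
              [2.0, 0.0]])
s = 2.0
A = s * np.eye(2) - B

print(max(abs(np.linalg.eigvals(B))) < s)  # (c): rho(B) < s
print((np.linalg.inv(A) >= 0).all())       # (a): A^{-1} exists and is >= 0
x = np.array([1.0, 1.2])
print((A @ x > 0).all())                   # (b): Ax > 0 for this positive x
```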

Let us prove the first result of this article. This extends Theorem 8 to singular matrices. We will be interested in extensions of conditions (a) and (c) only.

Theorem 9. *Let $A = U - V$ be a proper splitting of $A \in \mathbb{R}^{m \times n}$ with $U^{\dagger} \geq 0$, $U^{\dagger}V \geq 0$, and $\rho(U^{\dagger}V) < 1$. Let $G \in \mathbb{R}^{k \times k}$ be nonsingular and nonnegative, and let $X \in \mathbb{R}^{m \times k}$ and $Y \in \mathbb{R}^{n \times k}$ be nonnegative such that $R(X) \subseteq R(A)$, $R(Y) \subseteq R(A^T)$, and $G^{-1} - Y^TA^{\dagger}X$ is nonsingular. Let $\Omega = A - XGY^T$. Then, the following are equivalent: *(a)* $\Omega^{\dagger} \geq 0$. *(b)* $(G^{-1} - Y^TA^{\dagger}X)^{-1} \geq 0$. *(c)* $\rho(W^{\dagger}Z) < 1$, where $\Omega = W - Z$ is a proper splitting with $W^{\dagger} \geq 0$ and $W^{\dagger}Z \geq 0$.*

*Proof. *First, we observe that since , , and is nonsingular, by Theorem 7, we have
We thus have and . Therefore,
Note that the first statement also implies that , by Theorem 4.

: By taking and , we get as a proper splitting for such that and (since ). Since , by Theorem 4, we have . This implies that . We also have . Thus, by Lemma 1, exists and is nonnegative. But, we have . Now, . This implies that since . This proves .

: We have . Also and . So, is a proper splitting. Also and . Since , it follows from (9) that . now follows from Theorem 4.

: Since , we have . Also we have , . Thus, , since . Now, by Theorem 4, we are done if the splitting is a proper splitting. Since is a proper splitting, we have and . Now, from the conditions in (10), we get that and . Hence is a proper splitting, and this completes the proof.

The following result is a special case of Theorem 9.

Theorem 10. *Let $A = U - V$ be a proper splitting of $A \in \mathbb{R}^{m \times n}$ with $U^{\dagger} \geq 0$, $U^{\dagger}V \geq 0$, and $\rho(U^{\dagger}V) < 1$. Let $X \in \mathbb{R}^{m \times k}$ and $Y \in \mathbb{R}^{n \times k}$ be nonnegative such that $R(X) \subseteq R(A)$, $R(Y) \subseteq R(A^T)$, and $I - Y^TA^{\dagger}X$ is nonsingular. Let $\Omega = A - XY^T$. Then the following are equivalent: *(a)* $\Omega^{\dagger} \geq 0$. *(b)* $(I - Y^TA^{\dagger}X)^{-1} \geq 0$. *(c)* $\rho(W^{\dagger}Z) < 1$, where $\Omega = W - Z$ is a proper splitting with $W^{\dagger} \geq 0$ and $W^{\dagger}Z \geq 0$.*

The following consequence of Theorem 10 appears to be new. It gives two characterizations for a perturbation of an $M$-matrix to again be an $M$-matrix.

Corollary 11. *Let where, and (i.e., is an -matrix). Let and be nonnegative such that is nonsingular. Let . Then the following are equivalent: *(a)*. *(b)*. *(c)*. *

*Proof. *From the proof of Theorem 10, since is nonsingular, it follows that and . This shows that is invertible. The rest of the proof is omitted, as it is an easy consequence of the previous result.

In the rest of this section, we discuss two applications of Theorem 10. First, we characterize the least element in a polyhedral set defined by a perturbed matrix. Next, we consider the following question. Suppose that the “endpoints” of an interval matrix satisfy a certain positivity property; then all matrices in a particular subset of the interval satisfy that positivity condition as well. If we are now given a specific structured perturbation of these endpoints, what conditions guarantee that the positivity property remains valid for the corresponding subset?

The first result is motivated by Theorem 12 below. Let us recall that, with respect to the usual order, an element $x^{*} \in S$ is called a *least element* of $S$ if it satisfies $x^{*} \leq x$ for all $x \in S$. Note that a nonempty set may not have a least element, but if a least element exists, then it is unique. In this connection, the following result is known.

Theorem 12 (see [11, Theorem 3.2]). *For and , let
**
Then, a vector is the least element of if and only if with . *

Now, we obtain the nonnegative least element of a polyhedral set defined by a perturbed matrix. This is an immediate application of Theorem 10.

Theorem 13. *Let be such that . Let , and let be nonnegative, such that , , and is nonsingular. Suppose that . For , , let
**
where . Then, is the least element of . *

*Proof. *From the assumptions, using Theorem 10, it follows that . The conclusion now follows from Theorem 12.

To state and prove the result for interval matrices, let us first recall the notion of an interval matrix. For $A, B \in \mathbb{R}^{m \times n}$, an interval (matrix) $J = [A, B]$ is defined as $J = \{C \in \mathbb{R}^{m \times n} : A \leq C \leq B\}$. The interval $J = [A, B]$ is said to be range-kernel regular if $R(A) = R(B)$ and $N(A) = N(B)$. The following result [12] provides necessary and sufficient conditions for $C^{\dagger} \geq 0$ for $C \in K$, where $K = \{C \in J : R(C) = R(A), N(C) = N(A)\}$.

Theorem 14. *Let $J = [A, B]$ be range-kernel regular. Then, the following are equivalent: *(a)* $C^{\dagger} \geq 0$ whenever $C \in K$, *(b)* $A^{\dagger} \geq 0$ and $B^{\dagger} \geq 0$.* *In such a case, we have $C^{\dagger} \in [B^{\dagger}, A^{\dagger}]$.*

Now, we present a result for the perturbation.

Theorem 15. *Let be range-kernel regular with and . Let , , , and be nonnegative matrices such that , , , and . Suppose that and are nonsingular with nonnegative inverses. Suppose further that . Let , where and . Finally, let . Then, *(a)* and . *(b)* whenever .**In that case, .*

*Proof. *It follows from Theorem 7 that
Also, we have , , , and . Hence, , , , and . This implies that the interval is range-kernel regular. Now, since and satisfy the conditions of Theorem 10, we have and proving (a). Hence, by Theorem 14, whenever . Again, by Theorem 14, we have .

#### 5. Iterations That Preserve Nonnegativity of the Moore-Penrose Inverse

In this section, we present results that typically provide conditions for iteratively defined matrices to have nonnegative Moore-Penrose inverses given that the matrices that we start with have this property. We start with the following result about the rank-one perturbation case, which is a direct consequence of Theorem 10.

Theorem 16. *Let $A \in \mathbb{R}^{m \times n}$ be such that $A^{\dagger} \geq 0$. Let $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$ be nonnegative vectors such that $x \in R(A)$, $y \in R(A^T)$, and $y^TA^{\dagger}x \neq 1$. Then $(A - xy^T)^{\dagger} \geq 0$ if and only if $y^TA^{\dagger}x < 1$.*
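
The rank-one case can be illustrated with a small numerical sketch (the matrix and vectors below are hypothetical, chosen so that the range and nonnegativity hypotheses hold):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0]])          # its pseudoinverse A/4 is nonnegative
Apinv = np.linalg.pinv(A)

x = np.array([0.5, 0.5])            # nonnegative, x in R(A)
y = np.array([1.0, 1.0])            # nonnegative, y in R(A^T)
t = y @ Apinv @ x                   # equals 0.5 here, strictly less than 1
Omega = A - np.outer(x, y)          # equals A/2

print(t < 1, (np.linalg.pinv(Omega) >= -1e-12).all())  # True True
```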

*Example 17. *Let us consider . It can be verified that can be written in the form where and is the circulant matrix generated by the row . We have . Also, the decomposition , where , and is a proper splitting of with , , and .

Let . Then, and let are nonnegative, and . We have and for . Also, it can be seen that . This illustrates Theorem 10.

Let $x_1, x_2, \ldots, x_k \in \mathbb{R}^m$ and $y_1, y_2, \ldots, y_k \in \mathbb{R}^n$ be nonnegative. Denote $X_i = (x_1, x_2, \ldots, x_i)$ and $Y_i = (y_1, y_2, \ldots, y_i)$ for $i = 1, 2, \ldots, k$. Then $A - X_kY_k^T = A - \sum_{i=1}^{k} x_iy_i^T$. The following theorem is obtained by a recurring application of the rank-one result of Theorem 16.

Theorem 18. *Let and let be as above. Further, suppose that , and be nonzero. Let , for all . Then if and only if , where is taken as . *

*Proof. *Set for , where is identified as . The conditions in the theorem can be written as:
Also, we have , for all .

Now, assume that . Then by Theorem 16 and the conditions in (14) for , we have . Also since by assumption for all , we have for all .

The converse part can be proved iteratively. Then condition and the conditions in (14) for imply that . Repeating the argument for to proves the result.

The following result is an extension of Lemma 2.3 in [1], which is in turn obtained as a corollary. This corollary will be used in proving another characterization for the Moore-Penrose inverse of a perturbed matrix to be nonnegative.

Theorem 19. *Let , be such that , and . Further, suppose that and . Let . Then, if and only if and .*

*Proof. *Let us first observe that and . Set
It then follows that and . Using these two equations, it can easily be shown that .

Suppose that and . Then, . Conversely suppose that . Then, we must have . Let denote the standard basis of . Then, for , we have , . Since , it follows that for . Thus, .

Corollary 20 (see [1, Lemma 2.3]). *Let be nonsingular. Let be such that , and . Let . Then, is nonsingular with if and only if and .*

Now, we obtain another necessary and sufficient condition for the Moore-Penrose inverse of an iteratively perturbed matrix to be nonnegative.

Theorem 21. *Let be such that . Let be nonnegative vectors in and respectively, for every , such that
**
where , and is taken as . If is nonsingular and is positive for all , then if and only if
*

*Proof. *The range conditions in (16) imply that and . Also, from the assumptions, it follows that is nonsingular. By Theorem 10, if and only if . Now,
Since , using Corollary 20, it follows that if and only if and .

Now, applying the above argument to the matrix , we have that holds if and only if holds and . Continuing the above argument, we get, if and only if , . This is condition (17).

We conclude the paper by considering an extension of Example 17.

*Example 22. *For a fixed , let be the circulant matrix generated by the row vector . Consider
where is the identity matrix of order , is vector with as the first entry and elsewhere. Then, . is the nonnegative Toeplitz matrix

Let be the first row of written as a column vector multiplied by and let be the first column of multiplied by , where , with being identified with , . We then have , , and . Now, if for each iteration, then the vectors and will be nonnegative. Hence, it is enough to check that in order to get . Experimentally, it has been observed that for , the condition holds true for any iteration. However, for , it is observed that , , , and . But, , and in that case, we observe that is not nonnegative.

#### References

- J. Ding, W. Pye, and L. Zhao, “Some results on structured $M$-matrices with an application to wireless communications,” *Linear Algebra and its Applications*, vol. 416, no. 2-3, pp. 608–614, 2006.
- S. K. Mitra and P. Bhimasankaram, “Generalized inverses of partitioned matrices and recalculation of least squares estimates for data or model changes,” *Sankhyā A*, vol. 33, pp. 395–410, 1971.
- C. Y. Deng, “A generalization of the Sherman-Morrison-Woodbury formula,” *Applied Mathematics Letters*, vol. 24, no. 9, pp. 1561–1564, 2011.
- A. Berman, M. Neumann, and R. J. Stern, *Nonnegative Matrices in Dynamical Systems*, John Wiley & Sons, New York, NY, USA, 1989.
- A. Berman and R. J. Plemmons, *Nonnegative Matrices in the Mathematical Sciences*, Society for Industrial and Applied Mathematics (SIAM), 1994.
- L. Collatz, *Functional Analysis and Numerical Mathematics*, Academic Press, New York, NY, USA, 1966.
- C. A. Desoer and B. H. Whalen, “A note on pseudoinverses,” *Journal of the Society for Industrial and Applied Mathematics*, vol. 11, pp. 442–447, 1963.
- A. Ben-Israel and T. N. E. Greville, *Generalized Inverses: Theory and Applications*, Springer, New York, NY, USA, 2003.
- A. Berman and R. J. Plemmons, “Cones and iterative methods for best least squares solutions of linear systems,” *SIAM Journal on Numerical Analysis*, vol. 11, pp. 145–154, 1974.
- D. Mishra and K. C. Sivakumar, “On splittings of matrices and nonnegative generalized inverses,” *Operators and Matrices*, vol. 6, no. 1, pp. 85–95, 2012.
- D. Mishra and K. C. Sivakumar, “Nonnegative generalized inverses and least elements of polyhedral sets,” *Linear Algebra and its Applications*, vol. 434, no. 12, pp. 2448–2455, 2011.
- M. R. Kannan and K. C. Sivakumar, “Moore-Penrose inverse positivity of interval matrices,” *Linear Algebra and its Applications*, vol. 436, no. 3, pp. 571–578, 2012.