Abstract

We present a biorthogonal process for two subspaces of $\mathbb{C}^n$. Applying this process, we derive a matrix inequality that generalizes the Bauer-Householder inequality for vectors and includes the Wang-Ip inequality for matrices. We also obtain an equivalent form of this matrix inequality.

1. Introduction

The Cauchy-Bunyakovsky-Schwarz inequality, or C.B.S.-inequality for short, plays an important role in different branches of modern mathematics, including Hilbert space theory, probability and statistics, classical real and complex analysis, numerical analysis, and the qualitative theory of differential equations, together with their applications.

Given the $n$-dimensional complex space $\mathbb{C}^n$ and two linear subspaces $V$ and $W$ such that $V \cap W = \{0\}$, there exists a constant $\gamma \in [0, 1)$ such that for all $v \in V$ and $w \in W$ the following strengthened C.B.S.-inequality holds (see [1]):
$$|w^{*} v| \le \gamma\, \|v\|\, \|w\|,$$
where $\|\cdot\|$ denotes the standard Euclidean norm. The smallest such quantity $\gamma$ may be called the cosine of the angle between the spaces $V$ and $W$, or the C.B.S.-ratio of $V$ and $W$.

The strengthened C.B.S.-inequality has a long history and exists in various versions. The earliest result of this kind is due to Wielandt [2] and was later generalized by many researchers. Among them, two important extensions of the Wielandt inequality were given by Bauer and Householder [3] and by Wang and Ip [4]. The aim of this paper is to present a matrix version of the Bauer-Householder inequality, analogous to the matrix version of the Wielandt inequality given by Wang and Ip in [4].

On the practical side, this inequality has been used in the analysis of two-level methods. The survey by Eijkhout and Vassilevski [1] presents the basic theory of this inequality and its applications in multilevel methods for the solution of linear systems arising from finite element or finite difference discretizations of elliptic partial differential equations. Auzinger and Kirlinger [5] proposed another extension of this inequality for the resolvent conditions in the Kreiss matrix theorem [6].

Throughout the paper, we denote by $\mathcal{R}(A)$ the range of a matrix $A$, by $\mathcal{W}(T)$ the (closed) numerical range of an operator $T$ on the space $\mathbb{C}^n$, and by $\operatorname{span}\{x_1, \ldots, x_k\}$ the linear subspace spanned by the vectors $x_1, \ldots, x_k \in \mathbb{C}^n$.

Definition 1. For $A \in \mathbb{C}^{n \times m}$, $P_A = AA^{+}$ denotes the orthogonal projector onto the column space (range) of $A$, where $A^{+}$ is the Moore-Penrose inverse of $A$.
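As a numerical illustration (not part of the original text), the projector $P_A = AA^{+}$ of Definition 1 can be formed and checked with NumPy; the matrix below is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))   # a 5 x 3 matrix of rank 2

P = A @ np.linalg.pinv(A)            # P_A = A A^+, the orthogonal projector onto R(A)

print(np.allclose(P, P.conj().T))    # Hermitian
print(np.allclose(P @ P, P))         # idempotent
print(np.allclose(P @ A, A))         # acts as the identity on the column space of A
```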

Definition 2. For two positive semidefinite Hermitian matrices $A$ and $B$ of the same order, we say that $A$ is below $B$ with respect to the Löwner partial ordering, and we write $A \preceq B$, if $B - A$ is positive semidefinite.
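Definition 2 can be tested numerically: $A \preceq B$ holds exactly when the smallest eigenvalue of the Hermitian matrix $B - A$ is nonnegative. A minimal sketch with illustrative matrices:

```python
import numpy as np

def loewner_leq(A, B, tol=1e-12):
    """Return True if A <= B in the Loewner order, i.e. B - A is positive semidefinite."""
    D = B - A
    D = (D + D.conj().T) / 2                # symmetrize against rounding noise
    return np.linalg.eigvalsh(D).min() >= -tol

A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[2.0, 1.0], [1.0, 2.0]])      # B - A = [[1,1],[1,1]] is positive semidefinite
print(loewner_leq(A, B))                    # True
print(loewner_leq(B, A))                    # False
```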

2. A Matrix Inequality

Let $A$ be an $n \times n$ positive definite Hermitian matrix. For any two nonzero complex vectors $x$ and $y$, Bauer and Householder [3] asserted that the inequality (6) holds, where the bound in (6) satisfies the relation (7), with $\theta$ being the angle, in the sense described above, between the subspaces spanned by $x$ and by $y$, and with $\lambda_1$ and $\lambda_n$ being the largest and smallest, necessarily real and positive, eigenvalues of $A$.
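Since the displayed formulas (6) and (7) are not reproduced here, the following sketch assumes one standard formulation of the Bauer-Householder bound, namely $|x^{*}Ay| \le \cos\psi\,(x^{*}Ax)^{1/2}(y^{*}Ay)^{1/2}$ whenever $|x^{*}y| \le \cos\theta\,\|x\|\,\|y\|$, with $\cot(\psi/2) = (\lambda_1/\lambda_n)^{1/2}\cot(\theta/2)$; the normalization in the original display may differ. For $\theta = \pi/2$ this reduces to the Wielandt ratio $(\lambda_1-\lambda_n)/(\lambda_1+\lambda_n)$ discussed below. All matrices and vectors in the sketch are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                       # positive definite (real symmetric) matrix
lam = np.linalg.eigvalsh(A)
lam_n, lam_1 = lam[0], lam[-1]                    # smallest and largest eigenvalues

# build unit vectors x, y with prescribed angle theta between them
theta = np.pi / 3
x = rng.standard_normal(n); x /= np.linalg.norm(x)
z = rng.standard_normal(n); z -= (x @ z) * x; z /= np.linalg.norm(z)
y = np.cos(theta) * x + np.sin(theta) * z         # so x . y = cos(theta)

# assumed Bauer-Householder-type bound: cot(psi/2) = sqrt(lam_1/lam_n) * cot(theta/2)
cot_half_psi = np.sqrt(lam_1 / lam_n) / np.tan(theta / 2)
cos_psi = (cot_half_psi**2 - 1) / (cot_half_psi**2 + 1)

lhs = abs(x @ A @ y)
rhs = cos_psi * np.sqrt(x @ A @ x) * np.sqrt(y @ A @ y)
print(lhs <= rhs + 1e-12)                         # expected: True
```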

A very interesting and important special case of (6) is the Wielandt inequality, established by Wielandt [2], which is obtained when $x$ and $y$ are orthogonal ($\theta = \pi/2$):
$$|x^{*} A y|^2 \le \left(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\right)^{2} (x^{*} A x)(y^{*} A y).$$
The bound $(\lambda_1 - \lambda_n)/(\lambda_1 + \lambda_n)$ appearing in (8) is called the Wielandt ratio (see [7]). Wang and Ip [4] generalized this inequality as follows.

Let $X$ and $Y$ be complex $n \times p$ and $n \times q$ matrices, respectively. If $X^{*} Y = 0$, then
$$Y^{*} A X (X^{*} A X)^{-} X^{*} A Y \preceq \left(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\right)^{2} Y^{*} A Y$$
for all generalized inverses $(X^{*} A X)^{-}$, in the Löwner partial ordering.
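The Wang-Ip inequality can be illustrated numerically. The sketch below assumes the commonly cited form quoted above, uses the Moore-Penrose inverse as the particular generalized inverse, and checks the Löwner ordering via the smallest eigenvalue of the difference; all matrices are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 7, 2, 3
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                                 # positive definite matrix
lam = np.linalg.eigvalsh(A)
mu = ((lam[-1] - lam[0]) / (lam[-1] + lam[0])) ** 2     # squared Wielandt ratio

X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))
Y -= X @ np.linalg.pinv(X) @ Y                          # enforce X^T Y = 0

L = Y.T @ A @ X @ np.linalg.pinv(X.T @ A @ X) @ X.T @ A @ Y
R = mu * (Y.T @ A @ Y)
D = R - L                                               # should be positive semidefinite
print(np.linalg.eigvalsh((D + D.T) / 2).min() >= -1e-10)   # expected: True
```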

The first statistical application of the Wielandt inequality seems to be due to Eaton [8]. There are various equivalent versions of (8) in the literature. The most important of them is the famous Kantorovich inequality [9, 10], which has been used in estimating the convergence rate of the steepest descent method for minimizing quadratic functions. Historical and biographical remarks on them can be found in [7].
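The Kantorovich inequality mentioned here has the well-known form $(x^{*}Ax)(x^{*}A^{-1}x) \le \frac{(\lambda_1+\lambda_n)^2}{4\lambda_1\lambda_n}\,\|x\|^4$ for positive definite $A$; a quick numerical check with an arbitrary example follows.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                     # positive definite matrix
lam = np.linalg.eigvalsh(A)
bound = (lam[-1] + lam[0]) ** 2 / (4 * lam[-1] * lam[0])   # Kantorovich constant

x = rng.standard_normal(n)
lhs = (x @ A @ x) * (x @ np.linalg.solve(A, x))
print(lhs <= bound * np.linalg.norm(x) ** 4 + 1e-10)       # expected: True
```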

Lemma 3 (see Zhan [11]). Let $H$ be a Hermitian matrix with the block form
$$H = \begin{pmatrix} H_{11} & H_{12} \\ H_{12}^{*} & H_{22} \end{pmatrix}.$$
Then $H$ is positive semidefinite if and only if $H_{11}$ and $H_{22}$ are positive semidefinite and there is a contraction matrix $K$ such that $H_{12} = H_{11}^{1/2} K H_{22}^{1/2}$.
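Lemma 3 can be checked numerically. In the sketch below (illustrative matrices only, with invertible diagonal blocks), a positive semidefinite block matrix is generated, the contraction is recovered as $K = H_{11}^{-1/2} H_{12} H_{22}^{-1/2}$ using pseudoinverses of the Hermitian square roots, and its spectral norm is verified to be at most 1.

```python
import numpy as np

def psd_sqrt(X):
    """Hermitian square root of a positive semidefinite matrix."""
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

rng = np.random.default_rng(4)
n = 3
F = rng.standard_normal((2 * n, 2 * n))
H = F @ F.T                                    # positive semidefinite block matrix
H11, H12, H22 = H[:n, :n], H[:n, n:], H[n:, n:]

K = np.linalg.pinv(psd_sqrt(H11)) @ H12 @ np.linalg.pinv(psd_sqrt(H22))

print(np.linalg.norm(K, 2) <= 1 + 1e-10)                 # K is a contraction
print(np.allclose(psd_sqrt(H11) @ K @ psd_sqrt(H22), H12))   # H12 = H11^{1/2} K H22^{1/2}
```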

If $A$ can be partitioned in the block form (10), then (9) can be restated in this notation. In other words, if $A$ is positive definite with the block form (10), say $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{12}^{*} & A_{22} \end{pmatrix}$, then there is a contraction matrix $K$, with maximum singular value equal to or less than the Wielandt ratio $(\lambda_1 - \lambda_n)/(\lambda_1 + \lambda_n)$, such that $A_{12} = A_{11}^{1/2} K A_{22}^{1/2}$.

The main result of this paper is stated in the following theorem.

Theorem 4. Let $A$ be a positive semidefinite Hermitian matrix with $\operatorname{rank}(A) = r$ and eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_r > 0$. For any $X$ and $Y$, if $\theta$ is the angle between the two vector spaces $\mathcal{R}(X)$ and $\mathcal{R}(Y)$, then the matrix inequality (15) holds.

Theorem 4 will be proved in the next section. It states a very general form of a (strengthened) C.B.S.-inequality and covers various types of C.B.S.-inequalities and their matrix forms. For instance, the inequality (15) reduces to (6) when $X$ and $Y$ are nonzero vectors and reduces to (9) when $\theta = \pi/2$.

Theorem 4 can be rewritten in the following equivalent form.

Theorem 5. Under the assumptions of Theorem 4, there is a contraction matrix $K$ (from Lemma 3), with maximum singular value equal to or less than the bound appearing in Theorem 4, such that the corresponding factorization of Lemma 3 holds.

The maximum singular value of the contraction matrix $K$ in Theorem 5 might be strictly less than this bound. A concrete numerical example shows that the singular values of the contraction matrix can lie strictly inside the corresponding interval.

3. Proof of the Main Result

In this section we present an elementary proof of Theorem 4 by a biorthogonal procedure.

The C.B.S.-ratio $\gamma$ in (3) can be redefined as a maximum over pairs of unit vectors,
$$\gamma = \max\{\, |w^{*} v| : v \in V,\ w \in W,\ \|v\| = \|w\| = 1 \,\}.$$
Since in finite-dimensional spaces the unit sphere is compact, the maximum in (19) is indeed attained. The following result is then obvious.

Lemma 6. Let $\gamma$ be the C.B.S.-ratio of two subspaces $V$ and $W$ of $\mathbb{C}^n$ satisfying (1). Then there exist two unit vectors $v \in V$ and $w \in W$ such that $|w^{*} v| = \gamma$.
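Numerically, the C.B.S.-ratio of two subspaces equals the largest cosine of a principal angle between them, which is the largest singular value of $Q_V^{*} Q_W$ for matrices $Q_V$, $Q_W$ (notation introduced here) whose orthonormal columns span $V$ and $W$; the extremal unit vectors of Lemma 6 are the corresponding singular vectors. A sketch under these standard identities, with random subspaces:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q = 8, 3, 2
QV, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal basis of V
QW, _ = np.linalg.qr(rng.standard_normal((n, q)))   # orthonormal basis of W

U, s, Zt = np.linalg.svd(QV.T @ QW)
gamma = s[0]                                        # C.B.S.-ratio: cosine of the smallest principal angle

v = QV @ U[:, 0]                                    # extremal unit vectors of Lemma 6
w = QW @ Zt[0, :]
print(np.isclose(abs(v @ w), gamma))                # |(v, w)| attains the ratio
```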

One direct consequence of Lemma 6 is the following theorem.

Theorem 7. Let $V$ and $W$ be $p$- and $q$-dimensional linear subspaces of $\mathbb{C}^n$ satisfying (1). Then there exist two orthonormal bases $\{v_1, \ldots, v_p\}$ of $V$ and $\{w_1, \ldots, w_q\}$ of $W$ such that $w_j^{*} v_i = 0$ for each pair $i \ne j$.

Proof. We shall achieve the desired result by the following biorthogonal process. Start with $V_1 = V$, $W_1 = W$, and $k = 1$. By Lemma 6, one finds two unit vectors $v_k \in V_k$ and $w_k \in W_k$ such that $|w_k^{*} v_k| = \gamma_k$, where $\gamma_k$ is the C.B.S.-ratio of $V_k$ and $W_k$, with $\gamma_1 = \gamma$. If $\gamma_k = 0$ or one of the subspaces has been exhausted, then the procedure is completed; otherwise, update $V_k$ and $W_k$ by taking $V_{k+1}$ and $W_{k+1}$ to be the orthogonal complements of $v_k$ in $V_k$ and of $w_k$ in $W_k$, respectively. It is easily proved that $v_k$ is orthogonal to every vector of $W_{k+1}$ and that $w_k$ is orthogonal to every vector of $V_{k+1}$.
Replace $k$ by $k + 1$, and repeat the above procedure until it terminates.
If $\dim V$ is not equal to $\dim W$, one completes the construction with an arbitrary orthonormal basis of the remaining part of $V$ (if $p > q$) or of $W$ (if $p < q$). This procedure generates two bases $\{v_1, \ldots, v_p\}$ of $V$ and $\{w_1, \ldots, w_q\}$ of $W$ such that $w_j^{*} v_i = 0$ for each pair $i \ne j$. Equations (23) and (25) imply that these two bases are orthonormal. Finally, (21) holds since it is equivalent to (25).
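The biorthogonal process of Theorem 7 can be sketched in code. The version below is an interpretation of the procedure: at each step it takes the extremal pair of Lemma 6 (computed here via the singular value decomposition of $Q_V^{*} Q_W$), restricts both subspaces to the orthogonal complements of the chosen vectors, and repeats; the resulting orthonormal bases then have a diagonal cross Gram matrix, as in the conclusion of Theorem 7.

```python
import numpy as np

def biorthogonal_bases(QV, QW):
    """Sketch of the biorthogonal process of Theorem 7.

    QV and QW have orthonormal columns spanning V and W.  Each round picks the
    extremal pair of Lemma 6 via the SVD of QV^T QW, then deflates both
    subspaces by the chosen vectors.  Returns orthonormal bases of V and W
    whose cross inner products vanish off the diagonal.
    """
    V_cols, W_cols = [], []
    while QV.shape[1] > 0 and QW.shape[1] > 0:
        U, s, Zt = np.linalg.svd(QV.T @ QW)
        V_cols.append(QV @ U[:, 0])          # extremal unit vector in the current V-part
        W_cols.append(QW @ Zt[0, :])         # extremal unit vector in the current W-part
        QV = QV @ U[:, 1:]                   # orthogonal complement within V
        QW = QW @ Zt[1:, :].T                # orthogonal complement within W
    # leftover directions (if dim V != dim W) already satisfy the orthogonality
    V = np.column_stack(V_cols + ([QV] if QV.shape[1] else []))
    W = np.column_stack(W_cols + ([QW] if QW.shape[1] else []))
    return V, W

rng = np.random.default_rng(6)
QV, _ = np.linalg.qr(rng.standard_normal((8, 3)))
QW, _ = np.linalg.qr(rng.standard_normal((8, 2)))
V, W = biorthogonal_bases(QV, QW)
G = V.T @ W                                  # cross Gram matrix of the two bases
mask = ~np.eye(*G.shape, dtype=bool)         # off-diagonal positions
print(np.allclose(G[mask], 0))               # (v_i, w_j) = 0 for i != j
```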

In order to obtain our main result, we need the following lemmas.

Lemma 8. Let $\mathcal{W}(T)$ be the numerical range of a linear operator $T$ on the space $\mathbb{C}^n$. Then the corresponding inclusion holds for an arbitrary operator, where the multiplication of two numerical ranges is defined elementwise, $\mathcal{W}(S)\mathcal{W}(T) = \{\, st : s \in \mathcal{W}(S),\ t \in \mathcal{W}(T) \,\}$.

Proof. For any eigenvalue $\lambda$, if $\lambda = 0$, then the assertion is immediate. If $\lambda \ne 0$, then the corresponding unit eigenvector yields a factorization of $\lambda$ into a product of elements of the two numerical ranges, which results in the desired assertion.

When the operator involved is invertible, Lemma 8 was established by Fujii [12].

Lemma 9. If $d_1 > d_2 > \cdots > d_p > 0$, then the matrix $\begin{pmatrix} I_p & D \\ D^{*} & I_q \end{pmatrix}$ has the simple eigenvalues $1 \pm d_i$, $i = 1, \ldots, p$, where $I_p$ and $I_q$ are the $p \times p$ and $q \times q$ identity matrices and $D$ is the $p \times q$ matrix whose leading block is the diagonal matrix $\operatorname{diag}(d_1, \ldots, d_p)$. Furthermore, if $p < q$, then this matrix also has the multiple eigenvalue 1.

Proof. For each $i = 1, \ldots, p$, the two vectors obtained by stacking the $i$th standard basis vector of $\mathbb{C}^p$ on top of $\pm$ the $i$th standard basis vector of $\mathbb{C}^q$ are eigenvectors of this matrix with the eigenvalues $1 + d_i$ and $1 - d_i$, respectively.
If $p < q$, then for each $j = p + 1, \ldots, q$, the $(p + j)$th column vector of the identity matrix of order $p + q$ is an eigenvector of this matrix with the multiple eigenvalue 1. The proof is completed.
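Since the displayed matrix of Lemma 9 is not reproduced above, the following check assumes the reading given here, namely a block matrix $\begin{pmatrix} I_p & D \\ D^{*} & I_q \end{pmatrix}$ with a rectangular diagonal block $D$; under that assumption the eigenvalues come in pairs $1 \pm d_i$, with the extra dimensions contributing the eigenvalue 1.

```python
import numpy as np

p, q = 3, 2
d = np.array([0.9, 0.4])                       # distinct positive diagonal entries
D = np.zeros((p, q)); D[:q, :q] = np.diag(d)   # rectangular "diagonal" block

M = np.block([[np.eye(p), D], [D.T, np.eye(q)]])
eig = np.sort(np.linalg.eigvalsh(M))
expected = np.sort(np.concatenate([1 - d, 1 + d, np.ones(p - q)]))
print(np.allclose(eig, expected))              # eigenvalues 1 +/- d_i, the rest equal to 1
```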

Finally, we give the proof of Theorem 4.

Proof of Theorem 4. Let $\{v_1, \ldots, v_p\}$ and $\{w_1, \ldots, w_q\}$ be two orthonormal bases of the ranges $\mathcal{R}(X)$ and $\mathcal{R}(Y)$, respectively, constructed as in Theorem 7, where $p$ and $q$ are the corresponding ranks. Without loss of generality, we may assume that $p \le q$. Let $D = \operatorname{diag}(\gamma_1, \ldots, \gamma_p)$ be the diagonal matrix whose entries are the successive C.B.S.-ratios generated by the biorthogonal process, the first of them being the C.B.S.-ratio of $\mathcal{R}(X)$ and $\mathcal{R}(Y)$. Collect the two bases into matrices $Q_1 = (v_1, \ldots, v_p)$ and $Q_2 = (w_1, \ldots, w_q)$, so that $Q_1^{*} Q_1 = I_p$, $Q_2^{*} Q_2 = I_q$, and $Q_1^{*} Q_2$ has the diagonal structure described above.
Letting $U = (Q_1\ \ Q_2)$, the Gram matrix $U^{*} U$ can be expressed in the form (29), whose eigenvalues are given by Lemma 9. Lemma 8 then shows that the largest and smallest eigenvalues of the relevant product matrix are bounded by the products of the corresponding extreme values. Since (7) holds, the Wielandt ratio of this matrix can be bounded accordingly; applying the Schur complement theory (see [13, 14]) to the inequality (9) shows that the corresponding block matrix is positive semidefinite. Applying the Schur complement theory again, the desired result is proved.
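The Schur complement criterion used twice in this proof can be stated and checked numerically: for a Hermitian block matrix $\begin{pmatrix} A & B \\ B^{*} & C \end{pmatrix}$ with $A$ positive definite, the whole matrix is positive semidefinite if and only if $C - B^{*}A^{-1}B \succeq 0$. A brief sketch of this equivalence with illustrative matrices:

```python
import numpy as np

def is_psd(X, tol=1e-10):
    X = (X + X.conj().T) / 2
    return np.linalg.eigvalsh(X).min() >= -tol

rng = np.random.default_rng(7)
n = 3
A = rng.standard_normal((n, n)); A = A @ A.T + np.eye(n)   # positive definite block
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n)); C = C @ C.T

M = np.block([[A, B], [B.T, C]])
schur = C - B.T @ np.linalg.solve(A, B)                    # Schur complement of A in M
print(is_psd(M) == is_psd(schur))                          # the two positivity tests agree
```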

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The present investigation was supported, in part, by the China Postdoctoral Science Foundation under Grant 2012M520417 and, in part, by the Natural Science Foundation of Fujian province of China under Grant 2012J01014. The authors are grateful to the anonymous referees for their careful comments and valuable suggestions on this paper.