A Bauer-Hausdorff Matrix Inequality
We present a biorthogonal process for two subspaces of $\mathbb{C}^n$. Applying this process, we derive a matrix inequality that generalizes the Bauer-Hausdorff inequality for vectors and includes the Wang-Ip inequality for matrices. We also obtain an equivalent matrix inequality.
The Cauchy-Bunyakovsky-Schwarz inequality, or C.B.S.-inequality for short, plays an important role in many branches of modern mathematics, including Hilbert space theory, probability and statistics, classical real and complex analysis, numerical analysis, and the qualitative theory of differential equations and their applications.
Let $V$ and $W$ be two linear subspaces of the $n$-dimensional complex space $\mathbb{C}^n$ such that $V \cap W = \{0\}$. Then there exists $\gamma \in [0, 1)$ such that for all $v \in V$ and $w \in W$ the following strengthened C.B.S.-inequality holds: $|\langle v, w \rangle| \le \gamma \|v\| \|w\|$, where $\|\cdot\|$ denotes the standard Euclidean norm. The smallest such $\gamma$ may be called the cosine of the angle between the spaces $V$ and $W$, or the C.B.S.-ratio of $V$ and $W$.
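As a numerical illustration of the C.B.S.-ratio (a sketch under our own conventions, assuming NumPy is available; none of the names below come from the paper), the ratio $\gamma$ is the largest singular value of the cross-Gram matrix of two orthonormal bases:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal bases of two random subspaces V, W of C^6 (generically V ∩ W = {0}).
V = np.linalg.qr(rng.standard_normal((6, 2)) + 1j * rng.standard_normal((6, 2)))[0]
W = np.linalg.qr(rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3)))[0]

# The C.B.S.-ratio gamma (cosine of the angle between V and W) is the
# largest singular value of the cross-Gram matrix V* W.
gamma = np.linalg.svd(V.conj().T @ W, compute_uv=False)[0]
assert gamma < 1.0  # trivial intersection gives a strict bound

# Spot-check the strengthened C.B.S.-inequality |<v, w>| <= gamma ||v|| ||w||.
for _ in range(100):
    v = V @ rng.standard_normal(2)
    w = W @ rng.standard_normal(3)
    assert abs(v.conj() @ w) <= gamma * np.linalg.norm(v) * np.linalg.norm(w) + 1e-10
```

Since $\langle v, w\rangle = a^*(V^*W)b$ for coefficient vectors $a, b$, the bound with the largest singular value is exact, which the random sampling above confirms.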
The strengthened C.B.S.-inequality has a long history, and various versions exist. The earliest result of this kind is due to Wielandt and was later generalized by many researchers. Among these generalizations, two important extensions of the Wielandt inequality were given by Bauer and Householder and by Wang and Ip. The aim of this paper is to present a matrix version of the Bauer-Hausdorff inequality analogous to the matrix version of the Wielandt inequality given by Wang and Ip.
On the practical side, this inequality has been used in the analysis of two-level methods. The survey by Eijkhout and Vassilevski presents the basic theory of this inequality and its applications in multilevel methods for the solution of linear systems arising from finite element or finite difference discretizations of elliptic partial differential equations. Auzinger and Kirlinger proposed another extension of this inequality for the resolvent conditions in the Kreiss matrix theorem.
Throughout the paper, we denote by $\mathcal{R}(A)$ the range of the matrix $A$, by $W(T)$ the (closed) numerical range of an operator $T$ on the space $\mathbb{C}^n$, and by $\operatorname{span}\{x_1, \ldots, x_k\}$ the linear subspace spanned by $x_1, \ldots, x_k \in \mathbb{C}^n$.
Definition 1. For $X \in \mathbb{C}^{n \times p}$, $P_X = XX^{+}$ denotes the orthogonal projector onto the column space (range) of $X$, where $X^{+}$ is the Moore-Penrose inverse of $X$.
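A minimal numerical sketch of Definition 1 (assuming NumPy, whose `pinv` computes the Moore-Penrose inverse), verifying the defining properties of the projector:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 2))

# Orthogonal projector onto the column space of X via the Moore-Penrose inverse.
P = X @ np.linalg.pinv(X)

assert np.allclose(P @ P, P)   # idempotent
assert np.allclose(P, P.T)     # symmetric (Hermitian in the real case)
assert np.allclose(P @ X, X)   # acts as the identity on range(X)
```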
Definition 2. For two positive semidefinite Hermitian matrices $A$ and $B$, we say that $A$ is below $B$ with respect to the Löwner partial ordering, and we write $A \preceq B$, if $B - A$ is positive semidefinite.
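The Löwner ordering can be tested numerically by examining the smallest eigenvalue of the difference. A brief sketch (the helper name `loewner_leq` is ours, not the paper's):

```python
import numpy as np

def loewner_leq(A, B, tol=1e-10):
    """True if A <= B in the Loewner order, i.e. B - A is positive semidefinite."""
    return np.linalg.eigvalsh(B - A).min() >= -tol

A = np.eye(2)
B = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 1 and 3

assert loewner_leq(A, B)      # B - A has eigenvalues 0 and 2, hence psd
assert not loewner_leq(B, A)  # the ordering is only partial
```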
2. A Matrix Inequality
Let $A$ be an $n \times n$ positive definite Hermitian matrix. For any two nonzero complex vectors $x$ and $y$, Bauer and Householder asserted inequality (6), where the bound depends on the angle between the two vector subspaces mentioned above and on $\lambda_1$ and $\lambda_n$, the largest and smallest (necessarily real and positive) eigenvalues of $A$.
A very interesting and important special case of (6) is the Wielandt inequality, established by Wielandt for orthogonal vectors $x \perp y$: $|x^* A y|^2 \le \left(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\right)^2 (x^* A x)(y^* A y)$. The factor $\left(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\right)^2$ in (8) is called the Wielandt ratio. Wang and Ip generalized this inequality as follows.
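The Wielandt inequality is easy to sanity-check numerically. The following sketch (assuming NumPy, with a randomly generated positive definite matrix) samples orthogonal pairs:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)   # random positive definite Hermitian matrix
lam = np.linalg.eigvalsh(A)   # ascending: lam[0] = lambda_n, lam[-1] = lambda_1
ratio = ((lam[-1] - lam[0]) / (lam[-1] + lam[0])) ** 2  # squared Wielandt ratio

for _ in range(200):
    x = rng.standard_normal(4)
    y = rng.standard_normal(4)
    y -= (x @ y) / (x @ x) * x  # enforce x ⊥ y
    lhs = (x @ A @ y) ** 2
    rhs = ratio * (x @ A @ x) * (y @ A @ y)
    assert lhs <= rhs * (1 + 1e-9) + 1e-9
```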
Let $X$ and $Y$ be complex $n \times p$ and $n \times q$ matrices, respectively. If $X^* Y = 0$, then $X^* A Y (Y^* A Y)^{-} Y^* A X \preceq \left(\frac{\lambda_1 - \lambda_n}{\lambda_1 + \lambda_n}\right)^2 X^* A X$ in the Löwner partial ordering, for every generalized inverse $(Y^* A Y)^{-}$.
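A numerical spot-check of the Wang-Ip matrix inequality in the form just recalled (a sketch assuming NumPy; we take the ordinary inverse as the generalized inverse, since $Y^*AY$ is nonsingular for generic data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 6, 2, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)   # positive definite
lam = np.linalg.eigvalsh(A)
c = ((lam[-1] - lam[0]) / (lam[-1] + lam[0])) ** 2  # squared Wielandt ratio

X = np.linalg.qr(rng.standard_normal((n, p)))[0]
Y = rng.standard_normal((n, q))
Y -= X @ (X.T @ Y)            # make X* Y = 0

# Wang-Ip: X*AY (Y*AY)^- Y*AX  <=  c * X*AX  in the Loewner order.
S = X.T @ A @ Y @ np.linalg.inv(Y.T @ A @ Y) @ Y.T @ A @ X
D = c * (X.T @ A @ X) - S
assert np.linalg.eigvalsh(D).min() >= -1e-8
```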
The first statistical application of the Wielandt inequality seems to be due to Eaton. Various equivalent versions of (8) exist in the literature. The most important of them is the famous Kantorovich inequality [9, 10], which was used to estimate the convergence rate of the steepest descent method for minimizing quadratic functions. Historical and biographical remarks on these inequalities can be found in the literature.
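For concreteness, the Kantorovich inequality $(x^*Ax)(x^*A^{-1}x) \le \frac{(\lambda_1+\lambda_n)^2}{4\lambda_1\lambda_n}\,\|x\|^4$ can be checked in the same style (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)   # positive definite
lam = np.linalg.eigvalsh(A)
K = (lam[-1] + lam[0]) ** 2 / (4 * lam[-1] * lam[0])  # Kantorovich constant >= 1

for _ in range(200):
    x = rng.standard_normal(5)
    lhs = (x @ A @ x) * (x @ np.linalg.solve(A, x))
    assert lhs <= K * (x @ x) ** 2 * (1 + 1e-9)
```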
Lemma 3 (see Zhan). Let $M$ be a Hermitian matrix with the block form $M = \begin{pmatrix} A & B \\ B^* & C \end{pmatrix}$. Then $M$ is positive semidefinite if and only if $A$ and $C$ are positive semidefinite and there is a contraction matrix $K$ such that $B = A^{1/2} K C^{1/2}$.
If $M$ can be partitioned as (10), then (9) takes the corresponding block form. In other words, if $M$ is positive definite with the block form (10), then there is a contraction matrix $K$, with maximum singular value bounded accordingly, such that $B = A^{1/2} K C^{1/2}$.
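Lemma 3 can be verified numerically: extract the blocks of a random positive semidefinite matrix and recover the contraction (a sketch assuming NumPy; the helper `psd_sqrt` and the name `K` are ours):

```python
import numpy as np

def psd_sqrt(A):
    """Principal square root of a positive semidefinite Hermitian matrix."""
    w, U = np.linalg.eigh(A)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.T

rng = np.random.default_rng(5)
R = rng.standard_normal((6, 6))
M = R @ R.T                    # 6x6 positive semidefinite
A, B, C = M[:3, :3], M[:3, 3:], M[3:, 3:]

sA, sC = psd_sqrt(A), psd_sqrt(C)
K = np.linalg.pinv(sA) @ B @ np.linalg.pinv(sC)  # candidate contraction

assert np.linalg.norm(K, 2) <= 1 + 1e-9          # spectral norm at most 1
assert np.allclose(sA @ K @ sC, B)               # B = A^{1/2} K C^{1/2}
```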
The main result of this paper is stated in the following theorem.
Theorem 4. Let be a positive semidefinite Hermitian matrix with and eigenvalues . For any and , if is the angle between the two vector spaces and and then
Theorem 4 will be proved in the next section. It states a very general form of a (strengthened) C.B.S.-inequality and covers various types of C.B.S.-inequalities and their matrix forms. For instance, inequality (15) reduces to (6) in one special case and to (9) in another.
Theorem 4 can be rewritten as the following equivalent form.
The maximum singular value of the contraction matrix in Theorem 5 may be strictly less than the stated bound; a concrete example shows that the singular value of the contraction matrix can lie strictly inside the corresponding interval.
3. Proof of the Main Result
In this section we present an elementary proof of Theorem 4 by a biorthogonal procedure.
Lemma 6. Let $\gamma$ be the C.B.S.-ratio of two subspaces $V$ and $W$ of $\mathbb{C}^n$ satisfying (1). Then there exist two unit vectors $v \in V$ and $w \in W$ attaining the ratio, that is, $|\langle v, w \rangle| = \gamma$.
One direct consequence of Lemma 6 is the following theorem.
Theorem 7. Let $V$ and $W$ be $p$- and $q$-dimensional linear subspaces of $\mathbb{C}^n$ satisfying (1), with $p \le q$. Then there exist two orthonormal bases, one of $V$ and one of $W$, that are biorthogonal: the $i$th basis vector of $V$ is orthogonal to the $j$th basis vector of $W$ for each $i \ne j$.
Proof. We shall achieve the desired result by the following biorthogonal process. Start with and , . By Lemma 6, one finds two unit vectors and such that
where is the C.B.S.-ratio of and with . If or , then the procedure is completed; otherwise, update and by setting
It is easily proved that
Replace by , and repeat the above procedure until .
If $p$ is not equal to $q$, one completes the shorter family to an orthonormal basis of the corresponding subspace. This procedure generates two bases, one of $V$ and one of $W$, satisfying the stated relations for each index. Equations (23) and (25) imply that these two bases are orthonormal. Finally, (21) holds since it is equivalent to (25).
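The biorthogonal bases of Theorem 7 can also be produced in one step from the singular value decomposition of the cross-Gram matrix. This is the standard principal-angle construction, offered here only as a numerical illustration of the process above (a sketch with our own naming, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(6)
Q1 = np.linalg.qr(rng.standard_normal((7, 3)))[0]  # orthonormal basis of V
Q2 = np.linalg.qr(rng.standard_normal((7, 3)))[0]  # orthonormal basis of W

# SVD of the cross-Gram matrix rotates both bases into biorthogonal form.
U, s, Vt = np.linalg.svd(Q1.T @ Q2)
E, F = Q1 @ U, Q2 @ Vt.T

assert np.allclose(E.T @ E, np.eye(3))   # E is an orthonormal basis of V
assert np.allclose(F.T @ F, np.eye(3))   # F is an orthonormal basis of W
assert np.allclose(E.T @ F, np.diag(s))  # <e_i, f_j> = 0 for i != j, = s_i for i = j
assert s[0] <= 1 + 1e-12                 # the s_i are cosines of principal angles
```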
In order to acquire our main result, we need the following lemmas.
Lemma 8. Let be the numerical range of a linear operator on the space . Then for an arbitrary operator , where the multiplication is defined by , .
Proof. For any , if , then . If and , then such that which results in the desired assertion.
Lemma 9. If , then the matrix has simple eigenvalues , where and are identity matrices of the indicated orders, and is a diagonal matrix . Furthermore, if , then has the multiple eigenvalue 1.
Proof. If two vectors and are defined by
then they are eigenvectors of with the eigenvalues and , respectively.
If , then for each , the th column vector of the identity matrix is an eigenvector of with the multiple eigenvalue 1. This completes the proof.
Finally, we give the proof of Theorem 4.
Proof of Theorem 4. Let and be two orthonormal bases of the ranges and , respectively, with ranks and as in Theorem 7. Without loss of generality, we may assume that . Let , where diag denotes a diagonal matrix with for each and the C.B.S.-ratio of and . There exist two matrices and such that and .
Letting , can be expressed in the form (29), so that from Lemma 9. Lemma 8 shows that the matrix has largest and smallest eigenvalues and , respectively. Since (7) holds and the Wielandt ratio of satisfies the required bound, applying the Schur complement theory (see [13, 14]) to inequality (9) shows that the matrix is positive semidefinite, which in turn implies that the matrix is positive semidefinite. Applying the Schur complement theory again, the desired result follows.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The present investigation was supported, in part, by the China Postdoctoral Science Foundation under Grant 2012M520417 and, in part, by the Natural Science Foundation of Fujian Province of China under Grant 2012J01014. The authors are grateful to the anonymous referees for their careful comments and valuable suggestions on this paper.
H. Wielandt, “Inclusion theorems for eigenvalues,” in Simultaneous Linear Equations and the Determination of Eigenvalues, National Bureau of Standards Applied Mathematics Series, pp. 75–78, U.S. Government Printing Office, Washington, DC, USA, 1953.
S. W. Drury, S. Liu, C. Y. Lu, S. Puntanen, and G. P. H. Styan, “Some comments on several matrix inequalities with applications to canonical correlations: historical background and recent developments,” Sankhyā, vol. 64, no. 2, pp. 453–507, 2002.
X. Z. Zhan, Matrix Theory, Higher Education Press, Beijing, China, 2008 (Chinese).
M. Fujii, Wielandt Theorem: Simple Proof and Its Generalizations, Hokkaido University Technical Report Series in Mathematics, 1997.
R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.