Abstract

This paper considers the precise asymptotics of the spectral statistics of random matrices. Following the ideas of Gut and Spătaru (2000) and Liu and Lin (2006) on the precise asymptotics of i.i.d. random variables in the context of complete convergence and second-order moment convergence, respectively, we establish the precise second-order moment convergence rates of a type of series constructed from the spectral statistics of Wigner matrices or sample covariance matrices.

1. Introduction and Main Results

This paper is concerned with the precise asymptotic behavior of the spectral (eigenvalue) statistics of random matrices; two classical types of random matrices, Wigner matrices and sample covariance matrices, will be considered. A Wigner matrix is defined to be a random Hermitian matrix $W_n = (w_{ij})_{1\le i,j\le n}$, in which the real and imaginary parts of $w_{ij}$ for $i<j$ are i.i.d. random variables with mean $0$ and variance $1/2$ (in that case, $\mathbb{E}|w_{ij}|^2 = 1$), and the diagonal entries $w_{ii}$, $1\le i\le n$, are i.i.d. real random variables with mean $0$ and variance $1$. Denote by $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ the real eigenvalues of the normalized Wigner matrix $n^{-1/2}W_n$. The classical Wigner theorem states that the empirical spectral distribution $F_n(x) = \frac{1}{n}\#\{1\le i\le n: \lambda_i \le x\}$ converges almost surely to the semicircle law with the density $\rho_{sc}(x) = \frac{1}{2\pi}\sqrt{4-x^2}\,\mathbf{1}_{\{|x|\le 2\}}$. Consequently, for any bounded continuous function $f$, the spectral statistics satisfy
$$\frac{1}{n}\sum_{i=1}^{n} f(\lambda_i) \longrightarrow \int_{-2}^{2} f(x)\,\rho_{sc}(x)\,dx \quad \text{a.s.}$$
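
As a quick numerical illustration of this law of large numbers (not part of the original argument), the following sketch samples a normalized complex Wigner matrix under the conventions above and compares the spectral statistic with the corresponding semicircle integral; the test function $f(x) = x^2$, the matrix sizes, and the use of Gaussian entries are arbitrary illustrative choices.

```python
import numpy as np

def wigner_spectral_statistic(n, f, rng):
    """Return (1/n) * sum_i f(lambda_i) for the eigenvalues of n^{-1/2} W_n."""
    # Off-diagonal entries: real and imaginary parts i.i.d. N(0, 1/2), so E|w_ij|^2 = 1.
    re = rng.normal(scale=np.sqrt(0.5), size=(n, n))
    im = rng.normal(scale=np.sqrt(0.5), size=(n, n))
    w = np.triu(re + 1j * im, k=1)
    w = w + w.conj().T                        # Hermitian off-diagonal part
    w = w + np.diag(rng.normal(size=n))       # real diagonal entries, variance 1
    lam = np.linalg.eigvalsh(w / np.sqrt(n))  # eigenvalues of the normalized matrix
    return np.mean(f(lam))

def semicircle_integral(f, m=200_000):
    """Riemann-sum approximation of the integral of f against the semicircle density."""
    x = np.linspace(-2.0, 2.0, m)
    rho = np.sqrt(np.maximum(4.0 - x ** 2, 0.0)) / (2.0 * np.pi)
    return float(np.sum(f(x) * rho) * (x[1] - x[0]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: x ** 2                      # example test function; its semicircle integral is 1
    for n in (200, 800):
        print(n, wigner_spectral_statistic(n, f, rng), semicircle_integral(f))
```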

This result can be viewed as an analog of the law of large numbers for independent random variables. As for the fluctuation of the spectral statistics, a remarkable result due to Bai et al. [1] states the following.

Lemma 1. Assume that the entries of the Wigner matrix $W_n$ have mean zero and finite fourth moments. Let $U$ be an open interval including the interval $[-2,2]$, and let $C^4(U)$ be the space of four times continuously differentiable functions on $U$. For $f \in C^4(U)$, denote
$$G_n(f) = \sum_{i=1}^{n} f(\lambda_i) - n\int_{-2}^{2} f(x)\,\rho_{sc}(x)\,dx.$$
Then the empirical process $\{G_n(f): f \in C^4(U)\}$ converges weakly in the sense of finite-dimensional distributions to a Gaussian process with mean zero and the covariance function given by
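
To see the scale of the fluctuation described in Lemma 1, one can simulate the centered statistic directly; the sketch below is only an illustration, assuming Gaussian entries (which satisfy the moment conditions) and an arbitrary smooth test function. The centered sums stay of order one as $n$ grows and their empirical distribution is approximately Gaussian, in line with the lemma.

```python
import numpy as np

def centered_linear_statistic(n, f, rng):
    """G_n(f) = sum_i f(lambda_i) - n * (integral of f against the semicircle law)."""
    re = rng.normal(scale=np.sqrt(0.5), size=(n, n))
    im = rng.normal(scale=np.sqrt(0.5), size=(n, n))
    w = np.triu(re + 1j * im, k=1)
    w = w + w.conj().T + np.diag(rng.normal(size=n))
    lam = np.linalg.eigvalsh(w / np.sqrt(n))
    x = np.linspace(-2.0, 2.0, 100_001)
    rho = np.sqrt(np.maximum(4.0 - x ** 2, 0.0)) / (2.0 * np.pi)
    integral = np.sum(f(x) * rho) * (x[1] - x[0])
    return np.sum(f(lam)) - n * integral

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    f = lambda x: x ** 3 + x ** 2            # smooth test function
    for n in (100, 200, 400):
        vals = [centered_linear_statistic(n, f, rng) for _ in range(100)]
        # The variance does not grow with n: the fluctuation of the linear
        # eigenvalue statistic is O(1), unlike the classical CLT scale sqrt(n).
        print(n, np.mean(vals), np.var(vals))
```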

In addition, Guionnet and Zeitouni [2] gave a concentration inequality for the empirical spectral measure of $n^{-1/2}W_n$ near the semicircle law.

Lemma 2. Assume that the law of the entries satisfies the logarithmic Sobolev inequality (LSI) with constant $c$. Let $f$ be a Lipschitz function and denote
$$Z_n(f) = \frac{1}{n}\sum_{i=1}^{n} f(\lambda_i) - \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n} f(\lambda_i)\right].$$
Then for any $\delta > 0$, there exists $C > 0$, depending only on $\delta$, $c$, and the Lipschitz constant of $f$, such that
$$P\big(|Z_n(f)| \ge \delta\big) \le \exp\left(-C n^2\right).$$

Based on the above results on the limiting spectral properties of random matrices, we will establish another type of asymptotic spectral property, namely, the precise asymptotics of the spectral statistics of random matrices. In particular, we will consider the precise second-order moment convergence rates of a type of series constructed from the spectral statistics of random matrices. Our first result reads as follows.

Theorem 3. Suppose that the law of the entries satisfies the LSI with constant $c$ and that the entries have mean zero and finite fourth moments. Let $G_n(f)$ be defined as in (4), where $f$ has a bounded first-order derivative. Then for any , , one has where $N$ is a standard Gaussian random variable and

Now we turn to the sample covariance matrix, which is an important statistic in multivariate statistical analysis. Let $X = (x_{jk})$ be a $p \times n$ random matrix whose entries are i.i.d. complex-valued random variables with mean 0 and variance 1, and whose real and imaginary parts are independent random variables with mean $0$ and variance $1/2$. Then $S_n = \frac{1}{n}XX^{*}$ can be viewed as the sample covariance matrix of $n$ samples of $p$-dimensional random vectors. Let $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_p$ be the eigenvalues of $S_n$, and define the empirical spectral distribution $F_p(x) = \frac{1}{p}\#\{j \le p: \lambda_j \le x\}$. When $p/n \to y \in (0,1]$ as $n \to \infty$, the famous Marchenko-Pastur theorem reveals that $F_p$ almost surely converges to the Marchenko-Pastur law $F_y$ with the density
$$\rho_y(x) = \frac{1}{2\pi x y}\sqrt{(b-x)(x-a)}, \qquad a \le x \le b,$$
where $a = (1-\sqrt{y})^2$, $b = (1+\sqrt{y})^2$.
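
As with the Wigner case, the Marchenko-Pastur limit is easy to check numerically; the sketch below is an illustration only, with Gaussian entries and an arbitrary aspect ratio $p/n = 1/4$, comparing a histogram of the eigenvalues of $S_n$ with the density above.

```python
import numpy as np

def mp_density(x, y):
    """Marchenko-Pastur density with ratio parameter y in (0, 1]."""
    a, b = (1.0 - np.sqrt(y)) ** 2, (1.0 + np.sqrt(y)) ** 2
    out = np.zeros_like(x)
    inside = (x > a) & (x < b)
    out[inside] = np.sqrt((b - x[inside]) * (x[inside] - a)) / (2.0 * np.pi * y * x[inside])
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    p, n = 400, 1600                                 # aspect ratio y = p/n = 0.25
    # Complex entries with independent real/imaginary parts, E x_jk = 0, E|x_jk|^2 = 1.
    X = (rng.normal(size=(p, n)) + 1j * rng.normal(size=(p, n))) / np.sqrt(2.0)
    S = X @ X.conj().T / n                           # sample covariance matrix
    lam = np.linalg.eigvalsh(S)
    hist, edges = np.histogram(lam, bins=30, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    for m, h, d in zip(mids, hist, mp_density(mids, p / n)):
        print(f"x={m:6.3f}  empirical={h:6.3f}  marchenko-pastur={d:6.3f}")
```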

Denote
$$G_n(f) = \sum_{j=1}^{p} f(\lambda_j) - p\int f(x)\,dF_{y_n}(x),$$
where $F_{y_n}$ is the Marchenko-Pastur law with the parameter $y_n = p/n$. Similar to the result on Wigner matrices, we can give the following result on complex sample covariance matrices.

Theorem 4. Assume that the law of the entries satisfies the LSI with constant $c$ and that $\mathbb{E}|x_{jk}|^4 < \infty$ for all $j$, $k$. Let $U$ be an open interval including $[a, b]$, and let $f$ have a bounded first-order derivative on $U$. Then for any , , one has where , and and $s(z)$ is the Stieltjes transform of the Marchenko-Pastur law, that is, $s(z) = \int (x - z)^{-1}\,dF_y(x)$.
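
For concreteness, under the convention $s(z) = \int (x-z)^{-1}\,dF_y(x)$ stated above, the Stieltjes transform of the Marchenko-Pastur law has a well-known closed form, recorded here for the reader's convenience (the branch of the square root is chosen so that $s(z) \to 0$ as $z \to \infty$); this is standard material and the covariance formula in [13] may involve a related transform instead.

```latex
\[
  s(z) \;=\; \int_a^b \frac{\rho_y(x)}{x-z}\,dx
       \;=\; \frac{1-y-z+\sqrt{(z-1-y)^2-4y}}{2yz},
  \qquad z \in \mathbb{C}\setminus[a,b],
\]
\[
  \text{which solves the quadratic equation}\qquad
  y\,z\,s(z)^{2} + (z+y-1)\,s(z) + 1 \;=\; 0 .
\]
```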

We will give the proofs of the theorems in Section 2. Below are a few words about the motivation of this paper. In a sense, our results are similar to the precise asymptotics of independent random variables in the context of complete convergence and moment convergence. Let $X, X_1, X_2, \ldots$ be i.i.d. random variables and $S_n = \sum_{i=1}^{n} X_i$; there are a number of results on the convergence of series of the type $\sum_{n \ge 1} \varphi(n) P(|S_n| \ge \epsilon\, g(n))$, where $\varphi$ and $g$ are positive functions defined on $[0, \infty)$ and $\epsilon > 0$. In fact, the sum (11) tends to infinity as $\epsilon \downarrow 0$; one of the interesting problems is to examine the precise rate at which this occurs, which amounts to finding a suitable normalizing function of $\epsilon$ such that the sum (11), multiplied by this function, has a nontrivial limit; results of this kind are frequently called "precise asymptotics." The first result in this direction is due to Heyde [3], who proved that $\lim_{\epsilon \downarrow 0} \epsilon^2 \sum_{n \ge 1} P(|S_n| \ge \epsilon n) = \mathbb{E}X^2$ under the assumptions that $\mathbb{E}X = 0$ and $\mathbb{E}X^2 < \infty$. Some analogous results in more general cases can be found in Gut and Spătaru [4, 5] and Gut and Steinebach [6]. Moreover, Chow [7] studied the convergence properties of the series when , , . There is a remarkable result obtained by Liu and Lin [8], who considered the precise asymptotics of the second-order moment convergence, which states that when , , and . Chen and Zhang [9] obtained similar results on the second-order moment convergence of empirical processes. Furthermore, there are also some precise asymptotic results in other contexts, such as self-normalized sums, martingale differences, random fields, and renewal processes. It should be mentioned that the corresponding results for random matrices and random growth models have been studied by Su [10], who presented the precise asymptotics of the largest eigenvalues of Gaussian unitary ensembles, Laguerre unitary ensembles, and the longest increasing subsequence of a random permutation. In this paper, we will study the precise asymptotics of the spectral statistics of Wigner matrices and sample covariance matrices, which is also an interesting topic in random matrix theory.
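
Heyde's limit above is easy to check numerically when the increments are Gaussian, because the tail probabilities are then available in closed form; the sketch below illustrates the classical result only (not the matrix setting), evaluating $\epsilon^2 \sum_{n\ge 1} P(|S_n| \ge \epsilon n)$ for a few values of $\epsilon$ and showing the values approaching $\mathbb{E}X^2 = 1$.

```python
import numpy as np
from scipy.stats import norm

def heyde_sum(eps, sigma=1.0, n_max=500_000):
    """eps^2 * sum_{n>=1} P(|S_n| >= eps * n) for i.i.d. N(0, sigma^2) summands.
    For Gaussian increments the tail is exactly 2 * Phi(-eps * sqrt(n) / sigma)."""
    n = np.arange(1, n_max + 1, dtype=float)
    tails = 2.0 * norm.cdf(-eps * np.sqrt(n) / sigma)
    return eps ** 2 * tails.sum()

if __name__ == "__main__":
    # Heyde's theorem predicts the values tend to E[X^2] = sigma^2 = 1 as eps decreases.
    for eps in (0.5, 0.2, 0.1, 0.05):
        print(eps, heyde_sum(eps))
```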

In the rest of the paper, we will always write , where and is an arbitrary positive real number. $N$ stands for a random variable following the standard Gaussian distribution. We also denote by $C$ an absolute positive constant whose value can be different from one place to another.

2. The Proofs

Before the main proof of Theorem 3, we will first give four propositions.

Proposition 5. Under the assumptions of Theorem 3, one has

Proof. We calculate that

Proposition 6. Under the assumptions of Theorem 3, one has

Proof. We can write
Under the assumptions of Theorem 3, the law of each entry also satisfies the LSI. By Lemma 5 of Vershynin [11], for each fixed positive integer , . According to Lemma 1, we can see that Thus, as $n \to \infty$, Using Toeplitz's lemma, we can deduce that
For the term , note that if $f$ has a bounded first-order derivative, then $f$ is a Lipschitz function.
By Theorem 8.2 of Bai and Silverstein [12], there exists a real number , such that By Lemma 2, for any , there exists such that
For each fixed , the assumption reveals that, when with a sufficiently large number , we have . Thus there exists such that
For the term , by the fact that , the well-known tail probability estimate states that for any , Hence, for any fixed , we can get By letting and then , a combination of (22) and (24) gives that . Then, combined with the relations (16) and (19), the proof is complete.
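
For completeness, the well-known tail estimate invoked here is the standard Mills-ratio bound for the Gaussian tail; a one-line derivation (the exact form used in the original display may differ) is:

```latex
\[
  P(N \ge x) \;=\; \int_x^{\infty}\frac{1}{\sqrt{2\pi}}\,e^{-t^2/2}\,dt
  \;\le\; \int_x^{\infty}\frac{t}{x}\cdot\frac{1}{\sqrt{2\pi}}\,e^{-t^2/2}\,dt
  \;=\; \frac{1}{x\sqrt{2\pi}}\,e^{-x^2/2},
  \qquad x>0,
\]
\[
  \text{and hence}\qquad
  P(|N|\ge x)\;\le\;\frac{2}{x\sqrt{2\pi}}\,e^{-x^2/2}
  \;\le\; e^{-x^2/2}\quad\text{for } x\ge\sqrt{2/\pi}.
\]
```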

Proposition 7. Under the assumptions of Theorem 3, one has

Proof. By an argument similar to that in the proof of Proposition 5, we can show that

Proposition 8. Under the assumptions of Theorem 3, one has

Proof. Similar to the argument in the proof of Proposition 6, we can write
For the term , we have As , for , we have . Thus
Noticing that when is fixed, for sufficiently large , the relation (21) tells us that there exists such that
By the same argument as for , using the relation (23), we can easily prove that
Hence, we have proved that uniformly for . By Toeplitz's lemma again, we can further deduce that
For the term , we can write
By using the relation (21) again, for any fixed , when , there exists such that Hence, by letting first and then taking , we get .
Following the proof of and using the relation (23), we can easily prove that vanishes as , which yields that
Combining (33) and (36) completes the proof.

Proof of Theorem 3. According to the fact that for any random variable and , we can see that
By Propositions 5–8, we can get Theorem 3 easily.
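
The elementary fact behind this reduction is, presumably, the usual truncated second-moment identity; for a generic random variable $\xi$ and level $a > 0$, Fubini's theorem gives (this is standard and stated here only for the reader's convenience):

```latex
\[
  \mathbb{E}\bigl[\xi^{2}\,\mathbf 1\{|\xi|\ge a\}\bigr]
  \;=\; \mathbb{E}\Bigl[\mathbf 1\{|\xi|\ge a\}\int_{0}^{|\xi|}2x\,dx\Bigr]
  \;=\; \int_{0}^{\infty}2x\,P\bigl(|\xi|\ge a,\ |\xi|>x\bigr)\,dx
  \;=\; a^{2}P\bigl(|\xi|\ge a\bigr)+\int_{a}^{\infty}2x\,P\bigl(|\xi|>x\bigr)\,dx .
\]
```

In this form a second-order moment is expressed through tail probabilities, which is the type of quantity controlled in Propositions 5–8.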

Proof of Theorem 4. The proof is essentially the same as that of Theorem 3; it depends mainly on the central limit theorem for the spectral statistics and the concentration inequality for the empirical spectral measure. We will only list some key tools used in the proof; the details are omitted here.

Lemma 9 (see [13]). Consider the sample covariance matrix $S_n$ as above, where $\mathbb{E}x_{jk} = 0$, $\mathbb{E}|x_{jk}|^2 = 1$, and $\mathbb{E}|x_{jk}|^4 < \infty$ for all $j$, $k$. Let , and denote by $U$ an open interval including $[a, b]$. Then the empirical process converges weakly in the sense of finite-dimensional distributions to a Gaussian process with mean zero and the covariance function given by where is defined in Theorem 4.

Lemma 10. Assume that $f$ is a Lipschitz function and $G_n(f)$ is defined by (8). Denote . Then, under the assumptions of Theorem 4, for any $\delta > 0$, there exists $C > 0$ such that

Proof. If we denote then, by Corollary 1.8(b) in Guionnet and Zeitouni [2], for any , there exists such that
As we can write for each , we have

As in the proof of Proposition 6, in order to use Lemma 10 we need to estimate the order of , and the following remark is needed.

Remark 11. For the estimation of , by the result of Bai et al. [14], we can see that where is the empirical spectral distribution of $S_n$. As a direct consequence, when $f$ is differentiable and $f'$ is bounded, there exists a real number such that .

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant nos. 11326173 and 11301146) and the Foundation of Henan Educational Committee in China (Grant no. 13A110087).