Abstract
Wu et al. (2009) studied the asymptotic approximation of inverse moments for nonnegative independent random variables. Shen et al. (2011) extended the result of Wu et al. (2009) to the case of $\rho$-mixing random variables. In this paper, we further study the asymptotic approximation of inverse moments for nonnegative $\rho$-mixing random variables, which improves the corresponding results of Wu et al. (2009), Wang et al. (2010), and Shen et al. (2011) in the identically distributed case.
1. Introduction
Firstly, we will recall the definition of $\rho$-mixing random variables.
Let $\{X_n, n \ge 1\}$ be a sequence of random variables defined on a fixed probability space $(\Omega, \mathcal{F}, P)$. Let $n$ and $m$ be positive integers. Write $\mathcal{F}_n^m = \sigma(X_i, n \le i \le m)$. Given $\sigma$-algebras $\mathcal{S}, \mathcal{T}$ in $\mathcal{F}$, let
$$\rho(\mathcal{S}, \mathcal{T}) = \sup_{X \in L_2(\mathcal{S}),\, Y \in L_2(\mathcal{T})} \frac{|EXY - EXEY|}{\sqrt{\operatorname{Var}X \cdot \operatorname{Var}Y}}. \tag{1.1}$$
Define the $\rho$-mixing coefficients by
$$\rho(n) = \sup\big\{\rho(\mathcal{F}_1^k, \mathcal{F}_{k+n}^\infty) : k \ge 1\big\}. \tag{1.2}$$
Definition 1.1. A sequence $\{X_n, n \ge 1\}$ of random variables is said to be $\rho$-mixing if $\rho(n) \to 0$ as $n \to \infty$.
The concept of a $\rho$-mixing sequence was introduced by Kolmogorov and Rozanov [1]. It is easily seen that the class of $\rho$-mixing sequences contains independent sequences as a special case.
The main purpose of the paper is to study the asymptotic approximation of inverse moments for nonnegative $\rho$-mixing random variables with identical distribution.
Let $\{Z_n, n \ge 1\}$ be a sequence of independent nonnegative random variables with finite second moments. Denote
$$X_n = \frac{1}{B_n}\sum_{i=1}^{n} Z_i, \qquad B_n^2 = \sum_{i=1}^{n}\operatorname{Var}(Z_i). \tag{1.3}$$
It is interesting to show that under suitable conditions the following equivalence relation holds, namely,
$$E(a + X_n)^{-\alpha} \sim (a + EX_n)^{-\alpha}, \tag{1.4}$$
where $a > 0$ and $\alpha > 0$ are arbitrary real numbers.
Here and below, for two positive sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n \sim b_n$ if $a_n/b_n \to 1$ as $n \to \infty$; $C$ denotes a positive constant which may differ from one place to another.
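To illustrate relation (1.4) in the simplest independent setting, the following Monte Carlo sketch (our illustration, not part of the original argument) takes i.i.d. $Z_i \sim \mathrm{Exp}(1)$, so that $B_n^2 = \sum_{i=1}^n \operatorname{Var}(Z_i) = n$ and $\sum_{i=1}^n Z_i$ follows a Gamma$(n, 1)$ distribution; the values of $n$, $a$, and $\alpha$ are arbitrary choices.

```python
import numpy as np

# Monte Carlo illustration of (1.4): E(a + X_n)^(-alpha) ~ (a + E X_n)^(-alpha).
# Here Z_i are i.i.d. Exp(1), B_n = sqrt(n) (so B_n^2 = sum of variances = n),
# X_n = B_n^(-1) * (Z_1 + ... + Z_n), and E X_n = sqrt(n).
rng = np.random.default_rng(0)
n, reps = 400, 200_000
a, alpha = 1.0, 2.0

S = rng.gamma(shape=n, scale=1.0, size=reps)  # S = Z_1 + ... + Z_n ~ Gamma(n, 1)
X = S / np.sqrt(n)                            # X_n with B_n = sqrt(n)
lhs = np.mean((a + X) ** (-alpha))            # estimate of E(a + X_n)^(-alpha)
rhs = (a + np.sqrt(n)) ** (-alpha)            # (a + E X_n)^(-alpha)
print(lhs / rhs)                              # ratio tends to 1 as n grows
```

By Jensen's inequality the left-hand side dominates the right-hand side, so the ratio sits slightly above 1 and approaches 1 as $n$ grows, consistent with (1.4).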
Inverse moments arise in many practical applications. For example, they appear in Stein estimation and poststratification (see [2, 3]) and in evaluating risks of estimators and powers of tests (see [4, 5]). In addition, they also appear in reliability theory (see [6]) and life testing (see [7]), insurance and financial mathematics (see [8]), complex systems (see [9]), and so on.
Under a certain asymptotic-normality condition, relation (1.4) was established in Theorem 2.1 of Garcia and Palacios [10]. But, unfortunately, that theorem is not true under the suggested assumptions, as pointed out by Kaluszka and Okolewski [11]. The latter authors established (1.4) by modifying the assumptions as follows: (i) $0 < \alpha < 3$ ($0 < \alpha < 4$ in the i.i.d. case); (ii) $EX_n \to \infty$; (iii) ($L_3$ condition) ($B_n^{-3}\sum_{i=1}^{n}E|Z_i - EZ_i|^3 \to 0$ as $n \to \infty$).
Hu et al. [12] considered weaker conditions, in which $\{Z_n, n \ge 1\}$ satisfies an $L_c$ condition with $1 < c \le 2$ and $EX_n \to \infty$. Wu et al. [13] applied Bernstein's inequality and the truncation method to improve these conclusions substantially under a weaker moment condition. Wang et al. [14] extended the result for independent random variables to the case of NOD random variables. Shi et al. [15] also obtained (1.4). Sung [16] studied the inverse moments for a class of nonnegative random variables.
Recently, Shen et al. [17] extended the result of Wu et al. [13] to the case of $\rho$-mixing random variables and obtained the following result.
Theorem A. Let $\{Z_n, n \ge 1\}$ be a nonnegative $\rho$-mixing sequence with $EZ_n^2 < \infty$. Suppose that (i) $EZ_n > 0$ for all $n \ge 1$; (ii) $EX_n \to \infty$ as $n \to \infty$, where $X_n$ is defined by (1.3); (iii) for some $\delta > 0$, condition (1.5) holds; (iv) for some $\eta > 0$ and any positive constants $c_1, c_2$, condition (1.6) holds. Then for any $a > 0$ and $\alpha > 0$, (1.4) holds.
In this paper, we will further study the asymptotic approximation of inverse moments for nonnegative $\rho$-mixing random variables with identical distribution. We will show that (1.4) holds under very mild conditions and that condition (iv) of Theorem A can be deleted. In place of the Bernstein type inequality used by Shen et al. [17], we make use of a Rosenthal type inequality for $\rho$-mixing random variables. Our main results are as follows.
Theorem 1.2. Let $\{Z_n, n \ge 1\}$ be a sequence of nonnegative $\rho$-mixing random variables with identical distribution and let $\{B_n, n \ge 1\}$ be a sequence of positive constants. Let $a > 0$ and $\alpha > 0$ be real numbers. Denote $X_n = B_n^{-1}\sum_{i=1}^{n} Z_i$. Assume that $\sum_{n \ge 1}\rho^{2/q}(2^n) < \infty$ for some $q > \max\{2\alpha, \alpha + 1\}$. Suppose that (i) $0 < EZ_n < \infty$ for all $n \ge 1$; (ii) $\mu_n \to \infty$ as $n \to \infty$, where $\mu_n = EX_n$; (iii) for all $\varepsilon > 0$, there exist $\eta > 0$ and $n_0 \ge 1$ such that
$$B_n^{-1}\sum_{i=1}^{n} EZ_i I(Z_i > \eta B_n) \le \varepsilon\mu_n \quad \text{for all } n \ge n_0. \tag{1.7}$$
Then (1.4) holds.
Corollary 1.3. Let $\{Z_n, n \ge 1\}$ be a sequence of nonnegative $\rho$-mixing random variables with identical distribution and $\sum_{n \ge 1}\rho^{2/q}(2^n) < \infty$ for some $q > \max\{2\alpha, \alpha + 1\}$. Let $\{B_n, n \ge 1\}$ be a sequence of positive constants satisfying $B_n \le c\,n^{\beta}$ for some $c > 0$, $0 < \beta < 1$, and $B_n \to \infty$ as $n \to \infty$. Let $a > 0$ and $\alpha > 0$ be real numbers. Denote $X_n = B_n^{-1}\sum_{i=1}^{n} Z_i$. Assume that $0 < EZ_1 < \infty$. Then (1.4) holds.
By Theorem 1.2, we can obtain the following convergence rate of the relative error in relation (1.4).
Theorem 1.4. Assume that the conditions of Theorem 1.2 are satisfied and $EZ_1^2 < \infty$. If $nB_n^{-2}EZ_1^2 \le C_0\,\mu_n$ for all $n$ large enough, where $C_0$ is a positive constant, then (1.8) holds.
Theorem 1.5. Assume that the conditions of Theorem 1.2 are satisfied and $EZ_1^2 < \infty$. Then (1.9) holds.
Taking $B_n \equiv 1$ in Theorem 1.2, we obtain the following asymptotic approximation of inverse moments for the partial sums of nonnegative $\rho$-mixing random variables with identical distribution.
Theorem 1.6. Let $\{Z_n, n \ge 1\}$ be a sequence of nonnegative $\rho$-mixing random variables with identical distribution. Let $a > 0$ and $\alpha > 0$ be real numbers. Denote $S_n = \sum_{i=1}^{n} Z_i$. Assume that $\sum_{n \ge 1}\rho^{2/q}(2^n) < \infty$ for some $q > \max\{2\alpha, \alpha + 1\}$. Suppose that (i) $0 < EZ_n < \infty$; (ii) $ES_n \to \infty$ as $n \to \infty$; (iii) for all $\varepsilon > 0$, there exist $\eta > 0$ and $n_0 \ge 1$ such that $\sum_{i=1}^{n} EZ_i I(Z_i > \eta) \le \varepsilon\, ES_n$ for all $n \ge n_0$. Then $E(a + S_n)^{-\alpha} \sim (a + ES_n)^{-\alpha}$.
Remark 1.7. Theorem 1.2 in this paper improves the corresponding results of Wu et al. [13], Wang et al. [14], and Shen et al. [17]. Firstly, Theorem 1.2 in this paper is based on the condition $0 < EZ_n < \infty$ for all $n \ge 1$, which is weaker than the condition $EZ_n^2 < \infty$ for all $n \ge 1$ in the above cited references. Secondly, $\{B_n, n \ge 1\}$ is an arbitrary sequence of positive constants in Theorem 1.2, while $B_n^2 = \sum_{i=1}^{n}\operatorname{Var}(Z_i)$ in the above cited references. Thirdly, condition (iv) of Theorem A is not needed in Theorem 1.2. Finally, (1.7) is weaker than (1.5) in the identically distributed case: condition (1.5) implies that for all $\varepsilon > 0$ there exists a positive integer $n_0$ such that the required bound holds for all $n \ge n_0$; that is, (1.7) holds.
2. Proof of the Main Results
In order to prove the main results of the paper, we need the following important moment inequality for $\rho$-mixing random variables.
Lemma 2.1 (cf. Shao [18, Corollary 1.1]). Let $q \ge 2$ and let $\{X_n, n \ge 1\}$ be a sequence of $\rho$-mixing random variables. Assume that $EX_n = 0$, $E|X_n|^q < \infty$, and
$$\sum_{n \ge 1} \rho^{2/q}(2^n) < \infty. \tag{2.1}$$
Then there exists a positive constant $K = K(q, \rho(\cdot))$ depending only on $q$ and $\rho(\cdot)$ such that for any $k \ge 0$ and $n \ge 1$,
$$E\max_{k < m \le k+n}\bigg|\sum_{i=k+1}^{m} X_i\bigg|^{q} \le K\bigg(n^{q/2}\max_{k < i \le k+n}\big(EX_i^2\big)^{q/2} + n\max_{k < i \le k+n}E|X_i|^{q}\bigg), \tag{2.2}$$
where $\rho(\cdot)$ denotes the sequence of mixing coefficients defined by (1.2).
Remark 2.2. We point out that if $\{X_n, n \ge 1\}$ is a sequence of $\rho$-mixing random variables with identical distribution and the conditions of Lemma 2.1 hold, then we have
$$E\max_{1 \le m \le n}\bigg|\sum_{i=1}^{m} X_i\bigg|^{q} \le K\Big(n^{q/2}\big(EX_1^2\big)^{q/2} + nE|X_1|^{q}\Big). \tag{2.3}$$
The inequality above is the Rosenthal type inequality for identically distributed $\rho$-mixing random variables.
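As a quick numerical sanity check of the inequality in Remark 2.2 (our illustration, using independent variables, for which $\rho(n) \equiv 0$ and the mixing condition holds trivially), the sketch below estimates $E\max_{1\le m\le n}|\sum_{i=1}^m X_i|^q$ for i.i.d. $N(0,1)$ variables with $q = 4$ and compares it with the bracket $n^{q/2}(EX_1^2)^{q/2} + nE|X_1|^q$; the factor 20 used in the check is an ad hoc stand-in for the unknown constant $K$.

```python
import numpy as np

# Sanity check of E max_{m<=n} |S_m|^q <= K (n^{q/2} (E X_1^2)^{q/2} + n E|X_1|^q)
# for i.i.d. N(0,1) variables (a degenerate rho-mixing sequence with rho(n) = 0).
rng = np.random.default_rng(1)
q, reps = 4, 20_000

for n in (50, 200):
    X = rng.standard_normal(size=(reps, n))
    S = np.cumsum(X, axis=1)                       # partial sums S_1, ..., S_n
    lhs = np.mean(np.max(np.abs(S), axis=1) ** q)  # E max_{1<=m<=n} |S_m|^q
    bracket = n ** (q / 2) * 1.0 + n * 3.0         # E X_1^2 = 1, E|X_1|^4 = 3
    print(n, lhs / bracket)                        # ratio stays bounded as n grows
```

The ratio remains bounded as $n$ grows, consistent with the existence of a finite constant $K$ in (2.3); the $n^{q/2}$ term dominates here, matching the central limit scaling of the partial sums.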
Proof of Theorem 1.2. It is easily seen that $f(x) = (a + x)^{-\alpha}$ is a convex function of $x$ on $[0, \infty)$; therefore, we have by Jensen's inequality that
$$E(a + X_n)^{-\alpha} \ge (a + EX_n)^{-\alpha}, \tag{2.4}$$
which implies that
$$\liminf_{n \to \infty}\,(a + \mu_n)^{\alpha} E(a + X_n)^{-\alpha} \ge 1. \tag{2.5}$$
To prove (1.4), it is enough to prove that
$$\limsup_{n \to \infty}\,(a + \mu_n)^{\alpha} E(a + X_n)^{-\alpha} \le 1. \tag{2.6}$$
In order to prove (2.6), we need only to show that for all $0 < \delta < 1$,
$$P(X_n \le \delta\mu_n) = o\big(\mu_n^{-\alpha}\big). \tag{2.7}$$
Indeed, $E(a + X_n)^{-\alpha} \le (a + \delta\mu_n)^{-\alpha} + a^{-\alpha}P(X_n \le \delta\mu_n)$, so (2.7) yields $\limsup_{n\to\infty}(a + \mu_n)^{\alpha}E(a + X_n)^{-\alpha} \le \delta^{-\alpha}$, and (2.6) follows on letting $\delta \uparrow 1$. Fix $0 < \delta < 1$ and choose $0 < \varepsilon < 1 - \delta$. By (iii), we can see that there exist $\eta > 0$ and $n_0 \ge 1$ such that
$$B_n^{-1}\sum_{i=1}^{n} EZ_i I(Z_i > \eta B_n) \le \varepsilon\mu_n \quad \text{for all } n \ge n_0. \tag{2.8}$$
Let
$$Y_i = Z_i I(Z_i \le \eta B_n), \qquad X_n' = B_n^{-1}\sum_{i=1}^{n} Y_i. \tag{2.9}$$
Since $Z_i \ge Y_i \ge 0$, we have
$$P(X_n \le \delta\mu_n) \le P(X_n' \le \delta\mu_n). \tag{2.10}$$
By (2.8), we have for $n \ge n_0$ that
$$EX_n' = \mu_n - B_n^{-1}\sum_{i=1}^{n} EZ_i I(Z_i > \eta B_n) \ge (1 - \varepsilon)\mu_n, \tag{2.11}$$
and hence
$$P(X_n' \le \delta\mu_n) \le P\big(|X_n' - EX_n'| \ge (1 - \varepsilon - \delta)\mu_n\big). \tag{2.12}$$
Therefore, by (2.12), Markov's inequality, Remark 2.2, and the $C_r$ inequality, for the $q$ in the assumption and all $n$ sufficiently large,
$$P\big(|X_n' - EX_n'| \ge (1 - \varepsilon - \delta)\mu_n\big) \le C\mu_n^{-q}B_n^{-q}\Big(n^{q/2}\big(EY_1^2\big)^{q/2} + nEY_1^{q}\Big) \le C\mu_n^{-q}\Big((\eta\mu_n)^{q/2} + \eta^{q-1}\mu_n\Big), \tag{2.13}$$
where we used $EY_1^2 \le \eta B_n EZ_1$ and $EY_1^q \le (\eta B_n)^{q-1}EZ_1$. Taking $q > \max\{2\alpha, \alpha + 1\}$ as in the assumption, we have by (2.10), (2.11), and (2.13) that
$$P(X_n \le \delta\mu_n) \le C\big(\mu_n^{-q/2} + \mu_n^{1-q}\big) = o\big(\mu_n^{-\alpha}\big), \tag{2.14}$$
which implies (2.7). This completes the proof of the theorem.
Proof of Corollary 1.3. The condition $B_n \le c\,n^{\beta}$ for some $c > 0$ and $0 < \beta < 1$ implies that
$$\mu_n = EX_n = nB_n^{-1}EZ_1 \ge c^{-1}EZ_1\, n^{1-\beta}; \tag{2.15}$$
thus, $\mu_n \to \infty$ as $n \to \infty$. The fact $EZ_1 < \infty$ and $B_n \to \infty$ as $n \to \infty$ yield, by the dominated convergence theorem, that $EZ_1 I(Z_1 > \eta B_n) \to 0$ as $n \to \infty$ for every $\eta > 0$, which implies that for all $\varepsilon > 0$, there exists $n_0 \ge 1$ such that
$$B_n^{-1}\sum_{i=1}^{n} EZ_i I(Z_i > \eta B_n) = \mu_n \cdot \frac{EZ_1 I(Z_1 > \eta B_n)}{EZ_1} \le \varepsilon\mu_n \quad \text{for all } n \ge n_0. \tag{2.16}$$
That is to say, condition (iii) of Theorem 1.2 holds. Therefore, the desired result follows from Theorem 1.2 immediately.
Proof of Theorem 1.4. Firstly, we will examine $E(X_n - \mu_n)^2$. By Remark 2.2 and the condition $nB_n^{-2}EZ_1^2 \le C_0\,\mu_n$ for all $n$ large enough, we can get that
$$E(X_n - \mu_n)^2 \le CnB_n^{-2}EZ_1^2 \le C\mu_n \tag{2.17}$$
for all $n$ large enough.
Denote $f(x) = (a + x)^{-\alpha}$ for $x \ge 0$. By Taylor's expansion, we can see that
$$f(X_n) = f(\mu_n) + f'(\mu_n)(X_n - \mu_n) + \tfrac{1}{2}f''(\xi_n)(X_n - \mu_n)^2, \tag{2.18}$$
where $\xi_n$ is between $X_n$ and $\mu_n$. It is easily seen that $|f''(x)| = \alpha(\alpha + 1)(a + x)^{-\alpha - 2}$ is decreasing in $x$ on $[0, \infty)$. Therefore, by (2.18), the Cauchy–Schwarz inequality, (2.17), and (1.4), we obtain (1.8). The proof is complete.
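The role of the variance bound (2.17) can be seen from the following heuristic second-order expansion (a sketch for intuition only, assuming the expansion may be taken in expectation): for $f(x) = (a + x)^{-\alpha}$, the first-order term vanishes since $E(X_n - \mu_n) = 0$, leaving

```latex
E f(X_n) \approx f(\mu_n) + \tfrac{1}{2} f''(\mu_n)\, E(X_n - \mu_n)^2
        = (a + \mu_n)^{-\alpha}\left[\, 1 + \frac{\alpha(\alpha+1)}{2} \cdot
          \frac{E(X_n - \mu_n)^2}{(a + \mu_n)^{2}} \,\right],
```

so that, when $E(X_n - \mu_n)^2 \le C\mu_n$ as in (2.17), the relative error between the two sides of (1.4) is of order $\mu_n/(a + \mu_n)^2 = O(\mu_n^{-1})$.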
Proof of Theorem 1.5. The proof is similar to that of Theorem 1.4; in place of the second-order expansion (2.18), we make use of a first-order expansion. The proof is complete.
Acknowledgments
The authors are most grateful to the Editor Tetsuji Tokihiro and an anonymous referee for their careful reading of the paper and valuable suggestions, which helped to improve an earlier version of this paper. This work is supported by the Academic Innovation Team of Anhui University (KJTD001B).