Abstract
By applying a moment inequality for negatively dependent random variables, the complete convergence for weighted sums of sequences of negatively dependent random variables is discussed. As a result, complete convergence theorems for sequences of negatively dependent random variables are extended.
1. Introduction and Lemmas
Definition 1.1. Random variables $X$ and $Y$ are said to be negatively dependent (ND) if
$P(X\le x,\,Y\le y)\le P(X\le x)\,P(Y\le y)$ (1.1)
for all $x,y\in\mathbb{R}$. A collection of random variables is said to be pairwise negatively dependent (PND) if every pair of random variables in the collection satisfies (1.1).
It is important to note that (1.1) implies
$P(X>x,\,Y>y)\le P(X>x)\,P(Y>y)$ (1.2)
for all $x,y\in\mathbb{R}$. Moreover, (1.2) also implies (1.1); hence, (1.1) and (1.2) are equivalent for a pair of random variables. However, (1.1) and (1.2) are not equivalent for a collection of three or more random variables. Consequently, the following definition is needed to define sequences of negatively dependent random variables.
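For a single pair of random variables, the lower-tail condition (1.1), $P(X\le x,\,Y\le y)\le P(X\le x)P(Y\le y)$, and the upper-tail condition (1.2), $P(X>x,\,Y>y)\le P(X>x)P(Y>y)$, can be checked mechanically on any finite discrete distribution. A minimal sketch in Python, using a hypothetical joint distribution on $\{0,1\}^2$ chosen only for illustration:

```python
# Hypothetical joint pmf of (X, Y) on {0,1}^2; values chosen for illustration only.
pmf = {(0, 0): 0.3, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 0.1}

def prob(event):
    """Probability of the set of outcomes satisfying `event`."""
    return sum(p for (x, y), p in pmf.items() if event(x, y))

def lower_tail_nd(x, y):
    # (1.1): P(X <= x, Y <= y) <= P(X <= x) P(Y <= y)
    return prob(lambda a, b: a <= x and b <= y) <= \
        prob(lambda a, b: a <= x) * prob(lambda a, b: b <= y) + 1e-12

def upper_tail_nd(x, y):
    # (1.2): P(X > x, Y > y) <= P(X > x) P(Y > y)
    return prob(lambda a, b: a > x and b > y) <= \
        prob(lambda a, b: a > x) * prob(lambda a, b: b > y) + 1e-12

# For binary variables it suffices to test thresholds separating the support {0, 1}.
thresholds = [-0.5, 0.5, 1.5]
nd_11 = all(lower_tail_nd(x, y) for x in thresholds for y in thresholds)
nd_12 = all(upper_tail_nd(x, y) for x in thresholds for y in thresholds)
print(nd_11, nd_12)
```

For a pair of random variables, the two checks succeed or fail together, in line with the equivalence of (1.1) and (1.2) noted above.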
Definition 1.2. The random variables $X_1,\dots,X_n$ are said to be negatively dependent (ND) if, for all real $x_1,\dots,x_n$,
$P\bigl(\bigcap_{j=1}^{n}\{X_j\le x_j\}\bigr)\le\prod_{j=1}^{n}P(X_j\le x_j)$ and $P\bigl(\bigcap_{j=1}^{n}\{X_j>x_j\}\bigr)\le\prod_{j=1}^{n}P(X_j>x_j)$.
An infinite sequence of random variables $\{X_n;\,n\ge1\}$ is said to be ND if every finite subset $X_1,\dots,X_n$ is ND.
Definition 1.3. Random variables $X_1,X_2,\dots,X_n$, $n\ge2$, are said to be negatively associated (NA) if, for every pair of disjoint subsets $A_1$ and $A_2$ of $\{1,2,\dots,n\}$,
$\operatorname{Cov}\bigl(f_1(X_i,\,i\in A_1),\,f_2(X_j,\,j\in A_2)\bigr)\le0$, (1.3)
where $f_1$ and $f_2$ are increasing in every variable (or decreasing in every variable), provided this covariance exists. A sequence of random variables $\{X_n;\,n\ge1\}$ is said to be NA if every finite subfamily is NA.
The definition of PND was given by Lehmann [1], the concept of ND was introduced by Bozorgnia et al. [2], and the definition of NA was introduced by Joag-Dev and Proschan [3]. These dependence concepts have been very useful in reliability theory and applications.
First, note that by taking $f_1$ and $f_2$ in (1.3) to be suitable indicator functions, it is easy to see that NA implies both inequalities of Definition 1.2. Hence, NA implies ND. But there are many examples which are ND but not NA. We list the following two examples.
Example 1.4. Let be a binary random variable such that for . Let take the values , , , and , each with probability 1/4.
It can be verified that all the ND conditions hold. However,
Hence, , , and are not NA.
The next example, due to Joag-Dev and Proschan [3], possesses ND but does not possess NA.
Example 1.5. Let be a binary random variable such that for . Let and have the same bivariate distributions, and let have joint distribution as shown in Table 1.
It can be verified that all the ND conditions hold. However,
violating NA.
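Since the specific joint distributions of Examples 1.4 and 1.5 are not reproduced in this excerpt, it may help to note that the ND conditions of Definition 1.2 can be verified by brute force for any finite discrete distribution. A sketch, using the multinomial distribution with one trial (a standard ND — indeed NA — vector, not the distribution of the examples above):

```python
from itertools import product

# Joint pmf of (X1, X2, X3) ~ Multinomial(1; 1/3, 1/3, 1/3):
# exactly one coordinate equals 1.
pmf = {(1, 0, 0): 1/3, (0, 1, 0): 1/3, (0, 0, 1): 1/3}

def is_nd(pmf):
    """Check both inequalities of Definition 1.2 on all threshold combinations."""
    thresholds = [-0.5, 0.5, 1.5]  # enough to separate the support points {0, 1}
    for xs in product(thresholds, repeat=3):
        lower_joint = sum(p for v, p in pmf.items()
                          if all(vi <= xi for vi, xi in zip(v, xs)))
        upper_joint = sum(p for v, p in pmf.items()
                          if all(vi > xi for vi, xi in zip(v, xs)))
        lower_prod = upper_prod = 1.0
        for j, xj in enumerate(xs):
            lower_prod *= sum(p for v, p in pmf.items() if v[j] <= xj)
            upper_prod *= sum(p for v, p in pmf.items() if v[j] > xj)
        if lower_joint > lower_prod + 1e-12 or upper_joint > upper_prod + 1e-12:
            return False
    return True

print(is_nd(pmf))  # True: the multinomial vector is ND
```

The same checker applied to a positively dependent pmf, e.g. mass 1/2 on each of $(0,0,0)$ and $(1,1,1)$, returns `False`, since the joint lower tail then exceeds the product of the marginal tails.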
The above examples show that ND does not imply NA; thus ND is strictly weaker than NA. In the papers listed earlier, a number of well-known multivariate distributions are shown to possess the ND properties, such as (a) the multinomial, (b) convolutions of unlike multinomials, (c) the multivariate hypergeometric, (d) the Dirichlet, (e) the Dirichlet compound multinomial, and (f) multinomials having certain covariance matrices. Because of the wide applications of ND random variables, the notion of ND random variables has received more and more attention recently, and a series of useful results have been established (cf. Bozorgnia et al. [2], Amini [4], Fakoor and Azarnoosh [5], Nili Sani et al. [6], Klesov et al. [7], and Wu and Jiang [8]). Hence, extending the limit properties of independent or NA random variables to ND random variables is highly desirable and of considerable significance in both theory and applications. In this paper we study and obtain some probability inequalities and some complete convergence theorems for weighted sums of sequences of negatively dependent random variables.
In the following, let () denote that there exists a constant such that () for sufficiently large , and let mean and . Also, let denote and
Lemma 1.6 (see [2]). Let $X_1,\dots,X_n$ be ND random variables, and let $f_1,\dots,f_n$ be Borel functions, all of which are monotone increasing (or all monotone decreasing). Then $f_1(X_1),\dots,f_n(X_n)$ are still ND random variables.
Lemma 1.7 (see [2]). Let $X_1,\dots,X_n$ be nonnegative ND random variables. Then
$E\prod_{j=1}^{n}X_j\le\prod_{j=1}^{n}EX_j$.
In particular, let $X_1,\dots,X_n$ be ND, and let $t_1,\dots,t_n$ be all nonnegative (or all nonpositive) real numbers. Then
$E\exp\bigl(\sum_{j=1}^{n}t_jX_j\bigr)\le\prod_{j=1}^{n}E\exp(t_jX_j)$.
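The first inequality of Lemma 1.7 ($E\prod X_j\le\prod EX_j$ for nonnegative ND variables) can be checked numerically; a minimal sketch with a hypothetical ND pair on $\{0,1\}^2$, chosen only for illustration:

```python
# Hypothetical joint pmf of a nonnegative ND pair (X, Y); illustration only.
pmf = {(0, 0): 0.3, (0, 1): 0.3, (1, 0): 0.3, (1, 1): 0.1}

e_xy = sum(x * y * p for (x, y), p in pmf.items())  # E[XY] = 0.1
e_x = sum(x * p for (x, y), p in pmf.items())       # E[X]  = 0.4
e_y = sum(y * p for (x, y), p in pmf.items())       # E[Y]  = 0.4

print(e_xy <= e_x * e_y)  # True: E[XY] <= E[X] E[Y], as Lemma 1.7 asserts
```

Here the product moment $E[XY]=0.1$ falls below $E[X]\,E[Y]=0.16$, the direction of the inequality forced by negative dependence (for independent variables it would hold with equality).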
Lemma 1.8. Let $\{X_n;\,n\ge1\}$ be an ND sequence with $EX_n=0$ and $E|X_n|^p<\infty$. Then, for $p\ge2$,
$E\bigl|\sum_{i=1}^{n}X_i\bigr|^{p}\le c_p\Bigl\{\sum_{i=1}^{n}E|X_i|^{p}+\bigl(\sum_{i=1}^{n}EX_i^{2}\bigr)^{p/2}\Bigr\}$, (1.9)
together with the corresponding maximal inequality (1.10) for $E\max_{k\le n}\bigl|\sum_{i=1}^{k}X_i\bigr|^{p}$, where $c_p$ depends only on $p$.
Remark 1.9. If is a sequence of independent random variables, then (1.9) is the classic Rosenthal inequality [9]. Therefore, (1.9) is a generalization of the Rosenthal inequality.
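As a quick sanity check of the shape of the Rosenthal-type bound $E|S_n|^p\le c_p\{\sum E|X_i|^p+(\sum EX_i^2)^{p/2}\}$ in the independent special case, take Rademacher ($\pm1$) summands and $p=4$: then $E|S_n|^4=3n^2-2n$, $\sum E|X_i|^4=n$, and $(\sum EX_i^2)^2=n^2$, so the bound holds with constant 3. A sketch by exact enumeration (independent case only; the constant is illustrative, not the $c_p$ of Lemma 1.8):

```python
from itertools import product

def fourth_moment(n):
    """E|S_n|^4 for S_n a sum of n independent Rademacher variables, by enumeration."""
    return sum(sum(signs) ** 4 for signs in product([-1, 1], repeat=n)) / 2 ** n

for n in range(1, 9):
    lhs = fourth_moment(n)
    rhs = 3 * (n + n * n)            # c = 3, p = 4: sum E|X_i|^4 = n, (sum E X_i^2)^2 = n^2
    assert lhs == 3 * n * n - 2 * n  # known closed form for Rademacher sums
    assert lhs <= rhs
print("Rosenthal-type bound verified for n = 1..8")
```

The closed form follows from expanding $S_n^4$ and using independence: the surviving terms are $n$ pure fourth powers and $3n(n-1)$ paired squares.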
Proof of Lemma 1.8. Let , . It is easy to show that is a negatively dependent sequence by Lemma 1.6. Noting that is a nondecreasing function of on and that , , we have
Here the last inequality follows from , for all .
Noting that is ND, we conclude from the above inequality and Lemma 1.7 that, for any and ,
Letting , we get
Substituting this into (1.12), we further obtain
Putting into the above inequality, we get
Letting take the place of in the above inequality, we can get
Thus
Multiplying (1.17) by , letting , and integrating over , according to
we obtain
where $B(\cdot,\cdot)$ is the Beta function. Letting , we can deduce (1.9) from (1.19). From (1.9), (1.10) can be proved in a way similar to that of Stout [10, Theorem 2.3.1].
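The integration step above rests on two standard facts, stated here for completeness since the precise forms of (1.17)–(1.19) are not reproduced in this excerpt:

```latex
% Moment-tail identity, valid for any random variable Z and p > 0:
E|Z|^p \;=\; p \int_0^\infty x^{p-1}\, P\bigl(|Z| > x\bigr)\, dx .

% Beta integral used to evaluate the resulting integral:
\int_0^\infty \frac{x^{a-1}}{(1+x)^{a+b}}\, dx \;=\; B(a,b)
\;=\; \frac{\Gamma(a)\,\Gamma(b)}{\Gamma(a+b)}, \qquad a, b > 0 .
```

Multiplying a tail bound of the form (1.17) by a power of $x$ and integrating over $(0,\infty)$ thus converts it into a moment bound, with the integral evaluating to a Beta function.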
Lemma 1.10. Let be a sequence of ND random variables. Then there exists a positive constant such that, for any and all ,
Proof. Let and . Without loss of generality, assume that . Note that and are still ND by Lemma 1.6. Using (1.9), we get Combining with the Cauchy-Schwarz inequality, we obtain Thus that is,
2. Main Results and the Proofs
The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [11] as follows. A sequence of random variables converges completely to the constant if , for all . In view of the Borel–Cantelli lemma, this implies almost sure convergence; complete convergence is thus a stronger notion, and it has become one of the most important topics in probability theory. Hsu and Robbins [11] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Baum and Katz [12] proved that if is a sequence of i.i.d. random variables with mean zero, then is equivalent to the condition that , for all . Recent results on complete convergence can be found in Li et al. [13], Liang and Su [14], Wu [15, 16], and Sung [17].
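To make the definition concrete in the simplest i.i.d. case, the series $\sum_n P(|S_n/n|>\varepsilon)$ can be computed exactly for Rademacher ($\pm1$) summands via binomial tails; a sketch illustrating the definition only (the weights and dependence structure of the theorems below are not involved):

```python
from math import comb

def tail_prob(n, eps):
    """P(|S_n| > eps * n) for S_n a sum of n i.i.d. Rademacher(1/2) variables.

    S_n = 2*B - n with B ~ Binomial(n, 1/2), so |S_n| > eps*n iff
    B > n*(1 + eps)/2 or B < n*(1 - eps)/2.
    """
    hi = n * (1 + eps) / 2
    lo = n * (1 - eps) / 2
    return sum(comb(n, k) for k in range(n + 1) if k > hi or k < lo) / 2 ** n

eps = 0.5
total = 0.0
for n in range(1, 201):
    total += tail_prob(n, eps)

# The terms decay geometrically (Hoeffding: tail_prob <= 2*exp(-n*eps**2/2)),
# so the partial sums stabilize, consistent with complete convergence of S_n/n to 0.
print(round(total, 6), tail_prob(200, eps) < 1e-10)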
In this paper we study the complete convergence for negatively dependent random variables. As a result, we extend some complete convergence theorems for independent random variables to the negatively dependent random variables without necessarily imposing any extra conditions.
Theorem 2.1. Let be a sequence of identically distributed ND random variables and an array of real numbers, and let , . If (2.1) holds for some , then, for , (2.4) holds if and only if (2.5) holds. For , (2.4) implies (2.5); conversely, (2.5) together with decreasing on implies (2.4).
For , , we have the following theorem.
Theorem 2.2. Let be a sequence of identically distributed ND random variables and an array of real numbers, and let . If (2.6) holds, then, for , (2.8) holds if and only if (2.9) holds. For , (2.8) implies (2.9); conversely, (2.9) together with decreasing on implies (2.8).
Remark 2.3. Since NA random variables are a special case of ND random variables, Theorems 2.1 and 2.2 extend the work of Liang and Su [14, Theorem 2.1].
Remark 2.4. Since, for some , as implies , taking , conditions (2.1) and (2.6) are weaker than conditions (2.13) and (2.9) in Li et al. [13]. Therefore, Theorems 2.1 and 2.2 not only extend and improve the work of Li et al. [13, Theorem 2.2] from i.i.d. random variables to the ND setting but also establish the necessity parts and relax the range of .
Proof of Theorem 2.1. (2.4)⇒(2.5). To prove (2.5), it suffices to show that
where and . Thus, without loss of generality, we can assume that for all . For small enough and sufficiently large integer , which will be determined later, let
Thus . Note that
So, to prove (2.5) it suffices to show that
For any ,
Now, we prove that
(i) For , taking such that , by (2.4) and (2.15), we get
For , letting , by (2.2), (2.4), and (2.15), we get
Hence, (2.16) holds. Therefore, to prove it suffices to prove that
Note that is still ND by the definition of and Lemma 1.6. Using the Markov inequality and Lemma 1.8, we get for a suitably large , which will be determined later,
Taking , then , and, by (2.15), we get
(i) For , taking such that and taking , from (2.15) and , we have
(ii) For , taking and , where is defined by (2.3), we get, from (2.3), (2.4), (2.15), and ,
Since
we have
By Lemma 1.6, is still ND. Hence, for we conclude that
via (2.4) and (2.15), and from the definition of . Hence, by (2.26) and by taking and such that , we have
Similarly, we have and .
Last, we prove that . Let . By the definition of and (2.1), we have
Combining with (2.15),
Now we prove (2.5)⇒(2.4). Since
then from (2.5) we have
Combining with the hypotheses of Theorem 2.1,
Thus, for sufficiently large ,
By Lemma 1.6, is still ND. By applying Lemma 1.10 and (2.1), we obtain
Substituting the above inequality in (2.5), we get
So, following the proof of ,
Proof of Theorem 2.2. Let , . Using the same notation and method as in Theorem 2.1, we only present the parts that differ.
Letting (2.7) take the place of (2.15), similarly to the proof of (2.19) and (2.26), we obtain
Taking , we have
For , taking , we get
For , from (2.8). Letting , by the Hölder inequality,
By the definition of ,
Similarly to the proof (2.31), we have
(2.9)⇒(2.8). Using the same method as in the necessity part of Theorem 2.1, we can easily get
Acknowledgments
The author is very grateful to the referees and the editors for their valuable comments and helpful suggestions that improved the clarity and readability of the paper. This work was supported by the National Natural Science Foundation of China (11061012), the Support Program of the New Century Guangxi China Ten-Hundred-Thousand Talents Project (2005214), and the Guangxi China Science Foundation (2010GXNSFA013120). Professor Qunying Wu's research field is probability and statistics.