Abstract

By applying a moment inequality for negatively dependent random variables, we discuss complete convergence for weighted sums of sequences of negatively dependent random variables. As a result, we extend several complete convergence theorems to sequences of negatively dependent random variables.

1. Introduction and Lemmas

Definition 1.1. Random variables $X$ and $Y$ are said to be negatively dependent (ND) if
$$P(X\le x,\ Y\le y)\le P(X\le x)P(Y\le y) \quad (1.1)$$
for all $x,y\in\mathbb{R}$. A collection of random variables is said to be pairwise negatively dependent (PND) if every pair of random variables in the collection satisfies (1.1).
It is important to note that (1.1) implies
$$P(X>x,\ Y>y)\le P(X>x)P(Y>y) \quad (1.2)$$
for all $x,y\in\mathbb{R}$. Moreover, (1.2) implies (1.1); hence, (1.1) and (1.2) are equivalent. However, (1.1) and (1.2) are not equivalent for a collection of three or more random variables. Consequently, the following definition is needed to define sequences of negatively dependent random variables.

Definition 1.2. The random variables $X_1,\dots,X_n$ are said to be negatively dependent (ND) if, for all real $x_1,\dots,x_n$,
$$P\Bigl(\bigcap_{j=1}^{n}(X_j\le x_j)\Bigr)\le\prod_{j=1}^{n}P(X_j\le x_j),\qquad
P\Bigl(\bigcap_{j=1}^{n}(X_j>x_j)\Bigr)\le\prod_{j=1}^{n}P(X_j>x_j). \quad (1.3)$$
An infinite sequence of random variables $\{X_n;\ n\ge 1\}$ is said to be ND if every finite subset $X_1,\dots,X_n$ is ND.
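For a finite discrete joint distribution, Definition 1.2 can be checked mechanically by enumerating all threshold combinations drawn from the support. The following Python sketch is our own illustration (the function name `is_nd` and its interface are not from the paper); it verifies both product inequalities in (1.3):

```python
from itertools import product

def is_nd(pmf):
    """Check Definition 1.2 for a finite discrete joint distribution.

    pmf maps outcome tuples (x_1, ..., x_n) to probabilities. Both the
    lower-tail and upper-tail product inequalities of (1.3) are enumerated
    over all threshold combinations taken from the coordinate supports.
    """
    outcomes = list(pmf)
    n = len(outcomes[0])
    # Only the distinct coordinate values can matter as thresholds.
    levels = [sorted({o[i] for o in outcomes}) for i in range(n)]
    for t in product(*levels):
        lo_joint = sum(q for o, q in pmf.items()
                       if all(o[i] <= t[i] for i in range(n)))
        hi_joint = sum(q for o, q in pmf.items()
                       if all(o[i] > t[i] for i in range(n)))
        lo_prod = hi_prod = 1.0
        for i in range(n):
            mi = sum(q for o, q in pmf.items() if o[i] <= t[i])
            lo_prod *= mi              # prod of P(X_i <= t_i)
            hi_prod *= 1.0 - mi        # prod of P(X_i >  t_i)
        if lo_joint > lo_prod + 1e-12 or hi_joint > hi_prod + 1e-12:
            return False
    return True

# A negatively coupled pair, uniform on {(0,1), (1,0)}, is ND,
# while the comonotone pair, uniform on {(0,0), (1,1)}, is not.
print(is_nd({(0, 1): 0.5, (1, 0): 0.5}))   # True
print(is_nd({(0, 0): 0.5, (1, 1): 0.5}))   # False
```

The two pairs printed at the end show both directions: perfect negative coupling passes, perfect positive coupling fails already at the threshold pair $(0,0)$.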

Definition 1.3. Random variables $X_1,X_2,\dots,X_n$, $n\ge 2$, are said to be negatively associated (NA) if, for every pair of disjoint subsets $A_1$ and $A_2$ of $\{1,2,\dots,n\}$,
$$\operatorname{Cov}\bigl(f_1(X_i;\ i\in A_1),\ f_2(X_j;\ j\in A_2)\bigr)\le 0, \quad (1.4)$$
where $f_1$ and $f_2$ are increasing in every variable (or decreasing in every variable), provided this covariance exists. A sequence of random variables $\{X_n;\ n\ge 1\}$ is said to be NA if every finite subfamily is NA.
The definition of PND was given by Lehmann [1], the concept of ND was introduced by Bozorgnia et al. [2], and the definition of NA was introduced by Joag-Dev and Proschan [3]. These concepts of dependent random variables have been very useful in reliability theory and applications.
First, note that by taking $f_1(X_1,X_2,\dots,X_{n-1})=I(X_1\le x_1,X_2\le x_2,\dots,X_{n-1}\le x_{n-1})$, $f_2(X_n)=I(X_n\le x_n)$ and $f_1(X_1,X_2,\dots,X_{n-1})=I(X_1>x_1,X_2>x_2,\dots,X_{n-1}>x_{n-1})$, $f_2(X_n)=I(X_n>x_n)$, separately, and proceeding by induction on $n$, it is easy to see that NA implies (1.3). Hence, NA implies ND. However, there are many examples which are ND but not NA. We list the following two examples.

Example 1.4. Let 𝑋𝑖 be a binary random variable such that 𝑃(𝑋𝑖=1)=𝑃(𝑋𝑖=0)=0.5 for 𝑖=1,2,3. Let (𝑋1,𝑋2,𝑋3) take the values (0,0,1), (0,1,0), (1,0,0), and (1,1,1), each with probability 1/4.
It can be verified that all the ND conditions hold. However,
$$P(X_1+X_3\le 1,\ X_2\le 0)=\frac{4}{8}>\frac{3}{8}=P(X_1+X_3\le 1)\,P(X_2\le 0). \quad (1.5)$$
Hence, $X_1$, $X_2$, and $X_3$ are not NA.
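The NA violation (1.5) can be checked exactly by enumerating the four atoms. The short Python sketch below (our own illustration, not part of the original argument) computes both sides; since $I(x_1+x_3\le 1)$ and $I(x_2\le 0)$ are coordinatewise decreasing functions of disjoint groups of coordinates, the strict inequality exhibits a positive covariance that NA forbids:

```python
# Joint law of Example 1.4: four outcomes, each with probability 1/4.
p = {(0, 0, 1): 0.25, (0, 1, 0): 0.25, (1, 0, 0): 0.25, (1, 1, 1): 0.25}

# Left and right sides of (1.5).
lhs = sum(q for (x1, x2, x3), q in p.items() if x1 + x3 <= 1 and x2 <= 0)
rhs = (sum(q for (x1, x2, x3), q in p.items() if x1 + x3 <= 1)
       * sum(q for (x1, x2, x3), q in p.items() if x2 <= 0))
print(lhs, rhs)   # 0.5 0.375
```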

The next example, due to Joag-Dev and Proschan [3], gives a vector $X=(X_1,X_2,X_3,X_4)$ that is ND but not NA.

Example 1.5. Let $X_i$ be a binary random variable such that $P(X_i=1)=0.5$ for $i=1,2,3,4$. Let $(X_1,X_2)$ and $(X_3,X_4)$ have the same bivariate distribution, and let $X=(X_1,X_2,X_3,X_4)$ have the joint distribution shown in Table 1.
It can be verified that all the ND conditions hold. However,
$$P(X_i=1,\ i=1,2,3,4)>P(X_1=1,\ X_2=1)\,P(X_3=1,\ X_4=1), \quad (1.6)$$
violating NA.

The above examples show that ND does not imply NA, so ND is a strictly weaker condition. In the papers cited above, a number of well-known multivariate distributions are shown to possess the ND property, such as (a) the multinomial, (b) convolutions of unlike multinomials, (c) the multivariate hypergeometric, (d) the Dirichlet, (e) the Dirichlet compound multinomial, and (f) multinomials having certain covariance matrices. Because of the wide applications of ND random variables, this notion has received more and more attention recently, and a series of useful results have been established (cf. Bozorgnia et al. [2], Amini [4], Fakoor and Azarnoosh [5], Nili Sani et al. [6], Klesov et al. [7], and Wu and Jiang [8]). Hence, extending the limit properties of independent or NA random variables to the ND case is highly desirable and of considerable significance in theory and applications. In this paper we obtain some probability inequalities and some complete convergence theorems for weighted sums of sequences of negatively dependent random variables.

In the following, let $a_n\ll b_n$ ($a_n\gg b_n$) denote that there exists a constant $c>0$ such that $a_n\le cb_n$ ($a_n\ge cb_n$) for sufficiently large $n$, and let $a_n\approx b_n$ mean that both $a_n\ll b_n$ and $a_n\gg b_n$ hold. Also, let $\log x$ denote $\ln(\max(\mathrm{e},x))$ and $S_n=\sum_{j=1}^{n}X_j$.

Lemma 1.6 (see [2]). Let $X_1,\dots,X_n$ be ND random variables and let $\{f_n;\ n\ge 1\}$ be a sequence of Borel functions, all of which are monotone increasing (or all monotone decreasing). Then $\{f_n(X_n);\ n\ge 1\}$ is still a sequence of ND random variables.

Lemma 1.7 (see [2]). Let $X_1,\dots,X_n$ be nonnegative ND random variables. Then
$$E\prod_{j=1}^{n}X_j\le\prod_{j=1}^{n}EX_j. \quad (1.7)$$
In particular, let $X_1,\dots,X_n$ be ND and let $t_1,\dots,t_n$ all be nonnegative (or all nonpositive) real numbers. Then
$$E\exp\Bigl(\sum_{j=1}^{n}t_jX_j\Bigr)\le\prod_{j=1}^{n}E\exp(t_jX_j). \quad (1.8)$$
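Both (1.7) and (1.8) can be verified exactly on a small ND pair. The Python sketch below (our own illustration) uses the negatively coupled pair uniform on $\{(0,1),(1,0)\}$, which satisfies Definition 1.2:

```python
import math

# Exact check of (1.7) and (1.8) for the ND pair uniform on {(0,1),(1,0)}.
pmf = {(0, 1): 0.5, (1, 0): 0.5}

e_prod = sum(q * x * y for (x, y), q in pmf.items())   # E[XY]
e_x = sum(q * x for (x, y), q in pmf.items())          # E[X]
e_y = sum(q * y for (x, y), q in pmf.items())          # E[Y]
print(e_prod, e_x * e_y)   # 0.0 0.25  -- so E[XY] <= E[X]E[Y] holds

# The exponential-moment version (1.8) with nonnegative t1, t2:
for t1, t2 in [(0.5, 1.0), (2.0, 0.3)]:
    lhs = sum(q * math.exp(t1 * x + t2 * y) for (x, y), q in pmf.items())
    rhs = (sum(q * math.exp(t1 * x) for (x, y), q in pmf.items())
           * sum(q * math.exp(t2 * y) for (x, y), q in pmf.items()))
    assert lhs <= rhs + 1e-12
```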

Lemma 1.8. Let $\{X_n;\ n\ge 1\}$ be an ND sequence with $EX_n=0$ and $E|X_n|^p<\infty$, $p\ge 2$. Then, with $B_n=\sum_{i=1}^{n}EX_i^2$,
$$E|S_n|^p\le c_p\Bigl(\sum_{i=1}^{n}E|X_i|^p+B_n^{p/2}\Bigr), \quad (1.9)$$
$$E\max_{1\le i\le n}|S_i|^p\le c_p\log^p n\Bigl(\sum_{i=1}^{n}E|X_i|^p+B_n^{p/2}\Bigr), \quad (1.10)$$
where $c_p>0$ depends only on $p$.

Remark 1.9. If $\{X_n;\ n\ge 1\}$ is a sequence of independent random variables, then (1.9) is the classic Rosenthal inequality [9]. Therefore, (1.9) generalizes the Rosenthal inequality to the ND case.
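As a quick sanity check of (1.9), independent Rademacher signs ($P(X=\pm1)=1/2$) are trivially ND, and for small $n$ the moment $E|S_n|^p$ can be computed exactly by enumeration. The Python sketch below (our illustration; it uses for $c_4$ the explicit value $\max(4^5,\,4^3\mathrm{e}^4B(2,2))$ that the computation in the proof produces, with no claim of sharpness):

```python
import math
from itertools import product

# Sanity check of (1.9) for independent Rademacher signs with p = 4.
p = 4
beta = math.gamma(p / 2) ** 2 / math.gamma(p)          # B(p/2, p/2) = 1/6
c_p = max(p ** (p + 1), p ** (1 + p / 2) * math.exp(p) * beta)

for n in range(1, 11):
    # Exact E|S_n|^p by enumerating all 2^n sign patterns.
    moment = sum(abs(sum(s)) ** p for s in product((-1, 1), repeat=n)) / 2 ** n
    # Right side of (1.9): each E|X_i|^p = 1 and B_n = n.
    bound = c_p * (n + float(n) ** (p / 2))
    assert moment <= bound
print("inequality (1.9) verified for n = 1..10 with p = 4")
```

For Rademacher signs $E S_n^4 = 3n^2-2n$, so the bound holds with much room to spare, as expected from a generic-constant inequality.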

Proof of Lemma 1.8. Let $a>0$, $X_i'=\min(X_i,a)$, and $S_n'=\sum_{i=1}^{n}X_i'$. It is easy to show that $\{X_i';\ i\ge 1\}$ is a negatively dependent sequence by Lemma 1.6. Noting that $(\mathrm{e}^x-1-x)/x^2$ is a nondecreasing function of $x$ on $\mathbb{R}$ and that $EX_i'\le EX_i=0$ and $tX_i'\le ta$ for $t>0$, we have
$$E\mathrm{e}^{tX_i'}=1+tEX_i'+E\Bigl(\frac{\mathrm{e}^{tX_i'}-1-tX_i'}{(tX_i')^2}\,t^2X_i'^2\Bigr)
\le 1+\frac{\mathrm{e}^{ta}-1-ta}{a^2}\,EX_i'^2
\le 1+\frac{\mathrm{e}^{ta}-1-ta}{a^2}\,EX_i^2
\le\exp\Bigl(\frac{\mathrm{e}^{ta}-1-ta}{a^2}\,EX_i^2\Bigr). \quad (1.11)$$
Here the last inequality follows from $1+x\le\mathrm{e}^x$ for all $x\in\mathbb{R}$.
Noting that $B_n=\sum_{i=1}^{n}EX_i^2$ and that $\{X_i';\ i\ge 1\}$ is ND, we conclude from the above inequality and Lemma 1.7 that, for any $x>0$ and $t>0$,
$$\mathrm{e}^{-tx}E\mathrm{e}^{tS_n'}=\mathrm{e}^{-tx}E\prod_{i=1}^{n}\mathrm{e}^{tX_i'}\le\mathrm{e}^{-tx}\prod_{i=1}^{n}E\mathrm{e}^{tX_i'}\le\exp\Bigl(-tx+\frac{\mathrm{e}^{ta}-1-ta}{a^2}\,B_n\Bigr). \quad (1.12)$$
Letting $t=\ln(xa/B_n+1)/a>0$, we get
$$\frac{\mathrm{e}^{ta}-1-ta}{a^2}\,B_n=\frac{x}{a}-\frac{B_n}{a^2}\ln\Bigl(\frac{xa}{B_n}+1\Bigr)\le\frac{x}{a}. \quad (1.13)$$
Substituting this into (1.12), we get furthermore
$$\mathrm{e}^{-tx}E\mathrm{e}^{tS_n'}\le\exp\Bigl(\frac{x}{a}-\frac{x}{a}\ln\Bigl(\frac{xa}{B_n}+1\Bigr)\Bigr). \quad (1.14)$$
Writing $x/a=t$ in the above inequality, we get
$$P(S_n\ge x)\le\sum_{i=1}^{n}P(X_i>a)+P(S_n'\ge x)\le\sum_{i=1}^{n}P(X_i>a)+\mathrm{e}^{-tx}E\mathrm{e}^{tS_n'}
\le\sum_{i=1}^{n}P\Bigl(X_i>\frac{x}{t}\Bigr)+\exp\Bigl(t-t\ln\Bigl(\frac{x^2}{tB_n}+1\Bigr)\Bigr)
=\sum_{i=1}^{n}P\Bigl(X_i>\frac{x}{t}\Bigr)+\mathrm{e}^{t}\Bigl(1+\frac{x^2}{tB_n}\Bigr)^{-t}. \quad (1.15)$$
Letting $-X_i$ take the place of $X_i$ in the above inequality, we get
$$P(S_n\le -x)=P(-S_n\ge x)\le\sum_{i=1}^{n}P\Bigl(-X_i>\frac{x}{t}\Bigr)+\mathrm{e}^{t}\Bigl(1+\frac{x^2}{tB_n}\Bigr)^{-t}=\sum_{i=1}^{n}P\Bigl(X_i<-\frac{x}{t}\Bigr)+\mathrm{e}^{t}\Bigl(1+\frac{x^2}{tB_n}\Bigr)^{-t}. \quad (1.16)$$
Thus
$$P(|S_n|\ge x)=P(S_n\ge x)+P(S_n\le -x)\le\sum_{i=1}^{n}P\Bigl(|X_i|>\frac{x}{t}\Bigr)+2\mathrm{e}^{t}\Bigl(1+\frac{x^2}{tB_n}\Bigr)^{-t}. \quad (1.17)$$
Multiplying (1.17) by $px^{p-1}$, letting $t=p$, and integrating over $0<x<\infty$, according to
$$E|X|^p=p\int_0^{\infty}x^{p-1}P(|X|\ge x)\,\mathrm{d}x, \quad (1.18)$$
we obtain
$$E|S_n|^p=p\int_0^{\infty}x^{p-1}P(|S_n|\ge x)\,\mathrm{d}x
\le p\sum_{i=1}^{n}\int_0^{\infty}x^{p-1}P\Bigl(|X_i|>\frac{x}{p}\Bigr)\mathrm{d}x
+2p\mathrm{e}^{p}\int_0^{\infty}x^{p-1}\Bigl(1+\frac{x^2}{pB_n}\Bigr)^{-p}\mathrm{d}x$$
$$=p^{p+1}\sum_{i=1}^{n}E|X_i|^p+p\mathrm{e}^{p}(pB_n)^{p/2}\int_0^{\infty}u^{p/2-1}(1+u)^{-p}\,\mathrm{d}u
=p^{p+1}\sum_{i=1}^{n}E|X_i|^p+p^{p/2+1}\mathrm{e}^{p}B\Bigl(\frac{p}{2},\frac{p}{2}\Bigr)B_n^{p/2}, \quad (1.19)$$
where $B(\alpha,\beta)=\int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\mathrm{d}x=\int_0^{\infty}x^{\alpha-1}(1+x)^{-(\alpha+\beta)}\mathrm{d}x$, $\alpha,\beta>0$, is the Beta function. Letting $c_p=\max\bigl(p^{p+1},\,p^{1+p/2}\mathrm{e}^{p}B(p/2,p/2)\bigr)$, we can deduce (1.9) from (1.19). From (1.9), (1.10) follows in a way similar to Stout [10, Theorem 2.3.1].
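The substitution $u=x^2/(pB_n)$ behind (1.19) gives $\int_0^{\infty}x^{p-1}(1+x^2/(pB_n))^{-p}\mathrm{d}x=\tfrac12(pB_n)^{p/2}B(p/2,p/2)$, which can be confirmed numerically. The following Python sketch (our illustration, with arbitrary sample values $p=4$, $B_n=2$) compares a simple trapezoidal quadrature against the closed form:

```python
import math

# Numeric check of the integral identity used in (1.19):
#   int_0^inf x^{p-1} (1 + x^2/(p B_n))^{-p} dx = (1/2) (p B_n)^{p/2} B(p/2, p/2).
p, B_n = 4.0, 2.0

def integrand(x):
    return x ** (p - 1) * (1.0 + x * x / (p * B_n)) ** (-p)

# Composite trapezoidal rule; the integrand decays like x^{-p-1}, so the
# truncation at x = 200 is negligible.
h, upper = 1e-3, 200.0
xs = [i * h for i in range(int(upper / h) + 1)]
numeric = h * (sum(integrand(x) for x in xs)
               - 0.5 * (integrand(xs[0]) + integrand(xs[-1])))

beta = math.gamma(p / 2) ** 2 / math.gamma(p)          # B(p/2, p/2)
closed_form = 0.5 * (p * B_n) ** (p / 2) * beta        # = 16/3 here
print(numeric, closed_form)
```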

Lemma 1.10. Let $\{X_n;\ n\ge 1\}$ be a sequence of ND random variables. Then there exists a positive constant $c$ such that, for any $x\ge 0$ and all $n\ge 1$,
$$\Bigl(1-P\bigl(\max_{1\le k\le n}|X_k|>x\bigr)\Bigr)^{2}\sum_{k=1}^{n}P(|X_k|>x)\le cP\bigl(\max_{1\le k\le n}|X_k|>x\bigr). \quad (1.20)$$

Proof. Let $A_k=(|X_k|>x)$ and $\alpha_n=1-P(\bigcup_{k=1}^{n}A_k)=1-P(\max_{1\le k\le n}|X_k|>x)$. Without loss of generality, assume that $\alpha_n>0$. Note that $\{I(X_k>x)-EI(X_k>x);\ k\ge 1\}$ and $\{I(X_k<-x)-EI(X_k<-x);\ k\ge 1\}$ are still ND by Lemma 1.6. Using (1.9), we get
$$E\Bigl(\sum_{k=1}^{n}\bigl(I_{A_k}-EI_{A_k}\bigr)\Bigr)^{2}
\le 2E\Bigl(\sum_{k=1}^{n}\bigl(I(X_k>x)-EI(X_k>x)\bigr)\Bigr)^{2}
+2E\Bigl(\sum_{k=1}^{n}\bigl(I(X_k<-x)-EI(X_k<-x)\bigr)\Bigr)^{2}
\le c\sum_{k=1}^{n}P(A_k). \quad (1.21)$$
Combining this with the Cauchy–Schwarz inequality, we obtain
$$\sum_{k=1}^{n}P(A_k)=\sum_{k=1}^{n}P\Bigl(A_k\cap\bigcup_{j=1}^{n}A_j\Bigr)
=\sum_{k=1}^{n}E\Bigl(I_{A_k}I\Bigl(\bigcup_{j=1}^{n}A_j\Bigr)\Bigr)
=E\Bigl(\sum_{k=1}^{n}\bigl(I_{A_k}-EI_{A_k}\bigr)I\Bigl(\bigcup_{j=1}^{n}A_j\Bigr)\Bigr)+\sum_{k=1}^{n}P(A_k)P\Bigl(\bigcup_{j=1}^{n}A_j\Bigr)$$
$$\le\Bigl(E\Bigl(\sum_{k=1}^{n}\bigl(I_{A_k}-EI_{A_k}\bigr)\Bigr)^{2}EI\Bigl(\bigcup_{j=1}^{n}A_j\Bigr)\Bigr)^{1/2}+(1-\alpha_n)\sum_{k=1}^{n}P(A_k)
\le\Bigl(\frac{c(1-\alpha_n)}{\alpha_n}\cdot\alpha_n\sum_{k=1}^{n}P(A_k)\Bigr)^{1/2}+(1-\alpha_n)\sum_{k=1}^{n}P(A_k)$$
$$\le\frac12\Bigl(\frac{c(1-\alpha_n)}{\alpha_n}+\alpha_n\sum_{k=1}^{n}P(A_k)\Bigr)+(1-\alpha_n)\sum_{k=1}^{n}P(A_k). \quad (1.22)$$
Thus
$$\alpha_n^{2}\sum_{k=1}^{n}P(A_k)\le c(1-\alpha_n), \quad (1.23)$$
that is,
$$\Bigl(1-P\bigl(\max_{1\le k\le n}|X_k|>x\bigr)\Bigr)^{2}\sum_{k=1}^{n}P(|X_k|>x)\le cP\bigl(\max_{1\le k\le n}|X_k|>x\bigr). \quad (1.24)$$
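In the independent special case, (1.20) even holds with the explicit constant $c=1$: with $p_k=P(|X_k|>x)$ and $P(\max_k|X_k|>x)=1-\prod_k(1-p_k)$, a telescoping argument gives $\prod_k(1-p_k)\sum_k p_k\le 1-\prod_k(1-p_k)$, and the squared factor in (1.20) only helps. The Python sketch below (our illustration of this special case, not of the ND proof) checks this numerically:

```python
# Independent case of (1.20) with c = 1: p_k = P(|X_k| > x).
def check(ps):
    prod = 1.0
    for q in ps:
        prod *= 1.0 - q
    p_max = 1.0 - prod                       # P(max_k |X_k| > x) under independence
    lhs = (1.0 - p_max) ** 2 * sum(ps)       # left side of (1.20)
    return lhs <= p_max + 1e-12

assert check([0.1] * 5)
assert check([0.01] * 100)
assert check([0.3, 0.001, 0.25, 0.07])
print("Lemma 1.10 holds with c = 1 in the independent case")
```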

2. Main Results and the Proofs

The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins [11] as follows. A sequence $\{Y_n;\ n\ge 1\}$ of random variables converges completely to the constant $c$ if $\sum_{n=1}^{\infty}P(|Y_n-c|>\varepsilon)<\infty$ for all $\varepsilon>0$. In view of the Borel–Cantelli lemma, this implies that $Y_n\to c$ almost surely. Therefore, complete convergence is one of the most important problems in probability theory. Hsu and Robbins [11] proved that the sequence of arithmetic means of independent and identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. Baum and Katz [12] proved that if $\{X,X_n;\ n\ge 1\}$ is a sequence of i.i.d. random variables with mean zero, then $E|X|^{p(t+2)}<\infty$ ($1\le p<2$, $t\ge -1$) is equivalent to the condition that $\sum_{n=1}^{\infty}n^{t}P(|\sum_{i=1}^{n}X_i|/n^{1/p}>\varepsilon)<\infty$ for all $\varepsilon>0$. Recent results on complete convergence can be found in Li et al. [13], Liang and Su [14], Wu [15, 16], and Sung [17].
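The Hsu–Robbins phenomenon can be made concrete for i.i.d. $N(0,1)$ summands, where $P(|S_n/n|>\varepsilon)=2(1-\Phi(\varepsilon\sqrt{n}))$ has an exact expression and the series over $n$ converges because the Gaussian tail decays like $\mathrm{e}^{-\varepsilon^2 n/2}$. The Python sketch below (our illustration, with the arbitrary choice $\varepsilon=0.5$) computes partial sums of the series and shows the tail is negligible:

```python
import math

# Hsu-Robbins illustration for i.i.d. N(0,1):
#   P(|S_n / n| > eps) = 2 (1 - Phi(eps sqrt(n))) = erfc(eps sqrt(n/2)).
def term(n, eps):
    return math.erfc(eps * math.sqrt(n / 2.0))

eps = 0.5
partial_1000 = sum(term(n, eps) for n in range(1, 1001))
tail = sum(term(n, eps) for n in range(1001, 2001))
print(partial_1000, tail < 1e-10)
```

The partial sum stabilizes after a few dozen terms, consistent with complete convergence of $S_n/n$ to $0$ under a finite second moment.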

In this paper we study complete convergence for negatively dependent random variables. As a result, we extend some complete convergence theorems for independent random variables to negatively dependent random variables without imposing any extra conditions.

Theorem 2.1. Let $\{X,X_n;\ n\ge 1\}$ be a sequence of identically distributed ND random variables, let $\{a_{nk};\ 1\le k\le n,\ n\ge 1\}$ be an array of real numbers, and let $r>1$, $p>2$. Suppose that, for some $2\le q<p$,
$$N(n,m+1):=\#\bigl\{1\le k\le n;\ |a_{nk}|\ge(m+1)^{-1/p}\bigr\}\ll m^{q(r-1)/p},\quad n,m\ge 1, \quad (2.1)$$
$$EX=0\quad\text{for }1\le q(r-1), \quad (2.2)$$
$$\sum_{k=1}^{n}a_{nk}^{2}\ll n^{\delta}\quad\text{for }2\le q(r-1)\text{ and some }0<\delta<\frac{2}{p}. \quad (2.3)$$
Then, for $r\ge 2$,
$$E|X|^{p(r-1)}<\infty \quad (2.4)$$
if and only if
$$\sum_{n=1}^{\infty}n^{r-2}P\Bigl(\max_{1\le k\le n}\Bigl|\sum_{i=1}^{k}a_{ni}X_i\Bigr|>\varepsilon n^{1/p}\Bigr)<\infty,\quad\forall\varepsilon>0. \quad (2.5)$$
For $1<r<2$, (2.4) implies (2.5); conversely, (2.5) together with the assumption that $n^{r-2}P(\max_{1\le k\le n}|a_{nk}X_k|>n^{1/p})$ is decreasing in $n$ implies (2.4).

For 𝑝=2, 𝑞=2, we have the following theorem.

Theorem 2.2. Let $\{X,X_n;\ n\ge 1\}$ be a sequence of identically distributed ND random variables, let $\{a_{nk};\ 1\le k\le n,\ n\ge 1\}$ be an array of real numbers, and let $r>1$. Suppose that
$$N(n,m+1):=\#\bigl\{1\le k\le n;\ |a_{nk}|\ge(m+1)^{-1/2}\bigr\}\ll m^{r-1},\quad n,m\ge 1, \quad (2.6)$$
$$EX=0\quad\text{for }1\le 2(r-1),\qquad \sum_{k=1}^{n}|a_{nk}|^{2(r-1)}=O(1). \quad (2.7)$$
Then, for $r\ge 2$,
$$E|X|^{2(r-1)}\log|X|<\infty \quad (2.8)$$
if and only if
$$\sum_{n=1}^{\infty}n^{r-2}P\Bigl(\max_{1\le k\le n}\Bigl|\sum_{i=1}^{k}a_{ni}X_i\Bigr|>\varepsilon n^{1/2}\Bigr)<\infty,\quad\forall\varepsilon>0. \quad (2.9)$$
For $1<r<2$, (2.8) implies (2.9); conversely, (2.9) together with the assumption that $n^{r-2}P(\max_{1\le k\le n}|a_{nk}X_k|>n^{1/2})$ is decreasing in $n$ implies (2.8).

Remark 2.3. Since NA random variables are a special case of ND random variables, Theorems 2.1 and 2.2 extend the work of Liang and Su [14, Theorem 2.1].

Remark 2.4. Since, for some $2\le q\le p$, $\sum_{k=1}^{n}|a_{nk}|^{q(r-1)}\ll 1$ as $n\to\infty$ implies that
$$N(n,m+1)=\#\bigl\{1\le k\le n;\ |a_{nk}|\ge(m+1)^{-1/p}\bigr\}\ll m^{q(r-1)/p}\quad\text{as }n\to\infty, \quad (2.10)$$
taking $r=2$, conditions (2.1) and (2.6) are weaker than conditions (2.13) and (2.9) in Li et al. [13]. Therefore, Theorems 2.1 and 2.2 not only extend and improve the work of Li et al. [13, Theorem 2.2] from i.i.d. random variables to the ND setting but also obtain the necessity parts and relax the range of $r$.
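Condition (2.1) is easy to check for concrete weights. As an illustration (our own, not taken from the theorems above), consider the hypothetical triangular array $a_{nk}=k^{-1/p}$: here $N(n,m+1)=\#\{k\le n:\ k^{-1/p}\ge(m+1)^{-1/p}\}=\min(n,m+1)\le 2m$, so (2.1) holds whenever $q(r-1)\ge p$. The Python sketch below confirms the counting identity:

```python
# Counting function of (2.1) for the hypothetical weights a_{nk} = k^{-1/p}:
# N(n, m+1) = #{k <= n : a_{nk} >= (m+1)^{-1/p}} = min(n, m+1) <= 2m.
p = 4.0

def N(n, m):
    return sum(1 for k in range(1, n + 1) if k ** (-1.0 / p) >= m ** (-1.0 / p))

for n in (10, 100, 1000):
    for m in (1, 5, 50, 500):
        assert N(n, m + 1) == min(n, m + 1)
print("N(n, m+1) = min(n, m+1) for the weights a_nk = k^(-1/p)")
```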

Proof of Theorem 2.1. (2.4)$\Rightarrow$(2.5). To prove (2.5) it suffices to show that
$$\sum_{n=1}^{\infty}n^{r-2}P\Bigl(\max_{1\le k\le n}\Bigl|\sum_{i=1}^{k}a_{ni}^{\pm}X_i\Bigr|>\varepsilon n^{1/p}\Bigr)<\infty,\quad\forall\varepsilon>0, \quad (2.11)$$
where $a_{ni}^{+}=\max(a_{ni},0)$ and $a_{ni}^{-}=\max(-a_{ni},0)$. Thus, without loss of generality, we can assume that $a_{ni}>0$ for all $n\ge 1$, $i\le n$. For $0<\alpha<1/p$ small enough and a sufficiently large integer $K$, both to be determined later, let
$$X^{(1)}_{ni}=-n^{\alpha}I(a_{ni}X_i<-n^{\alpha})+a_{ni}X_iI(a_{ni}|X_i|\le n^{\alpha})+n^{\alpha}I(a_{ni}X_i>n^{\alpha}),$$
$$X^{(2)}_{ni}=(a_{ni}X_i-n^{\alpha})I\bigl(n^{\alpha}<a_{ni}X_i<\varepsilon n^{1/p}/K\bigr),$$
$$X^{(3)}_{ni}=(a_{ni}X_i+n^{\alpha})I\bigl(-\varepsilon n^{1/p}/K<a_{ni}X_i<-n^{\alpha}\bigr),$$
$$X^{(4)}_{ni}=a_{ni}X_i-X^{(1)}_{ni}-X^{(2)}_{ni}-X^{(3)}_{ni}=(a_{ni}X_i+n^{\alpha})I\bigl(a_{ni}X_i\le-\varepsilon n^{1/p}/K\bigr)+(a_{ni}X_i-n^{\alpha})I\bigl(a_{ni}X_i\ge\varepsilon n^{1/p}/K\bigr),$$
$$S^{(j)}_{nk}=\sum_{i=1}^{k}X^{(j)}_{ni},\quad j=1,2,3,4;\ 1\le k\le n,\ n\ge 1. \quad (2.12)$$
Thus $S_{nk}=\sum_{i=1}^{k}a_{ni}X_i=\sum_{j=1}^{4}S^{(j)}_{nk}$. Note that
$$\Bigl(\max_{1\le k\le n}|S_{nk}|>4\varepsilon n^{1/p}\Bigr)\subset\bigcup_{j=1}^{4}\Bigl(\max_{1\le k\le n}\bigl|S^{(j)}_{nk}\bigr|>\varepsilon n^{1/p}\Bigr). \quad (2.13)$$
So, to prove (2.5) it suffices to show that
$$I_j:=\sum_{n=1}^{\infty}n^{r-2}P\Bigl(\max_{1\le k\le n}\bigl|S^{(j)}_{nk}\bigr|>\varepsilon n^{1/p}\Bigr)<\infty,\quad j=1,2,3,4. \quad (2.14)$$
For any $q'>q$, by (2.1),
$$\sum_{i=1}^{n}a_{ni}^{q'(r-1)}=\sum_{j=1}^{\infty}\sum_{(j+1)^{-1}\le a^{p}_{ni}<j^{-1}}a_{ni}^{q'(r-1)}
\le\sum_{j=1}^{\infty}\bigl(N(n,j+1)-N(n,j)\bigr)j^{-q'(r-1)/p}$$
$$\ll\sum_{j=1}^{\infty}N(n,j+1)\bigl(j^{-q'(r-1)/p}-(j+1)^{-q'(r-1)/p}\bigr)\ll\sum_{j=1}^{\infty}j^{-1-(q'-q)(r-1)/p}<\infty. \quad (2.15)$$
Now, we prove that
$$n^{-1/p}\max_{1\le k\le n}\bigl|ES^{(1)}_{nk}\bigr|\to 0,\quad n\to\infty. \quad (2.16)$$
(i) For $0<q(r-1)<1$, take $q<q'<p$ such that $0<q'(r-1)<1$. By (2.4) and (2.15), we get
$$n^{-1/p}\max_{1\le k\le n}\bigl|ES^{(1)}_{nk}\bigr|
\le n^{-1/p}\sum_{i=1}^{n}\Bigl(E|a_{ni}X_i|I(|a_{ni}X_i|\le n^{\alpha})+n^{\alpha}P(|a_{ni}X_i|>n^{\alpha})\Bigr)$$
$$\le n^{-1/p}\sum_{i=1}^{n}\Bigl(E\bigl(|a_{ni}X_i|^{q'(r-1)}|a_{ni}X_i|^{1-q'(r-1)}I(|a_{ni}X_i|\le n^{\alpha})\bigr)+n^{\alpha-\alpha q'(r-1)}E|a_{ni}X_i|^{q'(r-1)}\Bigr)$$
$$\ll n^{-1/p+\alpha-\alpha q'(r-1)}\sum_{i=1}^{n}E|a_{ni}X_i|^{q'(r-1)}\ll n^{-1/p+\alpha-\alpha q'(r-1)}\to 0,\quad n\to\infty. \quad (2.17)$$
(ii) For $1\le q(r-1)$, take $q<q'<p$. By (2.2), (2.4), and (2.15), we get
$$n^{-1/p}\max_{1\le k\le n}\bigl|ES^{(1)}_{nk}\bigr|
\le n^{-1/p}\sum_{i=1}^{n}\Bigl(E|a_{ni}X_i|I(|a_{ni}X_i|>n^{\alpha})+n^{\alpha}P(|a_{ni}X_i|>n^{\alpha})\Bigr)$$
$$\le n^{-1/p}\sum_{i=1}^{n}\Bigl(E\Bigl(|a_{ni}X_i|\Bigl(\frac{|a_{ni}X_i|}{n^{\alpha}}\Bigr)^{q'(r-1)-1}I(|a_{ni}X_i|>n^{\alpha})\Bigr)+n^{\alpha-\alpha q'(r-1)}E|a_{ni}X_i|^{q'(r-1)}\Bigr)
\ll n^{-1/p+\alpha-\alpha q'(r-1)}\to 0. \quad (2.18)$$
Hence, (2.16) holds. Therefore, to prove $I_1<\infty$ it suffices to prove that
$$I_1':=\sum_{n=1}^{\infty}n^{r-2}P\Bigl(\max_{1\le k\le n}\bigl|S^{(1)}_{nk}-ES^{(1)}_{nk}\bigr|>\varepsilon n^{1/p}\Bigr)<\infty,\quad\forall\varepsilon>0. \quad (2.19)$$
Note that $\{X^{(1)}_{ni};\ 1\le i\le n,\ n\ge 1\}$ is still ND by the definition of $X^{(1)}_{ni}$ and Lemma 1.6. Using the Markov inequality and Lemma 1.8, we get, for a suitably large $M$ to be determined later,
$$I_1'\ll\sum_{n=1}^{\infty}n^{r-2-M/p}E\max_{1\le k\le n}\bigl|S^{(1)}_{nk}-ES^{(1)}_{nk}\bigr|^{M}
\ll\sum_{n=1}^{\infty}n^{r-2-M/p}\log^{M}n\Bigl(\sum_{i=1}^{n}E\bigl|X^{(1)}_{ni}\bigr|^{M}+\Bigl(\sum_{i=1}^{n}E\bigl(X^{(1)}_{ni}\bigr)^{2}\Bigr)^{M/2}\Bigr)=:I_{11}+I_{12}. \quad (2.20)$$
Taking $M>\max\bigl(2,\ p(r-1)(1-\alpha q)/(1-\alpha p)\bigr)$, so that $r-2-M/p+\alpha M-\alpha q(r-1)<-1$, we get, by (2.4) and (2.15),
$$I_{11}\ll\sum_{n=1}^{\infty}n^{r-2-M/p}\log^{M}n\sum_{i=1}^{n}\Bigl(E|a_{ni}X_i|^{M}I(|a_{ni}X_i|\le n^{\alpha})+n^{M\alpha}P(|a_{ni}X_i|>n^{\alpha})\Bigr)$$
$$\ll\sum_{n=1}^{\infty}n^{r-2-M/p}\log^{M}n\sum_{i=1}^{n}n^{\alpha(M-q'(r-1))}E|a_{ni}X_i|^{q'(r-1)}
\ll\sum_{n=1}^{\infty}n^{r-2-M/p+\alpha M-\alpha q'(r-1)}\log^{M}n<\infty. \quad (2.21)$$
(i) For $q(r-1)<2$, take $q<q'<p$ such that $q'(r-1)<2$ and take $M>\max\bigl(2,\ 2p(r-1)/(2-2\alpha p+\alpha pq(r-1))\bigr)$, so that $r-2-M/p+\alpha M-M\alpha q(r-1)/2<-1$. From (2.4) and (2.15), we have
$$I_{12}\ll\sum_{n=1}^{\infty}n^{r-2-M/p}\log^{M}n\Bigl(\sum_{i=1}^{n}\bigl(n^{\alpha(2-q'(r-1))}E|a_{ni}X_i|^{q'(r-1)}+n^{2\alpha-\alpha q'(r-1)}E|a_{ni}X_i|^{q'(r-1)}\bigr)\Bigr)^{M/2}$$
$$\ll\sum_{n=1}^{\infty}n^{r-2-M/p+\alpha M-M\alpha q'(r-1)/2}\log^{M}n<\infty. \quad (2.22)$$
(ii) For $q(r-1)\ge 2$, take $q<q'<p$ and $M>\max\bigl(2,\ 2p(r-1)/(2-p\delta)\bigr)$, where $\delta$ is given by (2.3), so that $r-2-M/p+\delta M/2<-1$. From (2.3), (2.4), and (2.15), we get
$$I_{12}\ll\sum_{n=1}^{\infty}n^{r-2-M/p}\log^{M}n\Bigl(\sum_{i=1}^{n}a^{2}_{ni}+\sum_{i=1}^{n}n^{2\alpha-\alpha q'(r-1)}E|a_{ni}X_i|^{q'(r-1)}\Bigr)^{M/2}
\ll\sum_{n=1}^{\infty}n^{r-2-M/p+\delta M/2}\log^{M}n<\infty. \quad (2.23)$$
Since each nonzero summand of $\sum_{i=1}^{n}X^{(2)}_{ni}$ is smaller than $\varepsilon n^{1/p}/K$,
$$\Bigl(\sum_{i=1}^{n}X^{(2)}_{ni}>\varepsilon n^{1/p}\Bigr)=\Bigl(\sum_{i=1}^{n}(a_{ni}X_i-n^{\alpha})I\bigl(n^{\alpha}<a_{ni}X_i<\varepsilon n^{1/p}/K\bigr)>\varepsilon n^{1/p}\Bigr)
\subset\bigl(\text{there exist at least }K\text{ indices }k\text{ such that }a_{nk}X_k>n^{\alpha}\bigr), \quad (2.24)$$
and we have
$$P\Bigl(\sum_{i=1}^{n}X^{(2)}_{ni}>\varepsilon n^{1/p}\Bigr)\le\sum_{1\le i_1<i_2<\dots<i_K\le n}P\bigl(a_{ni_1}X_{i_1}>n^{\alpha},\ a_{ni_2}X_{i_2}>n^{\alpha},\dots,a_{ni_K}X_{i_K}>n^{\alpha}\bigr). \quad (2.25)$$
By Lemma 1.6, $\{a_{ni}X_i;\ 1\le i\le n,\ n\ge 1\}$ is still ND. Hence, for $q<q'<p$, we conclude that
$$P\Bigl(\sum_{i=1}^{n}X^{(2)}_{ni}>\varepsilon n^{1/p}\Bigr)\le\sum_{1\le i_1<\dots<i_K\le n}\prod_{j=1}^{K}P\bigl(a_{ni_j}X_{i_j}>n^{\alpha}\bigr)\le\Bigl(\sum_{i=1}^{n}P(|a_{ni}X_i|>n^{\alpha})\Bigr)^{K}$$
$$\le\Bigl(\sum_{i=1}^{n}n^{-\alpha q'(r-1)}E|a_{ni}X_i|^{q'(r-1)}\Bigr)^{K}\ll n^{-\alpha q'(r-1)K}, \quad (2.26)$$
via (2.4) and (2.15). Moreover, $X^{(2)}_{ni}\ge 0$ by the definition of $X^{(2)}_{ni}$, so $\max_{1\le k\le n}S^{(2)}_{nk}=S^{(2)}_{nn}$. Hence, by (2.26), taking $\alpha>0$ and $K$ such that $r-2-\alpha q(r-1)K<-1$, we have
$$I_2=\sum_{n=1}^{\infty}n^{r-2}P\Bigl(\sum_{i=1}^{n}X^{(2)}_{ni}>\varepsilon n^{1/p}\Bigr)\ll\sum_{n=1}^{\infty}n^{r-2-\alpha q(r-1)K}<\infty. \quad (2.27)$$
Similarly, $X^{(3)}_{ni}\le 0$ and $I_3<\infty$.
Last, we prove that $I_4<\infty$. Let $Y=K|X|/\varepsilon$. By the definition of $X^{(4)}_{ni}$, the event $\max_{1\le k\le n}|S^{(4)}_{nk}|>\varepsilon n^{1/p}$ requires some $X^{(4)}_{ni}\ne 0$, and hence, by (2.1),
$$P\Bigl(\max_{1\le k\le n}\bigl|S^{(4)}_{nk}\bigr|>\varepsilon n^{1/p}\Bigr)\le P\Bigl(\bigcup_{i=1}^{n}\bigl(a_{ni}|X_i|\ge\varepsilon n^{1/p}/K\bigr)\Bigr)\le\sum_{i=1}^{n}P\bigl(a_{ni}|X_i|\ge\varepsilon n^{1/p}/K\bigr)$$
$$=\sum_{j=1}^{\infty}\sum_{(j+1)^{-1}\le a^{p}_{ni}<j^{-1}}P\bigl(|Y|\ge n^{1/p}/a_{ni}\bigr)\le\sum_{j=1}^{\infty}\bigl(N(n,j+1)-N(n,j)\bigr)\sum_{l=nj}^{\infty}P\bigl(l\le|Y|^{p}<l+1\bigr)$$
$$=\sum_{l=n}^{\infty}\sum_{j=1}^{[l/n]}\bigl(N(n,j+1)-N(n,j)\bigr)P\bigl(l\le|Y|^{p}<l+1\bigr)\ll\sum_{l=n}^{\infty}\Bigl(\frac{l}{n}\Bigr)^{q(r-1)/p}P\bigl(l\le|Y|^{p}<l+1\bigr). \quad (2.28)$$
Combining this with (2.15),
$$I_4\ll\sum_{n=1}^{\infty}n^{r-2}\sum_{l=n}^{\infty}\Bigl(\frac{l}{n}\Bigr)^{q(r-1)/p}P\bigl(l\le|Y|^{p}<l+1\bigr)=\sum_{l=1}^{\infty}\Bigl(\sum_{n=1}^{l}n^{r-2-q(r-1)/p}\Bigr)l^{q(r-1)/p}P\bigl(l\le|Y|^{p}<l+1\bigr)$$
$$\ll\sum_{l=1}^{\infty}l^{r-1}P\bigl(l\le|Y|^{p}<l+1\bigr)\ll E|Y|^{p(r-1)}\ll E|X|^{p(r-1)}<\infty. \quad (2.29)$$
Now we prove (2.5)$\Rightarrow$(2.4). Since
$$\max_{1\le j\le n}|a_{nj}X_j|\le\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j}a_{ni}X_i\Bigr|+\max_{1\le j\le n}\Bigl|\sum_{i=1}^{j-1}a_{ni}X_i\Bigr|, \quad (2.30)$$
from (2.5) we have
$$\sum_{n=1}^{\infty}n^{r-2}P\Bigl(\max_{1\le j\le n}|a_{nj}X_j|>n^{1/p}\Bigr)<\infty. \quad (2.31)$$
Combining this with the hypotheses of Theorem 2.1,
$$P\Bigl(\max_{1\le j\le n}|a_{nj}X_j|>n^{1/p}\Bigr)\to 0,\quad n\to\infty. \quad (2.32)$$
Thus, for sufficiently large $n$,
$$P\Bigl(\max_{1\le j\le n}|a_{nj}X_j|>n^{1/p}\Bigr)<\frac12. \quad (2.33)$$
By Lemma 1.6, $\{a_{nj}X_j;\ 1\le j\le n,\ n\ge 1\}$ is still ND. Applying Lemma 1.10 and (2.33), we obtain
$$\sum_{k=1}^{n}P\bigl(|a_{nk}X_k|>n^{1/p}\bigr)\le 4cP\Bigl(\max_{1\le k\le n}|a_{nk}X_k|>n^{1/p}\Bigr). \quad (2.34)$$
Substituting this inequality into (2.31), we get
$$\sum_{n=1}^{\infty}n^{r-2}\sum_{k=1}^{n}P\bigl(|a_{nk}X_k|>n^{1/p}\bigr)<\infty. \quad (2.35)$$
So, reversing the argument in the proof of $I_4<\infty$,
$$E|X|^{p(r-1)}\ll\sum_{n=1}^{\infty}n^{r-2}\sum_{k=1}^{n}P\bigl(|a_{nk}X_k|>n^{1/p}\bigr)<\infty. \quad (2.36)$$

Proof of Theorem 2.2. Let $p=2$, $\alpha<1/2$, and $K>1/(2\alpha)$. Using the same notation and method as in the proof of Theorem 2.1, we only indicate the parts that differ.
Letting (2.7) take the place of (2.15), and arguing as in the proofs of (2.16) and (2.21), we obtain
$$n^{-1/2}\max_{1\le k\le n}\bigl|ES^{(1)}_{nk}\bigr|\ll n^{-1/2+\alpha-2\alpha(r-1)}\to 0,\quad n\to\infty. \quad (2.37)$$
Taking $M>\max(2,\ 2(r-1))$, we have
$$I_{11}\ll\sum_{n=1}^{\infty}n^{-1-(1-2\alpha)(M/2-(r-1))}\log^{M}n<\infty. \quad (2.38)$$
For $r-1\le 1$, taking $M>\max\bigl(2,\ 2(r-1)/(1-2\alpha+2\alpha(r-1))\bigr)$, we get
$$I_{12}\ll\sum_{n=1}^{\infty}n^{-1-(1-2\alpha+2\alpha(r-1))M/2+(r-1)}\log^{M}n<\infty. \quad (2.39)$$
For $r-1>1$, $EX^{2}<\infty$ follows from (2.8). Letting $M>2(r-1)^{2}$, by the Hölder inequality,
$$I_{12}\ll\sum_{n=1}^{\infty}n^{r-2-M/2}\log^{M}n\Bigl(\sum_{i=1}^{n}a^{2}_{ni}+\sum_{i=1}^{n}n^{2\alpha-2\alpha(r-1)}E|a_{ni}X_i|^{2(r-1)}\Bigr)^{M/2}$$
$$\ll\sum_{n=1}^{\infty}n^{r-2-M/2}\log^{M}n\Bigl(\Bigl(\sum_{i=1}^{n}a^{2(r-1)}_{ni}\Bigr)^{1/(r-1)}\Bigl(\sum_{i=1}^{n}1\Bigr)^{(r-2)/(r-1)}\Bigr)^{M/2}
\ll\sum_{n=1}^{\infty}n^{-1-M/(2(r-1))+(r-1)}\log^{M}n<\infty. \quad (2.40)$$
By the choice of $K$,
$$I_2\ll\sum_{n=1}^{\infty}n^{-1-(r-1)(2\alpha K-1)}<\infty. \quad (2.41)$$
Similarly to the proof of (2.29), we have
$$I_4\ll\sum_{l=1}^{\infty}\Bigl(\sum_{n=1}^{l}n^{-1}\Bigr)l^{r-1}P\bigl(l\le|Y|^{2}<l+1\bigr)\approx\sum_{l=1}^{\infty}l^{r-1}\log l\,P\bigl(l\le|Y|^{2}<l+1\bigr)
\ll E|Y|^{2(r-1)}\log|Y|\ll E|X|^{2(r-1)}\log|X|<\infty. \quad (2.42)$$
(2.9)$\Rightarrow$(2.8). Using the same method as in the necessity part of Theorem 2.1, we easily get
$$E|X|^{2(r-1)}\log|X|\ll\sum_{n=1}^{\infty}n^{r-2}\sum_{k=1}^{n}P\bigl(|a_{nk}X_k|>n^{1/2}\bigr)<\infty. \quad (2.43)$$

Acknowledgments

The author is very grateful to the referees and the editors for their valuable comments and helpful suggestions, which improved the clarity and readability of the paper. This work was supported by the National Natural Science Foundation of China (11061012), the Support Program of the New Century Guangxi Ten-Hundred-Thousand Talents Project (2005214), and the Guangxi, China Science Foundation (2010GXNSFA013120). Professor Dr. Qunying Wu's research interests are probability and statistics.