Abstract
The complete moment convergence of weighted sums for arrays of rowwise φ-mixing random variables is investigated. By using a moment inequality and the truncation method, sufficient conditions for the complete moment convergence of weighted sums for arrays of rowwise φ-mixing random variables are obtained. The results of Ahmed et al. (2002) are complemented. As an application, the complete moment convergence of moving average processes based on a φ-mixing random sequence is obtained, which improves a result of Kim and Ko (2008).
1. Introduction
Hsu and Robbins [1] introduced the concept of complete convergence. A sequence of random variables $\{U_n, n\geq 1\}$ is said to converge completely to a constant $\theta$ if $\sum_{n=1}^{\infty}P(|U_n-\theta|>\varepsilon)<\infty$ for all $\varepsilon>0$. Moreover, they proved that the sequence of arithmetic means of independent identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. The converse theorem was proved by Erdős [2]. This result has been generalized and extended in several directions; see Baum and Katz [3], Chow [4], Gut [5], Taylor et al. [6], and Cai and Xu [7]. In particular, Ahmed et al. [8] obtained the following result in Banach space.
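As a quick numerical sanity check (not part of the original argument), the sketch below estimates the tail probabilities whose series defines complete convergence of the arithmetic means; the standard normal summands, sample sizes, and $\varepsilon=0.5$ are illustrative choices.

```python
import numpy as np

# Hsu-Robbins illustration: for i.i.d. summands with finite variance,
# P(|S_n/n| > eps) decays fast enough in n that the series converges.
rng = np.random.default_rng(0)
eps, reps = 0.5, 2000

probs = []
for n in (10, 50, 200):
    samples = rng.standard_normal((reps, n))    # mean 0, variance 1
    means = samples.mean(axis=1)                # S_n / n
    probs.append(np.mean(np.abs(means) > eps))  # empirical P(|S_n/n| > eps)

print(probs)  # decays rapidly in n
```

The rapid decay of the empirical tail probabilities is what makes the series $\sum_n P(|S_n/n|>\varepsilon)$ finite.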
Theorem A. Let $\{X_{ni}, i\geq1, n\geq1\}$ be an array of rowwise independent random elements in a separable real Banach space $(E,\|\cdot\|)$. Let $P(\|X_{ni}\|>x)\leq CP(|X|>x)$ for some random variable $X$, constant $C$ and all $n$, $i$ and $x>0$. Suppose that $\{a_{ni}, i\geq1, n\geq1\}$ is an array of constants such that $\sup_{i\geq1}|a_{ni}|=O(n^{-r})$ for some $r>0$ and $\sum_{i=1}^{\infty}|a_{ni}|=O(n^{\alpha})$ for some $\alpha\in[0,r)$. Let $\beta$ be such that $\alpha+\beta\neq-1$ and fix $\delta>0$ such that $1+\alpha/r<\delta\leq2$. Denote $s=\max(1+(\alpha+\beta+1)/r,\delta)$. If $E|X|^{s}<\infty$ and $\sum_{i=1}^{\infty}a_{ni}X_{ni}\to0$ in probability, then $\sum_{n=1}^{\infty}n^{\beta}P(\|\sum_{i=1}^{\infty}a_{ni}X_{ni}\|>\varepsilon)<\infty$ for all $\varepsilon>0$.
Chow [4] established the following refinement, which is a complete moment convergence result for sums of i.i.d. random variables.
Theorem B. Let $\{X_n, n\geq1\}$ be a sequence of i.i.d. random variables with $EX_1=0$, $S_n=\sum_{i=1}^{n}X_i$, and let $1\leq p<2$ and $r\geq p$. Suppose that $E\{|X_1|^{r}+|X_1|\log(1+|X_1|)\}<\infty$. Then $\sum_{n=1}^{\infty}n^{r/p-2-1/p}E(|S_n|-\varepsilon n^{1/p})_{+}<\infty$ for all $\varepsilon>0$.
The main purpose of this paper is to discuss again the above results for arrays of rowwise φ-mixing random variables. The author takes inspiration from [8] and discusses the complete moment convergence of weighted sums for arrays of rowwise φ-mixing random variables by applying truncation methods. The results of Ahmed et al. [8] are extended to the φ-mixing case. As an application, the corresponding results for moving average processes based on a φ-mixing random sequence are obtained, which extend and improve the result of Kim and Ko [9].
For the proof of the main results, we need to restate a few definitions and lemmas for easy reference. Throughout this paper, $C$ will represent a positive constant whose value may change from one place to another. The symbol $I(A)$ denotes the indicator function of the event $A$; $[x]$ indicates the largest integer not exceeding $x$. For a finite set $B$, the symbol $\#B$ denotes the number of elements in the set $B$.
Definition 1.1. A sequence of random variables $\{X_n, n\geq1\}$ is said to be a sequence of φ-mixing random variables if $\varphi(n)=\sup_{k\geq1}\sup\{|P(B\mid A)-P(B)| : A\in\mathcal{F}_1^{k},\, B\in\mathcal{F}_{k+n}^{\infty},\, P(A)>0\}\to0$ as $n\to\infty$, where $\mathcal{F}_n^{m}=\sigma(X_i,\, n\leq i\leq m)$.
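For intuition, the φ-mixing coefficient of a stationary finite-state Markov chain can be bounded using only the n-step transition matrix, since by the Markov property conditioning on the past reduces to conditioning on the present state. The two-state chain below is a hypothetical example; the geometric decay of the bound reflects the chain's second eigenvalue.

```python
import numpy as np

# A stationary two-state Markov chain (illustrative example).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2.0 / 3.0, 1.0 / 3.0])  # stationary distribution: pi @ P == pi

def phi_bound(n):
    """Upper bound on phi(n): worst-case deviation of the n-step
    transition probabilities from the stationary distribution."""
    Pn = np.linalg.matrix_power(P, n)
    return np.abs(Pn - pi).max()

bounds = [phi_bound(n) for n in (1, 5, 20)]
print(bounds)  # geometric decay; the second eigenvalue of P is 0.7
```

The bound tends to zero, so the chain satisfies the φ-mixing condition of Definition 1.1.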
Definition 1.2. A sequence of random variables $\{X_n, n\geq1\}$ is said to be stochastically dominated by a random variable $X$ (write $X_n\prec X$) if there exists a constant $C>0$ such that $P(|X_n|>x)\leq CP(|X|>x)$ for all $x\geq0$ and $n\geq1$.
The following lemma is a well-known result.
Lemma 1.3. Let the sequence of random variables $\{X_n, n\geq1\}$ be stochastically dominated by a random variable $X$. Then for any $\alpha>0$ and $b>0$, $E|X_n|^{\alpha}I(|X_n|\leq b)\leq C\{E|X|^{\alpha}I(|X|\leq b)+b^{\alpha}P(|X|>b)\}$ and $E|X_n|^{\alpha}I(|X_n|>b)\leq CE|X|^{\alpha}I(|X|>b)$.
Definition 1.4. A real-valued function $l(x)$, positive and measurable on $[A,\infty)$ for some $A>0$, is said to be slowly varying if $\lim_{x\to\infty}l(\lambda x)/l(x)=1$ for each $\lambda>0$.
By the properties of slowly varying functions, we can easily prove the following lemma; the details of the proof are omitted.
Lemma 1.5. Let $l(x)>0$ be a slowly varying function as $x\to\infty$. Then there exists $C$ (depending only on $s$) such that (i) $\sum_{n=1}^{k}n^{s}l(n)\leq Ck^{s+1}l(k)$ for any $s>-1$ and positive integer $k$; (ii) $\sum_{n=k}^{\infty}n^{s}l(n)\leq Ck^{s+1}l(k)$ for any $s<-1$ and positive integer $k$.
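Both the defining property of Definition 1.4 and bound (i) of Lemma 1.5 are easy to check numerically for a concrete slowly varying function; the sketch below uses $l(x)=\log x$ and $s=1/2$, which are illustrative choices (here the sum-to-bound ratio tends to $1/(s+1)$).

```python
import math

# l(x) = log x is slowly varying: l(2x)/l(x) -> 1 as x -> infinity.
sv = [math.log(2 * x) / math.log(x) for x in (1e3, 1e6, 1e12)]

# Lemma 1.5(i) with s = 1/2:  sum_{n=1}^{k} n^s l(n) <= C k^{s+1} l(k).
s = 0.5
ratios = []
for k in (100, 1000, 10000):
    total = sum(n ** s * math.log(n) for n in range(1, k + 1))
    ratios.append(total / (k ** (s + 1) * math.log(k)))

print(sv, ratios)  # sv -> 1; the ratios stay bounded, near 1/(s+1)
```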
The following lemma will play an important role in the proof of our main results. The proof is due to Shao [10].
Lemma 1.6. Let $\{X_n, n\geq1\}$ be a sequence of φ-mixing random variables with mean zero. Suppose that there exists a sequence of positive numbers $\{C_n\}$ such that $E(\sum_{i=k+1}^{k+n}X_i)^{2}\leq C_n$ for any $k\geq0$, $n\geq1$. Then for any $q\geq2$, there exists $C=C(q,\varphi(\cdot))$ such that $E\max_{1\leq m\leq n}|\sum_{i=k+1}^{k+m}X_i|^{q}\leq C(C_n^{q/2}+E\max_{k<i\leq k+n}|X_i|^{q})$.
Lemma 1.7. Let $\{X_n, n\geq1\}$ be a sequence of φ-mixing random variables with $EX_n=0$, $EX_n^{2}<\infty$, and $\sum_{n=1}^{\infty}\varphi^{1/2}(n)<\infty$. Then there exists $C$ such that for any $k\geq0$ and $n\geq1$, $E(\sum_{i=k+1}^{k+n}X_i)^{2}\leq C\sum_{i=k+1}^{k+n}EX_i^{2}$. (1.8)
Proof. By Lemma 5.4.4 in [11] and Hölder's inequality, for $i<j$ we have $|EX_iX_j|\leq2\varphi^{1/2}(j-i)(EX_i^{2})^{1/2}(EX_j^{2})^{1/2}\leq\varphi^{1/2}(j-i)(EX_i^{2}+EX_j^{2})$. Hence $E(\sum_{i=k+1}^{k+n}X_i)^{2}\leq\sum_{i=k+1}^{k+n}EX_i^{2}+2\sum_{k<i<j\leq k+n}\varphi^{1/2}(j-i)(EX_i^{2}+EX_j^{2})\leq(1+4\sum_{m=1}^{\infty}\varphi^{1/2}(m))\sum_{i=k+1}^{k+n}EX_i^{2}$. Therefore, (1.8) holds.
2. Main Results
Now we state our main results. The proofs will be given in Section 3.
Theorem 2.1. Let be an array of rowwise φ-mixing random variables with , and . Let be a slowly varying function, and let be an array of constants such that (a) If and there exists some such that , and , then implies (b) If , then implies
Remark 2.2. If , then implies that (2.2) holds. In fact,
Remark 2.3. Note that Therefore, from (2.5), we obtain that the complete moment convergence implies the complete convergence, that is, under the conditions of Theorem 2.1, result (2.2) implies and (2.3) implies
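The step behind Remark 2.3 is the following elementary inequality, stated here for a generic random variable $T$ and $\varepsilon>0$; it converts a bound on the truncated moment into a bound on the tail probability:

```latex
E(|T|-\varepsilon)_{+}
  = \int_{0}^{\infty} P\bigl(|T| > \varepsilon + t\bigr)\,dt
  \ge \int_{0}^{\varepsilon} P\bigl(|T| > \varepsilon + t\bigr)\,dt
  \ge \varepsilon\, P\bigl(|T| > 2\varepsilon\bigr).
```

Thus summability of the weighted truncated moments forces summability of the corresponding weighted tail probabilities, i.e., complete moment convergence implies complete convergence.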
Corollary 2.4. Under the conditions of Theorem 2.1, (1)if and there exists some such that , and , then implies (2)if , then implies
Corollary 2.5. Let be an array of rowwise φ-mixing random variables with , and . Suppose that is a slowly varying function. (1) Let and . If , then (2) Let . If , then
Corollary 2.6. Suppose that , where is a sequence of real numbers with , and is a sequence of φ-mixing random variables with , and . Let be a slowly varying function. (1) Let . If , then (2) Let . If , then
Remark 2.7. Corollary 2.6 obtains the result about the complete moment convergence of moving average processes based on a φ-mixing random sequence with different distributions. We extend the results of Chen et al. [12] from complete convergence to complete moment convergence. The result of Kim and Ko [9] is a special case of Corollary 2.6(1). Moreover, our result covers the case of , which was not considered by Kim and Ko.
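As a numerical illustration of the moving-average application (the coefficients, innovation law, and $\varepsilon$ below are assumptions made for the sketch, not the conditions of Corollary 2.6), the following estimates the tail probabilities of the normalized partial sums of a truncated two-sided moving average built on an independent, hence trivially φ-mixing, innovation sequence.

```python
import numpy as np

# Moving average X_k = sum_i a_{i+k} Y_i with absolutely summable
# coefficients; here a_j = 2^{-|j|} (truncated to |j| <= 20) and
# i.i.d. N(0,1) innovations Y_i.
rng = np.random.default_rng(1)
a = 0.5 ** np.abs(np.arange(-20, 21))  # sum |a_j| < infinity
eps, reps = 0.5, 1000

probs = []
for n in (10, 100, 400):
    Y = rng.standard_normal((reps, n + 40))
    X = np.array([np.convolve(y, a, mode="valid") for y in Y])  # X_1..X_n
    S = X.sum(axis=1)                                           # partial sums
    probs.append(np.mean(np.abs(S / n) > eps))

print(probs)  # tail probabilities of S_n/n decay rapidly in n
```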
3. Proofs of the Main Results
Proof of Theorem 2.1. Without loss of generality, we can assume
Let for any , , and . First note that implies for any . Therefore, for ,
Hence, for large enough we have . Then
Noting that , by Lemma 1.5, Markov's inequality, (1.6), and (3.1), we have
Now we estimate . Noting that , by Lemma 1.7, we have
By Lemma 1.6, Markov's inequality, the $C_r$ inequality, and (1.5), for any , we have
So,
From (3.4), we have .
For , we consider the following two cases.
If , then . Taking such that , we have
If , we choose such that . Taking such that , we have
So, .
Now, we estimate . Set . Then , where is the set of positive integers. Note also that for all ,
Hence, we have
Note that
Taking large enough such that , for , by Lemma 1.6 and (3.11), we get
For , we obtain
So . Finally, we prove . In fact, noting and , and using Markov's inequality and (3.1), we get
Thus, the proof of (a) is completed. Next, we prove (b). Note that implies that (3.2) holds. Therefore, in view of the proof of (a), to complete the proof of (b) we only need to prove
In fact, noting , , and , and taking in the proofs of (3.12), (3.13), and (3.14), we get
Then, by (3.17), we have
The proof of Theorem 2.1 is completed.
Proof of Corollary 2.4. Note that Therefore, (2.8) and (2.9) hold by Theorem 2.1.
Proof of Corollary 2.5. By applying Theorem 2.1, taking for and for , we obtain (2.10). Similarly, taking , for , and for , we obtain (2.11) by Theorem 2.1.
Proof of Corollary 2.6. Let and for all . Since , we have and . By applying Corollary 2.4, taking , , , we obtain Therefore, (2.12) and (2.13) hold.
Acknowledgment
This work was supported by the National Natural Science Foundation of China (nos. 11271020 and 11201004), the Key Project of the Chinese Ministry of Education (no. 211077), the Natural Science Foundation of the Education Department of Anhui Province (KJ2012ZD01), and the Anhui Provincial Natural Science Foundation (nos. 10040606Q30 and 1208085MA11).