Abstract

The classical central limit theorem is regarded as the heart of probability and statistics. Our interest in this paper is in central limit theorems for functions of random variables under mixing conditions. We impose mixing conditions on the differences between the joint cumulative distribution functions and the products of the marginal cumulative distribution functions. Using characteristic functions, we obtain several limit theorems that extend previous results.

1. Introduction

The central limit theorem is one of the most remarkable results in probability theory [1] and is critical to understanding inferential statistics and hypothesis testing [2, 3]. The assumption of independence for a sequence of observations is often a technical convenience; real data frequently exhibit dependence, or at least some correlation at small lags. One extensively investigated kind of dependence is $m$-dependence (see, e.g., [4-8]), in which random variables are independent whenever they are more than $m$ steps apart. More general measures of dependence, called mixing conditions, arise from estimating the difference between the characteristic functions of averages of dependent and of independent random variables. These conditions admit some appealing physical interpretations. Various mixing conditions have been proposed; see, e.g., [9-14], to name just a few.
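For a concrete instance of $m$-dependence (a standard textbook example, included here only for illustration), let $\{Z_i\}_{i \ge 1}$ be i.i.d. random variables and set
\[
Y_i = Z_i + Z_{i+1}, \qquad i \ge 1 .
\]
Then $Y_i$ and $Y_j$ are independent whenever $|i - j| > 1$, so $\{Y_i\}$ is $1$-dependent, even though adjacent terms are correlated.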

Our main interest in this note is the central limit theorem for dependent classes of random variables. Following [12], instead of estimating the difference between the characteristic function of the sum of dependent random variables and that of the sum of independent random variables with the same distributions, we compute the exact value of this difference. Our results weaken the conditions imposed on the sequences of random variables in Theorems 1 and 2 of [12] and can be used to describe systems which are globally determined but locally random. It is noteworthy that the work [12] has also been extended in another direction, in which the sum of a random number of random variables is examined [15].

The rest of the paper is organized as follows. In Section 2, we present our central limit theorems, and in Section 3, we give the proofs.

2. Main Results

Before proceeding, we introduce some notation. For $a, b \in \mathbb{R}$, denote by $a \vee b$ and $a \wedge b$ the maximal and the minimal of the two values, respectively. Let $X$ and $Y$ be random variables. The image of $X$ is denoted by $\mathrm{Im}(X)$. We write $X_n \xrightarrow{d} X$ to mean that $X_n$ converges in distribution to $X$ as $n \to \infty$, that is, $P(X_n \le x) \to P(X \le x)$ at every continuity point $x$ of the limit distribution. Denote by $\mathcal{N}$ the standard normal variable. Let $E(X)$ (or simply $EX$) and $\mathrm{Var}(X)$ represent the mean and the variance of $X$, respectively.

Theorem 2.1. Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of identically distributed random variables and $\{f_n\}_{n=1}^{\infty}$ a sequence of measurable functions such that $Y_n = f_n(X_n)$ satisfies, for all $n$, either (2.1) or (2.2) for some $c > 0$ and $\alpha \in \mathbb{R}$. Assume that the $Y_n$ have finite means and variances and that $E|Y_n|^{2+\delta} < \infty$ for some $\delta > 0$. Let $S_n = \sum_{k=1}^{n} Y_k$ and $\sigma_n^2 = \mathrm{Var}(S_n)$. If (2.3) holds for sufficiently large $n$, where $i_1 < i_2 < \cdots < i_k$ is any choice of indices with $1 \le i_1 < \cdots < i_k \le n$, then $S_n/\sigma_n \xrightarrow{d} \mathcal{N}$ as $n \to \infty$.

Corollary 2.2. Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of identically distributed random variables and $\{f_n\}_{n=1}^{\infty}$ a sequence of measurable functions such that $Y_n = f_n(X_n)$ satisfies, for all $n$, expressions (2.1) or (2.2). Assume that the moment conditions of Theorem 2.1 hold. Suppose there exists a constant such that, for sufficiently large $n$, the inequality (2.4) holds, where $i_1 < \cdots < i_k$ is any choice of indices with $1 \le i_1 < \cdots < i_k \le n$. Then $S_n/\sigma_n \xrightarrow{d} \mathcal{N}$ as $n \to \infty$.

Generally, $\{f_n\}$ is a sequence of nonlinearly bounded functions. If we set $f_n(x) = x$ and $\alpha = 1$, then $Y_n = X_n$ for all $n$, which is the special case considered in [12] (Theorem 3). It is worth noting that in the conditions of Theorem 2.1 we used the power expression $x^{\alpha}$, where $\alpha$ is a real number. Naturally, we preclude the situation where the random variable takes negative values while $\alpha$ is not an integer, since such a power is undefined.

Fix $n$ and let $k$ be relatively small compared to $n$; then the right-hand side of (2.3) is close to zero. If we let $k$ be close to $n$, then the right-hand side of (2.3) is close to 1. Hence, the dependence condition (2.3) allows stronger dependence within a larger class of random variables and demands more independence within a smaller class. This structure can be used to describe systems which are globally determined but locally random.
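As a purely numerical illustration of this picture (our own sketch, not part of the formal development), one can simulate a dependent sequence and check that the standardized sums are approximately standard normal. The model below, a 1-dependent moving average with $\tanh$ playing the role of the bounded nonlinear function $f_n$, is an assumption chosen for simplicity rather than an instance of the exact condition (2.3):

    import numpy as np

    rng = np.random.default_rng(0)

    def standardized_sums(n, m=1, reps=20000):
        # Y_i = tanh(Z_i + Z_{i+m}) with iid uniform Z is m-dependent:
        # Y_i and Y_j are independent whenever |i - j| > m.
        Z = rng.uniform(-1.0, 1.0, size=(reps, n + m))
        X = Z[:, :n] + Z[:, m:m + n]
        Y = np.tanh(X)                   # bounded nonlinear function in the role of f_n
        S = Y.sum(axis=1)
        return (S - S.mean()) / S.std()  # empirically standardized sums

    T = standardized_sums(2000)
    for q in (0.05, 0.25, 0.50, 0.75, 0.95):
        print(f"q={q:.2f}: {np.quantile(T, q):+.3f}")

The printed quantiles should be close to the standard normal quantiles $-1.645$, $-0.674$, $0$, $0.674$, and $1.645$.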

Theorem 2.1 and Corollary 2.2 impose dependence conditions on an "event-to-event" basis, while the following analogous results treat dependence in an "average" sense.
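Schematically (our paraphrase of the distinction; the precise displays are (2.3) and (2.7)), both regimes control the discrepancy between the joint distribution function and the product of the marginals,
\[
\Delta_{i_1,\dots,i_k}(x_1,\dots,x_k) \;=\; F_{i_1,\dots,i_k}(x_1,\dots,x_k) \;-\; \prod_{j=1}^{k} F_{i_j}(x_j),
\]
either uniformly in the arguments (event-to-event),
\[
\sup_{x_1,\dots,x_k} \bigl|\Delta_{i_1,\dots,i_k}(x_1,\dots,x_k)\bigr|,
\]
or in an integrated sense (on average),
\[
\int \bigl|\Delta_{i_1,\dots,i_k}(x_1,\dots,x_k)\bigr|\, dx_1 \cdots dx_k .
\]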

Theorem 2.3. Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of identically distributed random variables and $\{f_n\}_{n=1}^{\infty}$ a sequence of measurable functions such that $Y_n = f_n(X_n)$ satisfies, for all $n$, either (2.7) or (2.8) for some $c > 0$ and $\alpha \in \mathbb{R}$. Assume, as in Theorem 2.1, that the $Y_n$ have finite means and variances and that $E|Y_n|^{2+\delta} < \infty$ for some $\delta > 0$. Let $S_n = \sum_{k=1}^{n} Y_k$ and $\sigma_n^2 = \mathrm{Var}(S_n)$. If (2.9) holds for sufficiently large $n$, where $i_1 < \cdots < i_k$ is any choice of indices with $1 \le i_1 < \cdots < i_k \le n$, then $S_n/\sigma_n \xrightarrow{d} \mathcal{N}$ as $n \to \infty$.

Corollary 2.4. Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of identically distributed random variables and $\{f_n\}_{n=1}^{\infty}$ a sequence of measurable functions such that $Y_n = f_n(X_n)$ satisfies, for all $n$, expressions (2.7) or (2.8). Assume that the moment conditions of Theorem 2.3 hold. Suppose there exists a constant such that, for sufficiently large $n$, the inequality (2.10) holds, where $i_1 < \cdots < i_k$ is any choice of indices with $1 \le i_1 < \cdots < i_k \le n$. Then $S_n/\sigma_n \xrightarrow{d} \mathcal{N}$ as $n \to \infty$.

3. Proofs

We prove Theorems 2.1 and 2.3 in this section through several lemmas; the proofs of the corollaries follow directly.

For convenience, denote by $\hat{f}_j$, $j = 1, \dots, n$, the product of all the $f_i$'s except $f_j$. The following lemma, regarding the difference between the characteristic functions of sums of dependent and of independent random variables, is stated in [12] without proof.

Lemma 3.1. Let $X_1, \dots, X_n$ be random variables and $f_1, \dots, f_n$ absolutely continuous integrable functions, with all the expectations appearing in (3.1) finite. Then one has the identity (3.1).

Proof. Consider first the case of two random variables: writing the expectations in terms of the joint and marginal distribution functions and integrating by parts, we obtain (3.2). Then, assuming the identity for $n$ variables, we derive from (3.2) the identity for $n + 1$ variables. The general result then follows by induction.
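For orientation, the two-variable case of such an identity is the classical Hoeffding-type covariance formula, recalled here from the general literature (it is not reproduced from (3.1)): for absolutely continuous $f$ and $g$ under suitable integrability,
\[
E\bigl[f(X)g(Y)\bigr] - Ef(X)\,Eg(Y) \;=\; \int_{\mathbb{R}^2} f'(x)\,g'(y)\,\bigl[P(X \le x,\, Y \le y) - P(X \le x)\,P(Y \le y)\bigr]\, dx\, dy .
\]
Iterating this computation produces identities for more variables, in the spirit of the induction above.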

The following lemma relates our assumptions to the Lindeberg condition.
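Recall, for the reader's convenience (standard background, not part of the original statement), that a sequence of independent random variables $\{\xi_k\}$ with $B_n^2 = \sum_{k=1}^{n} \mathrm{Var}(\xi_k)$ satisfies the Lindeberg condition if, for every $\varepsilon > 0$,
\[
\frac{1}{B_n^2} \sum_{k=1}^{n} E\Bigl[(\xi_k - E\xi_k)^2\, \mathbf{1}\bigl\{|\xi_k - E\xi_k| > \varepsilon B_n\bigr\}\Bigr] \longrightarrow 0 \qquad (n \to \infty),
\]
in which case $\sum_{k=1}^{n} (\xi_k - E\xi_k)/B_n \xrightarrow{d} \mathcal{N}$ by the Lindeberg-Feller theorem.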

Lemma 3.2. Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of identically distributed random variables and $\{f_n\}_{n=1}^{\infty}$ satisfy, for all $n$, expressions (2.1) or (2.2), with $Y_n = f_n(X_n)$. Assume, as in Theorem 2.1, that the $Y_n$ have finite means and variances and that $E|Y_n|^{2+\delta} < \infty$ for some $\delta > 0$. Let $S_n = \sum_{k=1}^{n} Y_k$ and $\sigma_n^2 = \mathrm{Var}(S_n)$. Then there exists a sequence of reals such that (3.4), (3.5), and (3.6) hold.

Proof. Without loss of generality, we may assume that the $Y_n$ are centered at expectations, that is, $EY_n = 0$. Indeed, set $Y_n' = Y_n - EY_n$; then $EY_n' = 0$, and by the assumptions (2.1) and (2.2) the centered variables satisfy bounds of the same form: if (2.1) holds for $Y_n$, then a bound of the same type holds for $Y_n'$, and likewise under (2.2). Furthermore, the moment assumptions are inherited by the $Y_n'$. Hence, we may assume $EY_n = 0$ without loss of generality.
Choose a sequence of reals as required and define the associated truncated variables for each $n$. We will show that the constructed sequence is the desired one.
Since the $X_n$'s are identically distributed, by virtue of the bounds in (2.1) and (2.2) we obtain the corresponding moment estimates for the truncated variables.
By our assumptions, the relevant quantity vanishes as $n \to \infty$, which verifies (3.4). We turn to (3.5) and the estimate (3.11). The factor in front of the integral in (3.11) tends to zero, while the integral itself is bounded by our assumptions; consequently, (3.5) readily holds.
Now we verify (3.6). We start from the decomposition (3.12). By the Markov inequality (see, e.g., [1]), the truncation probability in (3.13) vanishes as $n \to \infty$. In (3.14), the first integral converges in view of (3.13); therefore, the second integral tends to zero, which, together with (3.12), finally verifies (3.6).
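For reference, the Markov inequality invoked above states that, for any random variable $X$ and any $a > 0$ and $r > 0$,
\[
P(|X| \ge a) \le \frac{E|X|^r}{a^r},
\]
which is how the truncation probabilities are controlled by the moment assumptions.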

Proof of Theorem 2.1. We assume throughout that $EY_n = 0$. For large enough $n$, we have (3.15). In what follows, we suppose for notational convenience that the relevant ratio is an integer. In the sequel, we take the specific sequence constructed in Lemma 3.2, under which (3.15) takes the corresponding simplified form.
We define a sequence of auxiliary random variables as in (3.16); in particular, they are centered and their variances behave as required. The inequality (2.3) then implies, for large enough $n$, the bound (3.17). Invoking Lemma 3.2, the associated remainder terms vanish as $n \to \infty$.
Next, we calculate the difference between the characteristic functions of the sums of independent and of dependent random variables. Applying Lemma 3.1 with a suitable choice of functions, we obtain (3.18), where the second inequality comes from (3.17) and $C$ is a positive constant. Since the controlling quantities vanish as $n \to \infty$, the bound (3.18) tends to $0$ as $n \to \infty$.
Define a sequence of independent random variables such that each has the same distribution as the corresponding variable constructed above. By our construction, for sufficiently large $n$, (3.19) holds. Invoking Lemma 3.2, the truncated sequence satisfies the Lindeberg condition; accordingly, the sequence of independent copies also satisfies the Lindeberg condition. Thus, the central limit theorem holds for the independent sums and, since (3.18) approaches $0$, it also holds for the dependent sums; that is, (3.20). Taking into account the definition of the auxiliary variables, by (3.18) we know that (3.21) holds. On the other hand, it is obvious that (3.22) holds. The first quantity on the right-hand side of (3.22) is bounded above, via (3.15), by a term that vanishes as $n \to \infty$; this is (3.23). Combining (3.5), (3.22), and (3.23), we obtain (3.24). Therefore, (3.21) and (3.24) imply (3.25), which concludes the proof of Theorem 2.1.
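The passage from (3.18) to the conclusion is the standard continuity-theorem argument. Schematically (ignoring the truncation bookkeeping handled in (3.21)-(3.25), and writing $T_n$ for the sum of the independent copies, a label we introduce here only for exposition), for each fixed $t \in \mathbb{R}$,
\[
\bigl| E e^{itS_n/\sigma_n} - e^{-t^2/2} \bigr| \;\le\; \bigl| E e^{itS_n/\sigma_n} - E e^{itT_n/\sigma_n} \bigr| + \bigl| E e^{itT_n/\sigma_n} - e^{-t^2/2} \bigr| \longrightarrow 0,
\]
the first term by (3.18) and the second by the Lindeberg-Feller central limit theorem; Lévy's continuity theorem then gives $S_n/\sigma_n \xrightarrow{d} \mathcal{N}$.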

Proof of Theorem 2.3. We define the truncated variables, the auxiliary variables, and the independent copies as in the proof of Theorem 2.1. We obtain the estimate (3.26), where $C$ is a positive constant. Since the controlling quantities vanish as $n \to \infty$, the expression (3.26) tends to $0$ as $n \to \infty$. The remaining part of the proof proceeds as in the proof of Theorem 2.1.