Abstract

The goal of this paper is to establish mean ($L^p$) convergence and almost sure convergence for ND (negatively dependent) sequences of random variables in sublinear expectation space. By using the basic definitions of sublinear expectation space together with the Markov and $C_r$ inequalities, we extend mean convergence and almost sure convergence theorems to ND sequences of random variables under sublinear expectation space, and we provide an approach to studying this subject.

1. Introduction

Classical probability theorems are widely used in many fields, but they hold only under model certainty. In finance, however, uncertainties arise, such as measures of risk, nonlinear stochastic calculus, and statistics. In this setting, nonadditive probabilities and nonadditive expectations are useful tools for studying uncertainty and nonlinear stochastic calculus. In order to solve such problems, Professor Shige Peng [13] proposed the theory of sublinear expectation and $G$-expectation in 2008, and since then the sublinear expectation space has attracted much attention from scholars.

The limit theory of nonadditive probabilities and nonlinear expectations is a challenging question of interest. Under the framework of Peng, many limit theorems have gradually been established: Zhang [4–8] studied some inequalities under sublinear expectation spaces, some limit theorems for sublinear expectation spaces, and Marcinkiewicz's strong law of large numbers for nonlinear expectations; Bayraktar and Munk [9] obtained an $\alpha$-stable limit theorem under sublinear expectation; Xu and Zhang [10] established a three-series theorem for independent random variables under sublinear expectations, with applications; Wu and Jiang [11] researched the strong law of large numbers and Chover's law of the iterated logarithm under sublinear expectations. In the last two years, research on convergence under the sublinear expectation space has remained very active. Ze and Zhou [12] discussed convergence of random variables under sublinear expectations; Gao et al. [13] established a strong law of large numbers for negatively dependent and nonidentically distributed random variables in the framework of sublinear expectation; Wu and Lu [14] obtained another form of Chover's law of the iterated logarithm under sublinear expectations; Chen and Zhang [15] gave an elementary proof of Peng's central limit theorem under sublinear expectations.

Complete convergence is a strong mode of convergence, first proposed by Hsu and Robbins [16] in 1947. In the classical probability space, complete convergence, almost sure convergence, mean convergence, and convergence in probability have been widely used, and many scholars have studied them [17–20]. In the probability space, these convergence problems have been studied quite thoroughly, but mean convergence and almost sure convergence still need to be developed under sublinear expectation. Because the expectation is only subadditive, dealing with these problems brings some difficulties. We mainly establish mean convergence and almost sure convergence for ND random variables under sublinear expectation and generalize the results of [21] to the sublinear expectation space.

This paper is divided into four parts. The first part mainly introduces the background of sublinear expectation spaces and some existing research results. The second part mainly introduces definitions and lemmas of sublinear expectation spaces that we need to use. The third part mainly describes theorems and remarks. The fourth part mainly explains the proof process of the theorems.

2. Preliminaries

We use the framework and notions of Peng [2]. Let $(\Omega, \mathcal{F})$ be a given measurable space, and let $\mathcal{H}$ be a linear space of real functions defined on $\Omega$ such that if $X_1, X_2, \ldots, X_n \in \mathcal{H}$, then $\varphi(X_1, X_2, \ldots, X_n) \in \mathcal{H}$ for each $\varphi \in C_{l,\mathrm{Lip}}(\mathbb{R}^n)$, where $C_{l,\mathrm{Lip}}(\mathbb{R}^n)$ denotes the linear space of (local Lipschitz) functions $\varphi$ satisfying
$$|\varphi(x) - \varphi(y)| \le C(1 + |x|^m + |y|^m)|x - y|, \quad \forall x, y \in \mathbb{R}^n,$$
for some $C > 0$ and $m \in \mathbb{N}$ depending on $\varphi$. $\mathcal{H}$ is considered as a space of random variables. In this case, we denote $X \in \mathcal{H}$.

In this paper, $C$ denotes a positive constant, and the positive constants represented in different places may be different.

Definition 1. (see [2]). A sublinear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a function $\hat{\mathbb{E}}: \mathcal{H} \to [-\infty, +\infty]$ satisfying the following properties: for all $X, Y \in \mathcal{H}$, we have the following: (a) Monotonicity: if $X \ge Y$, then $\hat{\mathbb{E}}[X] \ge \hat{\mathbb{E}}[Y]$. (b) Constant preserving: $\hat{\mathbb{E}}[c] = c$ for $c \in \mathbb{R}$. (c) Subadditivity: $\hat{\mathbb{E}}[X + Y] \le \hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[Y]$, whenever $\hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[Y]$ is not of the form $+\infty - \infty$ or $-\infty + \infty$. (d) Positive homogeneity: $\hat{\mathbb{E}}[\lambda X] = \lambda \hat{\mathbb{E}}[X]$ for $\lambda \ge 0$. Here, $\mathbb{R} = (-\infty, +\infty)$, and the triple $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is called a sublinear expectation space.
Given a sublinear expectation $\hat{\mathbb{E}}$, let us denote the conjugate expectation $\hat{\varepsilon}$ of $\hat{\mathbb{E}}$ by
$$\hat{\varepsilon}[X] := -\hat{\mathbb{E}}[-X], \quad \forall X \in \mathcal{H}.$$
From the definition, it is easily shown that, for all $X, Y \in \mathcal{H}$, $\hat{\varepsilon}[X] \le \hat{\mathbb{E}}[X]$, $\hat{\mathbb{E}}[X + c] = \hat{\mathbb{E}}[X] + c$ for $c \in \mathbb{R}$, and $|\hat{\mathbb{E}}[X] - \hat{\mathbb{E}}[Y]| \le \hat{\mathbb{E}}[|X - Y|]$.
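Concretely, a sublinear expectation can be realized as an upper expectation $\sup_P E_P[X]$ over a family of probability measures. The following is a minimal numerical sketch of Definition 1 and the conjugate expectation; the two-point sample space, the family of measures, and the random variables are hypothetical illustrations, not taken from the paper.

```python
# Sketch: a sublinear expectation as an upper expectation over a finite
# family of probability vectors on a two-point sample space {0, 1}.

def upper_expectation(x, measures):
    """E[X] = max over P in `measures` of sum_i P(i) * X(i)."""
    return max(sum(p_i * x_i for p_i, x_i in zip(p, x)) for p in measures)

def conjugate_expectation(x, measures):
    """Conjugate (lower) expectation: e[X] = -E[-X]."""
    return -upper_expectation([-v for v in x], measures)

measures = [(0.5, 0.5), (0.2, 0.8)]     # two hypothetical probability vectors
X, Y = (1.0, 3.0), (2.0, -1.0)          # random variables on {0, 1}
E = lambda Z: upper_expectation(Z, measures)

# (a) Monotonicity: X + 1 >= X pointwise, so E[X + 1] >= E[X].
assert E((2.0, 4.0)) >= E(X)
# (b) Constant preserving: E[c] = c.
assert E((5.0, 5.0)) == 5.0
# (c) Subadditivity: E[X + Y] <= E[X] + E[Y].
assert E((3.0, 2.0)) <= E(X) + E(Y)
# (d) Positive homogeneity: E[2X] = 2 E[X].
assert abs(E((2.0, 6.0)) - 2 * E(X)) < 1e-12
# The conjugate expectation never exceeds the sublinear expectation.
assert conjugate_expectation(X, measures) <= E(X)
```

Note that $E$ here is additive only when the family contains a single measure; with two measures, strict subadditivity can occur, which is exactly the feature that separates sublinear expectation from classical expectation.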

Definition 2. (see [1]). If $\mathcal{G} \subset \mathcal{F}$, a function $V: \mathcal{G} \to [0, 1]$ is called a capacity if the following hold: (1) $V(\emptyset) = 0$, $V(\Omega) = 1$. (2) $V(A) \le V(B)$ for all $A \subset B$, $A, B \in \mathcal{G}$. If for any $A, B \in \mathcal{G}$ with $A \cup B \in \mathcal{G}$, there is $V(A \cup B) \le V(A) + V(B)$, then it is said that $V$ is subadditive. In the sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$, the upper capacity $\mathbb{V}$ and the lower capacity $\mathcal{V}$ are defined by
$$\mathbb{V}(A) := \inf\{\hat{\mathbb{E}}[\xi] : I_A \le \xi, \ \xi \in \mathcal{H}\}, \quad \mathcal{V}(A) := 1 - \mathbb{V}(A^c), \quad A \in \mathcal{F},$$
where $A^c$ is the complement of $A$.
$\mathbb{V}$ has subadditivity, and
$$\mathbb{V}(A) \le \hat{\mathbb{E}}[\xi] \quad \text{whenever } I_A \le \xi, \ \xi \in \mathcal{H}.$$
If $f \le I_A \le g$ with $f, g \in \mathcal{H}$, then
$$\hat{\mathbb{E}}[f] \le \mathbb{V}(A) \le \hat{\mathbb{E}}[g];$$
from this, for $X \in \mathcal{H}$, $x > 0$, and $p > 0$, we can get the Markov inequality
$$\mathbb{V}(|X| \ge x) \le \frac{\hat{\mathbb{E}}[|X|^p]}{x^p}.$$
In addition to the Markov inequality, the $C_r$ inequality is also used in the following chapters. Let $X, Y$ be random variables; then
$$\hat{\mathbb{E}}[|X + Y|^r] \le c_r(\hat{\mathbb{E}}[|X|^r] + \hat{\mathbb{E}}[|Y|^r]),$$
and among them, $c_r = 1$ for $0 < r \le 1$ and $c_r = 2^{r-1}$ for $r > 1$.
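The Markov inequality above can be checked numerically in the same upper-expectation model; the three-point sample space, the family of probability vectors, and the random variable below are hypothetical illustrations, not from the paper.

```python
# Sketch of the Markov inequality under the upper capacity V(A) = E[I_A]:
# V(|X| >= x) <= E[|X|^p] / x^p, with E an upper expectation over a
# hypothetical finite family of probability vectors.

def E(x, measures):
    """Upper expectation over a finite family of probability vectors."""
    return max(sum(p_i * x_i for p_i, x_i in zip(p, x)) for p in measures)

measures = [(0.25, 0.25, 0.5), (0.1, 0.6, 0.3)]
X = (-1.0, 2.0, 4.0)      # a random variable on a three-point space
x, p = 2.0, 2             # threshold and moment order

indicator = tuple(1.0 if abs(v) >= x else 0.0 for v in X)
capacity = E(indicator, measures)                           # V(|X| >= x)
moment_bound = E(tuple(abs(v) ** p for v in X), measures) / x ** p
assert capacity <= moment_bound      # the Markov inequality holds
```

The key point is that both sides are computed with the same supremum over measures, so the classical one-measure proof ($I(|X| \ge x) \le |X|^p / x^p$ pointwise, then monotonicity) carries over verbatim.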

Definition 3. (see [4]). (1) A sublinear expectation $\hat{\mathbb{E}}$ is said to be countably subadditive if
$$\hat{\mathbb{E}}[X] \le \sum_{n=1}^{\infty} \hat{\mathbb{E}}[X_n] \quad \text{whenever } X \le \sum_{n=1}^{\infty} X_n, \ X, X_n \in \mathcal{H}, \ X_n \ge 0, \ n \ge 1.$$
(2) A capacity $V$ is said to be countably subadditive if, for all $A_n \in \mathcal{F}$, $n \ge 1$,
$$V\Big(\bigcup_{n=1}^{\infty} A_n\Big) \le \sum_{n=1}^{\infty} V(A_n).$$
Under normal circumstances, $\mathbb{V}$ is not countably subadditive, so it is necessary to define the outer capacity $\mathbb{V}^*$.

Definition 4. (see [4]). For $A \in \mathcal{F}$, define
$$\mathbb{V}^*(A) := \inf\Big\{\sum_{n=1}^{\infty} \mathbb{V}(A_n) : A \subset \bigcup_{n=1}^{\infty} A_n, \ A_n \in \mathcal{F}\Big\}.$$
From the definition, it is known that $\mathbb{V}^*$ is countably subadditive, and $\mathbb{V}^*$ has the following properties: (a) $\mathbb{V}^*(A) \le \mathbb{V}(A)$ for all $A \in \mathcal{F}$. (b) If $\mathbb{V}(A) = 0$ for some $A \in \mathcal{F}$, then $\mathbb{V}^*(A) = 0$; furthermore, if $\mathbb{V}$ is countably subadditive, then $\mathbb{V}^* = \mathbb{V}$. (c) $\mathbb{V}^*$ is the largest countably subadditive capacity dominated by $\mathbb{V}$.

Definition 5. (see [5], negative dependence). In a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$, a random vector $Y = (Y_1, \ldots, Y_n)$ is said to be negatively dependent (ND) on another random vector $X = (X_1, \ldots, X_m)$ under $\hat{\mathbb{E}}$ if for each pair of test functions $\varphi_1 \in C_{l,\mathrm{Lip}}(\mathbb{R}^m)$ and $\varphi_2 \in C_{l,\mathrm{Lip}}(\mathbb{R}^n)$, we have
$$\hat{\mathbb{E}}[\varphi_1(X)\varphi_2(Y)] \le \hat{\mathbb{E}}[\varphi_1(X)]\hat{\mathbb{E}}[\varphi_2(Y)],$$
whenever either $\varphi_1$ and $\varphi_2$ are both coordinate-wise nondecreasing or both coordinate-wise nonincreasing, with $\varphi_1(X) \ge 0$, $\hat{\mathbb{E}}[\varphi_2(Y)] \ge 0$, and $\varphi_1(X)\varphi_2(Y) \in \mathcal{H}$.
ND random variables: a sequence of random variables $\{X_n; n \ge 1\}$ in a sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is said to be ND if $X_{n+1}$ is ND on $(X_1, \ldots, X_n)$ for each $n \ge 1$.
Let $\{X_n; n \ge 1\}$ be a sequence of ND random variables and let $f_1, f_2, \ldots \in C_{l,\mathrm{Lip}}(\mathbb{R})$ all be nondecreasing (resp., all nonincreasing) functions; then $\{f_n(X_n); n \ge 1\}$ is also a sequence of ND random variables.

Definition 6. Let $\{X_n; n \ge 1\}$ be a sequence of random variables in $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$; we have several different types of convergence: (1) (Ze [12]) The sequence $\{X_n\}$ is said to converge completely to $X$ if for any $\varepsilon > 0$, $\sum_{n=1}^{\infty} \mathbb{V}(|X_n - X| \ge \varepsilon) < \infty$, which is denoted by $X_n \to X$ completely. (2) (Wu [11]) The sequence $\{X_n\}$ is said to converge to $X$ almost surely $\mathbb{V}$ (a.s. $\mathbb{V}$), denoted by $X_n \to X$ a.s. $\mathbb{V}$ as $n \to \infty$, if $\mathbb{V}(X_n \nrightarrow X) = 0$. Here $\mathbb{V}$ can be replaced by $\mathcal{V}$ and $\mathbb{V}^*$, respectively. By $\mathcal{V}(A) \le \mathbb{V}(A)$ for any $A \in \mathcal{F}$, it is obvious that a.s. $\mathbb{V}$ implies a.s. $\mathcal{V}$, but a.s. $\mathcal{V}$ does not imply a.s. $\mathbb{V}$. Furthermore, $\mathbb{V}(X_n \nrightarrow X) = 0$ is equivalent to $\mathcal{V}(X_n \to X) = 1$. In the probability space, $P(X_n \nrightarrow X) = 0$ gives $P(X_n \to X) = 1$. But this does not necessarily hold under sublinear expectation; that is to say, $\mathbb{V}(X_n \nrightarrow X) = 0$ does not imply $\mathbb{V}(X_n \to X) = 1$. We can actually only get $\mathcal{V}(X_n \to X) = 1$; because of this, we cannot define almost sure convergence with $\mathbb{V}(X_n \to X) = 1$. (3) (Ze [12]) The sequence $\{X_n\}$ is said to converge to $X$ in $L^p$ ($p > 0$) if $\lim_{n \to \infty} \hat{\mathbb{E}}[|X_n - X|^p] = 0$, which is denoted by $X_n \xrightarrow{L^p} X$. (4) (Ze [12]) The sequence $\{X_n\}$ is said to converge to $X$ in capacity if for any $\varepsilon > 0$, $\lim_{n \to \infty} \mathbb{V}(|X_n - X| \ge \varepsilon) = 0$, which is denoted by $X_n \xrightarrow{\mathbb{V}} X$. By Borel–Cantelli's lemma, we can get that complete convergence implies a.s. $\mathbb{V}$ convergence when $\mathbb{V}$ is countably subadditive.
By the Markov inequality, we can obtain that $L^p$ convergence implies convergence in capacity $\mathbb{V}$.
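This implication is a one-line consequence of the Markov inequality: if $X_n \xrightarrow{L^p} X$, then for every $\varepsilon > 0$,

```latex
\mathbb{V}(|X_n - X| \ge \varepsilon)
  \le \frac{\hat{\mathbb{E}}[|X_n - X|^p]}{\varepsilon^{p}}
  \longrightarrow 0 \quad (n \to \infty),
```

so $X_n \xrightarrow{\mathbb{V}} X$.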

Lemma 1. (see [4], Borel–Cantelli's lemma). Let $\{A_n; n \ge 1\}$ be a sequence of events in $\mathcal{F}$. Suppose that $\mathbb{V}$ is a countably subadditive capacity. If $\sum_{n=1}^{\infty} \mathbb{V}(A_n) < \infty$, then
$$\mathbb{V}\Big(\bigcap_{n=1}^{\infty} \bigcup_{i=n}^{\infty} A_i\Big) = 0.$$
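The proof is the same one-line argument as in the classical case, using only the monotonicity and countable subadditivity of $\mathbb{V}$: for every $n$,

```latex
\mathbb{V}\Big(\bigcap_{k=1}^{\infty}\bigcup_{i=k}^{\infty} A_i\Big)
  \le \mathbb{V}\Big(\bigcup_{i=n}^{\infty} A_i\Big)
  \le \sum_{i=n}^{\infty} \mathbb{V}(A_i)
  \longrightarrow 0 \quad (n \to \infty),
```

since the tail of a convergent series tends to zero.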

Lemma 2 (see [5], Corollary 2.2; [6], Theorem 1). Let $\{X_n; n \ge 1\}$ be a sequence of random variables in $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$. Suppose that $X_{n+1}$ is negatively dependent on $(X_1, \ldots, X_n)$ for each $n \ge 1$. Then,
We know that the indicator function may be discontinuous, while only functions in $C_{l,\mathrm{Lip}}$ are defined on $\mathcal{H}$ under the sublinear expectation space, so a constructed function is needed to correct the discontinuity of the indicator function. Now, for $0 < \mu < 1$, let $g \in C_{l,\mathrm{Lip}}(\mathbb{R})$ be a function, nonincreasing in $|x|$, such that $0 \le g(x) \le 1$ for all $x$, $g(x) = 1$ if $|x| \le \mu$, and $g(x) = 0$ if $|x| > 1$. Then,
$$I(|x| \le \mu) \le g(x) \le I(|x| \le 1), \quad I(|x| > 1) \le 1 - g(x) \le I(|x| > \mu).$$
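A concrete choice of such a function $g$ can be checked numerically; this particular piecewise-linear form is an illustration, not taken from the paper.

```python
# Sketch of the smoothing function g: Lipschitz, 0 <= g <= 1, g(x) = 1 for
# |x| <= mu, g(x) = 0 for |x| > 1, so g sits between the indicators
# I(|x| <= mu) and I(|x| <= 1).

def g(x, mu):
    """Lipschitz surrogate for the indicator I(|x| <= mu), 0 < mu < 1."""
    a = abs(x)
    if a <= mu:
        return 1.0
    if a > 1.0:
        return 0.0
    return (1.0 - a) / (1.0 - mu)   # linear ramp from 1 down to 0 on (mu, 1]

mu = 0.5
for x in (-2.0, -0.8, -0.3, 0.0, 0.4, 0.7, 1.0, 1.5):
    ind_mu = 1.0 if abs(x) <= mu else 0.0     # I(|x| <= mu)
    ind_one = 1.0 if abs(x) <= 1.0 else 0.0   # I(|x| <= 1)
    # Sandwich: I(|x| <= mu) <= g(x) <= I(|x| <= 1),
    # equivalently I(|x| > 1) <= 1 - g(x) <= I(|x| > mu).
    assert ind_mu <= g(x, mu) <= ind_one
```

Because $g$ is Lipschitz, it belongs to $C_{l,\mathrm{Lip}}(\mathbb{R})$, so $\hat{\mathbb{E}}[g(X)]$ is well defined and can replace the (possibly undefined) expectation of the indicator in the proofs.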

3. Main Results

Theorem 1. Suppose that and are positive integer sequences, is an array of row-wise ND random variables, and . Let and be two increasing sequences of positive constants with and as . For some , satisfying

we can get

that is to say,

Remark 1. We extend the main conclusions of Shen [21] to sublinear expectation spaces. Because the inequality of Lemma 2 concerns ND random variables, the conclusion of Shen [21] is not extended here to END random variables. Condition (19) differs from that of Shen [21] because the indicator function does not necessarily belong to $\mathcal{H}$ in the sublinear expectation space, so it needs to be replaced by the function $g$.

Theorem 2. Assume that and are positive integer sequences, is an array of row-wise ND random variables, and is countably subadditive. Let and be two increasing sequences of positive constants with as and . For some , satisfying (18) and

we can have

In particular, if , then

Remark 2. Almost sure convergence under the sublinear expectation space is defined through convergence of the capacity; the capacity is divided into the upper capacity and the lower capacity, and almost sure convergence under the upper capacity implies almost sure convergence under the lower capacity, while the converse does not hold. What we prove as almost sure convergence under the sublinear expectation space is therefore almost sure convergence under the upper capacity. In order to adapt to the sublinear expectation space and to prove Theorem 2, combined with the function $g$, we change the condition of Shen [21] to formula (22). In the sublinear expectation space, almost sure convergence differs from that in the probability space: generally speaking, the limit does not necessarily exist, and only under an additional condition does the limit exist. Therefore, our conclusion is divided into three parts, namely, formulas (23)–(25).

4. Proof

Proof of Theorem 1. For convenience, denotes that there exists a constant such that for sufficiently large . For an array of row-wise ND random variables , to ensure that the truncated random variables are also ND, we require that the truncation functions belong to $C_{l,\mathrm{Lip}}(\mathbb{R})$. For all , we define

By (17), we can easily deduce that

We know

Paying attention to for , it is easy to see that

To obtain (21) in , we first show that as ; according to , it suffices to show that as . Noting that as and , by (15) and (18),

Therefore, as . Next, we prove that as ; similar to (28), using the $C_r$ inequality, we have

For , combining (15), (27), and (19), we can obtain

We know that is an array of row-wise ND random variables; using instead of in , there is

For , it is easy to see that and for ; according to the $C_r$ inequality, (19), (27), and (33), we have

For , note that , (19), and (27). We conclude that

From (30), (32), (34), and (35), we can easily get , and as in , so

Finally, we have to prove that as . It shall be noted that is an array of row-wise ND random variables; using instead of in (37), we can get

According to and the condition of ,

So, we have , as . Combining (28), we can get (21). In other words, Theorem 1 is proved.

Proof of Theorem 2. In the proof of Theorem 2, we still use the notation of Theorem 1. In order to prove (23), we need to show

We consider , using the Markov inequality and (16). Noting that , and (18), for some , any , we have

We know that is countably subadditive; combining Lemma 1 (Borel–Cantelli's lemma) and letting , we have . Now, it suffices to verify that

By (22) and (27), we conclude that

Hence, there exists ; we use the Markov inequality, (15) of Lemma 2, and the $C_r$ inequality because of (22) and (27); then, for any ,

We know that is countably subadditive and is arbitrary, which together with Lemma 1 (Borel–Cantelli's lemma) implies .
Finally, we prove . Similarly, combining (43), we obtain . Then, we obtain (23). Using instead of in (23), we can get (24). When , there is (25); that is, Theorem 2 is proved.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper was supported by the National Natural Science Foundation of China (12061028) and the Support Program of the Guangxi China Science Foundation (2018GXNSFAA281011 and 2018GXNSFAA294131).