Discrete Dynamics in Nature and Society

Research Article | Open Access

Volume 2021 |Article ID 5529109 | https://doi.org/10.1155/2021/5529109

Qunying Wu, "Complete Convergence for END Random Variables under Sublinear Expectations", Discrete Dynamics in Nature and Society, vol. 2021, Article ID 5529109, 10 pages, 2021. https://doi.org/10.1155/2021/5529109

Complete Convergence for END Random Variables under Sublinear Expectations

Academic Editor: Maria Alessandra Ragusa
Received: 01 Feb 2021
Revised: 25 Feb 2021
Accepted: 24 Apr 2021
Published: 10 May 2021

Abstract

In this paper, complete convergence theorems for partial sums and weighted sums of extended negatively dependent random variables in sublinear expectation spaces are studied and established. Our results extend the corresponding results of classical probability spaces to the setting of sublinear expectation spaces.

1. Introduction

Classical probability and statistics are modeled on linear expectation and probability, and an important theoretical basis of both is additivity. In many application fields, however, uncertainty cannot be described adequately within an additive framework, and new tools are needed to model such phenomena. The sublinear expectation space established by Peng ([1], [2]) is one such tool. Sublinear expectation spaces are very useful for modeling uncertainty; they are widely used in statistics, finance, and insurance, and have been studied by many probabilists and statisticians since their introduction. For example, Peng ([1], [2]) gave the basic structure, basic properties, and central limit theorem of sublinear expectation spaces. Denis and Martini ([6]) studied a theoretical framework for the pricing of contingent claims in the presence of model uncertainty. Limit theorems under sublinear expectations have recently attracted considerable interest. Marinacci ([7]) and Zhang (2016a [8], 2016b [9], 2016c [10]) established some inequalities and almost sure convergence theorems. Hu (2016 [11]), Chen (2016 [12]), Wu and Jiang (2018 [13]), and Wu and Lu (2020 [14]) also studied strong limit theorems and the law of the iterated logarithm.

The concept of complete convergence of a sequence of random variables was introduced by Hsu and Robbins (1947 [15]). In view of the Borel–Cantelli lemma, complete convergence implies almost sure convergence. Therefore, complete convergence is one of the most important problems in probability limit theory. Many related results have been obtained in the probability space setting, for example, Wu (2010 [16]), Hu et al. (2012 [17]), Wang et al. (2014 [18]), Wang et al. (2015 [19]), Liu et al. (2015 [20]), Chen and Sung (2016 [21]), Tan et al. (2016 [22]), Wu and Jiang (2016 [23]), and Shen et al. (2017 [24]). Recently, Yu and Wu (2018 [25]), Wang and Wu (2019 [26]), and Deng and Wang (2020 [27]) obtained complete convergence theorems under sublinear expectations. In this paper, we study complete convergence for extended negatively dependent random variables under sublinear expectations. As a result, the corresponding complete convergence theorems of the probability space are generalized to the sublinear expectation space setting.

Because the sublinear expectation space relaxes the additivity of probability and expectation required in traditional probability spaces, it broadens the scope of application, and it is of great theoretical and practical significance to extend the limit theorems of traditional probability to sublinear expectation spaces. At the same time, because the sublinear expectation is nonadditive, the study of its limit theory becomes considerably more complex and difficult.

2. Preliminaries

In the sublinear expectation space, many basic concepts need to be defined, such as sublinear expectation, capacity, independence, and identical distribution. This section gives a brief introduction to these concepts; details can be found in the works of Peng (1997 [1], 1999 [2]).

Let $(\Omega, \mathcal{F})$ be a given measurable space and let $\mathcal{H}$ be a linear space of real functions defined on $(\Omega, \mathcal{F})$ such that if $X_1, X_2, \ldots, X_n \in \mathcal{H}$, then $\varphi(X_1, X_2, \ldots, X_n) \in \mathcal{H}$ for each $\varphi \in C_{l,\mathrm{Lip}}(\mathbb{R}^n)$, where $C_{l,\mathrm{Lip}}(\mathbb{R}^n)$ denotes the linear space of local Lipschitz functions.

Definition 1. A sublinear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a function $\hat{\mathbb{E}}: \mathcal{H} \to \mathbb{R}$ satisfying the following properties: for all $X, Y \in \mathcal{H}$, we have
(i) Monotonicity: if $X \ge Y$, then $\hat{\mathbb{E}}[X] \ge \hat{\mathbb{E}}[Y]$;
(ii) Constant preserving: $\hat{\mathbb{E}}[c] = c$ for $c \in \mathbb{R}$;
(iii) Subadditivity: $\hat{\mathbb{E}}[X + Y] \le \hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[Y]$;
(iv) Positive homogeneity: $\hat{\mathbb{E}}[\lambda X] = \lambda \hat{\mathbb{E}}[X]$ for $\lambda \ge 0$.
The triple $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$ is called a sublinear expectation space.
The conjugate expectation $\hat{\varepsilon}$ of $\hat{\mathbb{E}}$ is defined by $\hat{\varepsilon}[X] := -\hat{\mathbb{E}}[-X]$ for all $X \in \mathcal{H}$.
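A standard example from Peng's framework (illustrative here, not specific to this paper): the upper envelope of a family $\{P_\theta;\ \theta \in \Theta\}$ of probability measures,

```latex
\hat{\mathbb{E}}[X] := \sup_{\theta \in \Theta} E_{P_\theta}[X], \qquad X \in \mathcal{H},
```

is a sublinear expectation: monotonicity and constant preserving are inherited from each linear expectation $E_{P_\theta}$, subadditivity follows from $\sup_\theta (a_\theta + b_\theta) \le \sup_\theta a_\theta + \sup_\theta b_\theta$, and positive homogeneity is clear. Its conjugate is the lower envelope $\hat{\varepsilon}[X] = \inf_{\theta \in \Theta} E_{P_\theta}[X]$.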

Definition 2. Let $\mathcal{G} \subset \mathcal{F}$. A function $V: \mathcal{G} \to [0, 1]$ is called a capacity if $V(\emptyset) = 0$, $V(\Omega) = 1$, and $V(A) \le V(B)$ for all $A \subseteq B$, $A, B \in \mathcal{G}$. It is called subadditive if $V(A \cup B) \le V(A) + V(B)$ for all $A, B \in \mathcal{G}$ with $A \cup B \in \mathcal{G}$. In the sublinear expectation space $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$, we denote a pair of capacities by $\mathbb{V}(A) := \hat{\mathbb{E}}[I_A]$ and $\mathcal{V}(A) := 1 - \mathbb{V}(A^c)$, $A \in \mathcal{F}$, where $I_A$ denotes the indicator function of $A$ and $A^c$ is the complement set of $A$. By the definitions of $\mathbb{V}$ and $\mathcal{V}$, it is obvious that $\mathbb{V}$ is subadditive and $\mathcal{V}(A) \le \mathbb{V}(A)$ for any $A \in \mathcal{F}$.
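Writing $\mathbb{V}(A) = \hat{\mathbb{E}}[I_A]$ and $\mathcal{V}(A) = 1 - \mathbb{V}(A^c)$ for the pair of capacities of Definition 2, the subadditivity of $\mathbb{V}$ and the bound $\mathcal{V} \le \mathbb{V}$ are one-line consequences of Definition 1:

```latex
\mathbb{V}(A \cup B) = \hat{\mathbb{E}}[I_{A \cup B}]
  \le \hat{\mathbb{E}}[I_A + I_B]
  \le \hat{\mathbb{E}}[I_A] + \hat{\mathbb{E}}[I_B]
  = \mathbb{V}(A) + \mathbb{V}(B),
```

using $I_{A \cup B} \le I_A + I_B$ together with monotonicity and subadditivity; similarly, $1 = \hat{\mathbb{E}}[I_A + I_{A^c}] \le \mathbb{V}(A) + \mathbb{V}(A^c)$ yields $\mathcal{V}(A) = 1 - \mathbb{V}(A^c) \le \mathbb{V}(A)$.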
The corresponding Choquet integrals/expectations $(C_{\mathbb{V}}, C_{\mathcal{V}})$ are defined by
$C_V(X) := \int_0^\infty V(X > x)\,\mathrm{d}x + \int_{-\infty}^0 (V(X > x) - 1)\,\mathrm{d}x$,
where $V$ is replaced by $\mathbb{V}$ and $\mathcal{V}$, respectively.
Some basic useful properties of sublinear expectation spaces are given below; they are described in detail in the works of Wu and Jiang (2018 [13]).

Proposition 1.
(i) For all ,
(ii) If , then for any .
(iii)
(iv) Markov inequality: for any $X \in \mathcal{H}$ and any $x > 0$, $p > 0$, $\mathbb{V}(|X| \ge x) \le \hat{\mathbb{E}}[|X|^p] / x^p$.
(v) Hölder inequality: for $p, q > 1$ satisfying $1/p + 1/q = 1$, $\hat{\mathbb{E}}[|XY|] \le (\hat{\mathbb{E}}[|X|^p])^{1/p} (\hat{\mathbb{E}}[|Y|^q])^{1/q}$.
(vi) Jensen inequality: for $0 < r \le s$, $(\hat{\mathbb{E}}[|X|^r])^{1/r} \le (\hat{\mathbb{E}}[|X|^s])^{1/s}$.
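As a numerical illustration (ours, not from the paper), the subadditivity of Definition 1 and the Markov and Jensen inequalities of Proposition 1 can be checked for a toy sublinear expectation built as the upper envelope of finitely many discrete laws; all names below (`models`, `sublinear_E`) are ours.

```python
import numpy as np

# Toy sublinear expectation: the upper envelope max_theta E_theta[f(X)]
# over a finite family of probability vectors on a finite sample space.
omega = np.array([-2.0, -0.5, 1.0, 3.0])   # values taken by X
models = np.array([                         # each row: one probability law
    [0.10, 0.40, 0.40, 0.10],
    [0.25, 0.25, 0.25, 0.25],
    [0.05, 0.15, 0.30, 0.50],
])

def sublinear_E(values):
    """Upper expectation: maximum of the linear expectations over all laws."""
    return float(np.max(models @ values))

X, Y = omega, omega ** 2

# Subadditivity (Definition 1(iii)): E[X + Y] <= E[X] + E[Y]
assert sublinear_E(X + Y) <= sublinear_E(X) + sublinear_E(Y) + 1e-12

# Markov inequality (Proposition 1): V(|X| >= x) <= E[|X|^p] / x^p,
# with the capacity V(A) modeled as the upper expectation of the indicator of A
x, p = 1.5, 2
V_A = sublinear_E((np.abs(X) >= x).astype(float))
assert V_A <= sublinear_E(np.abs(X) ** p) / x ** p + 1e-12

# Jensen inequality (Proposition 1): (E[|X|^r])^(1/r) is nondecreasing in r
m1 = sublinear_E(np.abs(X)) ** (1 / 1.0)
m2 = sublinear_E(np.abs(X) ** 2) ** (1 / 2.0)
assert m1 <= m2 + 1e-12
print("all three inequalities hold")
```

Each linear expectation in the family is additive, but their maximum is only subadditive, which is exactly what makes $\mathbb{V}$ a capacity rather than a probability.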

Definition 3 (see [2, 8]). (i) Identical distribution: let $X_1$ and $X_2$ be two random variables defined, respectively, in the sublinear expectation spaces $(\Omega_1, \mathcal{H}_1, \hat{\mathbb{E}}_1)$ and $(\Omega_2, \mathcal{H}_2, \hat{\mathbb{E}}_2)$. They are called identically distributed if $\hat{\mathbb{E}}_1[\varphi(X_1)] = \hat{\mathbb{E}}_2[\varphi(X_2)]$ for all $\varphi \in C_{l,\mathrm{Lip}}(\mathbb{R})$, whenever the subexpectations are finite. A sequence of random variables $\{X_n;\ n \ge 1\}$ is said to be identically distributed if, for each $i \ge 1$, $X_i$ and $X_1$ are identically distributed.
(ii) Extended negatively dependent: a sequence of random variables $\{X_n;\ n \ge 1\}$ is said to be upper (resp. lower) extended negatively dependent if there is some dominating constant $K \ge 1$ such that $\hat{\mathbb{E}}[\prod_{i=1}^n g_i(X_i)] \le K \prod_{i=1}^n \hat{\mathbb{E}}[g_i(X_i)]$, $n \ge 1$, whenever the nonnegative functions $g_i \in C_{b,\mathrm{Lip}}(\mathbb{R})$, $i = 1, 2, \ldots$, are all nondecreasing (resp. all nonincreasing). They are called extended negatively dependent if they are both upper extended negatively dependent and lower extended negatively dependent.
It is obvious that if $\{X_n;\ n \ge 1\}$ is a sequence of upper (resp. lower) extended negatively dependent random variables and the functions $f_1, f_2, \ldots \in C_{l,\mathrm{Lip}}(\mathbb{R})$ are all nondecreasing (resp. all nonincreasing), then $\{f_n(X_n);\ n \ge 1\}$ is also a sequence of upper (resp. lower) extended negatively dependent random variables.
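The closing claim of Definition 3 is immediate from the fact that compositions of nondecreasing functions are nondecreasing: if the $f_i$ are nondecreasing and the nonnegative $g_i \in C_{b,\mathrm{Lip}}(\mathbb{R})$ are nondecreasing, then each $g_i \circ f_i$ is again nonnegative and nondecreasing, so the defining inequality applies directly:

```latex
\hat{\mathbb{E}}\Bigl[\prod_{i=1}^{n} g_i\bigl(f_i(X_i)\bigr)\Bigr]
  \le K \prod_{i=1}^{n} \hat{\mathbb{E}}\bigl[g_i\bigl(f_i(X_i)\bigr)\bigr],
  \qquad n \ge 1,
```

which is exactly the upper extended negative dependence of $\{f_n(X_n);\ n \ge 1\}$; the nonincreasing case is symmetric.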
In the following, let $\{X_n;\ n \ge 1\}$ be a sequence of random variables in $(\Omega, \mathcal{H}, \hat{\mathbb{E}})$, and set $S_n := \sum_{i=1}^n X_i$. The symbol $c$ stands for a generic positive constant which may differ from one place to another. Let $a_n \ll b_n$ denote that there exists a constant $c > 0$ such that $a_n \le c b_n$ for sufficiently large $n$.
To prove our results, we need the following two lemmas.

Lemma 1 (see Theorem 3.1 in [8]). Let be a sequence of upper extended negatively dependent random variables in with . Then, for any ,

Lemma 2. Suppose .
(i) Then, for any ,
(ii) If , then, for any constant and integer ,

Proof. (i) For any , by the definition of and a substitution of the integral variable, we obtain
Because is nonnegative and monotonically decreasing, the integral test yields
Also, note that
This implies that
Hence,
Combining with (16), we obtain (12).
(ii) Note that, for any integer ,
Hence, by (12), implies (13).
By and , we have
This is equivalent to
Therefore, (14) holds.
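The series–integral comparison behind part (i) can be sketched as follows (a standard step, stated here in our notation): since $x \mapsto \mathbb{V}(|X|^p > x)$ is nonincreasing,

```latex
\sum_{n=1}^{\infty} \mathbb{V}\bigl(|X| > n^{1/p}\bigr)
  = \sum_{n=1}^{\infty} \mathbb{V}\bigl(|X|^p > n\bigr)
  \le \int_0^{\infty} \mathbb{V}\bigl(|X|^p > x\bigr)\,\mathrm{d}x
  = C_{\mathbb{V}}\bigl(|X|^p\bigr),
```

so $C_{\mathbb{V}}(|X|^p) < \infty$ forces the series on the left to converge.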

3. Complete Convergence Theorems

In the sublinear expectation space, we define complete convergence as follows.

Definition 4. A sequence of random variables $\{X_n;\ n \ge 1\}$ is said to converge completely to the constant $\lambda$, denoted by $X_n \to \lambda$ completely as $n \to \infty$, if $\sum_{n=1}^\infty \mathbb{V}(|X_n - \lambda| > \varepsilon) < \infty$ for any $\varepsilon > 0$.
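As in the classical Hsu–Robbins setting, complete convergence is stronger than almost sure convergence: writing $X_n \to \lambda$ completely for the convergence in Definition 4 and assuming $\mathbb{V}$ is countably subadditive, a Borel–Cantelli argument gives, for every $\varepsilon > 0$,

```latex
\sum_{n=1}^{\infty} \mathbb{V}\bigl(|X_n - \lambda| > \varepsilon\bigr) < \infty
  \;\Longrightarrow\;
  \mathbb{V}\Bigl(\bigcap_{m=1}^{\infty} \bigcup_{n=m}^{\infty}
    \{|X_n - \lambda| > \varepsilon\}\Bigr) = 0,
```

i.e., $X_n \to \lambda$ outside a set of capacity zero.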
The purpose of this paper is to extend corresponding results from the probability space to the sublinear expectation space. Our results are as follows.

Theorem 1. Let be a sequence of upper extended negatively dependent and identically distributed random variables. Assume that, for some ,
Then,
Furthermore, if is extended negatively dependent, then
In particular, if is extended negatively dependent and , then

Remark 1. It is known from Lemma 4.5 (iii) in [9] that if or is countably subadditive, then .

Theorem 2. Let be a sequence of extended negatively dependent and identically distributed random variables. Assume that, for some ,
Then,

Remark 2. Theorems 1 and 2 extend the corresponding complete convergence results from the probability space to the sublinear expectation space.

Proof of Theorem 1. By the Jensen inequality, for . Without loss of generality, we can assume that .
For upper extended negatively dependent random variables , in order to ensure that the truncated random variables are also upper extended negatively dependent, we need the truncation functions to belong to and to be nondecreasing. Let for any ; for any , let
Then, is also a sequence of upper extended negatively dependent random variables, since and are nondecreasing. Define
Then,
where .
Therefore, in order to prove (26), it suffices to verify that, for any ,
It should be noted that, in the probability space, one has the equality ; however, in the sublinear expectation space, is defined through continuous functions in , and the indicator function may be discontinuous. Therefore, the expression may not exist, and the indicator function needs to be modified by functions in . To this end, we define the following function.
For , let be a nonincreasing function such that for all , if , and if . Then,
Note that
By (23) and (24), (36) follows:
Now, we prove (34). Let . Then, . Fix . Let . By , (24), and the Jensen inequality, we obtain
It follows that
Hence, by , and , we obtain
Since is also upper extended negatively dependent, it follows from (25) that
By the Markov inequality,
Let and .
If , i.e., , then, by the definition of , we have and . Hence,
If , i.e., , then, by the definition of , we have and . Note that for sufficiently large from . Hence,
That is, (34) holds from
Next, we prove . By (24),
where . It should be noted that identical distribution is defined under , not under (see Definition 3). Identical distribution implies for , but does not imply . Therefore, in the calculation of , we need to convert to . Thus, by (5) and (37),
Therefore, from (12) and (23),
Lastly, we prove . Since
The last inclusion relation above holds because, if there are fewer than 4 indices such that , then, in the sum , there are fewer than 4 indices such that , and each term , so . This contradicts .
Therefore, by the definition of extended negative dependence (5), (24), and (37), we have
Thus,
from (14). Together with (50), we obtain (35). This completes the proof of (26).
Furthermore, if is extended negatively dependent, then ; moreover, satisfies the conditions of Theorem 1. Considering instead of in (26), for any , we obtain by ,
That is, (27) is established.

Proof of Theorem 2. We first prove that
For any , let , for all , and let
Then, is also a sequence of upper extended negatively dependent random variables, since and are nondecreasing. Note that
In order to prove (55), it suffices to verify that
We first prove (59). For any , by (37), we have
For every , there exists a such that ; thus, for any , by , the inequality, (37) and (60), and (61), we obtain
Hence, by (5) and (37) and , ,
Therefore, by from , for , we obtain
Next, we estimate . Note that, by (13),
It follows that
from and the Kronecker lemma. Combining this with (64), (59) is established.
Since is also upper extended negatively dependent with , it follows from Lemma 1, (63), and (13) that