Abstract

Let, for each $n$, $X_n$ be an $\mathbb{R}^d$-valued locally square integrable martingale w.r.t. a filtration $(\mathcal{F}_n(t),\ t\in\mathbb{R}_+)$ (probability spaces may be different for different $n$). It is assumed that the discontinuities of $X_n$ are in a sense asymptotically small as $n\to\infty$ and that the relation $\mathsf{E}\bigl(f(z\langle X_n\rangle(t)z^\top)\mid\mathcal{F}_n(s)\bigr)-f(z\langle X_n\rangle(t)z^\top)\xrightarrow{\mathsf{P}}0$ holds for all $t>s>0$, row vectors $z$, and bounded uniformly continuous functions $f$. Under these two principal assumptions and a number of technical ones, it is proved that the $X_n$'s are asymptotically conditionally Gaussian processes with conditionally independent increments. If, moreover, the compound processes $(X_n(0),\langle X_n\rangle)$ converge in distribution to some $(X^\circ,H)$, then the sequence $(X_n)$ converges in distribution to a continuous local martingale $X$ with initial value $X^\circ$ and quadratic characteristic $H$, whose finite-dimensional distributions are explicitly expressed via those of $(X^\circ,H)$.

1. Introduction

The theory of functional limit theorems for martingales may appear finalized in the monographs [1, 2]. This paper focuses on two points where the classical results can be refined.

(1) The convergence in distribution to a local martingale with $\mathcal{G}$-conditionally independent increments has hitherto been studied in a model where the $\sigma$-algebra $\mathcal{G}$ enters the setting along with the prelimit processes. This assumption is worse than restrictive: it is simply unnatural when one studies convergence in distribution rather than in probability. In the present paper, conditions ensuring asymptotic conditional independence of increments for a sequence of locally square integrable martingales are formulated in terms of the quadratic characteristics of the prelimit processes (Theorem 4.5). Our approach to proving this property is based on the idea of combining the Stone-Weierstrass theorem (actually a slight modification of it, Lemma 2.2) with an elementary probabilistic result, Lemma 2.4, which results in Corollaries 2.7 and 2.8. These corollaries, as well as Lemma 2.4 itself and the cognate Lemma 2.5, will be our tools.

(2) The main object of study in [1, 2] is the semimartingale, so some facts specific to local martingales are passed over. Thus, Theorem VI.6.1 and Corollary VI.6.7 in [2] assert that, under appropriate assumptions about semimartingales $Z_n$, the relation

$$Z_n\xrightarrow{\mathrm{law}}Z,\tag{$*$}$$

where $Z$ is also a semimartingale, entails the stronger one $(Z_n,[Z_n])\xrightarrow{\mathrm{law}}(Z,[Z])$ (below, the notation of convergence in law will be changed). For locally square integrable martingales, one can modify the problem as follows. Let relation ($*$) be fulfilled. What extra assumptions ensure that $Z$ is a continuous local martingale and

$$(Z_n,\langle Z_n\rangle)\xrightarrow{\mathrm{law}}(Z,\langle Z\rangle)?\tag{$**$}$$

There is neither an answer nor even this question in [1, 2]. A simple set of sufficient conditions is provided by Corollary 5.2 (weaker but not so simple conditions are given by Corollary 5.5). Recalling that the quadratic variation of a continuous local martingale coincides with its quadratic characteristic, we see that the last two relations together imply asymptotic proximity of $[Z_n]$ and $\langle Z_n\rangle$. Actually, this conclusion requires even fewer conditions than in Corollary 5.2. They are listed in Corollary 5.3.

The main results of the paper are, in a sense, converse to Corollaries 5.2 and 5.5. They deal with the problem: what conditions should be adjoined to $\langle Z_n\rangle\xrightarrow{\mathrm{law}}H$ in order to ensure ($**$), where $Z$ is a continuous local martingale with quadratic characteristic $H$? If the assumptions about the prelimit processes do not guarantee that $H$, performing as $\langle Z\rangle$, determines the distribution of $Z$, then results of this kind assert existence of convergent subsequences but not convergence of the whole sequence (Theorems 5.1 and 5.4). Combining Theorem 5.4 with Theorem 4.5, we obtain Theorem 5.6, asserting that the whole sequence converges to a continuous local martingale whose finite-dimensional distributions are explicitly expressed via those of its initial value and quadratic characteristic. The expression shows that the limiting process has conditionally independent increments, but this conclusion is nothing more than a comment to the theorem.

Proving the main results needs a lot of preparation. Those technical results which do not deal with the notion of martingale are gathered in Section 2 (excluding Section 2.1), and the more specialized ones are placed in Section 3. The reasoning in Sections 3 and 4 would be essentially simpler if we confined ourselves to quasicontinuous processes (for a locally square integrable martingale, this property is tantamount to continuity of its quadratic characteristic). To dispense with this restriction, we use a special technique sketched in Section 2.1.

All vectors are thought of, unless otherwise stated, as columns. The tensor square $xx^\top$ of $x\in\mathbb{R}^d$ will be otherwise written as $x^{\otimes2}$. We use the Euclidean norm $|\cdot|$ of vectors and the operator norm $\|\cdot\|$ of matrices. The symbols $\mathbb{R}^{d*}$, $\mathfrak{S}$, and $\mathfrak{S}^+$ signify: the space of $d$-dimensional row vectors, the class of all symmetric square matrices of a fixed size (in our case, $d$) with real entries, and its subclass of nonnegative (in the spectral sense) matrices, respectively.

By $\mathrm{C_b}(X)$, we denote the space of complex-valued bounded continuous functions on a topological space $X$. If $X=\mathbb{R}^k$ and the dimension $k$ is determined by the context or does not matter, then we write simply $\mathrm{C_b}$.

Our notation of classes of random processes follows [3]. In particular, $\mathcal{M}(\mathbb{F})$ and $\overline{\mathcal{M}}(\mathbb{F})$ signify the class of all martingales with respect to a filtration (= flow of $\sigma$-algebras) $\mathbb{F}\equiv(\mathcal{F}(t),\ t\in\mathbb{R}_+)$ and its subclass of uniformly integrable martingales. An $\mathbb{F}$-martingale $U$ will be called: square integrable if $\mathsf{E}|U(t)|^2<\infty$ for all $t$ and uniformly square integrable if $\sup_t\mathsf{E}|U(t)|^2<\infty$. The classes of such processes will be denoted $\mathcal{M}^2(\mathbb{F})$ and $\overline{\mathcal{M}}{}^2(\mathbb{F})$, respectively. The symbol $\mathbb{F}$ will be suppressed if the filtration either is determined by the context or does not matter. If $\mathcal{U}$ is a class of $\mathbb{F}$-adapted processes, then by $\ell\mathcal{U}$ we denote the respective local class (see [2, Definition I.1.33], where the notation $\mathcal{U}_{\mathrm{loc}}$ is used). Members of $\ell\mathcal{M}$ and $\ell\mathcal{M}^2$ are called local martingales and locally (better, local) square integrable martingales, respectively. All processes, except quadratic variations and quadratic characteristics, are implied $\mathbb{R}^d$-valued, where $d$ is chosen arbitrarily and fixed.

The integral $\int_0^t\varphi(s)\,\mathrm{d}X(s)$ will be written shortly (following [1, 2]) as $\varphi\cdot X(t)$ if this integral is pathwise (i.e., $X$ is a process of locally bounded variation) or $\varphi\circ X(t)$ if it is stochastic. We use properties of the stochastic integral and other basic facts of stochastic analysis without explanations, relegating the reader to [1–4]. The quadratic variation (see the definition in Section 2.3 of [3] or Definition I.4.45 together with Theorem I.4.47 in [2]) of a random process $\xi$ and the quadratic characteristic of $Z\in\ell\mathcal{M}^2$ will be (and already were) denoted $[\xi]$ and $\langle Z\rangle$, respectively. They take values in $\mathfrak{S}^+$, which, of course, does not preclude regarding them as $\mathbb{R}^{d^2}$-valued random processes.

2. Some Technical Results

The Stone-Weierstrass theorem (see, e.g., [5]) concerns compact spaces only. In the following two minor generalizations of it (for real-valued and complex-valued functions, resp.), both the compactness assumption and the conclusion (that the approximation is uniform on the whole space) are weakened. They are proved like their celebrated prototype if one argues for the restrictions of continuous functions to some compact subset fixed beforehand.

Lemma 2.1. Let $\mathfrak{A}$ be an algebra of real-valued bounded continuous functions on a topological space $T$. Suppose that $\mathfrak{A}$ separates points of $T$ and contains the absolute value of each of its members and the unity function. Then, for any real-valued bounded continuous function $F$, compact set $B\subset T$, and positive number $\varepsilon$, there exists a function $G\in\mathfrak{A}$ such that $\|G\|\le\|F\|$ and $\max_{x\in B}|F(x)-G(x)|<\varepsilon$.

Lemma 2.2. Let $\mathfrak{A}$ be an algebra of complex-valued bounded continuous functions on a topological space $T$. Suppose that $\mathfrak{A}$ separates points of $T$ and contains the conjugate of each of its members and the unity function. Then, for any complex-valued bounded continuous function $F$, compact set $B\subset T$, and positive number $\varepsilon$, there exists a function $G\in\mathfrak{A}$ such that $\|G\|\le\|F\|$ and $\max_{x\in B}|F(x)-G(x)|<\varepsilon$.

We consider henceforth sequences of random processes or random variables given, maybe, on different probability spaces. So, for the $n$th member of a sequence, $\mathsf{P}$ and $\mathsf{E}$ should be understood as $\mathsf{P}_n$ and $\mathsf{E}_n$. In what follows, "u.i." means "uniformly integrable".

Lemma 2.3. In order that a sequence of random variables be u.i., it is necessary and sufficient that each of its subsequences contain a u.i. subsequence.

Proof. Necessity is obvious; let us prove sufficiency.
Suppose that a sequence $(\alpha_n)$ is not u.i. Then there exists $a>0$ such that for all $N>0$

$$\varlimsup_{n\to\infty}\mathsf{E}|\alpha_n|I\{|\alpha_n|>N\}\ge2a.\tag{2.1}$$

Consequently, there exists an increasing sequence $(n_k)$ of natural numbers such that $\mathsf{E}\beta_kI\{\beta_k>k\}\ge a$, where $\beta_k=|\alpha_{n_k}|$. Then, for any infinite set $J\subset\mathbb{N}$ and $N>0$, we have

$$\varlimsup_{k\to\infty,\,k\in J}\mathsf{E}\beta_kI\{\beta_k>N\}\ge a,\tag{2.2}$$

which means that the subsequence $(\alpha_{n_k})$ does not contain u.i. subsequences.
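The dichotomy used in this proof can be made concrete. A minimal numerical sketch (not from the paper; the sequence and names are illustrative): for $\alpha_n=n\,I\{U<1/n\}$ with $U$ uniform on $(0,1)$, the tail expectation $\mathsf{E}|\alpha_n|I\{|\alpha_n|>N\}$ equals $1$ for every $n>N$, so no subsequence is u.i.

```python
from fractions import Fraction

def tail_expectation(n: int, N: int) -> Fraction:
    # alpha_n takes the value n with probability 1/n and 0 otherwise, so
    # E|alpha_n| I{|alpha_n| > N} = n * (1/n) = 1 whenever n > N
    return Fraction(n) * Fraction(1, n) if n > N else Fraction(0)

for N in (1, 10, 100):
    # every tail expectation along the sequence stays at 1, uniformly in N
    assert tail_expectation(2 * N + 1, N) == 1
    assert tail_expectation(10 * N, N) == 1
```

Exact rational arithmetic avoids floating-point equality issues; the computation mirrors the lower bound kept along the subsequence in (2.2).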

Lemma 2.4. Let, for each $n$, $\xi_{n1},\dots,\xi_{np}$ be random variables given on a probability space $(\Omega_n,\mathcal{F}_n,\mathsf{P}_n)$, and $\mathcal{L}_n$ a sub-$\sigma$-algebra of $\mathcal{F}_n$. Suppose that, for each $j\in\{1,\dots,p\}$,

$$\mathsf{E}(\xi_{nj}\mid\mathcal{L}_n)-\xi_{nj}\xrightarrow{\mathsf{P}}0\quad\text{as }n\to\infty,\tag{2.3}$$

and for any $J\subset\{1,\dots,p\}$ the sequence $(\prod_{j\in J}\xi_{nj},\ n\in\mathbb{N})$ is u.i. Then

$$\mathsf{E}\Bigl(\prod_{j=1}^p\xi_{nj}\mid\mathcal{L}_n\Bigr)-\prod_{j=1}^p\xi_{nj}\xrightarrow{\mathsf{P}}0.\tag{2.4}$$

Proof. Denote $\eta_{nj}=\mathsf{E}(\xi_{nj}\mid\mathcal{L}_n)$. By the second assumption, the sequences $(\xi_{nj},\ n\in\mathbb{N})$, $(\eta_{nj},\ n\in\mathbb{N})$, $j=1,\dots,p$, are stochastically bounded, which together with the first assumption yields

$$\prod_{j\in J}\eta_{nj}-\prod_{j\in J}\xi_{nj}\xrightarrow{\mathsf{P}}0,\tag{2.5}$$

for any $J\subset\{1,\dots,p\}$. Hence, writing the identity

$$\mathsf{E}(\xi_{n1}\xi_{n2}\mid\mathcal{L}_n)=\mathsf{E}\bigl((\xi_{n1}-\eta_{n1})(\xi_{n2}-\eta_{n2})\mid\mathcal{L}_n\bigr)+\eta_{n1}\eta_{n2},\tag{2.6}$$

and using both assumptions, we get (2.4) for $p=2$. For arbitrary $p$, this relation is proved by induction whose step coincides, up to notation, with the above argument.

The proof of the next statement is similar.

Lemma 2.5. Let $\alpha_n$ and $\beta_n$ be random variables given on a probability space $(\Omega_n,\mathcal{F}_n,\mathsf{P}_n)$ and $\mathcal{L}_n$ a sub-$\sigma$-algebra of $\mathcal{F}_n$. Suppose that

$$\alpha_n-\mathsf{E}(\alpha_n\mid\mathcal{L}_n)\xrightarrow{\mathsf{P}}0,\tag{2.7}$$

and the sequences $(\alpha_n)$, $(\beta_n)$, and $(\alpha_n\beta_n)$ are u.i. Then

$$\mathsf{E}(\alpha_n\beta_n\mid\mathcal{L}_n)-\alpha_n\mathsf{E}(\beta_n\mid\mathcal{L}_n)\xrightarrow{\mathsf{P}}0.\tag{2.8}$$

Lemma 2.6. Let, for each $n$, $\Xi_n$ be an $\mathbb{R}^k$-valued random variable given on a probability space $(\Omega_n,\mathcal{F}_n,\mathsf{P}_n)$, and $\mathcal{L}_n$ a sub-$\sigma$-algebra of $\mathcal{F}_n$. Suppose that

$$\lim_{N\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\{|\Xi_n|>N\}=0,\tag{2.9}$$

and the relation

$$\mathsf{E}(F(\Xi_n)\mid\mathcal{L}_n)-F(\Xi_n)\xrightarrow{\mathsf{P}}0\tag{2.10}$$

holds for all $F$ from some class of complex-valued bounded continuous functions on $\mathbb{R}^k$ which separates points of the latter. Then it holds for all $F\in\mathrm{C_b}$.

Proof. Let $\mathfrak{A}$ denote the class of all complex-valued bounded continuous functions on $\mathbb{R}^k$ satisfying (2.10). Obviously, it is linear. By Lemma 2.4, it contains the product of any two of its members. So $\mathfrak{A}$ is an algebra. By assumption, it separates points of $\mathbb{R}^k$. The other two conditions of Lemma 2.2 are satisfied trivially. Thus, that lemma asserts that, for any $F\in\mathrm{C_b}$, $N>0$, and $\varepsilon>0$, there exists a function $G\in\mathfrak{A}$ such that $\|G\|\le\|F\|$ and $\max_{|x|\le N}|F(x)-G(x)|<\varepsilon$. Then

$$|F(\Xi_n)-G(\Xi_n)|I\{|\Xi_n|\le N\}<\varepsilon,\qquad|F(\Xi_n)-G(\Xi_n)|I\{|\Xi_n|>N\}\le2\|F\|I\{|\Xi_n|>N\}.\tag{2.11}$$

By the choice of $G$,

$$\mathsf{E}(G(\Xi_n)\mid\mathcal{L}_n)-G(\Xi_n)\xrightarrow{\mathsf{P}}0,\tag{2.12}$$

whence by the dominated convergence theorem

$$\mathsf{E}\bigl|\mathsf{E}(G(\Xi_n)\mid\mathcal{L}_n)-G(\Xi_n)\bigr|\to0.\tag{2.13}$$

Writing the identity

$$\mathsf{E}(F(\Xi_n)\mid\mathcal{L}_n)-F(\Xi_n)=\mathsf{E}\bigl((F(\Xi_n)-G(\Xi_n))I\{|\Xi_n|\le N\}\mid\mathcal{L}_n\bigr)+\mathsf{E}\bigl((F(\Xi_n)-G(\Xi_n))I\{|\Xi_n|>N\}\mid\mathcal{L}_n\bigr)+\mathsf{E}(G(\Xi_n)\mid\mathcal{L}_n)-G(\Xi_n)+(G(\Xi_n)-F(\Xi_n))I\{|\Xi_n|\le N\}+(G(\Xi_n)-F(\Xi_n))I\{|\Xi_n|>N\},\tag{2.14}$$

we get from (2.11)–(2.13)

$$\varlimsup_{n\to\infty}\mathsf{E}\bigl|\mathsf{E}(F(\Xi_n)\mid\mathcal{L}_n)-F(\Xi_n)\bigr|\le2\varepsilon+4\|F\|\varlimsup_{n\to\infty}\mathsf{P}\{|\Xi_n|>N\},\tag{2.15}$$

which together with (2.9) and due to arbitrariness of $\varepsilon$ proves (2.10).

Corollary 2.7. Let, for each $n$, $\zeta_{n1},\dots,\zeta_{np}$ be $\mathbb{R}^d$-valued random variables given on a probability space $(\Omega_n,\mathcal{F}_n,\mathsf{P}_n)$ and $\mathcal{L}_n$ a sub-$\sigma$-algebra of $\mathcal{F}_n$. Suppose that the relations

$$\lim_{N\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\{|\zeta_{nj}|>N\}=0,\tag{2.16}$$

$$\mathsf{E}(g(\zeta_{nj})\mid\mathcal{L}_n)-g(\zeta_{nj})\xrightarrow{\mathsf{P}}0\quad\text{as }n\to\infty\tag{2.17}$$

hold for all $j\in\{1,\dots,p\}$ and $g$ from some class $\mathfrak{F}$ of complex-valued bounded continuous functions on $\mathbb{R}^d$ which separates points of the latter. Then

$$\mathsf{E}\bigl(F(\zeta_{n1},\dots,\zeta_{np})\mid\mathcal{L}_n\bigr)-F(\zeta_{n1},\dots,\zeta_{np})\xrightarrow{\mathsf{P}}0,\tag{2.18}$$

for all $F\in\mathrm{C_b}(\mathbb{R}^{pd})$.

Proof. Denote $\Xi_n=(\zeta_{n1},\dots,\zeta_{np})$. Condition (2.17) implies by Lemma 2.4 that relation (2.10) is valid for all $F$ of the kind $F(x_1,\dots,x_p)=\prod_{i=1}^pg_i(x_i)$, where $g_i\in\mathfrak{F}$. Obviously, such functions separate points of $\mathbb{R}^{pd}$. Furthermore, condition (2.16), where $j$ runs over $\{1,\dots,p\}$, is tantamount to (2.9). It remains to refer to Lemma 2.6.

Corollary 2.8. Let, for each $n$, $K_n$ be an $\mathfrak{S}$-valued random process given on a probability space $(\Omega_n,\mathcal{F}_n,\mathsf{P}_n)$, $\mathcal{L}_n$ a sub-$\sigma$-algebra of $\mathcal{F}_n$, and $\zeta_{n0}$ an $\mathcal{L}_n$-measurable $\mathbb{R}^m$-valued random variable. Suppose that the relations

$$\lim_{N\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\{\|K_n(t)\|>N\}=0,\qquad\mathsf{E}\bigl(f(zK_n(t)z^\top)\mid\mathcal{L}_n\bigr)-f(zK_n(t)z^\top)\xrightarrow{\mathsf{P}}0,\tag{2.19}$$

and (2.16) hold for $j=0$, all $t>0$, and any bounded uniformly continuous function $f$ on $\mathbb{R}$. Then, for any $l$, $s_l>\dots>s_1>0$, and $F\in\mathrm{C_b}(\mathbb{R}^m\times\mathfrak{S}^l)$, the relation

$$\mathsf{E}\bigl(F(\zeta_{n0},K_n(s_1),\dots,K_n(s_l))\mid\mathcal{L}_n\bigr)-F(\zeta_{n0},K_n(s_1),\dots,K_n(s_l))\xrightarrow{\mathsf{P}}0\tag{2.20}$$

is valid.

Recall that for any $B\in\mathfrak{S}$

$$\|B\|=\max_{x\in S^{d-1}}|x^\top Bx|,\tag{2.21}$$

where $S^{d-1}$ is the unit sphere in $\mathbb{R}^d$.

Lemma 2.9. For any symmetric matrices $B_1$ and $B_2$,

$$\max_{x\in S^{d-1}}\bigl|x^\top B_1x\;x^\top B_2x\bigr|\le\|B_1\|\,\|B_2\|.\tag{2.22}$$

Proof. It suffices to note that the left-hand side of (2.22) does not exceed $\max_{x,y\in S^{d-1}}|x^\top B_1x\;y^\top B_2y|$, which equals $\|B_1\|\,\|B_2\|$ by (2.21).
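A seeded numerical sketch (not from the paper) of this product bound: for symmetric $B_1,B_2$, $|x^\top B_1x\;x^\top B_2x|\le\|B_1\|\|B_2\|$ on the unit sphere. Only the diagonal case is checked below, to which the general one reduces by orthogonal diagonalization; all matrix sizes and sample counts are illustrative.

```python
import random

random.seed(0)
d = 4
for _ in range(50):
    # diagonal symmetric matrices: operator norm = max absolute diagonal entry
    D1 = [random.uniform(-2, 2) for _ in range(d)]
    D2 = [random.uniform(-2, 2) for _ in range(d)]
    n1, n2 = max(abs(v) for v in D1), max(abs(v) for v in D2)
    for _ in range(200):
        # sample a point on the unit sphere S^{d-1}
        x = [random.gauss(0, 1) for _ in range(d)]
        r = sum(c * c for c in x) ** 0.5
        x = [c / r for c in x]
        q1 = sum(D1[i] * x[i] ** 2 for i in range(d))  # x' B1 x
        q2 = sum(D2[i] * x[i] ** 2 for i in range(d))  # x' B2 x
        assert abs(q1 * q2) <= n1 * n2 + 1e-12
```

The factorization step of the proof (maximizing the two quadratic forms independently) is exactly why the product of the two operator norms dominates.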

Let $X,X_1,X_2,\dots$ be $\mathbb{R}^d$-valued random processes with trajectories in the Skorokhod space D (= càdlàg processes on $\mathbb{R}_+$). We write $X_n\xrightarrow{\mathrm{D}}X$ if the measures induced by the processes $X_n$ on the Borel $\sigma$-algebra in D weakly converge to the measure induced by $X$. If herein $X$ is continuous, then we write $X_n\xrightarrow{\mathrm{C}}X$. We say that a sequence $(X_n)$ is relatively compact (r.c.) in D (in C) if each of its subsequences contains, in turn, a subsequence converging in the respective sense. The weak convergence of finite-dimensional distributions of random processes, in particular the convergence in distribution of random variables, will be denoted $\xrightarrow{\mathrm{d}}$. Likewise, $\stackrel{\mathrm{d}}{=}$ means equality of distributions.

Denote

$$\Pi(t,r)=\{(u,v)\in\mathbb{R}^2:\ (v-r)^+\le u\le v\le t\},\qquad\Delta_{\mathcal{U}}(f;t,r)=\sup_{(u,v)\in\Pi(t,r)}|f(v)-f(u)|\quad(f\in\mathrm{D},\ t>0,\ r>0).\tag{2.23}$$

Proposition VI.3.26 (items (i), (ii)) [2] together with VI.3.9 [2] asserts that a sequence $(\xi_n)$ of càdlàg random processes is r.c. in C if and only if, for all positive $t$ and $\varepsilon$,

$$\lim_{N\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\Bigl\{\sup_{s\le t}|\xi_n(s)|>N\Bigr\}=0,\qquad\lim_{r\to0}\varlimsup_{n\to\infty}\mathsf{P}\{\Delta_{\mathcal{U}}(\xi_n;t,r)>\varepsilon\}=0.\tag{2.24}$$

Hence, two consequences are immediate.

Corollary 2.10. Let $(\xi_n)$ and $(\Xi_n)$ be sequences of $\mathbb{R}^d$-valued and $\mathbb{R}^m$-valued, respectively, càdlàg processes such that $(\Xi_n)$ is r.c. in C, $|\xi_n(0)|\le|\Xi_n(0)|$, and for any $v>u\ge0$

$$|\xi_n(v)-\xi_n(u)|\le|\Xi_n(v)-\Xi_n(u)|.\tag{2.25}$$

Then the sequence $(\xi_n)$ is also r.c. in C.

Corollary 2.11. Let $(\xi_n)$ and $(\zeta_n)$ be sequences of càdlàg processes, r.c. in C, taking values in $\mathbb{R}^d$ and $\mathbb{R}^p$, respectively. Suppose also that, for each $n$, $\xi_n$ and $\zeta_n$ are given on a common probability space. Then the sequence of $\mathbb{R}^{d+p}$-valued processes $((\xi_n,\zeta_n))$ is also r.c. in C.

Lemma 2.12. Let $(\eta_{ln};\ l,n\in\mathbb{N})$, $(\eta_l)$, and $(\eta_n)$ be families of càdlàg random processes such that, for any positive $t$ and $\varepsilon$,

$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\Bigl\{\sup_{s\le t}|\eta_{ln}(s)-\eta_n(s)|>\varepsilon\Bigr\}=0,\tag{2.26}$$

for each $l$,

$$\eta_{ln}\xrightarrow{\mathrm{D}}\eta_l\quad\text{as }n\to\infty,\tag{2.27}$$

and the sequence $(\eta_l)$ is r.c. in D. Then there exists a random process $\eta$ such that $\eta_l\xrightarrow{\mathrm{D}}\eta$.

Proof. Let $\rho$ be a bounded metric in D metrizing Skorokhod's $\mathcal{J}$-convergence (see, e.g., [2, VI.1.26]). Then condition (2.26) with arbitrary $t$ and $\varepsilon$ implies that

$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{E}\rho(\eta_{ln},\eta_n)=0.\tag{2.28}$$

Hence, by the triangle inequality, we have

$$\lim_{m\wedge k\to\infty}\varlimsup_{n\to\infty}\mathsf{E}\rho(\eta_{mn},\eta_{kn})=0.\tag{2.29}$$

Let $F$ be a bounded functional on D, uniformly continuous with respect to $\rho$. Denote $A=\sup_{x\in\mathrm{D}}|F(x)|$, $\vartheta(r)=\sup_{x,y\in\mathrm{D}:\,\rho(x,y)<r}|F(x)-F(y)|$. Then $\vartheta(0+)=0$ and, for any $r>0$,

$$\mathsf{E}|F(\eta_{mn})-F(\eta_{kn})|\le2A\,\mathsf{P}\{\rho(\eta_{mn},\eta_{kn})>r\}+\vartheta(r),\tag{2.30}$$

which together with (2.29) yields

$$\lim_{m\wedge k\to\infty}\varlimsup_{n\to\infty}|\mathsf{E}F(\eta_{mn})-\mathsf{E}F(\eta_{kn})|=0.\tag{2.31}$$

By condition (2.27),

$$\lim_{n\to\infty}|\mathsf{E}F(\eta_{mn})-\mathsf{E}F(\eta_{kn})|=|\mathsf{E}F(\eta_m)-\mathsf{E}F(\eta_k)|,\tag{2.32}$$

which jointly with (2.31) proves fundamentality and, therefore, convergence of the sequence $(\mathsf{E}F(\eta_l),\ l\in\mathbb{N})$. Now, the desired conclusion emerges from relative compactness of $(\eta_l)$ in D.

Corollary 2.13. Let the conditions of Lemma 2.12 be fulfilled. Then $\eta_n\xrightarrow{\mathrm{D}}\eta$, where $\eta$ is the random process, existing by Lemma 2.12, such that $\eta_l\xrightarrow{\mathrm{D}}\eta$.

Proof. Repeating the derivation of (2.31) from (2.29), we derive from (2.28) the relation

$$\lim_{l\to\infty}\varlimsup_{n\to\infty}|\mathsf{E}F(\eta_{ln})-\mathsf{E}F(\eta_n)|=0.\tag{2.33}$$

It remains to write $|\mathsf{E}F(\eta_n)-\mathsf{E}F(\eta)|\le|\mathsf{E}F(\eta_n)-\mathsf{E}F(\eta_{ln})|+|\mathsf{E}F(\eta_{ln})-\mathsf{E}F(\eta_l)|+|\mathsf{E}F(\eta_l)-\mathsf{E}F(\eta)|$.

Corollary 2.14. Let $(\eta_{ln})$, $(\eta_l)$, and $(\eta_n)$ be families of càdlàg random processes such that, for any $t\in\mathbb{R}_+$ and $\varepsilon>0$, (2.26) holds; for each $l$, relation (2.27) is valid; and the sequence $(\eta_l)$ is r.c. in C. Then there exists a random process $\eta$ such that $\eta_l\xrightarrow{\mathrm{C}}\eta$ and $\eta_n\xrightarrow{\mathrm{C}}\eta$.

Below, $\xrightarrow{\mathcal{U}}$ is the symbol of locally uniform (i.e., uniform on every bounded interval) convergence.

Lemma 2.15. Let $X,X_1,X_2,\dots$ be càdlàg random processes such that $X_n\xrightarrow{\mathrm{C}}X$. Then $F(X_n)\xrightarrow{\mathrm{d}}F(X)$ for any $\mathcal{U}$-continuous functional $F$ on D.

Proof. Lemma VI.1.33 and Corollary VI.1.43 in [2] assert completeness and separability of the metric space $(\mathrm{D},\rho)$, where $\rho$ is the metric used in the proof of Lemma 2.12. Then it follows from the assumptions of the lemma by Skorokhod's theorem [6] that there exist càdlàg random processes $X',X_1',X_2',\dots$, given on a common probability space, such that $X'\stackrel{\mathrm{d}}{=}X$ (so that $X'$ is continuous), $X_n'\stackrel{\mathrm{d}}{=}X_n$, and $\rho(X_n',X')\to0$ a.s. By the choice of $\rho$, the last relation is tantamount to $X_n'\xrightarrow{\mathcal{J}}X'$ a.s. Hence, and from continuity of $X'$, we get by Proposition VI.1.17 [2] $X_n'\xrightarrow{\mathcal{U}}X'$ a.s. and, therefore, by the choice of $F$, $F(X_n')\to F(X')$ a.s. It remains to note that $F(X_n')\stackrel{\mathrm{d}}{=}F(X_n)$, $F(X')\stackrel{\mathrm{d}}{=}F(X)$.

2.1. Forestopping of Random Processes

Let $\mathbb{F}$ be a filtration on some probability space, $S$ an $\mathbb{F}$-adapted random process, and $\tau$ a stopping time with respect to $\mathbb{F}$. We put $S(0-)=S(0)$ and denote

$$S^\tau(t)=S(t\wedge\tau),\qquad{}^\tau S(t)=S(t)I_{[0,\tau[}(t)+S(\tau-)I_{[\tau,\infty[}(t),\tag{2.34}$$

${}^\tau\mathcal{F}(t)=\mathcal{F}(t)\cap\mathcal{F}(\tau-)$, ${}^\tau\mathbb{F}=({}^\tau\mathcal{F}(t),\ t\in\mathbb{R}_+)$. Obviously,

$${}^\tau(S^\tau)={}^\tau S,\tag{2.35}$$

$$[{}^\tau S]={}^\tau[S],\tag{2.36}$$

provided $[S]$ exists. In case $\tau$ is $\mathbb{F}$-predictable, the operation $S\mapsto{}^\tau S$ was called in [7] the forestopping. The following three statements were proved in [7].
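The difference between stopping and forestopping can be seen on a single discrete path. A small sketch (not from the paper; path values and the grid are illustrative): $S^\tau$ freezes the path at the value $S(\tau)$, while ${}^\tau S$ freezes it at the left limit $S(\tau-)$, which on a discrete grid is the value one step before $\tau$.

```python
def stopped(S, tau):
    # S^tau(t) = S(t ^ tau): freeze at the value taken AT tau
    return [S[min(t, tau)] for t in range(len(S))]

def forestopped(S, tau):
    # ^tau S(t): keep S on [0, tau), freeze at the left limit S(tau-)
    # (with the convention S(0-) = S(0) when tau = 0)
    left_limit = S[tau - 1] if tau > 0 else S[0]
    return [S[t] if t < tau else left_limit for t in range(len(S))]

S = [0, 2, 5, 1, 4, 3]
tau = 3
assert stopped(S, tau) == [0, 2, 5, 1, 1, 1]      # freezes at S(3) = 1
assert forestopped(S, tau) == [0, 2, 5, 5, 5, 5]  # freezes at S(3-) = 5
```

This is why forestopping at a predictable time can keep a jump from being seen at all, the property exploited throughout Section 2.1.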

Lemma 2.16. Let a random process $U$ and a stopping time $\tau$ be $\mathbb{F}$-predictable. Then the process ${}^\tau U$ is ${}^\tau\mathbb{F}$-predictable.

Theorem 2.17. Let $X$ be an $\mathbb{F}$-martingale and $\tau$ an $\mathbb{F}$-predictable stopping time. Then ${}^\tau X$ is a ${}^\tau\mathbb{F}$-martingale. If $X$ is uniformly integrable, then so is ${}^\tau X$.

Lemma 2.18. Let $V$ be an $\mathbb{R}^d$-valued right-continuous $\mathbb{F}$-predictable random process and $A$ a closed set in $\mathbb{R}^d$. Then the stopping time $\inf\{t:\ V(t)\in A\}$ is $\mathbb{F}$-predictable.

The operation of forestopping was used prior to [7] by Barlow [8] who took the assertion of Theorem 2.17 (which he did not even formulate) for granted.

We will need some subtler properties of this operation.

Lemma 2.19. Let $U$ be a locally square integrable martingale with respect to $\mathbb{F}$ starting from zero, $N$ a positive number, and $\sigma$ an $\mathbb{F}$-predictable stopping time such that

$$\sigma\le\inf\{t:\ \operatorname{tr}\langle U\rangle(t)\ge N\}.\tag{2.37}$$

Then $\mathsf{E}\sup_{s\in\mathbb{R}_+}|{}^\sigma U(s)|^2\le4N$.

Proof. Predictability of $\sigma$ implies by Theorem 2.1.13 [3] that there exists a sequence $(\sigma_n)$ of stopping times such that

$$\sigma_n<\sigma\quad\text{on }\{\sigma>0\},\tag{2.38}$$

$$\sigma_n\uparrow\sigma\quad\text{a.s.}\tag{2.39}$$

By the choice of $U$, there exists a sequence $(\tau_k)$ of stopping times such that

$$\tau_k\uparrow\infty\quad\text{a.s.},\tag{2.40}$$

$$U^{\tau_k}\in\overline{\mathcal{M}}{}^2(\mathbb{F}).\tag{2.41}$$

Then

$$\sup_{s\le\sigma_n}|U(s)|=\lim_{k\to\infty}\sup_{s\le\sigma_n\wedge\tau_k}|U(s)|.\tag{2.42}$$

Herein, obviously,

$$\sup_{s\le\sigma_n\wedge\tau_k}|U(s)|=\sup_{s\le\sigma_n}|U_k(s)|,\tag{2.43}$$

where $U_k=U^{\tau_k}$.

From (2.41), we have by Doob's inequality

$$\mathsf{E}\sup_{s\le\sigma_n}|U_k(s)|^2\le4\mathsf{E}|U_k(\sigma_n)|^2.\tag{2.44}$$

Noting that (1) for any $x\in\mathbb{R}^d$, $|x|^2=\operatorname{tr}xx^\top$, and (2) $\mathsf{E}U_k(\sigma_n)U_k(\sigma_n)^\top=\mathsf{E}\langle U_k\rangle(\sigma_n)$, we may rewrite the last inequality in the form

$$\mathsf{E}\sup_{s\le\sigma_n}|U_k(s)|^2\le4\mathsf{E}\operatorname{tr}\langle U_k\rangle(\sigma_n).\tag{2.45}$$

Writing

$$\langle U_k\rangle(\sigma_n)=\langle U^{\tau_k}\rangle(\sigma_n)=\langle U\rangle^{\tau_k}(\sigma_n)=\langle U\rangle(\tau_k\wedge\sigma_n),\tag{2.46}$$

we get from (2.37) and (2.38) $\operatorname{tr}\langle U_k\rangle(\sigma_n)<N$, which together with (2.45) results in $\mathsf{E}\sup_{s\le\sigma_n}|U_k(s)|^2<4N$. Then, from (2.42) and (2.43), we have by Fatou's theorem

$$\mathsf{E}\sup_{s\le\sigma_n}|U(s)|^2\le4N.\tag{2.47}$$

The assumption $U(0)=0$ yields

$$\mathsf{E}\sup_{s<\sigma}|U(s)|^2=\mathsf{E}\sup_{s<\sigma}|U(s)|^2I\{\sigma>0\}.\tag{2.48}$$

Relations (2.38) and (2.39) imply that

$$\sup_{s<\sigma}|U(s)|^2I\{\sigma>0\}=\lim_{n\to\infty}\sup_{s\le\sigma_n}|U(s)|^2I\{\sigma>0\},\tag{2.49}$$

which together with (2.47) yields by Fatou's theorem $\mathsf{E}\sup_{s<\sigma}|U(s)|^2I\{\sigma>0\}\le4N$. It remains to note that $\sup_{s\in\mathbb{R}_+}|{}^\sigma U(s)|=\sup_{s<\sigma}|U(s)|I\{\sigma>0\}$ in view of (2.34).
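The bound of Lemma 2.19 can be probed numerically. A seeded Monte Carlo sketch (not from the paper; the walk, horizon, and trial count are illustrative): for a $\pm1$ random walk $U$, $\operatorname{tr}\langle U\rangle(t)=t$, so the predictable time in (2.37) is simply $\sigma=N$, and the lemma predicts $\mathsf{E}\sup_{s<\sigma}|U(s)|^2\le4N$.

```python
import random

random.seed(42)
N, trials = 50, 2000
acc = 0.0
for _ in range(trials):
    u, peak = 0, 0
    for _t in range(N):              # path strictly before sigma = N
        u += random.choice((-1, 1))  # martingale increment, <U> grows by 1
        peak = max(peak, u * u)      # running sup of |U|^2
    acc += peak
# empirical mean of sup |U|^2 stays below the Doob-type bound 4N
assert acc / trials <= 4 * N
```

The empirical mean typically lands well below $4N$; the constant $4$ in Doob's inequality is what makes the uniform estimate of the lemma possible.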

Lemma 2.20. Let $U$ be a locally square integrable martingale with respect to $\mathbb{F}$ such that

$$\mathsf{E}|U(0)|^2<\infty,\tag{2.50}$$

and, for any $t$,

$$\mathsf{E}\max_{s\le t}|\Delta U(s)|^2<\infty.\tag{2.51}$$

Let, further, $N$ be a positive number and $\sigma$ a predictable time satisfying condition (2.37). Then $U^\sigma\in\mathcal{M}^2(\mathbb{F})$.

Proof. In view of (2.50), it suffices to show that $(U-U(0))^\sigma\in\mathcal{M}^2(\mathbb{F})$. In other words, we may consider that $U(0)=0$. Then condition (2.51) and the evident inequality

$$\sup_{s\le t}|U^\sigma(s)|\le\sup_{s\le t}|{}^\sigma U(s)|+\max_{s\le t}|\Delta U(s)|\tag{2.52}$$

imply by Lemma 2.19 that, for any $t$,

$$\mathsf{E}\sup_{s\le t}|U^\sigma(s)|^2<\infty.\tag{2.53}$$

It remains to prove that, for all $t_2>t_1\ge0$,

$$\mathsf{E}(U^\sigma(t_2)\mid\mathcal{F}(t_1))=U^\sigma(t_1).\tag{2.54}$$

Taking a sequence $(\tau_k)$ with properties (2.40) and (2.41), we write

$$\mathsf{E}(U(t_2\wedge\sigma\wedge\tau_k)\mid\mathcal{F}(t_1))=U(t_1\wedge\sigma\wedge\tau_k).\tag{2.55}$$

To deduce (2.54) from this equality and (2.40), it suffices to note that

$$|U(t\wedge\sigma\wedge\tau_k)|\le\sup_{s\le t}|U^\sigma(s)|,\tag{2.56}$$

so that (2.53) provides uniform integrability of the sequence $(U(t_2\wedge\sigma\wedge\tau_k),\ k\in\mathbb{N})$.

Corollary 2.21. Under the conditions of Lemma 2.20, ${}^\sigma U\in\mathcal{M}^2({}^\sigma\mathbb{F})$.

Theorem 2.22. Let $U$ be a locally square integrable martingale with respect to $\mathbb{F}$ satisfying conditions (2.50) and (2.51), $N$ a positive number, and $\sigma$ a predictable time satisfying condition (2.37). Then $\langle{}^\sigma U\rangle={}^\sigma\langle U\rangle$.

Proof. Denote $X=(U^\sigma)^{\otimes2}-\langle U^\sigma\rangle$, $Y=({}^\sigma U)^{\otimes2}-{}^\sigma\langle U\rangle$. It suffices to show that $Y$ is a ${}^\sigma\mathbb{F}$-martingale. To deduce this fact from Theorem 2.17, we note that, firstly, $X\in\mathcal{M}(\mathbb{F})$ by construction and Lemma 2.20 and, secondly, $Y={}^\sigma X$ by construction of both processes and because of (2.35).

3. Martingale Preliminaries

The next statement is obvious.

Lemma 3.1. Let $(M_l)$ be a sequence of martingales such that

$$M_l\xrightarrow{\mathrm{d}}M,\tag{3.1}$$

and, for any $t$, the sequence $(|M_l(t)|)$ is uniformly integrable. Then $M$ is a martingale.

Lemma 3.2. Let $(M_l)$ be a sequence of martingales such that (3.1) holds and

$$\sup_{l,\,t\in\mathbb{R}_+}\mathsf{E}\operatorname{tr}\langle M_l\rangle(t)<\infty.\tag{3.2}$$

Then $\sup_t\mathsf{E}|M(t)|^2<\infty$.

Proof. By condition (3.2) and the definition of quadratic characteristic, there exists a constant $C$ such that $\mathsf{E}|M_l(t)|^2\le C$ for all $t$ and $l$. Hence, and from (3.1), we have by Fatou's theorem (applicable due to the above-mentioned Skorokhod principle of common probability space) $\mathsf{E}|M(t)|^2\le C$.

Corollary 3.3. Let a sequence (𝑀𝑙) of square integrable martingales satisfy conditions (3.1) and (3.2). Then, 𝑀 is a uniformly integrable martingale.

Lemma 3.4. Let $Y$ be a local martingale and $K$ an $\mathfrak{S}$-valued random process. Suppose that they are given on a common probability space and $(Y,K)\stackrel{\mathrm{d}}{=}(Y,[Y])$. Then, for any $t$, $K(t)=[Y](t)$ a.s.

Proof. By assumption,

$$\sum_{i=1}^n\bigl(Y(t_i)-Y(t_{i-1})\bigr)^{\otimes2}-K(t)\stackrel{\mathrm{d}}{=}\sum_{i=1}^n\bigl(Y(t_i)-Y(t_{i-1})\bigr)^{\otimes2}-[Y](t),\tag{3.3}$$

for all $n$, $t$, and $t_0<t_1<\dots<t_n=t$. Hence, recalling the definition of quadratic variation, we get $[Y](t)-K(t)\stackrel{\mathrm{d}}{=}0$.
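The definition of quadratic variation invoked here (limit of sums of squared increments over refining partitions) can be checked on the canonical example. A seeded numerical sketch (not from the paper; step count and tolerance are illustrative): for Brownian motion on $[0,1]$, the realized quadratic variation approaches $[Y](1)=1$.

```python
import random

random.seed(7)
n = 200_000                   # partition of [0, 1] into n equal steps
dt = 1.0 / n
# sum of squared Gaussian increments with variance dt each
qv = sum(random.gauss(0.0, dt ** 0.5) ** 2 for _ in range(n))
assert abs(qv - 1.0) < 0.05   # realized quadratic variation near t = 1
```

The variance of the sum is $2/n$, so the deviation from $1$ is of order $n^{-1/2}$, which is why the partition sums pin down $[Y](t)$, and with it $K(t)$, almost surely in the limit.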

We shall identify indistinguishable processes, writing simply $\xi=\eta$ if $\mathsf{P}\{\forall t\in\mathbb{R}_+:\ \xi(t)=\eta(t)\}=1$. Theorem 2.3.5 [3] asserts that

$$[Y]=\langle Y\rangle,\tag{3.4}$$

for a continuous local martingale $Y$. Hence, and from Lemma 3.4, we have

Corollary 3.5. Let $Y$ be a continuous local martingale and $K$ an $\mathfrak{S}$-valued random process. Suppose that they are given on a common probability space and $(Y,K)\stackrel{\mathrm{d}}{=}(Y,\langle Y\rangle)$. Then $K=\langle Y\rangle$.

Proof. Lemma 3.4 and formula (3.4) yield $\mathsf{P}\{\forall t\in\mathbb{Q}_+:\ K(t)=\langle Y\rangle(t)\}=1$. Continuity of both processes enables us to substitute $\mathbb{Q}_+$ by $\mathbb{R}_+$.

Lemma 3.6. Let $U$ be a locally square integrable martingale. Then $\|\langle U\rangle\|$ is an increasing process.

Proof. For any $x\in\mathbb{R}^{d*}$, the process $xU$ is a scalar locally square integrable martingale and, therefore, the process $\langle xU\rangle$ increases. It remains to note that $\langle xU\rangle=x\langle U\rangle x^\top$ and to recall formula (2.21).

Lemma 3.7. Let $Z_1$ and $Z_2$ be locally square integrable martingales with respect to a common filtration. Then

$$\|\langle Z_1,Z_2\rangle+\langle Z_2,Z_1\rangle\|\le2\sqrt{\|\langle Z_1\rangle\|\,\|\langle Z_2\rangle\|}.\tag{3.5}$$

Proof. For $d=1$ (then $\langle Z_2,Z_1\rangle=\langle Z_1,Z_2\rangle$), this is the Kunita-Watanabe inequality [3, page 118]. In the general case, we take an arbitrary vector $x\in S^{d-1}$ and write

$$x^\top\bigl(\langle Z_1,Z_2\rangle+\langle Z_2,Z_1\rangle\bigr)x=2\langle x^\top Z_1,x^\top Z_2\rangle\le2\sqrt{\langle x^\top Z_1\rangle\langle x^\top Z_2\rangle}=2\sqrt{x^\top\langle Z_1\rangle x\;x^\top\langle Z_2\rangle x},\tag{3.6}$$

whereupon the required conclusion ensues from (2.21) and Lemma 2.9.
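In discrete time, the scalar Kunita-Watanabe bound used in this proof reduces to Cauchy-Schwarz. A seeded sketch (not from the paper; coefficients are illustrative): for martingales $Z_1=\sum_ka_k\xi_k$, $Z_2=\sum_kb_k\xi_k$ driven by i.i.d. $\xi_k=\pm1$, one has $\langle Z_1,Z_2\rangle=\sum_ka_kb_k$ and $\langle Z_i\rangle=\sum$ of squares, so $|\langle Z_1,Z_2\rangle|\le\sqrt{\langle Z_1\rangle\langle Z_2\rangle}$.

```python
import math
import random

random.seed(1)
# predictable coefficients of the two discrete martingales
a = [random.uniform(-1, 1) for _ in range(30)]
b = [random.uniform(-1, 1) for _ in range(30)]
cross = sum(x * y for x, y in zip(a, b))               # <Z1, Z2>(t)
q1 = sum(x * x for x in a)                             # <Z1>(t)
q2 = sum(y * y for y in b)                             # <Z2>(t)
assert abs(cross) <= math.sqrt(q1 * q2) + 1e-12        # Kunita-Watanabe
```

The matrix statement (3.5) is obtained from exactly this scalar bound by sandwiching with a unit vector, as in (3.6).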

Lemma 3.8. Let $U_1$ and $U_2$ be locally square integrable martingales with respect to some common filtration. Then, for any $t>0$,

$$\sup_{s\le t}\|\langle U_1\rangle(s)-\langle U_2\rangle(s)\|\le\|\langle U_1-U_2\rangle(t)\|+2\sqrt{\|\langle U_1-U_2\rangle(t)\|\,\|\langle U_2\rangle(t)\|}.\tag{3.7}$$

Proof. Writing the identities

$$\langle U_1\rangle=\langle U_1-U_2+U_2\rangle=\langle U_1-U_2\rangle+\langle U_1-U_2,U_2\rangle+\langle U_2,U_1-U_2\rangle+\langle U_2\rangle,\tag{3.8}$$

we deduce from Lemma 3.7 that

$$\|\langle U_1\rangle(s)-\langle U_2\rangle(s)\|\le\|\langle U_1-U_2\rangle(s)\|+2\sqrt{\|\langle U_1-U_2\rangle(s)\|\,\|\langle U_2\rangle(s)\|}.\tag{3.9}$$

It remains to note that the right-hand side increases in $s$ by Lemma 3.6.

For a function $f\in\mathrm{D}$, we denote $\Delta f(t)=f(t)-f(t-)$.

Let us introduce the conditions:
(RC) the sequence $(\operatorname{tr}\langle Y_n\rangle)$ is r.c. in C;
(UI1) the sequence $(|Y_n(t)-Y_n(0)|^2)$ is u.i.;
(UI2) for any $z\in\mathbb{R}^{d*}$, the sequence $(\langle zY_n\rangle(t))$ is u.i.;
(UI3) the sequence $(\sup_{s\le t}|Y_n(s)-Y_n(0)|^2)$ is u.i.

Lemma 3.9. Let $(Y_n)$ be a sequence of locally square integrable martingales satisfying condition (RC),

$$\lim_{L\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\{|Y_n(0)|>L\}=0,\tag{3.10}$$

and, for each $t>0$, the condition

$$\max_{s\le t}|\Delta Y_n(s)|\xrightarrow{\mathsf{P}}0.\tag{3.11}$$

Then $(Y_n)$ is r.c. in C.

Proof. It follows from (RC) and (3.10) by Rebolledo's theorem [2, VI.4.13] that $(Y_n)$ is r.c. in D. Hereupon, the desired conclusion follows from Proposition VI.3.26 (items (i) and (iii)) [2] with account of VI.3.9 [2].

Combining Lemma 3.9 with Corollary 2.11, we get

Corollary 3.10. Under the assumptions of Lemma 3.9, the sequence of compound processes $((Y_n,\langle Y_n\rangle))$ is r.c. in C.

Some statements below deal with random processes on $[0,t]$, not on $\mathbb{R}_+$. In this case, the time variable is denoted by $s$, and C means $\mathrm{C}[0,t]$ instead of $\mathrm{C}(\mathbb{R}_+)$.

Lemma 3.11. Let $(Y_n)$ be a sequence of martingales on $[0,t]$, r.c. in C and satisfying condition (UI1). Then, for any $z\in\mathbb{R}^{d*}$, the sequence $([zY_n](t))$ is u.i.

Proof. The obvious equality $[V-V(0)]=[V]$ allows us to consider that $Y_n(0)=0$. Then condition (UI1) together with Doob's inequality yields

$$\sup_n\mathsf{E}\sup_{s\le t}|Y_n(s)|^2<\infty,\tag{3.12}$$

whence

$$\sup_n\mathsf{E}\max_{s\le t}|\Delta Y_n(s)|<\infty.\tag{3.13}$$

By assumption, for any infinite set $J_0\subset\mathbb{N}$, there exist an infinite subset $J\subset J_0$ and a random process $Y$ such that

$$Y_n\xrightarrow{\mathrm{C}}Y\quad\text{as }n\to\infty,\ n\in J.\tag{3.14}$$

Condition (UI1) implies that $Y$ is a square integrable martingale and, for any $z\in\mathbb{R}^{d*}$,

$$\mathsf{E}(zY_n(t))^2\to\mathsf{E}(zY(t))^2\quad\text{as }n\to\infty,\ n\in J.\tag{3.15}$$

From (3.14) and (3.13), we have by Corollary VI.6.7 [2]

$$[zY_n]\xrightarrow{\mathrm{C}}[zY]\quad\text{as }n\to\infty,\ n\in J.\tag{3.16}$$

Hence, and from (3.15), recalling that for any $\mathbb{R}$-valued $M\in\mathcal{M}^2$ one has $\mathsf{E}(M(t)-M(0))^2=\mathsf{E}[M](t)$ [3, Theorem 2.2.4], we get

$$\mathsf{E}[zY_n](t)\to\mathsf{E}[zY](t)\quad\text{as }n\to\infty,\ n\in J.\tag{3.17}$$

Comparing this relation with (3.16), we conclude that the sequence $([zY_n](t),\ n\in J)$ is uniformly integrable. Hence, in view of arbitrariness of $J_0$, we deduce by Lemma 2.3 uniform integrability of $([zY_n](t),\ n\in\mathbb{N})$.

Lemma 3.12. Let $(Y_n)$ be a sequence of martingales on $[0,t]$ satisfying conditions (UI1) and (UI2). Suppose that there exists an $\mathbb{R}^d\times\mathfrak{S}^+$-valued random process $(Y,K)$ such that

$$(Y_n,\langle Y_n\rangle)\xrightarrow{\mathrm{C}}(Y,K).\tag{3.18}$$

Then, firstly,

$$[Y_n]-\langle Y_n\rangle\xrightarrow{\mathrm{C}}O,\tag{3.19}$$

where $O$ is the null matrix, and $Y$ is a continuous martingale, and, secondly,

$$(Y,K)\stackrel{\mathrm{d}}{=}(Y,\langle Y\rangle).\tag{3.20}$$

Proof. For the same reason as in the proof of Lemma 3.11, we may consider that $Y_n(0)=0$. Then, as was shown above, condition (UI1) implies (3.13). Combining the latter with

$$Y_n\xrightarrow{\mathrm{C}}Y,\tag{3.21}$$

(a part of (3.18)), we get by Corollary VI.6.7 [2] that

$$(Y_n,[Y_n])\xrightarrow{\mathrm{C}}(Y,[Y]).\tag{3.22}$$

From (3.18) and (3.22), we get by Corollary 2.11 that, for any infinite set $J_0\subset\mathbb{N}$, there exist an infinite subset $J\subset J_0$ and an $\mathfrak{S}^+\times\mathfrak{S}^+$-valued random process $(Q_J,R_J)$ such that

$$(\langle Y_n\rangle,[Y_n])\xrightarrow{\mathrm{C}}(Q_J,R_J)\quad\text{as }n\to\infty,\ n\in J.\tag{3.23}$$

(Of course, $Q_J\stackrel{\mathrm{d}}{=}K$, $R_J\stackrel{\mathrm{d}}{=}[Y]$.)

Denote $Z_n=[Y_n]-\langle Y_n\rangle$. This is a martingale by Lemma 10.4 in [4]. Relation (3.23) implies that

$$Z_n\xrightarrow{\mathrm{C}}R_J-Q_J\quad\text{as }n\to\infty,\ n\in J.\tag{3.24}$$

For any $z\in\mathbb{R}^{d*}$, the sequence $(zZ_n(t)z^\top)$ is, by Lemma 3.11 and condition (UI2), u.i. So relation (3.24) implies by Lemma 3.1 that $z(R_J-Q_J)z^\top$ is a martingale. Also, it implies its continuity. Relation (3.23) shows that the processes $zQ_Jz^\top$ and $zR_Jz^\top$ increase and start from zero. So $z(R_J-Q_J)z^\top$ starts from zero and has finite variation on $[0,t]$. These four properties of $R_J-Q_J$ imply together that $Q_J(s)-R_J(s)=O$ for all $s\in[0,t]$. Thus, any subsequence $(Z_n,\ n\in J_0)$ contains, in turn, a subsequence $(Z_n,\ n\in J)$ such that $Z_n\xrightarrow{\mathrm{C}}O$ as $n\to\infty$, $n\in J$. This proves (3.19).

From (3.22) and (3.19), we have $(Y_n,\langle Y_n\rangle)\xrightarrow{\mathrm{C}}(Y,[Y])$. And this is, in view of (3.4), tantamount to

$$(Y_n,\langle Y_n\rangle)\xrightarrow{\mathrm{C}}(Y,\langle Y\rangle).\tag{3.25}$$

Comparing this relation with (3.18), we arrive at (3.20).

Remark 3.13. The second conclusion of Lemma 3.12 implies by Corollary 3.5 that 𝐾=𝑌.

Corollary 3.14. Let a sequence (𝑌𝑛) of martingales on [0,𝑡] satisfy conditions (RC), (3.11), (UI1), and (UI2). Then, relation (3.19) holds.

Proof. By Corollary 3.10, for any infinite set $J_0\subset\mathbb{N}$, there exist an infinite set $J\subset J_0$ and an $\mathbb{R}^d\times\mathfrak{S}^+$-valued random process $(Y,K)$ such that relation (3.18) holds as $n\to\infty$, $n\in J$. Then, by Lemma 3.12, so does (3.19) (as $n\to\infty$, $n\in J$). Due to arbitrariness of $J_0$, this relation holds when $n$ ranges over $\mathbb{N}$, too.

Corollary 3.15. Let $(Y_n)$ be a sequence of martingales on $\mathbb{R}_+$ satisfying condition (RC) and, for all $t>0$, conditions (3.11), (UI1), and (UI2). Then relation (3.19) holds.

Lemma 3.16. Let a sequence $(Y_n)$ of martingales on $\mathbb{R}_+$ satisfy conditions (RC) and, for any $t>0$, (3.11) and (UI3). Then relation (3.19) holds.

Proof. Denote $\sigma_{Nn}=\inf\{s:\ \operatorname{tr}\langle Y_n\rangle(s)\ge N\}$, $Y_{Nn}={}^{\sigma_{Nn}}Y_n$. Obviously,

$$\{\sigma_{Nn}<t\}\subset\{\operatorname{tr}\langle Y_n\rangle(t)\ge N\}.\tag{3.26}$$

By Corollary 2.21, $Y_{Nn}$ is a square integrable martingale with respect to ${}^{\sigma_{Nn}}\mathbb{F}_n$. By Theorem 2.22,

$$\langle Y_{Nn}\rangle={}^{\sigma_{Nn}}\langle Y_n\rangle.\tag{3.27}$$

Condition (RC) implies relative compactness of the sequence $(\langle Y_{Nn}\rangle,\ n\in\mathbb{N})$. By construction, $|Y_{Nn}(t)-Y_{Nn}(0)|\le\sup_{s\le t}|Y_n(s)-Y_n(0)|$. So condition (UI3) implies that, for any $t$ and $N$, the sequence $(|Y_{Nn}(t)-Y_{Nn}(0)|^2,\ n\in\mathbb{N})$ is u.i. Thus, Corollary 3.15 asserts that, for any $N$,

$$[Y_{Nn}]-\langle Y_{Nn}\rangle\xrightarrow{\mathrm{C}}O\quad\text{as }n\to\infty.\tag{3.28}$$

Equalities (3.27) and (2.36) yield the relation

$$\Bigl\{\sup_{s\le t}\bigl\|\bigl([Y_{Nn}]-\langle Y_{Nn}\rangle\bigr)(s)-\bigl([Y_n]-\langle Y_n\rangle\bigr)(s)\bigr\|>0\Bigr\}\subset\{\sigma_{Nn}\le t\},\tag{3.29}$$

which together with (3.26), (3.28), and (RC) entails (3.19).

Corollary 3.15 and Lemma 3.16 are only the steps towards the final result about asymptotic proximity of quadratic variations and quadratic characteristics—Corollary 5.3.

Corollary 3.17. Let a sequence $(Y_n)$ of martingales on $\mathbb{R}_+$ satisfy conditions (RC) and, for any $t>0$, (3.11) and (UI3). Suppose also that there exists an $\mathbb{R}^d\times\mathfrak{S}^+$-valued random process $(Y,K)$ such that relation (3.18) is valid. Then $Y$ is a continuous martingale, and (3.20) holds.

Proof. Lemma 3.16 asserts (3.19). The implications ((3.21) and (UI1) $\Rightarrow$ (3.22)) and ((3.22) and (3.19) $\Rightarrow$ (3.25)) were established in the proof of Lemma 3.12.

4. Sequences of Martingales with Asymptotically Conditionally Independent Increments

Lemma 4.1. Let, for each $n$, $M_n$ be an $\mathbb{R}$-valued square integrable martingale on $[0,t]$ with respect to a flow $(\mathcal{G}_n(s),\ s\in[0,t])$, and let $\mathcal{L}_n$ be a sub-$\sigma$-algebra of $\mathcal{G}_n(0)$. Suppose that conditions (UI1) and (RC) are fulfilled for $Y_n=M_n$,

$$M_n(0)=0,\tag{4.1}$$

$$\max_{s\le t}|\Delta M_n(s)|\xrightarrow{\mathsf{P}}0,\tag{4.2}$$

and there exists a nonrandom number $N$ such that, for all $n$,

$$\langle M_n\rangle(t)\le N.\tag{4.3}$$

Then

$$\mathsf{E}\bigl(e^{iM_n(t)+\langle M_n\rangle(t)/2}\mid\mathcal{L}_n\bigr)\xrightarrow{\mathsf{P}}1.\tag{4.4}$$

Proof. Conditions (RC) ($Y_n=M_n$), (4.1), and (4.2) entail, by Lemma 3.9, relative compactness of $(M_n)$ in C.

Denote $T_n=\langle M_n\rangle/2$, $\xi_n=e^{iM_n+T_n}$, $X_n=([M_n]-\langle M_n\rangle)/2$, $\gamma_n=\xi_{n-}\cdot X_n$,

$$\eta_n=\sum_{s\le t}\xi_n(s-)\Bigl(e^{\Delta T_n(s)+i\Delta M_n(s)}-1-\Delta T_n(s)-i\Delta M_n(s)+\tfrac12(\Delta M_n(s))^2\Bigr).\tag{4.5}$$

Condition (RC) ($Y_n=M_n$) implies that

$$\max_{s\le t}\Delta T_n(s)\xrightarrow{\mathsf{P}}0.\tag{4.6}$$

In view of (4.1), $\xi_n(0)=1$. Then, by Itô's formula,

$$\xi_n(t)=1+i\,\xi_{n-}\circ M_n(t)+\xi_{n-}\cdot T_n(t)-\tfrac12\,\xi_{n-}\cdot[M_n]^c(t)+\sum_{s\le t}\xi_n(s-)\Bigl(e^{i\Delta M_n(s)+\Delta T_n(s)}-1-i\Delta M_n(s)-\Delta T_n(s)\Bigr).\tag{4.7}$$

Hence, recalling that $[M_n]^c(t)=[M_n](t)-\sum_{s\le t}(\Delta M_n(s))^2$, we get

$$\xi_n(t)=1+i\,\xi_{n-}\circ M_n(t)-\gamma_n(t)+\eta_n.\tag{4.8}$$

By the definition of $\xi_n$ and by condition (4.3),

$$\sup_{s\le t}|\xi_n(s)|\le e^{N/2}.\tag{4.9}$$

Consequently,

$$\mathsf{E}\bigl(\xi_{n-}\circ M_n(t)\mid\mathcal{L}_n\bigr)=0,\tag{4.10}$$

and $\mathsf{E}|\xi_{n-}\circ M_n(t)|^2=\mathsf{E}\bigl(|\xi_{n-}|^2\cdot\langle M_n\rangle(t)\bigr)$. The right-hand side of the last equality being less than $e^NN$, the sequence $(\xi_{n-}\circ M_n(t))$ is u.i., and so is $([M_n](t))$ by Lemma 3.11, whose conditions (those not postulated) we have verified. This together with (4.9) and (4.3) implies uniform integrability of $(\gamma_n(t))$. Now, (4.8) and inequality (4.9) show that $(\eta_n)$ has this property, too.

By construction and Lemma 10.4 in [4], $X_n$ is a martingale. Then it follows from (4.9) that $\mathsf{E}(\gamma_n(t)\mid\mathcal{L}_n)=0$, which together with (4.10) and (4.8) yields

$$\mathsf{E}\bigl(\xi_n(t)\mid\mathcal{L}_n\bigr)=1+\mathsf{E}(\eta_n\mid\mathcal{L}_n).\tag{4.11}$$

So it suffices to show that

$$\eta_n\xrightarrow{\mathsf{P}}0.\tag{4.12}$$

Obviously, for any real $a$ and $b$,

$$e^{a+bi}-1-a-bi+\tfrac{b^2}{2}=(e^a-1-a)e^{bi}+a(e^{bi}-1)+\Bigl(e^{bi}-1-bi+\tfrac{b^2}{2}\Bigr),$$

$$|e^a-1-a|\le|a|^2e^{|a|},\qquad|e^{bi}-1|\le|b|,\qquad\Bigl|e^{bi}-1-bi+\tfrac{b^2}{2}\Bigr|\le|b|^3.\tag{4.13}$$

Hence, from (4.5) and (4.9), we get

$$|\eta_n|\le e^N\Bigl(e^N\sum_{s\le t}(\Delta T_n(s))^2+\max_{s\le t}|\Delta M_n(s)|\Bigl(\sum_{s\le t}\Delta T_n(s)+\sum_{s\le t}(\Delta M_n(s))^2\Bigr)\Bigr).\tag{4.14}$$

Now, (4.12) ensues from (4.6), (4.3), (4.2), and stochastic boundedness of the sequence $([M_n](t))$.
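The identity behind (4.4) can be seen already in the Gaussian case. A seeded Monte Carlo sketch (not from the paper; the variance and trial count are illustrative): for a martingale value $M(t)\sim N(0,v)$ with $\langle M\rangle(t)=v$, one has $\mathsf{E}\,e^{iM(t)+v/2}=e^{v/2}\,\mathsf{E}\,e^{iM(t)}=e^{v/2}e^{-v/2}=1$.

```python
import cmath
import random

random.seed(3)
v, trials = 0.7, 200_000
acc = 0j
for _ in range(trials):
    m = random.gauss(0.0, v ** 0.5)    # Gaussian with variance v = <M>(t)
    acc += cmath.exp(1j * m + v / 2)   # e^{iM(t) + <M>(t)/2}
est = acc / trials
assert abs(est - 1) < 0.05             # sample mean close to 1
```

This is exactly the conditional characteristic-function statement of Lemma 4.1: asymptotically, the prelimit martingales behave as conditionally Gaussian given $\mathcal{L}_n$.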

Lemma 4.2. Let, for each $n$, $M_n$ be an $\mathbb{R}$-valued locally square integrable martingale starting from zero with respect to some flow $(\mathcal{G}_n(t),\ t\in\mathbb{R}_+)$, and let $\mathcal{L}_n$ be a sub-$\sigma$-algebra of $\mathcal{G}_n(0)$. Suppose that condition (RC) is fulfilled for $Y_n=M_n$;

$$\mathsf{E}\max_{s\le t}(\Delta M_n(s))^2\to0,\tag{4.15}$$

for any $t>0$; and there exists a nonrandom function $q:\mathbb{R}_+\to\mathbb{R}_+$ such that

$$\langle M_n\rangle(t)\le q(t),\tag{4.16}$$

for all $n$ and $t$. Then, for any $t$, relation (4.4) holds.

Proof. Let us denote, only in this proof, $\tau_{Nn}=\inf\{t:\ |M_n(t)|\ge N\}$, $M_{Nn}(t)=M_n(t\wedge\tau_{Nn})$, $T_{Nn}(t)=T_n(t\wedge\tau_{Nn})$, where $T_n=\langle M_n\rangle/2$ (so that $M_{Nn}\in\ell\mathcal{M}^2$ and $\langle M_{Nn}\rangle=2T_{Nn}$), and $\xi_{Nn}=e^{iM_{Nn}+T_{Nn}}$. The evident inequality

$$\sup_{s\le t}|M_{Nn}(s)|\le N+\max_{s\le t}|\Delta M_n(s)|,\tag{4.17}$$

and condition (4.15) show us that, for any positive $t$ and $N$, the sequence $(M_{Nn}(t)^2,\ n\in\mathbb{N})$ is u.i.

By assumption, there exists a sequence $(\sigma_k)$ of stopping times such that $\sigma_k\uparrow\infty$ a.s. and, for each $k$, $M_n^{\sigma_k}\in\mathcal{M}^2$. Then, for any $t>s\ge0$, $N>0$, and $n,k\in\mathbb{N}$,

$$\mathsf{E}\bigl(M_{Nn}(t\wedge\sigma_k)\mid\mathcal{G}_n(s)\bigr)=\mathsf{E}\bigl(M_n^{\sigma_k}(t\wedge\tau_{Nn})\mid\mathcal{G}_n(s)\bigr)=M_n^{\sigma_k}(s\wedge\tau_{Nn})=M_{Nn}(s\wedge\sigma_k).\tag{4.18}$$

Writing $|M_{Nn}(t\wedge\sigma_k)|\le\sup_{s\le t}|M_{Nn}(s)|$, we deduce from (4.17) and (4.15) uniform integrability of the sequence $(M_{Nn}(t\wedge\sigma_k),\ k\in\mathbb{N})$. So, letting $k\to\infty$ in (4.18), we get

$$\mathsf{E}\bigl(M_{Nn}(t)\mid\mathcal{G}_n(s)\bigr)=M_{Nn}(s),\tag{4.19}$$

that is, $M_{Nn}$ is a martingale. It is square integrable because of (4.17) and (4.15). Thus, for any $N$ and $t$, the sequence $(M_{Nn},\ n\in\mathbb{N})$ satisfies all the conditions of Lemma 4.1, which therefore asserts that

$$\mathsf{E}\bigl(\xi_{Nn}(t)\mid\mathcal{L}_n\bigr)\xrightarrow{\mathsf{P}}1\quad\text{as }n\to\infty.\tag{4.20}$$

Herein, $|\xi_n(t)|\vee|\xi_{Nn}(t)|\le e^{q(t)}$ because of (4.16), so

$$|\xi_n(t)-\xi_{Nn}(t)|\le2e^{q(t)}I\{\tau_{Nn}\le t\}.\tag{4.21}$$

Obviously, $\{\tau_{Nn}\le t\}\subset\{\sup_{s\le t}|M_n(s)|\ge N\}$. From (4.16), we have by the Lenglart-Rebolledo inequality

$$\lim_{N\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\Bigl\{\sup_{s\le t}|M_n(s)|\ge N\Bigr\}=0.\tag{4.22}$$

The last three relations imply that

$$\lim_{N\to\infty}\varlimsup_{n\to\infty}\mathsf{E}|\xi_n(t)-\xi_{Nn}(t)|=0,\tag{4.23}$$

which together with (4.20) yields (4.4).

Lemma 4.3. Let for each $n$, $M_n,M_n^1,M_n^2,\dots$ be $\mathbb{R}$-valued locally square integrable martingales starting from zero with respect to a flow $(\mathcal{G}_n(t),\,t\in\mathbb{R}_+)$, and let $\mathcal{H}_n$ be a sub-$\sigma$-algebra of $\mathcal{G}_n(0)$. Suppose that for all $m$, $t>0$, $\varepsilon>0$,
$$\lim_{n\to\infty}\mathsf{E}\max_{u\le t}\Delta M_n^m(u)^2=0, \tag{4.24}$$
$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\bigl\{\langle M_n^l-M_n\rangle(t)>\varepsilon\bigr\}=0; \tag{4.25}$$
there exists a nonrandom function $q:\mathbb{R}_+\to\mathbb{R}_+$ such that
$$\langle M_n\rangle(t)\vee\langle M_n^m\rangle(t)\le q(t) \tag{4.26}$$
for all $m$, $n$, and $t$; and, for each $m$, the sequence $(M_n^m,\,n\in\mathbb{N})$ is r.c. in C. Then, for any $t$, relation (4.4) holds.

Proof. Denote $T_n^m=\langle M_n^m\rangle/2$. By Lemma 4.2, $\mathsf{E}(e^{iM_n^m(t)+T_n^m(t)}\mid\mathcal{H}_n)\xrightarrow{\mathsf{P}}1$ as $n\to\infty$. So, in view of (4.26), it suffices to prove that
$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{E}\Bigl|e^{iM_n^l(t)+T_n^l(t)}-e^{iM_n(t)+T_n(t)}\Bigr|=0. \tag{4.27}$$
Conditions (4.25) and (4.26) imply by Lemma 3.8 that for all positive $t$ and $\varepsilon$,
$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\bigl\{|T_n^l(t)-T_n(t)|>\varepsilon\bigr\}=0. \tag{4.28}$$
Furthermore, (4.25) together with the Lenglart-Rebolledo inequality and the assumed equalities $M_n(0)=0=M_n^l(0)$ yields
$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\bigl\{|M_n^l(t)-M_n(t)|>\varepsilon\bigr\}=0, \tag{4.29}$$
which jointly with the previous relation and condition (4.26) entails (4.27).

Lemma 4.4. Let for each $n$, $M_n$ be an $\mathbb{R}$-valued locally square integrable martingale starting from zero with respect to a flow $(\mathcal{G}_n(t),\,t\in\mathbb{R}_+)$, and let $\mathcal{H}_n$ be a sub-$\sigma$-algebra of $\mathcal{G}_n(0)$. Suppose that conditions (RC) (with $Y_n=M_n$), (4.24) (for all $m$ and $t$) and (4.25) (for all $t$ and $\varepsilon$) are fulfilled;
$$\langle M_n^m\rangle(u_2)-\langle M_n^m\rangle(u_1)\le\langle M_n\rangle(u_2)-\langle M_n\rangle(u_1) \tag{4.30}$$
for all $m$ and $u_2>u_1\ge0$; and, for any $t>0$ and bounded uniformly continuous function $f:\mathbb{R}_+\to\mathbb{R}$,
$$\mathsf{E}\bigl(f(\langle M_n\rangle(t))\mid\mathcal{H}_n\bigr)-f\bigl(\langle M_n\rangle(t)\bigr)\xrightarrow{\mathsf{P}}0. \tag{4.31}$$
Then, for any $t$,
$$\mathsf{E}\bigl(e^{iM_n(t)}\mid\mathcal{H}_n\bigr)-e^{-\langle M_n\rangle(t)/2}\xrightarrow{\mathsf{P}}0. \tag{4.32}$$

Proof. (1) Let us fix $t$ and denote $\alpha_n=e^{T_n(t)}$, $\beta_n=e^{iM_n(t)}$, so that $\xi_n(t)=\alpha_n\beta_n$. If there exists a nonrandom constant $N$ such that $\langle M_n\rangle(s)\le N$ for all $n$ and $s$, then all the conditions of Lemma 4.3 are fulfilled, and therefore
$$\mathsf{E}\bigl(\alpha_n\beta_n\mid\mathcal{H}_n\bigr)\xrightarrow{\mathsf{P}}1. \tag{4.33}$$
Also, under this assumption, $\alpha_n=g_N(T_n(t))$, where
$$g_N(x)=e^{x^{[N]}},\qquad x^{[N]}=\frac{Nx}{N\vee|x|}. \tag{4.34}$$
So, substituting $f(x)=g_N(x/2)$ into (4.31), we obtain (2.7), whence by Lemma 2.5 relation (2.8) follows. Juxtaposing it with (4.33), we get $\alpha_n\mathsf{E}(\beta_n\mid\mathcal{H}_n)-1\xrightarrow{\mathsf{P}}0$. Dividing both sides of this relation by $\alpha_n$ ($\ge1$), we arrive at (4.32).
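The truncation map $x^{[N]}=Nx/(N\vee|x|)$ from (4.34) acts as the identity on $[-N,N]$ and clamps larger values to $\pm N$, so $g_N$ is bounded by $e^N$ and uniformly continuous. A minimal sketch (the function names are ours, not the paper's):

```python
import math

def trunc(x: float, N: float) -> float:
    """x^[N] = N*x / (N v |x|): identity on [-N, N], clamped to +-N outside."""
    return N * x / max(N, abs(x))

def g(x: float, N: float) -> float:
    """g_N(x) = exp(x^[N]); bounded by e^N and equal to e^x for |x| <= N."""
    return math.exp(trunc(x, N))

assert trunc(3.0, 5.0) == 3.0      # inside [-N, N]: unchanged
assert trunc(7.0, 5.0) == 5.0      # clamped from above
assert trunc(-7.0, 5.0) == -5.0    # clamped from below
assert g(2.0, 5.0) == math.exp(2.0)
assert all(g(x, 5.0) <= math.exp(5.0) for x in (-10.0, 0.0, 4.0, 100.0))
```

Boundedness and uniform continuity are exactly what make $f(x)=g_N(x/2)$ an admissible test function in (4.31).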
(2) Let us waive the extra assumption.
Denote $\sigma_n^k=\inf\{s:T_n(s)\ge k\}$,
$$T_n^k(s)=T_n(s)I_{[0,\sigma_n^k[}(s)+T_n(\sigma_n^k-)I_{[\sigma_n^k,\infty[}(s), \tag{4.35}$$
$$T_n^{mk}(s)=T_n^m(s)I_{[0,\sigma_n^k[}(s)+T_n^m(\sigma_n^k-)I_{[\sigma_n^k,\infty[}(s), \tag{4.36}$$
and likewise with $M$ instead of $T$. Lemma 2.18 asserts predictability of $\sigma_n^k$. By construction and condition (4.30), $\sigma_n^k\le\inf\{s:T_n^m(s)\ge k\}$. Thus, Theorem 2.22 asserts that $M_n^k$ and $M_n^{mk}$ are square integrable martingales and $\langle M_n^k\rangle=2T_n^k$, $\langle M_n^{mk}\rangle=2T_n^{mk}$. Consequently, for any $t_2>t_1\ge0$,
$$\langle M_n^{mk}\rangle(t_2)-\langle M_n^{mk}\rangle(t_1)\le\langle M_n^k\rangle(t_2)-\langle M_n^k\rangle(t_1). \tag{4.37}$$
In view of (4.35) and (4.34),
$$T_n(t)^{[k]}-T_n^k(t)=\bigl(k\wedge T_n(\sigma_n^k)-T_n(\sigma_n^k-)\bigr)I_{[\sigma_n^k,\infty[}(t), \tag{4.38}$$
whence
$$\bigl|T_n(t)^{[k]}-T_n^k(t)\bigr|\le\Delta T_n(\sigma_n^k)I\{\sigma_n^k\le t\}\le\max_{s\le t}\Delta T_n(s). \tag{4.39}$$
Herein condition (4.6) is fulfilled because of (RC).
Let $f$ be a bounded uniformly continuous function. Then
$$\mathsf{E}\bigl(f(T_n(t)^{[k]})\mid\mathcal{H}_n\bigr)-f\bigl(T_n(t)^{[k]}\bigr)\xrightarrow{\mathsf{P}}0 \tag{4.40}$$
by condition (4.31);
$$f\bigl(T_n^k(t)\bigr)-f\bigl(T_n(t)^{[k]}\bigr)\xrightarrow{\mathsf{P}}0 \tag{4.41}$$
on the strength of (4.39), (4.6), and uniform continuity of $f$. From the second relation we get, since $f$ is bounded,
$$\mathsf{E}\bigl(f(T_n^k(t))-f(T_n(t)^{[k]})\mid\mathcal{H}_n\bigr)\xrightarrow{\mathsf{P}}0. \tag{4.42}$$
These three relations together yield
$$\mathsf{E}\bigl(f(T_n^k(t))\mid\mathcal{H}_n\bigr)-f\bigl(T_n^k(t)\bigr)\xrightarrow{\mathsf{P}}0. \tag{4.43}$$
Thus, the sequences $(M_n^k,\,n\in\mathbb{N})$ and $(M_n^{mk},\,n\in\mathbb{N})$ satisfy all the conditions of the lemma plus the above extra assumption. Then, according to item (1), $\mathsf{E}(e^{iM_n^k(t)}\mid\mathcal{H}_n)-e^{-\langle M_n^k\rangle(t)/2}\xrightarrow{\mathsf{P}}0$ as $n\to\infty$. Hence and from (RC), relation (4.32) emerges by the same argument as (4.4) was derived from (4.20) and (4.22).

Theorem 4.5. Let for each $n$, $X_n,X_n^1,X_n^2,\dots$ be $\mathbb{R}^d$-valued locally square integrable martingales with respect to a flow $\mathbb{F}_n$. Suppose that the sequence $(\operatorname{tr}\langle X_n\rangle)$ is r.c. in C and, for all $m$, $t>s>0$, $\varepsilon>0$, $t_2>t_1\ge0$, row vectors $z\in\mathbb{R}^d$, and bounded uniformly continuous functions $f:\mathbb{R}_+\to\mathbb{R}$,
$$\lim_{n\to\infty}\mathsf{E}\max_{u\le t}|\Delta X_n^m(u)|^2=0, \tag{4.44}$$
$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\bigl\{\operatorname{tr}\langle X_n^l-X_n\rangle(t)>\varepsilon\bigr\}=0, \tag{4.45}$$
$$z\langle X_n^m\rangle(t_2)z^*-z\langle X_n^m\rangle(t_1)z^*\le z\langle X_n\rangle(t_2)z^*-z\langle X_n\rangle(t_1)z^*, \tag{4.46}$$
$$\mathsf{E}\bigl(f(z\langle X_n\rangle(t)z^*)\mid\mathcal{F}_n(s)\bigr)-f\bigl(z\langle X_n\rangle(t)z^*\bigr)\xrightarrow{\mathsf{P}}0. \tag{4.47}$$
Then: (1) for any $p,l\in\mathbb{N}$, $t_p>\dots>t_1>t_0\ge s>0$, $z_1,\dots,z_p\in\mathbb{R}^d$, and $s_l>\dots>s_1>0$,
$$\mathsf{E}\Bigl(\exp\Bigl(i\sum_{j=1}^p z_j\bigl(X_n(t_j)-X_n(t_{j-1})\bigr)\Bigr)\Bigm|\mathcal{F}_n(s)\Bigr)-\exp\Bigl(-\frac12\sum_{j=1}^p z_j\bigl(\langle X_n\rangle(t_j)-\langle X_n\rangle(t_{j-1})\bigr)z_j^*\Bigr)\xrightarrow{\mathsf{P}}0; \tag{4.48}$$
(2) under the extra assumption that the sequence $(X_n(0))$ is stochastically bounded,
$$\mathsf{E}\Bigl(\exp\Bigl(i\sum_{j=1}^p z_j\bigl(X_n(t_j)-X_n(t_{j-1})\bigr)\Bigr)F\bigl(X_n(0),\langle X_n\rangle(s_1),\dots,\langle X_n\rangle(s_l)\bigr)\Bigm|\mathcal{F}_n(s)\Bigr)-\exp\Bigl(-\frac12\sum_{j=1}^p z_j\bigl(\langle X_n\rangle(t_j)-\langle X_n\rangle(t_{j-1})\bigr)z_j^*\Bigr)F\bigl(X_n(0),\langle X_n\rangle(s_1),\dots,\langle X_n\rangle(s_l)\bigr)\xrightarrow{\mathsf{P}}0 \tag{4.49}$$
for all $p,l\in\mathbb{N}$, $t_p>\dots>t_1>t_0\ge s>0$, $z_1,\dots,z_p\in\mathbb{R}^d$, $s_l>\dots>s_1>0$, and $F\in\mathrm{C_b}(\mathbb{R}^d\times\mathfrak{S}^l)$.

Proof. The relative compactness condition implies that, for any $t$, the sequence $(\operatorname{tr}\langle X_n\rangle(t))$ is stochastically bounded. Then it follows from (4.47) by Corollary 2.8 that for any $t>t'\ge s>0$, $z\in\mathbb{R}^d$ and $\varphi\in\mathrm{C_b}(\mathbb{R}_+^2)$,
$$\mathsf{E}\Bigl|\mathsf{E}\bigl(\varphi(z\langle X_n\rangle(t)z^*,z\langle X_n\rangle(t')z^*)\mid\mathcal{F}_n(s)\bigr)-\varphi\bigl(z\langle X_n\rangle(t)z^*,z\langle X_n\rangle(t')z^*\bigr)\Bigr|\to0. \tag{4.50}$$
If, moreover, the sequence $(X_n(0))$ is stochastically bounded, then the same corollary asserts that, in the notation of formula (4.49),
$$\mathsf{E}\bigl(F(X_n(0),\langle X_n\rangle(s_1),\dots,\langle X_n\rangle(s_l))\mid\mathcal{F}_n(s)\bigr)-F\bigl(X_n(0),\langle X_n\rangle(s_1),\dots,\langle X_n\rangle(s_l)\bigr)\xrightarrow{\mathsf{P}}0. \tag{4.51}$$
Let us fix $j$ and denote $M_n(u)=z_jX_n(t_{j-1}+u)-z_jX_n(t_{j-1})$ (likewise with a superscript) and $\mathcal{G}_n(u)=\mathcal{F}_n(t_{j-1}+u)$. Then
$$\langle M_n\rangle(u)=z_j\bigl(\langle X_n\rangle(t_{j-1}+u)-\langle X_n\rangle(t_{j-1})\bigr)z_j^*, \tag{4.52}$$
$$M_n(t_j-t_{j-1})=z_j\bigl(X_n(t_j)-X_n(t_{j-1})\bigr),\qquad \langle M_n\rangle(t_j-t_{j-1})=z_j\bigl(\langle X_n\rangle(t_j)-\langle X_n\rangle(t_{j-1})\bigr)z_j^*,$$
$$\max_{u\le t}|\Delta M_n^m(u)|\le\max_{u\le t_{j-1}+t}|z_j\Delta X_n^m(u)|,\qquad \langle M_n^l-M_n\rangle(u)=z_j\bigl(\langle X_n^l-X_n\rangle(t_{j-1}+u)-\langle X_n^l-X_n\rangle(t_{j-1})\bigr)z_j^*. \tag{4.53}$$
So we have the implications (4.44) $\Rightarrow$ (4.24) and (4.45) $\Rightarrow$ (4.25). Setting in (4.50) $t=t_{j-1}+u$, $t'=t_{j-1}$, $z=z_j$, $\varphi(x,y)=f(x-y)$ ($f\in\mathrm{C_b}(\mathbb{R}_+)$) and taking into account (4.52), we get (4.31) with $\mathcal{H}_n=\mathcal{G}_n(0)$. Equality (4.52) shows that the sequence $(\langle M_n\rangle)$ is r.c. in C, since $(\operatorname{tr}\langle X_n\rangle)$ has this property. The similar equality for $M_n^m$ and condition (4.46) imply (4.30). Thus, Lemma 4.4 asserts that for any $t$ relation (4.32) with $\mathcal{H}_n=\mathcal{G}_n(0)$ holds. Putting $t=t_j-t_{j-1}$, we convert it to
$$\mathsf{E}\bigl(e^{iz_j(X_n(t_j)-X_n(t_{j-1}))}\mid\mathcal{F}_n(t_{j-1})\bigr)-e^{-z_j(\langle X_n\rangle(t_j)-\langle X_n\rangle(t_{j-1}))z_j^*/2}\xrightarrow{\mathsf{P}}0. \tag{4.54}$$
Denote the left-hand side of this relation by $\varkappa_n$. The inequality $|\varkappa_n|\le2$ allows us to rewrite it in the form $\mathsf{E}|\varkappa_n|\to0$. Consequently, for any $s\in[0,t_{j-1}]$,
$$\mathsf{E}\bigl(e^{iz_j(X_n(t_j)-X_n(t_{j-1}))}\mid\mathcal{F}_n(s)\bigr)-\mathsf{E}\bigl(e^{-z_j(\langle X_n\rangle(t_j)-\langle X_n\rangle(t_{j-1}))z_j^*/2}\mid\mathcal{F}_n(s)\bigr)\xrightarrow{\mathsf{P}}0. \tag{4.55}$$
Hence and from (4.50) (with $\varphi(x,y)=f(x-y)$), we have for $j=1,\dots,p$,
$$\mathsf{E}\bigl(e^{iz_j(X_n(t_j)-X_n(t_{j-1}))}\mid\mathcal{F}_n(s)\bigr)-e^{-z_j(\langle X_n\rangle(t_j)-\langle X_n\rangle(t_{j-1}))z_j^*/2}\xrightarrow{\mathsf{P}}0. \tag{4.56}$$
Now, (4.48) emerges from Lemma 2.4.
Relation (4.49) follows from (4.48) and (4.51) by Lemma 2.5.

Remark 4.6. Relation (4.49) implies that every partial limit (with respect to weak convergence in law) of the sequence $(X_n)$ is a process with conditionally independent increments.

The following result can facilitate the verification of condition (4.47).

Lemma 4.7. Let for each $n$, $Q_n$ be an $\mathfrak{S}$-valued or $\mathbb{R}^k$-valued random process adapted to a flow $\mathbb{F}_n$ on a probability space $(\Omega_n,\mathcal{F}_n,\mathsf{P}_n)$. Suppose that there exists a sequence $(\Lambda_n)$ of scalar random processes such that, for any $n$ and $u>0$, $\Lambda_n(u)$ is an $\mathcal{F}_n(0)$-measurable positive random variable, and, for all $t>s>0$,
$$\frac{Q_n(t)}{\Lambda_n(t)}-\frac{Q_n(s)}{\Lambda_n(s)}\xrightarrow{\mathsf{P}}0. \tag{4.57}$$
Then, for all $t>s>0$ and bounded uniformly continuous functions $g$ on $\mathfrak{S}$ (or on $\mathbb{R}^k$),
$$\mathsf{E}\bigl(g(Q_n(t))\mid\mathcal{F}_n(s)\bigr)-g\bigl(Q_n(t)\bigr)\xrightarrow{\mathsf{P}}0. \tag{4.58}$$

Proof. Denote $\lambda_n(t,s)=\Lambda_n(t)/\Lambda_n(s)$. Condition (4.57) implies that
$$\mathsf{P}\bigl(|Q_n(t)-\lambda_n(t,s)Q_n(s)|>\varepsilon\mid\mathcal{H}_n\bigr)\xrightarrow{\mathsf{P}}0 \tag{4.59}$$
for any $t>s>0$, $\varepsilon>0$ and sequence $(\mathcal{H}_n)$ whose $n$th member is a sub-$\sigma$-algebra of $\mathcal{F}_n$. Hence and from the evident inequality
$$\mathsf{E}\bigl(|g(Q_n(t))-g(\lambda_n(t,s)Q_n(s))|\mid\mathcal{H}_n\bigr)\le2\sup|g|\,\mathsf{P}\bigl(|Q_n(t)-\lambda_n(t,s)Q_n(s)|>\varepsilon\mid\mathcal{H}_n\bigr)+\sup_{\|A-B\|\le\varepsilon}|g(A)-g(B)|, \tag{4.60}$$
we get, by the choice of $g$,
$$\mathsf{E}\bigl(g(Q_n(t))-g(\lambda_n(t,s)Q_n(s))\mid\mathcal{H}_n\bigr)\xrightarrow{\mathsf{P}}0. \tag{4.61}$$
Setting here at first $\mathcal{H}_n=\mathcal{F}_n(s)$ and then $\mathcal{H}_n=\mathcal{F}_n(t)$, subtracting the second relation from the first, and recalling that the random variable $\lambda_n(t,s)Q_n(s)$ is, by the assumptions about $Q_n$ and $\Lambda_n$, $\mathcal{F}_n(s)$-measurable, we arrive at (4.58).

Example 4.8. Let $\mathcal{F}_n(t)=\mathcal{F}(nt)$ and $Q_n(t)=n^{-1}R(nt)$, where $R$ is an $\mathbb{F}$-adapted random process (so that $Q_n$ is $\mathbb{F}_n$-adapted). Writing
$$\frac{Q_n(t)}{t}-\frac{Q_n(s)}{s}=\frac{R(nt)}{nt}-\frac{R(ns)}{ns}, \tag{4.62}$$
we see that condition (4.57) will be fulfilled with $\Lambda_n(t)=t$ if we demand that $t^{-1}R(t)$ tend in probability to some limit as $t\to\infty$.
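For instance, taking $R$ to be a standard Poisson process (unit rate), $t^{-1}R(t)\to1$ in probability by the law of large numbers, so condition (4.57) holds with $\Lambda_n(t)=t$. A Monte Carlo sketch of (4.62) under this choice (the Poisson example, the seed, and the tolerance are our own):

```python
import random

random.seed(42)

def poisson_counts(t1: float, t2: float):
    """Counts of one rate-1 Poisson path at times t1 <= t2."""
    c1 = c2 = 0
    clock = random.expovariate(1.0)
    while clock <= t2:
        if clock <= t1:
            c1 += 1
        c2 += 1
        clock += random.expovariate(1.0)
    return c1, c2

n, s, t = 20000, 1.0, 2.0
c_s, c_t = poisson_counts(n * s, n * t)
# Q_n(u) = R(nu)/n, so by (4.62)
#   Q_n(t)/t - Q_n(s)/s = R(nt)/(nt) - R(ns)/(ns),
# and both ratios are close to the unit rate when n is large.
diff = c_t / (n * t) - c_s / (n * s)
assert abs(diff) < 0.05
```

Both ratios have fluctuations of order $n^{-1/2}$, so the difference in (4.62) vanishes in probability, as required.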

5. The Convergence Theorems

Theorem 5.1. Let $(Y_n)$ be a sequence of locally square integrable martingales satisfying conditions (RC), (3.10), and, for each $t$,
$$\mathsf{E}\max_{s\le t}|\Delta Y_n(s)|^2\to0. \tag{5.1}$$
Then, for any infinite set $J_0\subset\mathbb{N}$, there exist an infinite set $J\subset J_0$ and a continuous local martingale $Y^J$ such that
$$\bigl(Y_n,\langle Y_n\rangle\bigr)\xrightarrow{\mathrm{C}}\bigl(Y^J,\langle Y^J\rangle\bigr)\quad\text{as }n\to\infty,\ n\in J. \tag{5.2}$$

Proof. (1) Denote $\tau_n^l=\inf\{t:|Y_n(t)-Y_n(0)|>l\}$, $Y_n^l(t)=Y_n(t\wedge\tau_n^l)$, $K_n=\langle Y_n\rangle$, $K_n^l=\langle Y_n^l\rangle$ (so that $K_n^l(t)=K_n(t\wedge\tau_n^l)$), and
$$\eta_n=(Y_n,K_n),\qquad \eta_n^l=(Y_n^l,K_n^l), \tag{5.3}$$
regarding $\eta_n$ and $\eta_n^l$ as $\mathbb{R}^{d+d^2}$-valued processes.
Conditions (RC), (3.10), and (5.1) imply by Corollary 3.10 that the sequence $(\eta_n)$ is r.c. in C. Then, by Corollaries 2.10 and 2.11, for any $l$ the sequence of compound processes $((\eta_n^1,\dots,\eta_n^l,K_n),\,n\in\mathbb{N})$ is r.c. in C, too. Hence, using the diagonal method, we deduce that for any infinite set $J_0\subset\mathbb{N}$, there exist an infinite set $J\subset J_0$ and random processes $Y^1,K^1,Y^2,K^2,\dots$ such that for all $l$,
$$\bigl(\eta_n^1,\dots,\eta_n^l,K_n\bigr)\xrightarrow{\mathrm{C}}\bigl(\eta^1,\dots,\eta^l,K\bigr)\quad\text{as }n\to\infty,\ n\in J, \tag{5.4}$$
where
$$\eta^i=(Y^i,K^i). \tag{5.5}$$
The distribution of the right-hand side of (5.4) may depend on $J$, so the minute notation would be something like $(\eta^{J,1},\dots,\eta^{J,l},K^J)$. We suppress, "for technical reasons", the superscript $J$, keeping it in mind, however.
(2) By the definition of $Y_n^l$,
$$\sup_{s\le t}|Y_n^l(s)-Y_n^l(0)|\le l+\max_{s\le t}|\Delta Y_n(s)|, \tag{5.6}$$
which together with (5.1) shows that for any $l$ and $t$ the sequence $(\sup_{s\le t}|Y_n^l(s)-Y_n^l(0)|^2,\,n\in\mathbb{N})$ is uniformly integrable. Then it follows from (5.3)-(5.5) by Corollary 3.17 and Remark 3.13 that $Y^l$ is a continuous martingale and
$$K^l=\langle Y^l\rangle. \tag{5.7}$$
(3) Writing
$$\Bigl\{\sup_{s\le t}|\eta_n^l(s)-\eta_n(s)|>0\Bigr\}\subset\bigl\{\tau_n^l<t\bigr\}\subset\Bigl\{\sup_{s\le t}|Y_n(s)-Y_n(0)|\ge l\Bigr\}, \tag{5.8}$$
and recalling that $(Y_n)$ is r.c. in C, we arrive at (2.26).
(4) Note that the processes $\eta^1,\eta^2,\dots$ are given, in view of (5.4), on a common probability space. Let us show that
$$\lim_{l\to\infty}\sup_{i>l}\mathsf{E}\varrho\bigl(\eta^i,\eta^l\bigr)=0, \tag{5.9}$$
where $\varrho$ is the metric in D defined by
$$\varrho(f,g)=\sum_{m=1}^\infty 2^{-m}\Bigl(1\wedge\sup_{s\le m}|f(s)-g(s)|\Bigr). \tag{5.10}$$
From (5.4), we have by Lemma 2.15,
$$\sup_{s\le m}|\eta_n^i(s)-\eta_n^l(s)|\xrightarrow{\mathrm{d}}\sup_{s\le m}|\eta^i(s)-\eta^l(s)|\quad\text{as }n\to\infty,\ n\in J, \tag{5.11}$$
for all natural $m$, $i$ and $l$. Then Alexandrov's theorem asserts that for any $\varepsilon>0$,
$$\mathsf{P}\Bigl\{\sup_{s\le m}|\eta^i(s)-\eta^l(s)|>\varepsilon\Bigr\}\le\varliminf_{n\to\infty,\,n\in J}\mathsf{P}\Bigl\{\sup_{s\le m}|\eta_n^i(s)-\eta_n^l(s)|>\varepsilon\Bigr\}, \tag{5.12}$$
which together with the definitions of $\eta_n^k$, $\varliminf$ and $\varlimsup$ yields, for $i>l$,
$$\mathsf{P}\Bigl\{\sup_{s\le m}|\eta^i(s)-\eta^l(s)|>\varepsilon\Bigr\}\le\varlimsup_{n\to\infty,\,n\in J}\mathsf{P}\Bigl\{\sup_{s\le m}|Y_n(s)-Y_n(0)|\ge l\Bigr\}. \tag{5.13}$$
Hence and from the evident inequality
$$\mathsf{E}(1\wedge\gamma)\le\varepsilon+\mathsf{P}\{\gamma>\varepsilon\}, \tag{5.14}$$
where $\gamma$ is an arbitrary nonnegative random variable, we get for $i>l$,
$$\mathsf{E}\Bigl(1\wedge\sup_{s\le m}|\eta^i(s)-\eta^l(s)|\Bigr)\le\varepsilon+\varlimsup_{n\to\infty,\,n\in J}\mathsf{P}\Bigl\{\sup_{s\le m}|Y_n(s)-Y_n(0)|\ge l\Bigr\}. \tag{5.15}$$
By the Lenglart-Rebolledo inequality,
$$\mathsf{P}\Bigl\{\sup_{s\le m}|Y_n(s)-Y_n(0)|\ge l\Bigr\}\le\frac{a}{l^2}+\mathsf{P}\{\operatorname{tr}K_n(m)\ge a\} \tag{5.16}$$
for any $a>0$. Relation (5.4) implies, by Alexandrov's theorem, that
$$\varlimsup_{n\to\infty,\,n\in J}\mathsf{P}\{\operatorname{tr}K_n(m)\ge a\}\le\mathsf{P}\{\operatorname{tr}K(m)\ge a\}, \tag{5.17}$$
which together with (5.10)-(5.16) yields
$$\sup_{i>l}\mathsf{E}\varrho\bigl(\eta^i,\eta^l\bigr)\le\varepsilon+\frac{a}{l^2}+\sum_{m=1}^\infty 2^{-m}\mathsf{P}\{\operatorname{tr}K(m)\ge a\}. \tag{5.18}$$
Hence, letting $l\to\infty$, then $a\to\infty$, and finally $\varepsilon\to0$, we obtain (5.9).
(5) Obviously, $\varrho$ metrizes the $\mathcal{U}$-convergence, and the metric space $(\mathrm{C},\varrho)$ is complete. Relation (5.9) means that the sequence $(\eta^l)$ of C-valued random elements is fundamental in probability. Then, by the Riesz theorem, each of its subsequences contains a further subsequence converging w.p.1. The limits of every two convergent subsequences coincide w.p.1 because of (5.9). So there exists a C-valued random element (= continuous random process) $\eta$ such that
$$\lim_{l\to\infty}\mathsf{E}\varrho\bigl(\eta^l,\eta\bigr)=0. \tag{5.19}$$
And this is a fortified form of the relation
$$\eta^l\xrightarrow{\mathrm{C}}\eta. \tag{5.20}$$
In particular, the sequence $(\eta^l)$ is r.c. in C (which can be proved directly, but such a proof does not guarantee that the partial limits are given on the same probability space as the prelimit processes).
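The metric (5.10) can be evaluated approximately for concrete paths by truncating the series; a minimal sketch (the time grid and truncation level are our own choices, not part of the definition):

```python
# rho(f, g) = sum_{m>=1} 2^{-m} (1 ^ sup_{s<=m} |f(s) - g(s)|),
# approximated on a uniform grid with the series truncated at M terms.

def rho(f, g, M=20, steps_per_unit=100):
    total = 0.0
    for m in range(1, M + 1):
        grid = [k / steps_per_unit for k in range(m * steps_per_unit + 1)]
        sup = max(abs(f(s) - g(s)) for s in grid)
        total += 2.0 ** (-m) * min(1.0, sup)
    return total

def f(s):
    return s

def g(s):
    return s + 0.5

assert rho(f, f) == 0.0
assert rho(f, g) <= 1.0              # the series is dominated by sum 2^{-m} = 1
assert abs(rho(f, g) - 0.5) < 1e-6   # sup equals 0.5 on every window [0, m]
```

Because each summand is capped at $2^{-m}$, closeness in $\varrho$ amounts to uniform closeness on every compact $[0,m]$, which is exactly the $\mathcal{U}$-convergence invoked above.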
(6) Relation (5.4) together with the conclusions of items (3) and (5) shows that all the conditions of Corollary 2.14 (with the range of $n$ restricted to $J$) are fulfilled (and even overfulfilled: relation (5.20), proved above without recourse to Corollary 2.14, contains both an assumption and a conclusion of the latter). So, Corollary 2.14 asserts, in addition to (5.20), that
$$\eta_n\xrightarrow{\mathrm{C}}\eta\quad\text{as }n\to\infty,\ n\in J. \tag{5.21}$$
This pair of relations can be rewritten, in view of (5.3) and (5.5), in the form
$$\bigl(Y^l,K^l\bigr)\xrightarrow{\mathrm{C}}(Y,K), \tag{5.22}$$
$$\bigl(Y_n,K_n\bigr)\xrightarrow{\mathrm{C}}(Y,K)\quad\text{as }n\to\infty,\ n\in J, \tag{5.23}$$
where $(Y,K)$ is a synonym of $\eta$. We wish to stress again that, firstly, all the processes in (5.22) are given on a common probability space and, secondly, they depend on the choice of $J$.
(7) Let us show that 𝑌 is a local martingale.
Denote $\sigma_m=\inf\{t:\operatorname{tr}K(t)\ge m\}$, $M^m(t)=Y(t\wedge\sigma_m)$, and $M^{lm}(t)=Y^l(t\wedge\sigma_m)$. Equalities (5.19), (5.10), and (5.5) yield
$$\lim_{l\to\infty}\mathsf{E}\varrho\bigl(M^{lm},M^m\bigr)=0, \tag{5.24}$$
whence
$$M^{lm}\xrightarrow{\mathrm{d}}M^m\quad\text{as }l\to\infty. \tag{5.25}$$
On the strength of (5.7),
$$\langle M^{lm}\rangle(t)=K^l(t\wedge\sigma_m). \tag{5.26}$$
By the construction of the processes $Y_n^l$ and $K_n^l$, for any $s\in\mathbb{R}_+$ and $n$, the sequence $(\operatorname{tr}K_n^l(s),\,l\in\mathbb{N})$ increases. Then, due to (5.4), so does $(\operatorname{tr}K^l(s),\,l\in\mathbb{N})$. Hence, with account of (5.19), (5.10), and (5.5), we have
$$\operatorname{tr}K^l(s)\le\operatorname{tr}K(s) \tag{5.27}$$
for all $s$ and $l$. Comparing this with (5.26), we see that
$$\mathsf{E}\operatorname{tr}\langle M^{lm}\rangle(t)\le\mathsf{E}\operatorname{tr}K(t\wedge\sigma_m). \tag{5.28}$$
But $\operatorname{tr}K$ is a continuous increasing process, so $\operatorname{tr}K(\sigma_m)=m$ on $\{\sigma_m<\infty\}$ and $\operatorname{tr}K(t\wedge\sigma_m)\le m$. Now it follows from (5.25) and (5.28) by Corollary 3.3 that $M^m$ is a uniformly integrable martingale. Thus, the sequence $(\sigma_m)$ localizes $Y$.
(8) The relation $Y^l\xrightarrow{\mathrm{C}}Y$ (a part of (5.22)), where the prelimit processes are, according to item (2), continuous martingales, implies by Corollary VI.6.7 in [2] that
$$\bigl(Y^l,[Y^l]\bigr)\xrightarrow{\mathrm{C}}\bigl(Y,[Y]\bigr). \tag{5.29}$$
Comparing this with (5.22), we get, with account of (3.4), $(Y,K)\stackrel{\mathrm{d}}{=}(Y,[Y])$, whereupon Corollary 3.5 asserts that $K=\langle Y\rangle$.

Corollary 5.2. Let $(Y_n)$ be a sequence of locally square integrable martingales satisfying conditions (RC), (3.21), and, for all $t>0$, (5.1). Then $Y$ is a continuous local martingale and relation (3.25) holds.

Proof. Let $J_0$ be an arbitrary infinite set of natural numbers. Then Theorem 5.1, whose condition (3.10) is covered by (3.21), asserts the existence of an infinite set $J\subset J_0$ and a continuous local martingale $Y^J$ such that (5.2) holds. By assumption, the distribution of $Y^J$ and, consequently, of $(Y^J,\langle Y^J\rangle)$ does not depend on $J$, which allows us to delete the superscript in (5.2). Hence, taking into account the arbitrariness of $J_0$, we conclude that (5.2) holds for $J=\mathbb{N}$.

Corollary 5.3. Let a sequence $(Y_n)$ of locally square integrable martingales satisfy condition (RC) and, for all $t>0$, (5.1). Then relation (3.19) holds.

Proof. It was shown in items (1) and (2) of the proof of Theorem 5.1 that, for each $l$, the sequence $(Y_n^l,\,n\in\mathbb{N})$ satisfies all the conditions of Lemma 3.16, which therefore asserts that $[Y_n^l]-\langle Y_n^l\rangle\xrightarrow{\mathrm{C}}O$ as $n\to\infty$. Hence, by the same argument as in item (3), relation (3.19) follows.

Theorem 5.4. Let for each $n$, $X_n,X_n^1,X_n^2,\dots$ be locally square integrable martingales with respect to a common filtration. Suppose that, for all $m$, $t>0$ and $\varepsilon>0$, conditions (4.44) and (4.45) are fulfilled, and
$$\lim_{L\to\infty}\sup_l\varlimsup_{n\to\infty}\mathsf{P}\bigl\{|X_n^l(0)|>L\bigr\}=0, \tag{5.30}$$
$$\lim_{L\to\infty}\sup_l\varlimsup_{n\to\infty}\mathsf{P}\bigl\{\operatorname{tr}\langle X_n^l\rangle(t)>L\bigr\}=0, \tag{5.31}$$
$$\lim_{r\to0}\sup_l\varlimsup_{n\to\infty}\mathsf{P}\Bigl\{\sup_{(t_1,t_2)\in\Pi(t,r)}\bigl(\operatorname{tr}\langle X_n^l\rangle(t_2)-\operatorname{tr}\langle X_n^l\rangle(t_1)\bigr)>\varepsilon\Bigr\}=0, \tag{5.32}$$
$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\bigl\{|X_n^l(0)-X_n(0)|>\varepsilon\bigr\}=0. \tag{5.33}$$
Then, for any infinite set $J_0\subset\mathbb{N}$, there exist an infinite set $J\subset J_0$ and a continuous local martingale $X$ such that
$$\bigl(X_n,\langle X_n\rangle\bigr)\xrightarrow{\mathrm{C}}\bigl(X,\langle X\rangle\bigr)\quad\text{as }n\to\infty,\ n\in J. \tag{5.34}$$

Note that relation (5.34) is, up to notation, a duplicate of (5.2). So, the superscript 𝐽 on the right-hand side is tacitly implied (but suppressed because the conditions of the theorem contain another superscript).

Proof. Conditions (5.31) and (5.32) imply that, for each $m$, the sequence $(\langle X_n^m\rangle,\,n\in\mathbb{N})$ is r.c. in C. Then it follows from (4.44) and (5.30) by Lemma 3.9 that the sequence $(X_n^m,\,n\in\mathbb{N})$ is r.c. in C. So, there exist an infinite set $J_m\subset J_{m-1}$ and a random process $X^m$ such that
$$X_n^m\xrightarrow{\mathrm{C}}X^m\quad\text{as }n\to\infty,\ n\in J_m. \tag{5.35}$$
Consequently, if we denote by $J$ the set whose $m$th member is the $m$th member of $J_m$, then for each $m$,
$$X_n^m\xrightarrow{\mathrm{C}}X^m\quad\text{as }n\to\infty,\ n\in J. \tag{5.36}$$
And this together with (4.44) and relative compactness of $(\langle X_n^m\rangle,\,n\in\mathbb{N})$ implies by Corollary 5.2 that $X^m$ is a continuous local martingale and
$$\eta_n^m\equiv\bigl(X_n^m,\langle X_n^m\rangle\bigr)\xrightarrow{\mathrm{C}}\eta^m\equiv\bigl(X^m,\langle X^m\rangle\bigr)\quad\text{as }n\to\infty,\ n\in J. \tag{5.37}$$
Then it follows from (5.30)-(5.32) that
$$\lim_{L\to\infty}\sup_l\mathsf{P}\bigl\{|X^l(0)|>L\bigr\}=0,\qquad \lim_{L\to\infty}\sup_l\mathsf{P}\bigl\{\operatorname{tr}\langle X^l\rangle(t)>L\bigr\}=0,$$
$$\lim_{r\to0}\sup_l\mathsf{P}\Bigl\{\sup_{(t_1,t_2)\in\Pi(t,r)}\bigl(\operatorname{tr}\langle X^l\rangle(t_2)-\operatorname{tr}\langle X^l\rangle(t_1)\bigr)>\varepsilon\Bigr\}=0, \tag{5.38}$$
and therefore the sequences $(X^l)$, $(\langle X^l\rangle)$ and $(\eta^l)$ are r.c. in C.
Conditions (5.33) and (4.45) imply by the Lenglart-Rebolledo inequality that for all positive $t$ and $\varepsilon$,
$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\Bigl\{\sup_{s\le t}|X_n^l(s)-X_n(s)|>\varepsilon\Bigr\}=0. \tag{5.39}$$
Conditions (5.31) and (4.45) imply by Lemma 3.8 that
$$\lim_{l\to\infty}\varlimsup_{n\to\infty}\mathsf{P}\Bigl\{\sup_{s\le t}\bigl\|\langle X_n^l\rangle(s)-\langle X_n\rangle(s)\bigr\|>\varepsilon\Bigr\}=0, \tag{5.40}$$
which together with the previous relation yields (2.26). Then Corollary 2.14 asserts the existence of a random process $\eta\equiv(X,H)$ such that
$$\bigl(X^l,\langle X^l\rangle\bigr)\xrightarrow{\mathrm{C}}(X,H), \tag{5.41}$$
$$\bigl(X_n,\langle X_n\rangle\bigr)\xrightarrow{\mathrm{C}}(X,H)\quad\text{as }n\to\infty,\ n\in J. \tag{5.42}$$
The ensuing relation $X^l\xrightarrow{\mathrm{C}}X$, continuity (due to (5.37)) of all $\langle X^l\rangle$, and relative compactness of $(\langle X^l\rangle)$ imply by Corollary 5.2 that $X$ is a continuous local martingale and $(X^l,\langle X^l\rangle)\xrightarrow{\mathrm{C}}(X,\langle X\rangle)$. Comparing this with (5.41), we get $(X,H)\stackrel{\mathrm{d}}{=}(X,\langle X\rangle)$, which converts (5.42) to (5.34).

Repeating the deduction of Corollary 5.2 from Theorem 5.1, we get from Theorem 5.4 the following conclusion.

Corollary 5.5. Let for each $n$, $X_n,X_n^1,X_n^2,\dots$ be locally square integrable martingales with respect to a common filtration. Suppose that they have the same initial value; conditions (4.44), (4.45), (5.31), and (5.32) are fulfilled for all $m$ and $t$; and there exists a random process $X$ such that $X_n\xrightarrow{\mathrm{C}}X$. Then, $X$ is a continuous local martingale and $(X_n,\langle X_n\rangle)\xrightarrow{\mathrm{C}}(X,\langle X\rangle)$.

Theorem 5.6. Let for each $n$, $X_n,X_n^1,X_n^2,\dots$ be locally square integrable martingales with respect to a flow $\mathbb{F}_n$. Suppose that conditions (4.44)-(4.47) and (5.30)-(5.33) are fulfilled for all $m$, $t>s>0$, $\varepsilon>0$, $t_2>t_1\ge0$, $z\in\mathbb{R}^d$, and bounded uniformly continuous functions $f:\mathbb{R}_+\to\mathbb{R}$; and there exist an $\mathbb{R}^d$-valued random variable $X^\circ$ and an $\mathfrak{S}_+$-valued random process $H$ such that
$$\bigl(X_n(0),\langle X_n\rangle\bigr)\xrightarrow{\mathrm{C}}\bigl(X^\circ,H\bigr). \tag{5.43}$$
Then: (1) for any $p,l\in\mathbb{N}$, $t_p>\dots>t_0\ge0$, $s_l>\dots>s_1>0$, $z_1,\dots,z_p\in\mathbb{R}^d$ and bounded continuous function $F:\mathbb{R}^d\times\mathfrak{S}_+^l\to\mathbb{R}$,
$$\mathsf{E}\exp\Bigl(i\sum_{j=1}^p z_j\bigl(X_n(t_j)-X_n(t_{j-1})\bigr)\Bigr)F\bigl(X_n(0),\langle X_n\rangle(s_1),\dots,\langle X_n\rangle(s_l)\bigr)\to\mathsf{E}\exp\Bigl(-\frac12\sum_{j=1}^p z_j\bigl(H(t_j)-H(t_{j-1})\bigr)z_j^*\Bigr)F\bigl(X^\circ,H(s_1),\dots,H(s_l)\bigr); \tag{5.44}$$
(2) there exists a continuous local martingale $X$ with initial value $X^\circ$ and quadratic characteristic $H$ such that $(X_n,\langle X_n\rangle)\xrightarrow{\mathrm{C}}(X,H)$ and
$$\mathsf{E}\exp\Bigl(i\sum_{j=1}^p z_j\bigl(X(t_j)-X(t_{j-1})\bigr)\Bigr)F\bigl(X(0),\langle X\rangle(s_1),\dots,\langle X\rangle(s_l)\bigr)=\mathsf{E}\exp\Bigl(-\frac12\sum_{j=1}^p z_j\bigl(H(t_j)-H(t_{j-1})\bigr)z_j^*\Bigr)F\bigl(X^\circ,H(s_1),\dots,H(s_l)\bigr) \tag{5.45}$$
for any $p,l\in\mathbb{N}$, $t_p>\dots>t_0\ge0$, $s_l>\dots>s_1>0$, $z_1,\dots,z_p\in\mathbb{R}^d$ and bounded continuous function $F:\mathbb{R}^d\times\mathfrak{S}_+^l\to\mathbb{R}$.

Proof. Since the assumptions of this theorem contain those of Theorem 5.4, the conclusion of the latter is valid. It implies, in particular, that the sequence $(\langle X_n\rangle)$ is r.c. in C. So, firstly, the assumptions of Theorem 4.5 are also fulfilled (and therefore its conclusions are valid) and, secondly, the relation
$$\lim_{t\to0}\varlimsup_{n\to\infty}\mathsf{P}\bigl\{|X_n(t)-X_n(0)|>\varepsilon\bigr\}=0 \tag{5.46}$$
holds.
If $t_0>0$, then Theorem 4.5 asserts relation (4.49), which together with (5.43) yields, by the dominated convergence theorem, (5.44). Relation (5.46) and continuity (due to (5.43)) of $H$ enable us to let $t_0\to0$ in (5.44), thus waiving the interim assumption $t_0>0$.
Combining (4.49) with the conclusion of Theorem 5.4 and with the dominated convergence theorem, we see that for any infinite set $J_0\subset\mathbb{N}$, there exist an infinite set $J\subset J_0$ and a continuous local martingale $X$ such that for all $p,l\in\mathbb{N}$, $t_p>\dots>t_0>0$, $s_l>\dots>s_1>0$, $z_1,\dots,z_p\in\mathbb{R}^d$ and bounded continuous functions $F:\mathbb{R}^d\times\mathfrak{S}_+^l\to\mathbb{R}$,
$$\mathsf{E}\exp\Bigl(i\sum_{j=1}^p z_j\bigl(X_n(t_j)-X_n(t_{j-1})\bigr)\Bigr)F\bigl(X_n(0),\langle X_n\rangle(s_1),\dots,\langle X_n\rangle(s_l)\bigr)\to\mathsf{E}\exp\Bigl(-\frac12\sum_{j=1}^p z_j\bigl(\langle X\rangle(t_j)-\langle X\rangle(t_{j-1})\bigr)z_j^*\Bigr)F\bigl(X(0),\langle X\rangle(s_1),\dots,\langle X\rangle(s_l)\bigr) \tag{5.47}$$
as $n\to\infty$, $n\in J$. The comparison of (5.43) and (5.34) shows that the right-hand side of (5.47) equals
$$\mathsf{E}\exp\Bigl(-\frac12\sum_{j=1}^p z_j\bigl(H(t_j)-H(t_{j-1})\bigr)z_j^*\Bigr)F\bigl(X^\circ,H(s_1),\dots,H(s_l)\bigr) \tag{5.48}$$
and, therefore, does not depend on the choice of $J_0$ and $J$. So, (5.47) holds as $n$ ranges over $\mathbb{N}$, too. This together with (5.44) proves the second statement under the extra assumption $t_0>0$, which can be waived exactly as above.

Corollary 5.7. Let the conditions of Theorem 5.6 be fulfilled. Then $X$ has independent increments conditionally with respect to $\mathcal{G}\equiv\sigma\bigl(X(0),\langle X\rangle(\cdot)\bigr)$.
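In the simplest scalar case of (5.45) ($d=1$, $p=1$, $t_0=0$, $F\equiv1$), the identity reads $\mathsf{E}\,e^{izX(t)}=\mathsf{E}\,e^{-z^2H(t)/2}$ when $X(0)=0$. A Monte Carlo sketch with the illustrative choice $X(t)=\eta W(t)$, where $W$ is a Brownian motion independent of the mixing variable $\eta$, so that $\langle X\rangle(t)=\eta^2t=H(t)$ (the two-point law of $\eta$, the seed, and the tolerance are our own):

```python
import math
import random

random.seed(1)

z, t, trials = 1.0, 1.0, 100000
lhs = complex(0.0, 0.0)  # empirical E exp(i z X(t))
rhs = 0.0                # empirical E exp(-z^2 H(t)/2) with H(t) = eta^2 t
for _ in range(trials):
    eta = random.choice([0.5, 2.0])              # F(0)-measurable mixing variable
    x_t = eta * random.gauss(0.0, math.sqrt(t))  # X(t) = eta W(t), <X>(t) = eta^2 t
    lhs += complex(math.cos(z * x_t), math.sin(z * x_t)) / trials
    rhs += math.exp(-z * z * eta * eta * t / 2.0) / trials
# Conditionally on eta, X(t) is centred Gaussian with variance eta^2 t,
# so both sides estimate the same number.
assert abs(lhs - rhs) < 0.02
```

This is precisely the conditionally Gaussian picture of Corollary 5.7: given $\mathcal{G}$, the increments of $X$ are independent Gaussian with covariances determined by $H$.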