Abstract

A continuous time random walk (CTRW) is a random walk subordinated to a renewal process; it is used in physics to model anomalous diffusion. In this paper, we establish Chover-type laws of the iterated logarithm for continuous time random walks with jumps and waiting times in the domains of attraction of stable laws.

1. Introduction

Let $\{(Y_i, J_i)\}$ be a sequence of independent and identically distributed random vectors, and write $S(n)=Y_1+Y_2+\cdots+Y_n$ and $T(n)=J_1+J_2+\cdots+J_n$. Let $N_t=\max\{n\ge 0: T(n)\le t\}$ be the renewal process of the $J_i$. A continuous time random walk (CTRW) is defined by
$$X(t)=S(N_t)=\sum_{i=1}^{N_t} Y_i. \qquad (1.1)$$
In this setting, $Y_i$ represents a particle jump, and $J_i>0$ is the waiting time preceding that jump, so that $S(n)$ represents the particle location after $n$ jumps and $T(n)$ is the time of the $n$th jump. Then $N_t$ is the number of jumps by time $t>0$, and the CTRW $X(t)$ represents the particle location at time $t>0$; it is a random walk subordinated to a renewal process.
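The definition above translates directly into a simulation. The following is a minimal sketch, assuming (purely for illustration, not part of the paper) standard Pareto jumps and waiting times on $(1,\infty)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw_position(t, alpha, beta, rng):
    """Simulate the CTRW position X(t) = S(N_t), assuming (for illustration
    only) Pareto jumps Y_i and Pareto waiting times J_i supported on (1, inf)."""
    position, clock, n_jumps = 0.0, 0.0, 0
    while True:
        wait = rng.pareto(beta) + 1.0        # J_i > 0, with P(J > x) ~ x^{-beta}
        if clock + wait > t:                 # next jump would fall after time t
            break
        clock += wait                        # T(n), time of the n-th jump
        position += rng.pareto(alpha) + 1.0  # add jump Y_i, P(Y > x) ~ x^{-alpha}
        n_jumps += 1
    return position, n_jumps                 # X(t) = S(N_t) and N_t

x, n = ctrw_position(t=1000.0, alpha=1.5, beta=0.8, rng=rng)
print(x, n)
```

Note that `rng.pareto(a)` samples the Lomax (shifted Pareto) law, so adding 1 yields a standard Pareto variable with tail $x^{-a}$.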

It should be mentioned that the subordination scheme for CTRW processes goes back to Fogedby [1] and was expanded by Baule and Friedrich [2] and Magdziarz et al. [3]. It should also be mentioned that the theory of subordination holds for nonhomogeneous CTRW processes, which were introduced by Metzler et al. [4, 5] and Barkai et al. [6].

The CTRW is useful in physics for modeling anomalous diffusion. Heavy-tailed particle jumps lead to superdiffusion, where a cloud of particles spreads faster than in classical Brownian motion, while heavy-tailed waiting times lead to subdiffusion. CTRW models and the associated fractional diffusion equations are important in applications to physics, hydrology, and finance; see, for example, Berkowitz et al. [7], Metzler and Klafter [8], Scalas [9], and Meerschaert and Scalas [10] for more information. In applications to hydrology, the heavy-tailed particle jumps capture the velocity irregularities caused by heterogeneous porous media, and the waiting times model particle sticking or trapping. In applications to finance, the particle jumps are price changes or log returns, separated by random waiting times between trades.

If the jumps $Y_i$ belong to the domain of attraction of a stable law with index $\alpha$ $(0<\alpha<2)$, and the waiting times $J_i$ belong to the domain of attraction of a stable law with index $\beta$ $(0<\beta<1)$, Becker-Kern et al. [11] and Meerschaert and Scheffler [12] showed that, as $c\to\infty$,
$$c^{-\beta/\alpha} X(ct) \Longrightarrow A(E(t)), \qquad (1.2)$$
a non-Markovian limit with scaling $A(E(ct)) \overset{d}{=} c^{\beta/\alpha} A(E(t))$, where $A(t)$ is a stable Lévy motion and $E(t)$ is the inverse, or hitting time, process of a stable subordinator. Densities of the CTRW scaling limit $A(E(t))$ solve a space-time fractional diffusion equation that also involves a fractional time derivative of order $\beta$; see Meerschaert and Scheffler [13], Becker-Kern et al. [11], and Meerschaert and Scheffler [12] for complete details. Becker-Kern et al. [14], Meerschaert and Scheffler [15], and Meerschaert et al. [16] discussed related limit theorems for CTRWs based on two time scales, on triangular arrays, and with dependent jumps, respectively. The aim of the present paper is to investigate laws of the iterated logarithm for CTRWs. We establish Chover-type laws of the iterated logarithm for CTRWs with jumps and waiting times in the domains of attraction of stable laws.
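The scaling in (1.2) can be observed empirically. The following Monte Carlo sketch, under the assumption (my illustrative choice) of exact Pareto jumps and waiting times with indices $\alpha=0.8$ and $\beta=0.6$, checks that the medians of $c^{-\beta/\alpha}X(ct_0)$ stay of the same order as $c$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.8, 0.6          # hypothetical tail indices, chosen for illustration
n_paths, n_jumps = 300, 5000
t0 = 50.0

# Pareto laws on (1, inf): P(Y > x) = x^{-alpha}, P(J > x) = x^{-beta}.
waits = rng.pareto(beta, (n_paths, n_jumps)) + 1.0
jumps = rng.pareto(alpha, (n_paths, n_jumps)) + 1.0
T = np.cumsum(waits, axis=1)    # T(n): time of the n-th jump, per path
S = np.cumsum(jumps, axis=1)    # S(n): position after n jumps, per path

def X(t):
    """CTRW position X(t) = S(N_t) for every simulated path."""
    N = (T <= t).sum(axis=1)                 # N_t: jumps completed by time t
    idx = np.maximum(N - 1, 0)
    return np.where(N > 0, S[np.arange(n_paths), idx], 0.0)

# Medians of c^{-beta/alpha} X(c*t0) should stabilize as c grows, reflecting (1.2).
meds = {c: float(np.median(c ** (-beta / alpha) * X(c * t0)))
        for c in (1.0, 10.0, 100.0)}
print(meds)
```

Since every waiting time exceeds 1, using `n_jumps = 5000` guarantees $T(n)$ overshoots the largest time horizon $c\,t_0 = 5000$, so no path is truncated.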

Throughout this paper we use $C$ to denote an unspecified positive and finite constant, which may be different at each occurrence; we use "i.o." to stand for "infinitely often", "a.s." to stand for "almost surely", and "$u(x)\sim v(x)$" to stand for "$\lim_{x\to\infty} u(x)/v(x)=1$". Our main results read as follows.

Theorem 1.1. Let $\{Y_i\}$ be a sequence of i.i.d. nonnegative random variables with common distribution $F$, and let $\{J_i\}$, independent of $\{Y_i\}$, be a sequence of i.i.d. nonnegative random variables with common distribution $G$. Assume that $1-F(x)\sim x^{-\alpha}L(x)$, $0<\alpha<2$, where $L$ is a slowly varying function, and that $G$ is absolutely continuous with $1-G(x)\sim Cx^{-\beta}$, $0<\beta<1$. Let $\{B(n)\}$ be a sequence such that $nL(B(n))/B(n)^{\alpha}\to C$ as $n\to\infty$. Then one has
$$\limsup_{t\to\infty}\Bigl( \bigl(B(t^{\beta})\bigr)^{-1} X(t) \Bigr)^{1/\log\log t} = e^{1/\alpha} \quad \text{a.s.} \qquad (1.3)$$

The following is an immediate consequence of Theorem 1.1.

Corollary 1.2. If, in Theorem 1.1, the tail distribution of $Y_i$ satisfies $P(Y_1>x)\sim Cx^{-\alpha}$, then one has
$$\limsup_{t\to\infty}\bigl( t^{-\beta/\alpha} X(t) \bigr)^{1/\log\log t} = e^{1/\alpha} \quad \text{a.s.} \qquad (1.4)$$
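For Pareto-type tails the norming sequence can be computed explicitly; the following sketch (with $C'$ an unspecified positive constant, in the paper's convention) indicates how the corollary follows from Theorem 1.1:

```latex
% With 1 - F(x) ~ C x^{-alpha}, the slowly varying factor is asymptotically
% constant, so the norming sequence B(n) is a pure power up to a constant:
1-F(x)\sim Cx^{-\alpha}
\;\Longrightarrow\; L(x)\to C
\;\Longrightarrow\; B(n)\sim C' n^{1/\alpha},
\qquad\text{hence}\qquad
B\bigl(t^{\beta}\bigr)\sim C' t^{\beta/\alpha}.

% The constant is absorbed by the exponent 1/\log\log t, since both
% (C')^{-1/\log\log t} \to 1 and (1+o(1))^{1/\log\log t} \to 1:
\Bigl(\bigl(B(t^{\beta})\bigr)^{-1}X(t)\Bigr)^{1/\log\log t}
=\bigl(C'\bigr)^{-1/\log\log t}\,\bigl(1+o(1)\bigr)^{1/\log\log t}\,
 \bigl(t^{-\beta/\alpha}X(t)\bigr)^{1/\log\log t}.
```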

In the course of our arguments we often make statements that are valid only for sufficiently large values of some index. When there is no danger of confusion, we omit explicit mention of this proviso.

2. Chung Type LIL for Stable Summands

In this section we consider a Chung-type law of the iterated logarithm for sums of random variables in the domain of attraction of a stable law, which plays a key role in the proof of Theorem 1.1. When $J_i$ has a symmetric stable distribution function $G$ characterized by
$$E\exp(itJ_i)=\exp(-|t|^{\beta}) \quad \text{for } t\in\mathbb{R}, \qquad (2.1)$$
$0<\beta<2$, Chover [17] established that
$$\limsup_{n\to\infty}\bigl| n^{-1/\beta} T(n) \bigr|^{1/\log\log n} = e^{1/\beta} \quad \text{a.s.} \qquad (2.2)$$
We call (2.2) Chover's law of the iterated logarithm. Since then, several papers have been devoted to developing Chover's LIL; see, for example, Heyde [18–20], Pakshirajan and Vasudeva [21], Vasudeva [22], Qi and Cheng [23], Scheffler [24], Chen [25], and Peng and Qi [26] for reference. The obvious corresponding "lim inf" statement does not seem to have been recorded; it is the purpose of this section to do so, and the result may be of independent interest.
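Chover's normalization (2.2) can be tried out numerically. The sketch below samples symmetric stable summands via the Chambers-Mallows-Stuck method (a standard generator, not part of the paper; the index 1.2 is an arbitrary illustrative choice) and evaluates the Chover statistic along a grid of checkpoints. Convergence is extremely slow, so only the order of magnitude is meaningful:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
b = 1.2                  # stability index, 0 < b < 2 (hypothetical choice)

def sym_stable(size, a, rng):
    """Chambers-Mallows-Stuck sampler: symmetric a-stable variables with
    characteristic function E exp(itJ) = exp(-|t|^a)."""
    U = rng.uniform(-math.pi / 2, math.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(a * U) / np.cos(U) ** (1.0 / a)
            * (np.cos((1.0 - a) * U) / W) ** ((1.0 - a) / a))

n = 2_000_000
T = np.cumsum(sym_stable(n, b, rng))     # partial sums T(n)
ks = np.unique(np.logspace(2, math.log10(n), 40).astype(int))  # checkpoints n_k
# Chover statistic |n^{-1/b} T(n)|^{1/log log n} at the checkpoints:
stat = np.abs(T[ks - 1] / ks ** (1.0 / b)) ** (1.0 / np.log(np.log(ks)))
print(stat.max(), math.exp(1.0 / b))     # running max vs the limit e^{1/b}
```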

Theorem 2.1. Let $\{J_i\}$ be a sequence of i.i.d. nonnegative random variables with common distribution $G(x)$, and let $V(x)=\inf\{y>0: 1-G(y)\le 1/x\}$. Assume that $G$ is absolutely continuous and $1-G(x)\sim x^{-\beta}l(x)$, $0<\beta<1$, where $l$ is a slowly varying function. Then one has
$$\liminf_{n\to\infty}\bigl( V(n)^{-1} T(n) \bigr)^{1/\log\log n} = 1 \quad \text{a.s.} \qquad (2.3)$$

In order to prove Theorem 2.1, we need some lemmas.

Lemma 2.2. Let $h(x)$ be a slowly varying function. Then, if $y_n\to\infty$ and $z_n\to\infty$, one has, for any given $\tau>0$,
$$\lim_{n\to\infty} z_n^{-\tau}\,\frac{h(y_n z_n)}{h(y_n)} = 0, \qquad \lim_{n\to\infty} z_n^{\tau}\,\frac{h(y_n z_n)}{h(y_n)} = \infty. \qquad (2.4)$$

Proof. See Seneta [27].

Lemma 2.3. Let $\{J_i\}$ be a sequence of i.i.d. nonnegative random variables with common distribution $G$, and let $M(n)=\max\{J_1,J_2,\dots,J_n\}$. Assume that $G$ is absolutely continuous and $1-G(x)\sim x^{-\beta}l(x)$, $0<\beta<1$, where $l$ is a slowly varying function. Then, for some given small $t>0$, one has
$$\lim_{n\to\infty} E e^{tT(n)/M(n)} = \frac{e^t}{1 - t\int_0^1 e^{tx}\bigl(x^{-\beta}-1\bigr)\,dx}. \qquad (2.5)$$

Proof. We will follow the argument of Lemma 2.1 in Darling [28]. Without loss of generality we may assume $J_1=\max\{J_1,J_2,\dots,J_n\}=M(n)$, since each $J_i$ has probability $1/n$ of being the largest term, and $P(J_i=J_j)=0$ for $i\ne j$ since $G(x)$ is presumed continuous.
For notational simplicity we write the tail distribution $\overline{G}(x)=1-G(x)=P(J_1>x)$ and denote by $g(x)$ the corresponding density, so that $\overline{G}(x)=\int_x^{\infty} g(z)\,dz$. Then the joint density of $J_1,J_2,\dots,J_n$, given $J_1=M(n)$, is
$$g(x_1,x_2,\dots,x_n)=\begin{cases} n\,g(x_1)g(x_2)\cdots g(x_n) & \text{if } x_1=\max_i x_i,\\ 0 & \text{otherwise.}\end{cases} \qquad (2.6)$$
Thus
$$\begin{aligned} E e^{tT(n)/M(n)} &= \int\cdots\int e^{t(x_1+x_2+\cdots+x_n)/x_1}\, g(x_1,x_2,\dots,x_n)\,dx_1\,dx_2\cdots dx_n \\ &= n e^t \int_0^{\infty}\int_0^y\cdots\int_0^y e^{t(x_2+x_3+\cdots+x_n)/y}\, g(x_2)g(x_3)\cdots g(x_n)\, g(y)\,dx_2\,dx_3\cdots dx_n\,dy \\ &= n e^t \int_0^{\infty}\Bigl(\int_0^y e^{tx/y} g(x)\,dx\Bigr)^{n-1} g(y)\,dy. \end{aligned} \qquad (2.7)$$
Let us put
$$\phi(y,t)=y\int_0^1 e^{tx} g(xy)\,dx, \qquad (2.8)$$
so that
$$E e^{tT(n)/M(n)} = n e^t \int_0^{\infty} \bigl(\phi(y,t)\bigr)^{n-1} g(y)\,dy. \qquad (2.9)$$
It follows from Doeblin's theorem that, for $\lambda>0$,
$$\overline{G}(\lambda y)=\lambda^{-\beta}\,\overline{G}(y)\,(1+o(1)) \qquad (2.10)$$
for $y\ge y_0$ with some large $y_0>0$. For $y\le y_0$ we can choose $t>0$ small enough that $t<-\log G(y_0)$, since $G$ has a regularly varying tail distribution, so that
$$\phi(y,t)\le e^t G(y_0) < 1. \qquad (2.11)$$
It follows that
$$n e^t \int_0^{y_0} \bigl(\phi(y,t)\bigr)^{n-1} g(y)\,dy \longrightarrow 0. \qquad (2.12)$$
Consider the case $y\ge y_0$. By a slight transformation we find that
$$\phi(y,t)=1-\overline{G}(y)+t\int_0^1 e^{tx}\bigl(\overline{G}(xy)-\overline{G}(y)\bigr)\,dx = 1-\overline{G}(y)+t\,\overline{G}(y)(1+o(1))\int_0^1 e^{tx}\bigl(x^{-\beta}-1\bigr)\,dx. \qquad (2.13)$$
Putting
$$\eta=\eta(t)=t\int_0^1 e^{tx}\bigl(x^{-\beta}-1\bigr)\,dx, \qquad (2.14)$$
we have $\eta<1$ since $0<\beta<1$ and $t$ is small. Thus
$$\phi(y,t)=1-\overline{G}(y)(1-\eta)+o\bigl(\overline{G}(y)\bigr). \qquad (2.15)$$
By (2.9) and the change of variable $n\overline{G}(y)=v$,
$$E e^{tT(n)/M(n)} = e^t \int_0^n \Bigl(1-\frac{v}{n}(1-\eta)+o\Bigl(\frac{v}{n}\Bigr)\Bigr)^{n-1} dv \longrightarrow e^t \int_0^{\infty} e^{-v(1-\eta)}\,dv = \frac{e^t}{1-\eta}, \qquad (2.16)$$
which yields the desired result.
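The limit $e^t/(1-\eta)$ in (2.16) is easy to check numerically. The sketch below assumes (my illustrative choice) exact Pareto waiting times, computes $\eta$ by expanding $t\int_0^1 e^{tx}x^{-\beta}\,dx$ into the series $\sum_k t^{k+1}/(k!\,(k+1-\beta))$, and compares with a Monte Carlo estimate of $E e^{tT(n)/M(n)}$ at a moderate $n$:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
beta, t = 0.5, 0.1    # tail index 0 < beta < 1 and a small transform argument

# eta(t) = t * int_0^1 e^{tx} (x^{-beta} - 1) dx, with the first integral
# expanded termwise: t * int_0^1 e^{tx} x^{-beta} dx = sum_k t^{k+1}/(k!(k+1-beta)).
eta = sum(t ** (k + 1) / (math.factorial(k) * (k + 1 - beta)) for k in range(30))
eta -= math.expm1(t)                 # subtract t * int_0^1 e^{tx} dx = e^t - 1
exact = math.exp(t) / (1.0 - eta)    # the claimed limit of E e^{t T(n)/M(n)}

# Monte Carlo with exact Pareto(beta) waiting times, P(J > x) = x^{-beta}, x >= 1.
n, reps = 500, 10_000
J = rng.uniform(size=(reps, n)) ** (-1.0 / beta)   # inverse-CDF Pareto sampling
mc = float(np.mean(np.exp(t * J.sum(axis=1) / J.max(axis=1))))  # E e^{tT(n)/M(n)}
print(eta, exact, mc)
```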

The following large deviation result for stable summands is due to Heyde [19].

Lemma 2.4. Let $\{\xi_i\}$ be a sequence of i.i.d. nonnegative random variables with common tail distribution satisfying $P(\xi_1>x)\sim x^{-r}h(x)$, $0<r<2$, where $h$ is a slowly varying function. Let $\{\lambda_n\}$ be a sequence such that $nh(\lambda_n)/\lambda_n^{r}\to C$ as $n\to\infty$, and let $\{x_n\}$ be a sequence with $x_n\to\infty$ as $n\to\infty$. Then
$$0<\liminf_{n\to\infty}\frac{x_n^{r}\, h(\lambda_n)}{h(x_n\lambda_n)}\, P\Bigl(\sum_{i=1}^{n} \xi_i > x_n\lambda_n\Bigr) \le \limsup_{n\to\infty}\frac{x_n^{r}\, h(\lambda_n)}{h(x_n\lambda_n)}\, P\Bigl(\sum_{i=1}^{n} \xi_i > x_n\lambda_n\Bigr) < \infty. \qquad (2.17)$$
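The content of this bound is the heavy-tail "one big jump" principle: the probability that the sum exceeds a high level is of the same order as $n\,P(\xi_1 > x_n\lambda_n)$. A quick Monte Carlo sketch, assuming exact Pareto summands with $h\equiv 1$ and fixed illustrative $n$ and $x$ (both chosen by me, not by the lemma):

```python
import numpy as np

rng = np.random.default_rng(4)
r = 0.8                        # tail index, P(xi > x) = x^{-r} for x >= 1
n, x = 100, 20.0               # number of summands and a fixed high level x_n
lam = n ** (1.0 / r)           # lambda_n, chosen so that n * lam^{-r} = 1
u = x * lam                    # threshold x_n * lambda_n

reps = 100_000
xi = rng.uniform(size=(reps, n)) ** (-1.0 / r)   # exact Pareto(r) samples
p_sum = float(np.mean(xi.sum(axis=1) > u))       # Monte Carlo P(sum > u)
p_one = n * u ** (-r)                            # "one big jump": n * P(xi_1 > u)
print(p_sum, p_one, p_sum / p_one)
```

With $h\equiv 1$ the normalizing factor in (2.17) reduces to $x_n^{r}$, and the lemma says exactly that $x^{r}\,P(\sum \xi_i > x\lambda_n)$ stays bounded away from $0$ and $\infty$.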

Now we can show Theorem 2.1.

Proof of Theorem 2.1. In order to show (2.3), it is enough to show that, for all $\varepsilon>0$,
$$\liminf_{n\to\infty} (\log n)^{\varepsilon}\, V(n)^{-1} T(n) \ge 1 \quad \text{a.s.}, \qquad (2.18)$$
$$\liminf_{n\to\infty} (\log n)^{-\varepsilon}\, V(n)^{-1} T(n) \le 1 \quad \text{a.s.} \qquad (2.19)$$
We first show (2.18). Let $n_k=[\theta^k]$, $1<\theta<2$. Put again $\overline{G}(x)=1-G(x)=P(J_1>x)$, and let $\overline{G}^*$ be the inverse of $\overline{G}$. Observe that $\overline{G}^*(y)\sim y^{-1/\beta}H(1/y)$, $0<y\le 1$, where $H$ is a slowly varying function, and $V(n)=\overline{G}^*(1/n)\sim n^{1/\beta}H(n)$, so that
$$\frac{V(n_k)}{V(n_{k+1})} \longrightarrow \theta^{-1/\beta}, \qquad (2.20)$$
$$\frac{(\log n_k)^{-\varepsilon}\, V(n_k)}{\overline{G}^*\bigl((\log n_k)^{\beta\varepsilon/2}\, n_k^{-1}\bigr)} \sim \frac{(\log n_k)^{-\varepsilon}\, n_k^{1/\beta}\, H(n_k)}{n_k^{1/\beta}\, (\log n_k)^{-\varepsilon/2}\, H\bigl(n_k (\log n_k)^{-\beta\varepsilon/2}\bigr)} = (\log n_k)^{-\varepsilon/2}\, \frac{H(n_k)}{H\bigl(n_k (\log n_k)^{-\beta\varepsilon/2}\bigr)} \longrightarrow 0, \qquad (2.21)$$
by Lemma 2.2. Let $U,U_1,U_2,\dots,U_n$ be i.i.d. random variables with the distribution of $U$ Uniform on $(0,1)$, and let $M^*(n)=\max\{U_1,U_2,\dots,U_n\}$. Then, from the fact that $G(J_n)$ is a Uniform$(0,1)$ random variable, we note that $M^*(n)\overset{d}{=}G(M(n))$, $n\ge 1$. From (2.21), the nonnegativity of the $J_i$, and the monotonicity of $\overline{G}$ and $\overline{G}^*$, it follows that
$$\begin{aligned} P\bigl(T(n_k)\le (\log n_k)^{-\varepsilon} V(n_k)\bigr) &\le P\bigl(M(n_k)\le (\log n_k)^{-\varepsilon} V(n_k)\bigr) \\ &\le P\Bigl(\overline{G}^*\bigl(\overline{G}(M(n_k))\bigr) \le \overline{G}^*\bigl((\log n_k)^{\beta\varepsilon/2} n_k^{-1}\bigr)\Bigr) \\ &= P\bigl(\overline{G}(M(n_k)) \ge (\log n_k)^{\beta\varepsilon/2} n_k^{-1}\bigr) \\ &= P\bigl(1-M^*(n_k) \ge (\log n_k)^{\beta\varepsilon/2} n_k^{-1}\bigr) \\ &= P\bigl(M^*(n_k) \le 1-(\log n_k)^{\beta\varepsilon/2} n_k^{-1}\bigr) \\ &= \Bigl(P\bigl(U \le 1-(\log n_k)^{\beta\varepsilon/2} n_k^{-1}\bigr)\Bigr)^{n_k} \\ &\le \exp\bigl(-(\log n_k)^{\beta\varepsilon/2}\bigr). \end{aligned} \qquad (2.22)$$
Hence the sum over $k$ of the probabilities on the left-hand side above is finite; by the Borel-Cantelli lemma, we get
$$\liminf_{k\to\infty} (\log n_k)^{\varepsilon}\, V(n_k)^{-1} T(n_k) \ge 1 \quad \text{a.s.} \qquad (2.23)$$
Thus, by (2.20), we have
$$\liminf_{n\to\infty} (\log n)^{\varepsilon} V(n)^{-1} T(n) \ge \liminf_{k\to\infty} \min_{n_k\le n\le n_{k+1}} (\log n)^{\varepsilon} V(n)^{-1} T(n) \ge \liminf_{k\to\infty} \frac{V(n_k)}{V(n_{k+1})}\, (\log n_k)^{\varepsilon}\, V(n_k)^{-1} T(n_k) \ge \theta^{-1/\beta} \quad \text{a.s.} \qquad (2.24)$$
Therefore, by the arbitrariness of $\theta>1$, (2.18) holds.
We now show (2.19). Let $n_k=[e^{k^{1+\delta}}]$, $\delta>0$. For notational simplicity, introduce the following notations:
$$\begin{aligned} \zeta_k &= \frac{T(n_k)-T(n_{k-1})}{M(n_k-n_{k-1})}, \\ E_k &= \bigl\{ T(n_k)-T(n_{k-1}) \le (\log n_k)^{\varepsilon}\, V(n_k) \bigr\}, \\ \widetilde{E}_k &= \bigl\{ T(n_{k-1}) \ge \varepsilon (\log n_k)^{\varepsilon}\, V(n_k) \bigr\}, \\ F_k &= \bigl\{ M(n_k-n_{k-1}) \le (\log\log n_k)^{(1-\varepsilon)/\beta}\, V(n_k) \bigr\}, \\ O_k &= \bigl\{ \zeta_k \ge (\log n_k)^{\varepsilon} (\log\log n_k)^{-(1-\varepsilon)/\beta} \bigr\}, \end{aligned} \qquad (2.25)$$
where $M(n_k-n_{k-1})=\max\{J_{n_{k-1}+1},\dots,J_{n_k}\}$. By Lemma 2.3, we have
$$P(O_k) \le \exp\bigl(-t(\log n_k)^{\varepsilon}(\log\log n_k)^{-(1-\varepsilon)/\beta}\bigr)\, E e^{t\zeta_k} \le C \exp\bigl(-t(\log n_k)^{\varepsilon}(\log\log n_k)^{-(1-\varepsilon)/\beta}\bigr). \qquad (2.26)$$
Thus we get $\sum_k P(O_k)<\infty$.
Observe again that $\overline{G}^*(y)\sim y^{-1/\beta}H(1/y)$ and $V(n)\sim n^{1/\beta}H(n)$, so that
$$\frac{V(n_k)}{V(n_{k-1})} \ge e^{(1/\beta)k^{\delta}}, \qquad (2.27)$$
$$\frac{(\log\log n_k)^{(1-\varepsilon)/\beta}\, V(n_k)}{\overline{G}^*\bigl((\log\log n_k)^{1-\varepsilon}\, n_k^{-1}\bigr)} \sim (\log\log n_k)^{2(1-\varepsilon)/\beta}\, \frac{H(n_k)}{H\bigl((\log\log n_k)^{-(1-\varepsilon)}\, n_k\bigr)} \longrightarrow \infty, \qquad (2.28)$$
by Lemma 2.2. Thus we note
$$\begin{aligned} P(F_k) &\ge P\Bigl(\overline{G}^*\bigl(\overline{G}(M(n_k-n_{k-1}))\bigr) \le \overline{G}^*\bigl((\log\log n_k)^{1-\varepsilon}\, n_k^{-1}\bigr)\Bigr) \\ &= P\bigl(\overline{G}(M(n_k-n_{k-1})) \ge (\log\log n_k)^{1-\varepsilon}\, n_k^{-1}\bigr) \\ &= P\bigl(1-M^*(n_k-n_{k-1}) \ge (\log\log n_k)^{1-\varepsilon}\, n_k^{-1}\bigr) \\ &= P\bigl(M^*(n_k-n_{k-1}) \le 1-(\log\log n_k)^{1-\varepsilon}\, n_k^{-1}\bigr) \\ &= \Bigl(P\bigl(U \le 1-(\log\log n_k)^{1-\varepsilon}\, n_k^{-1}\bigr)\Bigr)^{n_k-n_{k-1}} \\ &= \bigl(1-(\log\log n_k)^{1-\varepsilon}\, n_k^{-1}\bigr)^{n_k-n_{k-1}} \\ &\ge \exp\bigl(-C(\log\log n_k)^{1-\varepsilon/2}\bigr), \end{aligned} \qquad (2.29)$$
which easily yields $\sum_k P(F_k)=\infty$. Hence, since $P(E_k)\ge P(F_k)-P(O_k)$, we get $\sum_k P(E_k)=\infty$. Since the $E_k$ are independent, by the Borel-Cantelli lemma, we get
$$\liminf_{k\to\infty} (\log n_k)^{-\varepsilon}\, V(n_k)^{-1} \bigl(T(n_k)-T(n_{k-1})\bigr) \le 1 \quad \text{a.s.} \qquad (2.30)$$
By applying Lemma 2.4 and (2.27), a simple calculation easily gives $\sum_k P(\widetilde{E}_k)<\infty$, so that
$$\limsup_{k\to\infty} (\log n_k)^{-\varepsilon}\, V(n_k)^{-1} T(n_{k-1}) = 0 \quad \text{a.s.}, \qquad (2.31)$$
which, together with (2.30), implies
$$\liminf_{k\to\infty} (\log n_k)^{-\varepsilon}\, V(n_k)^{-1} T(n_k) \le 1 \quad \text{a.s.} \qquad (2.32)$$
This yields (2.19). The proof of Theorem 2.1 is now completed.

3. Proof of Theorem 1.1

Proof of Theorem 1.1. We have to show that, for all $\varepsilon>0$,
$$\limsup_{t\to\infty} (\log t)^{-(1+\varepsilon)/\alpha}\, \bigl(B(t^{\beta})\bigr)^{-1} X(t) \le 1 \quad \text{a.s.}, \qquad (3.1)$$
$$\limsup_{t\to\infty} (\log t)^{-(1-\varepsilon)/\alpha}\, \bigl(B(t^{\beta})\bigr)^{-1} X(t) \ge 1 \quad \text{a.s.} \qquad (3.2)$$
We first show (3.1). Let $t_k=\theta^k$, $1<\theta<2$. For notational simplicity, introduce the following notations:
$$\begin{aligned} Q_k &= \bigl\{ (\log t_k)^{-(1+\varepsilon)/\alpha}\, \bigl(B(t_k^{\beta})\bigr)^{-1} S(N_{t_k}) \ge 1 \bigr\}, \\ U(x) &= (\log x)^{-\rho}\, x^{1/\beta}, \qquad \gamma_1(x)=\sup\{y: U(y)\le x\}, \qquad \rho=\frac{\varepsilon}{5\beta}, \\ \widetilde{Q}_k &= \bigl\{ (\log t_k)^{-(1+\varepsilon)/\alpha}\, \bigl(B(t_k^{\beta})\bigr)^{-1} S(\gamma_1(t_k)) \ge 1 \bigr\}, \\ R_k &= \bigl\{ N_{t_k} \ge \gamma_1(t_k) \bigr\}. \end{aligned} \qquad (3.3)$$
By (2.18), we have
$$P(R_k \text{ i.o.}) = P\bigl(T(\gamma_1(t_k)) \le t_k \text{ i.o.}\bigr) = P\bigl(T(\gamma_1(t_k)) \le C(\log \gamma_1(t_k))^{-\rho}\, V(\gamma_1(t_k)) \text{ i.o.}\bigr) = 0. \qquad (3.4)$$
Put $\overline{F}(x)=1-F(x)=P(Y_1>x)$, and let $\overline{F}^*$ be the inverse of $\overline{F}$. Recall that $\overline{F}^*(y)\sim y^{-1/\alpha}H(1/y)$, $0<y\le 1$, where $H$ is a slowly varying function, so that $B(n)=\overline{F}^*(C/n)\sim C n^{1/\alpha} H(n)$ and
$$\frac{B(t_k^{\beta})}{B(t_{k-1}^{\beta})} \longrightarrow \theta^{\beta/\alpha}. \qquad (3.5)$$
Note that
$$U\bigl((\log t_k)^{\varepsilon/4}\, t_k^{\beta}\bigr) \sim (\log t_k)^{\varepsilon/(4\beta)}\, t_k\, \bigl(\log\bigl((\log t_k)^{\varepsilon/4} t_k^{\beta}\bigr)\bigr)^{-\rho} \ge U(\gamma_1(t_k)) = t_k. \qquad (3.6)$$
Thus, since $U$ is increasing,
$$(\log t_k)^{\varepsilon/(4\alpha)}\, t_k^{\beta/\alpha} \ge \bigl(\gamma_1(t_k)\bigr)^{1/\alpha}. \qquad (3.7)$$
Hence, by Lemma 2.2,
$$(\log t_k)^{\varepsilon/(2\alpha)}\, \frac{B(t_k^{\beta})}{B(\gamma_1(t_k))} \ge C (\log t_k)^{\varepsilon/(2\alpha)}\, \frac{t_k^{\beta/\alpha}\, H(t_k^{\beta})}{\bigl(\gamma_1(t_k)\bigr)^{1/\alpha}\, H(\gamma_1(t_k))} \ge 1. \qquad (3.8)$$
Thus, by (3.8) and Lemma 2.4, we have
$$P(\widetilde{Q}_k) \le P\Bigl( S(\gamma_1(t_k)) \ge (\log t_k)^{(1+\varepsilon)/\alpha}\, \frac{B(t_k^{\beta})}{B(\gamma_1(t_k))}\, B(\gamma_1(t_k)) \Bigr) \le P\Bigl( S(\gamma_1(t_k)) \ge (\log t_k)^{(1+\varepsilon/2)/\alpha}\, B(\gamma_1(t_k)) \Bigr) \le C (\log t_k)^{-(1+\varepsilon/4)}. \qquad (3.9)$$
Therefore $\sum_k P(\widetilde{Q}_k)<\infty$. By the Borel-Cantelli lemma, we get $P(\widetilde{Q}_k \text{ i.o.})=0$.
Observe that
$$P\Bigl(\bigcup_{k=n}^{\infty} Q_k\Bigr) = P\Bigl(\bigcup_{k=n}^{\infty} Q_k \cap \bigcap_{k=n}^{\infty} R_k^c\Bigr) + P\Bigl(\bigcup_{k=n}^{\infty} Q_k \cap \Bigl(\bigcap_{k=n}^{\infty} R_k^c\Bigr)^c\Bigr) \le P\Bigl(\bigcup_{k=n}^{\infty} \widetilde{Q}_k\Bigr) + P\Bigl(\bigcup_{k=n}^{\infty} R_k\Bigr), \qquad (3.10)$$
where $E^c$ stands for the complement of $E$. Thus, letting $n\to\infty$, we have
$$P(Q_k \text{ i.o.}) \le P(\widetilde{Q}_k \text{ i.o.}) + P(R_k \text{ i.o.}) = 0, \qquad (3.11)$$
which implies that
$$\limsup_{k\to\infty} (\log t_k)^{-(1+\varepsilon)/\alpha}\, \bigl(B(t_k^{\beta})\bigr)^{-1} X(t_k) \le 1 \quad \text{a.s.} \qquad (3.12)$$
Thus, by (3.5), we have
$$\limsup_{t\to\infty} (\log t)^{-(1+\varepsilon)/\alpha}\, \bigl(B(t^{\beta})\bigr)^{-1} X(t) \le \limsup_{k\to\infty} \max_{t_{k-1}<t\le t_k} (\log t)^{-(1+\varepsilon)/\alpha}\, \bigl(B(t^{\beta})\bigr)^{-1} X(t) \le \theta^{\beta/\alpha} \limsup_{k\to\infty} (\log t_k)^{-(1+\varepsilon)/\alpha}\, \bigl(B(t_k^{\beta})\bigr)^{-1} X(t_k) \le \theta^{\beta/\alpha} \quad \text{a.s.} \qquad (3.13)$$
This yields (3.1) immediately by letting $\theta\downarrow 1$.
We now show (3.2). Let $t_k=e^{k^{1+\delta}}$, $\delta>0$. To show (3.2), it is enough to prove
$$\limsup_{k\to\infty} (\log t_k)^{-(1-\varepsilon)/\alpha}\, \bigl(B(t_k^{\beta})\bigr)^{-1} X(t_k) \ge 1 \quad \text{a.s.} \qquad (3.14)$$
Put
$$\begin{aligned} \Lambda_k &= \bigl\{ (\log t_k)^{-(1-\varepsilon)/\alpha}\, \bigl(B(t_k^{\beta})\bigr)^{-1} S(N_{t_k}) \ge 1 \bigr\}, \\ U_1(x) &= (\log x)^{\rho}\, x^{1/\beta}, \qquad \gamma_2(x)=\sup\{y: U_1(y)\le x\}, \qquad \rho=\frac{\varepsilon}{5\beta}, \\ W_k &= \bigl\{ (\log t_k)^{-(1-\varepsilon)/\alpha}\, \bigl(B(t_k^{\beta})\bigr)^{-1} \bigl(S(\gamma_2(t_k))-S(\gamma_2(t_{k-1}))\bigr) \ge 1 \bigr\}, \\ \widetilde{R}_k &= \bigl\{ N_{t_k} \ge \gamma_2(t_k) \bigr\}. \end{aligned} \qquad (3.15)$$
By (2.19), we have
$$P(\widetilde{R}_k \text{ i.o.}) = P\bigl(T(\gamma_2(t_k)) \le t_k \text{ i.o.}\bigr) = P\bigl(T(\gamma_2(t_k)) \le (\log \gamma_2(t_k))^{\rho}\, \bigl(\gamma_2(t_k)\bigr)^{1/\beta} \text{ i.o.}\bigr) = 1. \qquad (3.16)$$
Note that
$$U_1\bigl((\log t_k)^{-\varepsilon/4}\, t_k^{\beta}\bigr) \sim (\log t_k)^{-\varepsilon/(4\beta)}\, t_k\, \bigl(\log\bigl((\log t_k)^{-\varepsilon/4} t_k^{\beta}\bigr)\bigr)^{\rho} \le U_1(\gamma_2(t_k)) = t_k. \qquad (3.17)$$
Thus, since $U_1$ is increasing,
$$(\log t_k)^{-\varepsilon/(4\alpha)}\, t_k^{\beta/\alpha} \le \bigl(\gamma_2(t_k)\bigr)^{1/\alpha}. \qquad (3.18)$$
Hence, by Lemma 2.2,
$$(\log t_k)^{-\varepsilon/(2\alpha)}\, \frac{B(t_k^{\beta})}{B(\gamma_2(t_k))} \le C (\log t_k)^{-\varepsilon/(2\alpha)}\, \frac{t_k^{\beta/\alpha}\, H(t_k^{\beta})}{\bigl(\gamma_2(t_k)\bigr)^{1/\alpha}\, H(\gamma_2(t_k))} \longrightarrow 0. \qquad (3.19)$$
Similarly, since $t_k/t_{k-1}\to\infty$, one has
$$\frac{B(\gamma_2(t_k))}{B\bigl(\gamma_2(t_k)-\gamma_2(t_{k-1})\bigr)} \longrightarrow 1. \qquad (3.20)$$
Thus, by Lemma 2.4, we have
$$P(W_k) \ge P\Bigl( S(\gamma_2(t_k))-S(\gamma_2(t_{k-1})) \ge (\log t_k)^{(1-\varepsilon)/\alpha}\, \frac{B(t_k^{\beta})}{B(\gamma_2(t_k))}\, B(\gamma_2(t_k)) \Bigr) \ge P\Bigl( S(\gamma_2(t_k))-S(\gamma_2(t_{k-1})) \ge (\log t_k)^{(1-\varepsilon/2)/\alpha}\, B(\gamma_2(t_k)) \Bigr) \ge C (\log t_k)^{-(1-\varepsilon/4)}. \qquad (3.21)$$
Therefore $\sum_k P(W_k)=\infty$. Since the events $\{W_k\}$ are independent, by the Borel-Cantelli lemma, we get $P(W_k \text{ i.o.})=1$.
Now, observe that
$$P\Bigl(\bigcup_{k=m}^{\infty} \Lambda_k\Bigr) \ge P\Bigl(\bigcup_{k=m}^{\infty} (\Lambda_k \cap \widetilde{R}_k)\Bigr) \ge P\Bigl(\bigcup_{k=m}^{\infty} \bigl\{ (\log t_k)^{-(1-\varepsilon)/\alpha}\, \bigl(B(t_k^{\beta})\bigr)^{-1} S(\gamma_2(t_k)) \ge 1 \bigr\}\Bigr)\, P\Bigl(\bigcap_{k=m}^{\infty} \widetilde{R}_k\Bigr) \ge P\Bigl(\bigcup_{k=m}^{\infty} W_k\Bigr)\, P\Bigl(\bigcap_{k=m}^{\infty} \widetilde{R}_k\Bigr). \qquad (3.22)$$
Therefore, letting $m\to\infty$, we get
$$P(\Lambda_k \text{ i.o.}) \ge P(W_k \text{ i.o.}) - P\bigl((\widetilde{R}_k \text{ i.o.})^c\bigr) = 1, \qquad (3.23)$$
which implies (3.14). The proof of Theorem 1.1 is now completed.

Remark 3.1. By the proof of Theorem 1.1, (1.3) can be modified as follows:
$$\limsup_{t\to\infty} (\log t)^{-1/\alpha}\, \bigl(B(t^{\beta})\bigr)^{-1} X(t) = 1 \quad \text{a.s.} \qquad (3.24)$$
That is to say, the form of (1.3) is not accidental: the variables $(B(t^{\beta}))^{-1} X(t)$ must additionally be cut down by the factor $(\log t)^{-1/\alpha}$ to achieve a finite lim sup.
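Taking logarithms makes the relation between (1.3) and (3.24) transparent; the following equivalence is elementary:

```latex
% Raising to the power 1/\log\log t and taking logarithms are inverse
% operations here, so (1.3) is equivalent to a statement about log-growth:
\limsup_{t\to\infty}
  \Bigl(\bigl(B(t^{\beta})\bigr)^{-1}X(t)\Bigr)^{1/\log\log t}=e^{1/\alpha}
\quad\Longleftrightarrow\quad
\limsup_{t\to\infty}
  \frac{\log\bigl((B(t^{\beta}))^{-1}X(t)\bigr)}{\log\log t}=\frac{1}{\alpha}.

% Form (3.24) then identifies the exact order of the exceptional growth:
% along the extreme times, (B(t^{\beta}))^{-1}X(t) is of order (\log t)^{1/\alpha}.
```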

Acknowledgments

The authors wish to express their deep gratitude to a referee for his/her valuable comments on an earlier version, which improved the quality of this paper. K. S. Hwang is supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD) (KRF-2006-353-C00004), and W. Wang is supported by NSFC Grant 11071076.