Abstract

This paper considers necessary and sufficient conditions for the solution of a stochastically and deterministically perturbed Volterra equation to converge exponentially to a nonequilibrium and nontrivial limit. Convergence in an almost sure and $p$th mean sense is obtained.

1. Introduction

In this paper, we study the exponential convergence of the solution of
$$dX(t)=\Big(AX(t)+\int_0^t K(t-s)X(s)\,ds+f(t)\Big)dt+\Sigma(t)\,dB(t),\quad t>0,\tag{1.1a}$$
$$X(0)=X_0,\tag{1.1b}$$
to a nontrivial random variable. Here the solution $X$ is an $n$-dimensional vector-valued function on $[0,\infty)$, $A$ is a real $n\times n$ matrix, $K$ is a continuous and integrable $n\times n$ matrix-valued function on $[0,\infty)$, $f$ is a continuous $n$-dimensional vector-valued function on $[0,\infty)$, $\Sigma$ is a continuous $n\times d$ matrix-valued function on $[0,\infty)$, and $B(t)=(B_1(t),B_2(t),\dots,B_d(t))$, where each component of the Brownian motion is independent. The initial condition $X_0$ is a deterministic constant vector.
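To fix ideas, the following is a minimal numerical sketch of (1.1a)-(1.1b) in the scalar case $n=d=1$, using an Euler-Maruyama step and a Riemann sum for the convolution. The particular choices of $A$, $K$, $f$, $\Sigma$, $X_0$ and the step size are illustrative assumptions only and are not taken from the analysis below.

```python
import numpy as np

# Illustrative scalar data (assumptions, not from the paper).
A = -1.0
K = lambda t: 0.5 * np.exp(-2.0 * t)   # continuous, integrable kernel
f = lambda t: np.exp(-3.0 * t)         # continuous, integrable perturbation
Sigma = lambda t: np.exp(-1.5 * t)     # continuous noise intensity
X0, T, h = 1.0, 20.0, 1e-2

rng = np.random.default_rng(0)
N = int(T / h)
t = np.arange(N + 1) * h
X = np.empty(N + 1)
X[0] = X0
for m in range(N):
    # left Riemann sum approximating the Volterra convolution over [0, t_m)
    conv = h * np.dot(K(t[m] - t[:m]), X[:m]) if m > 0 else 0.0
    drift = A * X[m] + conv + f(t[m])
    X[m + 1] = X[m] + drift * h + Sigma(t[m]) * np.sqrt(h) * rng.standard_normal()
print("X(T) ~", X[-1])
```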

The solution of (1.1a)-(1.1b) can be written in terms of the solution of the resolvent equation
$$R'(t)=AR(t)+\int_0^t K(t-s)R(s)\,ds,\quad t>0,\tag{1.2a}$$
$$R(0)=I,\tag{1.2b}$$
where the matrix-valued function $R$ is known as the resolvent or fundamental solution. In [1], the authors studied the asymptotic convergence of the solution $R$ of (1.2a)-(1.2b) to a nontrivial limit $R_\infty$. It was found that $R-R_\infty$ being integrable and the kernel being exponentially integrable were necessary and sufficient for exponential convergence. This built upon a result of Murakami [2] who considered the exponential convergence of the solution to a trivial limit and a result of Krisztin and Terjéki [3] who obtained necessary and sufficient conditions for the integrability of $R-R_\infty$. A deterministically perturbed version of (1.2a)-(1.2b),
$$x'(t)=Ax(t)+\int_0^t K(t-s)x(s)\,ds+f(t),\quad t>0,\tag{1.3a}$$
$$x(0)=x_0,\tag{1.3b}$$
was also studied in [1]. It was shown that the exponential decay of the tail of the perturbation $t\mapsto\int_t^\infty f(s)\,ds$ combined with the integrability of $R-R_\infty$ and the exponential integrability of the kernel were necessary and sufficient conditions for convergence to a nontrivial limit.

The case where (1.2a)-(1.2b) is stochastically perturbed,
$$dX(t)=\Big(AX(t)+\int_0^t K(t-s)X(s)\,ds\Big)dt+\Sigma(t)\,dB(t),\quad t>0,\tag{1.4a}$$
$$X(0)=X_0,\tag{1.4b}$$
has been considered. Various authors including Appleby and Freeman [4], Appleby and Riedle [5], Mao [6], and Mao and Riedle [7] have studied convergence to equilibrium. In particular, the paper by Appleby and Freeman [4] considered the speed of convergence of solutions of (1.4a)-(1.4b) to equilibrium. It was shown that, under the condition that the kernel does not change sign on $[0,\infty)$, (i) the almost sure exponential convergence of the solution to zero, (ii) the $p$th mean exponential convergence of the solution to zero, and (iii) the exponential integrability of the kernel and the exponential square integrability of the noise are equivalent.

Two papers by Appleby et al. [8, 9] considered the convergence of solutions of (1.4a)-(1.4b) to a nonequilibrium limit in the mean square and almost sure senses, respectively. Conditions on the resolvent, kernel, and noise for the convergence of solutions to an explicit limiting random variable were found. A natural progression from this work is the analysis of the speed of convergence.

This paper examines (1.1a)-(1.1b) and builds on the results in [1, 8, 9]. The analysis of (1.1a)-(1.1b) is complicated, particularly in the almost sure case, due to the presence of both a deterministic and a stochastic perturbation. Nonetheless, the set of conditions which characterise the exponential convergence of the solution of (1.1a)-(1.1b) to a nontrivial random variable is found. It can be shown that the integrability of $R-R_\infty$, the exponential integrability of the kernel, and the exponential square integrability of the noise, combined with the exponential decay of the tail of the deterministic perturbation, $t\mapsto\int_t^\infty f(s)\,ds$, are necessary and sufficient conditions for exponential convergence of the solution to a nontrivial random limit.

2. Mathematical Preliminaries

In this section, we introduce some standard notation as well as giving a precise definition of (1.1a)-(1.1b) and its solution.

Let πžπ‘– denote the set of real numbers and let 𝑖 denote the set of ℝ𝑛-dimensional vectors with entries in ‖𝐴‖. Denote by 𝐴=(π‘Ž1,…,π‘Žπ‘›) the ‖‖𝐴‖‖2=𝑛𝑖=1π‘Ž2𝑖=tr𝐴𝐴𝑇,(2.1)th standard basis vector in tr. Denote by ℝ𝑛×𝑛 the standard Euclidean norm for a vector 𝑛×𝑛 given by𝐼where diag(π‘Ž1,π‘Ž2,…,π‘Žπ‘›) denotes the trace of a square matrix.

Let $\mathbb{R}^{n\times n}$ be the space of $n\times n$ matrices with real entries, where $I$ is the identity matrix. Let $\operatorname{diag}(a_1,a_2,\dots,a_n)$ denote the $n\times n$ matrix with the scalar entries $a_1,a_2,\dots,a_n$ on the diagonal and $0$ elsewhere. For $A=(a_{ij})\in\mathbb{R}^{n\times d}$ the norm denoted by $\|\cdot\|$ is defined by
$$\|A\|^2=\sum_{i=1}^n\sum_{j=1}^d|a_{ij}|^2.\tag{2.2}$$

The set of complex numbers is denoted by $\mathbb{C}$; the real part of $z\in\mathbb{C}$ is denoted by $\operatorname{Re}z$. The Laplace transform of the function $A\colon[0,\infty)\to\mathbb{R}^{n\times d}$ is defined as
$$\hat{A}(z)=\int_0^\infty A(t)e^{-zt}\,dt.\tag{2.3}$$
If $\epsilon\in\mathbb{R}$ and $\int_0^\infty\|A(t)\|e^{-\epsilon t}\,dt<\infty$, then $\hat{A}(z)$ exists for $\operatorname{Re}z\ge\epsilon$ and $z\mapsto\hat{A}(z)$ is analytic for $\operatorname{Re}z>\epsilon$.

If $J$ is an interval in $\mathbb{R}$ and $V$ a finite-dimensional normed space with norm $\|\cdot\|$, then $C(J,V)$ denotes the family of continuous functions $\phi\colon J\to V$. The space of Lebesgue integrable functions $\phi\colon[0,\infty)\to V$, where $\int_0^\infty\|\phi(t)\|\,dt<\infty$, will be denoted by $L^1([0,\infty),V)$. The space of Lebesgue square-integrable functions $\phi\colon[0,\infty)\to V$, where $\int_0^\infty\|\phi(t)\|^2\,dt<\infty$, will be denoted by $L^2([0,\infty),V)$. When $V$ is clear from the context, it is omitted from the notation.

We now make our problem precise. We assume that the kernel $K\colon[0,\infty)\to\mathbb{R}^{n\times n}$ satisfies
$$K\in C([0,\infty),\mathbb{R}^{n\times n})\cap L^1([0,\infty),\mathbb{R}^{n\times n}),\tag{2.4}$$
the function $f\colon[0,\infty)\to\mathbb{R}^n$ satisfies
$$f\in C([0,\infty),\mathbb{R}^n)\cap L^1([0,\infty),\mathbb{R}^n),\tag{2.5}$$
and the function $\Sigma\colon[0,\infty)\to\mathbb{R}^{n\times d}$ satisfies
$$\Sigma\in C([0,\infty),\mathbb{R}^{n\times d}).\tag{2.6}$$
Due to (2.4) we may define $K_1$ to be the function
$$K_1(t)=\int_t^\infty K(s)\,ds,\quad t\ge0,\tag{2.7}$$
where this function defines the tail of the kernel. Similarly, due to (2.5), we may define $f_1$ to be the function
$$f_1(t)=\int_t^\infty f(s)\,ds,\quad t\ge0.\tag{2.8}$$
We let $\{B(t)\}_{t\ge0}$ denote $d$-dimensional Brownian motion on a complete probability space $(\Omega,\mathcal{F},\{\mathcal{F}^B(t)\}_{t\ge0},\mathbb{P})$, where the filtration is the natural one, $\mathcal{F}^B(t)=\sigma\{B(s)\colon 0\le s\le t\}$.

Under the hypothesis (2.4), it is well known that (1.2a)-(1.2b) has a unique continuous solution $R$, which is continuously differentiable. We define the function $t\mapsto X(t;X_0,\Sigma,f)$ to be the unique solution of the initial value problem (1.1a)-(1.1b). If $\Sigma$ and $f$ are continuous, then for any deterministic initial condition $X_0$ there exists an almost surely unique continuous and $\mathcal{F}^B$-adapted solution to (1.1a)-(1.1b) given by
$$X(t;X_0,\Sigma,f)=R(t)X_0+\int_0^t R(t-s)f(s)\,ds+\int_0^t R(t-s)\Sigma(s)\,dB(s),\quad t\ge0.\tag{2.9}$$
When $X_0$, $\Sigma$, and $f$ are clear from the context, we omit them from the notation.
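As an informal complement to (2.9), the sketch below advances the resolvent equation (1.2a)-(1.2b) numerically and then evaluates the mean of the solution, $\mathbb{E}[X(t)]=R(t)X_0+\int_0^t R(t-s)f(s)\,ds$ (the stochastic integral in (2.9) has zero mean). The scalar data and the step size are the same illustrative assumptions used in the sketch in Section 1.

```python
import numpy as np

# Illustrative scalar data (assumptions, not from the paper).
A = -1.0
K = lambda t: 0.5 * np.exp(-2.0 * t)
f = lambda t: np.exp(-3.0 * t)
X0, T, h = 1.0, 20.0, 1e-2

N = int(T / h)
t = np.arange(N + 1) * h

# Resolvent from (1.2a)-(1.2b): R'(t) = A R(t) + (K * R)(t), R(0) = 1,
# advanced by explicit Euler with a left Riemann sum for the convolution.
R = np.empty(N + 1)
R[0] = 1.0
for m in range(N):
    conv = h * np.dot(K(t[m] - t[:m]), R[:m]) if m > 0 else 0.0
    R[m + 1] = R[m] + h * (A * R[m] + conv)

# Deterministic part of the variation-of-parameters formula (2.9).
EX_T = R[N] * X0 + h * np.dot(R[N - np.arange(N)], f(t[:N]))
print("R(T) ~", R[-1], " E[X(T)] ~", EX_T)
```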

The notions of convergence and integrability in the $p$th mean and almost sure senses are now defined. The $\mathbb{R}^n$-valued stochastic process $\{X(t)\}_{t\ge0}$ converges in $p$th mean to $X_\infty$ if $\lim_{t\to\infty}\mathbb{E}\|X(t)-X_\infty\|^p=0$; the process is $p$th mean exponentially convergent to $X_\infty$ if there exists a deterministic $\beta_p>0$ such that
$$\limsup_{t\to\infty}\frac{1}{t}\log\big(\mathbb{E}\|X(t)-X_\infty\|^p\big)\le-\beta_p;\tag{2.10}$$
we say that the difference between the stochastic process $X$ and $X_\infty$ is integrable in the $p$th mean sense if
$$\int_0^\infty\mathbb{E}\|X(t)-X_\infty\|^p\,dt<\infty.\tag{2.11}$$
If there exists a $\mathbb{P}$-null set $\Omega_0$ such that for every $\omega\notin\Omega_0$ the following holds: $\lim_{t\to\infty}X(t,\omega)=X_\infty(\omega)$, then $X$ converges almost surely to $X_\infty$; we say $\{X(t)\}_{t\ge0}$ is almost surely exponentially convergent to $X_\infty$ if there exists a deterministic $\beta_0>0$ such that
$$\limsup_{t\to\infty}\frac{1}{t}\log\|X(t,\omega)-X_\infty(\omega)\|\le-\beta_0,\quad\text{a.s.}\tag{2.12}$$
Finally, the difference between the stochastic process $X$ and $X_\infty$ is square integrable in the almost sure sense if
$$\int_0^\infty\|X(t,\omega)-X_\infty(\omega)\|^2\,dt<\infty.\tag{2.13}$$
Henceforth, $\mathbb{E}[X^p]$ will be denoted by $\mathbb{E}X^p$ except in cases where the meaning may be ambiguous. A number of inequalities are used repeatedly in the sequel; they are stated here for clarity. If, for $p,q\in(0,\infty)$, the finite-dimensional random variables $X$ and $Y$ satisfy $\mathbb{E}\|X\|^p<\infty$ and $\mathbb{E}\|Y\|^q<\infty$, respectively, then Lyapunov's inequality
$$\mathbb{E}\big[\|X\|^p\big]^{1/p}\le\mathbb{E}\big[\|X\|^q\big]^{1/q},\quad 0<p\le q,\tag{2.14}$$
is useful when considering the $p$th mean behaviour of random variables, as any exponent $p>0$ may be considered. The following proves useful in manipulating norms:
$$\Big(\sum_{i=1}^n|x_i|\Big)^k\le n^{k-1}\sum_{i=1}^n|x_i|^k,\quad n,k\in\mathbb{N}.\tag{2.15}$$
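The two inequalities (2.14) and (2.15) can be checked numerically; a small sketch (with an arbitrarily chosen Gaussian test vector, purely for illustration) is given below.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo illustration of Lyapunov's inequality (2.14) with p = 1, q = 2,
# applied to the Euclidean norm of a standard Gaussian vector in R^3.
samples = rng.standard_normal((100_000, 3))
norms = np.linalg.norm(samples, axis=1)
p, q = 1.0, 2.0
lhs = np.mean(norms ** p) ** (1 / p)
rhs = np.mean(norms ** q) ** (1 / q)
print("Lyapunov (2.14):", lhs <= rhs, lhs, rhs)

# Direct check of the norm inequality (2.15) for one vector and k = 3.
x = rng.standard_normal(5)
n, k = len(x), 3
print("(2.15):", np.sum(np.abs(x)) ** k <= n ** (k - 1) * np.sum(np.abs(x) ** k))
```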

3. Discussion of Results

We begin by stating the main result of this paper. That is, we state the necessary and sufficient conditions required on the resolvent, kernel, deterministic perturbation, and noise terms for the solution of (1.1a)-(1.1b) to converge exponentially to a limiting random variable. In this paper, we are particularly interested in the case when the limiting random variable is nontrivial, although the result is still true for the case when the limiting value is zero.

Theorem 3.1. Let $K$ satisfy (2.4) and
$$\int_0^\infty t^2\|K(t)\|\,dt<\infty,\tag{3.1}$$
let $\Sigma$ satisfy (2.6), and let $f$ satisfy (2.5). If $K$ satisfies
$$\text{each entry of $K$ does not change sign on $[0,\infty)$},\tag{3.2}$$
then the following are equivalent.
(i) There exists a constant matrix $R_\infty$ such that the solution $R$ of (1.2a)-(1.2b) satisfies
$$R-R_\infty\in L^2([0,\infty),\mathbb{R}^{n\times n}),\tag{3.3}$$
and there exist constants $\alpha>0$, $\gamma>0$, $\rho>0$, and $c_1>0$ such that $K$ satisfies
$$\int_0^\infty\|K(s)\|e^{\alpha s}\,ds<\infty,\tag{3.4}$$
$\Sigma$ satisfies
$$\int_0^\infty\|\Sigma(s)\|^2e^{2\gamma s}\,ds<\infty,\tag{3.5}$$
and the tail of $f$, $f_1$, defined by (2.8), satisfies
$$\|f_1(t)\|\le c_1e^{-\rho t},\quad t\ge0.\tag{3.6}$$
(ii) For all initial conditions $X_0$ and constants $p>0$ there exists an a.s. finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma,f)$ with $\mathbb{E}\|X_\infty\|^p<\infty$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma,f)$ which obeys (1.1a)-(1.1b) satisfies
$$\mathbb{E}\|X(t)-X_\infty\|^p\le m_p^*e^{-\beta_p^*t},\quad t\ge0,\tag{3.7}$$
where $\beta_p^*$ and $m_p^*=m_p^*(X_0)$ are positive constants.
(iii) For all initial conditions $X_0$ there exists an a.s. finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma,f)$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma,f)$ which obeys (1.1a)-(1.1b) satisfies
$$\limsup_{t\to\infty}\frac{1}{t}\log\|X(t)-X_\infty\|\le-\beta_0^*\quad\text{a.s.},\tag{3.8}$$
where $\beta_0^*$ is a positive constant.
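As an informal illustration of how the rates in Theorem 3.1 can be seen in simulation, the sketch below reuses the scalar example from Section 1, approximates $X_\infty$ on each path by its value at a large terminal time, and fits the slope of $\log\mathbb{E}\|X(t)-X_\infty\|^2$ over an intermediate window. This is an assumption-laden diagnostic, not part of the proof of the theorem.

```python
import numpy as np

# Illustrative scalar data (assumptions, not from the paper).
A, X0, T, h, paths = -1.0, 1.0, 20.0, 1e-2, 100
K = lambda t: 0.5 * np.exp(-2.0 * t)
f = lambda t: np.exp(-3.0 * t)
Sigma = lambda t: np.exp(-1.5 * t)
rng = np.random.default_rng(2)
N = int(T / h)
t = np.arange(N + 1) * h

X = np.full((paths, N + 1), X0)
for m in range(N):
    conv = h * (X[:, :m] @ K(t[m] - t[:m])) if m > 0 else 0.0
    drift = A * X[:, m] + conv + f(t[m])
    X[:, m + 1] = X[:, m] + drift * h + Sigma(t[m]) * np.sqrt(h) * rng.standard_normal(paths)

# Proxy for X_infinity: the terminal value of each path; then fit
# log E|X(t) - X_inf|^2 ~ const - 2*beta*t on an intermediate window.
err2 = np.mean((X - X[:, -1:]) ** 2, axis=0)
window = (t > 5.0) & (t < 15.0)
slope = np.polyfit(t[window], np.log(err2[window]), 1)[0]
print("estimated mean-square decay rate ~", -slope)
```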

The proof of Theorem 3.1 is complicated by the presence of two perturbations, so as an initial step the case when $f\equiv0$ is considered. That is, we consider the conditions required for exponential convergence of (1.4a)-(1.4b) to a limiting random variable.

Theorem 3.2. Let $K$ satisfy (2.4) and (3.1) and let $\Sigma$ satisfy (2.6). If $K$ satisfies (3.2), then the following are equivalent.
(i) There exists a constant matrix $R_\infty$ such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3), and there exist constants $\alpha>0$ and $\gamma>0$ such that $K$ and $\Sigma$ satisfy (3.4) and (3.5), respectively.
(ii) For all initial conditions $X_0$ and constants $p>0$ there exists an a.s. finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma)$ with $\mathbb{E}\|X_\infty\|^p<\infty$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma)$ which obeys (1.4a)-(1.4b) satisfies
$$\mathbb{E}\|X(t)-X_\infty\|^p\le m_pe^{-\beta_pt},\quad t\ge0,\tag{3.9}$$
where $\beta_p$ and $m_p=m_p(X_0)$ are positive constants.
(iii) For all initial conditions $X_0$ there exists an a.s. finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma)$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma)$ which obeys (1.4a)-(1.4b) satisfies
$$\limsup_{t\to\infty}\frac{1}{t}\log\|X(t)-X_\infty\|\le-\beta_0\quad\text{a.s.},\tag{3.10}$$
where $\beta_0$ is a positive constant.

This result is interesting in its own right as it generalises a result in [4] where necessary and sufficient conditions for exponential convergence to zero are found. Theorem 3.2 collapses to this case if $R_\infty=0$.

It is interesting to note the relationship between the behaviour of the solutions of (1.1a)-(1.1b), (1.2a)-(1.2b), (1.3a)-(1.3b), and (1.4a)-(1.4b) and the behaviour of the inputs $K$, $f$, and $\Sigma$. It is seen in [1] that $K$ being exponentially integrable is the crucial condition for exponential convergence when we consider the resolvent equation. Each perturbed equation then builds on this resolvent case: for the deterministically perturbed equation we require the exponential integrability of $K$ and the exponential decay of the tail of the perturbation $f$ (see [1]); for the stochastically perturbed case we require the exponential integrability of $K$ and the exponential square integrability of $\Sigma$. In the stochastically and deterministically perturbed case it is seen that the perturbations do not interact in a way that exacerbates or diminishes the influence of the perturbations on the system: we can isolate the behaviours of the perturbations and show that the same conditions on the perturbations are still necessary and sufficient.

Theorem 3.1 has application in the analysis of initial history problems. In particular this theoretical result could be used to interpret the equation as an epidemiological model. Conditions under which a disease becomes endemic (which is the interpretation that is given when solutions settle down to a nontrivial limit) were studied in [9]. The theoretical results obtained in this paper could be exploited to highlight the speed at which this can occur within a population.

The remainder of this paper deals with the proofs of Theorems 3.1 and 3.2. In Section 4 we prove the sufficiency of conditions on $R$, $K$, and $\Sigma$ for the exponential convergence of the solution of (1.4a)-(1.4b), while in Section 5 we prove the necessity of these conditions. In Section 6 we prove the sufficiency of conditions on $R$, $K$, $\Sigma$, and $f$ for the exponential convergence of the solution of (1.1a)-(1.1b), while Section 7 deals with the necessity of the conditions on $\Sigma$ and $f$. In Section 8 we combine our results to prove the main theorems, namely, Theorems 3.1 and 3.2.

4. Sufficient Conditions for Exponential Convergence of Solutions of (1.4a)-(1.4b)

In this section, sufficient conditions for exponential convergence of solutions of (1.4a)-(1.4b) to a nontrivial limit are obtained. Proposition 4.1 concerns convergence in the $p$th mean sense while Proposition 4.2 deals with the almost sure case.

Proposition 4.1. Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$ and $\gamma>0$ such that (3.4) and (3.5) hold, then there exist constants $\beta_p>0$, independent of $X_0$, and $m_p=m_p(X_0)>0$, such that statement (ii) of Theorem 3.2 holds.

Proposition 4.2. Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$ and $\gamma>0$ such that (3.4) and (3.5) hold, then there exists a constant $\beta_0>0$, independent of $X_0$, such that statement (iii) of Theorem 3.2 holds.

In [8], the conditions which give mean square convergence to a nontrivial limit were considered. So a natural progression in this paper is the examination of the speed of convergence in the mean square case. Lemma 4.3 examines the case when $p=2$ in order to highlight this important case. This lemma may then be used when generalising the result to all $p>0$.

Lemma 4.3. Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$ and $\gamma>0$ such that (3.4) and (3.5) hold, then there exist constants $\lambda>0$, independent of $X_0$, and $m=m(X_0)>0$, such that
$$\mathbb{E}\|X(t)-X_\infty\|^2\le m(X_0)e^{-2\lambda t},\quad t\ge0.\tag{4.1}$$

From [8, 9] it is evident that $R-R_\infty\in L^2([0,\infty),\mathbb{R}^{n\times n})$ is a more natural condition on the resolvent than $R-R_\infty\in L^1([0,\infty),\mathbb{R}^{n\times n})$ when studying convergence of solutions of (1.4a)-(1.4b). However, the deterministic results obtained in [1] are based on the assumption that $R-R_\infty\in L^1([0,\infty),\mathbb{R}^{n\times n})$. Lemma 4.4 is required in order to make use of these results in this paper; this result isolates conditions that ensure the integrability of $R-R_\infty$ once $R-R_\infty$ is square integrable.

Lemma 4.4. Let $K$ satisfy (2.4) and (3.1) and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). Then the solution $R$ of (1.2a)-(1.2b) satisfies
$$R-R_\infty\in L^1([0,\infty),\mathbb{R}^{n\times n}).\tag{4.2}$$

We now state some supporting results. It is well known that the behaviour of the resolvent Volterra equation influences the behaviour of the perturbed equation. It is unsurprising therefore that an earlier result found in [1] concerning exponential convergence of the resolvent $R$ to a limit $R_\infty$ is needed in the proof of Theorems 3.1 and 3.2.

Theorem 4.5. Let $K$ satisfy (2.4) and (3.1). Suppose there exists a constant matrix $R_\infty$ such that the solution $R$ of (1.2a)-(1.2b) satisfies (4.2). If there exists a constant $\alpha>0$ such that $K$ satisfies (3.4), then there exist constants $\beta>0$ and $c>0$ such that
$$\|R(t)-R_\infty\|\le ce^{-\beta t},\quad t\ge0.\tag{4.3}$$

In the proof of Propositions 4.1 and 4.2, an explicit representation of $X_\infty$ is required. In [8, 9] the asymptotic convergence of the solution of (1.4a)-(1.4b) was considered. Sufficient conditions for convergence were obtained and an explicit representation of $X_\infty$ was found.

Theorem 4.6. Let $K$ satisfy (2.4) and
$$\int_0^\infty t\|K(t)\|\,dt<\infty,\tag{4.4}$$
and let $\Sigma$ satisfy (2.6) and
$$\int_0^\infty\|\Sigma(t)\|^2\,dt<\infty.\tag{4.5}$$
Suppose that the resolvent $R$ of (1.2a)-(1.2b) satisfies (3.3). Then the solution $X$ of (1.4a)-(1.4b) satisfies $\lim_{t\to\infty}X(t)=X_\infty$ almost surely, where $X_\infty$ is an almost surely finite and $\mathcal{F}^B(\infty)$-measurable random variable given by
$$X_\infty=R_\infty\Big(X_0+\int_0^\infty\Sigma(t)\,dB(t)\Big)\quad\text{a.s.}\tag{4.6}$$

Lemma 4.7 concerns the structure of $X_\infty(X_0,\Sigma)$ in the almost sure case. It was proved in [9].

Lemma 4.7. Let $K$ satisfy (2.4) and (4.4). Suppose that for all initial conditions $X_0$ there is an almost surely finite random variable $X_\infty(X_0,\Sigma)$ such that the solution $t\mapsto X(t;X_0,\Sigma)$ of (1.4a)-(1.4b) satisfies
$$\lim_{t\to\infty}X(t;X_0,\Sigma)=X_\infty(X_0,\Sigma)\quad\text{a.s.},\tag{4.7}$$
$$X(\cdot;X_0,\Sigma)-X_\infty(X_0,\Sigma)\in L^2([0,\infty),\mathbb{R}^n)\quad\text{a.s.}\tag{4.8}$$
Then
$$\Big(A+\int_0^\infty K(s)\,ds\Big)X_\infty=0\quad\text{a.s.}\tag{4.9}$$

It is possible to apply this lemma using our a priori assumptions due to Theorem 4.8, which was proved in [9].

Theorem 4.8. Let $K$ satisfy (2.4) and (4.4) and let $\Sigma$ satisfy (2.6). If $\Sigma$ satisfies (4.5) and there exists a constant matrix $R_\infty$ such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3), then for all initial conditions $X_0$ there is an almost surely finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma)$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma)$ which obeys (1.4a)-(1.4b) satisfies (4.7).
Moreover, if the function $\Sigma$ also satisfies
$$\int_0^\infty t\|R_\infty\Sigma(t)\|^2\,dt<\infty,\tag{4.10}$$
then (4.8) holds.

Lemma 4.9 below is required in the proof of Lemma 4.4. It is proved in [8]. Before citing this result some notation is introduced. Let $M=A+\int_0^\infty K(t)\,dt$ and let $T$ be an invertible matrix such that $J=T^{-1}MT$ has Jordan canonical form. Let $e_i=1$ if all the elements of the $i$th row of $J$ are zero, and $e_i=0$ otherwise. Let $D_p=\operatorname{diag}(e_1,e_2,\dots,e_n)$ and put $P=TD_pT^{-1}$ and $Q=I-P$.

Lemma 4.9. Let $K$ satisfy (2.4) and (4.4). If there exists a constant matrix $R_\infty$ such that the resolvent $R$ of (1.2a)-(1.2b) satisfies (3.3), then
$$\det[I+\hat F(z)]\ne0,\quad\operatorname{Re}z\ge0,\tag{4.11}$$
where $F$ is defined by
$$F(t)=-e^{-t}(Q+QA)-(e\ast QK)(t)+P\int_t^\infty K(u)\,du,\quad t\ge0.\tag{4.12}$$

Lemma 4.10 concerns the moments of a normally distributed random variable. It can be extracted from [4, Theorem 3.3] and it is used in Proposition 4.1.

Lemma 4.10. Suppose the function $\sigma\in C([0,\infty)\times[0,\infty),\mathbb{R}^{p\times r})$. Then
$$\mathbb{E}\Big\|\int_a^b\sigma(s,t)\,dB(s)\Big\|^{2m}\le d_m(p,r)\Big(\int_a^b\|\sigma(s,t)\|^2\,ds\Big)^m,\tag{4.13}$$
where $d_m(p,r)=p^{m+1}r^{2m+1}(2m)!\,(m!\,2^m)^{-1}$.

The following lemma is used in Proposition 4.2. A similar result is proved in [4].

Lemma 4.11. Suppose that $\tilde K\in C([0,\infty),\mathbb{R}^{n\times n})\cap L^1([0,\infty),\mathbb{R}^{n\times n})$ and
$$\int_0^\infty\|\tilde K(s)\|e^{\tilde\alpha s}\,ds<\infty.\tag{4.14}$$
If $\tilde\lambda>0$ and $\tilde\eta=2\tilde\lambda\wedge\tilde\alpha$, then
$$\int_0^t e^{-2\tilde\lambda(t-s)}e^{-\tilde\alpha s}\|\tilde K(s)\|\,ds\le ce^{-\tilde\eta t},\tag{4.15}$$
where $c$ is a positive constant.

The proofs of Propositions 4.1 and 4.2 and Lemmas 4.3 and 4.4 are now given.

Proof of Lemma 4.3. From Theorem 4.6 we see that $X(t)\to X_\infty$ almost surely, where $X_\infty$ is given by (4.6), so we see that
$$\mathbb{E}\|X_\infty\|^2=\mathbb{E}[\operatorname{tr}(X_\infty X_\infty^T)]=\|R_\infty X_0\|^2+\int_0^\infty\|R_\infty\Sigma(s)\|^2\,ds<\infty.\tag{4.16}$$
Since
$$\mathbb{E}\|X(t)-X_\infty\|^2=\mathbb{E}[\operatorname{tr}((X(t)-X_\infty)(X(t)-X_\infty)^T)],\tag{4.17}$$
we use (2.9) and (4.6) to expand the right hand side of (4.17) to obtain
$$\mathbb{E}\|X(t)-X_\infty\|^2=\|(R(t)-R_\infty)X_0\|^2+\int_0^t\|(R(t-s)-R_\infty)\Sigma(s)\|^2\,ds+\int_t^\infty\|R_\infty\Sigma(s)\|^2\,ds.\tag{4.18}$$
In order to obtain an exponential upper bound on (4.18) each term is considered individually. We begin by considering the first term on the right-hand side of (4.18). Using (3.1) and (3.3) we can apply Lemma 4.4 to obtain (4.2). Then using (3.1), (4.2), and (3.4) we see from Theorem 4.5 that
$$\|(R(t)-R_\infty)X_0\|^2\le c_1\|X_0\|^2e^{-2\beta t}.\tag{4.19}$$
We provide an argument to show that the second term decays exponentially. Using (3.5) and the fact that $R-R_\infty$ decays exponentially quickly to $0$, we can choose $0<\lambda<\min(\beta,\gamma)$ such that $e_\lambda(R-R_\infty)\in L^2[0,\infty)$ and $e_\lambda\Sigma\in L^2[0,\infty)$, where the function $e_\lambda$ is defined by $e_\lambda(t)=e^{\lambda t}$. Since the convolution of an $L^2[0,\infty)$ function with an $L^2[0,\infty)$ function is itself a bounded function, we get
$$e^{2\lambda t}\int_0^t\|(R(t-s)-R_\infty)\Sigma(s)\|^2\,ds\le\int_0^te^{2\lambda(t-s)}\|R(t-s)-R_\infty\|^2e^{2\lambda s}\|\Sigma(s)\|^2\,ds\le c_2,\tag{4.20}$$
and so the second term of (4.18) decays exponentially quickly.
We can show that the third term on the right hand side of (4.18) decays exponentially using (3.5) and the following argument:
$$\Sigma:=\int_0^\infty\|\Sigma(s)\|^2e^{2\gamma s}\,ds\ge\int_t^\infty\|\Sigma(s)\|^2e^{2\gamma s}\,ds\ge e^{2\gamma t}\int_t^\infty\|\Sigma(s)\|^2\,ds.\tag{4.21}$$
Combining these facts we see that
$$\mathbb{E}\|X(t)-X_\infty\|^2\le m(X_0)e^{-2\lambda t},\tag{4.22}$$
where $m(X_0)=c_1\|X_0\|^2+c_2+\Sigma\|R_\infty\|^2$ and $\lambda<\min(\beta,\gamma)$.

Proof of Proposition 4.1. Consider the cases where $0<p\le2$ and $p>2$ separately. We begin with the case where $0<p\le2$. The argument given by (4.16) shows that $\mathbb{E}\|X_\infty\|^2<\infty$. Now applying Lyapunov's inequality we see that
$$\mathbb{E}\|X_\infty\|^p\le\mathbb{E}\big[\|X_\infty\|^2\big]^{p/2}<\infty.\tag{4.23}$$
We now show that (3.9) holds for $0<p\le2$. Lyapunov's inequality and Lemma 4.3 can be applied as follows:
$$\mathbb{E}\|X(t)-X_\infty\|^p\le\mathbb{E}\big[\|X(t)-X_\infty\|^2\big]^{p/2}\le m_p(X_0)e^{-\beta_pt},\quad t\ge0,\tag{4.24}$$
where $m_p(X_0)=m(X_0)^{p/2}$ and $\beta_p=\lambda p$.
Now consider the case where $p>2$. In this case, there exists $m\in\mathbb{N}$ such that $2(m-1)<p\le2m$. We now seek an upper bound on $\mathbb{E}\|X_\infty\|^{2m}$ and $\mathbb{E}\|X(t)-X_\infty\|^{2m}$, which will in turn give an upper bound on $\mathbb{E}\|X_\infty\|^p$ and $\mathbb{E}\|X(t)-X_\infty\|^p$ by using Lyapunov's inequality. By applying Lemma 4.10 we see that
$$\mathbb{E}\|X_\infty\|^{2m}\le c\|R_\infty X_0\|^{2m}+c\Big(\int_0^\infty\|R_\infty\Sigma(s)\|^2\,ds\Big)^m<\infty,\tag{4.25}$$
where $c$ is a positive constant, so $\mathbb{E}\|X_\infty\|^p\le\mathbb{E}[\|X_\infty\|^{2m}]^{p/2m}<\infty$.
Now consider $\mathbb{E}\|X(t)-X_\infty\|^{2m}$. Using the variation of parameters representation of the solution and the expression obtained for $X_\infty$, taking norms, raising both sides of the equation to the $2m$th power, then taking expectations across the inequality, we arrive at
$$\mathbb{E}\|X(t)-X_\infty\|^{2m}\le3^{2m-1}\Big(\|(R(t)-R_\infty)X_0\|^{2m}+\mathbb{E}\Big\|\int_0^t(R(t-s)-R_\infty)\Sigma(s)\,dB(s)\Big\|^{2m}+\mathbb{E}\Big\|\int_t^\infty R_\infty\Sigma(s)\,dB(s)\Big\|^{2m}\Big).\tag{4.26}$$
We consider each term on the right hand side of (4.26). By Theorem 4.5 we have
$$\|(R(t)-R_\infty)X_0\|^{2m}\le c_1\|X_0\|^{2m}e^{-2m\beta t}.\tag{4.27}$$
Now, consider the second term on the right-hand side of (4.26). By (4.20) we see that $\int_0^t\|(R(t-s)-R_\infty)\Sigma(s)\|^2\,ds\le c_2e^{-2\lambda t}$ where $\lambda<\min(\beta,\gamma)$. Using this and Lemma 4.10 we see that
$$\mathbb{E}\Big\|\int_0^t(R(t-s)-R_\infty)\Sigma(s)\,dB(s)\Big\|^{2m}\le d_m(n,d)\Big(\int_0^t\|R(t-s)-R_\infty\|^2\|\Sigma(s)\|^2\,ds\Big)^m\le d_m(n,d)c_2^me^{-2m\lambda t}.\tag{4.28}$$
Using (4.21) combined with Lemma 4.10 and Fatou's lemma, we show that the third term decays exponentially quickly:
$$\mathbb{E}\Big\|\int_t^\infty\Sigma(s)\,dB(s)\Big\|^{2m}\le d_m(n,d)\Big(\int_t^\infty\|\Sigma(s)\|^2\,ds\Big)^m\le d_m(n,d)\Sigma^me^{-2m\gamma t}.\tag{4.29}$$
Combining (4.27), (4.28), and (4.29) the inequality (4.26) becomes
$$\mathbb{E}\|X(t)-X_\infty\|^{2m}\le3^{2m-1}\big(c_1\|X_0\|^{2m}e^{-2m\beta t}+d_m(n,d)c_2^me^{-2m\lambda t}+d_m(n,d)\|R_\infty\|^{2m}\Sigma^me^{-2m\gamma t}\big).\tag{4.30}$$
Using Lyapunov's inequality, the inequality (4.30) implies
$$\mathbb{E}\|X(t)-X_\infty\|^p\le m_p(X_0)e^{-\beta_pt},\tag{4.31}$$
where $m_p(X_0)=3^{p(2m-1)/2m}\big(c_1\|X_0\|^{2m}+d_m(n,d)c_2^m+d_m(n,d)\|R_\infty\|^{2m}\Sigma^m\big)^{p/2m}$ and $\beta_p=\lambda p$.

Proof of Proposition 4.2. In order to prove this proposition we show that
$$\mathbb{E}\Big[\sup_{n-1\le t\le n}\|X(t)-X_\infty\|^2\Big]\le\tilde m(X_0)e^{-2\eta(n-1)},\quad\eta>0.\tag{4.32}$$
For each $t>0$ there exists $n\in\mathbb{N}$ such that $n-1\le t<n$. Define $\Delta(t)=X(t)-X_\infty$. Integrating (1.4a)-(1.4b) over $[n-1,t]$, then adding and subtracting $\big(A+\int_0^\infty K(u)\,du\big)X_\infty$ on both sides we get
$$X(t)-X_\infty=\big(X(n-1)-X_\infty\big)+\int_{n-1}^tA(X(s)-X_\infty)\,ds+\int_{n-1}^t\int_0^sK(s-u)(X(u)-X_\infty)\,du\,ds+\int_{n-1}^t\Sigma(s)\,dB(s)+\int_{n-1}^t\Big(A+\int_0^\infty K(u)\,du\Big)X_\infty\,ds-\int_{n-1}^t\int_s^\infty K(v)X_\infty\,dv\,ds.\tag{4.33}$$
By applying Theorem 4.8, we see that (4.7) and (4.8) hold, so Lemma 4.7 may be applied to obtain
$$\Delta(t)=\Delta(n-1)+\int_{n-1}^t\big(A\Delta(s)+(K\ast\Delta)(s)\big)\,ds+\int_{n-1}^t\Sigma(s)\,dB(s)-\int_{n-1}^tK_1(s)\,ds\,X_\infty.\tag{4.34}$$
Taking norms on both sides of (4.34), squaring both sides, taking suprema, before finally taking expectations yields
$$\mathbb{E}\Big[\sup_{n-1\le t\le n}\|\Delta(t)\|^2\Big]\le4\Big(\mathbb{E}\|\Delta(n-1)\|^2+\mathbb{E}\Big[\Big(\int_{n-1}^n\big(\|A\|\|\Delta(s)\|+(\|K\|\ast\|\Delta\|)(s)\big)\,ds\Big)^2\Big]+\mathbb{E}\Big[\sup_{n-1\le t\le n}\Big\|\int_{n-1}^t\Sigma(s)\,dB(s)\Big\|^2\Big]+\Big(\int_{n-1}^n\|K_1(s)\|\,ds\Big)^2\mathbb{E}\|X_\infty\|^2\Big).\tag{4.35}$$
We now consider each term on the right hand side of (4.35). From Lemma 4.3 we see that the first term satisfies
$$\mathbb{E}\|\Delta(n-1)\|^2\le m(X_0)e^{-2\lambda(n-1)}.\tag{4.36}$$
In order to obtain an exponential bound on the second term on the right hand side of (4.35) we make use of the Cauchy-Schwarz inequality as follows:
$$\Big(\int_{n-1}^n\big(\|A\|\|\Delta(s)\|+(\|K\|\ast\|\Delta\|)(s)\big)\,ds\Big)^2\le2\int_{n-1}^n\Big(\|A\|^2\|\Delta(s)\|^2+(\|K\|\ast\|\Delta\|)^2(s)\Big)\,ds\le2\int_{n-1}^n\Big[\|A\|^2\|\Delta(s)\|^2+\Big(\int_0^se^{\alpha(s-u)/2}\|K(s-u)\|^{1/2}e^{-\alpha(s-u)/2}\|K(s-u)\|^{1/2}\|\Delta(u)\|\,du\Big)^2\Big]\,ds\le2\int_{n-1}^n\Big[\|A\|^2\|\Delta(s)\|^2+K_\alpha\int_0^se^{-\alpha(s-u)}\|K(s-u)\|\|\Delta(u)\|^2\,du\Big]\,ds,\tag{4.37}$$
where $K_\alpha=\int_0^\infty e^{\alpha t}\|K(t)\|\,dt$. Take expectations and examine the two terms within the integral. Using Lemma 4.3 we obtain
$$\mathbb{E}\Big[\int_{n-1}^n\|A\|^2\|\Delta(s)\|^2\,ds\Big]\le\|A\|^2m(X_0)\int_{n-1}^ne^{-2\lambda s}\,ds\le c_1(X_0)e^{-2\lambda(n-1)}.\tag{4.38}$$
In order to obtain an exponential upper bound for the second term within the integral we apply Lemma 4.11 with $\tilde K=K$, $\tilde\alpha=\alpha$, $\tilde\lambda=\lambda$, and $\eta=\tilde\eta$:
$$\mathbb{E}\Big[\int_{n-1}^nK_\alpha\int_0^se^{-\alpha(s-u)}\|K(s-u)\|\|\Delta(u)\|^2\,du\,ds\Big]\le m(X_0)K_\alpha\int_{n-1}^n\int_0^se^{-\alpha u}\|K(u)\|e^{-2\lambda(s-u)}\,du\,ds\le c_2(X_0)e^{-\eta(n-1)}.\tag{4.39}$$
Next, we obtain an exponential upper bound on the third term. Using (4.21) and the Burkholder-Davis-Gundy inequality, there exists a constant $c_3>0$ such that
$$\mathbb{E}\Big[\sup_{n-1\le t\le n}\Big\|\int_{n-1}^t\Sigma(s)\,dB(s)\Big\|^2\Big]\le c_3\Sigma e^{-2\gamma(n-1)}.\tag{4.40}$$
Now consider the last term on the right hand side of (4.35).
Using (3.4) we see that
$$K_\alpha:=\int_0^\infty\|K(s)\|e^{\alpha s}\,ds\ge e^{\alpha t}\int_t^\infty\|K(s)\|\,ds\ge e^{\alpha t}\|K_1(t)\|.\tag{4.41}$$
Using this and the fact that $\mathbb{E}\|X_\infty\|^2<\infty$ (see (4.16)) we obtain
$$\Big(\int_{n-1}^n\|K_1(s)\|\,ds\Big)^2\mathbb{E}\|X_\infty\|^2\le\mathbb{E}\|X_\infty\|^2\Big(\int_{n-1}^nK_\alpha e^{-\alpha s}\,ds\Big)^2\le c_4e^{-2\alpha(n-1)}.\tag{4.42}$$
Combining (4.36), (4.38), (4.39), (4.40), and (4.42) we obtain
$$\mathbb{E}\Big[\sup_{n-1\le t\le n}\|X(t)-X_\infty\|^2\Big]\le\tilde m(X_0)e^{-2\eta(n-1)},\tag{4.43}$$
where $\tilde m(X_0)=4\big(m(X_0)+c_1(X_0)+c_2(X_0)+c_3\Sigma+c_4\big)$ and $2\eta\le\min(2\lambda,\alpha)$.
We can now apply the line of reasoning used in [10, Theorem 4.4.2] to obtain (3.10).

Proof of Lemma 4.4. We use a reformulation of (1.2a)-(1.2b) in the proof of this result. It is obtained as follows: multiply both sides of $R'(s)=AR(s)+(K\ast R)(s)$ across by the function $\Phi(t-s)$, where $\Phi(t)=P+e^{-t}Q$, integrate over $[0,t]$, use integration by parts, and add and subtract $R_\infty$ from both sides to obtain
$$Y(t)+(F\ast Y)(t)=G(t),\quad t\ge0,\tag{4.44}$$
where $Y=R-R_\infty$, $F$ is defined by (4.12) and $G$ is defined by
$$G(t)=e^{-t}Q-e^{-t}(QR_\infty+QAR_\infty)+\int_t^\infty\int_s^\infty PK(u)R_\infty\,du\,ds-\int_t^\infty QK(u)R_\infty\,du-(e\ast QKR_\infty)(t),\quad t\ge0.\tag{4.45}$$
Consider the reformulation of (1.2a)-(1.2b) given by (4.44). It is well known that $Y$ can be expressed as
$$Y(t)=G(t)-\int_0^tr(t-s)G(s)\,ds,\tag{4.46}$$
where the function $r$ satisfies $r+F\ast r=F$ and $r+r\ast F=F$. We refer the reader to [11] for details. Consider the first term on the right hand side of (4.46). As (3.1) holds it is clear that the function $G$ is integrable. Now consider the second term. Since (3.3) and (4.4) hold we may apply Lemma 4.9 to obtain (4.11). Now we may apply a result of Paley and Wiener (see [11]) to see that $r$ is integrable. The convolution of an integrable function with an integrable function is itself integrable. Now combining the arguments for the first and second terms we see that (4.2) must hold.

5. On the Necessity of (3.5) for Exponential Convergence of Solutions of (1.4a)-(1.4b)

In this section, the necessity of condition (3.5) for exponential convergence in the almost sure and $p$th mean senses is shown. Proposition 5.1 concerns the necessity of the condition in the almost sure case while Proposition 5.2 deals with the $p$th mean case.

Proposition 5.1. Let $K$ satisfy (2.4) and (4.4) and let $\Sigma$ satisfy (2.6). If there exists a constant $\alpha>0$ such that (3.4) holds, and if for all initial conditions $X_0$ there is an almost surely finite random variable $X_\infty(X_0,\Sigma)$ such that the solution $t\mapsto X(t;X_0,\Sigma)$ of (1.4a)-(1.4b) satisfies statement (iii) of Theorem 3.2, then there exists a constant $\gamma>0$, independent of $X_0$, such that (3.5) holds.

Proposition 5.2. Let $K$ satisfy (2.4) and (4.4) and let $\Sigma$ satisfy (2.6). If there exists a constant $\alpha>0$ such that (3.4) holds, and if for all initial conditions $X_0$ there is an almost surely finite random variable $X_\infty(X_0,\Sigma)$ such that the solution $t\mapsto X(t;X_0,\Sigma)$ of (1.4a)-(1.4b) satisfies statement (ii) of Theorem 3.2, then there exists a constant $\gamma>0$, independent of $X_0$, such that (3.5) holds.

In order to prove these propositions the integral version of (1.4a)-(1.4b) is considered. By reformulating this version of the equation an expression for a term related to the exponential integrability of the perturbation is found. Using various arguments, including the Martingale Convergence Theorem in the almost sure case, this term is used to show that (3.5) holds.

Some supporting results are now stated. Lemma 5.3 is the analogue of Lemma 4.7 in the mean square case. It was proved in [8].

Lemma 5.3. Let $K$ satisfy (2.4) and (4.4). Suppose that for all initial conditions $X_0$ there is an $\mathcal{F}^B(\infty)$-measurable and almost surely finite random variable $X_\infty(X_0,\Sigma)$ with $\mathbb{E}\|X_\infty\|^2<\infty$ such that the solution $t\mapsto X(t;X_0,\Sigma)$ of (1.4a)-(1.4b) satisfies
$$\lim_{t\to\infty}\mathbb{E}\|X(t;X_0,\Sigma)-X_\infty(X_0,\Sigma)\|^2=0,\quad\mathbb{E}\|X(\cdot;X_0,\Sigma)-X_\infty(X_0,\Sigma)\|^2\in L^1([0,\infty),\mathbb{R}).\tag{5.1}$$
Then $X_\infty$ obeys
$$\Big(A+\int_0^\infty K(s)\,ds\Big)X_\infty=0\quad\text{a.s.}\tag{5.2}$$

Lemma 5.4 may be extracted from [4]; it is required in the proof of Proposition 5.2.

Lemma 5.4. Let $N=(N_1,\dots,N_n)$ where $N_i\sim\mathcal{N}(0,v_i^2)$ for $i=1,\dots,n$. Then there exists a $\{v_i\}_{i=1}^n$-independent constant $d_1>0$ such that
$$\mathbb{E}\|N\|^2\le d_1\,\mathbb{E}\big[\|N\|\big]^2.\tag{5.3}$$

Proof of Proposition 5.1. In order to prove this result we follow the argument used in [4, Theorem 4.1]. Let $0<\gamma<\beta_0\wedge\alpha$. By defining the process $Z(t)=e^{\gamma t}X(t)$ and the matrix-valued function $\tilde\kappa(t)=e^{\gamma t}K(t)$ we can rewrite (1.4a)-(1.4b) as
$$dZ(t)=\Big((\gamma I+A)Z(t)+\int_0^t\tilde\kappa(t-s)Z(s)\,ds\Big)dt+e^{\gamma t}\Sigma(t)\,dB(t),\tag{5.4}$$
the integral form of which is
$$Z(t)-Z(0)=(\gamma I+A)\int_0^tZ(s)\,ds+\int_0^t\int_0^s\tilde\kappa(s-u)Z(u)\,du\,ds+\int_0^te^{\gamma s}\Sigma(s)\,dB(s).\tag{5.5}$$
Using $Z(t)=e^{\gamma t}X(t)$ and rearranging, this becomes
$$\int_0^te^{\gamma s}\Sigma(s)\,dB(s)=e^{\gamma t}X(t)-X_0-(\gamma I+A)\int_0^te^{\gamma s}X(s)\,ds-\int_0^te^{\gamma s}\int_0^sK(s-u)X(u)\,du\,ds.\tag{5.6}$$
Adding and subtracting $X_\infty$ from the right hand side and applying Lemma 4.7 we obtain
$$\int_0^te^{\gamma s}\Sigma(s)\,dB(s)=e^{\gamma t}(X(t)-X_\infty)-(X_0-X_\infty)-(\gamma I+A)\int_0^te^{\gamma s}(X(s)-X_\infty)\,ds-\int_0^te^{\gamma s}\int_0^sK(s-u)(X(u)-X_\infty)\,du\,ds+\int_0^te^{\gamma s}K_1(s)\,ds\,X_\infty.\tag{5.7}$$
Consider each term on the right hand side of (5.7). We see that the first term tends to zero as (3.10) holds and $\gamma<\beta_0$. The second term is finite by hypothesis. Again, using the fact that $\gamma<\beta_0$ and that assumption (3.10) holds, we see that $e_\gamma(X-X_\infty)\in L^1[0,\infty)$, so the third term tends to a limit as $t\to\infty$. Now consider the fourth term. Since $\gamma<\beta_0\wedge\alpha$, we can choose $\gamma_1>0$ such that $\gamma<\gamma_1<\beta_0\wedge\alpha$. So the functions $t\mapsto e^{\gamma_1t}K(t)$ and $t\mapsto e^{\gamma_1t}(X(t)-X_\infty)$ are both integrable. The convolution of these two integrable functions is itself an integrable function, so
$$\Big\|\int_0^sK(s-u)(X(u)-X_\infty)\,du\Big\|\le ce^{-\gamma_1s}.\tag{5.8}$$
Thus, it is clear that the fourth term has a finite limit as $t\to\infty$. Finally, the fifth term on the right hand side of (5.7) has a finite limit at infinity, using (4.41).
Each term on the right hand side of (5.7) has a finite limit as $t\to\infty$, so therefore
$$\lim_{t\to\infty}\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\quad\text{exists and is almost surely finite.}\tag{5.9}$$
The Martingale Convergence Theorem [12, Proposition 5.1.8] may now be applied component by component to obtain (3.5).

Proof of Proposition 5.2. By Lemma 5.3, (5.7) still holds. Define $0<\gamma<\beta_1\wedge\alpha$, and take norms and expectations across (5.7) to obtain
$$\mathbb{E}\Big\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Big\|\le\mathbb{E}[e^{\gamma t}\|X(t)-X_\infty\|]+\mathbb{E}\|X_0-X_\infty\|+\|\gamma I+A\|\int_0^t\mathbb{E}[e^{\gamma s}\|X(s)-X_\infty\|]\,ds+\int_0^te^{\gamma s}\int_0^s\|K(u)\|\,\mathbb{E}\|X(s-u)-X_\infty\|\,du\,ds+\int_0^te^{\gamma s}\|K_1(s)\|\,ds\,\mathbb{E}\|X_\infty\|.\tag{5.10}$$
There exists $m_1$ such that
$$\mathbb{E}[e^{\gamma t}\|X(t)-X_\infty\|]\le m_1e^{-(\beta_1-\gamma)t},\tag{5.11}$$
thus the first, second, and third terms on the right hand side of (5.10) are uniformly bounded on $[0,\infty)$. Now consider the fourth term. Since $0<\gamma<\beta_1\wedge\alpha$, we can choose $\gamma_1>0$ such that $\gamma<\gamma_1<\beta_1\wedge\alpha$, so that the functions $t\mapsto e^{\gamma_1t}K(t)$ and $t\mapsto e^{\gamma_1t}\mathbb{E}\|X(t)-X_\infty\|$ are both integrable. The convolution of two integrable functions is itself an integrable function, so
$$\int_0^s\|K(s-u)\|\,\mathbb{E}\|X(u)-X_\infty\|\,du\le ce^{-\gamma_1s},\tag{5.12}$$
so it is clear that the fourth term is uniformly bounded on $[0,\infty)$. Finally, we consider the final term on the right hand side of (5.10). Using (4.41) we obtain
$$\int_0^te^{\gamma s}\|K_1(s)\|\,ds\,\mathbb{E}\|X_\infty\|\le K_\alpha\mathbb{E}\|X_\infty\|\int_0^te^{-(\alpha-\gamma)s}\,ds<\infty,\tag{5.13}$$
since $\gamma<\alpha$. Thus there is a constant $c>0$ such that
$$\mathbb{E}\Big\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Big\|\le c.\tag{5.14}$$
The proof now follows the line of reasoning found in [4, Theorem 4.3]: observe that
$$\Big\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Big\|^2=\sum_{i=1}^nN_i(t)^2,\tag{5.15}$$
where
$$N_i(t)=\sum_{j=1}^d\int_0^te^{\gamma s}\Sigma_{ij}(s)\,dB_j(s).\tag{5.16}$$
It is clear that $N_i(t)$ is normally distributed with zero mean and variance given by
$$v_i(t)^2=\sum_{j=1}^d\int_0^te^{2\gamma s}\Sigma_{ij}(s)^2\,ds.\tag{5.17}$$
Lemma 5.4 and (5.14) may now be applied to obtain
$$\int_0^te^{2\gamma s}\|\Sigma(s)\|^2\,ds=\sum_{i=1}^n\sum_{j=1}^d\int_0^te^{2\gamma s}|\Sigma_{ij}(s)|^2\,ds=\sum_{i=1}^nv_i(t)^2=\mathbb{E}\Big\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Big\|^2\le d_1\,\mathbb{E}\Big[\Big\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Big\|\Big]^2\le d_1c^2.\tag{5.18}$$
Allowing $t\to\infty$ on both sides of this inequality yields the desired result.

6. Sufficient Conditions for Exponential Convergence of Solutions of (1.1a)-(1.1b)

In this section, sufficient conditions for exponential convergence of solutions of (1.1a)-(1.1b) to a nontrivial limit are found. Proposition 6.1 concerns the $p$th mean sense while Proposition 6.2 deals with the almost sure case.

Proposition 6.1. Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), let $f$ satisfy (2.5), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$, $\rho>0$, and $\gamma>0$ such that (3.4), (3.6), and (3.5) hold, then there exist constants $\beta_p^*>0$, independent of $X_0$, and $m_p^*=m_p^*(X_0)>0$, such that statement (ii) of Theorem 3.1 holds.

Proposition 6.2. Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), let $f$ satisfy (2.5), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$, $\rho>0$, and $\gamma>0$ such that (3.4), (3.6), and (3.5) hold, then there exists a constant $\beta_0^*>0$, independent of $X_0$, such that statement (iii) of Theorem 3.1 holds.

As in the case where $f\equiv0$, we require an explicit formulation for $X_\infty(X_0,\Sigma,f)$. The proof of this result follows the line of reasoning used in the proof of Theorem 4.6 and is therefore omitted.

Theorem 6.3. Let $K$ satisfy (2.4) and (4.4), let $\Sigma$ satisfy (2.6) and (4.5), and let $f$ satisfy (2.5). Suppose that the resolvent $R$ of (1.2a)-(1.2b) satisfies (3.3). Then the solution $X(\cdot;X_0,\Sigma,f)$ of (1.1a)-(1.1b) satisfies $\lim_{t\to\infty}X(t)=X_\infty(X_0,\Sigma,f)$ almost surely, where
$$X_\infty(X_0,\Sigma,f)=X_\infty(X_0,\Sigma)+R_\infty\int_0^\infty f(t)\,dt\quad\text{a.s.},\tag{6.1}$$
and $X_\infty(X_0,\Sigma,f)$ is almost surely finite.

Proof of Proposition 6.1. We begin by showing that $\mathbb{E}\|X_\infty(X_0,\Sigma,f)\|^p$ is finite. Clearly, we see that
$$\mathbb{E}\|X_\infty(X_0,\Sigma,f)\|^p\le2^{p-1}\Big(\mathbb{E}\|X_\infty(X_0,\Sigma)\|^p+\Big\|\int_0^\infty R_\infty f(s)\,ds\Big\|^p\Big)<\infty.\tag{6.2}$$
Now, consider the difference between the solution $X(\cdot;X_0,\Sigma,f)$ of (1.1a)-(1.1b) and its limit $X_\infty(X_0,\Sigma,f)$ given by (6.1):
$$X(t;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)=\big(X(t;X_0,\Sigma)-X_\infty(X_0,\Sigma)\big)+\int_0^t(R(t-s)-R_\infty)f(s)\,ds-\int_t^\infty R_\infty f(s)\,ds.\tag{6.3}$$
Using integration by parts this expression becomes
$$X(t;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)=\big(X(t;X_0,\Sigma)-X_\infty(X_0,\Sigma)\big)-f_1(t)+(R(t)-R_\infty)f_1(0)-\int_0^tR'(t-s)f_1(s)\,ds.\tag{6.4}$$
Taking norms on both sides of equation (6.4), raising both sides to the power $p$, and taking expectations across we obtain
$$\mathbb{E}\|X(t;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)\|^p\le4^{p-1}\Big(\mathbb{E}\|X(t;X_0,\Sigma)-X_\infty(X_0,\Sigma)\|^p+\|f_1(t)\|^p+\|R(t)-R_\infty\|^p\|f_1(0)\|^p+\Big(\int_0^t\|R'(t-s)\|\|f_1(s)\|\,ds\Big)^p\Big).\tag{6.5}$$
Now consider the right hand side of (6.5). The first term decays exponentially quickly due to Theorem 3.2. The second term decays exponentially quickly due to assumption (3.6). By applying Lemma 4.4 we see that (4.2) holds, so we can apply Theorem 4.5 to show that the third term must decay exponentially. In the sequel, an argument is provided to show that $R'$ decays exponentially; thus the final term must decay exponentially. Combining these arguments we see that (3.7) holds, where $\beta_p^*<\min(\beta_p,\beta,\rho)$.
It is now shown that $R'$ decays exponentially. It is clear from the resolvent equation (1.2a)-(1.2b) that
$$R'(t)=A(R(t)-R_\infty)+\int_0^tK(t-s)(R(s)-R_\infty)\,ds-K_1(t)R_\infty+\Big(A+\int_0^\infty K(s)\,ds\Big)R_\infty.\tag{6.6}$$
Consider each term on the right hand side of (6.6). We can apply Theorem 4.5 to obtain that $R-R_\infty$ decays exponentially quickly to $0$. In order to show that the second term decays exponentially we proceed as follows: since $R-R_\infty$ decays exponentially and (3.4) holds, it is possible to choose $\mu>0$ such that the functions $t\mapsto e^{\mu t}K(t)$ and $t\mapsto e^{\mu t}(R(t)-R_\infty)$ are both in $L^1([0,\infty),\mathbb{R}^{n\times n})$. The convolution of two integrable functions is itself an integrable function, so
$$e^{\mu t}\Big\|\int_0^tK(t-s)(R(s)-R_\infty)\,ds\Big\|=\Big\|\int_0^te^{\mu(t-s)}K(t-s)e^{\mu s}(R(s)-R_\infty)\,ds\Big\|\le c.\tag{6.7}$$
To see that the third term decays exponentially we use (4.41). Finally, we consider the fourth term. By Lemma 4.4 and (3.3) we have that (4.2) holds. In [1, Theorem 6.1] it was shown that $\big(A+\int_0^\infty K(s)\,ds\big)R_\infty=0$ under this hypothesis and (3.1). Combining the above we see that $R'$ decays exponentially quickly to $0$.

Proof of Proposition 6.2. Take norms across (6.4) to obtain
$$\|X(t;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)\|\le\|X(t;X_0,\Sigma)-X_\infty(X_0,\Sigma)\|+\|f_1(t)\|+\|R(t)-R_\infty\|\|f_1(0)\|+\Big\|\int_0^tR'(t-s)f_1(s)\,ds\Big\|.\tag{6.8}$$
Using Theorem 3.2, we see that the first term on the right hand side of (6.8) decays exponentially. The second term on the right hand side decays exponentially as (3.6) holds. We can apply Theorem 4.5 to show that the third term must decay exponentially. An argument was provided in Proposition 6.1 to show that $R'$ decays exponentially. Combining this with (3.6) enables us to show that the fourth term decays exponentially. Using the above arguments we obtain (3.8), where $\beta_0^*\le\min(\beta_0,\beta,\rho)$.

7. On the Necessity of (3.6) and (3.5) for Exponential Convergence of Solutions of (1.1a)-(1.1b)

In this section, the necessity of (3.6) and (3.5) for exponential convergence of solutions of (1.1a)-(1.1b) in the almost sure and $p$th mean senses is shown. Proposition 7.1 concerns the necessity of the conditions in the $p$th mean case while Proposition 7.2 deals with the almost sure case.

Proposition 7.1. Let $K$ satisfy (2.4) and (4.4), let $\Sigma$ satisfy (2.6), and let $f$ satisfy (2.5). If there exists a constant $\alpha>0$ such that (3.4) holds, and if for all initial conditions $X_0$ there is an almost surely finite random variable $X_\infty(X_0,\Sigma,f)$ such that the solution $t\mapsto X(t;X_0,\Sigma,f)$ of (1.1a)-(1.1b) satisfies statement (ii) of Theorem 3.1, then there exist constants $\rho>0$ and $\gamma>0$, independent of $X_0$, such that (3.6) and (3.5) hold.

Proposition 7.2. Let $K$ satisfy (2.4) and (4.4), let $\Sigma$ satisfy (2.6), and let $f$ satisfy (2.5). If there exists a constant $\alpha>0$ such that (3.4) holds, and if for all initial conditions $X_0$ there is an almost surely finite random variable $X_\infty(X_0,\Sigma,f)$ such that the solution $t\mapsto X(t;X_0,\Sigma,f)$ of (1.1a)-(1.1b) satisfies statement (iii) of Theorem 3.1, then there exist constants $\rho>0$ and $\gamma>0$, independent of $X_0$, such that (3.6) and (3.5) hold.

The following lemma is used in the proof of Proposition 7.2. This lemma allows us to separate the behavior of the deterministic perturbation from the stochastic perturbation in the almost sure case. It is interesting to note that we can prove this lemma without any reference to the integro-differential equation.

Lemma 7.3. Suppose $c$ is an almost surely finite random variable and
$$\|f_1(t)+\mu_1(t,\omega)\|\le c(\omega)e^{-\lambda t},\tag{7.1}$$
where $\lambda>0$, $\omega\in\Omega^*$, $\mathbb{P}[\Omega^*]=1$, and the functions $f_1$ and $\mu_1$ are defined by (2.8) and
$$\mu_1(t)=\int_t^\infty\Sigma(s)\,dB(s),\quad t\ge0,\tag{7.2}$$
respectively. Then (3.5) and (3.6) hold.

In order to prove Lemma 7.3 we require Lemmas 7.4 and 7.5 below. Lemma 7.5 was proved in [13]. The proof of Lemma 7.4 makes use of Kolmogorov's Zero-One Law. It follows the proof of Theorem 2 in [14, Chapter IV, Section 1] and so is omitted.

Lemma 7.4. Let $\{\xi_i\}_{i=1}^\infty$ be a sequence of independent Gaussian random variables with $\mathbb{E}[\xi_i]=0$ and $\mathbb{E}[\xi_i^2]=v_i^2\ge1$. Then
$$\limsup_{m\to\infty}\sum_{i=1}^m\xi_i=\infty,\quad\liminf_{m\to\infty}\sum_{i=1}^m\xi_i=-\infty,\quad\text{a.s.}\tag{7.3}$$

Lemma 7.5. If there is a $\gamma>0$ such that $\sigma\in C([0,\infty),\mathbb{R})$ and
$$\int_0^\infty\sigma(s)^2e^{2\gamma s}\,ds<\infty,\tag{7.4}$$
then
$$\limsup_{t\to\infty}\frac{1}{t}\log\Big|\int_t^\infty\sigma(s)\,dB(s)\Big|\le-\gamma,\quad\text{a.s.},\tag{7.5}$$
where $\{B(t)\}_{t\ge0}$ is a one-dimensional standard Brownian motion.

Lemmas 7.6 and 7.7 are used in the proofs of Propositions 7.1 and 7.2, respectively and are the analogues of Lemmas 5.3 and 4.7. Their proofs are identical in all important aspects and so are omitted.

Lemma 7.6. Let $K$ satisfy (2.4) and (4.4). Suppose that for all initial conditions $X_0$ there is an $\mathcal{F}^B(\infty)$-measurable and almost surely finite random variable $X_\infty(X_0,\Sigma,f)$ with $\mathbb{E}\|X_\infty\|^2<\infty$ such that the solution $t\mapsto X(t;X_0,\Sigma,f)$ of (1.1a)-(1.1b) satisfies
$$\lim_{t\to\infty}\mathbb{E}\|X(t;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)\|^2=0,\quad\mathbb{E}\|X(\cdot;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)\|^2\in L^1([0,\infty),\mathbb{R}).\tag{7.6}$$
Then $X_\infty$ obeys
$$\Big(A+\int_0^\infty K(s)\,ds\Big)X_\infty=0\quad\text{a.s.}\tag{7.7}$$

Lemma 7.7. Let $K$ satisfy (2.4) and (4.4). Suppose that for all initial conditions $X_0$ there is an $\mathcal{F}^B(\infty)$-measurable and almost surely finite random variable $X_\infty(X_0,\Sigma,f)$ such that the solution $t\mapsto X(t;X_0,\Sigma,f)$ of (1.1a)-(1.1b) satisfies
$$\lim_{t\to\infty}X(t;X_0,\Sigma,f)=X_\infty(X_0,\Sigma,f)\quad\text{a.s.},\qquad X(\cdot;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)\in L^2([0,\infty),\mathbb{R}^n)\quad\text{a.s.}\tag{7.8}$$
Then $X_\infty$ obeys (7.7).

Proof of Proposition 7.1. Since (3.7) holds for every initial condition we can choose $X_0=0$: this simplifies calculations. Moreover, using (3.7) in Lemma 7.6 it is clear that assumption (7.7) holds. Consider the integral form of (1.1a)-(1.1b). Adding and subtracting $X_\infty$ from both sides and applying Lemma 7.6 we obtain
$$\Delta(t)=-X_\infty+\int_0^t\delta(s)\,ds+\int_0^tf(s)\,ds+\mu(t)-\int_0^tK_1(s)\,ds\,X_\infty,\tag{7.9}$$
where $\Delta(t)=X(t)-X_\infty$, the function $\delta$ is defined by
$$\delta(t)=A\Delta(t)+(K\ast\Delta)(t),\tag{7.10}$$
and $\mu(t)=\int_0^t\Sigma(s)\,dB(s)$. Taking expectations across (7.9) and allowing $t\to\infty$ we obtain
$$-\mathbb{E}[X_\infty]=-\int_0^\infty\mathbb{E}[\delta(s)]\,ds-\int_0^\infty f(s)\,ds+\int_0^\infty K_1(s)\,ds\,\mathbb{E}[X_\infty],\tag{7.11}$$
where $\mathbb{E}[\delta(t)]=A\mathbb{E}[\Delta(t)]+(K\ast\mathbb{E}[\Delta])(t)$. Using this expression for $\mathbb{E}[X_\infty]$ we obtain
$$f_1(t)=-\mathbb{E}[\Delta(t)]-\int_t^\infty\mathbb{E}[\delta(s)]\,ds+\int_t^\infty K_1(s)\,ds\,\mathbb{E}[X_\infty].\tag{7.12}$$
The first term on the right-hand side of (7.12) decays exponentially due to (3.7). Assumptions (3.4) and (3.7) imply that $\mathbb{E}[\delta(\cdot)]$ decays exponentially, so the second term decays exponentially. The third term on the right-hand side of (7.12) decays exponentially due to the argument given by (4.41). Hence, $f_1$ decays exponentially.
Proving that (3.5) holds breaks into two steps. We begin by showing that
$$\Big\|\int_0^\infty e^{\gamma t}f(t)\,dt\Big\|<\infty,\tag{7.13}$$
where $\gamma>0$. By choosing $0<\gamma<\beta_1^*\wedge\alpha$ we can obtain the following reformulation of (1.1a)-(1.1b) using methods applied in [15, Proposition 5.1]:
$$e^{\gamma t}\Delta(t)=\Delta(0)+(\gamma I+A)\int_0^te^{\gamma s}\Delta(s)\,ds+\int_0^te^{\gamma s}\int_0^sK(s-u)\Delta(u)\,du\,ds-\int_0^te^{\gamma s}K_1(s)\,ds\,X_\infty+\int_0^te^{\gamma s}f(s)\,ds+\int_0^te^{\gamma s}\Sigma(s)\,dB(s).\tag{7.14}$$
Rearranging (7.14), taking expectations and then norms on both sides, we can obtain
$$\Big\|\int_0^te^{\gamma s}f(s)\,ds\Big\|\le\mathbb{E}[e^{\gamma t}\|\Delta(t)\|]+\mathbb{E}\|\Delta(0)\|+\|\gamma I+A\|\int_0^te^{\gamma s}\mathbb{E}\|\Delta(s)\|\,ds+\int_0^te^{\gamma s}\int_0^s\|K(s-u)\|\,\mathbb{E}\|\Delta(u)\|\,du\,ds+\int_0^te^{\gamma s}\|K_1(s)\|\,ds\,\mathbb{E}\|X_\infty\|.\tag{7.15}$$
Since (3.7) holds, this implies that both the first and third terms on the right-hand side of (7.15) are bounded. The second term is bounded due to our assumptions. Since $0<\gamma<\beta_1^*\wedge\alpha$, we can choose $\gamma_1>0$ such that $\gamma<\gamma_1<\beta_1^*\wedge\alpha$. It can easily be shown that
$$\int_0^s\|K(s-u)\|\,\mathbb{E}\|\Delta(u)\|\,du\le ce^{-\gamma_1s}.\tag{7.16}$$
Finally, we see that the fifth term is bounded using (4.41). So, (7.13) holds.
We now return to (7.14). Again rearranging the equation and taking norms and then expectations across both sides, we obtain
$$\mathbb{E}\Big\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Big\|\le\mathbb{E}[e^{\gamma t}\|\Delta(t)\|]+\mathbb{E}\|\Delta(0)\|+\|\gamma I+A\|\int_0^te^{\gamma s}\mathbb{E}\|\Delta(s)\|\,ds+\int_0^te^{\gamma s}\int_0^s\|K(s-u)\|\,\mathbb{E}\|\Delta(u)\|\,du\,ds+\int_0^te^{\gamma s}\|K_1(s)\|\,ds\,\mathbb{E}\|X_\infty\|+\Big\|\int_0^te^{\gamma s}f(s)\,ds\Big\|.\tag{7.17}$$
We already provided an argument above to show that the first five terms on the right hand side of this expression are bounded. Also, we know that (7.13) holds. Thus,
$$\mathbb{E}\Big\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Big\|\le C.\tag{7.18}$$
The proof is now identical to Proposition 5.2.

Proof of Proposition 7.2. Since Lemma 7.7 holds we can obtain (7.9). Thus, as $t\to\infty$, we obtain
$$-X_\infty=-\int_0^\infty\delta(s)\,ds-\int_0^\infty f(s)\,ds-\mu(\infty)+\int_0^\infty K_1(s)\,ds\,X_\infty,\tag{7.19}$$
where $\delta$ is defined by (7.10). Using this expression for $X_\infty$, (7.9) becomes
$$\Delta(t)=-\int_t^\infty\delta(s)\,ds-f_1(t)-\mu_1(t)+\int_t^\infty K_1(s)\,ds\,X_\infty,\tag{7.20}$$
where $\mu_1(t)=\int_t^\infty\Sigma(s)\,dB(s)$. Rearranging the equation and taking norms yields
$$\|f_1(t)+\mu_1(t)\|\le\|\Delta(t)\|+\Big\|\int_t^\infty\delta(s)\,ds\Big\|+\Big\|\int_t^\infty K_1(s)\,ds\,X_\infty\Big\|.\tag{7.21}$$
The first term on the right hand side of (7.21) decays exponentially due to (3.8). Using the argument given in (4.41) we see that the third term on the right hand side of (7.21) decays exponentially. Finally, we consider the second term. Clearly $\int_t^\infty A\Delta(s)\,ds$ decays exponentially due to (3.8). In order to show that $\int_t^\infty(K\ast\Delta)(s)\,ds$ decays exponentially we use an argument similar to that applied in the proof of Proposition 7.1. So there is an almost surely finite random variable $c>0$ such that
$$\|f_1(t)+\mu_1(t)\|\le ce^{-\lambda t}\quad\forall t\ge0,\ \text{a.s.},\tag{7.22}$$
where $\lambda<\min(\beta_0^*,\alpha)$. We can now apply Lemma 7.3 to obtain (3.6) and (3.5).

Proof of Lemma 7.3. We suppose that there exists a constant $\gamma>0$ such that (3.5) holds. Using the equivalence of norms we see that for all $1\le i\le n$ and $1\le j\le d$ assumption (3.5) implies that
$$\int_0^\infty\Sigma_{ij}(s)^2e^{2\gamma s}\,ds<\infty.\tag{7.23}$$
Applying Lemma 7.5 we obtain
$$\limsup_{t\to\infty}\frac{1}{t}\log\Big|\int_t^\infty\Sigma_{ij}(s)\,dB_j(s)\Big|\le-\gamma,\quad\omega\in\Omega_{ij},\ \mathbb{P}[\Omega_{ij}]=1.\tag{7.24}$$
Choose any $\epsilon\in(0,\gamma)$. For each $\omega\in\Omega_{ij}$ we can choose a constant $c_{ij}(\omega,\epsilon)\ge1$ such that
$$\Big|\int_t^\infty\Sigma_{ij}(s)\,dB_j(s)\Big|\le c_{ij}(\omega,\epsilon)e^{-(\gamma-\epsilon)t}.\tag{7.25}$$
Now, summing over $j$ we see that
$$|\mu_1^i(t)|\le c_i(\omega,\epsilon)e^{-(\gamma-\epsilon)t},\tag{7.26}$$
where $\omega\in\Omega_i=\cap_{j=1}^d\Omega_{ij}$, $c_i=\sum_{j=1}^dc_{ij}$ and $\mu_1^i(t)=\sum_{j=1}^d\int_t^\infty\Sigma_{ij}(s)\,dB_j(s)$.
Now, since
$$|f_1^i(t)+\mu_1^i(t)|^2\le\sum_{i=1}^n|f_1^i(t)+\mu_1^i(t)|^2=\|f_1(t)+\mu_1(t)\|^2,\tag{7.27}$$
we see that
$$|f_1^i(t)+\mu_1^i(t)|\le c(\omega)e^{-\lambda t},\quad\omega\in\Omega^*.\tag{7.28}$$
So for $\omega\in\Omega_i\cap\Omega^*$ we see that $|f_1^i(t)|\le|f_1^i(t)+\mu_1^i(t)|+|\mu_1^i(t)|\le c(\omega)e^{-\lambda t}+c_i(\omega,\epsilon)e^{-(\gamma-\epsilon)t}$. This gives
$$|f_1^i(t)|\le c_i(\omega)e^{-\rho t},\quad\omega\in\Omega^*\cap\Omega_i,\tag{7.29}$$
where $c_i(\omega)>0$ is finite and $\rho\le\min(\lambda,\gamma-\epsilon)$. Now summing over $i$ we obtain (3.6), by picking out any $\omega\in\cap_i(\Omega^*\cap\Omega_i)$. This concludes the case when (3.5) holds.
Now, consider the case where assumption (3.5) fails to hold. We choose a constant $\gamma$ such that $0<\gamma<\lambda$ and define the function $d$ as
$$d(t)=\int_0^t\frac{1}{\gamma}(e^{\gamma s}-1)f(s)\,ds,\tag{7.30}$$
and the vector martingale $M$ as
$$M(t)=\int_0^t\frac{1}{\gamma}(e^{\gamma s}-1)\Sigma(s)\,dB(s).\tag{7.31}$$
We let $M_i$ denote the $i$th component of $M$ and $\langle M_i\rangle$ denote the quadratic variation of $M_i$ given by
$$\langle M_i\rangle(t)=\sum_{j=1}^d\int_0^t\frac{1}{\gamma^2}(e^{\gamma s}-1)^2\Sigma_{ij}(s)^2\,ds.\tag{7.32}$$
We show at the end of this proof that
$$\|d(t)+M(t)\|<c^*(\omega),\quad\omega\in\Omega^*,\ \mathbb{P}[\Omega^*]=1,\tag{7.33}$$
and therefore assume it for the time being.
Since (3.5) fails to hold there exists an entry $i$, $1\le i\le n$, of the martingale $M$ such that
$$\lim_{t\to\infty}\langle M_i\rangle(t)=\infty.\tag{7.34}$$
It follows that $\liminf_{t\to\infty}M_i(t)=-\infty$ and $\limsup_{t\to\infty}M_i(t)=\infty$ a.s. Consider the corresponding $i$th entry of $d$, denoted $d_i$; it is either bounded or unbounded. If $d_i$ is bounded then $M_i$ is bounded and so, by the Martingale Convergence Theorem, $\langle M_i\rangle(t)$ is bounded: this contradicts (7.34). So, we suppose the latter, that $d_i$ is unbounded, and proceed to show this is also contradictory. Since $|d_i(t)+M_i(t)|<c^*(\omega)$, for $\omega\in\Omega^*$ it is clear that $-c^*-M_i(t)<d_i(t)$. Taking the limit superior on both sides of the inequality yields
$$\infty=-c^*-\liminf_{t\to\infty}M_i(t)\le\limsup_{t\to\infty}d_i(t).\tag{7.35}$$
As $d$ is deterministic, there exists an increasing sequence of deterministic times $\{t_m\}_{m=0}^\infty$ with $t_0=0$ such that $d_i(t_m)\to\infty$ as $m\to\infty$. Consequently, $M_i(t_m)\to-\infty$ as $m\to\infty$. We choose a subsequence of these times $\{\tau_m\}_{m=0}^\infty$ with $\tau_0=t_0$ such that
$$v_l^2:=\sum_{j=1}^d\int_{\tau_{l-1}}^{\tau_l}\frac{1}{\gamma^2}(e^{\gamma s}-1)^2\Sigma_{ij}^2(s)\,ds\ge1.\tag{7.36}$$
Define $S_i(m)=M_i(\tau_m)$. Obviously
$$S_i(m)=\sum_{l=1}^m\xi_l^{(i)},\tag{7.37}$$
where
$$\xi_l^{(i)}=\sum_{j=1}^d\int_{\tau_{l-1}}^{\tau_l}\frac{1}{\gamma}(e^{\gamma s}-1)\Sigma_{ij}(s)\,dB_j(s).\tag{7.38}$$
It is clear that $\{\xi_l^{(i)}\}_{l=1}^\infty$ is an independent normally distributed sequence with the variance of each $\xi_l^{(i)}$ given by $v_l^2\ge1$, so we may apply Lemma 7.4.
We now show that assumption (7.33) holds. By changing the order of integration we can show that
$$d(t)=\int_0^te^{\gamma s}(f_1(s)-f_1(t))\,ds,\quad M(t)=\int_0^te^{\gamma s}(\mu_1(s)-\mu_1(t))\,ds.\tag{7.39}$$
Thus, as $0<\gamma<\lambda$,
$$\|d(t)+M(t)\|\le\int_0^te^{\gamma s}\big(\|f_1(t)+\mu_1(t)\|+\|f_1(s)+\mu_1(s)\|\big)\,ds\le c(\omega)\int_0^te^{\gamma s}(e^{-\lambda t}+e^{-\lambda s})\,ds<c_1(\omega),\quad\omega\in\Omega^*.\tag{7.40}$$

8. On the Necessary and Sufficient Conditions for Exponential Convergence of Solutions of (1.1a)-(1.1b) and (1.4a)-(1.4b)

We now combine the results from Sections 4 and 5 to prove Theorem 3.2 and combine the results from Sections 6 and 7 to prove Theorem 3.1.

We showed the necessity of (3.5) for the exponential convergence of the solution of (1.4a)-(1.4b) in Section 5. In order to prove the necessity of the exponential integrability of the kernel we require the following result which was extracted from [1].

Theorem 8.1. Let $K$ satisfy (2.4) and (3.1). Suppose that there exists a constant matrix $R_\infty$ and constants $\beta>0$ and $c>0$ such that the solution $R$ of (1.2a)-(1.2b) satisfies (4.3). If the kernel $K$ satisfies (3.2), then there exists a constant $\alpha>0$ such that $K$ satisfies (3.4).

Proof of Theorem 3.2. We begin by proving the equivalence between (i) and (ii). The implication (i) implies (ii) is the subject of Proposition 4.1. We can demonstrate that (ii) implies (i) as follows. We begin by proving that (3.9) implies (3.4). We consider the following $n$ solutions of (1.4a)-(1.4b): $\{X_j(t)\}_{j=1,\dots,n}$, where $X_j(0)=\mathbf{e}_j$. Since (3.9) holds we obtain
$$m_1(\mathbf{e}_j)e^{-\beta_1t}\ge\mathbb{E}\|X_j(t)-X_j(\infty)\|\ge\|\mathbb{E}[X_j(t)-X_j(\infty)]\|=\|(R(t)-R(\infty))\mathbf{e}_j\|\tag{8.1}$$
for each $j=1,\dots,n$. Thus, the resolvent $R$ of (1.2a)-(1.2b) decays exponentially to $R_\infty$. We can apply Theorem 8.1 to obtain (3.4), after which Proposition 5.2 can be applied to obtain (3.5). As (8.1) holds it is clear that (3.3) holds.
We now prove the equivalence between (i) and (iii). The implication (i) implies (iii) is the subject of Proposition 4.2. We now demonstrate that (iii) implies (i). We begin by proving that (3.10) implies (3.4). As (3.10) holds for all $X_0$ we can consider the following $n+1$ solutions of (1.4a)-(1.4b): $X_j(t)$, where
$$X_j(0)=\mathbf{e}_j\ \text{for}\ j=1,\dots,n,\qquad X_{n+1}(0)=0.\tag{8.2}$$
We know that $X_j(t)$ approaches $X_j(\infty)$ exponentially quickly in the almost sure sense. Introduce
$$S_j(t)=X_j(t)-X_{n+1}(t),\tag{8.3}$$
and notice $S_j(0)=\mathbf{e}_j$. Let $S=[S_1,\dots,S_n]\in\mathbb{R}^{n\times n}$. Then
$$S'(t)=AS(t)+(K\ast S)(t),\quad t>0,\qquad S(0)=I.\tag{8.4}$$
If we define $S_j(\infty)=X_j(\infty)-X_{n+1}(\infty)$, then $S(t)\to S_\infty$ exponentially quickly, so we can apply Theorem 8.1 to obtain (3.4). As (3.4) and (3.10) hold we can apply Proposition 5.1 to obtain (3.5). Also evident from this argument is that (3.3) holds. This proves that (iii) implies (i).

Proof of Theorem 3.1. We begin by proving the equivalence between (i) and (ii). The implication that (i) implies (ii) is the subject of Proposition 6.1. Now consider the implication (ii) implies (i). Using (3.7) we see that
$$\|\mathbb{E}[X(t)-X_\infty]\|\le\mathbb{E}\|X(t)-X_\infty\|\le m^*e^{-\beta_1^*t}.\tag{8.5}$$
Consider the $n+1$ solutions $X_j(t)$ of (1.1a)-(1.1b) with initial conditions $X_j(0)=\mathbf{e}_j$ for $j=1,\dots,n$ and $X_{n+1}(0)=0$. Since $R(t)\mathbf{e}_j=X_j(t)-X_{n+1}(t)$ we see that
$$\mathbb{E}[X_j(t)-X_j(\infty)]-\mathbb{E}[X_{n+1}(t)-X_{n+1}(\infty)]=R(t)\mathbf{e}_j-\mathbb{E}[c_j],\tag{8.6}$$
where $c_j=X_j(\infty)-X_{n+1}(\infty)$ is an almost surely finite random variable. As both terms on the left hand side of this expression are decaying exponentially to zero, $t\mapsto R(t)\mathbf{e}_j$ must decay exponentially to $\mathbb{E}[c_j]$ as $t\to\infty$. Thus $R$ must satisfy (4.3). Now, apply Theorem 8.1 to obtain (3.4) and Proposition 7.1 to obtain (3.6) and (3.5).
We now prove the equivalence between (i) and (iii). The implication (i) implies (iii) is the subject of Proposition 6.2. Once again, consider the $n+1$ solutions $X_j$ with initial conditions $X_j(0)=\mathbf{e}_j$ for $j=1,\dots,n$ and $X_{n+1}(0)=0$. Since $R(t)\mathbf{e}_j=X_j(t)-X_{n+1}(t)$ for $j=1,\dots,n$, we can write
$$(X_j(t)-X_j(\infty))-(X_{n+1}(t)-X_{n+1}(\infty))=R(t)\mathbf{e}_j-c_j,\tag{8.7}$$
where $c_j=X_j(\infty)-X_{n+1}(\infty)$ is an almost surely finite random variable. From (3.8) we know that $X_j(t)$ decays exponentially quickly to $X_j(\infty)$; similarly $X_{n+1}(t)$ decays exponentially quickly to $X_{n+1}(\infty)$. Thus, $R(t)\mathbf{e}_j$ decays exponentially to a limit. As a result (4.3) must hold. Now apply Theorem 8.1 to obtain (3.4) and Proposition 7.2 to obtain (3.6) and (3.5).

Acknowledgments

The authors are pleased to acknowledge the referees for their careful scrutiny of and suggested corrections to the manuscript. The first author was partially funded by an Albert College Fellowship, awarded by Dublin City University’s Research Advisory Panel. The second author was funded by The Embark Initiative operated by the Irish Research Council for Science, Engineering and Technology (IRCSET).