Abstract

We consider geometric Markov renewal processes as a model for a security market and study these processes in a diffusion approximation scheme. Weak convergence analysis and rates of convergence of ergodic geometric Markov renewal processes in the diffusion scheme are presented. We also present European call option pricing formulas in the case of ergodic, double-averaged, and merged diffusion geometric Markov renewal processes.

1. Introduction

Let $N(t)$ be a standard Poisson process, let $(Y_k)_{k\in\mathbb{Z}_+}$ be i.i.d. random variables independent of $N(t)$, and let $S_0>0$. The geometric compound Poisson process
$$S_t = S_0 \prod_{k=1}^{N(t)} \bigl(1+Y_k\bigr), \quad t>0, \qquad (1.1)$$
is a trading model in many financial applications with pure jumps [1, page 214]. Motivated by the geometric compound Poisson process (1.1), Swishchuk and Islam [2] studied the geometric Markov renewal processes (2.5) (see Section 2) for a security market in a series scheme. The geometric Markov renewal process (2.5) is also known as a switched-switching process. Averaging and diffusion approximation methods are important approximation methods for switched-switching systems. Averaging schemes for the geometric Markov renewal processes (2.5) were studied in [2].

The singular perturbation technique for a reducibly invertible operator is one of the techniques for the construction of averaging and diffusion schemes for switched-switching processes. A strong ergodicity assumption on the switching process ensures that the singular perturbation problem has a solution under some additional nonrestrictive conditions. Averaging and diffusion approximation schemes for switched-switching processes in the form of random evolutions were studied in [3, page 157] and [1, page 41]. In this paper, we introduce a diffusion approximation of the geometric Markov renewal processes. We study a discrete Markov-modulated $(B,S)$-security market described by a geometric Markov renewal process (GMRP). Weak convergence analysis and rates of convergence of ergodic geometric Markov renewal processes in the diffusion scheme are presented. We present European call option pricing formulas in the case of ergodic, double-averaged, and merged diffusion geometric Markov renewal processes.

The paper is organized as follows. In Section 2 we review the definition of the geometric Markov renewal process (GMRP) from [2]. Moreover, we fix notation and summarize results such as the random evolution of the GMRP, the Markov renewal equation for the GMRP, the infinitesimal operators of the GMRP, and the martingale property of the GMRP. In Section 3 we present the diffusion approximation of the GMRP in the ergodic, merged, and double-averaging schemes. In Section 4 we present the proofs of the above-mentioned results: the solution of the martingale problem, weak convergence, rates of convergence for the GMRP, and a characterization of the limit measure. In Section 5 we present the merged diffusion GMRP in the case of two ergodic classes. European call option pricing formulas for the ergodic, double-averaged, and merged diffusion GMRP are presented in Section 6.

2. The Geometric Markov Renewal Processes (GMRP)

In this section we present the geometric Markov renewal process, closely following [2].

Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, \mathbb{P})$ be a standard probability space with complete filtration $(\mathcal{F}_t)_{t\ge 0}$ and let $(x_k)_{k\in\mathbb{Z}_+}$ be a Markov chain in the phase space $(X,\mathcal{X})$ with transition probability $P(x,A)$, where $x\in X$, $A\in\mathcal{X}$. Let $(\theta_k)_{k\in\mathbb{Z}_+}$ be a renewal process, that is, a sequence of independent and identically distributed (i.i.d.) random variables with common distribution function $F(x)=\mathbb{P}\{\omega:\theta_k(\omega)\le x\}$. The random variables $(\theta_k)_{k\in\mathbb{Z}_+}$ can be interpreted as lifetimes (operating periods, holding times, renewal periods) of a certain system in a random environment. From the renewal process $(\theta_k)_{k\in\mathbb{Z}_+}$ we construct another renewal process $(\tau_k)_{k\in\mathbb{Z}_+}$ defined by
$$\tau_k = \sum_{n=0}^{k}\theta_n. \qquad (2.1)$$
The random variables $\tau_k$ are called renewal times (or jump times). The process
$$\nu(t) = \sup\{k:\tau_k\le t\} \qquad (2.2)$$
is called the counting process.

Definition 2.1 (see [1, 4]). A homogeneous two-dimensional Markov chain $(x_n,\theta_n)_{n\in\mathbb{Z}_+}$ on the phase space $X\times\mathbb{R}_+$ is called a Markov renewal process (MRP) if its transition probabilities are given by the semi-Markov kernel
$$Q(x,A,t) = \mathbb{P}\bigl\{x_{n+1}\in A,\ \theta_{n+1}\le t \mid x_n = x\bigr\}, \quad x\in X,\ A\in\mathcal{X},\ t\in\mathbb{R}_+. \qquad (2.3)$$

Definition 2.2. The process
$$x(t) = x_{\nu(t)} \qquad (2.4)$$
is called a semi-Markov process.

The ergodic theorems for Markov renewal processes and semi-Markov processes can be found in [3, page 195], [1, page 66], and [4, page 113].

Let $(x_n,\theta_n)_{n\in\mathbb{Z}_+}$ be a Markov renewal process on the phase space $X\times\mathbb{R}_+$ with the semi-Markov kernel $Q(x,A,t)$ defined in (2.3), and let $x(t)=x_{\nu(t)}$ be the associated semi-Markov process, where the counting process $\nu(t)$ is defined in (2.2). Let $\rho(x)$ be a bounded continuous function on $X$ such that $\rho(x)>-1$. We define the geometric Markov renewal process (GMRP) $(S_t)_{t\in\mathbb{R}_+}$ as the stochastic functional
$$S_t = S_0 \prod_{k=1}^{\nu(t)} \bigl(1+\rho(x_k)\bigr), \quad t\in\mathbb{R}_+, \qquad (2.5)$$
where $S_0>0$ is the initial value of $S_t$. We call the process $(S_t)_{t\in\mathbb{R}_+}$ a geometric Markov renewal process by analogy with the geometric compound Poisson process
$$S_t = S_0 \prod_{k=1}^{N(t)} \bigl(1+Y_k\bigr), \qquad (2.6)$$
where $S_0>0$, $N(t)$ is a standard Poisson process, and $(Y_k)_{k\in\mathbb{Z}_+}$ are i.i.d. random variables. The geometric compound Poisson process $(S_t)_{t\in\mathbb{R}_+}$ in (2.6) is a trading model in many financial applications as a pure jump model [5, 6]. The geometric Markov renewal process $(S_t)_{t\in\mathbb{R}_+}$ in (2.5) will be our main trading model in the further analysis.
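For illustration, the following Python sketch simulates one sample path of the GMRP (2.5) on a finite phase space. The three-state transition matrix, the exponential holding-time rates, and the return function $\rho$ used here are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-state Markov chain (x_k), exponential holding times (theta_k),
# and a bounded return function rho(x) > -1.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])       # transition probabilities P(x, dy)
lam = np.array([1.0, 2.0, 1.5])       # holding-time rates, G_x(t) = 1 - exp(-lam[x] t)
rho = np.array([0.02, -0.01, 0.015])  # returns rho(x)

def simulate_gmrp(S0, horizon, x0=0):
    """Simulate (t, S_t) of the GMRP S_t = S0 * prod_{k <= nu(t)} (1 + rho(x_k))."""
    t, x, S = 0.0, x0, S0
    times, values = [0.0], [S0]
    while True:
        t += rng.exponential(1.0 / lam[x])   # next holding time theta_{k+1}
        if t > horizon:
            break
        x = rng.choice(len(lam), p=P[x])     # next state x_{k+1}
        S *= 1.0 + rho[x]                    # multiplicative jump at the renewal time
        times.append(t)
        values.append(S)
    return np.array(times), np.array(values)

times, values = simulate_gmrp(S0=100.0, horizon=10.0)
print(times[:5], values[:5])
```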

Jump semi-Markov random evolutions, infinitesimal operators, and the martingale property of the GMRP were presented in [2]. For the convenience of the reader we recall them below.

2.1. Jump Semi-Markov Random Evolutions

Let $C_0(\mathbb{R}_+)$ be the space of continuous functions on $\mathbb{R}_+$ vanishing at infinity, and let us define a family of bounded contraction operators $D(x)$ on $C_0(\mathbb{R}_+)$ as follows:
$$D(x) f(s) = f\bigl(s(1+\rho(x))\bigr), \quad x\in X,\ s\in\mathbb{R}_+. \qquad (2.7)$$
With these contraction operators $D(x)$ we define the following jump semi-Markov random evolution (JSMRE) $V(t)$ of the geometric Markov renewal process $(S_t)_{t\in\mathbb{R}_+}$ in (2.5):
$$V(t) = \prod_{k=1}^{\nu(t)} D(x_k) = D\bigl(x_{\nu(t)}\bigr)\, D\bigl(x_{\nu(t)-1}\bigr)\cdots D(x_1). \qquad (2.8)$$
Using (2.7) we obtain from (2.8)
$$V(t) f(s) = \prod_{k=1}^{\nu(t)} D(x_k) f(s) = f\Bigl(s\prod_{k=1}^{\nu(t)}\bigl(1+\rho(x_k)\bigr)\Bigr) = f(S_t), \qquad (2.9)$$
where $S_t$ is defined in (2.5) and $S_0=s$. Let $Q(x,A,t)$ be the semi-Markov kernel of the Markov renewal process $(x_n,\theta_n)_{n\in\mathbb{Z}_+}$, that is, $Q(x,A,t)=P(x,A)G_x(t)$, where $P(x,A)$ is the transition probability of the Markov chain $(x_n)_{n\in\mathbb{Z}_+}$ and $G_x(t)=\mathbb{P}(\theta_{n+1}\le t\mid x_n=x)$. Let
$$u(t,x) = E_x\bigl[V(t)\,g(x(t))\bigr] = E\bigl[V(t)\,g(x(t))\mid x(0)=x\bigr] \qquad (2.10)$$
be the mean value of the semi-Markov random evolution $V(t)$ in (2.9).

The following theorem is proved in [1, page 60] and [4, page 38].

Theorem 2.3. The mean value $u(t,x)$ in (2.10) of the semi-Markov random evolution $V(t)$ is given by the solution of the following Markov renewal equation (MRE):
$$u(t,x) - \int_0^t\int_X Q(x,dy,ds)\, D(y)\, u(t-s,y) = \overline{G}_x(t)\, g(x), \qquad (2.11)$$
where $\overline{G}_x(t)=1-G_x(t)$, $G_x(t)=\mathbb{P}(\theta_{n+1}\le t\mid x_n=x)$, and $g(x)$ is a bounded continuous function on $X$.

2.2. Infinitesimal Operators of the GMRP

Let
$$\rho_T(x) = \frac{\rho(x)}{T}, \quad T>0, \qquad (2.12)$$
$$S_t^T = S_0 \prod_{k=1}^{\nu(tT)} \bigl(1+\rho_T(x_k)\bigr) = S_0 \prod_{k=1}^{\nu(tT)} \bigl(1+T^{-1}\rho(x_k)\bigr). \qquad (2.13)$$
Detailed information about $\rho_T(x)$ and $S_t^T$ can be found in Section 4 of [2]. It can easily be shown that
$$\ln\frac{S_t^T}{S_0} = \sum_{k=1}^{\nu(tT)}\ln\Bigl(1+\frac{\rho(x_k)}{T}\Bigr). \qquad (2.14)$$
To describe the martingale properties of the GMRP $(S_t)_{t\in\mathbb{R}_+}$ in (2.5) we need the infinitesimal operator of the process
$$\eta(t) = \sum_{k=1}^{\nu(t)}\ln\bigl(1+\rho(x_k)\bigr). \qquad (2.15)$$
Let $\gamma(t) = t - \tau_{\nu(t)}$ and consider the process $(x(t),\gamma(t))$ on $X\times\mathbb{R}_+$. It is a Markov process with infinitesimal operator
$$Q f(x,t) = \frac{df}{dt} + \frac{g_x(t)}{\overline{G}_x(t)}\int_X P(x,dy)\bigl[f(y,0)-f(x,t)\bigr], \qquad (2.16)$$
where $g_x(t)=dG_x(t)/dt$, $\overline{G}_x(t)=1-G_x(t)$, and $f(x,t)\in C(X\times\mathbb{R}_+)$. The infinitesimal operator for the process $\ln S(t)$ has the form
$$A f(z,x) = \frac{g_x(t)}{\overline{G}_x(t)}\int_X P(x,dy)\bigl[f\bigl(z+\ln(1+\rho(y)),x\bigr)-f(z,x)\bigr], \qquad (2.17)$$
where $z=\ln S_0$. The process $(\ln S(t),x(t),\gamma(t))$ is a Markov process on $\mathbb{R}_+\times X\times\mathbb{R}_+$ with infinitesimal operator
$$L f(z,x,t) = A f(z,x,t) + Q f(z,x,t), \qquad (2.18)$$
where the operators $A$ and $Q$ are defined in (2.17) and (2.16), respectively. Thus we obtain that the process
$$m(t) = f\bigl(\ln S(t),x(t),\gamma(t)\bigr) - f(z,x,0) - \int_0^t (A+Q) f\bigl(\ln S(u),x(u),\gamma(u)\bigr)\,du \qquad (2.19)$$
is an $\mathcal{F}_t$-martingale, where $\mathcal{F}_t=\sigma(x(s),\gamma(s);\ 0\le s\le t)$. If $x(t)=x_{\nu(t)}$ is a Markov process with kernel
$$Q(x,A,t) = P(x,A)\bigl(1-e^{-\lambda(x)t}\bigr), \qquad (2.20)$$
namely, $G_x(t)=1-e^{-\lambda(x)t}$, then $g_x(t)=\lambda(x)e^{-\lambda(x)t}$, $\overline{G}_x(t)=e^{-\lambda(x)t}$, $g_x(t)/\overline{G}_x(t)=\lambda(x)$, and the operator $A$ in (2.17) has the form
$$A f(z) = \lambda(x)\int_X P(x,dy)\bigl[f\bigl(z+\ln(1+\rho(y))\bigr)-f(z)\bigr]. \qquad (2.21)$$
The process $(\ln S(t),x(t))$ on $\mathbb{R}_+\times X$ is a Markov process with infinitesimal operator
$$L f(z,x) = A f(z,x) + Q f(z,x), \qquad (2.22)$$
where
$$Q f(z,x) = \lambda(x)\int_X P(x,dy)\bigl(f(y)-f(x)\bigr). \qquad (2.23)$$
It follows that the process
$$m(t) = f\bigl(\ln S(t),x(t)\bigr) - f(z,x) - \int_0^t (A+Q) f\bigl(\ln S(u),x(u)\bigr)\,du \qquad (2.24)$$
is an $\mathcal{F}_t$-martingale, where $\mathcal{F}_t=\sigma(x(u);\ 0\le u\le t)$.
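As a concrete illustration of the operators (2.21)–(2.23), the following sketch evaluates the generator $L=A+Q$ of $(\ln S(t),x(t))$ on a finite phase space with exponential holding times. The chain, the rates $\lambda(x)$, the returns $\rho(x)$, and the test function are hypothetical choices for illustration only.

```python
import numpy as np

# Hypothetical finite phase space X = {0, 1, 2}.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])       # P(x, dy)
lam = np.array([1.0, 2.0, 1.5])       # lambda(x), so G_x(t) = 1 - exp(-lambda(x) t)
rho = np.array([0.02, -0.01, 0.015])  # rho(x)

def generator_L(f, z, x):
    """Apply L f(z, x) = A f(z, x) + Q f(z, x), cf. (2.21)-(2.23)."""
    jump = np.log(1.0 + rho)                            # jump sizes ln(1 + rho(y))
    states = np.arange(len(lam))
    # A shifts the log-price coordinate by ln(1 + rho(y)), averaged over P(x, dy).
    A = lam[x] * np.sum(P[x] * (f(z + jump, x) - f(z, x)))
    # Q acts on the chain coordinate only (the f(y) - f(x) term in (2.23)).
    Q = lam[x] * np.sum(P[x] * (f(z, states) - f(z, x)))
    return A + Q

# Test function f(z, x) = z^2 + x, vectorized in both arguments.
f = lambda z, x: np.asarray(z, dtype=float) ** 2 + np.asarray(x, dtype=float)
print(generator_L(f, z=np.log(100.0), x=0))
```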

2.3. Martingale Property of the GMRP

Consider the geometric Markov renewal process $(S_t)_{t\in\mathbb{R}_+}$,
$$S_t = S_0 \prod_{k=1}^{\nu(t)} \bigl(1+\rho(x_k)\bigr). \qquad (2.25)$$
For $t\in[0,T]$ let us define
$$L_t = L_0 \prod_{k=1}^{\nu(t)} h(x_k), \quad E L_0 = 1, \qquad (2.26)$$
where $h(x)$ is a bounded continuous function such that
$$\int_X h(y)\,P(x,dy) = 1, \qquad \int_X h(y)\,P(x,dy)\,\rho(y) = 0. \qquad (2.27)$$
If $E L_T = 1$, then the geometric Markov renewal process $S_t$ in (2.25) is an $(\mathcal{F}_t,P^*)$-martingale, where the measure $P^*$ is defined as follows:
$$\frac{dP^*}{dP} = L_T, \qquad \mathcal{F}_t = \sigma(x(s);\ 0\le s\le t). \qquad (2.28)$$
In the discrete case we have
$$S_n = S_0 \prod_{k=1}^{n} \bigl(1+\rho(x_k)\bigr). \qquad (2.29)$$
Let $L_n = L_0\prod_{k=1}^n h(x_k)$, $E L_0=1$, where $h(x)$ is defined in (2.27). If $E L_N=1$, then $S_n$ is an $(\mathcal{F}_n,P^*)$-martingale, where $dP^*/dP=L_N$ and $\mathcal{F}_n=\sigma(x_k;\ 0\le k\le n)$.

3. Diffusion Approximation of the Geometric Markov Renewal Process (GMRP)

Under an additional balance condition, the averaging effect leads to a diffusion approximation of the geometric Markov renewal process (GMRP). In fact, we consider the counting process $\nu(t)$ in (2.5) in the new accelerated time scale $tT^2$, that is, $\nu(tT^2)$. Due to the more rapid changes of state of the system under the balance condition, the fluctuations are described by a diffusion process.

3.1. Ergodic Diffusion Approximation

Let us suppose that the following balance condition holds for the functional $S_t^T = S_0\prod_{k=1}^{\nu(tT)}(1+\rho_T(x_k))$:
$$\widehat{\rho} = \frac{\int_X p(dx)\int_X P(x,dy)\,\rho(y)}{m} = 0, \qquad (3.1)$$
where $p(dx)$ is the ergodic distribution of the Markov chain $(x_k)_{k\in\mathbb{Z}_+}$. Then $\widehat{S}(t)=S_0$ for all $t\in\mathbb{R}_+$, where $\widehat{S}(t)$ is the averaged limit obtained in [2]. Consider $S_t^T$ in the new time scale $tT^2$:
$$S^T(t) := S^T_{tT^2} = S_0 \prod_{k=1}^{\nu(tT^2)} \bigl(1+T^{-1}\rho(x_k)\bigr). \qquad (3.2)$$
Due to the more rapid jumps of $\nu(tT^2)$, the process $S^T(t)$ fluctuates near the point $S_0$ as $T\to+\infty$. By arguments similar to (4.3)–(4.5) in [2], we obtain the following expression:
$$\ln\frac{S^T(t)}{S_0} = T^{-1}\sum_{k=1}^{\nu(tT^2)}\rho(x_k) - \frac{1}{2T^2}\sum_{k=1}^{\nu(tT^2)}\rho^2(x_k) + T^{-2}\sum_{k=1}^{\nu(tT^2)} r\bigl(T^{-1}\rho(x_k)\bigr)\,\rho^2(x_k). \qquad (3.3)$$
Algorithms of ergodic averaging give the limit for the second term in (3.3) (see [1, page 43] and [4, page 88]):
$$\lim_{T\to+\infty}\frac{1}{2T^2}\sum_{k=1}^{\nu(tT^2)}\rho^2(x_k) = \frac{1}{2}\,t\,\widehat{\rho}_2, \qquad (3.4)$$
where $\widehat{\rho}_2 = \int_X p(dx)\int_X P(x,dy)\,\rho^2(y)/m$. Using algorithms of diffusion approximation for the first term in (3.3) we obtain [4, page 88]:
$$\lim_{T\to+\infty} T^{-1}\sum_{k=1}^{\nu(tT^2)}\rho(x_k) = \sigma_\rho w(t), \qquad (3.5)$$
where
$$\sigma_\rho^2 = \frac{\int_X p(dx)\Bigl[\tfrac{1}{2}\int_X P(x,dy)\,\rho^2(y) + \int_X P(x,dy)\,\rho(y)\, R_0 \int_X P(x,dy)\,\rho(y)\Bigr]}{m},$$
$R_0$ is the potential operator [3, page 68] of $(x_n)_{n\in\mathbb{Z}_+}$, and $w(t)$ is a standard Wiener process. The last term in (3.3) goes to zero as $T\to+\infty$. Let $\widehat{S}(t)$ be the limiting process for $S^T(t)$ in (3.3) as $T\to+\infty$. Taking the limit on both sides of (3.3) we obtain
$$\lim_{T\to+\infty}\ln\frac{S^T(t)}{S_0} = \ln\frac{\widehat{S}(t)}{S_0} = \sigma_\rho w(t) - \frac{1}{2}\,t\,\widehat{\rho}_2, \qquad (3.6)$$
where $\sigma_\rho^2$ and $\widehat{\rho}_2$ are defined in (3.5) and (3.4), respectively. From (3.6) we obtain
$$\widehat{S}(t) = S_0\, e^{\sigma_\rho w(t) - (1/2) t \widehat{\rho}_2} = S_0\, e^{-(1/2) t \widehat{\rho}_2}\, e^{\sigma_\rho w(t)}. \qquad (3.7)$$
Thus, $\widehat{S}(t)$ satisfies the following stochastic differential equation (SDE):
$$d\widehat{S}(t) = \widehat{S}(t)\Bigl[\frac{1}{2}\bigl(\sigma_\rho^2 - \widehat{\rho}_2\bigr)\,dt + \sigma_\rho\, dw(t)\Bigr]. \qquad (3.8)$$

In this way we have the following corollary.

Corollary 3.1. The ergodic diffusion GMRP has the form
$$\widehat{S}(t) = S_0\, e^{-(1/2) t \widehat{\rho}_2}\, e^{\sigma_\rho w(t)}, \qquad (3.9)$$
and it satisfies the following SDE:
$$d\widehat{S}(t) = \widehat{S}(t)\Bigl[\frac{1}{2}\bigl(\sigma_\rho^2 - \widehat{\rho}_2\bigr)\,dt + \sigma_\rho\, dw(t)\Bigr]. \qquad (3.10)$$
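As a numerical companion to Corollary 3.1, the sketch below simulates paths of the limiting process (3.9) exactly on a time grid. The parameter values used for $\widehat{\rho}_2$ and $\sigma_\rho$ are hypothetical stand-ins for the quantities in (3.4)–(3.5).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ergodic_limit(S0, rho2_hat, sigma_rho, t_grid, n_paths=5):
    """Exact simulation of the ergodic diffusion GMRP (3.9):
       S(t) = S0 * exp(-0.5 * rho2_hat * t + sigma_rho * w(t))."""
    dt = np.diff(t_grid, prepend=0.0)
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(t_grid)))
    w = np.cumsum(dw, axis=1)                 # Wiener paths on the grid
    return S0 * np.exp(-0.5 * rho2_hat * t_grid + sigma_rho * w)

# Hypothetical inputs; rho2_hat and sigma_rho would come from (3.4)-(3.5) in practice.
t_grid = np.linspace(0.0, 1.0, 251)
paths = simulate_ergodic_limit(S0=100.0, rho2_hat=0.04, sigma_rho=0.2, t_grid=t_grid)
print(paths[:, -1])
```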

3.2. Merged Diffusion Approximation

Let us suppose that the following balance condition holds:
$$\widehat{\rho}(k) = \frac{\int_{X_k} p_k(dx)\int_{X_k} P(x,dy)\,\rho(y)}{m(k)} = 0, \qquad (3.11)$$
for all $k=1,2,\dots,r$, where $(x_n)_{n\in\mathbb{Z}_+}$ is the supporting embedded Markov chain, $p_k$ is the stationary density on the ergodic component $X_k$, $m(k)$ is defined in [2], and the conditions of reducibility of $X$ are fulfilled. Using the algorithms of merged averaging [1, 3, 4] we obtain from the second term on the right hand side of (3.3):
$$\lim_{T\to+\infty}\frac{1}{2T^2}\sum_{k=1}^{\nu(tT^2)}\rho^2(x_k) = \frac{1}{2}\int_0^t \widehat{\rho}_2\bigl(\widehat{x}(s)\bigr)\,ds, \qquad (3.12)$$
where
$$\widehat{\rho}_2(k) = \frac{\int_{X_k} p_k(dx)\int_{X_k} P(x,dy)\,\rho^2(y)}{m(k)}. \qquad (3.13)$$
Using the algorithms of merged diffusion approximation [1, 3, 4] we obtain from the first term on the right hand side of (3.3):
$$\lim_{T\to+\infty} T^{-1}\sum_{k=1}^{\nu(tT^2)}\rho(x_k) = \int_0^t \sigma_\rho\bigl(\widehat{x}(s)\bigr)\,dw(s), \qquad (3.14)$$
where
$$\sigma_\rho^2(k) = \frac{\int_{X_k} p_k(dx)\Bigl[\int_{X_k} P(x,dy)\,\rho^2(y) + \int_{X_k} P(x,dy)\,\rho(y)\, R_0 \int_{X_k} P(x,dy)\,\rho(y)\Bigr]}{m(k)}. \qquad (3.15)$$
The third term in (3.3) goes to 0 as $T\to+\infty$. In this way, from (3.3) we obtain
$$\lim_{T\to+\infty}\ln\frac{S^T(t)}{S_0} = \ln\frac{\widehat{S}(t)}{S_0} = \int_0^t \sigma_\rho\bigl(\widehat{x}(s)\bigr)\,dw(s) - \frac{1}{2}\int_0^t \widehat{\rho}_2\bigl(\widehat{x}(s)\bigr)\,ds, \qquad (3.16)$$
where $\widehat{S}(t)$ is the limit of $S^T(t)$ as $T\to+\infty$. From (3.16) we obtain
$$\widehat{S}(t) = S_0\, e^{-(1/2)\int_0^t \widehat{\rho}_2(\widehat{x}(s))\,ds + \int_0^t \sigma_\rho(\widehat{x}(s))\,dw(s)}. \qquad (3.17)$$
The stochastic differential equation (SDE) for $\widehat{S}(t)$ has the following form:
$$d\widehat{S}(t) = \widehat{S}(t)\Bigl[\frac{1}{2}\bigl(\sigma_\rho^2(\widehat{x}(t)) - \widehat{\rho}_2(\widehat{x}(t))\bigr)\,dt + \sigma_\rho(\widehat{x}(t))\,dw(t)\Bigr], \qquad (3.18)$$
where $\widehat{x}(t)$ is the merged Markov process.

In this way we have the following corollary.

Corollary 3.2. The merged diffusion GMRP has the form (3.17) and satisfies the SDE (3.18).
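The following sketch illustrates Corollary 3.2 by discretizing (3.17): the merged process $\widehat{x}(t)$ is simulated as a Markov jump process with hypothetical intensities and transition matrix, and the per-class coefficients are hypothetical stand-ins for $\widehat{\rho}_2(k)$ in (3.13) and $\sigma_\rho(k)$ in (3.15).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical merged model with r = 2 classes.
rho2_hat  = np.array([0.03, 0.06])   # rho_2_hat(k), cf. (3.13)
sigma_rho = np.array([0.15, 0.25])   # sigma_rho(k), cf. (3.15)
Lambda    = np.array([1.0, 1.5])     # jump intensities of x_hat(t)
P_hat     = np.array([[0.2, 0.8],
                      [0.7, 0.3]])   # merged transition probabilities

def simulate_merged_gmrp(S0, horizon, n_steps=1000, k0=0):
    """Discretization of the merged diffusion GMRP (3.17)-(3.18); x_hat(t) is a
       Markov jump process, and coefficients are frozen over each small step."""
    dt = horizon / n_steps
    S, k = S0, k0
    t_next_jump = rng.exponential(1.0 / Lambda[k0])
    path = [S0]
    for i in range(n_steps):
        t = (i + 1) * dt
        while t_next_jump <= t:                     # regime switches before time t
            k = rng.choice(2, p=P_hat[k])
            t_next_jump += rng.exponential(1.0 / Lambda[k])
        # Per-step exact update of (3.17) with frozen coefficients:
        S *= np.exp(-0.5 * rho2_hat[k] * dt + sigma_rho[k] * np.sqrt(dt) * rng.normal())
        path.append(S)
    return np.array(path)

print(simulate_merged_gmrp(S0=100.0, horizon=1.0)[-1])
```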

3.3. Diffusion Approximation under Double Averaging

Let us suppose that the phase space $\widehat{X}=\{1,2,\dots,r\}$ of the merged Markov process $\widehat{x}(t)$ consists of one ergodic class with stationary distribution $(\widehat{p}_k;\ k=1,2,\dots,r)$. Let us also suppose that the following balance condition is fulfilled:
$$\sum_{k=1}^{r}\widehat{p}_k\,\widehat{\rho}(k) = 0. \qquad (3.19)$$
Then, using the algorithms of diffusion approximation under double averaging (see [3, page 188], [1, page 49], and [4, page 93]), we obtain
$$\lim_{T\to+\infty}\ln\frac{S^T(t)}{S_0} = \ln\frac{\widehat{S}(t)}{S_0} = \bar{\sigma}_\rho w(t) - \frac{1}{2}\bar{\rho}_2\, t, \qquad (3.20)$$
where
$$\bar{\sigma}_\rho^2 = \sum_{k=1}^{r}\widehat{p}_k\,\sigma_\rho^2(k), \qquad \bar{\rho}_2 = \sum_{k=1}^{r}\widehat{p}_k\,\widehat{\rho}_2(k), \qquad (3.21)$$
and $\widehat{\rho}_2(k)$ and $\sigma_\rho^2(k)$ are defined in (3.13) and (3.15), respectively. Thus, we obtain from (3.20):
$$\widehat{S}(t) = S_0\, e^{-(1/2)\bar{\rho}_2 t + \bar{\sigma}_\rho w(t)}. \qquad (3.22)$$

Corollary 3.3. The diffusion GMRP under double averaging has the form
$$\widehat{S}(t) = S_0\, e^{-(1/2)\bar{\rho}_2 t + \bar{\sigma}_\rho w(t)}, \qquad (3.23)$$
and satisfies the SDE
$$d\widehat{S}(t) = \widehat{S}(t)\Bigl[\frac{1}{2}\bigl(\bar{\sigma}_\rho^2 - \bar{\rho}_2\bigr)\,dt + \bar{\sigma}_\rho\, dw(t)\Bigr]. \qquad (3.24)$$
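For completeness, a minimal sketch of the double-averaging step (3.21); the per-class coefficients and the merged stationary distribution are hypothetical inputs.

```python
import numpy as np

# Hypothetical per-class coefficients (3.13), (3.15) and merged stationary distribution.
p_hat      = np.array([0.4, 0.6])            # stationary distribution of x_hat(t)
rho2_hat   = np.array([0.03, 0.06])          # rho_2_hat(k)
sigma2_rho = np.array([0.15**2, 0.25**2])    # sigma_rho^2(k)

# Double-averaged coefficients (3.21): weighted averages over the merged classes.
rho2_bar   = np.dot(p_hat, rho2_hat)
sigma2_bar = np.dot(p_hat, sigma2_rho)
print(rho2_bar, np.sqrt(sigma2_bar))
```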

4. Proofs

In this section we present the proofs of the results of Section 3. All the above-mentioned results are obtained from general results for semi-Markov random evolutions [3, 4] in a series scheme. The main steps of the proof are: (1) weak convergence of $S_t^T$ in the Skorokhod space $D_{\mathbb{R}}[0,+\infty)$ [7, page 148]; (2) solution of the martingale problem for the limit process $\widehat{S}(t)$; (3) characterization of the limit measure for the limit process $\widehat{S}(t)$; (4) uniqueness of the solution of the martingale problem. We also give the rate of convergence in the diffusion approximation scheme.

4.1. Diffusion Approximation (DA)

Let
$$G_t^T = T^{-1}\sum_{k=0}^{\nu(tT^2)}\rho(x_k), \qquad G_n^T = G^T_{T^{-1}\tau_n}, \qquad G_0^T = \ln s, \qquad (4.1)$$
and suppose the balance condition is satisfied:
$$\widehat{\rho} = \int_X p(dx)\int_X P(x,dy)\,\rho(y) = 0. \qquad (4.2)$$
Let us define the functions
$$\varphi^T(s,x) = f(s) + T^{-1}\varphi_1^f(s,x) + T^{-2}\varphi_2^f(s,x), \qquad (4.3)$$
where $\varphi_1^f$ and $\varphi_2^f$ are defined as follows:
$$(P-I)\varphi_1^f(s,x) = \rho(x)\,\frac{df(s)}{ds}, \qquad (P-I)\varphi_2^f(s,x) = \bigl(\widehat{A}-A(x)\bigr) f(s), \qquad (4.4)$$
where
$$\widehat{A} = \int_X p(dx)\, A(x), \qquad (4.5)$$
and $A(x) = \bigl[\rho^2(x)/2 + \rho(x)(R_0-I)\rho(x)\bigr]\, d^2/ds^2$. From the balance condition (4.2) and the equality $\Pi(\widehat{A}-A(x))=0$ it follows that both equations in (4.4) are simultaneously solvable and the solutions $\varphi_i^f(s,x)$, $i=1,2$, are bounded functions.

We note that
$$f\bigl(G^T_{n+1}\bigr) - f\bigl(G^T_n\bigr) = \frac{1}{T}\,\rho(x_{n+1})\,\frac{df\bigl(G^T_n\bigr)}{ds}, \qquad (4.6)$$
and define
$$\varphi^T(s,x) = f(s) + T^{-1}\varphi_1^f(s,x) + T^{-2}\varphi_2^f(s,x), \qquad (4.7)$$
where $\varphi_1^f(s,x)$ and $\varphi_2^f(s,x)$ are defined in (4.4) and (4.5). We also note that $G^T_{n+1}-G^T_n = T^{-1}\rho(x_{n+1})$.

4.2. Martingale Problem for the Limiting Process $G^0(t)$ in DA

Let us introduce the family of functions
$$\psi^T(s,t) = \varphi^T\bigl(G^T_{tT^2},x_{[tT^2]}\bigr) - \varphi^T\bigl(G^T_{sT^2},x_{[sT^2]}\bigr) - \sum_{j=[sT^2]}^{[tT^2]-1} E\Bigl[\varphi^T\bigl(G^T_{j+1},x_{j+1}\bigr) - \varphi^T\bigl(G^T_j,x_j\bigr)\mid\mathcal{F}_j\Bigr], \qquad (4.8)$$
where $\varphi^T$ is defined in (4.7) and $G_j^T$ is defined by
$$G^T_{\tau_n/T} = \frac{1}{T}\sum_{k=0}^{n}\rho(x_k). \qquad (4.9)$$
The functions $\psi^T(s,t)$ are $\mathcal{F}_{[tT^2]}$-martingales in $t$. Taking into account the expressions (4.6) and (4.7), we find
$$\begin{aligned}
\psi^T(s,t) ={}& f\bigl(G^T_{tT^2}\bigr) - f\bigl(G^T_{sT^2}\bigr) + T^{-1}\Bigl[\varphi_1^f\bigl(G^T_{tT^2},x_{[tT^2]}\bigr) - \varphi_1^f\bigl(G^T_{sT^2},x_{[sT^2]}\bigr)\Bigr] \\
&+ T^{-2}\Bigl[\varphi_2^f\bigl(G^T_{tT^2},x_{[tT^2]}\bigr) - \varphi_2^f\bigl(G^T_{sT^2},x_{[sT^2]}\bigr)\Bigr] \\
&- T^{-1}\sum_{j=[sT^2]}^{[tT^2]-1}\Bigl[\rho(x_j)\frac{df(G^T_j)}{dg} + E\Bigl(\varphi_1^f\bigl(G^T_j,x_{j+1}\bigr) - \varphi_1^f\bigl(G^T_j,x_j\bigr)\mid\mathcal{F}_j\Bigr)\Bigr] \\
&- T^{-2}\sum_{j=[sT^2]}^{[tT^2]-1}\Bigl[\frac{1}{2}\rho^2(x_j)\frac{d^2 f(G^T_j)}{dg^2} + \rho(x_j)\,E\Bigl(\frac{d\varphi_1^f(G^T_j,x_{j+1})}{dg}\mid\mathcal{F}_j\Bigr) + E\Bigl(\varphi_2^f\bigl(G^T_j,x_{j+1}\bigr) - \varphi_2^f\bigl(G^T_j,x_j\bigr)\mid\mathcal{F}_j\Bigr)\Bigr] + o\bigl(T^{-2}\bigr) \\
={}& f\bigl(G^T_{tT^2}\bigr) - f\bigl(G^T_{sT^2}\bigr) + T^{-1}\Bigl[\varphi_1^f\bigl(G^T_{tT^2},x_{[tT^2]}\bigr) - \varphi_1^f\bigl(G^T_{sT^2},x_{[sT^2]}\bigr)\Bigr] \\
&+ T^{-2}\Bigl[\varphi_2^f\bigl(G^T_{tT^2},x_{[tT^2]}\bigr) - \varphi_2^f\bigl(G^T_{sT^2},x_{[sT^2]}\bigr)\Bigr] - T^{-2}\sum_{j=[sT^2]}^{[tT^2]-1}\widehat{A} f\bigl(G^T_j\bigr) + O\bigl(T^{-2}\bigr),
\end{aligned} \qquad (4.10)$$
where $O(T^{-2})$ is the sum of terms of order $T^{-2}$. Since $\psi^T(s,t)$ is an $\mathcal{F}_{[tT^2]}$-martingale with respect to the measure $Q^T$ generated by the process $G^T(t)$ in (4.1), for every scalar linear continuous functional $\eta_s^0$ we have from (4.8)–(4.10):
$$\begin{aligned}
0 = E^T\bigl[\psi^T(s,t)\,\eta_s^0\bigr] ={}& E^T\Bigl[\Bigl(f\bigl(G^T_{tT^2}\bigr) - f\bigl(G^T_{sT^2}\bigr) - T^{-2}\sum_{j=[sT^2]}^{[tT^2]-1}\widehat{A} f\bigl(G^T_j\bigr)\Bigr)\eta_s^0\Bigr] \\
&+ T^{-1} E^T\Bigl[\Bigl(\varphi_1^f\bigl(G^T_{tT^2},x_{[tT^2]}\bigr) - \varphi_1^f\bigl(G^T_{sT^2},x_{[sT^2]}\bigr)\Bigr)\eta_s^0\Bigr] \\
&+ T^{-2} E^T\Bigl[\Bigl(\varphi_2^f\bigl(G^T_{tT^2},x_{[tT^2]}\bigr) - \varphi_2^f\bigl(G^T_{sT^2},x_{[sT^2]}\bigr)\Bigr)\eta_s^0\Bigr] + O\bigl(T^{-2}\bigr),
\end{aligned} \qquad (4.11)$$
where $E^T$ denotes the expectation with respect to the measure $Q^T$. If the process $G^T_{[tT^2]}$ converges weakly to some process $G^0(t)$ as $T\to+\infty$, then from (4.11) we obtain
$$0 = E\Bigl[\Bigl(f\bigl(G^0(t)\bigr) - f\bigl(G^0(s)\bigr) - \int_s^t \widehat{A} f\bigl(G^0(u)\bigr)\,du\Bigr)\eta_s^0\Bigr], \qquad (4.12)$$
that is, the process
$$f\bigl(G^0(t)\bigr) - f\bigl(G^0(s)\bigr) - \int_s^t \widehat{A} f\bigl(G^0(u)\bigr)\,du \qquad (4.13)$$
is a continuous martingale with respect to the limiting measure. Since $\widehat{A}$ is a second-order differential operator whose coefficient $\sigma_1^2$ is positive, where
$$\sigma_1^2 = \int_X \pi(dx)\Bigl[\frac{\rho^2(x)}{2} + \rho(x)\,R_0\,\rho(x)\Bigr], \qquad (4.14)$$
the process $G^0(t)$ is a Wiener process with variance $\sigma_1^2$ in (4.14): $G^0(t)=\sigma_1 w(t)$. Taking into account the renewal theorem for $\nu(t)$, namely $T^{-2}\nu(tT^2)\to t/m$ as $T\to+\infty$, and the representation
$$G^T(t) = T^{-1}\sum_{k=0}^{\nu(tT^2)}\rho(x_k) = T^{-1}\sum_{k=0}^{[tT^2]}\rho(x_k) + T^{-1}\sum_{k=[tT^2]+1}^{\nu(tT^2)}\rho(x_k), \qquad (4.15)$$
we obtain, replacing $[tT^2]$ by $\nu(tT^2)$, that the process $G^T(t)$ converges weakly as $T\to+\infty$ to the process $G^0(t)$, which solves the following martingale problem:
$$f\bigl(G^0(t)\bigr) - f\bigl(G^0(s)\bigr) - \int_s^t \widehat{A}_0 f\bigl(G^0(u)\bigr)\,du \qquad (4.16)$$
is a continuous martingale, where $\widehat{A}_0 = \widehat{A}/m$ and $\widehat{A}$ is defined in (4.5).

4.3. Weak Convergence of the Processes $G^T(t)$ in DA

From the representation of the process $G^T(t)$ it follows that
$$\Delta^T(s,t) = \bigl|G^T(t) - G^T(s)\bigr| = \Bigl|T^{-1}\sum_{k=\nu(sT^2)+1}^{\nu(tT^2)}\rho(x_k)\Bigr| \le T^{-1}\sup_x|\rho(x)|\cdot\bigl|\nu(tT^2)-\nu(sT^2)\bigr|. \qquad (4.17)$$
This representation gives the following estimate:
$$\bigl|\Delta^T(t_1,t_2)\bigr|\,\bigl|\Delta^T(t_2,t_3)\bigr| \le T^{-2}\Bigl(\sup_x|\rho(x)|\Bigr)^2\bigl|\nu(t_3 T^2)-\nu(t_1 T^2)\bigr|^2. \qquad (4.18)$$
Using the same reasoning as in [2] we obtain the weak convergence of the processes $G^T(t)$ in DA.

4.4. Characterization of the Limiting Measure $Q$ for $Q^T$ as $T\to+\infty$ in DA

From Section 4.3 (see also Section 4.1.4 of [2]) it follows that there exists a sequence $T_n$ such that the measures $Q^{T_n}$ converge weakly to some measure $Q$ on $D_{\mathbb{R}}[0,+\infty)$ as $T_n\to+\infty$, where $D_{\mathbb{R}}[0,+\infty)$ is the Skorokhod space [7, page 148]. This measure is the solution of the following martingale problem: the process
$$m(s,t) = f\bigl(G^0(t)\bigr) - f\bigl(G^0(s)\bigr) - \int_s^t \widehat{A}_0 f\bigl(G^0(u)\bigr)\,du \qquad (4.19)$$
is a $Q$-martingale for all $f(g)\in C^2(\mathbb{R})$, and
$$E\bigl[m(s,t)\,\eta_s^0\bigr] = 0 \qquad (4.20)$$
for every scalar continuous bounded functional $\eta_s^0$, where $E$ denotes the expectation with respect to the measure $Q$. From (4.19) it follows that $E^T\bigl[m^T(s,t)\,\eta_s^0\bigr] = 0$, and it is necessary to show that the limiting passage from the process in (4.1) leads to the process in (4.12) as $T\to+\infty$. From equality (4.11) we find that $\lim_{T_n\to+\infty} E^{T_n}\bigl[m(s,t)\,\eta_s^0\bigr] = E\bigl[m(s,t)\,\eta_s^0\bigr]$. Moreover, from the estimate
$$\bigl|E^T\bigl[m(s,t)\,\eta_s^0\bigr] - E\bigl[m(s,t)\,\eta_s^0\bigr]\bigr| \le \bigl|\bigl(E^T - E\bigr)\bigl[m(s,t)\,\eta_s^0\bigr]\bigr| + E^T\Bigl[\bigl|m(s,t)-m^T(s,t)\bigr|\,\bigl|\eta_s^0\bigr|\Bigr] \xrightarrow[T\to+\infty]{} 0, \qquad (4.21)$$
we obtain that there exists a measure $Q$ on $D_{\mathbb{R}}[0,+\infty)$ which solves the martingale problem for the operator $\widehat{A}_0$ (or, equivalently, for the process $G^0(t)$ in the form (4.12)). Uniqueness of the solution of the martingale problem follows from the fact that the operator $\widehat{A}_0$ generates a unique semigroup, corresponding to the Wiener process with variance $\sigma_1^2$ in (4.14). Since the semigroup is unique, the limit process $G^0(t)$ is unique; see [3, Chapter 1].

4.5. Calculation of the Quadratic Variation for GMRP

If $G^T_n = G^T_{T^{-1}\tau_n}$, the sequence
$$m^T_n = G^T_n - G^T_0 - \sum_{k=0}^{n-1} E\bigl[G^T_{k+1} - G^T_k \mid \mathcal{F}_k\bigr], \qquad G^T_0 = g, \qquad (4.22)$$
is an $\mathcal{F}_n$-martingale, where $\mathcal{F}_n = \sigma\{x_k,\theta_k;\ 0\le k\le n\}$. From the definition it follows that the quadratic characteristic $\langle m^T_n\rangle$ of the martingale $m^T_n$ has the form
$$\bigl\langle m^T_n\bigr\rangle = \sum_{k=0}^{n-1} E\Bigl[\bigl(m^T_{k+1}-m^T_k\bigr)^2 \mid \mathcal{F}_k\Bigr]. \qquad (4.23)$$
To calculate $\langle m^T_n\rangle$, let us represent $m^T_n$ in (4.22) in the form of a martingale difference:
$$m^T_n = \sum_{k=0}^{n-1}\Bigl[G^T_{k+1} - E\bigl(G^T_{k+1}\mid\mathcal{F}_k\bigr)\Bigr]. \qquad (4.24)$$
From the representation
$$G^T_{n+1} - G^T_n = \frac{1}{T}\,\rho(x_{n+1}) \qquad (4.25)$$
it follows that $E\bigl(G^T_{k+1}\mid\mathcal{F}_k\bigr) = G^T_k + T^{-1}P\rho(x_k)$, which is why
$$G^T_{k+1} - E\bigl(G^T_{k+1}\mid\mathcal{F}_k\bigr) = T^{-1}\bigl[\rho(x_{k+1}) - P\rho(x_k)\bigr]. \qquad (4.26)$$
Since from (4.24) it follows that
$$m^T_{k+1} - m^T_k = G^T_{k+1} - E\bigl(G^T_{k+1}\mid\mathcal{F}_k\bigr) = T^{-1}\bigl[\rho(x_{k+1}) - P\rho(x_k)\bigr], \qquad (4.27)$$
substituting (4.27) into (4.23) we obtain
$$\bigl\langle m^T_n\bigr\rangle = T^{-2}\sum_{k=0}^{n-1}\bigl[(I-P)\rho(x_k)\bigr]^2. \qquad (4.28)$$
In the averaging scheme for the GMRP (see [2]) in the time scale $tT$ we obtain that $\langle m^T_{[tT]}\rangle$ goes to zero in probability as $T\to+\infty$, which follows from (4.28):
$$\bigl\langle m^T_{[tT]}\bigr\rangle = T^{-2}\sum_{k=0}^{[tT]-1}\bigl[(I-P)\rho(x_k)\bigr]^2 \longrightarrow 0 \quad \text{as } T\to+\infty, \qquad (4.29)$$
for all $t\in\mathbb{R}_+$. In the diffusion approximation scheme for the GMRP in the time scale $tT^2$ we obtain from (4.28) that the characteristic $\langle m^T_{[tT^2]}\rangle$ does not go to zero as $T\to+\infty$, since
$$\bigl\langle m^T_{[tT^2]}\bigr\rangle = T^{-2}\sum_{k=0}^{[tT^2]-1}\bigl[(I-P)\rho(x_k)\bigr]^2 \longrightarrow t\,\sigma_1^2, \qquad (4.30)$$
where $\sigma_1^2 = \int_X \pi(dx)\bigl[(I-P)\rho(x)\bigr]^2$.

4.6. Rates of Convergence for GMRP

Consider the representation (4.22) for the martingale $m^T_n$. It follows that
$$G^T_n = g + m^T_n + \sum_{k=0}^{n-1} E\bigl[G^T_{k+1} - G^T_k \mid \mathcal{F}_k\bigr]. \qquad (4.31)$$
In the diffusion approximation scheme for the GMRP, the limit of the process $G^T_{[tT^2]}$ as $T\to+\infty$ is the diffusion process $\widehat{S}(t)$ (see (3.10)). If $m^0(t)$ is the limiting martingale for $m^T_{[tT^2]}$ in (4.22) as $T\to+\infty$, then from (4.31) and (3.10) we obtain
$$E\Bigl[G^T_{[tT^2]} - \widehat{S}(t)\Bigr] = E\Bigl[m^T_{[tT^2]} - m^0(t) + T^{-1}\sum_{k=0}^{[tT^2]-1}\rho(x_k) - \widehat{S}(t)\Bigr]. \qquad (4.32)$$
Since $E\bigl[m^T_{[tT^2]} - m^0(t)\bigr] = 0$ (because $m^T_{[tT^2]}$ and $m^0(t)$ are zero-mean martingales), from (4.32) we obtain
$$\Bigl|E\Bigl[G^T_{[tT^2]} - \widehat{S}(t)\Bigr]\Bigr| \le T^{-1}\Bigl|E\Bigl[\sum_{k=0}^{[tT^2]-1}\rho(x_k) - T\,\widehat{S}(t)\Bigr]\Bigr|. \qquad (4.33)$$
Taking into account the balance condition $\int_X \pi(dx)\,\rho(x) = 0$ and the central limit theorem for a Markov chain [4, page 98], we obtain
$$\Bigl|E\Bigl[\sum_{k=0}^{[tT^2]-1}\rho(x_k) - T\,\widehat{S}(t)\Bigr]\Bigr| \le C_1(t_0), \qquad (4.34)$$
where $C_1(t_0)$ is a constant depending on $t_0$, $t\in[0,t_0]$. From (4.33) and (4.34) we obtain
$$\Bigl|E\Bigl[G^T_{[tT^2]} - \widehat{S}(t)\Bigr]\Bigr| \le T^{-1}\,C_1(t_0). \qquad (4.35)$$
Thus, the rate of convergence in the diffusion approximation scheme has order $T^{-1}$.

5. Merged Diffusion Geometric Markov Renewal Process in the Case of Two Ergodic Classes

5.1. Two Ergodic Classes

Let $P(x,A)=\mathbb{P}\{x_{n+1}\in A\mid x_n=x\}$ be the transition probabilities of the supporting embedded reducible Markov chain $(x_n)_{n\ge 0}$ in the phase space $X$. Suppose that there are two ergodic classes $X_0$ and $X_1$ of the phase space such that
$$X = X_0 \cup X_1, \qquad X_0 \cap X_1 = \emptyset. \qquad (5.1)$$
Let $(\widehat{X}=\{0,1\},\mathcal{V})$ be the measurable merged phase space. A stochastic kernel $P_0(x,A)$ is consistent with the splitting (5.1) in the following way:
$$P_0\bigl(x,X_k\bigr) = \begin{cases} 1, & x\in X_k,\\ 0, & x\notin X_k,\end{cases} \qquad k=0,1. \qquad (5.2)$$
Let the supporting embedded Markov chain $(x_n)_{n\in\mathbb{Z}_+}$ with transition probabilities $P_0(x,A)$ be uniformly ergodic in each class $X_k$, $k=0,1$, with stationary distribution $\pi_k(dx)$ in the class $X_k$, $k=0,1$:
$$\pi_k(A) = \int_{X_k}\pi_k(dx)\,P_0(x,A), \quad A\subseteq X_k,\ k=0,1. \qquad (5.3)$$
Let the stationary escape probabilities of the embedded Markov chain $(x_n)_{n\in\mathbb{Z}_+}$ with transition probabilities $P(x,A)=\mathbb{P}\{x_{n+1}\in A\mid x_n=x\}$ be positive and sufficiently small, that is,
$$q_k = \int_{X_k}\pi_k(dx)\,P\bigl(x,X\setminus X_k\bigr) > 0, \quad k=0,1. \qquad (5.4)$$
Let the stationary sojourn times in the classes of states be uniformly bounded, namely,
$$0 < C_1 \le m_k = \int_{X_k}\pi_k(dx)\,m(x) \le C_2 < \infty, \quad k=0,1, \qquad (5.5)$$
where
$$m(x) = \int_0^{\infty}\overline{G}_x(t)\,dt. \qquad (5.6)$$

5.2. Algorithms of Phase Averaging with Two Ergodic Classes

The merged Markov chain $(\widehat{x}_n)_{n\in\mathbb{Z}_+}$ in the merged phase space $\widehat{X}$ is given by the matrix of transition probabilities $\widehat{P}=(\widehat{p}_{kr})_{k,r=0,1}$, where
$$\widehat{p}_{10} = 1-\widehat{p}_{11} = \int_{X_1}\pi_1(dx)\,P\bigl(x,X_0\bigr) = 1-\int_{X_1}\pi_1(dx)\,P\bigl(x,X_1\bigr),$$
$$\widehat{p}_{01} = 1-\widehat{p}_{00} = \int_{X_0}\pi_0(dx)\,P\bigl(x,X_1\bigr) = 1-\int_{X_0}\pi_0(dx)\,P\bigl(x,X_0\bigr). \qquad (5.7)$$
Since $\widehat{p}_{kk}\neq 0$, $k=0,1$, the chain $\widehat{x}_n$ has virtual transitions. The intensities $\Lambda_k$ of the sojourn times $\widehat{\theta}_k$, $k=0,1$, of the merged MRP are calculated as follows:
$$\Lambda_k = \frac{1}{m_k}, \qquad m_k = \int_{X_k}\pi_k(dx)\,m(x), \quad k=0,1. \qquad (5.8)$$
Finally, the merged MRP $(\widehat{x}_n,\widehat{\theta}_n)_{n\in\mathbb{Z}_+}$ in the merged phase space $\widehat{X}$ is given by the semi-Markov kernel matrix
$$\widehat{Q}(t) = \bigl(\widehat{Q}_{kr}(t)\bigr)_{k,r=0,1} = \Bigl(\widehat{p}_{kr}\bigl(1-e^{-\Lambda_k t}\bigr)\Bigr)_{k,r=0,1}. \qquad (5.9)$$
Hence, the initial semi-Markov system is merged into a Markov system with two classes.
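The merging step (5.7)–(5.8) can be carried out numerically for a finite phase space. The sketch below uses a hypothetical four-state chain split into two classes; the within-class stationary distributions are computed from the row-normalized diagonal blocks, which serves here only as a proxy for the consistent kernel $P_0$.

```python
import numpy as np

# Hypothetical 4-state chain split into two classes X0 = {0, 1}, X1 = {2, 3}.
P = np.array([[0.58, 0.40, 0.01, 0.01],
              [0.30, 0.68, 0.01, 0.01],
              [0.02, 0.02, 0.46, 0.50],
              [0.02, 0.02, 0.56, 0.40]])
m = np.array([1.0, 2.0, 1.5, 0.5])          # mean sojourn times m(x)
classes = [np.array([0, 1]), np.array([2, 3])]

def stationary(Pblock):
    """Stationary distribution of an ergodic stochastic matrix (left eigenvector)."""
    w, v = np.linalg.eig(Pblock.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

# Per-class stationary distributions pi_k (proxy for the P0-dynamics within each class).
pis = []
for idx in classes:
    block = P[np.ix_(idx, idx)]
    block = block / block.sum(axis=1, keepdims=True)
    pis.append(stationary(block))

# Merged transition probabilities (5.7) and intensities (5.8).
p01 = pis[0] @ P[np.ix_(classes[0], classes[1])].sum(axis=1)   # escape X0 -> X1
p10 = pis[1] @ P[np.ix_(classes[1], classes[0])].sum(axis=1)   # escape X1 -> X0
P_hat = np.array([[1 - p01, p01], [p10, 1 - p10]])
Lambda = np.array([1.0 / (pis[k] @ m[classes[k]]) for k in range(2)])
print(P_hat, Lambda)
```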

5.3. Merged Diffusion Approximation in the Case of Two Ergodic Classes

The merged diffusion GMRP in the case of two ergodic classes has the form
$$\widehat{S}(t) = S_0\, e^{-(1/2)\int_0^t \widehat{\rho}_2(\widehat{x}(s))\,ds + \int_0^t \sigma_\rho(\widehat{x}(s))\,dw(s)}, \qquad (5.10)$$
which satisfies the stochastic differential equation (SDE)
$$d\widehat{S}(t) = \widehat{S}(t)\Bigl[\frac{1}{2}\bigl(\sigma_\rho^2(\widehat{x}(t)) - \widehat{\rho}_2(\widehat{x}(t))\bigr)\,dt + \sigma_\rho(\widehat{x}(t))\,dw(t)\Bigr], \qquad (5.11)$$
where
$$\widehat{\rho}_2(1) = \frac{\int_{X_1} p_1(dx)\int_{X_1} P(x,dy)\,\rho^2(y)}{m(1)}, \qquad \widehat{\rho}_2(0) = \frac{\int_{X_0} p_0(dx)\int_{X_0} P(x,dy)\,\rho^2(y)}{m(0)},$$
$$\sigma_\rho^2(1) = \frac{\int_{X_1} p_1(dx)\Bigl[\int_{X_1} P(x,dy)\,\rho^2(y) + \int_{X_1} P(x,dy)\,\rho(y)\, R_0 \int_{X_1} P(x,dy)\,\rho(y)\Bigr]}{m(1)},$$
$$\sigma_\rho^2(0) = \frac{\int_{X_0} p_0(dx)\Bigl[\int_{X_0} P(x,dy)\,\rho^2(y) + \int_{X_0} P(x,dy)\,\rho(y)\, R_0 \int_{X_0} P(x,dy)\,\rho(y)\Bigr]}{m(0)}, \qquad (5.12)$$
and $\widehat{x}(t)$ is the merged Markov process in $\widehat{X}=\{0,1\}$ with the matrix $\widehat{Q}(t)$ in (5.9).

6. European Call Option Pricing Formulas for Diffusion GMRP

6.1. Ergodic Geometric Markov Renewal Process

As we have seen in Section 3, the ergodic diffusion GMRP $\widehat{S}(t)$ satisfies the following SDE (see (3.10)):
$$d\widehat{S}(t) = \widehat{S}(t)\Bigl[\frac{1}{2}\bigl(\sigma_\rho^2 - \widehat{\rho}_2\bigr)\,dt + \sigma_\rho\, dw(t)\Bigr], \qquad (6.1)$$
where
$$\widehat{\rho}_2 = \frac{\int_X p(dx)\int_X P(x,dy)\,\rho^2(y)}{m}, \qquad (6.2)$$
$$\sigma_\rho^2 = \frac{\int_X p(dx)\Bigl[\tfrac{1}{2}\int_X P(x,dy)\,\rho^2(y) + \int_X P(x,dy)\,\rho(y)\, R_0 \int_X P(x,dy)\,\rho(y)\Bigr]}{m}. \qquad (6.3)$$
The risk-neutral measure $P^*$ for the process in (6.1) is given by
$$\frac{dP^*}{dP} = \exp\Bigl\{-\theta\, w(t) - \frac{1}{2}\theta^2 t\Bigr\}, \qquad (6.4)$$
where
$$\theta = \frac{\tfrac{1}{2}\bigl(\sigma_\rho^2 - \widehat{\rho}_2\bigr) - r}{\sigma_\rho}. \qquad (6.5)$$
Under $P^*$, the process $e^{-rt}\widehat{S}_t$ is a martingale and the process $w^*(t) = w(t) + \theta t$ is a Brownian motion. In this way, in the risk-neutral world, the process $\widehat{S}_t$ has the form
$$\frac{d\widehat{S}(t)}{\widehat{S}(t)} = r\,dt + \sigma_\rho\, dw^*(t). \qquad (6.6)$$
Using the Black-Scholes formula (see [8]) we obtain the European call option pricing formula for our model (6.6):
$$C = S_0\,\Phi(d_+) - K e^{-rT}\,\Phi(d_-), \qquad (6.7)$$
where
$$d_+ = \frac{\ln(S_0/K) + \bigl(r+\tfrac{1}{2}\sigma_\rho^2\bigr)T}{\sigma_\rho\sqrt{T}}, \qquad d_- = \frac{\ln(S_0/K) + \bigl(r-\tfrac{1}{2}\sigma_\rho^2\bigr)T}{\sigma_\rho\sqrt{T}}, \qquad (6.8)$$
$\Phi(x)$ is the standard normal distribution function, $K$ is the strike price, $T$ is the maturity, and $\sigma_\rho$ is defined in (6.3).
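A minimal implementation of the pricing formula (6.7)–(6.8); the numerical inputs are hypothetical, and in practice $\sigma_\rho$ would be computed from (6.3).

```python
import numpy as np
from scipy.stats import norm

def european_call_ergodic(S0, K, r, T, sigma_rho):
    """Black-Scholes style call price (6.7)-(6.8) for the ergodic diffusion GMRP,
       with volatility sigma_rho taken from (6.3)."""
    d_plus = (np.log(S0 / K) + (r + 0.5 * sigma_rho**2) * T) / (sigma_rho * np.sqrt(T))
    d_minus = d_plus - sigma_rho * np.sqrt(T)
    return S0 * norm.cdf(d_plus) - K * np.exp(-r * T) * norm.cdf(d_minus)

# Hypothetical inputs.
print(european_call_ergodic(S0=100.0, K=95.0, r=0.03, T=1.0, sigma_rho=0.2))
```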

6.2. Double Averaged Diffusion GMRP

Using arguments similar to (6.1)–(6.8), we can obtain the European call option pricing formula for the double averaged diffusion GMRP in (3.24):
$$d\widehat{S}(t) = \widehat{S}(t)\Bigl[\frac{1}{2}\bigl(\bar{\sigma}_\rho^2 - \bar{\rho}_2\bigr)\,dt + \bar{\sigma}_\rho\, dw(t)\Bigr], \qquad (6.9)$$
where $\bar{\sigma}_\rho^2$ and $\bar{\rho}_2$ are defined in (3.21) (see also (3.13) and (3.15)). Namely, the European call option pricing formula for the double averaged diffusion GMRP is
$$C = S_0\,\Phi(d_+) - K e^{-rT}\,\Phi(d_-), \qquad (6.10)$$
where
$$d_+ = \frac{\ln(S_0/K) + \bigl(r+\tfrac{1}{2}\bar{\sigma}_\rho^2\bigr)T}{\bar{\sigma}_\rho\sqrt{T}}, \qquad d_- = \frac{\ln(S_0/K) + \bigl(r-\tfrac{1}{2}\bar{\sigma}_\rho^2\bigr)T}{\bar{\sigma}_\rho\sqrt{T}}, \qquad (6.11)$$
$\Phi(x)$ is the standard normal distribution function and $\bar{\sigma}_\rho$ is defined in (3.21).

6.3. European Call Option Pricing Formula for Merged Diffusion GMRP

From Section 3.2, the merged diffusion GMRP satisfies the SDE
$$d\widehat{S}(t) = \widehat{S}(t)\Bigl[\frac{1}{2}\bigl(\sigma_\rho^2(\widehat{x}(t)) - \widehat{\rho}_2(\widehat{x}(t))\bigr)\,dt + \sigma_\rho(\widehat{x}(t))\,dw(t)\Bigr], \qquad (6.12)$$
where $\sigma_\rho^2(\cdot)$ and $\widehat{\rho}_2(\cdot)$ are defined in Section 3.2 (see (3.18)). Taking into account the result on the European call option pricing formula for a regime-switching geometric Brownian motion (see [4, page 224, Corollary]), we obtain the option pricing formula for the merged diffusion GMRP:
$$C = \int_{\mathbb{R}_+} C_{BS}\Bigl(\bigl(z\,T^{-1}\bigr)^{1/2},\, T,\, S_0\Bigr)\, F_{x_T}(dz), \qquad (6.13)$$
where $C_{BS}$ is the Black-Scholes value (as a function of the volatility, the maturity, and the initial price) and $F_{x_T}(dz)$ is the distribution of the random variable
$$z_T^{\widehat{x}} = \int_0^T \sigma_\rho^2\bigl(\widehat{x}(s)\bigr)\,ds, \qquad (6.14)$$
where $\widehat{x}(t)$ is the merged Markov process.
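Formula (6.13)–(6.14) can be evaluated by Monte Carlo: simulate the merged process $\widehat{x}(t)$ from the kernel (5.9), accumulate the integrated variance $z_T$, and average the Black-Scholes value at the effective volatility $\sqrt{z_T/T}$. All numerical inputs below are hypothetical, and the risk-free rate $r$ is used as in Section 6.1.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def bs_call(S0, K, r, T, sigma):
    """Standard Black-Scholes call value C_BS."""
    d_plus = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S0 * norm.cdf(d_plus) - K * np.exp(-r * T) * norm.cdf(d_plus - sigma * np.sqrt(T))

def merged_call(S0, K, r, T, sigma_rho, Lambda, P_hat, k0=0, n_mc=20000):
    """Monte Carlo version of (6.13): average C_BS over the distribution of
       z_T = int_0^T sigma_rho^2(x_hat(s)) ds, with x_hat simulated from (5.9)."""
    prices = np.empty(n_mc)
    for i in range(n_mc):
        t, k, z = 0.0, k0, 0.0
        while t < T:
            hold = rng.exponential(1.0 / Lambda[k])
            z += sigma_rho[k] ** 2 * min(hold, T - t)     # accumulate integrated variance
            t += hold
            if t < T:
                k = rng.choice(len(Lambda), p=P_hat[k])   # possibly virtual transition
        prices[i] = bs_call(S0, K, r, T, np.sqrt(z / T))  # effective volatility sqrt(z_T/T)
    return prices.mean()

# Hypothetical two-class inputs (sigma_rho(k) from (3.15), Lambda and P_hat from (5.7)-(5.8)).
sigma_rho = np.array([0.15, 0.25])
Lambda = np.array([1.0, 1.5])
P_hat = np.array([[0.2, 0.8], [0.7, 0.3]])
print(merged_call(S0=100.0, K=95.0, r=0.03, T=1.0, sigma_rho=sigma_rho,
                  Lambda=Lambda, P_hat=P_hat))
```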

Acknowledgment

This research is partially supported by the University of Prince Edward Island major research grants (MRG) of M. S. Islam and NSERC grant of A. Swishchuk.