ISRN Discrete Mathematics
Volume 2011 (2011), Article ID 346503, 15 pages
http://dx.doi.org/10.5402/2011/346503
Research Article

First Hitting Problems for Markov Chains That Converge to a Geometric Brownian Motion

1DΓ©partement de MathΓ©matiques et de GΓ©nie Industriel, Γ‰cole Polytechnique de MontrΓ©al, C.P. 6079, Succursale Centre-Ville, MontrΓ©al, QC, H3C 3A7, Canada
2DΓ©partement de MathΓ©matiques et de Statistique, UniversitΓ© de MontrΓ©al, C.P. 6128, Succursale Centre-Ville, MontrΓ©al, QC, H3C 3J7, Canada

Received 1 July 2011; Accepted 21 July 2011

Academic Editors: C.-K. Lin and B. Zhou

Copyright © 2011 Mario Lefebvre and Moussa Kounta. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We consider a discrete-time Markov chain with state space $\{1, 1+\Delta x, \dots, 1+k\Delta x = N\}$. We compute explicitly the probability $p_j$ that the chain, starting from $1+j\Delta x$, will hit $N$ before 1, as well as the expected number $d_j$ of transitions needed to end the game. In the limit when $\Delta x$ and the time $\Delta t$ between the transitions decrease to zero appropriately, the Markov chain tends to a geometric Brownian motion. We show that $p_j$ and $d_j\,\Delta t$ tend to the corresponding quantities for the geometric Brownian motion.

1. Introduction

Let $\{X(t), t \ge 0\}$ be a one-dimensional geometric Brownian motion defined by the stochastic differential equation
\[
dX(t) = \mu X(t)\,dt + \sigma X(t)\,dB(t), \tag{1.1}
\]
where $\mu \in \mathbb{R}$, $\sigma > 0$, and $\{B(t), t \ge 0\}$ is a standard Brownian motion. Assume that $X(0) = x \in (1, N)$, where $N \in \mathbb{N}$ (for simplicity), and define
\[
\tau(x) = \inf\{t > 0 : X(t) = 1 \text{ or } N \mid X(0) = x\}. \tag{1.2}
\]
As is well known (see, e.g., Lefebvre [1, page 220]), the probability
\[
p(x) := P[X(\tau(x)) = N] \tag{1.3}
\]
satisfies the ordinary differential equation
\[
\tfrac{1}{2}\sigma^2 x^2 p''(x) + \mu x\, p'(x) = 0, \tag{1.4}
\]
subject to the boundary conditions
\[
p(1) = 0, \qquad p(N) = 1. \tag{1.5}
\]
We easily find that, if $c := \mu/\sigma^2 \ne 1/2$,
\[
p(x) = \frac{x^{1-2c} - 1}{N^{1-2c} - 1} \quad \text{for } 1 \le x \le N. \tag{1.6}
\]
When $c = 1/2$, the solution is
\[
p(x) = \frac{\ln x}{\ln N} \quad \text{for } 1 \le x \le N. \tag{1.7}
\]
Moreover, the function
\[
m(x) := E[\tau(x)] \tag{1.8}
\]
satisfies the ordinary differential equation (see, again, Lefebvre [1, page 220])
\[
\tfrac{1}{2}\sigma^2 x^2 m''(x) + \mu x\, m'(x) = -1, \tag{1.9}
\]
subject to
\[
m(1) = m(N) = 0. \tag{1.10}
\]
This time, if $c \ne 1/2$ we find that
\[
m(x) = \frac{2}{(1-2c)\sigma^2}\left\{\ln x - \ln N\,\frac{x^{1-2c} - 1}{N^{1-2c} - 1}\right\} \quad \text{for } 1 \le x \le N, \tag{1.11}
\]
and, for $c = 1/2$,
\[
m(x) = \frac{\ln x}{\sigma^2}\,(\ln N - \ln x) \quad \text{for } 1 \le x \le N. \tag{1.12}
\]
Now, it can be shown (see Cox and Miller [2, page 213]) that the discrete-time Markov chain $\{X_{m\Delta t}, m = 0, 1, \dots\}$ with state space $\{1, 1+\Delta x, \dots, 1+k\Delta x\}$, where $k$ is such that $1+k\Delta x = N$, and transition probabilities
\[
\begin{aligned}
p_{1+j\Delta x,\,1+(j+1)\Delta x} &= \frac{1}{2A}\left\{(1+j\Delta x)^2\sigma^2 + (1+j\Delta x)\mu\Delta x\right\},\\
p_{1+j\Delta x,\,1+(j-1)\Delta x} &= \frac{1}{2A}\left\{(1+j\Delta x)^2\sigma^2 - (1+j\Delta x)\mu\Delta x\right\},\\
p_{1+j\Delta x,\,1+j\Delta x} &= 1 - \frac{1}{A}(1+j\Delta x)^2\sigma^2,
\end{aligned} \tag{1.13}
\]
where $j \in \{1, \dots, k-1\}$, converges to the geometric Brownian motion $\{X(t), t \ge 0\}$ as $\Delta x$ and $\Delta t$ decrease to zero, provided that
\[
(\Delta x)^2 = A\,\Delta t, \tag{1.14}
\]
\[
(j\Delta x)^2 < A \quad \forall\, j \in \{0, \dots, k\}. \tag{1.15}
\]

Remarks 1.1. (i) We assume that all the probabilities defined by (1.13) are well defined; that is, they all belong to the interval [0,1].
(ii) The condition in (1.15) implies that $(\Delta x)^2 < A/k^2$.
Let
\[
T_j := \inf\{m > 0 : X_{m\Delta t} = 1 \text{ or } N \mid X_0 = 1 + j\Delta x\}, \tag{1.16}
\]
\[
p_j := P[X_{T_j\Delta t} = N]. \tag{1.17}
\]
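Before deriving closed-form expressions, it may help to see the chain in action. The following Python sketch (an editorial illustration, not part of the original derivation) simulates the chain (1.13) and estimates $p_j$ and $d_j := E[T_j]$ by Monte Carlo, comparing them with $p(x)$ in (1.6) and $m(x)$ in (1.11); the parameter values and the choice of $A$ are arbitrary, and NumPy is assumed to be available.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper).
mu, sigma, N = 0.05, 0.2, 3
dx = 0.1
k = round((N - 1) / dx)            # so that 1 + k*dx = N
A = 1.5 * (N * sigma) ** 2         # large enough that the probabilities in (1.13) lie in [0, 1]
dt = dx ** 2 / A                   # condition (1.14)
c = mu / sigma ** 2

def simulate(j, n_paths=5000):
    """Monte Carlo estimate of p_j = P[the chain hits N before 1] and d_j = E[T_j]."""
    hits = steps = 0
    for _ in range(n_paths):
        i, m = j, 0                # i indexes the state 1 + i*dx
        while 0 < i < k:
            x = 1 + i * dx
            up = (x ** 2 * sigma ** 2 + x * mu * dx) / (2 * A)
            down = (x ** 2 * sigma ** 2 - x * mu * dx) / (2 * A)
            u = rng.random()
            if u < up:
                i += 1
            elif u < up + down:
                i -= 1
            m += 1                 # staying put also counts as a transition
        hits += (i == k)
        steps += m
    return hits / n_paths, steps / n_paths

j = k // 2
x0 = 1 + j * dx
p_hat, d_hat = simulate(j)
p_exact = (x0 ** (1 - 2 * c) - 1) / (N ** (1 - 2 * c) - 1)   # p(x) in (1.6), c != 1/2
m_exact = 2 / ((1 - 2 * c) * sigma ** 2) * (
    np.log(x0) - np.log(N) * (x0 ** (1 - 2 * c) - 1) / (N ** (1 - 2 * c) - 1)
)                                                             # m(x) in (1.11)
print(f"p_j    = {p_hat:.4f}  vs  p(x) = {p_exact:.4f}")
print(f"d_j*dt = {d_hat * dt:.4f}  vs  m(x) = {m_exact:.4f}")
```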

In the next section, we will compute the quantity $p_j$ for $j \in \{1, \dots, k-1\}$. We will show that $p_j$ converges to the function $p(x)$ for the geometric Brownian motion as $\Delta x$ decreases to zero and $k$ tends to infinity in such a way that $1 + k\Delta x$ remains equal to $N$.

In Section 3, we will compute the mean number of transitions needed to end the game, namely,
\[
d_j := E[T_j]. \tag{1.18}
\]
By making a change of variable to transform the diffusion process $\{X(t), t \ge 0\}$ into a geometric Brownian motion with infinitesimal mean equal to zero and by considering the corresponding discrete-time Markov chain, we will obtain an explicit and exact expression for $d_j$ that, when multiplied by $\Delta t$, tends to $m(x)$ if the time $\Delta t$ between the transitions is chosen suitably.

The motivation for our work is the following. Lefebvre [3] computed the probability $p(x)$ and the expected duration $m(x)$ for asymmetric Wiener processes in the interval $(-d, d)$, that is, for Wiener processes for which the infinitesimal means $\mu_+$ and $\mu_-$, and infinitesimal variances $\sigma_+^2$ and $\sigma_-^2$, are not necessarily the same when $x > 0$ or $x < 0$. To confirm his results, he considered a random walk that converges to the Wiener process. Lefebvre's results were extended by Abundo [4] to general one-dimensional diffusion processes. However, Abundo did not obtain the quantities $p_j$ and $d_j$ for the corresponding discrete-time Markov chains. Also, it is worth mentioning that asymmetric diffusion processes need not be defined in an interval that includes the origin. A process defined in the interval $(a, b)$ can be asymmetric with respect to any $a < c < b$.

Next, Lefebvre and Guilbault [5] and Guilbault and Lefebvre [6] computed $p_j$ and $d_j$, respectively, for a discrete-time Markov chain that tends to the Ornstein-Uhlenbeck process. The authors also computed the quantity $p_j$ in the case when the Markov chain is asymmetric (as in Lefebvre [3]).

Asymmetric processes can be used in financial mathematics to model the price of a stock when, in particular, the infinitesimal variance (i.e., the volatility) tends to increase with the price of the stock. Indeed, it seems logical that the volatility is larger when the stock price $X(t)$ is very large than when it is close to zero. The prices of commodities, such as gold and oil, are also more volatile when they reach a certain level.

In order to check the validity of the expressions obtained by Abundo [4] for $p(x)$ and $m(x)$, it is important to obtain the corresponding quantities for the discrete-time Markov chains and then proceed by taking the limit as $\Delta x$ and $\Delta t$ decrease to zero appropriately. Moreover, the formulas that will be derived in the present paper are interesting in themselves, since in reality stock or commodity prices do not vary completely continuously.

First passage problems for Markov chains have many applications. For example, in neural networks, an important quantity is the interspike time, that is, the time between spikes of a firing neuron (which means that the neuron sends a signal to other neurons). Discrete-time Markov chains have been used as models in this context, and the interspike time is the number of steps it takes the chain to reach the threshold at which firing occurs.

2. Computation of the Probability $p_j$

Assume first that $\Delta x = 1$, so that the state space is $\{1, 2, \dots, N\}$ and the transition probabilities become
\[
p_{j,j+1} = \frac{1}{2A}\{j^2\sigma^2 + j\mu\}, \qquad
p_{j,j-1} = \frac{1}{2A}\{j^2\sigma^2 - j\mu\}, \qquad
p_{j,j} = 1 - \frac{j^2\sigma^2}{A} \tag{2.1}
\]
for $j \in \{2, \dots, N-1\}$. The probability defined in (1.17) satisfies the following difference equation:
\[
p_j = p_{j,j+1}\,p_{j+1} + p_{j,j-1}\,p_{j-1} + p_{j,j}\,p_j. \tag{2.2}
\]
That is,
\[
2j\,p_j = (j+c)\,p_{j+1} + (j-c)\,p_{j-1}, \tag{2.3}
\]
where $c = \mu/\sigma^2$. The boundary conditions are
\[
p_1 = 0, \qquad p_N = 1. \tag{2.4}
\]
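Since (2.3)-(2.4) form a linear boundary-value problem, the probabilities can also be obtained by a direct numerical solve, which is useful below as a check on the closed-form expressions. A minimal sketch, assuming NumPy is available (the function name is ours):

```python
import numpy as np

def solve_p_direct(N, c):
    """Numerically solve 2j p_j = (j+c) p_{j+1} + (j-c) p_{j-1}, with p_1 = 0, p_N = 1."""
    M = np.zeros((N - 2, N - 2))
    b = np.zeros(N - 2)
    for row, j in enumerate(range(2, N)):      # unknowns p_2, ..., p_{N-1}
        M[row, row] = -2 * j
        if row > 0:
            M[row, row - 1] = j - c            # coefficient of p_{j-1}
        if row < N - 3:
            M[row, row + 1] = j + c            # coefficient of p_{j+1}
        else:
            b[row] = -(j + c)                  # p_N = 1 moved to the right-hand side
    p_inner = np.linalg.solve(M, b)
    return np.concatenate(([0.0], p_inner, [1.0]))   # p_1, p_2, ..., p_N

print(np.round(solve_p_direct(N=10, c=0.3), 4))
```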

In the special case when $\mu = 0$, (2.3) reduces to the second-order difference equation with constant coefficients
\[
p_{j+1} = 2p_j - p_{j-1}. \tag{2.5}
\]
We easily find that the (unique) solution that satisfies the boundary conditions (2.4) is
\[
p_j = \frac{j-1}{N-1} \quad \text{for } j = 1, 2, \dots, N. \tag{2.6}
\]

Assume now that $\mu \ne 0$, and let
\[
w_j := p_{j+1} - p_j. \tag{2.7}
\]
Equation (2.3) can then be rewritten as
\[
(j+c)\,w_j = (j-c)\,w_{j-1}. \tag{2.8}
\]

Using the mathematical software program Maple, we find that the solution of this first-order difference equation that satisfies the boundary condition $w_1 = p_2$ is given by
\[
w_j = \frac{-p_2\,\pi}{\sin[(2+c)\pi]\,c\,(c^2-1)\,\Gamma^2(-1-c)}\,\frac{\Gamma(j+1-c)}{\Gamma(j+1+c)}, \tag{2.9}
\]
where $\Gamma$ is the gamma function.

Next, we must solve the first-order difference equation
\[
p_{j+1} - p_j = f(c)\,\frac{\Gamma(j+1-c)}{\Gamma(j+1+c)}, \tag{2.10}
\]
where
\[
f(c) := \frac{-p_2\,\pi}{\sin[(2+c)\pi]\,c\,(c^2-1)\,\Gamma^2(-1-c)}, \tag{2.11}
\]
subject to the boundary conditions (2.4). We find that, if $c \ne 1/2$, then
\[
p_j = \frac{f(c)}{1-2c}\,\frac{(j+c)\,\Gamma(j+1-c)}{\Gamma(j+1+c)} + \frac{f(c)}{2c-1}\,\frac{c\,\Gamma(1-c)}{\Gamma(1+c)} + \kappa, \tag{2.12}
\]
where $\kappa$ is a constant. Applying the boundary conditions (2.4), we obtain that
\[
p_j = \frac{(j+c)\,\Gamma(j+1-c)/\Gamma(j+1+c) - (1+c)\,\Gamma(2-c)/\Gamma(2+c)}{(N+c)\,\Gamma(N+1-c)/\Gamma(N+1+c) - (1+c)\,\Gamma(2-c)/\Gamma(2+c)} \quad \text{for } j = 1, 2, \dots, N. \tag{2.13}
\]
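As a numerical sanity check of the closed form (2.13) as reconstructed here, one can verify that it satisfies the boundary conditions (2.4) and the recursion (2.3); a small sketch using Python's math.lgamma (to avoid overflow of the gamma functions):

```python
import math

def p_formula(j, N, c):
    """Closed form (2.13), evaluated with log-gamma for numerical stability."""
    def g(m):
        # (m + c) * Gamma(m + 1 - c) / Gamma(m + 1 + c)
        return (m + c) * math.exp(math.lgamma(m + 1 - c) - math.lgamma(m + 1 + c))
    return (g(j) - g(1)) / (g(N) - g(1))

N, c = 20, 0.3
p = [None] + [p_formula(j, N, c) for j in range(1, N + 1)]      # 1-based indexing
print(p[1], p[N])                                               # boundary values: 0 and 1
j = 7
print(2 * j * p[j] - (j + c) * p[j + 1] - (j - c) * p[j - 1])   # residual of (2.3), ~ 0
```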

Remark 2.1. When $c$ tends to $1/2$, the solution becomes
\[
p_j = \frac{\Psi(j+1/2) - 2 + \gamma + 2\ln 2}{\Psi(N+1/2) - 2 + \gamma + 2\ln 2}, \tag{2.14}
\]
where $\gamma$ is Euler's constant and $\Psi$ is the digamma function defined by
\[
\Psi(z) = \frac{\Gamma'(z)}{\Gamma(z)}. \tag{2.15}
\]

Notice that
\[
\Psi(3/2) = 2 - \gamma - 2\ln 2, \tag{2.16}
\]
so that we indeed have $p_1 = 0$, and the solution (2.14) can be rewritten as
\[
p_j = \frac{\Psi(j+1/2) - \Psi(3/2)}{\Psi(N+1/2) - \Psi(3/2)} \quad \text{for } j = 1, 2, \dots, N. \tag{2.17}
\]
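A similar check can be made for the $c = 1/2$ formula (2.17); a short sketch assuming SciPy's digamma is available:

```python
from scipy.special import digamma   # assumes SciPy is installed

def p_half(j, N):
    """Formula (2.17) for c = 1/2."""
    return (digamma(j + 0.5) - digamma(1.5)) / (digamma(N + 0.5) - digamma(1.5))

N = 20
p = [None] + [p_half(j, N) for j in range(1, N + 1)]
print(p[1], p[N])                                               # 0.0 and 1.0
j, c = 7, 0.5
print(2 * j * p[j] - (j + c) * p[j + 1] - (j - c) * p[j - 1])   # residual of (2.3), ~ 0
```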

Now, in the general case when $\Delta x > 0$, we must solve the difference equation
\[
\begin{aligned}
p_j ={}& \frac{1}{2A}\left\{(1+j\Delta x)^2\sigma^2 + (1+j\Delta x)\mu\Delta x\right\} p_{j+1}
+ \frac{1}{2A}\left\{(1+j\Delta x)^2\sigma^2 - (1+j\Delta x)\mu\Delta x\right\} p_{j-1}\\
&+ \left(1 - \frac{1}{A}(1+j\Delta x)^2\sigma^2\right) p_j,
\end{aligned}\tag{2.18}
\]
which can be simplified to
\[
2(1+j\Delta x)\,p_j = \left[(1+j\Delta x) + c\Delta x\right] p_{j+1} + \left[(1+j\Delta x) - c\Delta x\right] p_{j-1}. \tag{2.19}
\]
The boundary conditions become
\[
p_0 = 0, \qquad p_k = 1. \tag{2.20}
\]

When $\mu = 0$ (which implies that $c = 0$), the difference equation above reduces to the same one as when $\Delta x = 1$, namely (2.5). The solution is
\[
p_j = \frac{j}{k} \quad \text{for } j = 0, 1, \dots, k. \tag{2.21}
\]
Writing
\[
n = 1 + j\Delta x \tag{2.22}
\]
and using the fact that (by hypothesis) $N = 1 + k\Delta x$, we obtain that
\[
p_n = \frac{n-1}{N-1} \quad \text{for } n = 1, 1+\Delta x, \dots, 1+k\Delta x = N. \tag{2.23}
\]
Notice that this solution does not depend on the increment $\Delta x$. Hence, if we let $\Delta x$ decrease to zero and $k$ tend to infinity in such a way that $1 + k\Delta x$ remains equal to $N$, we have that
\[
p_n \longrightarrow \frac{n-1}{N-1} \quad \text{for } 1 \le n \le N, \tag{2.24}
\]
which is the same as the function $p(x)$ in (1.6) when $c = 0/\sigma^2 = 0$.

Next, proceeding as above, we obtain that, if $c \ne 1/2$, the probability $p_j$ is given by
\[
p_j = \frac{(1+j\Delta x + c\Delta x)\,\dfrac{\Gamma((1+(j+1)\Delta x - c\Delta x)/\Delta x)}{\Gamma((1+(j+1)\Delta x + c\Delta x)/\Delta x)} - \mathcal{A}}
{(1+k\Delta x + c\Delta x)\,\dfrac{\Gamma((1+(k+1)\Delta x - c\Delta x)/\Delta x)}{\Gamma((1+(k+1)\Delta x + c\Delta x)/\Delta x)} - \mathcal{A}}, \tag{2.25}
\]
where $\mathcal{A}$ denotes $(1+c\Delta x)\,\Gamma((1+\Delta x - c\Delta x)/\Delta x)/\Gamma((1+\Delta x + c\Delta x)/\Delta x)$. In terms of $n$ and $N$, this expression becomes
\[
p_n = \frac{(n + c\Delta x)\,\Gamma((n+\Delta x - c\Delta x)/\Delta x)/\Gamma((n+\Delta x + c\Delta x)/\Delta x) - \mathcal{A}}
{(N + c\Delta x)\,\Gamma((N+\Delta x - c\Delta x)/\Delta x)/\Gamma((N+\Delta x + c\Delta x)/\Delta x) - \mathcal{A}} \tag{2.26}
\]
for $n \in \{1, 1+\Delta x, \dots, 1+k\Delta x = N\}$. The solution reduces to
\[
p_n = \frac{\Psi((2n+\Delta x)/2\Delta x) - \Psi((2+\Delta x)/2\Delta x)}{\Psi((2N+\Delta x)/2\Delta x) - \Psi((2+\Delta x)/2\Delta x)} \quad \text{if } c = 1/2. \tag{2.27}
\]
We can now state the following proposition.

Proposition 2.2. Let $n = 1 + j\Delta x$ for $j \in \{0, 1, \dots, k\}$, with $k$ such that $1 + k\Delta x = N$. The probability $p_n$ that the discrete-time Markov chain defined in Section 1, starting from $n$, will hit $N$ before 1 is given by (2.23) if $\mu = 0$, and by (2.26) if $c = \mu/\sigma^2 \ne 0$. The value of $p_n$ tends to the function in (2.27) when $\mu/\sigma^2$ tends to $1/2$.
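To illustrate Proposition 2.2, the sketch below evaluates (2.26) and compares it with a direct numerical solve of the difference equation (2.19)-(2.20); the parameter values are illustrative, and NumPy and math.lgamma are assumed:

```python
import math
import numpy as np

def p_general(n, N, c, dx):
    """Formula (2.26): probability of hitting N before 1, starting from n, with step dx."""
    def g(m):
        z_minus = (m + dx - c * dx) / dx
        z_plus = (m + dx + c * dx) / dx
        return (m + c * dx) * math.exp(math.lgamma(z_minus) - math.lgamma(z_plus))
    return (g(n) - g(1)) / (g(N) - g(1))

def p_direct(N, c, dx):
    """Solve 2(1+j dx) p_j = [(1+j dx)+c dx] p_{j+1} + [(1+j dx)-c dx] p_{j-1} directly."""
    k = round((N - 1) / dx)
    M = np.zeros((k - 1, k - 1))
    b = np.zeros(k - 1)
    for row, j in enumerate(range(1, k)):
        x = 1 + j * dx
        M[row, row] = -2 * x
        if row > 0:
            M[row, row - 1] = x - c * dx
        if row < k - 2:
            M[row, row + 1] = x + c * dx
        else:
            b[row] = -(x + c * dx)            # boundary condition p_k = 1
    return np.concatenate(([0.0], np.linalg.solve(M, b), [1.0]))

N, c, dx = 3, 0.8, 0.1
p = p_direct(N, c, dx)
j = 10
print(p[j], p_general(1 + j * dx, N, c, dx))   # the two values should agree
```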

To complete this section, we will consider the case when $\Delta x$ decreases to zero. We have already mentioned that when $c = 0$, the probability $p_n$ does not depend on $\Delta x$, and it corresponds to the function $p(x)$ in (1.6) with $c = 0$.

Next, when $c = 1/2$, making use of the formula
\[
\Psi(z) \sim \ln z \quad \text{for } z \text{ large}, \tag{2.28}
\]
we can write that
\[
\lim_{\Delta x \downarrow 0} p_n = \lim_{\Delta x \downarrow 0} \frac{\ln(2n+\Delta x) - \ln(2+\Delta x)}{\ln(2N+\Delta x) - \ln(2+\Delta x)} = \frac{\ln n}{\ln N} \quad \text{for } n \in [1, N]. \tag{2.29}
\]
Again, this expression corresponds to the function $p(x)$ given in (1.7), obtained when $c = 1/2$.

Finally, we have
\[
\frac{\Gamma(z+a)}{\Gamma(z+b)} \propto z^{a-b}\left(1 + O\!\left(\frac{1}{z}\right)\right) \tag{2.30}
\]
as $|z|$ tends to infinity (if $|\mathrm{Arg}(z+a)| < \pi$). Hence, in the case when $c \ne 0, 1/2$, we can write that
\[
\lim_{\Delta x \downarrow 0} p_n = \lim_{\Delta x \downarrow 0} \frac{(n+c\Delta x)(n+\Delta x)^{-2c} - (1+c\Delta x)(1+\Delta x)^{-2c}}{(N+c\Delta x)(N+\Delta x)^{-2c} - (1+c\Delta x)(1+\Delta x)^{-2c}} = \frac{n^{1-2c} - 1}{N^{1-2c} - 1} \tag{2.31}
\]
for $1 \le n \le N$. Therefore, we retrieve the formula for $p(x)$ in (1.6).
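Numerically, the convergence in (2.31) can be observed by evaluating (2.26) for decreasing values of $\Delta x$; a short illustration, with the same (arbitrary) parameters as above:

```python
import math

def q(n, c, dx):
    """Numerator/denominator building block of (2.26), via log-gamma."""
    return (n + c * dx) * math.exp(
        math.lgamma((n + dx - c * dx) / dx) - math.lgamma((n + dx + c * dx) / dx)
    )

N, c, n = 3, 0.8, 2
p_limit = (n ** (1 - 2 * c) - 1) / (N ** (1 - 2 * c) - 1)     # p(x) in (1.6)
for dx in (0.1, 0.01, 0.001):
    p_n = (q(n, c, dx) - q(1, c, dx)) / (q(N, c, dx) - q(1, c, dx))
    print(dx, p_n, p_limit)
```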

In the next section, we will derive the formulas that correspond to the function $m(x)$ in Section 1.

3. Computation of the Mean Number of Transitions $d_j$ Needed to End the Game

As in Section 2, we will first assume that $\Delta x = 1$. Then, with $n = 1 + j$ for $j = 0, 1, \dots, k$ (and $1 + k = N$), the function $d_n$ satisfies the following second-order, linear, nonhomogeneous difference equation:
\[
d_n = p_{n,n+1}\,d_{n+1} + p_{n,n-1}\,d_{n-1} + p_{n,n}\,d_n + 1 \quad \text{for } n = 2, \dots, N-1. \tag{3.1}
\]

The boundary conditions are
\[
d_1 = d_N = 0. \tag{3.2}
\]
We find that the difference equation can be rewritten as
\[
(n+c)\,d_{n+1} - 2n\,d_n + (n-c)\,d_{n-1} = -\frac{2A}{n\sigma^2}. \tag{3.3}
\]

Let us now assume that $\mu = 0$, so that we must solve the second-order, linear, nonhomogeneous difference equation with constant coefficients
\[
d_{n+1} - 2d_n + d_{n-1} = -\frac{2A}{n^2\sigma^2}. \tag{3.4}
\]
With the help of the mathematical software program Maple, we find that the unique solution that satisfies the boundary conditions (3.2) is
\[
\begin{aligned}
d_n ={}& -\frac{n-1}{N-1}\,\frac{2A}{\sigma^2}\left\{\Psi(N) + N\,\Psi(1,N) - (1-\gamma) - \left(\frac{\pi^2}{6} - 1\right)\right\}\\
&+ \frac{2A}{\sigma^2}\left\{\Psi(n) + n\,\Psi(1,n) - (1-\gamma) - \left(\frac{\pi^2}{6} - 1\right)\right\},
\end{aligned}\tag{3.5}
\]
where
\[
\Psi(1,x) := \frac{d}{dx}\Psi(x) \tag{3.6}
\]
is the first polygamma function.

Next, in the general case $\Delta x > 0$, we must solve (with $c = 0$)
\[
d_{j+1} - 2d_j + d_{j-1} = -\frac{2A}{(1+j\Delta x)^2\sigma^2} \tag{3.7}
\]
for $j = 1, \dots, k-1$. The solution that satisfies the boundary conditions
\[
d_0 = d_k = 0 \tag{3.8}
\]
is given by
\[
\begin{aligned}
d_j ={}& -\frac{j}{k}\,\frac{2A}{\sigma^2(\Delta x)^3}\left\{(1+k\Delta x)\,\Psi\!\left(1, \frac{1+k\Delta x}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{1+k\Delta x}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}\\
&+ \frac{2A}{\sigma^2(\Delta x)^3}\left\{(1+j\Delta x)\,\Psi\!\left(1, \frac{1+j\Delta x}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{1+j\Delta x}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}.
\end{aligned}\tag{3.9}
\]
In terms of $n := 1 + j\Delta x$ and $N = 1 + k\Delta x$, this expression becomes
\[
\begin{aligned}
d_n ={}& -\frac{n-1}{N-1}\,\frac{2A}{\sigma^2(\Delta x)^3}\left\{N\,\Psi\!\left(1, \frac{N}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{N}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}\\
&+ \frac{2A}{\sigma^2(\Delta x)^3}\left\{n\,\Psi\!\left(1, \frac{n}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{n}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}.
\end{aligned}\tag{3.10}
\]
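The closed form (3.9)-(3.10), as reconstructed above, can be checked against a direct solve of the tridiagonal system (3.7)-(3.8); a sketch assuming NumPy and SciPy are available (polygamma(1, x) is $\Psi(1,x)$; the parameters are illustrative):

```python
import numpy as np
from scipy.special import digamma, polygamma   # polygamma(1, x) is Psi(1, x)

A, sigma, N, dx = 1.0, 0.5, 3.0, 0.1
k = round((N - 1) / dx)

# Direct solve of d_{j+1} - 2 d_j + d_{j-1} = -2A/((1 + j dx)^2 sigma^2), d_0 = d_k = 0.
M = np.diag(-2.0 * np.ones(k - 1)) + np.diag(np.ones(k - 2), 1) + np.diag(np.ones(k - 2), -1)
rhs = np.array([-2 * A / ((1 + j * dx) ** 2 * sigma ** 2) for j in range(1, k)])
d_direct = np.concatenate(([0.0], np.linalg.solve(M, rhs), [0.0]))

# Closed form (3.10).
def bracket(x):
    return (x * polygamma(1, x / dx) + dx * digamma(x / dx)
            - polygamma(1, 1 / dx) - dx * digamma(1 / dx))

coef = 2 * A / (sigma ** 2 * dx ** 3)
n = np.array([1 + j * dx for j in range(k + 1)])
d_formula = coef * (bracket(n) - (n - 1) / (N - 1) * bracket(N))
print(np.max(np.abs(d_direct - d_formula)))   # close to 0, up to rounding
```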

Finally, the mean duration of the game is obtained by multiplying $d_n$ by $\Delta t$. Making use of the fact that (see (1.14)) $\Delta t = (\Delta x)^2/A$, we obtain the following proposition.

Proposition 3.1. When $\Delta x > 0$ and $\mu = 0$, the mean duration $D_n$ of the game is given by
\[
\begin{aligned}
D_n ={}& -\frac{n-1}{N-1}\,\frac{2}{\sigma^2\Delta x}\left\{N\,\Psi\!\left(1, \frac{N}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{N}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}\\
&+ \frac{2}{\sigma^2\Delta x}\left\{n\,\Psi\!\left(1, \frac{n}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{n}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}
\end{aligned}\tag{3.11}
\]
for $n = 1, 1+\Delta x, \dots, 1+k\Delta x = N$.

Next, using the fact that
\[
\Psi(x) \sim \ln x, \qquad \Psi(1,x) \sim \frac{1}{x} \quad \text{for } x \text{ large}, \tag{3.12}
\]
we obtain that, as $\Delta x$ decreases to zero and $1 + k\Delta x$ remains equal to $N$,
\[
D_n \longrightarrow \frac{2}{\sigma^2}\left\{-\frac{n-1}{N-1}\ln N + \ln n\right\} \quad \text{for } n \in [1, N]. \tag{3.13}
\]
Notice that $D_n$ indeed corresponds to the function $m(x)$ given in (1.11) if $c = 0$.
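The convergence in (3.13) can be illustrated by evaluating (3.11) for decreasing $\Delta x$ and comparing with $m(x)$ from (1.11) with $c = 0$; a sketch assuming SciPy (the parameter values are, again, arbitrary):

```python
import math
from scipy.special import digamma, polygamma

sigma, N, n = 0.5, 3.0, 2.0
m_limit = (2 / sigma ** 2) * (math.log(n) - (n - 1) / (N - 1) * math.log(N))  # (1.11), c = 0

def D(x, dx):
    """Mean duration (3.11) started from state x, for mu = 0 (note that A cancels out)."""
    def bracket(u):
        return (u * polygamma(1, u / dx) + dx * digamma(u / dx)
                - polygamma(1, 1 / dx) - dx * digamma(1 / dx))
    return (2 / (sigma ** 2 * dx)) * (bracket(x) - (x - 1) / (N - 1) * bracket(N))

for dx in (0.1, 0.01, 0.001):
    print(dx, D(n, dx), m_limit)
```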

To complete our work, we need to find the value of the mean number of transitions $d_j$ in the case when $\mu \ne 0$ and $\Delta x > 0$. To do so, we must solve the nonhomogeneous difference equation with nonconstant coefficients (3.3). We can obtain the general solution to the corresponding homogeneous equation. However, we then need to find a particular solution to the nonhomogeneous equation. This entails evaluating a difficult sum. Instead, we will use the fact that we know how to compute $d_j$ when $\mu = 0$.

Let us go back to the geometric Brownian motion $\{X(t), t \ge 0\}$ defined in (1.1), and let us define, for $c \ne 1/2$,
\[
Y(t) = [X(t)]^{1-2c}. \tag{3.14}
\]
Then, we find (see Karlin and Taylor [7, page 173]) that $\{Y(t), t \ge 0\}$ remains a geometric Brownian motion, with infinitesimal variance $\sigma_Y^2 = (1-2c)^2\sigma^2 y^2$, but with infinitesimal mean $\mu_Y = 0$. In the case when $c = 1/2$, we define
\[
Y(t) = \ln[X(t)], \tag{3.15}
\]
and we obtain that $\{Y(t), t \ge 0\}$ is a Wiener process with $\mu_Y = 0$ and $\sigma_Y^2 = \sigma^2$.

Remark 3.2. When $c = 1/2$, we find that $\{X(t), t \ge 0\}$ can be expressed as the exponential of a Wiener process $\{W(t), t \ge 0\}$ having infinitesimal mean $\mu_W = 0$ and infinitesimal variance $\sigma_W^2 = \sigma^2$.

When we make the transformation $Y(t) = [X(t)]^{1-2c}$, the interval $[1, N]$ becomes $[1, N^{1-2c}]$, respectively $[N^{1-2c}, 1]$, if $c < 1/2$, respectively $c > 1/2$. Assume first that $c < 1/2$. We have (see (1.2))
\[
\tau(x) = \inf\{t > 0 : Y(t) = 1 \text{ or } N^{1-2c} \mid Y(0) = x^{1-2c}\}. \tag{3.16}
\]

Now, we consider the discrete-time Markov chain with state space $\{1, 1+\Delta x, \dots, 1+k\Delta x = N^{1-2c}\}$ and transition probabilities given by (1.13). Proceeding as above, we obtain the expression in (3.9) for the mean number of transitions $d_j$ from state $1+j\Delta x$. This time, we replace $1+j\Delta x$ by $n^{1-2c}$ and $1+k\Delta x$ by $N^{1-2c}$, so that
\[
\begin{aligned}
d_n ={}& -\frac{n^{1-2c}-1}{N^{1-2c}-1}\,\frac{2A}{\sigma^2(\Delta x)^3}\left\{N^{1-2c}\,\Psi\!\left(1, \frac{N^{1-2c}}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{N^{1-2c}}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}\\
&+ \frac{2A}{\sigma^2(\Delta x)^3}\left\{n^{1-2c}\,\Psi\!\left(1, \frac{n^{1-2c}}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{n^{1-2c}}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}
\end{aligned}\tag{3.17}
\]
for $n = 1, (1+\Delta x)^{1/(1-2c)}, \dots, (1+k\Delta x)^{1/(1-2c)} = N$.

Assume that each displacement takes
\[
\Delta t = \frac{(\Delta x)^2}{(1-2c)^2 A} \tag{3.18}
\]
time units. Taking the limit as $\Delta x$ decreases to zero (and $k \to \infty$), we obtain (making use of the formulas in (3.12)) that
\[
D_n \longrightarrow \frac{2}{(1-2c)\sigma^2}\left\{-\frac{n^{1-2c}-1}{N^{1-2c}-1}\ln N + \ln n\right\} \quad \text{for } n \in [1, N]. \tag{3.19}
\]

This formula corresponds to the function $m(x)$ in (1.11) when $c < 1/2$.
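A numerical illustration of this correspondence, for an arbitrary $c < 1/2$: the sketch below evaluates (3.17), multiplies by $\Delta t$ from (3.18), and compares the result with $m(x)$ in (1.11) as $\Delta x$ decreases (parameter values illustrative; SciPy assumed):

```python
import math
from scipy.special import digamma, polygamma

mu, sigma, A, N, n = -0.02, 0.4, 1.0, 3.0, 2.0
c = mu / sigma ** 2                          # c = -0.125 < 1/2
m_exact = (2 / ((1 - 2 * c) * sigma ** 2)) * (
    math.log(n) - math.log(N) * (n ** (1 - 2 * c) - 1) / (N ** (1 - 2 * c) - 1)
)                                            # m(x) in (1.11)

def bracket(u, dx):
    return (u * polygamma(1, u / dx) + dx * digamma(u / dx)
            - polygamma(1, 1 / dx) - dx * digamma(1 / dx))

yn, yN = n ** (1 - 2 * c), N ** (1 - 2 * c)  # transformed states Y = X^(1-2c)
for dx in (0.1, 0.01, 0.001):
    d_n = (2 * A / (sigma ** 2 * dx ** 3)) * (
        bracket(yn, dx) - (yn - 1) / (yN - 1) * bracket(yN, dx)
    )                                        # (3.17)
    dt = dx ** 2 / ((1 - 2 * c) ** 2 * A)    # (3.18)
    print(dx, d_n * dt, m_exact)
```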

When $c > 1/2$, we consider the Markov chain having state space
\[
\left\{\frac{1}{N^{2c-1}} = \frac{1}{1+k\Delta x}, \frac{1}{1+(k-1)\Delta x}, \dots, \frac{1}{1+\Delta x}, 1\right\} \tag{3.20}
\]
(and transition probabilities given by (1.13)). To obtain $d_j$, we must again solve the difference equation (3.7), subject to the boundary conditions $d_0 = d_k = 0$. However, once we have obtained the solution, we must now replace $1+j\Delta x$ by $(1+j\Delta x)^{-1}$ (and $1+k\Delta x$ by $(1+k\Delta x)^{-1}$). Moreover, because
\[
j = \frac{(1+j\Delta x) - 1}{\Delta x}, \tag{3.21}
\]
we replace $j$ by
\[
\frac{1/(1+j\Delta x) - 1}{\Delta x} = -\frac{j}{1+j\Delta x} \tag{3.22}
\]

(and similarly for $k$).

Remark 3.3. The quantity $d_j$ here actually represents the mean number of steps needed to end the game when the Markov chain starts from state $1/(1+j\Delta x)$, with $j \in \{0, \dots, k\}$.

We obtain that
\[
\begin{aligned}
d_j ={}& -\frac{j}{1+j\Delta x}\,\frac{1+k\Delta x}{k}\,\frac{2A}{\sigma^2(\Delta x)^3}\left\{\frac{1}{1+k\Delta x}\,\Psi\!\left(1, \frac{1}{(1+k\Delta x)\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{1}{(1+k\Delta x)\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}\\
&+ \frac{2A}{\sigma^2(\Delta x)^3}\left\{\frac{1}{1+j\Delta x}\,\Psi\!\left(1, \frac{1}{(1+j\Delta x)\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{1}{(1+j\Delta x)\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}.
\end{aligned}\tag{3.23}
\]

Next, since $N^{2c-1} = 1+k\Delta x$, setting $n^{2c-1} = 1+j\Delta x$ we deduce from the previous expression that
\[
\begin{aligned}
d_n ={}& -\frac{n^{2c-1}-1}{n^{2c-1}}\,\frac{N^{2c-1}}{N^{2c-1}-1}\,\frac{2A}{\sigma^2(\Delta x)^3}\left\{\frac{1}{N^{2c-1}}\,\Psi\!\left(1, \frac{N^{1-2c}}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{N^{1-2c}}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}\\
&+ \frac{2A}{\sigma^2(\Delta x)^3}\left\{\frac{1}{n^{2c-1}}\,\Psi\!\left(1, \frac{n^{1-2c}}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{n^{1-2c}}{\Delta x}\right) - \Psi\!\left(1, \frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right\}
\end{aligned}\tag{3.24}
\]
for $n \in \{1, (1+\Delta x)^{1/(2c-1)}, \dots, (1+k\Delta x)^{1/(2c-1)} = N\}$.

Finally, if we assume, as above, that each step of the Markov chain takes
\[
\Delta t = \frac{(\Delta x)^2}{(2c-1)^2 A} \tag{3.25}
\]
time units, we find that, when $\Delta x$ decreases to zero, the mean duration of the game tends to
\[
D_n = \frac{2}{(2c-1)\sigma^2}\left\{\frac{1-n^{1-2c}}{1-N^{1-2c}}\ln N - \ln n\right\} \quad \text{for } n \in [1, N]. \tag{3.26}
\]

This last expression is equivalent to the formula for $m(x)$ in (1.11) when $c > 1/2$.

Remark 3.4. Actually, the formula for $m(x)$ is the same whether $c < 1/2$ or $c > 1/2$.

At last, in the case when $c = 1/2$, we consider the random walk with state space $\{0, \Delta x, \dots, k\Delta x = \ln N\}$ and transition probabilities
\[
p_{j\Delta x,(j+1)\Delta x} = p_{j\Delta x,(j-1)\Delta x} = \frac{\sigma^2}{2A}, \qquad p_{j\Delta x,\,j\Delta x} = 1 - \frac{\sigma^2}{A}. \tag{3.27}
\]
Then, we must solve the nonhomogeneous difference equation
\[
d_{j+1} - 2d_j + d_{j-1} = -\frac{2A}{\sigma^2}, \tag{3.28}
\]
subject to the boundary conditions $d_0 = d_k = 0$. We find that
\[
d_j = -\frac{j}{k}\,\frac{A}{\sigma^2}\,k(1-k) + \frac{A}{\sigma^2}\,j(1-j). \tag{3.29}
\]

With $\ln n := j\Delta x$ and $\ln N = k\Delta x$, we get that
\[
d_n = \frac{A}{\sigma^2(\Delta x)^2}\{\ln n\,(\ln N - \ln n)\} \tag{3.30}
\]
for $n \in \{1, e^{\Delta x}, \dots, e^{k\Delta x} = N\}$. Assuming that $\Delta t = (\Delta x)^2/A$, we deduce at once that, as $\Delta x$ decreases to zero,
\[
D_n \longrightarrow \frac{1}{\sigma^2}\{\ln n\,(\ln N - \ln n)\} \quad \text{for } n \in [1, N]. \tag{3.31}
\]
Thus, we retrieve the formula (1.12) for $m(x)$ when $c = 1/2$.
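A quick check of this case: the sketch below verifies that (3.29) satisfies (3.28) and the boundary conditions, and that $d_j\,\Delta t$ approximately reproduces the expression in (1.12) (parameter values arbitrary):

```python
import math

A, sigma, N = 1.0, 0.5, 3.0
dx = 0.01
k = round(math.log(N) / dx)        # k*dx is approximately ln N for this illustration
dt = dx ** 2 / A

def d(j):
    """The solution (3.29) of the difference equation (3.28) with d_0 = d_k = 0."""
    return -(j / k) * (A / sigma ** 2) * k * (1 - k) + (A / sigma ** 2) * j * (1 - j)

j = k // 2
print(d(j + 1) - 2 * d(j) + d(j - 1) + 2 * A / sigma ** 2)   # residual of (3.28), ~ 0
print(d(0), d(k))                                            # boundary values: both 0

n = math.exp(j * dx)               # back to the original scale, ln n = j*dx
print(d(j) * dt, math.log(n) * (math.log(N) - math.log(n)) / sigma ** 2)   # vs (1.12)
```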

We can now state the following proposition.

Proposition 3.5. If the state space of the Markov chain is
\[
\{1, (1+\Delta x)^{1/(1-2c)}, \dots, (1+k\Delta x)^{1/(1-2c)} = N\}, \tag{3.32}
\]
respectively,
\[
\{1, (1+\Delta x)^{1/(2c-1)}, \dots, (1+k\Delta x)^{1/(2c-1)} = N\}, \tag{3.33}
\]
where $c < 1/2$, respectively $c > 1/2$, and the transition probabilities are those in (1.13), then the value of the mean number of steps $d_n$ needed to end the game is given by (3.17), respectively, (3.24). If $n \in \{1, e^{\Delta x}, \dots, e^{k\Delta x} = N\}$ and the transition probabilities are the ones in (3.27), then the value of $d_n$ is given by (3.30).

4. Concluding Remarks

We have obtained explicit and exact formulas for the quantities $p_j$ and $d_j$ defined respectively in (1.17) and (1.18) for various discrete-time Markov chains that converge, at least in a finite interval, to a geometric Brownian motion. In the case of the probability $p_j$ of hitting the boundary $N$ before 1, because the appropriate difference equation is homogeneous, we were able to compute this probability for any value of $c = \mu/\sigma^2$ by considering a Markov chain with state space $\{1, 1+\Delta x, \dots, 1+k\Delta x = N\}$. However, to obtain $d_j$ we first solved the appropriate difference equation when $c = 0$. Then, making use of the formula that we obtained, we were able to deduce the solution for any $c \in \mathbb{R}$ by considering a Markov chain that converges to a transformation of the geometric Brownian motion. The transformed process was a geometric Brownian motion with $\mu = 0$ (if $c \ne 1/2$), or a Wiener process with $\mu = 0$ (if $c = 1/2$). In each case, we showed that the expression that we derived tends to the corresponding quantity for the geometric Brownian motion. In the case of the mean duration of the game, the time increment $\Delta t$ had to be chosen suitably.

As is well known, the geometric Brownian motion is a very important model in financial mathematics, in particular. In practice, stock or commodity prices vary discretely over time. Therefore, it is interesting to derive formulas for $p_j$ and $d_j$ for Markov chains that are as close as we want to the diffusion process.

Now that we have computed explicitly the value of $p_j$ and $d_j$ for Markov chains having transition probabilities that involve parameters $\mu$ and $\sigma^2$ that are the same for all the states, we could consider asymmetric Markov chains. For example, at first the state space could be $\{1, \dots, N_1, \dots, N_2\}$, and we could have
\[
\mu = \begin{cases} \mu_1 & \text{if } n \in \{1, \dots, N_1\},\\[2pt] \mu_2 & \text{if } n \in \{N_1+1, \dots, N_2\} \end{cases} \tag{4.1}
\]

(and similarly for $\sigma^2$). When the Markov chain hits $N_1$, it goes to $N_1+1$, respectively $N_1-1$, with probability $p_0$, respectively $1-p_0$. By increasing the state space to $\{1, 1+\Delta x, \dots, 1+k_1\Delta x = N_1, \dots, 1+k_2\Delta x = N_2\}$, and taking the limit as $\Delta x$ decreases to zero (with $k_1$ and $k_2$ going to infinity appropriately), we would obtain the quantities that correspond to $p_j$ and $d_j$ for an asymmetric geometric Brownian motion. The possibly different values of $\sigma^2$ depending on the state $n$ of the Markov chain reflect the fact that volatility is likely to depend on the price of the stock or the commodity.

Finally, we could try to derive the formulas for $p_j$ and $d_j$ for other discrete-time Markov chains that converge to important one-dimensional diffusion processes.

References

  1. M. Lefebvre, Applied Stochastic Processes, Springer, New York, NY, USA, 2007.
  2. D. R. Cox and H. D. Miller, The Theory of Stochastic Processes, Methuen, London, UK, 1965.
  3. M. Lefebvre, "First passage problems for asymmetric Wiener processes," Journal of Applied Probability, vol. 43, no. 1, pp. 175–184, 2006.
  4. M. Abundo, "First-passage problems for asymmetric diffusions and skew-diffusion processes," Open Systems and Information Dynamics, vol. 16, no. 4, pp. 325–350, 2009.
  5. M. Lefebvre and J.-L. Guilbault, "First hitting place probabilities for a discrete version of the Ornstein-Uhlenbeck process," International Journal of Mathematics and Mathematical Sciences, vol. 2009, Article ID 909835, 12 pages, 2009.
  6. J.-L. Guilbault and M. Lefebvre, "On a non-homogeneous difference equation from probability theory," Tatra Mountains Mathematical Publications, vol. 43, pp. 81–90, 2009.
  7. S. Karlin and H. M. Taylor, A Second Course in Stochastic Processes, Academic Press, New York, NY, USA, 1981.