Abstract

We consider a discrete-time Markov chain with state space $\{1, 1+\Delta x, \ldots, 1+k\Delta x = N\}$. We compute explicitly the probability $p_j$ that the chain, starting from $1 + j\Delta x$, will hit $N$ before 1, as well as the expected number $d_j$ of transitions needed to end the game. In the limit when $\Delta x$ and the time $\Delta t$ between transitions decrease to zero appropriately, the Markov chain tends to a geometric Brownian motion. We show that $p_j$ and $d_j\,\Delta t$ tend to the corresponding quantities for the geometric Brownian motion.

1. Introduction

Let $\{X(t),\, t \ge 0\}$ be a one-dimensional geometric Brownian motion defined by the stochastic differential equation
$$dX(t) = \mu X(t)\,dt + \sigma X(t)\,dB(t), \qquad (1.1)$$
where $\mu \in \mathbb{R}$, $\sigma > 0$, and $\{B(t),\, t \ge 0\}$ is a standard Brownian motion. Assume that $X(0) = x \in (1, N)$, where $N \in \mathbb{N}$ (for simplicity), and define
$$\tau(x) = \inf\{t > 0 : X(t) = 1 \text{ or } N \mid X(0) = x\}. \qquad (1.2)$$
As is well known (see, e.g., Lefebvre [1, page 220]), the probability
$$p(x) = P[X(\tau(x)) = N] \qquad (1.3)$$
satisfies the ordinary differential equation
$$\tfrac{1}{2}\sigma^2 x^2 p''(x) + \mu x\, p'(x) = 0, \qquad (1.4)$$
subject to the boundary conditions
$$p(1) = 0, \qquad p(N) = 1. \qquad (1.5)$$
We easily find that, if $c := \mu/\sigma^2 \ne 1/2$,
$$p(x) = \frac{x^{1-2c} - 1}{N^{1-2c} - 1} \quad \text{for } 1 \le x \le N. \qquad (1.6)$$
When $c = 1/2$, the solution is
$$p(x) = \frac{\ln x}{\ln N} \quad \text{for } 1 \le x \le N. \qquad (1.7)$$
Moreover, the function
$$m(x) = E[\tau(x)] \qquad (1.8)$$
satisfies the ordinary differential equation (see, again, Lefebvre [1, page 220])
$$\tfrac{1}{2}\sigma^2 x^2 m''(x) + \mu x\, m'(x) = -1, \qquad (1.9)$$
subject to
$$m(1) = m(N) = 0. \qquad (1.10)$$
This time, if $c \ne 1/2$ we find that
$$m(x) = \frac{2}{(1-2c)\sigma^2}\left[\ln x - \frac{x^{1-2c} - 1}{N^{1-2c} - 1}\,\ln N\right] \quad \text{for } 1 \le x \le N, \qquad (1.11)$$
and, for $c = 1/2$,
$$m(x) = \frac{\ln x}{\sigma^2}\,(\ln N - \ln x) \quad \text{for } 1 \le x \le N. \qquad (1.12)$$
Now, it can be shown (see Cox and Miller [2, page 213]) that the discrete-time Markov chain $\{X_{m\Delta t},\, m = 0, 1, \ldots\}$ with state space $\{1, 1+\Delta x, \ldots, 1+k\Delta x\}$, where $k$ is such that $1 + k\Delta x = N$, and transition probabilities
$$p_{1+j\Delta x,\,1+(j+1)\Delta x} = \frac{1}{2A}\left[(1+j\Delta x)^2\sigma^2 + (1+j\Delta x)\mu\,\Delta x\right],$$
$$p_{1+j\Delta x,\,1+(j-1)\Delta x} = \frac{1}{2A}\left[(1+j\Delta x)^2\sigma^2 - (1+j\Delta x)\mu\,\Delta x\right], \qquad (1.13)$$
$$p_{1+j\Delta x,\,1+j\Delta x} = 1 - \frac{1}{A}(1+j\Delta x)^2\sigma^2,$$
where $j \in \{1, \ldots, k-1\}$, converges to the geometric Brownian motion $\{X(t),\, t \ge 0\}$ as $\Delta x$ and $\Delta t$ decrease to zero, provided that
$$(\Delta x)^2 = A\,\Delta t, \qquad (1.14)$$
$$\sigma^2(1+j\Delta x)^2 < A \quad \forall\, j \in \{0, \ldots, k\}. \qquad (1.15)$$
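To make the setup concrete, here is a minimal numerical sketch (our addition, not part of the original development), assuming numpy and hypothetical parameter values: it encodes the closed forms (1.6)-(1.7) and (1.11)-(1.12) and the transition probabilities (1.13), and checks that the choice of $A$ respects (1.15). By (1.14), each transition then takes $\Delta t = (\Delta x)^2/A$ time units.

```python
import numpy as np

def p_exact(x, N, mu, sigma):
    """Probability (1.6)/(1.7) that the GBM hits N before 1, starting from x."""
    c = mu / sigma**2
    if np.isclose(c, 0.5):
        return np.log(x) / np.log(N)
    return (x**(1 - 2*c) - 1) / (N**(1 - 2*c) - 1)

def m_exact(x, N, mu, sigma):
    """Expected time (1.11)/(1.12) to hit {1, N}, starting from x."""
    c = mu / sigma**2
    if np.isclose(c, 0.5):
        return np.log(x) * (np.log(N) - np.log(x)) / sigma**2
    frac = (x**(1 - 2*c) - 1) / (N**(1 - 2*c) - 1)
    return 2 / ((1 - 2*c) * sigma**2) * (np.log(x) - frac * np.log(N))

def transition_probs(j, dx, A, mu, sigma):
    """Up/down/stay probabilities (1.13) out of state s = 1 + j*dx."""
    s = 1 + j * dx
    up = (s**2 * sigma**2 + s * mu * dx) / (2 * A)
    down = (s**2 * sigma**2 - s * mu * dx) / (2 * A)
    return up, down, 1 - up - down

# Hypothetical parameters; A > sigma^2 * N^2 so that (1.15) holds.
mu, sigma, dx, k = 0.03, 0.2, 0.1, 40
N = 1 + k * dx                      # N = 5
A = 1.05 * sigma**2 * N**2
assert all(0 <= q <= 1 for j in range(k + 1)
           for q in transition_probs(j, dx, A, mu, sigma))
print(p_exact(3.0, N, mu, sigma), m_exact(3.0, N, mu, sigma))
```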

Remarks 1.1. (i) We assume that all the probabilities defined by (1.13) are well defined; that is, they all belong to the interval $[0, 1]$.
(ii) The condition in (1.15) implies that $(\Delta x)^2 < A/(\sigma^2 k^2)$.
Let
$$T_j = \inf\{m > 0 : X_{m\Delta t} = 1 \text{ or } N \mid X_0 = 1 + j\Delta x\}, \qquad (1.16)$$
$$p_j = P\left[X_{T_j \Delta t} = N\right]. \qquad (1.17)$$
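Before computing $p_j$ exactly, one can estimate it by naive Monte Carlo simulation of the chain; the following sketch (our addition, assuming numpy, with hypothetical parameters satisfying (1.15)) is slow but makes the definitions (1.16)-(1.17) concrete, and its output can be compared with the limiting value (1.6).

```python
import numpy as np

def simulate_pj(j0, k, dx, A, mu, sigma, n_paths=10_000, seed=1):
    """Monte Carlo estimate of p_j in (1.17): the chain starts at 1 + j0*dx
    and is run until it hits state 1 (j = 0) or state N (j = k)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_paths):
        j = j0
        while 0 < j < k:
            s = 1 + j * dx
            up = (s**2 * sigma**2 + s * mu * dx) / (2 * A)
            down = (s**2 * sigma**2 - s * mu * dx) / (2 * A)
            u = rng.random()
            if u < up:
                j += 1
            elif u < up + down:
                j -= 1
        hits += (j == k)
    return hits / n_paths

mu, sigma, dx, k = 0.03, 0.2, 0.1, 40
N = 1 + k * dx
A = 1.05 * sigma**2 * N**2
c, x0 = mu / sigma**2, 1 + 20 * dx
print(simulate_pj(20, k, dx, A, mu, sigma))
print((x0**(1 - 2*c) - 1) / (N**(1 - 2*c) - 1))   # limiting value (1.6)
```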

In the next section, we will compute the quantity $p_j$ for $j \in \{1, \ldots, k-1\}$. We will show that $p_j$ converges to the function $p(x)$ for the geometric Brownian motion as $\Delta x$ decreases to zero and $k$ tends to infinity in such a way that $1 + k\Delta x$ remains equal to $N$.

In Section 3, we will compute the mean number of transitions needed to end the game, namely,
$$d_j = E[T_j]. \qquad (1.18)$$
By making a change of variable that transforms the diffusion process $\{X(t),\, t \ge 0\}$ into a geometric Brownian motion with infinitesimal mean equal to zero, and by considering the corresponding discrete-time Markov chain, we will obtain an explicit and exact expression for $d_j$ that, when multiplied by $\Delta t$, tends to $m(x)$ if the time $\Delta t$ between transitions is chosen suitably.

The motivation for our work is the following. Lefebvre [3] computed the probability $p(x)$ and the expected duration $m(x)$ for asymmetric Wiener processes in the interval $(-d, d)$, that is, for Wiener processes whose infinitesimal means $\mu_+$ and $\mu_-$ and infinitesimal variances $\sigma_+^2$ and $\sigma_-^2$ are not necessarily the same for $x > 0$ and $x < 0$. To confirm his results, he considered a random walk that converges to the Wiener process. Lefebvre's results were extended by Abundo [4] to general one-dimensional diffusion processes. However, Abundo did not obtain the quantities $p_j$ and $d_j$ for the corresponding discrete-time Markov chains. It is also worth mentioning that asymmetric diffusion processes need not be defined in an interval that includes the origin: a process defined in the interval $(a, b)$ can be asymmetric with respect to any $c$ with $a < c < b$.

Next, Lefebvre and Guilbault [5] and Guilbault and Lefebvre [6] computed 𝑝𝑗 and 𝑑𝑗, respectively, for a discrete-time Markov chain that tends to the Ornstein-Uhlenbeck process. The authors also computed the quantity 𝑝𝑗 in the case when the Markov chain is asymmetric (as in Lefebvre [3]).

Asymmetric processes can be used in financial mathematics to model the price of a stock when, in particular, the infinitesimal variance (i.e., the volatility) tends to increase with the price of the stock. Indeed, it seems logical that the volatility is larger when the stock price 𝑋(𝑡) is very large than when it is close to zero. The prices of commodities, such as gold and oil, are also more volatile when they reach a certain level.

In order to check the validity of the expressions obtained by Abundo [4] for 𝑝(𝑥) and 𝑚(𝑥), it is important to obtain the corresponding quantities for the discrete-time Markov chains and then proceed by taking the limit as Δ𝑥 and Δ𝑡 decrease to zero appropriately. Moreover, the formulas that will be derived in the present paper are interesting in themselves, since in reality stock or commodity prices do not vary completely continuously.

First passage problems for Markov chains have many applications. For example, in neural networks, an important quantity is the interspike time, that is, the time between spikes of a firing neuron (which means that the neuron sends a signal to other neurons). Discrete-time Markov chains have been used as models in this context, and the interspike time is the number of steps it takes the chain to reach the threshold at which firing occurs.

2. Computation of the Probability 𝑝𝑗

Assume first that $\Delta x = 1$, so that the state space is $\{1, 2, \ldots, N\}$ and the transition probabilities become
$$p_{j,j+1} = \frac{1}{2A}\left(j^2\sigma^2 + j\mu\right), \qquad p_{j,j-1} = \frac{1}{2A}\left(j^2\sigma^2 - j\mu\right), \qquad p_{j,j} = 1 - \frac{j^2\sigma^2}{A} \qquad (2.1)$$

for $j \in \{2, \ldots, N-1\}$. The probability defined in (1.17) satisfies the following difference equation:
$$p_j = p_{j,j+1}\,p_{j+1} + p_{j,j-1}\,p_{j-1} + p_{j,j}\,p_j. \qquad (2.2)$$
That is,
$$2j\,p_j = (j+c)\,p_{j+1} + (j-c)\,p_{j-1}, \qquad (2.3)$$
where $c = \mu/\sigma^2$. The boundary conditions are
$$p_1 = 0, \qquad p_N = 1. \qquad (2.4)$$

In the special case when $\mu = 0$, (2.3) reduces to the second-order difference equation with constant coefficients
$$p_{j+1} = 2p_j - p_{j-1}. \qquad (2.5)$$
We easily find that the (unique) solution that satisfies the boundary conditions (2.4) is
$$p_j = \frac{j-1}{N-1} \quad \text{for } j = 1, 2, \ldots, N. \qquad (2.6)$$
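As a numerical cross-check (our addition, assuming numpy), the boundary-value problem (2.2)-(2.4) can also be solved directly as a tridiagonal linear system; with $c = 0$ this confirms (2.6), and the same solver serves as a reference for the closed forms derived below.

```python
import numpy as np

def pj_by_linear_solve(N, c):
    """Solve 2*j*p_j = (j+c)*p_{j+1} + (j-c)*p_{j-1} for j = 2,...,N-1,
    with p_1 = 0 and p_N = 1 (states 1..N stored at indices 0..N-1)."""
    M, b = np.zeros((N, N)), np.zeros(N)
    M[0, 0] = M[-1, -1] = 1.0
    b[-1] = 1.0
    for i in range(1, N - 1):
        j = i + 1                      # state label
        M[i, i - 1], M[i, i], M[i, i + 1] = -(j - c), 2 * j, -(j + c)
    return np.linalg.solve(M, b)

N = 20
j = np.arange(1, N + 1)
assert np.allclose(pj_by_linear_solve(N, 0.0), (j - 1) / (N - 1))   # (2.6)
```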

Assume now that $\mu \ne 0$. Letting
$$w_j = p_{j+1} - p_j, \qquad (2.7)$$
equation (2.3) can be rewritten as
$$(j+c)\,w_j = (j-c)\,w_{j-1}. \qquad (2.8)$$

Using the mathematical software program Maple, we find that the solution of this first-order difference equation that satisfies the boundary condition $w_1 = p_2$ is given by
$$w_j = p_2\,\frac{\pi\,c\,(1+c)}{(1-c)\sin[(2+c)\pi]\,\Gamma^2(1-c)}\;\frac{\Gamma(j+1-c)}{\Gamma(j+1+c)}, \qquad (2.9)$$
where $\Gamma$ is the gamma function.

Next, we must solve the first-order difference equation
$$p_{j+1} - p_j = f(c)\,\frac{\Gamma(j+1-c)}{\Gamma(j+1+c)}, \qquad (2.10)$$
where
$$f(c) = p_2\,\frac{\pi\,c\,(1+c)}{(1-c)\sin[(2+c)\pi]\,\Gamma^2(1-c)}, \qquad (2.11)$$
subject to the boundary conditions (2.4). We find that, if $c \ne 1/2$, then
$$p_j = \frac{f(c)}{1-2c}\,(j+c)\,\frac{\Gamma(j+1-c)}{\Gamma(j+1+c)} + \kappa, \qquad (2.12)$$
where $\kappa$ is a constant. Applying the boundary conditions (2.4), we obtain
$$p_j = \frac{(j+c)\,\dfrac{\Gamma(j+1-c)}{\Gamma(j+1+c)} - (1+c)\,\dfrac{\Gamma(2-c)}{\Gamma(2+c)}}{(N+c)\,\dfrac{\Gamma(N+1-c)}{\Gamma(N+1+c)} - (1+c)\,\dfrac{\Gamma(2-c)}{\Gamma(2+c)}} \quad \text{for } j = 1, 2, \ldots, N. \qquad (2.13)$$
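In floating point, (2.13) is best evaluated through differences of log-gamma values; the sketch below (our addition, assuming scipy and, say, $0 < c < 2$ so that the gamma arguments remain positive) also verifies that the closed form satisfies the recurrence (2.3) and the boundary conditions (2.4).

```python
import numpy as np
from scipy.special import gammaln

def pj_closed_form(j, N, c):
    """Evaluate (2.13), writing g(j) = (j+c)*Gamma(j+1-c)/Gamma(j+1+c)
    via exp(gammaln(.) - gammaln(.)) for numerical stability."""
    g = lambda j: (j + c) * np.exp(gammaln(j + 1 - c) - gammaln(j + 1 + c))
    return (g(j) - g(1)) / (g(N) - g(1))

N, c = 20, 0.3
j = np.arange(1, N + 1)
p = pj_closed_form(j, N, c)
lhs = 2 * j[1:-1] * p[1:-1]
rhs = (j[1:-1] + c) * p[2:] + (j[1:-1] - c) * p[:-2]   # recurrence (2.3)
assert np.allclose(lhs, rhs) and np.isclose(p[0], 0) and np.isclose(p[-1], 1)
```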

Remark 2.1. When $c$ tends to $1/2$, the solution becomes
$$p_j = \frac{\Psi(j + 1/2) - 2 + \gamma + 2\ln 2}{\Psi(N + 1/2) - 2 + \gamma + 2\ln 2}, \qquad (2.14)$$
where $\gamma$ is Euler's constant and $\Psi$ is the digamma function, defined by
$$\Psi(z) = \frac{\Gamma'(z)}{\Gamma(z)}. \qquad (2.15)$$

Notice that
$$\Psi(3/2) = 2 - \gamma - 2\ln 2, \qquad (2.16)$$
so that we indeed have $p_1 = 0$, and the solution (2.14) can be rewritten as
$$p_j = \frac{\Psi(j + 1/2) - \Psi(3/2)}{\Psi(N + 1/2) - \Psi(3/2)} \quad \text{for } j = 1, 2, \ldots, N. \qquad (2.17)$$
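Both the identity (2.16) and the formula (2.17) are easy to confirm numerically; a small sketch (our addition, assuming scipy):

```python
import numpy as np
from scipy.special import digamma

# (2.16): Psi(3/2) = 2 - gamma - 2*ln(2)
assert np.isclose(digamma(1.5), 2 - np.euler_gamma - 2 * np.log(2))

def pj_half(j, N):
    """Evaluate (2.17), the c = 1/2 case."""
    return (digamma(j + 0.5) - digamma(1.5)) / (digamma(N + 0.5) - digamma(1.5))

N = 20
j = np.arange(1, N + 1)
p = pj_half(j, N)
lhs = 2 * j[1:-1] * p[1:-1]                              # recurrence (2.3), c = 1/2
rhs = (j[1:-1] + 0.5) * p[2:] + (j[1:-1] - 0.5) * p[:-2]
assert np.allclose(lhs, rhs)
```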

Now, in the general case when $\Delta x > 0$, we must solve the difference equation
$$p_j = \frac{1}{2A}\left[(1+j\Delta x)^2\sigma^2 + (1+j\Delta x)\mu\,\Delta x\right]p_{j+1} + \frac{1}{2A}\left[(1+j\Delta x)^2\sigma^2 - (1+j\Delta x)\mu\,\Delta x\right]p_{j-1} + \left[1 - \frac{1}{A}(1+j\Delta x)^2\sigma^2\right]p_j, \qquad (2.18)$$
which can be simplified to
$$2(1+j\Delta x)\,p_j = \left[(1+j\Delta x) + c\,\Delta x\right]p_{j+1} + \left[(1+j\Delta x) - c\,\Delta x\right]p_{j-1}. \qquad (2.19)$$
The boundary conditions become
$$p_0 = 0, \qquad p_k = 1. \qquad (2.20)$$

When $\mu = 0$ (which implies that $c = 0$), the difference equation above reduces to the same one as when $\Delta x = 1$, namely (2.5). The solution is
$$p_j = \frac{j}{k} \quad \text{for } j = 0, 1, \ldots, k. \qquad (2.21)$$
Writing
$$n = 1 + j\Delta x \qquad (2.22)$$
and using the fact that (by hypothesis) $N = 1 + k\Delta x$, we obtain
$$p_n = \frac{n-1}{N-1} \quad \text{for } n = 1, 1+\Delta x, \ldots, 1+k\Delta x = N. \qquad (2.23)$$
Notice that this solution does not depend on the increment $\Delta x$. Hence, if we let $\Delta x$ decrease to zero and $k$ tend to infinity in such a way that $1 + k\Delta x$ remains equal to $N$, we have
$$p_n \to \frac{n-1}{N-1} \quad \text{for } 1 \le n \le N, \qquad (2.24)$$
which is the same as the function $p(x)$ in (1.6) when $c = \mu/\sigma^2 = 0$.

Next, proceeding as above, we obtain that, if $c \ne 1/2$, the probability $p_j$ is given by
$$p_j = \frac{(1 + j\Delta x + c\Delta x)\,\dfrac{\Gamma((1 + (j+1)\Delta x - c\Delta x)/\Delta x)}{\Gamma((1 + (j+1)\Delta x + c\Delta x)/\Delta x)} - \mathcal{A}}{(1 + k\Delta x + c\Delta x)\,\dfrac{\Gamma((1 + (k+1)\Delta x - c\Delta x)/\Delta x)}{\Gamma((1 + (k+1)\Delta x + c\Delta x)/\Delta x)} - \mathcal{A}}, \qquad (2.25)$$
where $\mathcal{A}$ denotes $(1 + c\Delta x)\,\Gamma((1 + \Delta x - c\Delta x)/\Delta x)\big/\Gamma((1 + \Delta x + c\Delta x)/\Delta x)$. In terms of $n$ and $N$, this expression becomes
$$p_n = \frac{(n + c\Delta x)\,\dfrac{\Gamma((n + \Delta x - c\Delta x)/\Delta x)}{\Gamma((n + \Delta x + c\Delta x)/\Delta x)} - \mathcal{A}}{(N + c\Delta x)\,\dfrac{\Gamma((N + \Delta x - c\Delta x)/\Delta x)}{\Gamma((N + \Delta x + c\Delta x)/\Delta x)} - \mathcal{A}} \qquad (2.26)$$
for $n \in \{1, 1+\Delta x, \ldots, 1+k\Delta x = N\}$. The solution reduces to
$$p_n = \frac{\Psi\!\left(\dfrac{2n + \Delta x}{2\Delta x}\right) - \Psi\!\left(\dfrac{2 + \Delta x}{2\Delta x}\right)}{\Psi\!\left(\dfrac{2N + \Delta x}{2\Delta x}\right) - \Psi\!\left(\dfrac{2 + \Delta x}{2\Delta x}\right)} \quad \text{if } c = 1/2. \qquad (2.27)$$
We can now state the following proposition.

Proposition 2.2. Let $n = 1 + j\Delta x$ for $j \in \{0, 1, \ldots, k\}$, with $k$ such that $1 + k\Delta x = N$. The probability $p_n$ that the discrete-time Markov chain defined in Section 1, starting from $n$, will hit $N$ before 1 is given by (2.23) if $\mu = 0$, and by (2.26) if $c = \mu/\sigma^2 \ne 0$. The value of $p_n$ tends to the expression in (2.27) when $\mu/\sigma^2$ tends to $1/2$.

To complete this section, we will consider the case when Δ𝑥 decreases to zero. We have already mentioned that when 𝑐=0, the probability 𝑝𝑛 does not depend on Δ𝑥, and it corresponds to the function 𝑝(𝑥) in (1.6) with 𝑐=0.

Next, when $c = 1/2$, making use of the formula
$$\Psi(z) \approx \ln z \quad \text{for } z \text{ large}, \qquad (2.28)$$
we can write that
$$\lim_{\Delta x \downarrow 0} p_n = \lim_{\Delta x \downarrow 0} \frac{\ln(2n + \Delta x) - \ln(2 + \Delta x)}{\ln(2N + \Delta x) - \ln(2 + \Delta x)} = \frac{\ln n}{\ln N} \quad \text{for } n \in [1, N]. \qquad (2.29)$$
Again, this expression corresponds to the function $p(x)$ given in (1.7), obtained when $c = 1/2$.

Finally, we have
$$\frac{\Gamma(z+a)}{\Gamma(z+b)} = z^{a-b}\left[1 + O\!\left(\frac{1}{z}\right)\right] \qquad (2.30)$$
as $|z|$ tends to infinity (if $|\mathrm{Arg}(z+a)| < \pi$). Hence, in the case when $c \ne 0, 1/2$, after cancelling the common factor $(\Delta x)^{2c}$ produced by the gamma-function ratios, we can write that
$$\lim_{\Delta x \downarrow 0} p_n = \lim_{\Delta x \downarrow 0} \frac{(n + c\Delta x)(n + \Delta x)^{-2c} - (1 + c\Delta x)(1 + \Delta x)^{-2c}}{(N + c\Delta x)(N + \Delta x)^{-2c} - (1 + c\Delta x)(1 + \Delta x)^{-2c}} = \frac{n^{1-2c} - 1}{N^{1-2c} - 1} \qquad (2.31)$$
for $1 \le n \le N$. Therefore, we retrieve the formula for $p(x)$ in (1.6).
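The limit (2.31) can be observed numerically. In the sketch below (our addition, assuming scipy), the step sizes are chosen so that $n$ and $N$ fall exactly on the grid $1 + j\Delta x$, and $p_n$ from (2.26) visibly approaches the limit (1.6):

```python
import numpy as np
from scipy.special import gammaln

def pn_general(n, N, c, dx):
    """Evaluate (2.26), with the constant A-term equal to g(1);
    the gamma ratios are computed via log-gamma differences."""
    def g(x):
        return (x + c * dx) * np.exp(gammaln((x + dx - c * dx) / dx)
                                     - gammaln((x + dx + c * dx) / dx))
    return (g(n) - g(1.0)) / (g(N) - g(1.0))

n, N, c = 4.0, 10.0, 0.8
limit = (n**(1 - 2*c) - 1) / (N**(1 - 2*c) - 1)    # (1.6)
for dx in [0.5, 0.1, 0.02, 0.004]:                 # n, N lie on the grid 1 + j*dx
    print(dx, pn_general(n, N, c, dx), limit)
```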

In the next section, we will derive the formulas that correspond to the function 𝑚(𝑥) in Section 1.

3. Computation of the Mean Number of Transitions 𝑑𝑗 Needed to End the Game

As in Section 2, we will first assume that $\Delta x = 1$. Then, with $n = 1 + j$ for $j = 0, 1, \ldots, k$ (and $1 + k = N$), the function $d_n$ satisfies the following second-order, linear, nonhomogeneous difference equation:
$$d_n = p_{n,n+1}\,d_{n+1} + p_{n,n-1}\,d_{n-1} + p_{n,n}\,d_n + 1 \quad \text{for } n = 2, \ldots, N-1. \qquad (3.1)$$

The boundary conditions are
$$d_1 = d_N = 0. \qquad (3.2)$$
We find that the difference equation can be rewritten as
$$(n+c)\,d_{n+1} - 2n\,d_n + (n-c)\,d_{n-1} = -\frac{2A}{n\sigma^2}. \qquad (3.3)$$

Let us now assume that $\mu = 0$, so that we must solve the second-order, linear, nonhomogeneous difference equation with constant coefficients
$$d_{n+1} - 2d_n + d_{n-1} = -\frac{2A}{n^2\sigma^2}. \qquad (3.4)$$
With the help of the mathematical software program Maple, we find that the unique solution that satisfies the boundary conditions (3.2) is
$$d_n = -\frac{n-1}{N-1}\,\frac{2A}{\sigma^2}\left[\Psi(N) + N\,\Psi(1,N) + \gamma - \frac{\pi^2}{6}\right] + \frac{2A}{\sigma^2}\left[\Psi(n) + n\,\Psi(1,n) + \gamma - \frac{\pi^2}{6}\right], \qquad (3.5)$$
where
$$\Psi(1,x) = \frac{d}{dx}\Psi(x) \qquad (3.6)$$
is the first polygamma (or trigamma) function.
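The expression (3.5) can be cross-checked by solving (3.4) directly as a linear system; a sketch (our addition, assuming scipy, with hypothetical parameters such that $A > \sigma^2 N^2$, as (1.15) requires when $\Delta x = 1$):

```python
import numpy as np
from scipy.special import digamma, polygamma

def dn_closed_form(n, N, A, sigma2):
    """Evaluate (3.5), with h(x) = Psi(x) + x*Psi(1, x); h(1) = pi^2/6 - gamma."""
    h = lambda x: digamma(x) + x * polygamma(1, x)
    C = 2 * A / sigma2
    return C * (h(n) - h(1)) - (n - 1) / (N - 1) * C * (h(N) - h(1))

N, A, sigma2 = 15, 10.0, 0.04      # hypothetical; A > sigma2 * N^2
n = np.arange(1, N + 1)
M, b = np.zeros((N, N)), np.zeros(N)
M[0, 0] = M[-1, -1] = 1.0          # d_1 = d_N = 0
for i in range(1, N - 1):
    M[i, i - 1], M[i, i], M[i, i + 1] = 1.0, -2.0, 1.0
    b[i] = -2 * A / ((i + 1)**2 * sigma2)   # right-hand side of (3.4), state i+1
assert np.allclose(np.linalg.solve(M, b), dn_closed_form(n, N, A, sigma2))
```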

Next, in the general case $\Delta x > 0$, we must solve (with $c = 0$)
$$d_{j+1} - 2d_j + d_{j-1} = -\frac{2A}{(1+j\Delta x)^2\sigma^2} \qquad (3.7)$$
for $j = 1, \ldots, k-1$. The solution that satisfies the boundary conditions
$$d_0 = d_k = 0 \qquad (3.8)$$
is given by
$$d_j = -\frac{j}{k}\,\frac{2A}{\sigma^2(\Delta x)^3}\left[(1+k\Delta x)\,\Psi\!\left(1,\frac{1+k\Delta x}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{1+k\Delta x}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right] + \frac{2A}{\sigma^2(\Delta x)^3}\left[(1+j\Delta x)\,\Psi\!\left(1,\frac{1+j\Delta x}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{1+j\Delta x}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right]. \qquad (3.9)$$
In terms of $n = 1 + j\Delta x$ and $N = 1 + k\Delta x$, this expression becomes
$$d_n = -\frac{n-1}{N-1}\,\frac{2A}{\sigma^2(\Delta x)^3}\left[N\,\Psi\!\left(1,\frac{N}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{N}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right] + \frac{2A}{\sigma^2(\Delta x)^3}\left[n\,\Psi\!\left(1,\frac{n}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{n}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right]. \qquad (3.10)$$

Finally, the mean duration of the game is obtained by multiplying $d_n$ by $\Delta t$. Making use of the fact that (see (1.14)) $\Delta t = (\Delta x)^2/A$, we obtain the following proposition.

Proposition 3.1. When $\Delta x > 0$ and $\mu = 0$, the mean duration $D_n$ of the game is given by
$$D_n = -\frac{n-1}{N-1}\,\frac{2}{\sigma^2\Delta x}\left[N\,\Psi\!\left(1,\frac{N}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{N}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right] + \frac{2}{\sigma^2\Delta x}\left[n\,\Psi\!\left(1,\frac{n}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{n}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right] \qquad (3.11)$$
for $n = 1, 1+\Delta x, \ldots, 1+k\Delta x = N$.

Next, using the fact that
$$\Psi(x) \approx \ln x, \qquad \Psi(1,x) \approx \frac{1}{x} \quad \text{for } x \text{ large}, \qquad (3.12)$$
we obtain that, as $\Delta x$ decreases to zero and $1 + k\Delta x$ remains equal to $N$,
$$D_n \to \frac{2}{\sigma^2}\left[\ln n - \frac{n-1}{N-1}\,\ln N\right] \quad \text{for } n \in [1, N]. \qquad (3.13)$$
Notice that $D_n$ indeed corresponds to the function $m(x)$ given in (1.11) when $c = 0$.
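Numerically, the convergence in (3.13) is already visible for moderate $\Delta x$; a short sketch (our addition, assuming scipy; the values of $\Delta x$ are chosen so that $n$ and $N$ are grid points):

```python
import numpy as np
from scipy.special import digamma, polygamma

def Dn(n, N, sigma2, dx):
    """Evaluate (3.11), with F(x) = x*Psi(1, x/dx) + dx*Psi(x/dx)."""
    F = lambda x: x * polygamma(1, x / dx) + dx * digamma(x / dx)
    C = 2 / (sigma2 * dx)
    return C * (F(n) - F(1.0)) - (n - 1) / (N - 1) * C * (F(N) - F(1.0))

n, N, sigma2 = 4.0, 10.0, 0.04
limit = (2 / sigma2) * (np.log(n) - (n - 1) / (N - 1) * np.log(N))   # (3.13)
for dx in [0.5, 0.1, 0.02]:
    print(dx, float(Dn(n, N, sigma2, dx)), limit)
```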

To complete our work, we need to find the value of the mean number of transitions $d_j$ in the case when $\mu \ne 0$ and $\Delta x > 0$. To do so, we must solve the nonhomogeneous difference equation (3.3), which has nonconstant coefficients. We can obtain the general solution to the corresponding homogeneous equation; however, we then need a particular solution to the nonhomogeneous equation, which entails evaluating a difficult sum. Instead, we will use the fact that we know how to compute $d_j$ when $\mu = 0$.

Let us go back to the geometric Brownian motion $\{X(t),\, t \ge 0\}$ defined in (1.1), and let us define, for $c \ne 1/2$,
$$Y(t) = [X(t)]^{1-2c}. \qquad (3.14)$$
Then, we find (see Karlin and Taylor [7, page 173]) that $\{Y(t),\, t \ge 0\}$ remains a geometric Brownian motion, with infinitesimal variance $\sigma_Y^2(y) = (1-2c)^2\sigma^2 y^2$, but with infinitesimal mean $\mu_Y = 0$. In the case when $c = 1/2$, we define
$$Y(t) = \ln[X(t)], \qquad (3.15)$$
and we obtain that $\{Y(t),\, t \ge 0\}$ is a Wiener process with $\mu_Y = 0$ and $\sigma_Y^2 = \sigma^2$.
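For completeness, the zero-drift claim can be verified by a standard application of Itô's formula (this computation is not spelled out in [7]): with $c = \mu/\sigma^2$ and $(dX)^2 = \sigma^2 X^2\,dt$,
$$\begin{aligned}
dY &= (1-2c)X^{-2c}\,dX + \tfrac{1}{2}(1-2c)(-2c)X^{-2c-1}\,(dX)^2 \\
   &= (1-2c)X^{1-2c}\left(\mu - c\sigma^2\right)dt + (1-2c)\,\sigma X^{1-2c}\,dB(t) \\
   &= (1-2c)\,\sigma\,Y(t)\,dB(t),
\end{aligned}$$
since $\mu - c\sigma^2 = 0$. Hence $\mu_Y = 0$ and $\sigma_Y^2(y) = (1-2c)^2\sigma^2 y^2$, as stated.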

Remark 3.2. When $c = 1/2$, we find that $\{X(t),\, t \ge 0\}$ can be expressed as the exponential of a Wiener process $\{W(t),\, t \ge 0\}$ having infinitesimal mean $\mu_W = 0$ and infinitesimal variance $\sigma_W^2 = \sigma^2$.

When we make the transformation $Y(t) = [X(t)]^{1-2c}$, the interval $[1, N]$ becomes $[1, N^{1-2c}]$, respectively $[N^{1-2c}, 1]$, if $c < 1/2$, respectively $c > 1/2$. Assume first that $c < 1/2$. We have (see (1.2))
$$\tau(x) = \inf\{t > 0 : Y(t) = 1 \text{ or } N^{1-2c} \mid Y(0) = x^{1-2c}\}. \qquad (3.16)$$

Now, we consider the discrete-time Markov chain with state space $\{1, 1+\Delta x, \ldots, 1+k\Delta x = N^{1-2c}\}$ and transition probabilities given by (1.13). Proceeding as above, we obtain the expression in (3.9) for the mean number of transitions $d_j$ from state $1 + j\Delta x$. This time, we replace $1 + j\Delta x$ by $n^{1-2c}$ and $1 + k\Delta x$ by $N^{1-2c}$, so that
$$d_n = -\frac{n^{1-2c} - 1}{N^{1-2c} - 1}\,\frac{2A}{\sigma^2(\Delta x)^3}\left[N^{1-2c}\,\Psi\!\left(1,\frac{N^{1-2c}}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{N^{1-2c}}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right] + \frac{2A}{\sigma^2(\Delta x)^3}\left[n^{1-2c}\,\Psi\!\left(1,\frac{n^{1-2c}}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{n^{1-2c}}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right] \qquad (3.17)$$
for $n = 1, (1+\Delta x)^{1/(1-2c)}, \ldots, (1+k\Delta x)^{1/(1-2c)} = N$.

Assume that each displacement takes
$$\Delta t = \frac{(\Delta x)^2}{(1-2c)^2 A} \qquad (3.18)$$
time units. Taking the limit as $\Delta x$ decreases to zero (and $k \to \infty$), we obtain (making use of the formulas in (3.12)) that
$$D_n \to \frac{2}{(1-2c)\sigma^2}\left[\ln n - \frac{n^{1-2c} - 1}{N^{1-2c} - 1}\,\ln N\right] \quad \text{for } n \in [1, N]. \qquad (3.19)$$
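A numerical sanity check of (3.17)-(3.19) (our addition, assuming numpy, with hypothetical parameters): solve (3.7) on the transformed grid, convert steps to time with (3.18), and compare with $m(x)$ from (1.11). The right endpoint of the grid only approximately equals $N^{1-2c}$, which suffices for small $\Delta x$.

```python
import numpy as np

mu, sigma2, N, n = 0.004, 0.04, 10.0, 4.0   # c = mu/sigma2 = 0.1 < 1/2
c, dx = mu / sigma2, 0.01
k = round((N**(1 - 2*c) - 1) / dx)          # grid 1, 1+dx, ..., 1+k*dx ~ N^(1-2c)
A = 1.1 * sigma2 * (1 + k * dx)**2          # keeps (1.15) satisfied

# Solve (3.7) with d_0 = d_k = 0.
M, b = np.zeros((k + 1, k + 1)), np.zeros(k + 1)
M[0, 0] = M[k, k] = 1.0
for j in range(1, k):
    M[j, j - 1], M[j, j], M[j, j + 1] = 1.0, -2.0, 1.0
    b[j] = -2 * A / ((1 + j * dx)**2 * sigma2)
d = np.linalg.solve(M, b)

j0 = round((n**(1 - 2*c) - 1) / dx)                   # start n -> state ~ n^(1-2c)
D = d[j0] * dx**2 / ((1 - 2*c)**2 * A)                # time step (3.18)
frac = (n**(1 - 2*c) - 1) / (N**(1 - 2*c) - 1)
m = 2 / ((1 - 2*c) * sigma2) * (np.log(n) - frac * np.log(N))   # (1.11)
print(D, m)   # the two values agree closely for small dx
```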

This formula corresponds to the function 𝑚(𝑥) in (1.11) when 𝑐<1/2.

When $c > 1/2$, we consider the Markov chain having state space
$$\left\{\frac{1}{N^{2c-1}} = \frac{1}{1+k\Delta x},\ \frac{1}{1+(k-1)\Delta x},\ \ldots,\ \frac{1}{1+\Delta x},\ 1\right\} \qquad (3.20)$$
(and transition probabilities given by (1.13)). To obtain $d_j$, we must again solve the difference equation (3.7), subject to the boundary conditions $d_0 = d_k = 0$. However, once we have obtained the solution, we must now replace $1 + j\Delta x$ by $(1 + j\Delta x)^{-1}$ (and $1 + k\Delta x$ by $(1 + k\Delta x)^{-1}$). Moreover, because
$$j = \frac{(1 + j\Delta x) - 1}{\Delta x}, \qquad (3.21)$$
we replace $j$ by
$$\frac{1/(1 + j\Delta x) - 1}{\Delta x} = -\frac{j}{1 + j\Delta x} \qquad (3.22)$$

(and similarly for $k$).

Remark 3.3. The quantity $d_j$ here actually represents the mean number of steps needed to end the game when the Markov chain starts from state $1/(1 + j\Delta x)$, with $j \in \{0, \ldots, k\}$.

We obtain that
$$d_j = -\frac{j\,(1+k\Delta x)}{k\,(1+j\Delta x)}\,\frac{2A}{\sigma^2(\Delta x)^3}\left[\frac{1}{1+k\Delta x}\,\Psi\!\left(1,\frac{1}{(1+k\Delta x)\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{1}{(1+k\Delta x)\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right] + \frac{2A}{\sigma^2(\Delta x)^3}\left[\frac{1}{1+j\Delta x}\,\Psi\!\left(1,\frac{1}{(1+j\Delta x)\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{1}{(1+j\Delta x)\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right]. \qquad (3.23)$$

Next, since $N^{2c-1} = 1 + k\Delta x$, setting $n^{2c-1} = 1 + j\Delta x$ we deduce from the previous expression that
$$d_n = -\frac{(n^{2c-1} - 1)\,N^{2c-1}}{(N^{2c-1} - 1)\,n^{2c-1}}\,\frac{2A}{\sigma^2(\Delta x)^3}\left[\frac{1}{N^{2c-1}}\,\Psi\!\left(1,\frac{N^{1-2c}}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{N^{1-2c}}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right] + \frac{2A}{\sigma^2(\Delta x)^3}\left[\frac{1}{n^{2c-1}}\,\Psi\!\left(1,\frac{n^{1-2c}}{\Delta x}\right) + \Delta x\,\Psi\!\left(\frac{n^{1-2c}}{\Delta x}\right) - \Psi\!\left(1,\frac{1}{\Delta x}\right) - \Delta x\,\Psi\!\left(\frac{1}{\Delta x}\right)\right] \qquad (3.24)$$
for $n \in \{1, (1+\Delta x)^{1/(2c-1)}, \ldots, (1+k\Delta x)^{1/(2c-1)} = N\}$.

Finally, if we assume, as above, that each step of the Markov chain takes
$$\Delta t = \frac{(\Delta x)^2}{(2c-1)^2 A} \qquad (3.25)$$
time units, we find that, when $\Delta x$ decreases to zero, the mean duration of the game tends to
$$D_n = \frac{2}{(2c-1)\sigma^2}\left[\frac{n^{1-2c} - 1}{N^{1-2c} - 1}\,\ln N - \ln n\right] \quad \text{for } n \in [1, N]. \qquad (3.26)$$

This last expression is equivalent to the formula for 𝑚(𝑥) in (1.11) when 𝑐>1/2.

Remark 3.4. Actually, the formula for 𝑚(𝑥) is the same whether 𝑐<1/2 or 𝑐>1/2.

At last, in the case when $c = 1/2$, we consider the random walk with state space $\{0, \Delta x, \ldots, k\Delta x = \ln N\}$ and transition probabilities
$$p_{j\Delta x,(j+1)\Delta x} = p_{j\Delta x,(j-1)\Delta x} = \frac{\sigma^2}{2A}, \qquad p_{j\Delta x,j\Delta x} = 1 - \frac{\sigma^2}{A}. \qquad (3.27)$$
Then, we must solve the nonhomogeneous difference equation
$$d_{j+1} - 2d_j + d_{j-1} = -\frac{2A}{\sigma^2}, \qquad (3.28)$$
subject to the boundary conditions $d_0 = d_k = 0$. We find that
$$d_j = \frac{A}{\sigma^2}\,j\,(k - j). \qquad (3.29)$$

With $\ln n = j\Delta x$ and $\ln N = k\Delta x$, we get that
$$d_n = \frac{A}{\sigma^2(\Delta x)^2}\,\ln n\,(\ln N - \ln n) \qquad (3.30)$$
for $n \in \{1, e^{\Delta x}, \ldots, e^{k\Delta x} = N\}$. Assuming that $\Delta t = (\Delta x)^2/A$, we deduce at once that, as $\Delta x$ decreases to zero,
$$D_n \to \frac{1}{\sigma^2}\,\ln n\,(\ln N - \ln n) \quad \text{for } n \in [1, N]. \qquad (3.31)$$
Thus, we retrieve the formula (1.12) for $m(x)$ when $c = 1/2$.
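The $c = 1/2$ case can be verified end to end (our addition, assuming numpy): (3.29) satisfies (3.28) exactly, and here $d_j\,\Delta t$ reproduces the limiting value (1.12) exactly, even before letting $\Delta x$ decrease to zero.

```python
import numpy as np

A, sigma2, k, dx = 2.0, 0.5, 40, 0.05     # sigma2/(2A) = 0.125, so (3.27) is valid
j = np.arange(k + 1)
d = (A / sigma2) * j * (k - j)                                      # (3.29)
assert np.allclose(d[2:] - 2 * d[1:-1] + d[:-2], -2 * A / sigma2)   # (3.28)
assert d[0] == 0 and d[k] == 0

n, N = np.exp(j * dx), np.exp(k * dx)     # ln(n) = j*dx, ln(N) = k*dx
D = d * dx**2 / A                         # dt = dx^2 / A
assert np.allclose(D, np.log(n) * (np.log(N) - np.log(n)) / sigma2) # (1.12)
```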

We can now state the following proposition.

Proposition 3.5. If the state space of the Markov chain is
$$\{1, (1+\Delta x)^{1/(1-2c)}, \ldots, (1+k\Delta x)^{1/(1-2c)} = N\}, \qquad (3.32)$$
respectively
$$\{1, (1+\Delta x)^{1/(2c-1)}, \ldots, (1+k\Delta x)^{1/(2c-1)} = N\}, \qquad (3.33)$$
where $c < 1/2$, respectively $c > 1/2$, and the transition probabilities are those in (1.13), then the mean number of steps $d_n$ needed to end the game is given by (3.17), respectively (3.24). If $n \in \{1, e^{\Delta x}, \ldots, e^{k\Delta x} = N\}$ and the transition probabilities are the ones in (3.27), then the value of $d_n$ is given by (3.30).

4. Concluding Remarks

We have obtained explicit and exact formulas for the quantities $p_j$ and $d_j$ defined, respectively, in (1.17) and (1.18) for various discrete-time Markov chains that converge, at least in a finite interval, to a geometric Brownian motion. In the case of the probability $p_j$ of hitting the boundary $N$ before 1, because the appropriate difference equation is homogeneous, we were able to compute this probability for any value of $c = \mu/\sigma^2$ by considering a Markov chain with state space $\{1, 1+\Delta x, \ldots, 1+k\Delta x = N\}$. However, to obtain $d_j$ we first solved the appropriate difference equation when $c = 0$. Then, making use of the formula that we obtained, we were able to deduce the solution for any $c$ by considering a Markov chain that converges to a transformation of the geometric Brownian motion. The transformed process was a geometric Brownian motion with $\mu = 0$ (if $c \ne 1/2$), or a Wiener process with $\mu = 0$ (if $c = 1/2$). In each case, we showed that the expression we derived tends to the corresponding quantity for the geometric Brownian motion. In the case of the mean duration of the game, the time increment $\Delta t$ had to be chosen suitably.

As is well known, the geometric Brownian motion is a very important model in financial mathematics, in particular. In practice, stock or commodity prices vary discretely over time. Therefore, it is interesting to derive formulas for 𝑝𝑗 and 𝑑𝑗 for Markov chains that are as close as we want to the diffusion process.

Now that we have computed explicitly the values of $p_j$ and $d_j$ for Markov chains having transition probabilities that involve parameters $\mu$ and $\sigma^2$ that are the same for all states, we could consider asymmetric Markov chains. For example, at first the state space could be $\{1, \ldots, N_1, \ldots, N_2\}$, and we could have
$$\mu = \begin{cases} \mu_1 & \text{if } n \in \{1, \ldots, N_1\}, \\ \mu_2 & \text{if } n \in \{N_1 + 1, \ldots, N_2\} \end{cases} \qquad (4.1)$$

(and similarly for $\sigma^2$). When the Markov chain hits $N_1$, it goes to $N_1 + 1$, respectively $N_1 - 1$, with probability $p_0$, respectively $1 - p_0$. By increasing the state space to $\{1, 1+\Delta x, \ldots, 1+k_1\Delta x = N_1, \ldots, 1+k_2\Delta x = N_2\}$ and taking the limit as $\Delta x$ decreases to zero (with $k_1$ and $k_2$ going to infinity appropriately), we would obtain the quantities that correspond to $p_j$ and $d_j$ for an asymmetric geometric Brownian motion. The possibly different values of $\sigma^2$, depending on the state $n$ of the Markov chain, reflect the fact that volatility is likely to depend on the price of the stock or commodity.

Finally, we could try to derive the formulas for 𝑝𝑗 and 𝑑𝑗 for other discrete-time Markov chains that converge to important one-dimensional diffusion processes.