Abstract
We consider a discrete-time Markov chain with state space $\{1, 1+\delta, \ldots, N\}$. We compute explicitly the probability that the chain, starting from $x$, will hit $N$ before 1, as well as the expected number of transitions needed to end the game. In the limit when $\delta$ and the time $\Delta t$ between the transitions decrease to zero appropriately, the Markov chain tends to a geometric Brownian motion. We show that the hitting probability and the expected duration tend to the corresponding quantities for the geometric Brownian motion.
1. Introduction
Let $\{X(t),\, t \ge 0\}$ be a one-dimensional geometric Brownian motion defined by the stochastic differential equation
$$dX(t) = \mu X(t)\,dt + \sigma X(t)\,dB(t), \qquad (1.1)$$
where $\mu \in \mathbb{R}$, $\sigma > 0$, and $\{B(t),\, t \ge 0\}$ is a standard Brownian motion. Assume that $X(0) = x \in [1, N]$, where $N > 1$ (for simplicity), and define
$$\tau(x) = \inf\{t \ge 0 : X(t) = 1 \text{ or } N \mid X(0) = x\}. \qquad (1.2)$$
As is well known (see, e.g., Lefebvre [1, page 220]), the probability
$$p(x) := P[X(\tau(x)) = N] \qquad (1.3)$$
satisfies the ordinary differential equation
$$\frac{\sigma^2}{2}\, x^2 p''(x) + \mu x\, p'(x) = 0 \quad \text{for } 1 < x < N, \qquad (1.4)$$
subject to the boundary conditions
$$p(1) = 0, \qquad p(N) = 1. \qquad (1.5)$$
We easily find that, if $\mu \ne \sigma^2/2$,
$$p(x) = \frac{x^{1 - 2\mu/\sigma^2} - 1}{N^{1 - 2\mu/\sigma^2} - 1}. \qquad (1.6)$$
When $\mu = \sigma^2/2$, the solution is
$$p(x) = \frac{\ln x}{\ln N}. \qquad (1.7)$$
Moreover, the function
$$m(x) := E[\tau(x)] \qquad (1.8)$$
satisfies the ordinary differential equation (see, again, Lefebvre [1, page 220])
$$\frac{\sigma^2}{2}\, x^2 m''(x) + \mu x\, m'(x) = -1, \qquad (1.9)$$
subject to
$$m(1) = m(N) = 0. \qquad (1.10)$$
This time, if $\mu \ne \sigma^2/2$ we find that
$$m(x) = \frac{p(x)\ln N - \ln x}{\mu - \sigma^2/2}, \qquad (1.11)$$
and, for $\mu = \sigma^2/2$,
$$m(x) = \frac{\ln x\,(\ln N - \ln x)}{\sigma^2}. \qquad (1.12)$$
Now, it can be shown (see Cox and Miller [2, page 213]) that the discrete-time Markov chain $\{X_n,\, n = 0, 1, \ldots\}$ with state space $\{1, 1+\delta, 1+2\delta, \ldots, N\}$, where $\delta \in (0, 1]$ is such that $(N-1)/\delta \in \mathbb{N}$, the states 1 and $N$ absorbing, and transition probabilities
$$p_{x,x+\delta} = \frac{(\sigma^2 x^2 + \mu x \delta)\,\Delta t}{2\delta^2}, \qquad p_{x,x-\delta} = \frac{(\sigma^2 x^2 - \mu x \delta)\,\Delta t}{2\delta^2}, \qquad (1.13)$$
where $\Delta t > 0$ is the time between the transitions and
$$p_{x,x} = 1 - \frac{\sigma^2 x^2\,\Delta t}{\delta^2}, \qquad (1.14)$$
converges to the geometric Brownian motion as $\delta$ and $\Delta t$ decrease to zero, provided that
$$\Delta t \le \frac{\delta^2}{\sigma^2 N^2}. \qquad (1.15)$$
Remarks 1.1. (i) We assume that all the probabilities defined by (1.13) are well defined; that is, they all belong to the interval $[0, 1]$.
(ii) The condition in (1.15) implies that the probability $p_{x,x}$ in (1.14) is nonnegative for every state $x$, and that $\Delta t$ decreases to zero together with $\delta$.
Let
$$T(x) = \min\{n \ge 0 : X_n = 1 \text{ or } N \mid X_0 = x\} \qquad (1.16)$$
and
$$u(x) = P[X_{T(x)} = N]. \qquad (1.17)$$
In the next section, we will compute the quantity $u(x)$ for $x \in \{1+\delta, \ldots, N-\delta\}$. We will show that $u(x)$ converges to the function $p(x)$ for the geometric Brownian motion as $\delta$ decreases to zero and the number of states tends to infinity in such a way that the largest state remains equal to $N$.
In Section 3, we will compute the mean number of transitions needed to end the game, namely,
$$D(x) = E[T(x)]. \qquad (1.18)$$
By making a change of variable to transform the diffusion process into a geometric Brownian motion with infinitesimal mean equal to zero and by considering the corresponding discrete-time Markov chain, we will obtain an explicit and exact expression for $D(x)$ that, when multiplied by $\Delta t$, tends to $m(x)$ if the time $\Delta t$ between the transitions is chosen suitably.
The motivation for our work is the following. Lefebvre [3] computed the hitting probability and the expected duration for asymmetric Wiener processes, that is, for Wiener processes for which the infinitesimal means $\mu_1$ and $\mu_2$, and infinitesimal variances $\sigma_1^2$ and $\sigma_2^2$, are not necessarily the same on the negative and the positive part of the interval. To confirm his results, he considered a random walk that converges to the Wiener process. Lefebvre's results were extended by Abundo [4] to general one-dimensional diffusion processes. However, Abundo did not obtain the corresponding quantities for the discrete-time Markov chains. Also, it is worth mentioning that asymmetric diffusion processes need not be defined in an interval that includes the origin. A process defined in the interval $[a, b]$ can be asymmetric with respect to any $x_0 \in (a, b)$.
Next, Lefebvre and Guilbault [5] and Guilbault and Lefebvre [6] computed the hitting probability and the expected duration, respectively, for a discrete-time Markov chain that tends to the Ornstein-Uhlenbeck process. The authors also computed the hitting probability in the case when the Markov chain is asymmetric (as in Lefebvre [3]).
Asymmetric processes can be used in financial mathematics to model the price of a stock when, in particular, the infinitesimal variance (i.e., the volatility) tends to increase with the price of the stock. Indeed, it seems logical that the volatility is larger when the stock price is very large than when it is close to zero. The prices of commodities, such as gold and oil, are also more volatile when they reach a certain level.
In order to check the validity of the expressions obtained by Abundo [4] for the hitting probability and the expected duration, it is important to obtain the corresponding quantities for the discrete-time Markov chains and then proceed by taking the limit as $\delta$ and $\Delta t$ decrease to zero appropriately. Moreover, the formulas that will be derived in the present paper are interesting in themselves, since in reality stock or commodity prices do not vary completely continuously.
First passage problems for Markov chains have many applications. For example, in neural networks, an important quantity is the interspike time, that is, the time between spikes of a firing neuron (which means that the neuron sends a signal to other neurons). Discrete-time Markov chains have been used as models in this context, and the interspike time is the number of steps it takes the chain to reach the threshold at which firing occurs.
2. Computation of the Probability
Assume first that $\delta = 1$, so that the state space is $\{1, 2, \ldots, N\}$ and the transition probabilities become
$$p_j := p_{j,j+1} = \frac{(\sigma^2 j^2 + \mu j)\,\Delta t}{2}, \qquad q_j := p_{j,j-1} = \frac{(\sigma^2 j^2 - \mu j)\,\Delta t}{2} \qquad (2.1)$$
for $j = 2, \ldots, N-1$. The probability $u(j)$ defined in (1.17) satisfies the following difference equation:
$$u(j) = p_j\, u(j+1) + q_j\, u(j-1) + (1 - p_j - q_j)\, u(j). \qquad (2.2)$$
That is,
$$u(j+1) - u(j) = \frac{j - c}{j + c}\,\bigl[u(j) - u(j-1)\bigr], \qquad (2.3)$$
where $c := \mu/\sigma^2$. The boundary conditions are
$$u(1) = 0, \qquad u(N) = 1. \qquad (2.4)$$
In the special case when $\mu = 0$, (2.3) reduces to the second-order difference equation with constant coefficients
$$u(j+1) - 2u(j) + u(j-1) = 0. \qquad (2.5)$$
We easily find that the (unique) solution that satisfies the boundary conditions (2.4) is
$$u(j) = \frac{j-1}{N-1}. \qquad (2.6)$$
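The gambler's-ruin solution (2.6) can be confirmed by solving the boundary-value problem (2.5) exactly with a shooting argument; the following Python sketch (the naming is ours) uses exact rational arithmetic:

```python
from fractions import Fraction

def hit_prob_mu0(N):
    """Solve u(j+1) - 2u(j) + u(j-1) = 0, u(1) = 0, u(N) = 1 by shooting."""
    u = [Fraction(0), Fraction(0), Fraction(1)]  # u[0] unused; u(1)=0, trial u(2)=1
    for j in range(2, N):
        u.append(2 * u[j] - u[j - 1])            # extend the trial path to u(j+1)
    scale = u[N]                                  # trial value at the boundary N
    return [v / scale for v in u]                 # rescale so that u(N) = 1

N = 10
u = hit_prob_mu0(N)
assert all(u[j] == Fraction(j - 1, N - 1) for j in range(1, N + 1))
```

The rescaling step is legitimate because the solutions of the homogeneous equation (2.5) with $u(1) = 0$ form a one-dimensional family.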
Assume now that $\mu \ne 0$. Letting
$$v(j) := u(j+1) - u(j), \qquad (2.7)$$
Equation (2.3) can be rewritten as
$$v(j) = \frac{j - c}{j + c}\, v(j-1). \qquad (2.8)$$
Using the mathematical software program Maple, we find that the general solution of this first-order difference equation is given by
$$v(j) = C_0\, \frac{\Gamma(j + 1 - c)}{\Gamma(j + 1 + c)}, \qquad (2.9)$$
where $\Gamma$ is the gamma function and $C_0$ is a constant.
Next, we must solve the first-order difference equation
$$u(j+1) - u(j) = v(j), \qquad (2.10)$$
where
$$v(j) = C_0\, \frac{\Gamma(j + 1 - c)}{\Gamma(j + 1 + c)}, \qquad (2.11)$$
subject to the boundary conditions (2.4). We find that, if $c \ne 1/2$, then
$$u(j) = C_1 + \frac{C_0}{1 - 2c}\, \frac{\Gamma(j + 1 - c)}{\Gamma(j + c)}, \qquad (2.12)$$
where $C_1$ is a constant. Applying the boundary conditions (2.4), we obtain that
$$u(j) = \frac{\dfrac{\Gamma(j+1-c)}{\Gamma(j+c)} - \dfrac{\Gamma(2-c)}{\Gamma(1+c)}}{\dfrac{\Gamma(N+1-c)}{\Gamma(N+c)} - \dfrac{\Gamma(2-c)}{\Gamma(1+c)}}. \qquad (2.13)$$
Remark 2.1. When $c$ tends to 1/2, the solution becomes
$$u(j) = \frac{\psi(j + 1/2) + \gamma + 2\ln 2 - 2}{\psi(N + 1/2) + \gamma + 2\ln 2 - 2}, \qquad (2.14)$$
where $\gamma$ is Euler's constant and $\psi$ is the digamma function defined by
$$\psi(z) := \frac{\Gamma'(z)}{\Gamma(z)}. \qquad (2.15)$$
Notice that
$$\psi(j + 1/2) = -\gamma - 2\ln 2 + 2\sum_{k=1}^{j} \frac{1}{2k-1}, \qquad (2.16)$$
so that we indeed have $u(1) = 0$, and the solution (2.14) can be rewritten as
$$u(j) = \sum_{k=2}^{j} \frac{1}{2k-1} \Bigg/ \sum_{k=2}^{N} \frac{1}{2k-1}. \qquad (2.17)$$
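The gamma-function solution can be cross-checked against a direct numerical solution of the recursion (2.8); the following Python sketch (the helper names are ours) compares the two for every state, using log-gamma values to avoid overflow:

```python
import math

def u_recursive(N, c):
    """Forward recursion (2.8): v(j) = ((j-c)/(j+c)) v(j-1), then normalize."""
    v = [1.0]                                    # v(1), up to a constant factor
    for j in range(2, N):
        v.append(v[-1] * (j - c) / (j + c))
    cum = [0.0]
    for w in v:
        cum.append(cum[-1] + w)                  # u(j) is proportional to v(1)+...+v(j-1)
    return [s / cum[-1] for s in cum]            # index j-1 holds u(j); u(1)=0, u(N)=1

def u_gamma(j, N, c):
    """Closed form (2.13) evaluated via log-gamma."""
    F = lambda k: math.exp(math.lgamma(k + 1 - c) - math.lgamma(k + c))
    return (F(j) - F(1)) / (F(N) - F(1))

N, c = 12, 0.3
u = u_recursive(N, c)
for j in range(1, N + 1):
    assert math.isclose(u[j - 1], u_gamma(j, N, c), rel_tol=1e-9, abs_tol=1e-12)
```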
Now, in the general case when $\delta \in (0, 1]$, we must solve the difference equation
$$u(j) = p_{x_j, x_j+\delta}\, u(j+1) + p_{x_j, x_j-\delta}\, u(j-1) + p_{x_j, x_j}\, u(j), \qquad (2.18)$$
where $x_j := 1 + (j-1)\delta$, which can be simplified to
$$u(j+1) - u(j) = \frac{x_j - c\delta}{x_j + c\delta}\,\bigl[u(j) - u(j-1)\bigr]. \qquad (2.19)$$
The boundary conditions become
$$u(1) = 0, \qquad u(n) = 1, \qquad (2.20)$$
where $n := (N-1)/\delta + 1$ is the number of states.
When $\mu = 0$ (which implies that $c = 0$), the difference equation above reduces to the same one as when $\delta = 1$, namely (2.5). The solution is
$$u(j) = \frac{j-1}{n-1}. \qquad (2.21)$$
Writing
$$j = \frac{x-1}{\delta} + 1 \qquad (2.22)$$
and using the fact that (by hypothesis) $n - 1 = (N-1)/\delta$, we obtain that
$$u(x) = \frac{x-1}{N-1}. \qquad (2.23)$$
Notice that this solution does not depend on the increment $\delta$. Hence, if we let $\delta$ decrease to zero and $n$ tend to infinity in such a way that $1 + (n-1)\delta$ remains equal to $N$, we have that
$$\lim_{\delta \downarrow 0} u(x) = \frac{x-1}{N-1}, \qquad (2.24)$$
which is the same as the function in (1.6) when $\mu = 0$.
Next, proceeding as above, we obtain that, if $c \ne 0$, the probability $u(j)$ is given by
$$u(j) = \frac{\dfrac{\Gamma(j+1+\theta-c)}{\Gamma(j+\theta+c)} - \dfrac{\Gamma(2+\theta-c)}{\Gamma(1+\theta+c)}}{\dfrac{\Gamma(n+1+\theta-c)}{\Gamma(n+\theta+c)} - \dfrac{\Gamma(2+\theta-c)}{\Gamma(1+\theta+c)}}, \qquad (2.25)$$
where $\theta$ denotes $(1-\delta)/\delta$. In terms of $x$ and $N$, this expression becomes
$$u(x) = \frac{\dfrac{\Gamma(x/\delta+1-c)}{\Gamma(x/\delta+c)} - \dfrac{\Gamma(1/\delta+1-c)}{\Gamma(1/\delta+c)}}{\dfrac{\Gamma(N/\delta+1-c)}{\Gamma(N/\delta+c)} - \dfrac{\Gamma(1/\delta+1-c)}{\Gamma(1/\delta+c)}} \qquad (2.26)$$
for $c \ne 1/2$. When $c$ tends to 1/2, the solution reduces to
$$u(x) = \frac{\psi(x/\delta + 1/2) - \psi(1/\delta + 1/2)}{\psi(N/\delta + 1/2) - \psi(1/\delta + 1/2)}. \qquad (2.27)$$
We can now state the following proposition.
Proposition 2.2. Let $x = 1 + (j-1)\delta$ for $j \in \{1, \ldots, n\}$, with $n$ such that $1 + (n-1)\delta = N$. The probability $u(x)$ that the discrete-time Markov chain defined in Section 1, starting from $x$, will hit $N$ before 1 is given by (2.23) if $\mu = 0$, and by (2.26) if $c = \mu/\sigma^2 \notin \{0, 1/2\}$. The value of $u(x)$ tends to the function in (2.27) when $c$ tends to 1/2.
To complete this section, we will consider the limit as $\delta$ decreases to zero. We have already mentioned that when $\mu = 0$, the probability $u(x)$ does not depend on $\delta$, and it corresponds to the function in (1.6) with $\mu = 0$.
Next, when $c = 1/2$, making use of the formula
$$\psi(z) \sim \ln z - \frac{1}{2z} \quad \text{as } z \to \infty, \qquad (2.28)$$
we can write that
$$u(x) \approx \frac{\ln(x/\delta) - \ln(1/\delta)}{\ln(N/\delta) - \ln(1/\delta)} = \frac{\ln x}{\ln N}. \qquad (2.29)$$
Again, this expression corresponds to the function given in (1.7), obtained when $\mu = \sigma^2/2$.
Finally, we have
$$\frac{\Gamma(z + 1 - c)}{\Gamma(z + c)} \sim z^{1-2c} \qquad (2.30)$$
as $z$ tends to infinity (if $c \ne 1/2$). Hence, in the case when $c \notin \{0, 1/2\}$, we can write that, as $\delta$ decreases to zero,
$$u(x) \approx \frac{(x/\delta)^{1-2c} - (1/\delta)^{1-2c}}{(N/\delta)^{1-2c} - (1/\delta)^{1-2c}} = \frac{x^{1-2c} - 1}{N^{1-2c} - 1} \qquad (2.31)$$
for $1 \le x \le N$. Therefore, we retrieve the formula for $p(x)$ in (1.6).
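The convergence as $\delta$ decreases to zero can also be observed numerically; the sketch below (our own naming) evaluates the finite-$\delta$ formula (2.26) with log-gamma values and compares it with the limit (1.6):

```python
import math

def u_general(x, N, delta, c):
    """Closed form (2.26) for the chain with step delta (c not in {0, 1/2})."""
    F = lambda z: math.exp(math.lgamma(z / delta + 1 - c) - math.lgamma(z / delta + c))
    return (F(x) - F(1)) / (F(N) - F(1))

def p_limit(x, N, c):
    """Limiting probability (1.6), with 1 - 2*mu/sigma^2 = 1 - 2c."""
    a = 1 - 2 * c
    return (x**a - 1) / (N**a - 1)

x, N, c = 2.0, 5.0, 0.3
for delta in (0.1, 0.01, 0.001):
    # The gamma ratios behave like (z)^(1-2c) by (2.30); the error vanishes with delta.
    assert abs(u_general(x, N, delta, c) - p_limit(x, N, c)) < 5 * delta
```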
In the next section, we will derive the formulas that correspond to the function $m(x)$ defined in Section 1.
3. Computation of the Mean Number of Transitions Needed to End the Game
As in Section 2, we will first assume that $\delta = 1$. Then, with $p_j := p_{j,j+1}$ and $q_j := p_{j,j-1}$ as in Section 2, for $j = 2, \ldots, N-1$ (and $p_{j,j} = 1 - p_j - q_j$), the function $D(j)$ satisfies the following second-order, linear, nonhomogeneous difference equation:
$$p_j\,\bigl[D(j+1) - D(j)\bigr] + q_j\,\bigl[D(j-1) - D(j)\bigr] = -1. \qquad (3.1)$$
The boundary conditions are
$$D(1) = D(N) = 0. \qquad (3.2)$$
We find that the difference equation can be rewritten as
$$D(j+1) - D(j) = \frac{j-c}{j+c}\,\bigl[D(j) - D(j-1)\bigr] - \frac{2}{\sigma^2 \Delta t\; j(j+c)}. \qquad (3.3)$$
Let us now assume that $\mu = 0$, so that we must solve the second-order, linear, nonhomogeneous difference equation with constant coefficients
$$D(j+1) - 2D(j) + D(j-1) = -\frac{2}{\sigma^2 \Delta t\; j^2}. \qquad (3.4)$$
With the help of the mathematical software program Maple, we find that the unique solution that satisfies the boundary conditions (3.2) is
$$D(j) = \frac{2}{\sigma^2 \Delta t}\left\{\frac{j-1}{N-1}\Bigl[N\bigl(\psi'(2) - \psi'(N)\bigr) - \psi(N) - \gamma + 1\Bigr] - j\bigl(\psi'(2) - \psi'(j)\bigr) + \psi(j) + \gamma - 1\right\}, \qquad (3.5)$$
where $\psi'$ is the first polygamma function, defined by
$$\psi'(z) := \frac{d}{dz}\,\psi(z). \qquad (3.6)$$
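Since (3.5) comes from computer algebra, a cross-check is worthwhile. The sketch below (the naming is ours) solves (3.4) by a shooting method in exact rational arithmetic and compares with (3.5), rewritten with the identities $\psi'(2) - \psi'(j) = \sum_{i=2}^{j-1} i^{-2}$ and $\psi(j) + \gamma - 1 = \sum_{i=2}^{j-1} i^{-1}$, which give $j(\psi'(2)-\psi'(j)) - \psi(j) - \gamma + 1 = \sum_{i=2}^{j-1} (j-i)/i^2$:

```python
from fractions import Fraction

def D_shoot(N, kappa):
    """Shooting solution of D(j+1) - 2D(j) + D(j-1) = -kappa/j^2, D(1)=D(N)=0."""
    Dp = [Fraction(0), Fraction(0), Fraction(0)]   # index 0 unused; Dp(1)=Dp(2)=0
    for j in range(2, N):
        Dp.append(2 * Dp[j] - Dp[j - 1] - Fraction(kappa, j * j))
    s = -Dp[N] / (N - 1)                            # linear correction so D(N)=0
    return [Dp[j] + s * (j - 1) for j in range(N + 1)]

def g(j):
    """g(j) = sum_{i=2}^{j-1} (j - i)/i^2, the partial-sum form of (3.5)."""
    return sum(Fraction(j - i, i * i) for i in range(2, j))

N, kappa = 10, 200            # kappa = 2/(sigma^2 * dt), e.g. sigma^2 * dt = 1/100
Ds = D_shoot(N, kappa)
for j in range(1, N + 1):
    assert Ds[j] == kappa * (Fraction(j - 1, N - 1) * g(N) - g(j))
```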
Next, in the general case $\delta \in (0, 1]$ (still with $\mu = 0$), we must solve (with $x_j := 1 + (j-1)\delta$)
$$D(j+1) - 2D(j) + D(j-1) = -\frac{2\delta^2}{\sigma^2 \Delta t\; x_j^2} \qquad (3.7)$$
for $j = 2, \ldots, n-1$. The solution that satisfies the boundary conditions $D(1) = D(n) = 0$ is given by
$$D(j) = \frac{2\delta^2}{\sigma^2 \Delta t}\left[\frac{j-1}{n-1}\sum_{i=2}^{n-1}\frac{n-i}{x_i^2} - \sum_{i=2}^{j-1}\frac{j-i}{x_i^2}\right]. \qquad (3.8)$$
In terms of $x$ and $N$, this expression becomes
$$D(x) = \frac{2\delta}{\sigma^2 \Delta t}\left[\frac{x-1}{N-1}\sum_{1<u<N}\frac{N-u}{u^2} - \sum_{1<u<x}\frac{x-u}{u^2}\right], \qquad (3.9)$$
where the sums run over the states $u = 1+\delta, 1+2\delta, \ldots$ strictly between the indicated bounds.
Finally, the mean duration of the game is obtained by multiplying $D(x)$ by $\Delta t$, since each transition, including those during which the chain stays put (see (1.14)), takes $\Delta t$ time units. Notice that the product $\Delta t\, D(x)$ does not depend on $\Delta t$. We obtain the following proposition.
Proposition 3.1. When $\mu = 0$ and $\delta \in (0, 1]$, the mean duration of the game is given by
$$\Delta t\, D(x) = \frac{2\delta}{\sigma^2}\left[\frac{x-1}{N-1}\sum_{1<u<N}\frac{N-u}{u^2} - \sum_{1<u<x}\frac{x-u}{u^2}\right] \qquad (3.10)$$
for $x \in \{1, 1+\delta, \ldots, N\}$.
Next, using the fact that
$$\delta \sum_{1<u<y} \frac{y-u}{u^2} \longrightarrow \int_1^y \frac{y-u}{u^2}\, du = y - 1 - \ln y \quad \text{as } \delta \downarrow 0, \qquad (3.11)$$
we obtain that, as $\delta$ decreases to zero and $n$ tends to infinity in such a way that $1 + (n-1)\delta$ remains equal to $N$,
$$\Delta t\, D(x) \longrightarrow \frac{2}{\sigma^2}\left[\ln x - \frac{x-1}{N-1}\,\ln N\right]. \qquad (3.12)$$
Notice that this limit indeed corresponds to the function $m(x)$ given in (1.11) if $\mu = 0$.
To complete our work, we need to find the value of the mean number of transitions $D(x)$ in the case when $\mu \ne 0$ and $\delta \in (0, 1]$. To do so, we must solve the nonhomogeneous difference equation with nonconstant coefficients (3.3). We can obtain the general solution to the corresponding homogeneous equation. However, we then need to find a particular solution to the nonhomogeneous equation, and this entails evaluating a difficult sum. Instead, we will use the fact that we know how to compute $D(x)$ when $\mu = 0$.
Let us go back to the geometric Brownian motion defined in (1.1), and let us define, for $\mu \ne \sigma^2/2$,
$$Y(t) = [X(t)]^{\nu}, \qquad \nu := 1 - \frac{2\mu}{\sigma^2}. \qquad (3.13)$$
Then, we find (see Karlin and Taylor [7, page 173]) that $\{Y(t),\, t \ge 0\}$ remains a geometric Brownian motion, with infinitesimal variance $\nu^2 \sigma^2 y^2$, but with infinitesimal mean equal to zero. In the case when $\mu = \sigma^2/2$, we define
$$Y(t) = \ln X(t), \qquad (3.14)$$
and we obtain that $\{Y(t)\}$ is a Wiener process with infinitesimal mean 0 and infinitesimal variance $\sigma^2$.
Remark 3.2. When $\mu \ne \sigma^2/2$, we find that $X(t)$ can be expressed as the exponential of a Wiener process having infinitesimal mean $\mu - \sigma^2/2$ and infinitesimal variance $\sigma^2$.
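The zero-drift property of the transformation (3.13) follows from Ito's formula: the infinitesimal mean coefficient of $Y = X^{\nu}$ is $\nu\mu + \nu(\nu-1)\sigma^2/2$, which vanishes precisely for $\nu = 1 - 2\mu/\sigma^2$. A minimal check in exact arithmetic (the naming is ours):

```python
from fractions import Fraction

def transformed_coefficients(mu, sigma2):
    """Ito drift coefficient of Y = X^nu for a GBM X with parameters (mu, sigma2):
    infinitesimal mean [nu*mu + nu*(nu-1)*sigma2/2]*y, variance nu^2*sigma2*y^2."""
    nu = 1 - 2 * mu / sigma2
    drift = nu * mu + nu * (nu - 1) * sigma2 / 2
    return drift, nu

for mu in (Fraction(1, 10), Fraction(-1, 4), Fraction(9, 10)):
    sigma2 = Fraction(1)
    drift, nu = transformed_coefficients(mu, sigma2)
    assert drift == 0                        # Y is a zero-drift geometric BM
    assert (nu > 0) == (mu < sigma2 / 2)     # orientation of the transformed interval
```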
When we make the transformation $y = x^{\nu}$, the interval $[1, N]$ becomes
$$[1, N^{\nu}], \quad \text{respectively } [N^{\nu}, 1], \qquad (3.15)$$
if $\mu < \sigma^2/2$, respectively $\mu > \sigma^2/2$. Assume first that $\mu < \sigma^2/2$. We have (see (1.2))
$$\tau(x) = \inf\{t \ge 0 : Y(t) = 1 \text{ or } N^{\nu} \mid Y(0) = x^{\nu}\}. \qquad (3.16)$$
Now, we consider the discrete-time Markov chain with state space $\{1, 1+\delta, \ldots, N^{\nu}\}$ and transition probabilities given by (1.13) (with $\mu$ replaced by 0 and $\sigma^2$ by $\nu^2\sigma^2$). Proceeding as above, we obtain the expression in (3.9) for the mean number of transitions from state $x$. This time, we replace $N$ by $N^{\nu}$ and $x$ by $x^{\nu}$, so that
$$D(x) = \frac{2\delta}{\nu^2\sigma^2\,\Delta t}\left[\frac{x^{\nu}-1}{N^{\nu}-1}\sum_{1<u<N^{\nu}}\frac{N^{\nu}-u}{u^2} - \sum_{1<u<x^{\nu}}\frac{x^{\nu}-u}{u^2}\right] \qquad (3.17)$$
for $1 \le x \le N$.
Assume that each displacement takes $\Delta t$ time units. Taking the limit as $\delta$ decreases to zero (and $\Delta t$ with it), we obtain (making use of the formulas in (3.12)) that
$$\Delta t\, D(x) \longrightarrow \frac{2}{\nu\sigma^2}\left[\ln x - \frac{x^{\nu}-1}{N^{\nu}-1}\,\ln N\right] = \frac{p(x)\ln N - \ln x}{\mu - \sigma^2/2}. \qquad (3.18)$$
This formula corresponds to the function $m(x)$ in (1.11) when $\mu < \sigma^2/2$.
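The whole argument, transform, solve the zero-drift chain, multiply by $\Delta t$, can be tested end to end. The Python sketch below (our own naming; $\mu < \sigma^2/2$ assumed) solves the mean-duration recursion on a fine grid for $Y = X^{\nu}$ by shooting, writing the recursion directly for the duration $\Delta t\, D$ so that the $\Delta t$ factors cancel, and compares with $m(x)$ from (1.11):

```python
import math

def duration_via_transformed_chain(x, N, mu, sigma2, delta):
    """Mean duration computed from the zero-drift chain for Y = X^nu on [1, N^nu]."""
    nu = 1 - 2 * mu / sigma2          # transformation exponent (positive here)
    s2 = nu**2 * sigma2               # infinitesimal variance parameter of Y
    b = N**nu                         # upper boundary for Y
    n = int(round((b - 1) / delta)) + 1
    grid = [1 + (j - 1) * delta for j in range(1, n + 1)]
    # Shooting for T(j+1) - 2T(j) + T(j-1) = -2*delta^2/(s2*y_j^2), T(1)=T(n)=0,
    # where T denotes the duration dt*D (dt has cancelled out).
    Tp = [0.0, 0.0, 0.0]              # index 0 unused; trial path with Tp(1)=Tp(2)=0
    for j in range(2, n):
        Tp.append(2 * Tp[j] - Tp[j - 1] - 2 * delta**2 / (s2 * grid[j - 1] ** 2))
    slope = -Tp[n] / (n - 1)          # linear correction enforcing T(n) = 0
    j_x = int(round((x**nu - 1) / delta)) + 1
    return Tp[j_x] + slope * (j_x - 1)

x, N, mu, sigma2 = 2.0, 5.0, 0.1, 1.0
nu = 1 - 2 * mu / sigma2
p = (x**nu - 1) / (N**nu - 1)                             # (1.6)
m = (p * math.log(N) - math.log(x)) / (mu - sigma2 / 2)   # (1.11)
assert abs(duration_via_transformed_chain(x, N, mu, sigma2, 0.001) - m) < 0.02
```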
When $\mu > \sigma^2/2$, we consider the Markov chain having state space $\{N^{\nu}, N^{\nu}+\delta, \ldots, 1\}$ (and transition probabilities given by (1.13), again with $\mu$ replaced by 0 and $\sigma^2$ by $\nu^2\sigma^2$). To obtain $D(x)$, we must again solve the difference equation (3.7), subject to the boundary conditions
$$D(N^{\nu}) = D(1) = 0. \qquad (3.19)$$
However, once we have obtained the solution, we must now replace $x$ by $x^{\nu}$ (and $N$ by $N^{\nu}$). Moreover, because $N^{\nu} < 1$ is now the lower endpoint of the interval, we replace the factor $(x-1)/(N-1)$ by
$$\frac{1-x^{\nu}}{1-N^{\nu}} \qquad (3.20)$$
(and similarly for the sums).
Remark 3.3. The quantity $D$ here actually represents the mean number of steps needed to end the game when the Markov chain starts from state $x^{\nu}$, with $1 \le x \le N$ (so that $N^{\nu} \le x^{\nu} \le 1$).
We obtain that
$$D(x) = \frac{2\delta}{\nu^2\sigma^2\,\Delta t}\left[\frac{1-x^{\nu}}{1-N^{\nu}}\sum_{N^{\nu}<u<1}\frac{u-N^{\nu}}{u^2} - \sum_{x^{\nu}<u<1}\frac{u-x^{\nu}}{u^2}\right]. \qquad (3.21)$$
Next, since $\nu < 0$, setting
$$\alpha := -\nu = \frac{2\mu}{\sigma^2} - 1 > 0, \qquad (3.22)$$
so that
$$x^{\nu} = x^{-\alpha} = \frac{1}{x^{\alpha}}, \qquad (3.23)$$
we deduce from the previous expression that
$$D(x) = \frac{2\delta}{\alpha^2\sigma^2\,\Delta t}\left[\frac{1-x^{-\alpha}}{1-N^{-\alpha}}\sum_{N^{-\alpha}<u<1}\frac{u-N^{-\alpha}}{u^2} - \sum_{x^{-\alpha}<u<1}\frac{u-x^{-\alpha}}{u^2}\right] \qquad (3.24)$$
for $1 \le x \le N$.
Finally, if we assume, as above, that each step of the Markov chain takes $\Delta t$ time units, we find that, when $\delta$ decreases to zero, the mean duration of the game tends to
$$\frac{2}{\alpha\sigma^2}\left[\frac{1-x^{-\alpha}}{1-N^{-\alpha}}\,\ln N - \ln x\right]. \qquad (3.25)$$
This last expression is equivalent to the formula for $m(x)$ in (1.11) when $\mu > \sigma^2/2$, since then $\mu - \sigma^2/2 = \alpha\sigma^2/2$.
Remark 3.4. Actually, the limiting formula for the mean duration is the same whether $\mu < \sigma^2/2$ or $\mu > \sigma^2/2$.
At last, in the case when $\mu = \sigma^2/2$, we consider the random walk with state space
$$\{0, \delta, 2\delta, \ldots, n\delta = \ln N\} \qquad (3.26)$$
and transition probabilities
$$p_{y,y+\delta} = p_{y,y-\delta} = \frac{1}{2}. \qquad (3.27)$$
Then, we must solve the nonhomogeneous difference equation
$$D(j+1) - 2D(j) + D(j-1) = -2 \qquad (3.28)$$
subject to the boundary conditions $D(0) = D(n) = 0$. We find that
$$D(j) = j(n-j). \qquad (3.29)$$
With $y = j\delta$ and $\ln N = n\delta$, we get that
$$D(y) = \frac{y(\ln N - y)}{\delta^2} \qquad (3.30)$$
for $0 \le y \le \ln N$. Assuming that each step takes $\Delta t = \delta^2/\sigma^2$ time units, we deduce at once that, as $\delta$ decreases to zero (with $y = \ln x$),
$$\Delta t\, D(y) \to \frac{\ln x\,(\ln N - \ln x)}{\sigma^2}. \qquad (3.31)$$
Thus, we retrieve the formula (1.12) for $m(x)$ when $\mu = \sigma^2/2$.
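For $\mu = \sigma^2/2$ the computation is elementary and easy to verify: $j(n-j)$ indeed solves (3.28), and multiplying by $\Delta t = \delta^2/\sigma^2$ recovers (1.12) up to grid rounding. A short Python sketch (the naming is ours):

```python
import math

def duration_wiener(x, N, sigma2, delta):
    """Random walk for Y = ln X on {0, delta, ..., ln N} with p = q = 1/2 (3.27).
    Mean number of steps from y = j*delta is D(j) = j*(n-j) (3.29); each step
    takes dt = delta^2/sigma2 time units."""
    n = int(round(math.log(N) / delta))
    j = int(round(math.log(x) / delta))
    dt = delta**2 / sigma2
    return dt * j * (n - j)

# j*(n-j) solves D(j+1) - 2D(j) + D(j-1) = -2 with D(0) = D(n) = 0.
n = 25
assert all((j + 1) * (n - j - 1) - 2 * j * (n - j) + (j - 1) * (n - j + 1) == -2
           for j in range(1, n))

x, N, sigma2 = 2.0, 5.0, 1.0
exact = math.log(x) * (math.log(N) - math.log(x)) / sigma2   # (1.12)
assert abs(duration_wiener(x, N, sigma2, 1e-4) - exact) < 1e-3
```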
We can now state the following proposition.
Proposition 3.5. If the state space of the Markov chain is $\{1, 1+\delta, \ldots, N^{\nu}\}$, respectively, $\{N^{\nu}, N^{\nu}+\delta, \ldots, 1\}$, where $\nu := 1 - 2\mu/\sigma^2 > 0$, respectively $\nu < 0$, and the transition probabilities are those in (1.13) (with $\mu$ replaced by 0 and $\sigma^2$ by $\nu^2\sigma^2$), then the value of the mean number of steps $D(x)$ needed to end the game is given by (3.17), respectively, (3.24). If $\mu = \sigma^2/2$ and the transition probabilities are the ones in (3.27), then the value of $D$ is given by (3.30).
4. Concluding Remarks
We have obtained explicit and exact formulas for the quantities $u(x)$ and $D(x)$ defined respectively in (1.17) and (1.18) for various discrete-time Markov chains that converge, at least in a finite interval, to a geometric Brownian motion. In the case of the probability of hitting the boundary $N$ before 1, because the appropriate difference equation is homogeneous, we were able to compute this probability for any value of $\mu$ by considering a Markov chain with state space $\{1, 1+\delta, \ldots, N\}$. However, to obtain $D(x)$ we first solved the appropriate difference equation when $\mu = 0$. Then, making use of the formula that we obtained, we were able to deduce the solution for any $\mu$ by considering a Markov chain that converges to a transformation of the geometric Brownian motion. The transformed process was a geometric Brownian motion with infinitesimal mean zero (if $\mu \ne \sigma^2/2$), or a Wiener process with infinitesimal mean zero (if $\mu = \sigma^2/2$). In each case, we showed that the expression that we derived tends to the corresponding quantity for the geometric Brownian motion. In the case of the mean duration of the game, the time increment $\Delta t$ had to be chosen suitably.
As is well known, the geometric Brownian motion is a very important model in financial mathematics, in particular. In practice, stock or commodity prices vary discretely over time. Therefore, it is interesting to derive formulas for the hitting probability and the mean duration for Markov chains that are as close as we want to the diffusion process.
Now that we have computed explicitly the value of $u(x)$ and $D(x)$ for Markov chains having transition probabilities that involve parameters $\mu$ and $\sigma$ that are the same for all the states, we could consider asymmetric Markov chains. For example, at first the state space could be $\{1, 2, \ldots, N\}$, and we could have
$$p_{j,j+1} = \frac{(\sigma_1^2 j^2 + \mu_1 j)\,\Delta t}{2} \quad \text{for } j < j_0$$
(and similarly for $j > j_0$, with parameters $\mu_2$ and $\sigma_2$). When the Markov chain hits $j_0$, it goes to $j_0 + 1$, respectively $j_0 - 1$, with probability $p_0$, respectively $1 - p_0$. By increasing the state space to $\{1, 1+\delta, \ldots, N\}$, and taking the limit as $\delta$ decreases to zero (with $j_0$ and $n$ going to infinity appropriately), we would obtain the quantities that correspond to $p(x)$ and $m(x)$ for an asymmetric geometric Brownian motion. The possibly different values of $\sigma$ depending on the state of the Markov chain reflect the fact that volatility is likely to depend on the price of the stock or the commodity.
Finally, we could try to derive the corresponding formulas for other discrete-time Markov chains that converge to important one-dimensional diffusion processes.