Research Article | Open Access
Moussa Kounta, "First Passage Time of a Markov Chain That Converges to Bessel Process", Abstract and Applied Analysis, vol. 2017, Article ID 7189826, 7 pages, 2017. https://doi.org/10.1155/2017/7189826
First Passage Time of a Markov Chain That Converges to Bessel Process
We investigate the first-hitting-time probabilities of a discrete Markov chain that converges weakly to the Bessel process. Both the probability that the chain will hit a given boundary before the other and the average number of transitions are computed explicitly. Furthermore, we show that the quantities that we obtain tend (with the Euclidean metric) to the corresponding ones for the Bessel process.
The study of hitting-time probabilities for stochastic differential equations is an active area of research, and these probabilities are of great interest in many applications, for example, in finance (the study of path-dependent exotic options such as barrier options), in percolation theory [1], in optimal control problems [2], and in neuroscience [3]. In this paper we investigate the discrete version of Bessel processes defined by a stochastic differential equation. It is well known [4] that, given a diffusion process defined by a stochastic differential equation, we can produce a discrete Markov chain that converges weakly to the solution of this stochastic differential equation (by making use of a binomial approximation). In this paper we show that the first-passage probabilities and the numbers of transitions of this discrete Markov chain tend to the corresponding quantities for the continuous-time Bessel process.
The discrete versions of stochastic processes are interesting in themselves; for instance, in quantum mechanics the motion of a particle should be essentially discontinuous and random. Moreover, in [5] the authors show an application of the discrete version of the Cox-Ingersoll-Ross process in hydrology.
In this paper, we consider the so-called gambler's ruin problem for a discrete-time Markov chain that converges to a Bessel process. Phenomena governed by Bessel processes abound in the physical world, as in the case of growth phenomena governed by Stochastic Loewner Evolution (SLE) [1]. First hitting times of Bessel processes also arise in the study of systems at or near the point of phase transition in statistical physics. In finance, a typical example is the study of stock prices: it is well known that in a volatility-stabilized market the stock prices can be represented in terms of Bessel processes. Since a stock price does not vary completely continuously, the discrete formulas derived in the present paper should be of interest.
The paper is organized as follows. In Section 2, we briefly describe the transition probabilities derived from the Bessel stochastic differential equation. Our main contribution is in the third section (Section 3.2): we find an explicit formula for the average number of transitions needed to end the game, which could not be obtained in [6]. We also show that the sequences of first-passage probabilities and of average numbers of transitions to end the game converge (with the Euclidean metric) to the corresponding quantities in the continuous case.
2. Bessel Process Defined by a Stochastic Differential Equation and Simple Binomial Approximation
2.1. Preliminaries on Bessel Processes
We consider the Bessel process $\{X(t),\ t \ge 0\}$ of dimension $\alpha > 0$ defined by the following stochastic differential equation:
$$dX(t) = \frac{\alpha - 1}{2X(t)}\,dt + dB(t), \qquad (1)$$
where $\{B(t),\ t \ge 0\}$ is a standard Brownian motion. Next, let
$$\tau_{c,d}(x) := \inf\{t \ge 0 : X(t) = c \ \text{or} \ d \mid X(0) = x\}, \qquad (2)$$
and if there is no ambiguity we remove the index on $\tau$. Assume that $c \le X(0) = x \le d$, where $0 < c < d$ (for simplicity), and define $p(x) := P[X(\tau) = d]$. As is well known (see, e.g., [7], page 220), the probability $p(x)$ satisfies the ordinary differential equation
$$\frac{1}{2}\,p''(x) + \frac{\alpha - 1}{2x}\,p'(x) = 0, \qquad (3)$$
subject to the boundary conditions
$$p(c) = 0, \qquad p(d) = 1. \qquad (4)$$
We easily find that, if $\alpha \ne 2$ and when $\alpha = 2$, the solutions are, respectively,
$$p(x) = \frac{x^{2-\alpha} - c^{2-\alpha}}{d^{2-\alpha} - c^{2-\alpha}} \qquad (5)$$
and
$$p(x) = \frac{\ln x - \ln c}{\ln d - \ln c}. \qquad (6)$$
Let
$$T(x) := E[\tau_{c,d}(x)]. \qquad (7)$$
In [7], we see that the function $T(x)$ satisfies the second-order ordinary differential equation
$$\frac{1}{2}\,T''(x) + \frac{\alpha - 1}{2x}\,T'(x) = -1, \qquad (8)$$
with $T(c) = T(d) = 0$. The general solution of this equation (for $\alpha \ne 2$) is
$$T(x) = C_1 + C_2\,x^{2-\alpha} - \frac{x^2}{\alpha}, \qquad (9)$$
where
$$C_2 = \frac{d^2 - c^2}{\alpha\,(d^{2-\alpha} - c^{2-\alpha})}, \qquad (10)$$
$$C_1 = \frac{c^2}{\alpha} - C_2\,c^{2-\alpha}. \qquad (11)$$
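As a numerical illustration (not part of the original derivation), the continuous hitting probability can be checked against a crude Euler-Maruyama simulation of the SDE (1). All numerical choices below (step size, path count, seed, tolerance) are illustrative assumptions, and the function names are mine.

```python
import math
import random

def hit_prob_bessel(x, c, d, alpha):
    """Probability that the Bessel process of dimension alpha,
    started at x in (c, d), hits d before c: the classical
    scale-function formula, with s(y) = y**(2 - alpha) for
    alpha != 2 and s(y) = ln y for alpha = 2 (formulas (5)/(6))."""
    if alpha == 2:
        return (math.log(x) - math.log(c)) / (math.log(d) - math.log(c))
    s = lambda y: y ** (2.0 - alpha)
    return (s(x) - s(c)) / (s(d) - s(c))

def hit_prob_monte_carlo(x, c, d, alpha, dt=1e-3, n_paths=2000, seed=1):
    """Crude Euler-Maruyama simulation of dX = (alpha-1)/(2X) dt + dB,
    run until the path leaves (c, d); returns the fraction hitting d."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        X = x
        while c < X < d:
            X += (alpha - 1) / (2 * X) * dt + rng.gauss(0.0, math.sqrt(dt))
        hits += X >= d
    return hits / n_paths
```

For example, with alpha = 3, c = 1, d = 2, and x = 1.5, the closed-form value is 2/3, and the simulated frequency agrees with it to within Monte Carlo and discretization error.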
2.2. Preliminaries: Binomial Approximation
In this section we briefly recall the binomial approximation; for more details, please see [4]. We wish to find a sequence of stochastic processes that converges in distribution to the solution of (1) over a fixed time interval.
Take the interval and chop it into equal pieces of length . Define a sequence of binomial approximations of (1), constant between nodes, such that at any given node the process jumps up (resp., down) with probability (resp., ) and stays at the node with probability . The local drift of the approximation is given by (12) and its local second moment by (13); on the other hand, we have (14). By solving (12), (13), and (14), we obtain (15). Since is bounded, we can write ; hence we obtain (16). We obtain the transition probabilities , , and given by (17). We state the following assumptions, under which the sequence converges weakly to the solution of (1) (see [8]).
Assumption 1. With probability 1, a solution of the stochastic integral equation exists for , and is distributionally unique.
Assumption 2. For all and all ,
Assumption 3. For all and all ,
Next, we assume is bounded for any .
3. Discrete Value of the Probability and the Average Number of Transitions of the First Passage Time of the Bessel Process
Let

In this section, we will compute the quantity for . We will show that converges to the function for the Bessel process as decreases to zero and tends to infinity in such a way that remains equal to .
3.1. Computation of the Probability
3.1.1. Assuming First That
Then the state space is and the transition probabilities become for . It is well known that the probability defined in (22) satisfies the following difference equation: Equation (24) can be rewritten as We find that the solution of this first-order difference equation that satisfies the boundary condition is given by We have the following lemma.
Proof. We have Since and we obtain. By applying boundary conditions (4), we obtain and hence
Now we suppose .
Lemma 6. When tends to , the solution becomes where and $\gamma \approx 0.5772$ is Euler's constant.
Proof. Indeed, we have Since (see [9]), we obtain
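The first-order reduction used above can be carried out numerically for any up/down/stay chain with absorbing endpoints; the sketch below is a generic illustration (the index conventions and function name are mine, not the paper's). Writing D(k) = p(k+1) - p(k), the homogeneous second-order equation collapses to pu(k) D(k) = pd(k) D(k-1), and summing the differences under p(0) = 0, p(N) = 1 gives the hitting probabilities.

```python
def ruin_probability(pu, pd):
    """Probability p(k) of reaching state N before state 0.
    pu[i], pd[i] are the up/down probabilities at interior state i + 1
    (i = 0, ..., N - 2); the stay probability 1 - pu - pd drops out.
    Returns the list [p(0), ..., p(N)]."""
    ratios = [1.0]                 # D(k) / D(0) for k = 0, ..., N - 1
    for u, d in zip(pu, pd):
        ratios.append(ratios[-1] * d / u)
    total = sum(ratios)            # equals (p(N) - p(0)) / D(0)
    p = [0.0]
    for r in ratios:
        p.append(p[-1] + r / total)
    return p
```

As a sanity check, for a symmetric walk (pu = pd) this recovers the classical gambler's-ruin answer p(k) = k / N.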
3.1.2. Now, in the General Case When
We must solve the difference equation: which can be simplified to The boundary conditions become Next, proceeding as above, we obtain that if , the probability is given by Writing in terms of and , this expression becomes for . The solution reduces to We can now state the following proposition.
Proposition 7. Let for , with such that . The probability that the discrete-time Markov chain defined in Section 2.2, starting from , will hit before is given by (40) if . The value of the probability tends to the function in (41) when tends to .
Next, when , by making use of the following approximation for large , we can write that This expression corresponds to the function given in (6), obtained when . Finally, we have as tends to infinity (if ; see [9]). Hence, in the case , we can write that Therefore, we retrieve the formula for in (5). In the next section, we will derive the formulas that correspond to the function defined in Section 2.1.
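The convergence statement can also be illustrated numerically. The sketch below builds a chain on a grid between c and d with an illustrative moment-matching choice of transition probabilities (stay probability 1/2; this is my choice for the demonstration, not necessarily the paper's (17)) and compares its discrete hitting probability with the continuous formula (5) as the mesh is refined.

```python
def discrete_hit_prob(k, c, d, N, alpha):
    """Probability that a chain on the grid x_j = c + j * (d - c) / N,
    started at state k, reaches d (state N) before c (state 0).
    Transition probabilities at x_j: pu = (1 + mu * delta) / 4,
    pd = (1 - mu * delta) / 4, stay = 1/2, with mu = (alpha-1)/(2 x_j)."""
    delta = (d - c) / N
    r, ratios = 1.0, [1.0]
    for j in range(1, N):
        mu = (alpha - 1) / (2 * (c + j * delta))
        r *= (1 - mu * delta) / (1 + mu * delta)   # pd / pu at state j
        ratios.append(r)
    return sum(ratios[:k]) / sum(ratios)

def continuous_hit_prob(x, c, d, alpha):
    """Formula (5), valid for alpha != 2."""
    s = lambda y: y ** (2.0 - alpha)
    return (s(x) - s(c)) / (s(d) - s(c))
```

With alpha = 3, c = 1, d = 2, the discrete value at the midpoint approaches the continuous value 2/3, with the error shrinking as N grows.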
3.2. Computation of the Mean Number of Transitions Needed to End the Game
We now turn to the problem of computing the mean number of transitions that the Markov chain , starting from , takes to reach either or . Unlike in [6], we obtain an explicit formula for the average number of transitions needed to end the game.
3.2.1. The Case
If , the state space of the Markov chain is the set . Conditioning on the result of the first transition of the chain and taking this first transition into account, we obtain that the quantity satisfies the second-order linear, nonhomogeneous difference equation where the probabilities , , and are defined, respectively, in (17), with the boundary conditions: We have Let , .
Proposition 8. In the case , the mean number of transitions needed by the Markov chain, starting from , to reach either or can be expressed as follows: for , where the constant is given by
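The second-order nonhomogeneous difference equation for the mean number of transitions can likewise be reduced to first order in the differences D(k) = m(k+1) - m(k), which satisfy pu(k) D(k) = pd(k) D(k-1) - 1. The sketch below (my index conventions, for an arbitrary up/down/stay chain) solves it under the boundary conditions m(0) = m(N) = 0.

```python
def mean_transitions(pu, pd):
    """Expected number of transitions until absorption at state 0 or N,
    for the equation m(k) = 1 + pu*m(k+1) + pd*m(k-1) + ps*m(k).
    pu[i], pd[i] refer to interior state i + 1. With D(k) = m(k+1) - m(k),
    the equation reduces to pu*D(k) = pd*D(k-1) - 1, which is linear in
    the unknown D(0): D(k) = a[k] * D(0) + b[k].
    Returns the list [m(0), ..., m(N)]."""
    a, b = [1.0], [0.0]
    for u, d in zip(pu, pd):
        a.append(d * a[-1] / u)
        b.append((d * b[-1] - 1.0) / u)
    D0 = -sum(b) / sum(a)          # enforces m(N) = 0
    m, cur = [0.0], 0.0
    for ak, bk in zip(a, b):
        cur += ak * D0 + bk
        m.append(cur)
    return m
```

For a symmetric walk with pu = pd = 1/2 and no staying, this recovers the classical result m(k) = k (N - k); adding a stay probability lengthens absorption proportionally.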
3.2.2. The Case
To obtain the solution to our problem in the case when , we must find a particular solution of for , with ; proceeding as above by letting , we can rewrite this second-order linear, nonhomogeneous difference equation as follows: We deduce from the preceding subsection that To complete the work, we now show that converges to the function when we choose . To do so, we express the general solution of (53) in terms of and write as : since for and (see [9]), Then for small enough, we may write that Next, let Now, let , where is the least integer greater than or equal to . Then, for small , we may write that Thus, from what precedes, we deduce that and we get Hence, if we let , we can write (by making use of the boundary condition ) that Finally, to satisfy the boundary condition , we find that the constant must be chosen so that . That is, we retrieve formula (9) for the function .
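This limit, too, can be illustrated numerically: with an illustrative moment-matching choice of transition probabilities (stay probability 1/2, jump size delta, time step h = delta^2 / 2 so that the local moments match those of (1); my choices, not necessarily the paper's (17)), the rescaled mean number of transitions h * m(k) approaches the continuous expected exit time T(x) of the interval (c, d).

```python
def mean_exit_time_discrete(x, c, d, N, alpha):
    """h * (expected number of transitions) for a chain on the grid
    x_j = c + j * delta, delta = (d - c) / N, h = delta**2 / 2, with
    pu = (1 + mu * delta) / 4, pd = (1 - mu * delta) / 4, stay = 1/2."""
    delta = (d - c) / N
    h = delta ** 2 / 2.0
    k = round((x - c) / delta)          # grid index of x
    a, b = [1.0], [0.0]                 # D(j) = a[j] * D(0) + b[j]
    for j in range(1, N):
        mu = (alpha - 1) / (2 * (c + j * delta))
        u, dn = (1 + mu * delta) / 4.0, (1 - mu * delta) / 4.0
        a.append(dn * a[-1] / u)
        b.append((dn * b[-1] - 1.0) / u)
    D0 = -sum(b) / sum(a)               # enforces m(N) = 0
    m = 0.0
    for j in range(k):
        m += a[j] * D0 + b[j]
    return h * m

def mean_exit_time_continuous(x, c, d, alpha):
    """Expected time for the Bessel process of dimension alpha (drift
    (alpha-1)/(2y), unit diffusion) to leave (c, d), starting from x:
    the solution of (1/2) T'' + ((alpha-1)/(2x)) T' = -1, T(c)=T(d)=0,
    for alpha != 2."""
    s = lambda y: y ** (2.0 - alpha)
    C2 = (d * d - c * c) / (alpha * (s(d) - s(c)))
    C1 = c * c / alpha - C2 * s(c)
    return C1 + C2 * s(x) - x * x / alpha
```

For alpha = 3, c = 1, d = 2, and x = 1.5, the continuous value is 1/4, and the discrete value approaches it as N grows.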
4. Conclusion and Future Research
As is well known, the Bessel process is a very important model in financial mathematics. In practice, stock or commodity prices vary discretely over time. Therefore, it is interesting to derive formulas for and for Markov chains that are as close as we want to the diffusion process. Next, we will try to extend the results to the case of two-dimensional diffusion processes, which have many applications in real life, for example, in portfolio insurance [11, 12] and hydrology [13]. This is a difficult task, since we would have to solve systems of difference equations in two variables.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments

The author acknowledges the University of the Bahamas Internal Grants Program for Research, Creative and Artistic Proposals (2016-2017) for generously supporting this project.
1. G. F. Lawler, "Conformal invariance and 2D statistical physics," Bulletin of the American Mathematical Society (New Series), vol. 46, no. 1, pp. 35–54, 2009.
2. A. Vollert, A Stochastic Control Framework for Real Options in Strategic Evaluation, Birkhäuser, Boston, MA, 2003.
3. H. C. Tuckwell, Stochastic Processes in the Neurosciences, vol. 56 of CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, PA, 1989.
4. D. B. Nelson and K. Ramaswamy, "Simple binomial processes as diffusion approximations in financial models," Review of Financial Studies, vol. 3, no. 3, pp. 393–430, 1990.
5. M. Kounta and M. Lefebvre, "On a discrete version of the CIR process," Journal of Difference Equations and Applications, 2012.
6. M. Lefebvre and M. Kounta, "Hitting problems for Markov chains that converge to a geometric Brownian motion," ISRN Discrete Mathematics, vol. 2011, Article ID 346503, 15 pages, 2011.
7. M. Lefebvre, Applied Stochastic Processes, Springer, New York, NY, 2007.
8. D. W. Stroock and S. R. Varadhan, Multidimensional Diffusion Processes, Springer, Berlin, Germany, 1979.
9. M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Dover, New York, NY, 1965.
10. P. M. Batchelder, An Introduction to Linear Difference Equations, Dover, New York, NY, 1967.
11. A. Bick, "Quadratic-variation-based dynamic strategies," Management Science, vol. 41, no. 4, pp. 722–732, 1995.
12. H. Geman and M. Yor, "Bessel processes, Asian options, and perpetuities," Mathematical Finance, vol. 3, no. 4, pp. 349–375, 1993.
13. M. Lefebvre, "Using a lognormal diffusion process to forecast river flows," Water Resources Research, vol. 38, no. 6, pp. 121–128, 2002.
Copyright © 2017 Moussa Kounta. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.