#### Abstract

A Markov chain with state space {0, 1, …, N} and transition probabilities depending on the current state is studied. The chain can be considered as a discrete Ornstein-Uhlenbeck process. The probability that the process hits N before 0 is computed explicitly. Similarly, the probability that the process hits N before −M is computed in the case when the state space is {−M, …, −1, 0, 1, …, N} and the transition probabilities are not necessarily the same when the current state is positive and when it is negative.

#### 1. Introduction

The Ornstein-Uhlenbeck process is defined by the stochastic differential equation

$$
dX(t) = -\alpha\,X(t)\,dt + \sigma\,dB(t),
$$

where {B(t), t ≥ 0} is a standard Brownian motion and α and σ are positive constants. Discrete versions of this very important diffusion process have been considered by various authors. In particular, Larralde [1, 2] studied the discrete-time process {X_n, n = 0, 1, …} for which

$$
X_{n+1} = \gamma\,X_n + \xi_{n+1}, \tag{1.2}
$$

where γ is a constant and the random variables ξ_n are i.i.d. with zero mean and a common probability distribution. Larralde computed the probability that {X_n} will hit the negative semiaxis for the first time at the nth step, starting from X_0 = x ≥ 0. The problem was solved exactly in the case when the common distribution of the ξ_n's is continuous and of a particular form.

Versions of the discrete Ornstein-Uhlenbeck process have also been studied by, among others, Renshaw [3], Anishchenko et al. [4, page 53], Bourlioux et al. [5, page 236], Sprott [6, page 234], Kontoyiannis and Meyn [7], and Milstein et al. [8]. In many cases, the distribution of the ξ_n's is taken to be Gaussian.

For discrete versions of diffusion processes in general, see Kac [9] and the references therein. A random walk leading to the Ornstein-Uhlenbeck process is considered in Section 4 of Kac's paper.

Next, consider a Markov chain for which the displacements take place every Δt units of time. When the process is in state x, it moves to x + Δx (resp., x − Δx) with probability p(x) (resp., q(x)) and remains in x with probability 1 − p(x) − q(x).

Assume that (Δx)² = Δt, and let

$$
p(x) = \frac{1}{2}\,(1 - \beta\,x\,\Delta x), \qquad q(x) = \frac{1}{2}\,(1 + \beta\,x\,\Delta x),
$$

where β is a positive constant such that 0 ≤ p(x), q(x) ≤ 1 for all x considered. Then, when Δx and Δt decrease to zero, the Markov chain converges to a diffusion process having infinitesimal mean −βx and infinitesimal variance 1 (see [10, page 213]). In the case of the Ornstein-Uhlenbeck process, the infinitesimal mean is −αx (with α > 0) and the infinitesimal variance is σ². Hence, with σ = 1, we have that β = α.
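The moment matching behind this convergence can be checked directly: the conditional mean and variance of one displacement, divided by Δt, should approach the infinitesimal mean and variance of the diffusion. The probabilities below, p(x) = (1 − αxΔx)/2 and q(x) = (1 + αxΔx)/2 with (Δx)² = Δt, are an illustrative choice consistent with the construction just described, not necessarily the authors' exact formulas:

```python
# One-step moment check for the birth-death scheme (assumed illustrative form):
#   p(x) = (1 - alpha*x*dx)/2,  q(x) = (1 + alpha*x*dx)/2,  dx**2 = dt.

def one_step_moments(x, alpha, dx):
    dt = dx * dx
    p = 0.5 * (1.0 - alpha * x * dx)      # probability of moving to x + dx
    q = 0.5 * (1.0 + alpha * x * dx)      # probability of moving to x - dx
    mean = (p - q) * dx                   # E[X_{n+1} - X_n | X_n = x]
    var = (p + q) * dx * dx - mean ** 2   # Var[X_{n+1} - X_n | X_n = x]
    return mean / dt, var / dt            # scaled "per unit time"

m, v = one_step_moments(x=0.7, alpha=0.5, dx=1e-3)
# m equals -alpha*x = -0.35; v tends to the infinitesimal variance 1 as dx -> 0
```

The scaled mean is exactly −αx for every Δx, while the scaled variance differs from 1 only by a term of order Δt.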

In the present paper, we first consider the Markov chain {X_n, n = 0, 1, …} with state space {0, 1, …, N} and transition probabilities

$$
p_{i,i+1} = \frac{1}{2}\,(1 - \alpha i), \qquad p_{i,i-1} = \frac{1}{2}\,(1 + \alpha i) \tag{1.6}
$$

for i ∈ {0, 1, …, N}. Notice that p_{i,i+1} (resp., p_{i,i−1}) could be denoted by p_i (resp., q_i). To respect the condition 0 ≤ p_{i,j} ≤ 1 for all i and j, the positive constant α must be such that

$$
\alpha \le \frac{1}{N}. \tag{1.7}
$$

This Markov chain with state-dependent transition probabilities may also clearly be regarded as a discrete version of the Ornstein-Uhlenbeck process. It corresponds to a version of (1.2) with γ = 1 − α in which the displacements take the values ±1 with state-dependent probabilities, so that

$$
E[X_{n+1} \mid X_n = i] = (1 - \alpha)\,i
$$

for i ∈ {0, 1, …, N}.

In Section 2, the probability

$$
u(x) := P[X_T = N \mid X_0 = x], \tag{1.9}
$$

where

$$
T := \min\{n \ge 0 : X_n \in \{0, N\}\}
$$

and x ∈ {0, 1, …, N}, will be computed explicitly. In Section 3, the problem will be extended by assuming that the state space of the Markov chain is {−M, …, −1, 0, 1, …, N}. Furthermore, the transition probabilities will be assumed to be (possibly) sign-dependent (see [11]). Finally, some concluding remarks will be made in Section 4.

#### 2. First Hitting Place Probabilities

To obtain the first hitting place probability defined in (1.9), we may try to solve the following difference equation:

$$
u(x) = \frac{1}{2}\,(1 - \alpha x)\,u(x+1) + \frac{1}{2}\,(1 + \alpha x)\,u(x-1) \tag{2.1}
$$

for x = 1, …, N − 1, subject to the boundary conditions

$$
u(0) = 0, \qquad u(N) = 1.
$$

For N small, it is a relatively simple task to calculate the probability explicitly for all x by solving a system of linear equations. However, we want to obtain an exact expression valid for any positive integer N.
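The linear-system computation mentioned above can be sketched as follows. For a birth-death chain with up-probabilities p_x and down-probabilities 1 − p_x, absorbed at 0 and N, the boundary-value problem has the classical solution in terms of products of the ratios (1 − p_j)/p_j; the specific mean-reverting probabilities below are an assumed illustration, not necessarily the paper's exact definitions:

```python
# Numerical sketch for small N.  The boundary-value problem
#   u(x) = p_x u(x+1) + q_x u(x-1),  u(0) = 0, u(N) = 1,  with q_x = 1 - p_x,
# has the classical birth-death solution
#   u(x) = sum_{k=0}^{x-1} rho_k / sum_{k=0}^{N-1} rho_k,
#   rho_0 = 1,  rho_k = prod_{j=1}^{k} q_j / p_j.

def hitting_prob(N, p):
    """u(x) = P(hit N before 0 | X_0 = x) for up-probabilities p[x]."""
    rho, prod = [1.0], 1.0
    for j in range(1, N):
        prod *= (1.0 - p[j]) / p[j]
        rho.append(prod)
    total = sum(rho)
    return [sum(rho[:x]) / total for x in range(N + 1)]

N, alpha = 10, 0.05                     # alpha <= 1/N keeps probabilities in [0, 1]
p = [0.5 * (1.0 - alpha * x) for x in range(N + 1)]
u = hitting_prob(N, p)                  # u[0] == 0.0, u[N] == 1.0
```

In the symmetric case p_x ≡ 1/2 the output reduces to the gambler's-ruin value x/N, and the computed vector satisfies the one-step recurrence, which provides a check.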

Next, with a suitable change of dependent variable and a translation of the argument, (2.1) can be rewritten as a second-order homogeneous difference equation, (2.3), whose coefficients are linear functions of the new argument. Such an equation is called a hypergeometric difference equation, due to the fact that its solutions can be expressed in terms of the hypergeometric function (see [12, page 68]).

Equation (2.3) can be transformed into its normal form, (2.4). In our case, computing the coefficients of this normal form (see [12, pages 68-69]), we find that we must solve a difference equation of hypergeometric type, (2.6), in which the parameters a, b, and c are expressed in terms of α and N. Furthermore, the variable now ranges over a translated set of integers, because the argument of the unknown function is shifted by a constant that depends on α in our problem.

Using the results in Batchelder [12, Chapter III], we can state that a fundamental system of solutions of (2.6) is given by a pair of functions u_1 and u_2 expressed in terms of F(a, b; c; z), where F(a, b; c; z) is the hypergeometric function defined by (see [13, page 556])

$$
F(a, b; c; z) = \sum_{k=0}^{\infty} \frac{(a)_k\,(b)_k}{(c)_k}\,\frac{z^k}{k!},
$$

with

$$
(a)_k = a\,(a+1)\cdots(a+k-1) = \frac{\Gamma(a+k)}{\Gamma(a)}, \qquad (a)_0 = 1.
$$

Remarks. (i) The function F(a, b; c; z) is sometimes denoted by ₂F₁(a, b; c; z). It can also be expressed as (see, again, [13, page 556])

$$
F(a, b; c; z) = \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(b)} \sum_{k=0}^{\infty} \frac{\Gamma(a+k)\,\Gamma(b+k)}{\Gamma(c+k)}\,\frac{z^k}{k!}.
$$
(ii) The ratio F(a, b; c; z)/Γ(c) is an entire function of the parameters a, b, and c, and the series converges if z is fixed and such that |z| < 1 (see [14, page 68]).
Now, because of a factor that is a power of a negative quantity, the function u_2 defined previously is generally complex-valued. Since the function of interest in our application is obviously real, we can take the real part of u_2; that is, we simply have to replace u_2 by Re(u_2). Alternatively, because the argument of the function is an integer in our problem, this complex factor reduces to a real quantity whose sign depends on the integer part of the exponent, so that we can write the solution in explicitly real form.
With the difference equation (2.6) being homogeneous, we can state that Re(u_2) is a real-valued function that is also a solution of this equation. Hence, the general solution of (2.6) can be expressed as

$$
u = C_1\, u_1 + C_2\, \operatorname{Re}(u_2), \tag{2.14}
$$

where C_1 and C_2 are arbitrary (real) constants.
(iii) We must be careful when the parameter c of the hypergeometric equation is a nonpositive integer. Indeed, in that case (2.6) is reducible. Moreover, it is completely reducible if a second parameter is also a negative integer (see [12, pages 123-124]). Since α cannot exceed 1/N (see (1.7)), this situation can occur only for particular values of α. We find that the case when the equation is merely reducible does not really cause any problem. However, in the completely reducible case, we can show that although u_1 and u_2 defined in (2.13) are obviously linearly independent when we consider all possible values of the argument, it turns out that in our problem each of them takes on a constant value. More precisely, we can show that the relevant solution reduces to a polynomial whose coefficients involve the Pochhammer symbols (a)_k = Γ(a + k)/Γ(a) for any natural number k.

Remark. The formula (a)_k = Γ(a + k)/Γ(a) is valid if k = 0 as well, so that we can set (a)_0 equal to 1, with a arbitrary, above.

Now, we find that u_1 and u_2 can be computed explicitly in the completely reducible case.

For example, suppose that N is small, so that the state space of the Markov chain contains only a few states, and that α is equal to the critical value mentioned above. The argument of the hypergeometric functions then takes only a few integer values, and the corresponding solution can be written down explicitly.

It is a simple matter to show that this function satisfies (2.6). However, when we evaluate u_1 and u_2 at the admissible values of their argument, we find that u_1 and u_2 are both constant for the values of interest in our problem.

Actually, we easily find that u_1 and u_2 are then proportional to each other. Therefore, we cannot make use of u_1 and u_2 to obtain the probability defined in (1.9). Nevertheless, because the solution is a continuous function of the parameter α, we simply have to take the limit as α tends to the critical value to get the solution we are looking for.

Next, we have obtained the general solution of (2.6) in (2.14). We must now find the constants C_1 and C_2 for which the boundary conditions

$$
u(0) = 0, \qquad u(N) = 1 \tag{2.25}
$$

are satisfied. We can state the following proposition.

Proposition 2.1. When α is such that the difference equation (2.6) is not completely reducible, the probability defined in (1.9) is given by the function u in (2.14), with the constants C_1 and C_2 uniquely determined by the boundary conditions (2.25). In the completely reducible case, the constants C_1 and C_2 become the limits of these expressions as α tends to the critical value.

Proof. We find (see [13, page 557]) that the function F(a, b; c; z) evaluated at z = 1 can be expressed as

$$
F(a, b; c; 1) = \frac{\Gamma(c)\,\Gamma(c - a - b)}{\Gamma(c - a)\,\Gamma(c - b)}, \qquad c - a - b > 0.
$$

This value is actually obtained as a limit as z increases to 1 with |z| < 1. Moreover, it follows that the functions u_1 and u_2 tend to finite limits as z tends to 1. Hence, the constants C_1 and C_2 are uniquely determined from the boundary conditions (2.25); the condition at the lower boundary immediately yields C_1, and then C_2 follows from the condition at the upper boundary.
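The evaluation of the hypergeometric function at z = 1 invoked in the proof is Gauss's summation theorem, F(a, b; c; 1) = Γ(c)Γ(c − a − b)/[Γ(c − a)Γ(c − b)] for c − a − b > 0. A quick numerical sketch comparing partial sums of the series with this closed form (the parameter values are arbitrary):

```python
from math import gamma

# Partial sums of the hypergeometric series at z = 1, compared with Gauss's
# closed form Gamma(c)Gamma(c-a-b) / [Gamma(c-a)Gamma(c-b)]  (needs c - a - b > 0).

def hyp2f1_at_1(a, b, c, terms=2000):
    total, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1.0))
        total += term
    return total

a, b, c = 0.3, 0.4, 2.5
series = hyp2f1_at_1(a, b, c)
closed = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
# series and closed agree to several decimal places
```

The terms decay like k^(a + b − c − 1), so the larger c − a − b is, the fewer terms are needed.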

Remarks. (i) We see that the case when the difference equation is completely reducible is rather special. In one of its subcases, the constant C_2 vanishes, while in the other, the probability is obtained by taking the limit of the general solution as α tends to the critical value.
(ii) We can obtain an approximate formula for the probability defined in (1.9), valid for N large, by proceeding as follows. First, because (by assumption) α ≤ 1/N, one of the parameters of the hypergeometric functions is close to an integer when N is large, and we may replace it by that integer. The relative error committed by replacing the parameter by its approximate value is negligible when N is large. Moreover, for this approximate value of the parameter, we can express the solution in terms of the polynomial in (2.20). Making use of the boundary conditions, we deduce an explicit approximate expression for the probability. Since the function involved is a polynomial, we have in fact approximated the exact solution by a polynomial of moderate degree.
In the next section, the state space of the Markov chain will be extended to {−M, …, −1, 0, 1, …, N}, and the (possibly) asymmetric case will be treated.

#### 3. The Asymmetric Case

We extend the problem considered in the previous section by assuming that the state space of the Markov chain is the set

$$
\{-M, \ldots, -1, 0, 1, \ldots, N\},
$$

where M, N ∈ ℕ. Furthermore, we set

$$
p_{i,i+1} = \frac{1}{2}\,(1 - \alpha_1 i), \qquad p_{i,i-1} = \frac{1}{2}\,(1 + \alpha_1 i),
$$

where α_1 > 0 and i ∈ {0, 1, …, N}.

When i is a negative state, we define

$$
p_{i,i+1} = \frac{1}{2}\,(1 - \alpha_2 i), \qquad p_{i,i-1} = \frac{1}{2}\,(1 + \alpha_2 i)
$$

for i ∈ {−M, …, −1}. In order to respect the condition 0 ≤ p_{i,j} ≤ 1 for all i and j, we find that the positive constants α_1 and α_2 must be such that

Let

$$
T := \min\{n \ge 0 : X_n \in \{-M, N\}\}.
$$

We want to compute the first hitting place probability

$$
w(x) := P[X_T = N \mid X_0 = x] \tag{3.6}
$$

for x ∈ {−M, …, N}. We have w(−M) = 0 and w(N) = 1.

Let us denote the probability defined in (1.9) by u(x; N, α_1), and let v(x) denote the analogous probability for the negative part of the state space, defined by symmetry: v(x) is the probability that the chain, starting from x ∈ {−M, …, 0}, hits −M before 0, with α_2 playing the role of α_1 and M that of N.

Proceeding as in Section 2, we can show that the function defined in (3.6) satisfies, on {1, …, N − 1} and on {−M + 1, …, −1}, a difference equation of the same form as (2.1), and that on each of these sets the two constants appearing in the general solution are uniquely determined from the boundary conditions at the endpoints of the corresponding set.

Again, we must be careful in the case when the difference equation is completely reducible.

Next, we define the events that the chain, starting from x, reaches N before −M, and that it first passes through the origin before reaching either boundary.

Assume first that x is positive. Then, conditioning on whether the chain hits N before 0 or returns to 0 first, we can write a decomposition of the probability defined in (3.6); this is (3.13). When x is negative, we have the analogous decomposition (3.14). Setting x = 1 (resp., x = −1) in (3.13) (resp., (3.14)), we obtain a system of two linear equations for w(1) and w(−1).

Proposition 3.1. The probability w(x) defined in (3.6) is given for x > 0 (resp., x < 0) by (3.13) (resp., (3.14)), in which w(1) and w(−1) are the unique solutions of the linear system above.

Remarks. (i) If α_1 = α_2, the formulas for w(1) and w(−1) simplify considerably. Moreover, if α_1 = α_2 and M = N, then (by symmetry) w(0) = 1/2 and w(−x) = 1 − w(x).
(ii) The probability P[X_T = −M | X_0 = x] is of course given by 1 − w(x), for x ∈ {−M, …, N}.
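For small M and N, the first hitting place probability of this section can also be computed directly, which gives a numerical check on the results above. The sign-dependent, mean-reverting up-probabilities used below are an assumed illustration of the kind of chain considered here, not necessarily the exact definitions:

```python
# Direct computation of w(x) = P(hit N before -M | X_0 = x) on {-M, ..., N}
# for a birth-death chain whose up-probabilities depend on the sign of the
# state (assumed illustrative form).

def two_sided_hitting_prob(M, N, up):
    rho, prod = {-M: 1.0}, 1.0              # rho_j: products of (down/up) ratios
    for j in range(-M + 1, N):
        p = up(j)
        prod *= (1.0 - p) / p
        rho[j] = prod
    total = sum(rho.values())
    w, acc = {-M: 0.0}, 0.0
    for j in range(-M, N):
        acc += rho[j]
        w[j + 1] = acc / total
    return w

alpha_pos, alpha_neg, M, N = 0.05, 0.08, 10, 10   # need alpha_pos <= 1/N, alpha_neg <= 1/M
up = lambda x: 0.5 * (1.0 - (alpha_pos if x >= 0 else alpha_neg) * x)
w = two_sided_hitting_prob(M, N, up)              # w[-M] == 0.0, w[N] == 1.0
```

When up ≡ 1/2, the function returns the classical value (x + M)/(N + M); in general the output satisfies the one-step recurrence w(x) = up(x)w(x + 1) + (1 − up(x))w(x − 1).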

#### 4. Concluding Remarks

In Section 2, we computed the probability that a Markov chain with transition probabilities given by (1.6) and state space {0, 1, …, N} will hit N before 0, starting from X_0 = x. If we let α decrease to 0 in (1.6), we obtain that

$$
p_{i,i+1} = p_{i,i-1} = \frac{1}{2} \qquad \text{for all } i.
$$

That is, the Markov chain becomes a (generalized) symmetric random walk; allowing, more generally, a probability of remaining in the current state on each transition would not influence the first hitting place probability defined in (1.9). Taking the limit as α decreases to 0 in Proposition 2.1, we indeed retrieve the well-known formula

$$
P[X_T = N \mid X_0 = x] = \frac{x}{N}.
$$
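Both the gambler's-ruin limit x/N and the fact that a holding probability does not affect the hitting place are easy to confirm numerically. The following sketch (an illustrative construction, not taken from the paper) solves the boundary-value problem for a symmetric walk that stays put with probability r:

```python
# Solve  u(x) = p u(x+1) + q u(x-1) + r u(x),  u(0) = 0, u(N) = 1,
# for a symmetric lazy walk: p = q = (1 - r)/2, holding probability r.
# Rearranging shows that r cancels, so the hitting place law is unaffected.

def hitting_prob_lazy(N, p, q):
    """Forward elimination for the tridiagonal system
    (p[x] + q[x]) u(x) = p[x] u(x+1) + q[x] u(x-1), u(0) = 0, u(N) = 1."""
    c = [0.0] * (N + 1)                  # u(x) = c[x] * u(x+1) after elimination
    for x in range(1, N):
        c[x] = p[x] / (p[x] + q[x] - q[x] * c[x - 1])
    u = [0.0] * (N + 1)
    u[N] = 1.0
    for x in range(N - 1, 0, -1):
        u[x] = c[x] * u[x + 1]
    return u

N, r = 12, 0.3
u = hitting_prob_lazy(N, [(1.0 - r) / 2] * (N + 1), [(1.0 - r) / 2] * (N + 1))
# u[x] equals x/N for every holding probability r in [0, 1)
```

Dividing the rearranged equation by p + q shows that only the ratio q/p enters, which is why r drops out.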

In Section 3, we were able to compute explicitly the probability defined in (3.6) for a (possibly) asymmetric Markov chain with state space {−M, …, −1, 0, 1, …, N}. This type of Markov chain could have applications in mathematical finance, in particular. Indeed, if one is looking for the probability that the value of a certain stock reaches a given level before a lower one, it can be more realistic to assume that the stock price does not vary in the same way when the price is high as when it is low. Hence, the assumption that the transition probabilities may differ according to the sign of the current state seems plausible in some applications. In the application we have just mentioned, 0 could be the centered current value of the stock.

Next, another problem of interest is the determination of the average time the process, starting from X_0 = x, takes to hit either 0 or N (in Section 2), or −M or N (in Section 3). To obtain an explicit expression for E[T | X_0 = x], we must solve a nonhomogeneous linear difference equation. Finding a particular solution to this equation (in order to obtain the general solution by using the solution to the homogeneous equation obtained in the present work) is a surprisingly difficult problem.

Finally, we could try to take the limit of the Markov chain in such a way as to obtain the Ornstein-Uhlenbeck process as a limiting process. We should then retrieve the known formula for the corresponding first hitting place probability when this process is considered in a finite interval, and we could generalize this formula to the asymmetric case, based on Section 3.

#### Acknowledgments

This work was supported by the Natural Sciences and Engineering Research Council of Canada. The authors are also grateful to the referee for constructive comments.