Abstract

This paper considers the optimal stopping problem for continuous-time Markov processes. We describe the methodology and solve the optimal stopping problem for a broad class of reward functions. Moreover, we illustrate the outcomes for some typical Markov processes, including diffusions and Lévy processes with jumps. For each of these processes, explicit formulas for the value function and the optimal stopping time are derived. Furthermore, we relate the derived optimal rules to other optimal stopping problems.

1. Introduction

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space; the problem studied in this paper is to find the optimum $\sup_{\tau} E[e^{-q\tau} g(X_\tau)]$, where $q > 0$ is the discount rate and $\tau$ is a stopping time. The process $X = (X_t)_{t \ge 0}$ is a continuous-time Markov process with starting state $x$. When $X_t$ denotes the stock price and $g$ is the payoff function, this optimum is the pricing expression for an American option (e.g., see Wong [1]). When $g$ is an investor's utility function of the stock price $X_t$, the optimal solution indicates the best time to buy or sell the stock (e.g., see McDonald and Siegel [2]).

Because of the Markov property, the value function can be written as $V(x) = \sup_{\tau} E_x[e^{-q\tau} g(X_\tau)]$, where $E_x$ denotes the conditional expectation $E[\,\cdot \mid X_0 = x]$. The function $g$ is called the reward function. The possibility that $\tau = \infty$ is allowed, with the convention that $e^{-q\tau} g(X_\tau) = 0$ on $\{\tau = \infty\}$: if an option is never exercised, then its reward payment is worthless to the investor.

In this paper, we are interested in determining both an optimal stopping time and the value function for a large class of reward functions. We state the conditions on the reward function and deduce explicit optimal rules for general continuous-time Markov processes, including diffusions and Lévy processes with jumps.

The study of optimal stopping times for stochastic processes, especially geometric Brownian motion, has a long history in the finance literature. Under the assumption that $X$ is geometric Brownian motion, the seminal paper by McDonald and Siegel [2] put forward the problem as a model of financial decision making. Hu and Øksendal [3] solved the problem in the multidimensional case, but they restricted the stopping time to a bounded interval. Recently, Nishide and Rogers [4] extended the problem by relaxing the restriction on the stopping time. For other forms of value functions, Pedersen and Peskir [5] solved the problem for some special diffusion processes.

The purpose of our work is threefold. Firstly, we intend to make clear the assumptions on the reward function $g$, so that the explicit value function can be generalized to a larger class of problems. For this purpose, throughout this article we assume that the function $g$ is nonincreasing, concave, and twice continuously differentiable. These properties play a key role in the proofs below. Secondly, we pay attention to general Markov processes, including diffusions and Lévy processes with jumps. With the help of the infinitesimal generator, we obtain an explicit formula for the value function and the optimal stopping time. Thirdly, we find that the optimal problem (2) is equivalent to other optimal problems such as (4). This work is inspired by Pedersen and Peskir [5], who verified the equivalence of problems (2) and (4) in the case where $X$ is an Ornstein-Uhlenbeck process. Note that we do not consider that special case, so our approach to the optimal problem is different from that of Pedersen and Peskir. Moreover, our work naturally yields explicit solutions of the new optimal problem (4) for a larger class of reward functions and underlying processes $X$.

The paper is organized as follows. In Section 2, the explicit value function and optimal stopping time are derived for a general Markov process, along with the conditions on the reward function. Section 3 discusses applications to diffusions, including Brownian motion with drift, geometric Brownian motion, and the Ornstein-Uhlenbeck process. Section 4 presents some concrete examples of Lévy processes with jumps. In Section 5, we link the outcomes with other optimal problems, so that explicit solutions for the new problems are also available for a general Markov process with a large class of reward functions. Finally, concluding remarks are given in Section 6.

2. Optimal Rule for Continuous-Time Markov Processes

For a Markov process $X$, the infinitesimal generator $\mathcal{A}$ of $X$ is defined as
$$(\mathcal{A}f)(x) = \lim_{t \downarrow 0} \frac{E_x[f(X_t)] - f(x)}{t},$$
where $f$ is twice continuously differentiable. In the diffusion case, namely, $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dB_t$, where $B$ is a standard Brownian motion, the infinitesimal generator is equivalent to
$$(\mathcal{A}f)(x) = \mu(x) f'(x) + \tfrac{1}{2}\sigma^2(x) f''(x).$$
In the case of a Lévy process with jumps, driven by the equation $dX_t = \mu\,dt + \sigma\,dB_t + d\bigl(\sum_{i=1}^{N_t} Y_i\bigr)$, where $N$ is a homogeneous Poisson process, the infinitesimal generator is
$$(\mathcal{A}f)(x) = \mu f'(x) + \tfrac{1}{2}\sigma^2 f''(x) + \int_{\mathbb{R}} \bigl(f(x+y) - f(x)\bigr)\,\nu(dy),$$
where $\nu$ is the Lévy measure (for jump diffusions and their generators, see, e.g., Gihman and Skorohod [6]).
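As a numerical sanity check on this definition (a sketch in Python, with hypothetical parameter values), one can compare the finite-$t$ difference quotient $(E_x[f(X_t)] - f(x))/t$ with the analytic expression of the generator for geometric Brownian motion and $f(x) = x^2$, using the closed-form moment $E_x[X_t^2] = x^2 e^{(2\mu + \sigma^2)t}$:

```python
import math

mu, sigma, x = 0.05, 0.2, 1.0   # hypothetical GBM parameters and starting state

def f(s):
    return s * s

def Ef(t):
    # Closed-form E_x[X_t^2] for geometric Brownian motion
    return x * x * math.exp((2 * mu + sigma**2) * t)

# Finite-t difference quotient approximating the generator limit
t = 1e-6
numeric = (Ef(t) - f(x)) / t

# Analytic generator of the diffusion: (Af)(x) = mu*x*f'(x) + (sigma^2/2)*x^2*f''(x)
analytic = mu * x * (2 * x) + 0.5 * sigma**2 * x**2 * 2

print(numeric, analytic)  # the two values agree closely
```

The difference between the two values vanishes at rate $O(t)$, as the definition of the generator suggests.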

First, we make the assumption about the reward function.

Assumption 1. The reward function $g$ is nonincreasing, concave, and twice continuously differentiable; that is, $g'(x) \le 0$, $g''(x) \le 0$, and $g$ is $C^2$.

Under Assumption 1, we present the explicit solutions for general continuous-time Markov processes.

Theorem 2. For a Markov process $X$ with infinitesimal generator $\mathcal{A}$, let $\varphi$ be the solution of $\mathcal{A}\varphi = q\varphi$ satisfying $\varphi > 0$, $\varphi' < 0$, and $\varphi(x) \to 0$ as $x \to \infty$. Given the reward function $g$ in Assumption 1, if there exists a point $x^*$ such that
$$g'(x^*)\,\varphi(x^*) = g(x^*)\,\varphi'(x^*), \tag{13}$$
then the optimal problem has an explicit expression for the value function,
$$V(x) = \begin{cases} g(x), & x \le x^*, \\[2pt] g(x^*)\,\dfrac{\varphi(x)}{\varphi(x^*)}, & x > x^*. \end{cases} \tag{15}$$
The optimal stopping time is $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$.

Proof. Define the function $F(x) = \varphi(x)/g(x)$, and then we get
$$F'(x) = \frac{\varphi'(x) g(x) - \varphi(x) g'(x)}{g(x)^2}. \tag{19}$$
Hence, by (13), $F$ attains its minimum at $x^*$, so we have the inequality $g(x) \le g(x^*)\,\varphi(x)/\varphi(x^*)$ for all $x$; that is, $V(x) \ge g(x)$. The value function $V$ is $C^1$, and $V''$ exists and is continuous except at $x^*$. By Itô's formula, for any time $t$,
$$e^{-qt} V(X_t) = V(x) + \int_0^t e^{-qs} (\mathcal{A}V - qV)(X_s)\,ds + M_t, \tag{21}$$
where $\mathcal{A}$ is the infinitesimal generator of the process $X$ and $M$ is a local martingale.
Because the value function is bounded, the local martingale term on the right-hand side of (21) is also bounded (all the other terms in (21) are clearly bounded), implying that it is in fact a martingale with zero expectation. Hence, by the optional sampling theorem, we have, for any stopping time $\tau$,
$$E_x\bigl[e^{-q\tau} V(X_\tau)\bigr] = V(x) + E_x\Bigl[\int_0^\tau e^{-qs} (\mathcal{A}V - qV)(X_s)\,ds\Bigr].$$
(a) When $x \le x^*$, $V(x) = g(x)$, and we claim that
$$(\mathcal{A}g - qg)(x) \le 0. \tag{23}$$
For a diffusion, $\mu g'(x) \le 0$, $\tfrac{1}{2}\sigma^2 g''(x) \le 0$, $-q g(x) \le 0$, and the inequality in (23) holds naturally. For Lévy processes with jumps, there is the additional term $\int \bigl(g(x+y) - g(x)\bigr)\,\nu(dy)$. As the jumps are positive and $g$ is decreasing in the state, this term is nonpositive, so $(\mathcal{A}g - qg)(x) \le 0$.
(b) When $x > x^*$, $V(x) = g(x^*)\,\varphi(x)/\varphi(x^*)$; the function $\varphi$ is the solution of $\mathcal{A}\varphi = q\varphi$, so $(\mathcal{A}V - qV)(x) = 0$.
Therefore, from (a) and (b), we have $(\mathcal{A}V - qV)(x) \le 0$, and the equality holds for $x > x^*$. It follows that
$$E_x\bigl[e^{-q\tau} g(X_\tau)\bigr] \le E_x\bigl[e^{-q\tau} V(X_\tau)\bigr] \le V(x),$$
from which we can see that $V(x)$ is an upper bound for the expected reward starting from $x$. This bound is achieved when $\tau = \tau^*$. When the starting state is smaller than $x^*$, the optimal stopping time is $\tau^* = 0$; that is, the reward $g(x) = V(x)$ reaches its upper bound immediately. If the starting state is greater than the point $x^*$, one must wait until $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$, which is the first hitting time of the point $x^*$. At the stopping time $\tau^*$, $X_{\tau^*} = x^*$ and $V(x^*) = g(x^*)$, so $E_x[e^{-q\tau^*} g(X_{\tau^*})] = E_x[e^{-q\tau^*} V(X_{\tau^*})] = V(x)$, reaching its upper bound.

Remark 3. Actually, Theorem 2 holds for all diffusions whose drift term satisfies a suitable sign condition. A direct calculation shows that $(\mathcal{A}g - qg)(x) \le 0$ still holds for $x \le x^*$; as $\varphi$ satisfies $\mathcal{A}\varphi = q\varphi$, the case $x > x^*$ is unchanged. The rest of the proof is the same as that of Theorem 2.

However, the corresponding condition for Lévy processes is more complicated. In order to keep a uniform presentation, we restrict ourselves to the case of nonnegative drift. Moreover, we can see from (19) that if the point $x^*$ exists, then it is unique. Next, we present results on some classical Markov processes as applications of Theorem 2.
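Before turning to examples, the smooth-fit condition (13) can be made concrete numerically. The following Python sketch assumes the hypothetical choices $g(x) = 2 - x$ and $\varphi(x) = e^{-x}$, for which $g'(x)\varphi(x) - g(x)\varphi'(x) = e^{-x}(1 - x)$ and the root is $x^* = 1$:

```python
import math

# Hypothetical reward and candidate solution of A(phi) = q*phi
g  = lambda x: 2.0 - x          # nonincreasing, concave, C^2
dg = lambda x: -1.0
phi  = lambda x: math.exp(-x)   # positive, decreasing, phi -> 0 at infinity
dphi = lambda x: -math.exp(-x)

# Smooth-fit condition (13): h(x) = g'(x)*phi(x) - g(x)*phi'(x) = 0
h = lambda x: dg(x) * phi(x) - g(x) * dphi(x)

# Bisection on an interval where h changes sign
lo, hi = 0.0, 1.9
assert h(lo) * h(hi) < 0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
x_star = 0.5 * (lo + hi)
print(x_star)  # close to 1.0, since h(x) = e^{-x} * (1 - x)
```

With this $x^*$, the value function of Theorem 2 reads $V(x) = g(x)$ for $x \le x^*$ and $V(x) = g(x^*)\varphi(x)/\varphi(x^*)$ for $x > x^*$.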

3. Diffusion

3.1. Brownian Motion with Drift

For Brownian motion with drift $\mu$ and variance $\sigma^2$, namely, the process driven by the SDE $dX_t = \mu\,dt + \sigma\,dB_t$, Theorem 2 yields the following proposition.

Proposition 4. Let $g$ be the function in Assumption 1; if there exists a point $x^*$ such that $g'(x^*) = \gamma g(x^*)$, where $\gamma$ is the negative root of $\tfrac{1}{2}\sigma^2\gamma^2 + \mu\gamma = q$, then the value function has the form $V(x) = g(x)$ for $x \le x^*$ and $V(x) = g(x^*)\, e^{\gamma(x - x^*)}$ for $x > x^*$. The optimal stopping time is $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$.

Proof. The infinitesimal generator for Brownian motion with drift is $(\mathcal{A}f)(x) = \mu f'(x) + \tfrac{1}{2}\sigma^2 f''(x)$. It is well known that the ordinary differential equation (ODE) $\mathcal{A}\varphi = q\varphi$ has two linearly independent solutions $e^{\gamma_1 x}$ and $e^{\gamma_2 x}$, where $\gamma_1 > 0 > \gamma_2$ are the roots of $\tfrac{1}{2}\sigma^2\gamma^2 + \mu\gamma = q$. So $\varphi(x) = C_1 e^{\gamma_1 x} + C_2 e^{\gamma_2 x}$ ($C_1$ and $C_2$ are constants). Considering the boundary conditions $\varphi(x) \to 0$ as $x \to \infty$ and $\varphi > 0$, $C_1$ must be equal to zero and $C_2 > 0$, so $\varphi(x) = C_2 e^{\gamma_2 x}$. Equation (13) in Theorem 2 tells us that the point $x^*$ is determined by $g'(x^*) = \gamma_2 g(x^*)$, since $\varphi'(x)/\varphi(x) = \gamma_2$. Thus, the expression for the value function is easily obtained from (15).

As the simplest example, we take a concrete reward function together with specific parameter values; the problem then has a closed-form solution and an explicit optimal stopping time. We draw the value function and the point $x^*$ in Figure 1.
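For instance, with $\varphi(x) = e^{\gamma x}$ and the hypothetical linear reward $g(x) = K - x$, condition (13) reduces to $-1 = \gamma (K - x^*)$, that is, $x^* = K + 1/\gamma$. A Python sketch with hypothetical parameter values:

```python
import math

mu, sigma, q, K = 0.05, 0.3, 0.05, 10.0  # hypothetical parameters

# Negative root of (sigma^2/2)*gamma^2 + mu*gamma - q = 0
a, b, c = 0.5 * sigma**2, mu, -q
gamma = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)
assert gamma < 0 and abs(a * gamma**2 + b * gamma + c) < 1e-12

# Smooth fit g'(x*) = gamma * g(x*) with g(x) = K - x gives -1 = gamma*(K - x*)
x_star = K + 1.0 / gamma
print(gamma, x_star)
```

Starting above $x^*$, the investor waits for the first passage of $X$ below $x^*$; starting below it, stopping is immediate.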

3.2. Geometric Brownian Motion

As for geometric Brownian motion, that is, the process driven by $dX_t = \mu X_t\,dt + \sigma X_t\,dB_t$, we derive the following proposition.

Proposition 5. Let $g$ be the function in Assumption 1; if there exists a point $x^*$ such that $x^* g'(x^*) = \beta_2\, g(x^*)$, where $\beta_2$ is the negative root of $\tfrac{1}{2}\sigma^2\beta(\beta - 1) + \mu\beta = q$, then the value function has the form $V(x) = g(x)$ for $x \le x^*$ and $V(x) = g(x^*)\,(x/x^*)^{\beta_2}$ for $x > x^*$. The optimal stopping time is $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$.

Proof. The infinitesimal generator for geometric Brownian motion is $(\mathcal{A}f)(x) = \mu x f'(x) + \tfrac{1}{2}\sigma^2 x^2 f''(x)$. The associated ODE $\mathcal{A}\varphi = q\varphi$ has two linearly independent solutions $x^{\beta_1}$ and $x^{\beta_2}$, where $\beta_1 > 0$ and $\beta_2 < 0$ are the roots of $\tfrac{1}{2}\sigma^2\beta(\beta - 1) + \mu\beta = q$. By using the boundary conditions of Theorem 2, we find that $\varphi(x) = C x^{\beta_2}$, where $C$ is a constant bigger than zero. Then, by substituting $\varphi$ in Theorem 2, we deduce the value function and the point $x^*$.
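As a numerical illustration (the parameters and the reward $g(x) = K - x$ are hypothetical), the negative root $\beta_2$ and the threshold implied by $x^* g'(x^*) = \beta_2 g(x^*)$, namely $x^* = K\beta_2/(\beta_2 - 1)$, can be computed directly:

```python
import math

mu, sigma, q, K = 0.05, 0.2, 0.05, 10.0  # hypothetical parameters

# (sigma^2/2)*beta*(beta - 1) + mu*beta - q = 0, written as a*beta^2 + b*beta + c = 0
a = 0.5 * sigma**2
b = mu - 0.5 * sigma**2
c = -q
beta = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)  # negative root
assert beta < 0

# Smooth fit with g(x) = K - x: -x* = beta*(K - x*)  =>  x* = K*beta/(beta - 1)
x_star = K * beta / (beta - 1.0)
print(beta, x_star)  # beta = -2.5, x_star = 50/7 ≈ 7.1429
```

For these values the value function is $V(x) = g(x)$ below $x^*$ and $g(x^*)(x/x^*)^{\beta}$ above it.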

Remark 6. If the volatility $\sigma$ is equal to zero, then the drift $\mu$ should be smaller than zero; otherwise, the problem cannot be solved by Theorem 2. We take a simple reward function as an illustration. For $\sigma = 0$ and $\mu < 0$, Theorem 2 still yields the optimal value function and the optimal stopping time. If $\sigma = 0$ and $\mu > 0$, then $X_t = x e^{\mu t}$ is increasing for all time. By direct computation, we find the value function together with the optimal stopping time in this case as well. When the starting state is smaller than 1, one should take action at once. When the starting state is greater than 1, one will never invest, because the process is increasing and never reaches the optimal investment level. But, in this case, the point is not determined by (13), because the left-hand side and the right-hand side of (13) no longer coincide after substituting $\sigma$ and $\mu$.
Taking specific parameters, the value function for this reward function and the optimal stopping time are again explicit. This problem's solution is the key to finding the optimum in many applications in finance, and it has been considered for geometric Brownian motion from different points of view in a variety of articles; let us just mention [1–4, 7–9].

3.3. Ornstein-Uhlenbeck Process

In this subsection we consider the optimal stopping problem when $X$ is an Ornstein-Uhlenbeck process, $dX_t = -k X_t\,dt + \sigma\,dB_t$ with $k > 0$.

Proposition 7. Let $g$ be as in Assumption 1; if there exists a point $x^*$ satisfying (13), then the value function has the form (15), where $\varphi$ is the positive decreasing solution of $\mathcal{A}\varphi = q\varphi$. The optimal stopping time is $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$.

Proof. For the Ornstein-Uhlenbeck process, the infinitesimal generator is $(\mathcal{A}f)(x) = -kx f'(x) + \tfrac{1}{2}\sigma^2 f''(x)$. The ODE $\mathcal{A}\varphi = q\varphi$ has two linearly independent solutions, expressible in terms of parabolic cylinder functions. The solution satisfying the boundary conditions in Theorem 2 is $\varphi(x) = C\psi(x)$, where $C$ is a constant bigger than zero and $\psi$ is the positive decreasing solution. Thanks to the standard relations for these special functions, we obtain the value function and the point $x^*$.

Regarding a specific reward function, if there exists a point $x^*$ such that (13) holds, the problem has the form given in Proposition 7, with the corresponding optimal stopping time. When concrete parameters are chosen, the point $x^*$ is the solution of the smooth-fit equation (13).

Thanks to the standard recurrence relations, the equations can be reduced to expressions evaluable in Matlab. By numerical calculation, we get the point $x^*$, and we show the value function in Figure 2.

4. Lévy Processes with Jumps

4.1. The Jump Diffusion

Jump diffusion processes are processes of the form $X_t = x + \mu t + \sigma B_t + \sum_{i=1}^{N_t} Y_i$; their first passage times are studied in Kou and Wang [10]. Here, $N$ is a homogeneous Poisson process with intensity rate $\lambda$, $\mu$ is the drift, and $\sigma$ is the volatility; the jump sizes $Y_1, Y_2, \ldots$ are independent and identically distributed random variables. We also assume that the standard Brownian motion $B$ is independent of $N$. The common density of $Y_i$ is given by $p(y) = \eta e^{-\eta y}$, $y \ge 0$, where $\eta > 0$.

Proposition 8. Regarding the reward function $g$ in Assumption 1, if there exists a point $x^*$ such that $g'(x^*) = \gamma g(x^*)$, where $\gamma$ is the negative root of
$$\tfrac{1}{2}\sigma^2\gamma^2 + \mu\gamma + \lambda\Bigl(\frac{\eta}{\eta - \gamma} - 1\Bigr) = q,$$
then the value function has the form $V(x) = g(x)$ for $x \le x^*$ and $V(x) = g(x^*)\, e^{\gamma(x - x^*)}$ for $x > x^*$. The optimal stopping time is $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$.

Proof. In this case: (a) when $x \le x^*$, $V = g$, and for every $x$, $\mu g'(x) \le 0$, $g''(x) \le 0$, $g(x+y) - g(x) \le 0$ for $y \ge 0$, so $(\mathcal{A}g - qg)(x) \le 0$; (b) when $x > x^*$, $V(x) = g(x^*) e^{\gamma(x - x^*)}$ satisfies $\mathcal{A}V = qV$. Since $(\mathcal{A}V - qV)(x) \le 0$ for all $x$, the rest of the proof is the same as in Theorem 2.

Remark 9. We assume that as Kou and Wang [10] do, and we do not know whether the results hold for .
Let $h(\gamma)$ denote the left-hand side of the root equation in Proposition 8; after clearing the denominator, $h(\gamma) = q$ is a polynomial equation of degree three, so it has at most three real roots.
Next, we prove that this equation has exactly one negative root $\gamma$. On the negative half-line, $h$ is convex, with $h(0) = 0 < q$ and $h(\gamma) \to +\infty$ as $\gamma \to -\infty$, so there is exactly one root for $\gamma < 0$. Furthermore, since $h(\gamma) \to +\infty$ as $\gamma \uparrow \eta$, there is at least one root on $(0, \eta)$; similarly, there is at least one root on $(\eta, \infty)$, as $h \to -\infty$ as $\gamma \downarrow \eta$ and $h \to +\infty$ as $\gamma \to \infty$. But the equation is actually a polynomial equation of degree three; therefore, it can have at most three real roots. It follows that, on each of the intervals $(-\infty, 0)$, $(0, \eta)$, and $(\eta, \infty)$, there is exactly one root.
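A Python sketch of this root-finding (the parameter values are hypothetical, and the exponent $\psi(\gamma) = \mu\gamma + \tfrac{1}{2}\sigma^2\gamma^2 + \lambda(\eta/(\eta - \gamma) - 1)$ assumes upward exponential jumps as above): bisection locates the unique negative root of $\psi(\gamma) = q$, and, for the hypothetical reward $g(x) = K - x$, comparing with the pure diffusion case ($\lambda = 0$) illustrates that the decision point is larger with jumps.

```python
mu, sigma, lam, eta, q, K = 0.05, 0.2, 1.0, 2.0, 0.1, 10.0  # hypothetical values

def psi(gamma, lam):
    # Exponent of the jump diffusion with Exp(eta)-distributed upward jumps
    return mu * gamma + 0.5 * sigma**2 * gamma**2 + lam * (eta / (eta - gamma) - 1.0)

def negative_root(lam):
    # psi is convex left of eta, psi(0) = 0 < q, and psi -> +inf as gamma -> -inf,
    # so psi(gamma) = q has exactly one negative root; locate it by bisection
    lo, hi = -50.0, -1e-12
    assert psi(lo, lam) - q > 0 and psi(hi, lam) - q < 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if psi(mid, lam) - q > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma_jump = negative_root(lam)   # with jump risk
gamma_diff = negative_root(0.0)   # pure diffusion benchmark (lambda = 0)

# Smooth fit for the hypothetical reward g(x) = K - x: -1 = gamma * (K - x*)
x_jump = K + 1.0 / gamma_jump
x_diff = K + 1.0 / gamma_diff
print(gamma_jump, x_jump, x_diff)
assert x_jump > x_diff  # the decision point is larger after adding jumps
```

The same bisection works for any parameterization, since only the sign structure of $\psi - q$ on the negative half-line is used.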

As an example, we take the same reward function again. From Proposition 8, the point $x^*$ satisfies $g'(x^*) = \gamma g(x^*)$, where $\gamma$ is the negative root of the equation in Proposition 8; the value function takes the corresponding form, and the optimal stopping time is $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$.

If the parameters $\mu$, $\sigma$, $\lambda$, $\eta$, and $q$ are given concrete values, we can show the value function and the point $x^*$ in Figure 3. Numerical calculation gives the negative solution $\gamma$ of the root equation and the point $x^*$. Compared with the case of Brownian motion with drift, the decision point becomes larger after adding the jump process. Intuitively, as the risk in the system increases, investors are inclined to wait and observe for a longer time, so that the optimal stopping occurs later and the decision point gets greater.

The more general problem, in which the common density of $Y_i$ is a hyperexponential density, that is, a mixture of exponential densities with positive weights summing to one, has the following result.

Proposition 10. Let $g$ be as in Assumption 1; if there exists a point $x^*$ such that $g'(x^*) = \gamma g(x^*)$, where $\gamma$ is the negative root of the corresponding root equation, then the value function has the form $V(x) = g(x)$ for $x \le x^*$ and $V(x) = g(x^*)\, e^{\gamma(x - x^*)}$ for $x > x^*$. The optimal stopping time is $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$.

Remark 11. The root equation has finitely many roots, which are all real and distinct: several positive roots and a negative root. Indeed, define $h(\gamma)$ as the left-hand side of the root equation; $h$ is continuous and increasing in every interval between consecutive poles, so it has the stated positive roots, and if it also has a negative root, then the negative root is unique, as the equation has at most that many real roots. The detailed proof can be found in the corresponding theorem of Kou and Cai [11].

4.2. The Exponential Lévy-Type Stochastic Integral

The exponential Lévy-type stochastic integral is given by $X_t = x \exp\bigl\{\mu t + \sigma B_t + \sum_{i=1}^{N_t} Y_i\bigr\}$. Here, $N$ is a Poisson process with intensity rate $\lambda$, and $\mu$ and $\sigma$ are positive constants. The jump sizes $Y_1, Y_2, \ldots$ are independent and identically distributed random variables and independent of $B$. The common density of $Y_i$ is given by $p(y) = \eta e^{-\eta y}$, $y \ge 0$, where $\eta > 1$ ensures that $e^{Y_i}$ has a finite expectation.

Proposition 12. Given the reward function $g$ in Assumption 1, if there exists a point $x^*$ such that $x^* g'(x^*) = \beta g(x^*)$, where $\beta$ is the negative root of
$$\tfrac{1}{2}\sigma^2\beta^2 + \mu\beta + \lambda\Bigl(\frac{\eta}{\eta - \beta} - 1\Bigr) = q,$$
then the value function has the form $V(x) = g(x)$ for $x \le x^*$ and $V(x) = g(x^*)\,(x/x^*)^{\beta}$ for $x > x^*$. The optimal stopping time is $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$.

Proof. In this case: (a) when $x \le x^*$, because $g' \le 0$ and $g'' \le 0$, the diffusion part of $(\mathcal{A}g - qg)(x)$ is nonpositive; moreover, since the jumps are positive and $g$ is nonincreasing, $g(xe^{y}) - g(x) \le 0$ for $y \ge 0$, so $(\mathcal{A}g - qg)(x) \le 0$; (b) when $x > x^*$, $V(x) = g(x^*)(x/x^*)^{\beta}$ satisfies $\mathcal{A}V = qV$. Taking (a) and (b) together, we conclude that $(\mathcal{A}V - qV)(x) \le 0$ for every $x$. Hence, $V(x)$ is the upper bound for the expected reward. At the optimal stopping time $\tau^*$, $X_{\tau^*} = x^*$, attaining the upper bound.

Remark 13. The root equation in Proposition 12 has a unique negative root $\beta$. This can be obtained by proceeding as in Remark 11.
For the same reward function, $\beta$ is the negative root of the equation in Proposition 12; we find the point $x^*$ from $x^* g'(x^*) = \beta g(x^*)$, and the value function and the optimal stopping time take the forms given there. For a concrete choice of the parameters $\mu$, $\sigma$, $\lambda$, $\eta$, and $q$, we obtain $\beta$ and the point $x^*$, and we draw the graph of the value function in Figure 4. After adding the jump risk, the point $x^*$ is bigger than the corresponding value for geometric Brownian motion.

5. Connection with the Other Optimal Problems

As the reward function $g$ is as in Assumption 1, by Itô's formula,
$$e^{-qt} g(X_t) = g(x) + \int_0^t e^{-qs} (\mathcal{A}g - qg)(X_s)\,ds + M_t, \tag{90}$$
where $M$ is a local martingale. Taking expectations in (90) and noting that, as assumed in the paper, $g$ is bounded, $M$ is in fact a martingale with zero expectation. For a stopping time $\tau$,
$$E_x\bigl[e^{-q\tau} g(X_\tau)\bigr] = g(x) + E_x\Bigl[\int_0^\tau e^{-qs} (\mathcal{A}g - qg)(X_s)\,ds\Bigr],$$
from which we can see that, depending on the sign of $\mathcal{A}g - qg$, the problem (2) is equivalent to maximizing or minimizing the expected discounted integral of $(\mathcal{A}g - qg)(X_s)$. We conclude the solutions for these problems in the following theorem.
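The identity above is Dynkin's formula. A deterministic Python check for Brownian motion with drift uses $f(x) = x^2$ (chosen only because both sides then have closed-form expectations; it is not a reward function of Assumption 1) and a fixed time $\tau = t$, with hypothetical parameters:

```python
import math

mu, sigma, q, x, t = 0.1, 0.3, 0.05, 1.0, 1.0  # hypothetical parameters

# Moments of X_s = x + mu*s + sigma*B_s
m1 = lambda s: x + mu * s                        # E[X_s]
m2 = lambda s: (x + mu * s) ** 2 + sigma**2 * s  # E[X_s^2]

# Left-hand side: E[e^{-q t} f(X_t)] with f(x) = x^2
lhs = math.exp(-q * t) * m2(t)

# Right-hand side: f(x) + E[int_0^t e^{-q s} (Af - q f)(X_s) ds],
# where (Af)(x) = 2*mu*x + sigma^2 for f(x) = x^2
integrand = lambda s: math.exp(-q * s) * (2 * mu * m1(s) + sigma**2 - q * m2(s))
n = 20000
h = t / n
rhs = x**2 + h * (0.5 * integrand(0.0)
                  + sum(integrand(i * h) for i in range(1, n))
                  + 0.5 * integrand(t))

print(abs(lhs - rhs))  # numerically negligible
```

The two sides agree up to the trapezoidal quadrature error, which confirms the formula without any Monte Carlo simulation.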

Theorem 14. For a given reward function $g$ in Assumption 1, under the first sign condition on $\mathcal{A}g - qg$, the solution of the first problem has the form of Theorem 2, with the optimal stopping time $\tau^* = \inf\{t \ge 0 : X_t \le x^*\}$; under the opposite sign condition, the solution of the second problem takes the analogous form, with the same optimal stopping time. The function $\varphi$ is given by the solution of $\mathcal{A}\varphi = q\varphi$ satisfying the boundary conditions of Theorem 2, and the point $x^*$ is determined by (13).

Since this expression can be represented as the valuation of equity (see Appendix A.3 in Duffie [12]), we will consider the practical application in the stock market in future work.

6. Conclusions

In this paper, we have studied the optimal stopping problem for a continuous-time Markov process, and we have presented the explicit value function and optimal rule.

Three main contributions of this paper are as follows. First, we constructed the value functions and optimal rules for problems with a large class of reward functions, in contrast with previous studies that use specific functions. With simple conditions on the reward functions, we gave a rigorous mathematical proof of the optimal rule based on variational inequalities. Second, we derived the solution of the problem for a general Markov process. Meanwhile, we showed the explicit value functions and optimal stopping times for some concrete Markov processes, including diffusions and Lévy processes with jumps. Finally, we linked our results to other optimal problems, and we extended the explicit solution of the new optimal problem to a broad class of reward functions and a general continuous-time Markov process.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.