Abstract

Optimal control problems for one-dimensional diffusion processes in a bounded interval are considered. The aim is either to maximize or to minimize the time spent by the controlled processes in this interval. Exact solutions are obtained when the processes are symmetrical with respect to the origin. Approximate solutions are derived in the asymmetrical case. The one-barrier cases are also treated. Examples are presented.

1. Introduction

Let $\{X(t),\, t \geq 0\}$ be a one-dimensional controlled diffusion process defined by the stochastic differential equation (1.1), in which $u(\cdot)$ is the control variable, the drift and the infinitesimal variance are Borel measurable functions, the remaining coefficients are constants, and $\{B(t),\, t \geq 0\}$ is a standard Brownian motion. The set of admissible controls consists of Borel measurable functions.
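
In the notation that is standard in this literature, equation (1.1) presumably has a form such as

\[
dX(t) = f[X(t)]\,dt + b_0\,u(t)\,dt + \{v[X(t)]\}^{1/2}\,dB(t), \qquad t \geq 0,
\]

where the symbols $f$ (the drift), $b_0$ (the constant factor multiplying the control), and $v$ (the infinitesimal variance) are our assumptions rather than notation recovered from the paper.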

Remark 1.1. We assume that the solution of (1.1) exists for all $t \geq 0$ and is weakly unique.

We define, in (1.2), the first-passage time of the controlled process out of the interval, the initial state lying inside it. We want to find the control that minimizes the expected value of the cost function defined in (1.3), in which the weight on the squared control and the parameter multiplying the time spent in the interval are constants. Notice that if the latter parameter is negative, then the optimizer wants to maximize the survival time of the controlled process in the interval, taking the quadratic control costs into account. In general, there is a maximal value that this parameter can take; otherwise, the expected reward becomes infinite.
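
Concretely, in assumed notation (the endpoints $d_1 < d_2$, the weight $q_0 > 0$, and the parameter $\lambda$ are our symbols), the first-passage time (1.2) and the cost function (1.3) take forms such as

\[
T(x) = \inf\{t \geq 0 : X(t) \notin (d_1, d_2) \mid X(0) = x\}, \qquad d_1 < x < d_2,
\]
\[
J(x) = \int_0^{T(x)} \Big\{ \tfrac{1}{2}\, q_0\, u^2[X(t)] + \lambda \Big\}\, dt.
\]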

When the relation in (1.4) holds for some positive constant, we can, using a theorem in Whittle [1, p. 289], express the value function in terms of a mathematical expectation for the uncontrolled process obtained by setting $u \equiv 0$ in (1.1). Actually, for the result to hold, ultimate entry of the uncontrolled process into the stopping set must be certain, which is not a restrictive condition in the case of one-dimensional diffusion processes considered in finite intervals.
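
One common statement of this result, written in the assumed notation above and with $F$ denoting the value function, is the following sketch (the precise form of (1.4) in the paper may differ): if

\[
b_0^2 = \alpha\, q_0\, v(x) \quad \text{for some constant } \alpha > 0,
\]

then

\[
F(x) = -\frac{1}{\alpha}\, \ln E\big[\, e^{-\alpha \lambda\, T(x)} \,\big],
\]

where $T(x)$ is now the first-passage time of the uncontrolled process.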

In practice, the theorem in Whittle [1] provides a transformation that enables us to linearize the differential equation satisfied by the value function.

In Lefebvre [2], using symmetry, the author was able to obtain an explicit and exact expression for the optimal control when $X(t)$ is a one-dimensional controlled standard Brownian motion (so that the drift vanishes and the infinitesimal variance is constant). Notice that the relation in (1.4) does not hold in that case. The author assumed that the parameter in the cost function is negative, and he found the maximal value that this parameter can take.

Previously, Lefebvre [3] had computed the optimal control in a similar problem, but with a cost function different from the one defined above. We cannot appeal to the theorem in Whittle [1] in that case either. However, the author expressed the value function in terms of a mathematical expectation for an uncontrolled geometric Brownian motion.

In Section 2, we will generalize the results in Lefebvre [2] to one-dimensional diffusion processes for which the drift and the infinitesimal variance are symmetrical with respect to zero and the interval is centered at the origin. Important particular cases will be considered.

Next, in Section 3, we will treat the general symmetrical case, when the interval is not necessarily centered at the origin. In Section 4, we will consider processes for which the infinitesimal parameters are not symmetrical with respect to a given point. An approximate solution will then be proposed. In Section 5, we will present possible extensions, including the case of a single barrier. Finally, we will make some concluding remarks in Section 6.

2. Optimal Control in the Symmetrical Case with an Interval Centered at the Origin

In this section, we take the interval to be symmetrical about the origin. Assuming that it exists and that it is twice differentiable, we find that the value function defined in (1.5) satisfies the dynamic programming equation (2.1). Differentiating (2.1) with respect to the control and equating the derivative to zero, we deduce that the optimal control can be expressed as in (2.2). Substituting the optimal control into the dynamic programming equation (2.1), we obtain that the value function satisfies the nonlinear second-order ordinary differential equation (2.3), subject to the boundary conditions in (2.4).
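
Under the reconstructed dynamics and cost above, equations (2.1)-(2.4) presumably read as follows: the dynamic programming equation is

\[
0 = \min_{u} \Big\{ \tfrac{1}{2}\, q_0\, u^2 + \lambda + \big( f(x) + b_0 u \big) F'(x) + \tfrac{1}{2}\, v(x)\, F''(x) \Big\},
\]

its minimizer is

\[
u^*(x) = -\frac{b_0}{q_0}\, F'(x),
\]

and substituting this control back into the equation gives

\[
\lambda + f(x)\,F'(x) - \frac{b_0^2}{2 q_0}\,[F'(x)]^2 + \tfrac{1}{2}\, v(x)\, F''(x) = 0, \qquad F(d_1) = F(d_2) = 0,
\]

the boundary conditions expressing that no cost is incurred when the process starts on the boundary.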

Now, in general, solving nonlinear second-order differential equations is not an easy task. As mentioned previously, when the relation in (1.4) holds, there exists a transformation that enables us to linearize (2.3). Notice, however, that in order to obtain an explicit expression for the optimal control, one only needs the derivative of the value function. Hence, if we can find a boundary condition in terms of this derivative, rather than the boundary conditions in (2.4), then we can simplify our problem significantly, since we would only have to solve the first-order nonlinear (Riccati) differential equation (2.5), in which the unknown function, defined in (2.6), is the derivative of the value function.
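
In the same assumed notation, setting $G(x) := F'(x)$ turns the second-order equation above into the Riccati equation

\[
\tfrac{1}{2}\, v(x)\, G'(x) = \frac{b_0^2}{2 q_0}\, G^2(x) - f(x)\, G(x) - \lambda,
\]

and the optimal control is then $u^*(x) = -(b_0/q_0)\, G(x)$, so that the value function itself is never needed.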

Proposition 2.1. Assume that the drift function in (1.1) is odd and that the infinitesimal variance is an even function. Then the optimal control is given by (2.7), where the function appearing in it satisfies (2.5), subject to the condition (2.8).

Proof. The condition (2.8) follows from the fact that, by symmetry, when the parameter in the cost function is positive, the origin is the point at which the value function attains its maximum, whereas the value function attains its minimum at the origin when the parameter is negative. Indeed, the origin is the worst (resp., best) possible position when the optimizer is trying to minimize (resp., maximize) the time spent by the process in the interval, taking the quadratic control costs into account.

Remarks 2.2. (i) The solution to (2.5), subject to (2.8), might not be unique.
(ii) Notice that the symmetrical case includes the one where the drift is identically equal to 0 and the infinitesimal variance is a constant, so that the uncontrolled process is a Wiener process with zero drift.
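
As an illustration of Proposition 2.1, here is a minimal numerical sketch based on the reconstructed Riccati equation above. All coefficient choices (an odd linear drift, a unit infinitesimal variance, and the cost parameters) are hypothetical and are not taken from the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical coefficients (illustration only, not from the paper):
    # odd drift f(x) = -x, even diffusion v(x) = 1, b0 = q0 = 1,
    # lam < 0, so the optimizer is rewarded for staying in (-d, d).
    b0, q0, lam, d = 1.0, 1.0, -0.1, 1.0
    f = lambda x: -x
    v = lambda x: 1.0

    def riccati(x, G):
        # (v/2) G' = (b0^2 / (2 q0)) G^2 - f G - lam
        g = G[0]
        return [(2.0 / v(x)) * ((b0**2 / (2.0 * q0)) * g**2 - f(x) * g - lam)]

    # Integrate outward from the symmetry condition G(0) = 0, i.e., (2.8).
    sol = solve_ivp(riccati, [0.0, d], [0.0], dense_output=True, rtol=1e-8)

    def u_star(x):
        # G is odd by symmetry, so extend the solution to negative x.
        g = np.sign(x) * sol.sol(abs(x))[0]
        return -(b0 / q0) * g  # optimal control, as in (2.7)

    print(u_star(0.5), u_star(-0.5))

The printed values are negative for positive states and positive for negative states: with a negative parameter in the cost function, the control pushes the process toward the origin, exactly as the proof of Proposition 2.1 suggests it should.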

The previous proposition can be generalized as follows.

Corollary 2.3. If the constant factor that multiplies the control in (1.1) is replaced by an even function of the state, and if the hypotheses in Proposition 2.1 are satisfied, then the optimal control can be expressed in the same form as in Proposition 2.1, where the derivative of the value function is now a solution of the corresponding Riccati equation that satisfies the same condition at the origin.

We will now present an example for which we can determine the optimal control explicitly.

We consider the case when $X(t)$ is a controlled Bessel process. Moreover, we assume that the dimension parameter of the process belongs to the interval $(0, 2)$. The origin is then a regular boundary (see [4, p. 239]) for the uncontrolled process obtained by setting $u \equiv 0$. Notice that if the dimension parameter is equal to 1, then $X(t)$ becomes a controlled standard Brownian motion, which is the process considered in Lefebvre [2]. Therefore, this example generalizes the results in Lefebvre's paper.
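
In assumed notation, a controlled Bessel process of dimension $\delta$ can be written as

\[
dX(t) = \frac{\delta - 1}{2\, X(t)}\, dt + b_0\, u(t)\, dt + dB(t), \qquad \delta \in (0, 2),
\]

so that the uncontrolled drift $f(x) = (\delta - 1)/(2x)$ is an odd function and $v \equiv 1$ is even, and the hypotheses in Proposition 2.1 are satisfied; for $\delta = 1$ the drift vanishes and we recover a controlled standard Brownian motion.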

Here, the relation in (1.4) holds if there exists a positive constant for which the required equality between the noise, control, and cost coefficients is satisfied. Hence, we could appeal to the theorem in Whittle [1] in that case. We will instead treat the problem with the help of Proposition 2.1.

The differential equation that we must solve is then (2.5) with the Bessel drift. We find that the solution, given in (2.14), can be expressed in terms of Bessel functions and an arbitrary constant, the order of the Bessel functions being determined by the dimension parameter of the process.

The expression above for the function is appropriate when the parameter in the cost function is negative. However, when this parameter is positive, it is better to rewrite the solution in terms of modified Bessel functions, as in (2.16).

In order to determine the value of the arbitrary constant, we will use the condition (2.8). First, we consider the special case when the dimension parameter is equal to 1, so that the order of the Bessel functions is zero, and the parameter in the cost function is negative. The function and the quantities appearing in it then take explicit forms.

Next, when the order is equal to zero, we have the series formula for the Bessel function of the second kind (see [5, p. 358]). It follows, after the appropriate substitutions, that we can determine the behavior of the solution near the origin.

Finally, making use of the limiting forms of the Bessel functions for small arguments (see [5, p. 360]), we obtain the limit of the solution at the origin. It follows that we must set the arbitrary constant equal to 0, which determines the function completely. We then deduce from (2.7) that the optimal control is given in closed form. This formula for the optimal control is the same as the one obtained by Lefebvre [2].

Now, in the general case, proceeding as previously, we find that, when the parameter in the cost function is negative, the function defined in (2.14) has a computable limiting form as its argument decreases to zero. This expression may be rewritten in terms of a constant that depends on the order of the Bessel functions. Multiplying the numerator and the denominator by a suitable factor, we deduce that, for the condition (2.8) to be satisfied, the first arbitrary constant must be set equal to 0, which implies that a second constant vanishes as well. It follows that the function is given explicitly in terms of Bessel functions. This expression is valid as long as its denominator is positive, which is tantamount to saying that the parameter that represents the instantaneous reward given for survival in the interval must not be too large.

Finally, the optimal control follows from (2.7).

Now, for a particular value of the order of the Bessel functions, it turns out that the condition (2.8) is satisfied for any value of the arbitrary constant, so that the solution is not unique. However, this does not entail that we can choose just any constant. For instance, in a particular numerical case, if we choose the constant equal to 0, as in the case treated previously, then one can check that the resulting optimal control is negative for positive values of the state, which is logical because the optimizer wants to maximize the survival time in the interval. But if we let the constant tend to infinity, the optimal control becomes strictly positive for positive values of the state. Thus, when the solution to (2.5), (2.8) is not unique, one must use other arguments to find the optimal control. One can obviously check whether the expression obtained for the optimal control does indeed correspond to a minimum (or a maximum in absolute value). In the particular case considered previously, one particular choice of the arbitrary constant yields a function that satisfies all the conditions of the optimal control problem set up in Section 1 and leads to a valid expression for the optimal control.

Next, if the parameter in the cost function is positive, we deduce from the function in (2.16) that the optimal control is given explicitly, except for the same particular value of the order as above; for that value, again we do not obtain a unique solution to (2.5) and (2.8).

Moreover, contrary to the case when the parameter is negative, there is no constraint on this parameter when it is positive. That is, we can impose as large a penalty as we want for survival in the continuation region.

3. Optimal Control in the Symmetrical General Case

In this section, we assume that the endpoints of the interval are not necessarily symmetrical with respect to the origin. Moreover, we assume that there is a transformation of the stochastic process such that the infinitesimal parameters of the transformed process are symmetrical with respect to zero. Then the optimal control problem reduces to the one presented in the previous section.

A simple example of such a situation is the case when $X(t)$ is a one-dimensional controlled standard Brownian motion in an interval that is not centered at the origin. Then one can simply translate the process to obtain a controlled Brownian motion with zero drift in an interval that is symmetrical about zero. We can then apply Proposition 2.1 to find the optimal control.

A more interesting example is the following one: assume that the controlled process $X(t)$ is a controlled geometric Brownian motion. Since this process is strictly positive, the interval cannot be symmetrical with respect to the origin. Let us take the interval to be $(1/d, d)$, where $d > 1$.
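
In assumed notation, the controlled geometric Brownian motion can be written as

\[
dX(t) = \mu X(t)\, dt + b_0 X(t)\, u(t)\, dt + \sigma X(t)\, dB(t),
\]

and the point of the choice of interval $(1/d, d)$ is that it becomes $(-\ln d, \ln d)$, an interval symmetrical about the origin, under the logarithmic transformation used below.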

Notice that the relation in (1.4) only holds in a special case. To obtain the control that minimizes the expected value of the cost function defined in (1.3), we will transform the geometric Brownian motion into a Wiener process by setting $Y(t) := \ln X(t)$.

The infinitesimal parameters of the process $\{Y(t),\, t \geq 0\}$ are obtained as in [6, p. 64].
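
Under the assumed dynamics above, Itô's formula gives

\[
dY(t) = \Big( \mu - \frac{\sigma^2}{2} \Big)\, dt + b_0\, u(t)\, dt + \sigma\, dB(t),
\]

so that the infinitesimal mean and variance of $\{Y(t)\}$ are $\mu - \sigma^2/2 + b_0 u$ and $\sigma^2$, respectively; in particular, if $\sigma = 1$ and $\mu = 1/2$, the transformed process is a controlled standard Brownian motion, which is consistent with what follows.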

Hence, we can write down the stochastic differential equation satisfied by $Y(t)$; that is, $\{Y(t)\}$ is a controlled standard Brownian motion. Moreover, the random variable defined in (1.2) becomes the first exit time of $Y(t)$ from the interval $(-\ln d, \ln d)$, which is symmetrical with respect to the origin.

We can find the derivative of the value function, from which the optimal control is obtained at once, for any choice of the parameters. We will present the solution for one particular choice of the constants in the model and in the cost function. We then must solve the corresponding nonlinear ordinary differential equation; the solution that satisfies the condition at the origin can be written explicitly. Hence, from Corollary 2.3, we can state the optimal control in closed form. In terms of the original process, the solution is only valid as long as it remains finite; that is, for our choice of the other constants, the parameter that multiplies the time spent in the interval must not be too large. The optimal control is plotted in Figure 1 for a particular set of parameter values. Notice that the control is positive when the state is smaller than 1 and negative when it is larger than 1, which is logical because the optimizer wants to maximize the survival time in the interval. However, the optimal control is not symmetrical with respect to 1.

4. Approximate Optimal Control in the Asymmetrical Case

We will now consider the case when the infinitesimal parameters of the controlled process do not satisfy the hypotheses in Proposition 2.1. In order to obtain the optimal control without having to find the value function explicitly, we need a condition on its derivative. If we could determine the value of the state in the interval at which the value function has a maximum or a minimum, then we would set the derivative equal to zero at that point.

An approximate solution can be obtained by finding the value of the initial state that maximizes the expected value of the time it takes the corresponding uncontrolled process to leave the interval. Consider this expected value as a function of the initial state. This function satisfies the ordinary differential equation given below (see [6, p. 220]), and the boundary conditions are obviously that the function vanishes at both endpoints of the interval.
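
Writing $m(x)$ for this expected exit time (the symbol is ours) and using the reconstructed notation of Section 1, the equation in question is the standard one:

\[
\tfrac{1}{2}\, v(x)\, m''(x) + f(x)\, m'(x) = -1, \qquad m(d_1) = m(d_2) = 0.
\]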

We can state the following proposition.

Proposition 4.1. Let the maximizing point be the value of the initial state that maximizes the function defined previously. The optimal control is approximately given by (2.7), where the function appearing in it satisfies (2.5), subject to the condition that it vanishes at this maximizing point.

To illustrate this result, we will present an example for which we can find the exact optimal control. We will then be able to assess the quality of the approximation proposed previously.

Let $X(t)$ be a controlled Wiener process with constant drift and variance parameter. Because the relation in (1.4) holds in this case and ultimate entry of the uncontrolled process into the stopping set is certain, we can indeed appeal to Whittle's theorem to obtain the control that minimizes the expected value of the cost function defined in (1.3).

Assume that the parameter in the cost function is positive, so that the optimizer wants to leave the interval as soon as possible. We deduce from Whittle's theorem that the value function can be expressed in terms of the moment-generating function of the first-passage time, which is the same as the random variable in (1.2) but for the uncontrolled process obtained by setting the control equal to zero.

It is a simple matter to compute this mathematical expectation explicitly in terms of exponential functions. We then obtain that the exact optimal control is given in closed form.

Now, the expected exit time of the uncontrolled process satisfies the ordinary differential equation stated above. The unique solution that satisfies the boundary conditions can be written down explicitly, and the value of the initial state that maximizes it is obtained by differentiation.
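
For instance, for a constant drift $\mu$ and variance parameter $\sigma^2$ (our symbols), the solution of the boundary value problem above is

\[
m(x) = A + B\, e^{-k x} - \frac{x}{\mu}, \qquad k := \frac{2\mu}{\sigma^2},
\]

with

\[
B = \frac{d_1 - d_2}{\mu \left( e^{-k d_1} - e^{-k d_2} \right)}, \qquad A = \frac{d_1}{\mu} - B\, e^{-k d_1},
\]

and setting $m'(x) = 0$ gives the maximizing point

\[
x_0 = \frac{1}{k}\, \ln\!\left( -\mu k B \right),
\]

which is indeed a maximum, since $m''(x_0) = k^2 B\, e^{-k x_0} < 0$ (note that $B < 0$ when $\mu > 0$).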

Next, from (2.2), the optimal control is expressed in terms of the derivative of the value function, which satisfies the corresponding nonlinear differential equation. We find that the solution of this equation can be written explicitly, up to a constant that is uniquely determined from the condition at the maximizing point.

The expression that we obtain for the approximate optimal control, by multiplying the function just found by the appropriate constant factor, looks quite different from the exact optimal control. To compare the two solutions, we consider a special numerical case. We then find the constant explicitly, and the value of the initial state that maximizes the expected exit time is approximately 0.7024. We plotted the two controls in Figure 2. Notice how close the two curves are.
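
Here is a minimal numerical sketch of Proposition 4.1 for this example, using the closed form for $m(x)$ given above; all parameter values are hypothetical choices, not the ones behind Figure 2.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize_scalar

    # Hypothetical parameter values (illustration only, not from the paper):
    mu, sigma, d1, d2 = 0.5, 1.0, -1.0, 1.0
    b0, q0, lam = 1.0, 1.0, 0.2      # lam > 0: leave the interval quickly

    # Expected exit time of the uncontrolled process:
    # (sigma^2/2) m'' + mu m' = -1, m(d1) = m(d2) = 0.
    k = 2.0 * mu / sigma**2
    B = (d1 - d2) / (mu * (np.exp(-k * d1) - np.exp(-k * d2)))
    A = d1 / mu - B * np.exp(-k * d1)
    m = lambda x: A + B * np.exp(-k * x) - x / mu

    # x0 = argmax of m on (d1, d2): the approximate condition is G(x0) = 0.
    x0 = minimize_scalar(lambda x: -m(x), bounds=(d1, d2), method="bounded").x

    # Riccati equation (2.5): (sigma^2/2) G' = (b0^2/(2 q0)) G^2 - mu G - lam.
    ricc = lambda x, G: [(2.0 / sigma**2)
                         * ((b0**2 / (2.0 * q0)) * G[0]**2 - mu * G[0] - lam)]
    right = solve_ivp(ricc, [x0, d2], [0.0], dense_output=True, rtol=1e-8)
    left = solve_ivp(ricc, [x0, d1], [0.0], dense_output=True, rtol=1e-8)

    def u_approx(x):
        branch = right if x >= x0 else left
        return -(b0 / q0) * branch.sol(x)[0]  # approximate control, as in (2.7)

    print(f"x0 = {x0:.4f}, u(0.5) = {u_approx(0.5):.4f}")

With these choices, the approximate control is positive to the right of the maximizing point and negative to its left, pushing the process toward the boundary on each side, as it should when the parameter in the cost function is positive.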

5. Extensions

To complete this work, we will consider two possible extensions of the results presented. First, suppose that the random variable defined in (1.2) is replaced by the first-passage time to a single barrier. That is, we want to solve a one-barrier, rather than a two-barrier, problem. To do so, we can introduce an artificial second barrier. In general, it will be necessary to find a transformation for which the infinitesimal parameters of the uncontrolled process corresponding to the transformed process satisfy the hypotheses in Proposition 2.1. If we can find such a transformation, then we can try to obtain the optimal control for the transformed process. Finally, we must express the optimal control in terms of the original variable and let the artificial barrier tend to plus infinity (resp., minus infinity) if it lies above (resp., below) the real one.

Remark 5.1. If there is a natural boundary at the origin, for example, and if the artificial barrier lies below the real one, then we would take the limit as the artificial barrier decreases to zero.

We will now present an example in which the technique described previously is used. We consider the controlled geometric Brownian motion defined in Section 3, with particular values of its parameters. We let the first-passage time be the time needed to reach a single barrier, and we introduce an artificial second barrier. Notice that, under the logarithmic transformation, the two barriers become symmetrical with respect to the origin.

Next, we again set $Y(t) := \ln X(t)$. We then find that $\{Y(t)\}$ is a controlled standard Brownian motion, and the first-passage time becomes the exit time of $Y(t)$ from an interval that is symmetrical about the origin. Hence, we can appeal to Proposition 2.1 to determine the optimal control.

Assume first that the parameter in (5.2) is such that Whittle's theorem applies. The optimal control is then given in terms of the derivative of the value function and, as in Section 4, the value function can be expressed in terms of the moment-generating function of the first-passage time of the uncontrolled process that corresponds to the transformed process.

The moment-generating function satisfies a second-order ordinary differential equation whose general solution can be written in terms of two arbitrary constants. We assume that the parameter in the cost function is positive. Then, for the solution to remain bounded, we must set one of the constants in the general solution equal to zero. Finally, making use of the boundary condition at the barrier, we deduce that the optimal control is constant.

If we do not appeal to Whittle's theorem, we must instead solve the nonlinear first-order differential equation of Section 3, subject to the appropriate boundary condition. We find the solution explicitly, and the expression for the optimal control follows.

In terms of the original variable, we can then take the limit as the artificial barrier tends to infinity; since the limit is the same for any positive value of the constant involved, we retrieve the formula for the optimal control obtained above.

Next, we will treat the case when the parameter in (5.2) is such that Whittle's theorem does not apply. The optimal control is then obtained as follows.

In terms of the transformed variable, the optimal control is expressed through a solution of the Riccati equation of Section 3. When the parameter in the cost function is positive, the solution that satisfies the condition at the origin can be written in terms of modified Bessel functions. Finally, using the asymptotic expansions of these functions for large arguments (see [5, p. 377]), we find that, in the limit, we obtain the same optimal control as in the case to which Whittle's theorem applies.

Remarks 5.2. (i) If we take the limit as the artificial barrier decreases to zero instead, then, making use of the limiting forms of the modified Bessel functions for small arguments (see [5, p. 375]), we obtain the corresponding optimal control. (ii) We can also try to solve the differential equation satisfied by the value function directly. However, the solution that we are looking for must satisfy the appropriate condition at 1, because 1 is the value of the original variable that corresponds to the origin of the transformed one.

Now, in Corollary 2.3, we mentioned that Proposition 2.1 could be generalized by replacing the constant factor multiplying the control in (1.1) by an even function of the state. Another extension of Proposition 2.1 is to generalize the cost function defined in (1.3) by letting the weight on the squared control be an even function of the state.
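
In assumed notation, the generalized cost function is

\[
J(x) = \int_0^{T(x)} \Big\{ \tfrac{1}{2}\, q[X(t)]\, u^2[X(t)] + \lambda \Big\}\, dt,
\]

where the weight $q(\cdot)$ on the squared control is an even, positive function rather than a constant.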

To illustrate this result, we consider a particular controlled Ornstein-Uhlenbeck process, and we take a particular even function as the weight on the squared control in the cost. Then, we find that the optimal control has the same form as before, and that the derivative of the value function satisfies the corresponding nonlinear differential equation, subject to the condition at the origin.

Next, by symmetry, the optimal control is an odd function of the state (as is the derivative of the value function). Hence, we can restrict ourselves to the nonnegative half of the interval. The solution of the differential equation that satisfies the condition at the origin can be expressed in terms of "erf", the error function. It follows that the optimal control is obtained in closed form for nonnegative values of the state, and it is extended to the whole interval by symmetry. This function is plotted in Figure 3.

6. Conclusion

We have shown that when the LQG homing problem that we want to solve possesses a certain symmetry, it is not necessary to obtain the value function explicitly; only its derivative is needed to determine the optimal control. Using this result, we were able to solve various problems for which Whittle's theorem does not apply. In Section 4, we proposed an approximate solution in the case when the infinitesimal parameters of the controlled processes are not symmetrical with respect to the origin.

Many papers have been written on LQG homing problems, in particular by the first author (see, e.g., [7]) and recently by Makasu [8]. In most cases, the problems considered were only for one-dimensional processes, because to apply Whittle's theorem a certain relation must hold between the noise and control terms; this relation generally does not hold in two or more dimensions. Furthermore, even if the relation in question holds, we still must solve a nontrivial probability problem. More precisely, we need to evaluate the moment-generating function of a first-passage time. To do so, we must find the solution of a Kolmogorov backward equation that satisfies the appropriate boundary conditions.

Proceeding as we did in this paper, we could simplify, at least in the symmetrical case, the differential equation problem, even in more than one dimension. Therefore, we should be able to solve more realistic problems. Such problems will also have interesting applications.

Finally, in order to be able to treat real-life applications, we should try to find a way to solve problems that are not symmetrical and for which Whittle’s theorem does not apply. This could be achieved by finding a transformation that linearizes the differential equations that we need to solve.

Acknowledgments

The authors are very grateful to the anonymous reviewer whose constructive comments helped to improve their paper.