#### Abstract

Six simple piecewise deterministic oil production models from economics are solved using tools available in the theory of piecewise deterministic optimal control problems.

#### 1. Introduction

Six simple piecewise deterministic models for optimal oil-owner behavior are presented. Their central property is sudden jumps in the states. The aim of this paper is to show, in admittedly very simple models, how the available tools for piecewise deterministic models, namely the HJB equation and the maximum principle, can be used to solve these models analytically. We are looking for solutions given by explicit formulas, which can be obtained only if the models are simple enough. The models may be too simple to be of much interest in themselves, but they can provide some intuition about features that optimal solutions may have in more complicated models.

Piecewise deterministic models have been used a number of times in economic problems in the literature; a few scattered references containing such applications are given in [1–4]. I have not been able to find references directly concerned with piecewise deterministic oil production problems. For different probability structures, and for discrete time, a host of related problems has been discussed in the literature; references to that literature have been left out, with one exception. Problems of control of jump diffusions, see [5], encompass piecewise deterministic problems, and some problems appearing in [5] are related to the ones discussed below. A classic reference on piecewise deterministic control problems is [2].

In all models below, an unbounded number of jumps in the state can occur, at times τ₁ < τ₂ < ⋯, and, when τ_j is given, the waiting time τ_{j+1} − τ_j is exponentially distributed (all waiting times independent). Sometimes, the size of the jumps is influenced by stochastic variables V_j. Let τ₀ = 0. At any time t, we imagine that the control values chosen can be allowed to depend on what has happened, that is, on the τ_j and V_j for which τ_j ≤ t, but not on future events, that is, on the τ_j and V_j for which τ_j > t. Such controls are called nonanticipative. The corresponding state solutions are then also nonanticipative. (A general set-up, with further explanations, is given in the Appendix.) Frequently below, the state will be the wealth of the oil owner. In infinite-horizon economic models, the weakest terminal condition that is natural to apply is an expectational no-Ponzi-game condition; namely, the discounted wealth must be nonnegative in expectation in the limit. Note that stronger conditions will be used in some of the models presented in the sequel.
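The jump mechanism just described is easy to simulate. The sketch below assumes a constant intensity `lam`; the function name and the parameter values are illustrative, not taken from the paper:

```python
import random

def jump_times(lam, T, rng):
    """Jump times tau_1 < tau_2 < ... in [0, T]: successive waiting
    times tau_{j+1} - tau_j are i.i.d. exponential with intensity lam."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam)   # exponential waiting time
        if t > T:
            return times
        times.append(t)
```

A nonanticipative control evaluated at time t may look at the entries of `times` that are less than or equal to t, but at nothing beyond.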

#### 2. Model 1

Consider the optimization problem (1), where u is the control, subject to the state dynamics (2) and (3), where the parameters appearing are fixed positive constants. There is a fixed intensity λ of jumps; that is, for each j, given τ_j, the waiting time τ_{j+1} − τ_j is exponentially distributed with intensity λ (all waiting times independent).

The interpretation of the model is that the control u is the consumption rate, the jump size is the value (in dollars) of an oil field found at the jump time, the state is wealth, and wealth earns interest. Oil fields are sold immediately after discovery.

Let us solve problem (1)–(3) by using the extremal method (see the Appendix). Solving the first-order condition for a maximum of the Hamiltonian with respect to u, we obtain the control that maximizes the Hamiltonian.

The adjoint equation (see in Appendix) is we guess that we do not need the fact that depends on arbitrary initial points , as in appendix. Equation (4) is satisfied by , (we try the possibility that is even independent of ).

Because the instantaneous utility is concave in the control, the dynamics are linear, and the candidate adjoint is independent of the state and the jump history, sufficient conditions based on concavity (see Theorem 3 in the Appendix) give us that the candidate control is an optimal control for problem (1)–(3). The optimal control is independent of the state and of the jump times.

Write . For , we have that . The solution of this equation equals, for , To see this, find and simply check that the differential equation is satisfied. Moreover, evidently satisfies . (To ensure that a.s., we could have postulated at the outset that is so large that .)

Write , for , where .

We are now going to replace the bequest function by a terminal constraint on expected terminal wealth. For this purpose, we vary the constant entering the candidate control, and hence also the corresponding state path and its terminal value. Consider the expectation of the terminal wealth. We will not give an explicit formula for this expectation, but we mention that it can be calculated in two steps, first given that a certain number of jumps have happened in the horizon, and then taking the expectation with the number of jumps being stochastic (the rule of double expectations is then used). The expectation of the sum of the jump contributions is positive, and the coefficient of the varied constant is negative. There thus exists a unique positive value of the constant such that the expected terminal wealth equals the prescribed value.
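The expected terminal wealth just discussed can be checked by direct Monte Carlo against a closed-form benchmark. The sketch assumes zero consumption and a deterministic field value `B` (illustrative assumptions, not the paper's parameters); each find of size B at time τ grows to B·e^{r(T−τ)}, so the benchmark mean is x0·e^{rT} + λB(e^{rT} − 1)/r:

```python
import math
import random

def terminal_wealth(x0, r, B, lam, T, rng):
    """Exact integration between finds: with zero consumption, wealth
    grows by the factor e^{r*gap} between finds and jumps by B at each."""
    x, t = x0, 0.0
    while True:
        gap = rng.expovariate(lam)
        if t + gap >= T:
            return x * math.exp(r * (T - t))
        t += gap
        x = x * math.exp(r * gap) + B

rng = random.Random(42)
runs = 40000
mc = sum(terminal_wealth(1.0, 0.05, 0.5, 1.0, 1.0, rng) for _ in range(runs)) / runs
exact = math.exp(0.05) + 1.0 * 0.5 * (math.exp(0.05) - 1.0) / 0.05
# mc and exact agree up to Monte Carlo accuracy
```

Because the simulated mean is increasing in the jump size and decreasing in any fixed consumption level, a root-finding step over the varied constant would recover the unique value meeting the terminal constraint.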

If we drop the bequest term in the criterion but add the terminal constraint, the free-end optimality obtained in the original problem evidently means optimality in the end-constrained problem (the value found above appears in the optimal control, though now not in the criterion). (Alternatively, we could use the sufficient conditions for end-constrained problems in Theorem 3 in the Appendix to obtain optimality in the present end-constrained problem. This would require checking one further condition, which is easily done.)

Let us discuss a little more the value of for large values of . Now, And, continuing, we get .

Then, .

Now, with probability 1, an infinite number of jumps occur on the infinite horizon. For a long horizon, the finite-horizon constant is close to its infinite-horizon counterpart, so the optimal control is approximately the stationary one. Note that both constants are increasing in the jump intensity, so a larger intensity means we consume more in the beginning. In the problem where the bequest function is replaced by the terminal constraint, the optimal control depends on the horizon and on the prescribed terminal level. It is immediately seen that the expected terminal wealth is increasing in the initial wealth and in each jump size, indicating, for a long horizon, that the optimal quantities have these monotonicity properties as well. This conclusion actually follows for any horizon, because the expected terminal wealth is evidently increasing in the initial wealth and in each jump size.

We may instead assume that the jump sizes are stochastic, taking values in a common bounded set, with the sizes independent of each other and of the jump times. If we then assume that the expected jump size equals the deterministic size used above, the solution in this problem is the same as the one above.

Note that we must assume that we have a deal with the bank in which our wealth is placed that it accepts the above behavior. That is, before time 0, we have obtained the bank's acceptance of operating with this type of admissible solutions, which means that only in expectation do we leave the required wealth in the bank. In actual runs, sometimes we leave a positive wealth in the bank (which the bank keeps), and in other runs a negative wealth (debt) that the bank has to cover.

Consider now the case where , in (1). Then, for , when . In this case, is optimal in the problem where is added as an end constraint. (See the appendix, .) The latter condition is equivalent to the so-called no-Ponzi-game condition.

#### 3. Model 2

(This model is related to Exercise 4.1 in Øksendal and Sulem [5].)

Consider the following problem, where u is the control, subject to the dynamics below, where the start value is a given positive number and where, after the jth jump, the waiting time for the next jump is exponentially distributed with an intensity that is decreasing towards zero in j (all waiting times independent).

In contrast to problem (1)–(3), now the jumps are linearly dependent on the state. To defend such a feature, one might argue that the richer we are, the more we are able to generate large jumps (a jump may actually represent a collection of oil finds). On the other hand, we will assume that successive jumps occur with smaller and smaller intensities.

It is easy to see that the current value function has the stated form when we replace the start point by an arbitrary start value x. For any pair (u, X) satisfying (10), the state path scales proportionally with the start value, so the value of the criterion for the scaled pair is proportional to the corresponding function of x, and so is the optimal value. A similar argument works for the optimal value of the criterion when j jumps have already occurred at time 0. With this form inserted, the current value function satisfies the following HJB equation. The first-order condition for a maximum determines the maximizing control; inserting it, dividing, and rearranging yields (14), and dividing once more yields (15). Let a constant satisfy (16). Evidently, relationship (16) would be obtained from (15) if all jump intensities were zero; in that case, the optimal value function would be the corresponding no-jump value.

Now, assume first, for some N, that the intensities vanish from the Nth jump on. Then, the value after N jumps is the optimal value in the problem with no jumps. Given this, (14) determines the earlier constants by backward induction. It must be the case that the sequence is decreasing. Given the same start point, the optimal value when n + 1 jumps have occurred already at time 0 must be smaller than the optimal value when n jumps have occurred at time 0; in the former problem, prospective jumps have smaller probabilities of happening. (This can also be shown directly by backwards induction, using (15) and the fact that the intensities are decreasing to obtain a contradiction.) As we will let N vary, denote the sequence defined by (14) for this N accordingly.

Now, , so . In fact, as is easily seen, this holds for all : . (Compare the optimal values in the case where jump occurs at (before) in the two problems where , , and where , .) Assume now that for all and let . (It is shown below that, for some , , for all , .) Then, , and the ’s satisfy (14).

Let be the solution for . By (11), , where . Choose the smallest such that . If jumps occur at once at , while further jumps occur with intensity , then would equal . Hence, . For any admissible solution , , for all ; hence, for , ≤ when . So in the appendix is satisfied. Hence, sufficient conditions hold (see Theorem 2 in Appendix) and , , are optimal. That is, if and + for (see (11)), then is optimal ( evidently satisfies , for all ). Hence, . (In the appendix, for the sufficiency of the HJB equation to hold, it is required that is bounded if (for , we need boundedness for in all bounded intervals). This is not the case here. But we could have replaced by , being the control. Assume that we require to be bounded in the above manner. Now, is so bounded, and it is then optimal in the set of such bounded ’s.)

We can show that the ’s are increasing in and in each , . The simplest argument is that this must be so, when we now know that is the optimal value function after jumps at (before) .

If all , then, using (16), when , where . (The inequality implies , so . In particular, . The above calculations show that in the problem where , for , the optimal value functions are , where for , given by backward induction using (14) for , again . Evidently, . Now, for and, from (16), it follows that when .) Now, relate the present case to what happened in Model 1. As and , in each interval , ( is the optimal control in Model 1); moreover, when passes each changes by a factor .

#### 4. Model 3

In this model, the physical volume of oil production is constant, but the oil price jumps up and down at Poisson distributed time points. Consider the problem where u is the control, subject to the differential equations below, where the price takes a finite number of values with given probabilities, all price draws are identically distributed, and the waiting times between jumps are exponentially distributed with a fixed intensity (all random variables independent). The two states, wealth and the price, are governed by (18) (no jumps in wealth) and by the price dynamics, where the parameters are given constants; in fact, it will be convenient to assume a normalization of them. The end constraint on terminal wealth is required to hold. Here, the first state is wealth, which earns interest.

Let us use the extremal method (see the Appendix) for solving this problem. In what follows, it is guessed that the adjoint functions do not depend on the arbitrarily given initial points (which appear in the Appendix). We make the guess that the adjoint function corresponding to wealth is simply a constant, an unknown, independent of the jump history. It does satisfy the adjoint equation (the conditional expectation with respect to the price appearing there reduces to the function itself, since the function does not depend on the price). The adjoint function corresponding to the price satisfies its own adjoint equation with a free terminal condition; we do not need the formula for it.

Next, the maximum of the Hamiltonian is obtained for the control satisfying the first-order condition, which expresses the consumption rate in terms of the unknown adjoint constant.

Now, , so where .

Note that, for any , . We want to satisfy the condition ; that is, which determines . From now on, denote the solutions and by and , with . Sufficient conditions (see Theorem 3 in the appendix) are easily seen to be satisfied. We need to check the end condition for all admissible solutions (see in the appendix). The terminal condition is , for any admissible , and by construction satisfies , so the just-mentioned end condition is satisfied. We hence have optimality of , and among admissible triples for which .

In the infinite-horizon case, the end condition required for using the sufficient condition in Theorem 2 in the Appendix is the expectational no-Ponzi-game condition. The condition determining the unknown constant now simplifies correspondingly. Moreover, we now have optimality among all triples satisfying this end condition (which implies the next-to-last inequality). Note that the constant, and so the control, quite expectedly, is increasing in the expected price.

#### 5. Model 4

In this example, the control is the rate of production (production per unit of time) of oil. The oil price jumps up or down between two values at Poisson distributed time points. The income the owner does not consume can be used to increase the rate of production, with the change being proportional to the unconsumed income. (If this is negative, it means that he takes money out of his business and runs it down.) For simplicity, the proportionality factor is set equal to 1, and the price simply jumps between the values 1 and 2. So consider the problem below, where the waiting times between price jumps are exponentially distributed with a fixed intensity (all waiting times independent). There are no jumps in the production rate, whose initial value is fixed.

Let the current optimal value function sought depend on the production rate and on the current price. The current value HJB equation is as follows. Let us try value functions linear in the production rate, with one unknown coefficient for each of the two price levels. Then the maximizing control can be found in each case. In the two cases where the price equals 1 and where it equals 2, we get the two equations (26) and (27). The equations can be solved for positive coefficients (shown in a moment). Automatically, the solution corresponding to the candidate control satisfies the state constraint a.s. for all t.

To show the existence of a solution of (26) and (27), first note the values obtained at the endpoints when the two coefficients are set equal in each equation. Denote the right-hand sides of (26) and (27) by two functions of the pair of coefficients; both are increasing in their second argument and decreasing in their first (the only difference between them is the third term in the formulas). Thus, (26) can be uniquely solved for the second coefficient as a function of the first on the relevant interval; this solution function is evidently increasing. Consider then the difference function obtained by inserting this solution into (27). It has opposite signs at the endpoints of the interval, so a point exists at which it vanishes. This point, together with the corresponding value of the solution function, satisfies both (27) and (26).
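The existence argument above is an intermediate-value argument: a continuous difference function changes sign on an interval, so it has a root. Numerically, such a root is found by bisection. The sketch below demonstrates this on a toy function, since (26) and (27) themselves are model-specific:

```python
def bisect(f, lo, hi, tol=1e-10):
    """Root of a continuous f on [lo, hi] with f(lo), f(hi) of
    opposite sign, by repeated halving of the bracketing interval."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:        # sign change in [lo, mid]
            hi = mid
        else:                        # sign change in [mid, hi]
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)
```

For the model itself, `f` would be the difference function obtained by inserting the solution of (26) into (27), evaluated numerically.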

For any admissible , , where is the solution of (i.e., ), so, for all admissible , ≤ . Hence, in the appendix holds; the sufficient conditions in Theorem 2 are satisfied, and if when , then is optimal. (Hence, if + and is the solution of , then is optimal.)

Define the common limit of the two coefficients as the intensity tends to infinity. One may show that when the intensity decreases, one coefficient increases while the other decreases. When the intensity increases to infinity, the two coefficients converge to a common limit. To prove this convergence, note first that the coefficients are bounded uniformly in the intensity. If they did not converge to a common number when the intensity goes to infinity, the last two terms in (27) and (26) would blow up to infinity for certain large values of the intensity, while the remaining terms would stay bounded, a contradiction. Summing the two equations in (27) and (26) then determines the common limit.

Two very simple models discussed below contain the feature that the owner can influence the chance of discovery, but it is costly to do so. In the first one, the intensity of discoveries is influenced by how much money is put into search at any moment in time; in the second one, it is a costly buildup of expertise that matters for the intensity of discoveries.

#### 6. Model 5

Consider the problem below, where the parameters are given constants, u is the control, and the intensity of jumps depends on u. All fields found are of the same size. We imagine that fields are sold immediately when they are found or that they are produced over a fixed period of time, with a fixed production profile. In both cases, we let the income from a field, discounted back to the time of discovery, be equal to a given constant. The oil owner wants to maximize the expected sum of discounted incomes earned over the horizon.

Let us try the proposal that the current value optimal value function is a constant, an unknown. The current value HJB equation then implies that the maximizing control is a constant as well, and the unknown can be solved for. Trivially, the relevant condition in the Appendix holds, so the sufficient conditions in Theorem 2 are satisfied and this control is optimal. Because the value is positive, in expectation (but not in all runs) a positive sum of discounted incomes has been earned.

Often, it is the case that the best fields are exploited first; hence, the value of the nth find is decreasing in n and may even vanish eventually. Then, when n jumps are imagined to have happened already at time 0, we guess that the current value function is a constant depending on n. The HJB equation then determines the maximizing control. Postulate for the moment that, for some N, the values of finds vanish from the Nth on. Then the constant vanishes from N on as well, and trying the two extreme candidate values in the equation for the preceding index shows that a correct intermediate value exists. This continues backwards and gives the whole sequence. If, for some N, the value of finds is constant from N on, then from N on the constant equals the one found in the stationary case above, and (30) gives the earlier ones. If the values of all finds are positive, letting N tend to infinity, it is easily seen that there exists an infinite decreasing sequence satisfying (30) (compare a similar argument in Model 2).

Still, the condition below trivially holds, the sufficient conditions in Theorem 2 in the Appendix are satisfied, and the constructed controls are optimal. (Uniqueness of the constants follows from optimality.)
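The backward construction of the constants can be sketched generically. Here `F` is a stand-in for the model-specific recursion (30), which we do not reproduce; the toy recursion in the usage note is only for illustration:

```python
def backward_induction(F, a_last, N):
    """Compute a_0, ..., a_N from the terminal value a_N = a_last and
    the backward recursion a_n = F(n, a_{n+1}), as in (30)."""
    a = [0.0] * (N + 1)
    a[N] = a_last
    for n in range(N - 1, -1, -1):
        a[n] = F(n, a[n + 1])
    return a
```

For instance, `backward_induction(lambda n, a: 0.5 * a + 1.0, 0.0, 3)` returns the decreasing sequence `[1.75, 1.5, 1.0, 0.0]`, mirroring the decreasing sequences of constants found in Models 2 and 5.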

#### 7. Model 6

Consider the problem below, where there are no jumps in the state, the parameters are fixed positive numbers, and the intensity of finds depends on the state.

All fields found are of the same size. We again imagine that fields are sold immediately when they are found or that they are produced over a fixed period of time, with a fixed production profile. Let the income from a field, discounted back to the time of discovery, be equal to a given constant. The state is the amount of expertise available for finding new fields, built up over time according to the stated dynamics, where the control is money per unit of time spent on building expertise.

Let us try the proposal that the current value function is affine in the expertise state, with two unknown coefficients. The current value HJB equation then determines the coefficients, and the maximizing control comes out constant. Evidently, the condition below is satisfied, because the state grows at most linearly in time for any admissible control. Hence, the candidate control is optimal.

#### 8. Comparisons

In Models 1, 2, 5, and 6, oil finds are made at stochastic points in time; in Models 3 and 4, it is the price of oil that changes at stochastic points in time. In Model 1, we operate with a constraint on expected terminal wealth, where the state is the oil owner's wealth. Here, for some runs, terminal wealth can be negative and, for other runs, positive. In Model 2, we required nonnegative wealth for all times in all runs. (The results in that model would have been the same under a weaker version of this requirement.) In Model 2, the optimal control comes out as stochastic and not deterministic as in Model 1. Moreover, as a comment in Model 2 says, as a function of time, the optimal control decreases more rapidly in Model 1 than in Model 2. The latter feature stems from the fact that, in Model 2, we enhance future income prospects by not letting wealth decrease too fast, because the jump term (the right-hand side of the jump equation) depends positively on wealth, which is not the case in Model 1.

In Models 3 and 4, the oil price exhibits sudden stochastic jumps. In Model 3, the rate of oil production is constant, and income earned (as well as interest) is placed in a bank after subtraction of consumption. In Model 4, income earned, after subtraction of consumption, is reinvested in the oil firm to increase production. In Model 4, the optimal control is stochastic; it depends on whether the current price is high or low. In Model 3, the control is deterministic, and it depends only on the expectation of the stochastic price. We saw in Model 4 that when the intensity of jumps is very high, we hardly pay attention to what the current price is, because it changes so frequently. When the intensity is large (so the price switches very frequently between 1 and 2), the stochastic path of the production equation is most often very close to the deterministic path obtained by replacing the price by its average, and the corresponding control is the same as that obtained in Model 3 in the matching special case.
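The averaging claim, that a rapidly switching price acts like its mean, can be checked numerically. The sketch below integrates a price toggling between 1 and 2 at a high switching intensity and compares the integral with that of the constant average price 1.5; all parameter values are illustrative:

```python
import random

def integrated_price(lam, T, rng):
    """Integral over [0, T] of a price path that toggles between 1 and 2
    at exponential switching times with intensity lam."""
    p, t, total = 1.0, 0.0, 0.0
    while True:
        gap = rng.expovariate(lam)
        if t + gap >= T:
            total += p * (T - t)    # last stretch, truncated at T
            return total
        total += p * gap
        t += gap
        p = 3.0 - p                 # toggle 1 <-> 2
```

With `lam = 200` and `T = 10`, the integral comes out close to 1.5 · T = 15, as the limiting argument above suggests.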

In the extremely simple Models 5 and 6, the frequency of oil finds is not fixed but influenced by a control. In Model 5, the current frequency (or intensity) is determined by how much money is put into search at that moment in time. In the simplest case considered in Model 5, a find today does not influence the possibility of making equally sized discoveries tomorrow. Then it is not unreasonable that the optimal control is independent of the discount rate but dependent on the fixed value of the finds. (Here, this value actually could be the expected size of a find, in both Model 5 and Model 6; we could have had the sizes of the finds being independently stochastic, with the given value being the expected value of the sizes.) In Model 6, it is the ability to discover oil that is built up over time, so with greater impatience (a higher discount rate), we should expect less willingness to devote money to increasing this ability, and this shows up in the formula for the optimal control.

#### Appendix

Consider the problem , , fixed, is the control, with criterion Here, and , , are random variables, with all and being independent . Given , the intensity of the jump in is . (So for given nonanticipative function , , for , the probability distribution of is .)

Assume that the five given functions , , , , and are continuous and Lipschitz continuous with respect to with a common rank for , independently of , and also that these functions satisfy an inequality of the form for all , , all . Also, is continuous and for all . Define to be admissible if , for all , , and if is nonanticipative and is separately piecewise continuous in each real variable on which it depends. Then, corresponding to , for each , there exists a nonanticipative function (also called admissible), which is piecewise continuous in , satisfying (A.2) and (A.1), for , ( becomes piecewise continuous in each real variable on which it depends). If there are terminal conditions in the problem, and are called admissible if also the terminal conditions are satisfied. For pairs and to be called admissible, we can allow additional restrictions; namely, for given sets , , , it can be required that has to belong to a.s. for .
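The general set-up above, a drift, a jump map, and a possibly state-dependent jump intensity, can be simulated by the standard time-rescaling device: a jump fires when the integrated intensity exceeds a unit-rate exponential clock. All function names below are illustrative placeholders for the paper's data, not its notation:

```python
import random

def simulate_pdp(x0, f, g, lam, T, rng, dt=1e-3):
    """Sketch of one run of a piecewise deterministic process:
    dx/dt = f(t, x) between jumps; a jump to g(t, x) fires when the
    integrated intensity lam(t, x) exceeds an Exp(1) clock."""
    x, t = x0, 0.0
    clock = rng.expovariate(1.0)    # unit-rate exponential threshold
    acc = 0.0                       # integrated intensity since last jump
    n_jumps = 0
    while t < T:
        x += f(t, x) * dt           # Euler step for the drift
        acc += lam(t, x) * dt
        t += dt
        if acc >= clock:            # integrated intensity hit the clock
            x = g(t, x)
            n_jumps += 1
            acc, clock = 0.0, rng.expovariate(1.0)
    return x, n_jumps
```

With a constant intensity this reproduces the Poisson jump times of Model 1; with an intensity depending on the control or on accumulated expertise, it covers the mechanisms of Models 2, 5, and 6.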

*HJB Equations*. The following condition is sufficient for optimality. (Below, the index on the value functions indicates the number of jumps that have already happened.)

Theorem 1. *Assume that there exist functions and controls, defined for each number of past jumps, such that the corresponding value functions satisfy the HJB equation, and assume that each control yields the maximum in this equation. Moreover, assume that a solution of the state equation, with jumps given by (A.2), exists and is generated by these controls. Assume also that, for some constants, the value functions satisfy a linear growth condition in the state for all times. Then, the constructed control is optimal.*

See pages 147 and 155 (and for a proof, see page 168) in [6]. (Formally, we only need to assume that the nonanticipative function is measurable and bounded.)

In case of restrictions of the form above, we must assume that the candidate satisfies these restrictions; in this case, the HJB equation needs to hold only in a neighborhood of each point of the restriction set, with the state in its closure (cl = closure). Then, (A.4) needs to hold only at each such point.

If there are terminal conditions in the problem, replace by for all admissible .

If the horizon is infinite, the scrap value term is replaced accordingly, and if the data of the problem are independent of time (apart from discounting), then a sufficient condition for optimality (catching-up optimality in case infinite values of the criterion appear) is as follows.

Theorem 2. *For each number of past jumps, assume that some functions and controls exist such that the corresponding current value functions satisfy the current value HJB equation, with the controls yielding the maximum. The data functions are postulated to be continuous and Lipschitz continuous in the state with a common rank and to satisfy a growth condition of the form above. Finally,*