Abstract and Applied Analysis

Volume 2013 (2013), Article ID 761306, 7 pages

http://dx.doi.org/10.1155/2013/761306

## Nonzero-Sum Stochastic Differential Game between Controller and Stopper for Jump Diffusions

^{1}School of Mathematical Sciences, Dalian University of Technology, Dalian 116023, China

^{2}School of Science, Dalian Jiaotong University, Dalian 116028, China

Received 5 February 2013; Accepted 7 May 2013

Academic Editor: Ryan Loxton

Copyright © 2013 Yan Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider a nonzero-sum stochastic differential game which involves two players, a controller and a stopper. The controller chooses a control process, and the stopper selects the stopping rule which halts the game. The game is studied in a jump-diffusion setting under a Markov control condition. By a dynamic programming approach, we prove a verification theorem in terms of variational inequality Hamilton-Jacobi-Bellman (VIHJB) equations for the solutions of the game. Furthermore, we apply the verification theorem to characterize a Nash equilibrium of the game in a specific example.

#### 1. Introduction

In this paper we study a nonzero-sum stochastic differential game with two players: a controller and a stopper. The state $X(t)$ of the game evolves according to a stochastic differential equation driven by jump diffusions. The controller chooses a control process $u(t)$ that enters the drift and volatility of $X(t)$, and the stopper decides the duration of the game in the form of a stopping rule $\tau$ for the process $X(t)$. Each player aims to maximize his own expected payoff.

To illustrate the motivation and the applied background of this game, we present a model from finance.

*Example 1. *Let $(\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge 0},P)$ be a filtered probability space, let $B(t)$ be an $n$-dimensional Brownian motion, and let $\tilde N_k(dt,dz_k)$, $k=1,\dots,\ell$, be $\ell$ independent compensated Poisson random measures independent of $B(t)$. For $k=1,\dots,\ell$, $\tilde N_k(dt,dz_k)=N_k(dt,dz_k)-\nu_k(dz_k)\,dt$, where $\nu_k$ is the Lévy measure of a Lévy process $\eta_k$ with jump measure $N_k$ such that $\int_{\mathbb R}\min(1,z_k^2)\,\nu_k(dz_k)<\infty$ for all $k$. $\{\mathcal F_t\}_{t\ge 0}$ is the filtration generated by $B(t)$ and $\eta_k(t)$ (as usual augmented with all the $P$-null sets). We refer to [1, 2] for more information about Lévy processes.

We first define a financial market model as follows. Suppose that there are two investment possibilities:

(1) a risk-free asset (e.g., a bond), with unit price $S_0(t)$ at time $t$ given by
$$dS_0(t)=r(t)S_0(t)\,dt,\qquad S_0(0)=1;\tag{1}$$

(2) a risky asset (e.g., a stock), with unit price $S_1(t)$ at time $t$ given by
$$dS_1(t)=S_1(t^-)\Big[\mu(t)\,dt+\sigma(t)\,dB(t)+\int_{\mathbb R}\gamma(t,z)\,\tilde N(dt,dz)\Big],\qquad S_1(0)=x_1>0,\tag{2}$$

where $r(t)$ is $\mathcal F_t$-adapted with $\int_0^t|r(s)|\,ds<\infty$ a.s., $x_1$ is a fixed given constant, and $\mu(t)$, $\sigma(t)$, $\gamma(t,z)$ are $\mathcal F_t$-predictable processes satisfying $\gamma(t,z)>-1$ for a.a. $t$, $z$, a.s. and
$$\int_0^t\Big\{|\mu(s)|+\sigma^2(s)+\int_{\mathbb R}\gamma^2(s,z)\,\nu(dz)\Big\}\,ds<\infty\quad\text{a.s.}\tag{3}$$

Assume that an investor hires a portfolio manager to manage his wealth from investments. The manager (the controller) can choose a portfolio $\pi(t)$, which represents the proportion of the total wealth $X(t)=X^{(\pi)}(t)$ invested in the stock at time $t$. The investor (the stopper) can halt the wealth process by selecting a stopping time $\tau$. Then the dynamics of the corresponding wealth process $X(t)$ are
$$dX(t)=X(t^-)\Big[\big\{r(t)+(\mu(t)-r(t))\pi(t)\big\}\,dt+\pi(t)\sigma(t)\,dB(t)+\pi(t)\int_{\mathbb R}\gamma(t,z)\,\tilde N(dt,dz)\Big],\qquad X(0)=x>0\tag{4}$$
(see, e.g., [3–5]). We require that $\pi(t)\gamma(t,z)>-1$ for a.a. $t$, $z$, a.s. and that
$$\int_0^t\Big\{|\mu(s)-r(s)||\pi(s)|+\pi^2(s)\sigma^2(s)+\pi^2(s)\int_{\mathbb R}\gamma^2(s,z)\,\nu(dz)\Big\}\,ds<\infty\quad\text{a.s.}\tag{5}$$

At the terminal time $\tau$, the stopper gives the controller a payoff $h(X(\tau))$, where $h$ is a deterministic mapping. Therefore, the controller aims to maximize his expected utility of the following form:
$$J_1^{(\pi,\tau)}(s,x)=E^{s,x}\Big[-\int_0^{\tau}e^{-\delta(s+t)}f\big(X(t),\pi(t)\big)\,dt+e^{-\delta(s+\tau)}U_1\big(h(X(\tau))\big)\Big],\tag{6}$$
where $\delta\ge 0$ is the discounting rate, $f$ is a cost function, and $U_1$ is the controller's utility. We denote by $E^{s,x}$ the expectation with respect to $P$ and the probability law of the process starting at $(s,x)$.
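The controlled wealth dynamics can be explored numerically. Below is a minimal simulation sketch assuming constant coefficients, a single fixed jump size, and a constant portfolio proportion; all of these are illustrative simplifications, not the general predictable processes of the model:

```python
import math
import random

def simulate_wealth(x0=1.0, r=0.03, mu=0.08, sigma=0.2, pi=0.5,
                    jump_rate=0.5, jump_size=-0.1, T=1.0, n=1000, seed=42):
    """Euler scheme for dX = X[(r + (mu - r)*pi) dt + pi*sigma dB
    + pi*jump_size dN~], where dN~ = dN - jump_rate*dt is the compensated
    jump increment.  Constant coefficients and a single fixed jump size
    are illustrative assumptions, not the paper's general setting."""
    rng = random.Random(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        db = rng.gauss(0.0, math.sqrt(dt))              # Brownian increment
        dn = 1 if rng.random() < jump_rate * dt else 0  # Poisson increment
        x += x * ((r + (mu - r) * pi) * dt
                  + pi * sigma * db
                  + pi * jump_size * (dn - jump_rate * dt))
    return x
```

Note that the admissibility condition $\pi\gamma>-1$ (here $0.5\times(-0.1)=-0.05>-1$) keeps each multiplicative wealth increment from driving the simulated wealth below zero.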

Meanwhile, it is the stopper's objective to choose the stopping time $\tau$ such that his own expected utility
$$J_2^{(\pi,\tau)}(s,x)=E^{s,x}\Big[e^{-\delta(s+\tau)}U_2\big(X(\tau)-h(X(\tau))\big)\Big]\tag{7}$$
is maximized, where $U_2$ is the stopper's utility.

As this game is typically nonzero-sum, we seek a Nash equilibrium, namely, a pair $(\pi^*,\tau^*)$ such that
$$J_1^{(\pi,\tau^*)}(s,x)\le J_1^{(\pi^*,\tau^*)}(s,x)\ \text{for all }\pi,\qquad J_2^{(\pi^*,\tau)}(s,x)\le J_2^{(\pi^*,\tau^*)}(s,x)\ \text{for all }\tau.\tag{8}$$
This means that the choice $\pi^*$ is optimal for the controller when the stopper uses $\tau^*$, and vice versa.

The game (4) and (8) is a nonzero-sum stochastic differential game between a controller and a stopper. The existence of a Nash equilibrium shows that, by an appropriate design of the stopping rule $\tau^*$, the stopper can induce the controller to choose the best portfolio he can. Similarly, by applying a suitable portfolio $\pi^*$, the controller can force the stopper to end the employment relationship at a time of the controller's choosing.

There have been significant advances in the research on stochastic differential games of control and stopping. For example, in [6–9] the authors considered zero-sum stochastic differential games of mixed type, in which each of the two players chooses an optimal strategy composed of a control and a stopping time. Under appropriate conditions, they constructed a saddle pair of optimal strategies. For the nonzero-sum case, games of mixed type were discussed in [6, 10]; there the authors presented Nash equilibria, rather than saddle pairs, of strategies [10]. Moreover, the papers [11–17] considered a zero-sum stochastic differential game between a controller and a stopper, where one player (the controller) chooses a control process and the other (the stopper) chooses a stopping time; one player tries to maximize the reward and the other to minimize it. They presented a saddle pair for the game.

In this paper, we study a nonzero-sum stochastic differential game between a controller and a stopper. The two players have different payoffs, and each aims to maximize his own. The game is considered in a jump-diffusion context under a Markov control condition. We prove a verification theorem in terms of VIHJB equations for the game and use it to characterize Nash equilibria.

Our setup and approach are related to [3, 17]. However, their games are different from ours. In [3], the authors studied the stochastic differential games between two controllers. The work in [17] was carried out for a zero-sum stochastic differential game between a controller and a stopper.

The paper is organized as follows: in the next section, we formulate the nonzero-sum stochastic differential game between a controller and a stopper and prove a general verification theorem. In Section 3, we apply the general results obtained in Section 2 to characterize the solutions of a special game. Finally, we conclude this paper in Section 4.

#### 2. A Verification Theorem for Nonzero-Sum Stochastic Differential Game between Controller and Stopper

Suppose the state $Y(t)=Y^{(u)}(t)$ at time $t$ is given by the following stochastic differential equation:
$$dY(t)=b\big(Y(t),u(t)\big)\,dt+\sigma\big(Y(t),u(t)\big)\,dB(t)+\int_{\mathbb R^\ell}\gamma\big(Y(t^-),u(t^-),z\big)\,\tilde N(dt,dz),\qquad Y(0)=y\in\mathbb R^k,\tag{9}$$
where $b:\mathbb R^k\times U\to\mathbb R^k$, $\sigma:\mathbb R^k\times U\to\mathbb R^{k\times n}$, and $\gamma:\mathbb R^k\times U\times\mathbb R^\ell\to\mathbb R^{k\times\ell}$ are given functions, and $U$ denotes a given subset of $\mathbb R^p$.

We regard $u(t)=u(t,\omega)$ as the control process, assumed to be càdlàg, $\mathcal F_t$-adapted, and with values in $U$ for a.a. $t$, a.s. Then $Y(t)=Y^{(u)}(t)$ is a controlled jump diffusion (see [18] for more information about stochastic control of jump diffusions).

Fix an open solvency set $\mathcal S\subset\mathbb R^k$ and let $\tau_{\mathcal S}=\inf\{t>0:Y(t)\notin\mathcal S\}$ be the bankruptcy time, that is, the first time at which the stochastic process $Y(t)$ exits the solvency set $\mathcal S$. Similar optimal control problems, in which the terminal time is governed by a stopping criterion, are considered in [19–21] in the deterministic case.
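For intuition about the bankruptcy time, one can estimate the first exit time of a simulated path from the solvency set. A minimal sketch, assuming a one-dimensional jump-free diffusion and the interval $(0,1)$ as a hypothetical solvency set (all parameters are illustrative assumptions):

```python
import math
import random

def first_exit_time(y0=0.5, drift=0.0, vol=0.3, lo=0.0, hi=1.0,
                    dt=1e-3, t_max=10.0, seed=7):
    """Monte Carlo sketch of the bankruptcy time tau_S: the first time a
    (jump-free, for simplicity) diffusion path leaves the open solvency
    set S = (lo, hi).  One-dimensional Euler scheme with constant drift
    and volatility -- an illustrative simplification."""
    rng = random.Random(seed)
    y, t = y0, 0.0
    while t < t_max:
        if not (lo < y < hi):
            return t           # the path has exited S
        y += drift * dt + vol * rng.gauss(0.0, math.sqrt(dt))
        t += dt
    return t_max               # treat as "no exit before t_max"
```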

Let $f_i:\mathcal S\times U\to\mathbb R$ and $g_i:\mathbb R^k\to\mathbb R$, $i=1,2$, be given functions. Let $\mathcal A$ be a family of admissible controls, contained in the set of controls $u$ such that (9) has a unique strong solution and $E^y[\tau_{\mathcal S}]<\infty$ for all $y\in\mathcal S$, where $E^y$ denotes expectation given that $Y(0)=y$. Denote by $\mathcal T$ the set of all stopping times $\tau\le\tau_{\mathcal S}$. Moreover, we assume that
$$E^y\Big[\int_0^{\tau_{\mathcal S}}\big|f_i\big(Y(t),u(t)\big)\big|\,dt\Big]<\infty\quad\text{for all }u\in\mathcal A,\ i=1,2,\tag{11}$$
$$\text{the family }\big\{g_i^-\big(Y(\tau)\big);\ \tau\in\mathcal T\big\}\text{ is uniformly integrable for all }u\in\mathcal A,\ i=1,2.\tag{12}$$

Then for $u\in\mathcal A$ and $\tau\in\mathcal T$ we define the performance functionals as follows:
$$J_i^{(u,\tau)}(y)=E^y\Big[\int_0^{\tau}f_i\big(Y(t),u(t)\big)\,dt+g_i\big(Y(\tau)\big)\chi_{\{\tau<\infty\}}\Big],\quad i=1,2.\tag{13}$$
We interpret $g_i(Y(\tau))$ as $0$ if $\tau=\infty$. We may regard $J_1^{(u,\tau)}(y)$ as the payoff to the controller, who controls $u$, and $J_2^{(u,\tau)}(y)$ as the payoff to the stopper, who decides $\tau$.

*Definition 2 (Nash equilibrium). * A pair $(u^*,\tau^*)\in\mathcal A\times\mathcal T$ is called a Nash equilibrium for the stochastic differential game (9) and (13) if the following hold:
$$J_1^{(u,\tau^*)}(y)\le J_1^{(u^*,\tau^*)}(y)\quad\text{for all }u\in\mathcal A,\tag{14}$$
$$J_2^{(u^*,\tau)}(y)\le J_2^{(u^*,\tau^*)}(y)\quad\text{for all }\tau\in\mathcal T.\tag{15}$$

Condition (14) states that if the stopper chooses $\tau^*$, it is optimal for the controller to use the control $u^*$. Similarly, condition (15) states that if the controller uses $u^*$, it is optimal for the stopper to choose $\tau^*$. Thus, $(u^*,\tau^*)$ is an equilibrium point in the sense that neither player has a reason to deviate from it unilaterally.

We restrict ourselves to Markov controls; that is, we assume that $u(t)=u_0(Y(t))$ for some function $u_0:\bar{\mathcal S}\to U$. As customary, we do not distinguish between $u$ and $u_0$. Then the control $u$ can simply be identified with a function $u(y)$, $u:\bar{\mathcal S}\to U$.

When the control $u$ is Markovian, the corresponding process $Y(t)=Y^{(u)}(t)$ becomes a Markov process with generator $A^u$ given by
$$A^u\phi(y)=\sum_{i=1}^{k}b_i(y,u)\frac{\partial\phi}{\partial y_i}(y)+\frac12\sum_{i,j=1}^{k}\big(\sigma\sigma^T\big)_{ij}(y,u)\frac{\partial^2\phi}{\partial y_i\,\partial y_j}(y)+\sum_{j=1}^{\ell}\int_{\mathbb R}\Big\{\phi\big(y+\gamma^{(j)}(y,u)z_j\big)-\phi(y)-\nabla\phi(y)\cdot\gamma^{(j)}(y,u)z_j\Big\}\,\nu_j(dz_j),\tag{16}$$
where $\nabla\phi$ is the gradient of $\phi$ and $\gamma^{(j)}$ is the $j$th column of the $k\times\ell$ matrix $\gamma$.
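For a one-dimensional state and a Lévy measure concentrated on a single jump size (a purely illustrative assumption), the generator can be evaluated by finite differences:

```python
def generator_1d(phi, y, b, sigma, gamma, nu_rate, z=1.0, h=1e-5):
    """Numerically evaluate the one-dimensional jump-diffusion generator
       A phi(y) = b*phi'(y) + 0.5*sigma^2*phi''(y)
                  + nu_rate*[phi(y + gamma*z) - phi(y) - phi'(y)*gamma*z],
    assuming (illustratively) that the Levy measure is a point mass of
    intensity nu_rate at the single jump size z."""
    d1 = (phi(y + h) - phi(y - h)) / (2 * h)            # phi'(y)
    d2 = (phi(y + h) - 2 * phi(y) + phi(y - h)) / h**2  # phi''(y)
    jump = nu_rate * (phi(y + gamma * z) - phi(y) - d1 * gamma * z)
    return b * d1 + 0.5 * sigma**2 * d2 + jump
```

For $\phi(y)=y^2$ this can be checked against the closed form $2by+\sigma^2+\nu\,(\gamma z)^2$, since the integral term of the generator applied to a quadratic is exactly $\nu(\gamma z)^2$.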

Now we can state the main result of this section.

Theorem 3 (verification theorem for game (9) and (13)). *Suppose there exist two functions $\phi_i:\bar{\mathcal S}\to\mathbb R$, $i=1,2$, such that* (i) *$\phi_i\in C^1(\mathcal S^\circ)\cap C(\bar{\mathcal S})$, $i=1,2$, where $\bar{\mathcal S}$ is the closure of $\mathcal S$ and $\mathcal S^\circ$ is the interior of $\mathcal S$;* (ii) *$\phi_i\ge g_i$ on $\mathcal S$, $i=1,2$.* *Define the continuation regions*
$$D_i=\{y\in\mathcal S:\phi_i(y)>g_i(y)\},\quad i=1,2.$$
*Suppose $Y(t)$ spends no time on $\partial D_i$ a.s., $i=1,2$, that is,* (iii) *$E^y\big[\int_0^{\tau_{\mathcal S}}\chi_{\partial D_i}(Y(t))\,dt\big]=0$ for all $y\in\mathcal S$, $u\in\mathcal A$, $i=1,2$;* (iv) *$\partial D_i$ is a Lipschitz surface, $i=1,2$;* (v) *$\phi_i\in C^2(\mathcal S\setminus\partial D_i)$ and the second-order derivatives of $\phi_i$ are locally bounded near $\partial D_1$, respectively $\partial D_2$, $i=1,2$;* (vi) *$\phi_i(Y(t))\to g_i\big(Y(\tau_{\mathcal S})\big)\chi_{\{\tau_{\mathcal S}<\infty\}}$ as $t\to\tau_{\mathcal S}^-$, $i=1,2$;* (vii) *there exists $\hat u\in\mathcal A$ such that, for $y\in\mathcal S\setminus\partial D_1$,*
$$A^{\hat u(y)}\phi_1(y)+f_1\big(y,\hat u(y)\big)=\sup_{v\in U}\big\{A^v\phi_1(y)+f_1(y,v)\big\}\le 0,\quad\text{with equality if }y\in D_1,$$
*and $A^{\hat u(y)}\phi_2(y)+f_2\big(y,\hat u(y)\big)\le 0$ for $y\in\mathcal S\setminus\partial D_2$, with equality if $y\in D_2$;* (viii) *$\phi_i=g_i$ on $\partial D_i\cap\mathcal S$, $i=1,2$.* *For $u\in\mathcal A$ define*
$$\tau_{D_2}=\inf\big\{t>0:Y^{(u)}(t)\notin D_2\big\}\wedge\tau_{\mathcal S},$$
*and, in particular,*
$$\tau^*=\inf\big\{t>0:Y^{(\hat u)}(t)\notin D_2\big\}\wedge\tau_{\mathcal S}.$$
*Suppose that* (ix) *the family $\{\phi_i(Y(\tau));\tau\in\mathcal T\}$ is uniformly integrable, for all $y\in\mathcal S$ and $u\in\mathcal A$, $i=1,2$.* *Then $(\hat u,\tau^*)$ is a Nash equilibrium for game (9) and (13), and*
$$\phi_1(y)=\sup_{u\in\mathcal A}J_1^{(u,\tau^*)}(y)=J_1^{(\hat u,\tau^*)}(y),\tag{21}$$
$$\phi_2(y)=\sup_{\tau\in\mathcal T}J_2^{(\hat u,\tau)}(y)=J_2^{(\hat u,\tau^*)}(y).\tag{22}$$

*Proof. *From (i), (iv), and (v), we may assume, by an approximation theorem (see Theorem 2.1 in [18]), that $\phi_i\in C^2(\mathcal S)\cap C(\bar{\mathcal S})$, $i=1,2$.

For a given $u\in\mathcal A$, we define, with $Y=Y^{(u)}$,
In particular, let $\hat u$ be as in (vii). Then,

We first prove that (21) holds. With the notation of (24), for an arbitrary $u\in\mathcal A$, by (vii) and Dynkin's formula for jump diffusions (see Theorem 1.24 in [18]) we have
where . Therefore, by (11), (12), (i), (ii), (viii), and Fatou's lemma,
Since this holds for all $u\in\mathcal A$, we have
In particular, applying Dynkin's formula to $\hat u$, we get an equality, that is,
where and . Hence we deduce that
Since we always have
we conclude by combining (27), (29), and (30) that
which is (21).

Next we prove that (22) holds. Let $\hat u$ be as in (vii). For $\tau\in\mathcal T$, by Dynkin's formula and (vii), we have
where ; .

Passing to the limit gives, by (11), (12), (i), (ii), (viii), and Fatou's lemma,
Inequality (33) holds for all $\tau\in\mathcal T$. Then we have
Similarly, applying the above argument to the pair $(\hat u,\tau^*)$, we get an equality in (34), that is,
We always have
Therefore, combining (34), (35), and (36) we get
which is (22). The proof is completed.

#### 3. An Example

In this section we come back to Example 1 and use Theorem 3 to study the solutions of game (4) and (8). Here and in the following, all processes are assumed to be one-dimensional for simplicity.

To put game (4) and (8) into the framework of Section 2, we define the process $Y(t)=\big(s+t,X(t)\big)$, $t\ge 0$, starting at $y=(s,x)$. Then the performance functionals (6) of the controller and (7) of the stopper can be written in the form (13). In this case the generator $A^\pi$ in (16) has the form
$$A^\pi\phi(s,x)=\frac{\partial\phi}{\partial s}+x\big\{r+(\mu-r)\pi\big\}\frac{\partial\phi}{\partial x}+\frac12x^2\pi^2\sigma^2\frac{\partial^2\phi}{\partial x^2}+\int_{\mathbb R}\Big\{\phi\big(s,x+x\pi\gamma(z)\big)-\phi(s,x)-x\pi\gamma(z)\frac{\partial\phi}{\partial x}(s,x)\Big\}\,\nu(dz).$$

To obtain a possible Nash equilibrium for game (4) and (8), according to Theorem 3, it is necessary to find a subset $D$ of the solvency set and two functions $\phi_1$, $\phi_2$ such that: (i) and , for all ; (ii) and , for all ; (iii) and , for all and ; (iv) there exists $\hat\pi$ such that , for all .

Imposing the first-order condition on the HJB operators for $\phi_1$ and $\phi_2$, we get the following equations for the optimal control process $\hat\pi$: With $\hat\pi$ as in (41), we put

Thus, we may reduce game (4) and (8) to the problem of solving a family of nonlinear variational-integro inequalities. We summarize as follows.

Theorem 4. *Suppose there exist $\hat\pi$ satisfying (41) and two $C^2$-functions $\phi_1$, $\phi_2$ such that* (1) ; (2) , ; (3) and for all ; (4) and for all and for all ; (5) for all , where is given by (42); (6) for all , where is given by (42). *Then the pair $(\hat\pi,\tau^*)$ is a Nash equilibrium of the stochastic differential game (4) and (8), where*

*Moreover, the corresponding equilibrium performances are*
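As a stand-alone illustration of such a first-order condition, consider a hypothetical quadratic criterion $\pi(\mu-r)-\tfrac12\lambda\sigma^2\pi^2$ with no jumps (a toy objective, not the utilities of the example); setting its derivative in $\pi$ to zero gives a linear equation with an explicit solution:

```python
def optimal_pi(mu, r, sigma, lam):
    """First-order condition d/dpi [pi*(mu - r) - 0.5*lam*sigma^2*pi^2] = 0
    gives pi* = (mu - r) / (lam * sigma^2).  Toy quadratic criterion only,
    not the paper's general utility functions."""
    return (mu - r) / (lam * sigma ** 2)

def criterion(pi, mu, r, sigma, lam):
    """The toy objective being maximized over pi."""
    return pi * (mu - r) - 0.5 * lam * sigma ** 2 * pi ** 2

pi_star = optimal_pi(0.08, 0.03, 0.2, 2.0)  # = 0.05 / 0.08 = 0.625
```

Since the criterion is strictly concave in $\pi$, the stationary point is the unique maximizer, which mirrors the role of (41) in singling out the candidate optimal control.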

In this paper we will not discuss general solutions of this family of nonlinear variational integro-inequalities. Instead, we discuss a solution in the special case when

Let us try the functions , , of the form and a continuation region of the form Then we have where

By conditions (1) and (3) in Theorem 4, we get

From conditions (4), (5), and (6) of Theorem 4, we get the candidate for the optimal control as follows:

Let on , . By condition (5) in Theorem 4, we have for . Substituting (52) into , we obtain Similarly, we obtain for by condition (6) in Theorem 4. And we substitute (53) in to get

Therefore, we conclude that where and are the solutions of (56) and (57), respectively.

According to Theorem 4, we use the continuity and differentiability of $\phi_1$ and $\phi_2$ at the boundary of the continuation region to determine the remaining constants, that is,

At the end of this section, we summarize the above results in the following theorem.

Theorem 5. *Let $\hat\pi$ and the free constants be the solutions of equations (56)–(58). Then the pair $(\hat\pi,\tau^*)$ is a Nash equilibrium of game (4) and (8), and the corresponding equilibrium performances are $\phi_1$ and $\phi_2$.*
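The continuity and differentiability (smooth-fit) conditions used above to pin down the free constants can be illustrated on a classical stand-alone stopping problem: the perpetual American put under geometric Brownian motion. The problem and all parameters below are illustrative and unrelated to the paper's jump-diffusion system:

```python
import math

def perpetual_put_smooth_fit(r, sigma, K):
    """Value matching and smooth pasting for a perpetual American put
    under geometric Brownian motion (a classical textbook case, not the
    paper's system).  Ansatz: V(x) = A * x**beta for x > b, and
    V(x) = K - x for x <= b, with exercise boundary b."""
    # beta is the negative root of 0.5*sigma^2*beta*(beta - 1) + r*beta - r = 0
    a, bq, c = 0.5 * sigma ** 2, r - 0.5 * sigma ** 2, -r
    beta = (-bq - math.sqrt(bq ** 2 - 4 * a * c)) / (2 * a)
    # value matching V(b) = K - b and smooth pasting V'(b) = -1 give:
    b = beta * K / (beta - 1)
    A = (K - b) / b ** beta
    return beta, b, A
```

Value matching fixes the constant $A$ and smooth pasting fixes the boundary $b$; checking that the fitted derivative equals $-1$ at $b$ is exactly the differentiability condition of the smooth-fit principle.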

#### 4. Conclusion

A verification theorem has been obtained for a general nonzero-sum stochastic differential game between a controller and a stopper. In the special case of quadratic cost, we used this theorem to characterize a Nash equilibrium. However, the question of the existence and uniqueness of Nash equilibria for the game remains open; it will be considered in our subsequent work.

#### Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grants 61273022 and 11171050.

#### References

- J. Bertoin, *Lévy Processes*, Cambridge University Press, Cambridge, UK, 1996.
- D. Applebaum, *Lévy Processes and Stochastic Calculus*, Cambridge University Press, Cambridge, UK, 2003.
- S. Mataramvura and B. Øksendal, “Risk minimizing portfolios and HJBI equations for stochastic differential games,” *Stochastics*, vol. 80, no. 4, pp. 317–337, 2008.
- R. J. Elliott and T. K. Siu, “On risk minimizing portfolios under a Markovian regime-switching Black-Scholes economy,” *Annals of Operations Research*, vol. 176, pp. 271–291, 2010.
- G. Wang and Z. Wu, “Mean-variance hedging and forward-backward stochastic differential filtering equations,” *Abstract and Applied Analysis*, vol. 2011, Article ID 310910, 20 pages, 2011.
- I. Karatzas and W. Sudderth, “Stochastic games of control and stopping for a linear diffusion,” in *Random Walk, Sequential Analysis and Related Topics: A Festschrift in Honor of Y. S. Chow*, A. Hsiung, Z. L. Ying, and C. H. Zhang, Eds., pp. 100–117, World Scientific Publishers, 2006.
- S. Hamadène, “Mixed zero-sum stochastic differential game and American game options,” *SIAM Journal on Control and Optimization*, vol. 45, no. 2, pp. 496–518, 2006.
- M. K. Ghosh, M. K. S. Rao, and D. Sheetal, “Differential games of mixed type with control and stopping times,” *Nonlinear Differential Equations and Applications*, vol. 16, no. 2, pp. 143–158, 2009.
- M. K. Ghosh and K. S. Mallikarjuna Rao, “Existence of value in stochastic differential games of mixed type,” *Stochastic Analysis and Applications*, vol. 30, no. 5, pp. 895–905, 2012.
- I. Karatzas and Q. Li, “BSDE approach to non-zero-sum stochastic differential games of control and stopping,” in *Stochastic Processes, Finance and Control*, pp. 105–153, World Scientific Publishers, 2012.
- J.-P. Lepeltier, “On a general zero-sum stochastic game with stopping strategy for one player and continuous strategy for the other,” *Probability and Mathematical Statistics*, vol. 6, no. 1, pp. 43–50, 1985.
- I. Karatzas and W. D. Sudderth, “The controller-and-stopper game for a linear diffusion,” *The Annals of Probability*, vol. 29, no. 3, pp. 1111–1127, 2001.
- A. Weerasinghe, “A controller and a stopper game with degenerate variance control,” *Electronic Communications in Probability*, vol. 11, pp. 89–99, 2006.
- I. Karatzas and I.-M. Zamfirescu, “Martingale approach to stochastic differential games of control and stopping,” *The Annals of Probability*, vol. 36, no. 4, pp. 1495–1527, 2008.
- E. Bayraktar, I. Karatzas, and S. Yao, “Optimal stopping for dynamic convex risk measures,” *Illinois Journal of Mathematics*, vol. 54, no. 3, pp. 1025–1067, 2010.
- E. Bayraktar and V. R. Young, “Proving regularity of the minimal probability of ruin via a game of stopping and control,” *Finance and Stochastics*, vol. 15, no. 4, pp. 785–818, 2011.
- F. Baghery, S. Haadem, B. Øksendal, and I. Turpin, “Optimal stopping and stochastic control differential games for jump diffusions,” *Stochastics*, vol. 85, no. 1, pp. 85–97, 2013.
- B. Øksendal and A. Sulem, *Applied Stochastic Control of Jump Diffusions*, Springer, Berlin, Germany, 2nd edition, 2007.
- Q. Lin, R. Loxton, K. L. Teo, and Y. H. Wu, “A new computational method for a class of free terminal time optimal control problems,” *Pacific Journal of Optimization*, vol. 7, no. 1, pp. 63–81, 2011.
- Q. Lin, R. Loxton, K. L. Teo, and Y. H. Wu, “Optimal control computation for nonlinear systems with state-dependent stopping criteria,” *Automatica*, vol. 48, no. 9, pp. 2116–2129, 2012.
- C. Jiang, Q. Lin, C. Yu, K. L. Teo, and G.-R. Duan, “An exact penalty method for free terminal time optimal control problem with continuous inequality constraints,” *Journal of Optimization Theory and Applications*, vol. 154, no. 1, pp. 30–53, 2012.