Mathematical Problems in Engineering
Volume 2014 (2014), Article ID 283418, 9 pages
http://dx.doi.org/10.1155/2014/283418
Research Article

A Variational Formula for Nonzero-Sum Stochastic Differential Games of FBSDEs and Applications

Maoning Tang

Department of Mathematical Sciences, Huzhou University, Zhejiang 313000, China

Received 4 December 2013; Revised 20 March 2014; Accepted 20 March 2014; Published 28 April 2014

Academic Editor: Gerhard-Wilhelm Weber

Copyright © 2014 Maoning Tang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A nonzero-sum stochastic differential game problem is investigated for fully coupled forward-backward stochastic differential equations (FBSDEs in short), where the control domain is not necessarily convex. A variational formula for the cost functional in a given spike perturbation direction of the control processes is derived by means of the Hamiltonian and the associated adjoint systems. As an application, a global stochastic maximum principle of Pontryagin's type for open-loop Nash equilibrium points is established. Finally, an example of a linear quadratic nonzero-sum game problem is presented to illustrate the practical relevance of the theory; the corresponding Nash equilibrium point is characterized by the optimality system. Here the optimality system is a fully coupled FBSDE with double dimensions (DFBSDE in short) which consists of the state equation, the adjoint equation, and the optimality conditions.

1. Introduction

Bismut [1] first investigated linear backward stochastic differential equations (BSDEs in short) as the adjoint equations of forward stochastic systems. The existence and uniqueness of solutions to BSDEs with nonlinear generators under a Lipschitz condition were first proved by Pardoux and Peng [2] in 1990. Since then, the theory of BSDEs has found extensive applications in both mathematical finance and stochastic control.

Forward-backward stochastic differential equations (FBSDEs in short) couple forward stochastic differential equations (SDEs in short) of Itô type with BSDEs of Pardoux-Peng type. The motivation for studying FBSDEs comes mainly from stochastic control theory and practical applications in finance. In stochastic optimal control problems, FBSDEs arise as the Hamiltonian system composed of the optimality conditions, the adjoint equation, and the state equation; see, for example, [3]. In mathematical finance, FBSDEs can be formulated as the price equations of financial assets under model uncertainty and as risk-minimizing strategies for economic management problems; see, for example, [4, 5]. It has become increasingly clear that certain important problems in mathematical economics and mathematical finance, especially optimization problems, can be formulated as forward-backward stochastic systems.

It is well known that in an optimal control problem there is a single control and a single criterion to be optimized. A differential game generalizes this setting to two controls and two criteria, one for each player; each player attempts to control the state of the system so as to achieve his own goal. The optimal control problem for forward-backward stochastic systems has been extensively studied; see, for example, [6–12] and the references therein. To the best of our knowledge, however, very little has been published on the maximum principle of stochastic differential games for forward-backward stochastic systems. In 2012, Hui and Xiao [13] established the maximum principle of differential games for forward-backward stochastic systems under a convex control domain by means of a convex variation method and a duality technique. Also in 2012, Øksendal and Sulem [5] studied optimal control problems with jumps under model uncertainty and partial information; they rewrote such problems as stochastic differential games of FBSDEs and obtained the corresponding stochastic maximum principle and verification theorem under a convex control domain. Different from the decoupled forward-backward stochastic systems studied in [5, 13], Tang [14] studied differential games for fully coupled FBSDEs under a convex control domain and established the local maximum principle and the verification theorem. In this paper, we also consider two-person nonzero-sum differential games of fully coupled forward-backward stochastic systems. Different from [5, 13, 14], the control domain discussed in this paper is not necessarily convex.
The main contribution of this paper is to derive directly a variational formula for the cost functional in a given spike perturbation direction of the control processes, in terms of the Hamiltonian and the associated adjoint system, which is a linear FBSDE; neither the variational systems nor the corresponding Taylor-type expansions of the state process and the cost functional need to be considered. As an application, a global stochastic maximum principle for open-loop Nash equilibrium points is established. Finally, a linear quadratic nonzero-sum game problem is studied to illustrate the established theory; the corresponding Nash equilibrium point is characterized by the optimality system.

The paper is organized as follows. In Section 2, we formulate the problem and give the various assumptions used throughout the paper. In Section 3, we obtain a representation for the difference of the cost functional in terms of the Hamiltonian and adjoint processes. In Section 4, we use the representation of Section 3 to derive a variational formula, that is, a “directional derivative” of the cost functional along a spike variation. Section 5 is devoted to deriving the global stochastic maximum principle from the “directional derivative” formula established in Section 4. A linear quadratic nonzero-sum game problem is studied in Section 6. In Section 7, we conclude the paper.

2. Formulation of the Problem

Let be a complete probability space, on which a -dimensional standard Brownian motion is defined with being its natural filtration, augmented by all -null sets in . Let be a fixed time horizon. Let be a Euclidean space. The inner product in is denoted by , and the norm in is denoted by . We further introduce some other spaces that will be used in the paper. Denote by the set of all -valued -measurable random variables such that . Denote by the set of all -valued -adapted stochastic processes which satisfy . Denote by the set of all -valued -adapted continuous stochastic processes which satisfy . Finally, we define . Then is a Banach space with respect to the norm given by for .
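The displayed definitions are standard in this literature; a typical form, with symbols chosen here for illustration rather than taken from the original, is:

```latex
% Standard function-space definitions (notation assumed): H a Euclidean space.
\begin{align*}
L^2(\Omega,\mathcal{F}_T;H)
  &= \Big\{\xi : \xi \text{ is } \mathcal{F}_T\text{-measurable},\;
      \mathbb{E}|\xi|^2 < \infty\Big\},\\
\mathcal{M}^2(0,T;H)
  &= \Big\{f : f \text{ is } \{\mathcal{F}_t\}_{t\ge0}\text{-adapted},\;
      \mathbb{E}\!\int_0^T |f(t)|^2\,dt < \infty\Big\},\\
\mathcal{S}^2(0,T;H)
  &= \Big\{f : f \text{ is adapted with continuous paths},\;
      \mathbb{E}\Big[\sup_{0\le t\le T}|f(t)|^2\Big] < \infty\Big\}.
\end{align*}
```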

In the following, we specify the two-person nonzero-sum differential game problem for fully coupled forward-backward stochastic systems. More precisely, for , we consider a fully coupled nonlinear FBSDE The processes and in the system (1) are the open-loop control processes which represent the controls of the two players.
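In the framework of Peng and Wu [15], a fully coupled controlled FBSDE of this type typically reads as follows; the coefficient names (b, σ, f, φ) and the exact arguments are assumptions, since the original display is not available:

```latex
\begin{equation*}
\left\{
\begin{aligned}
dx(t) &= b\big(t,x(t),y(t),z(t),u_1(t),u_2(t)\big)\,dt
        + \sigma\big(t,x(t),y(t),z(t),u_1(t),u_2(t)\big)\,dW(t),\\
dy(t) &= -f\big(t,x(t),y(t),z(t),u_1(t),u_2(t)\big)\,dt + z(t)\,dW(t),\\
x(0)  &= a, \qquad y(T) = \phi\big(x(T)\big).
\end{aligned}
\right.
\end{equation*}
```

The system is called fully coupled because the forward coefficients b and σ depend on the backward components (y, z) as well.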

For each one of the two players, there is a cost functional

In the above, , , , , and , , are given Borel measurable mappings. Here and are nonempty Borel subsets. An admissible control process is defined as a -adapted process with values in such that . The set of all admissible control processes is denoted by .
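Cost functionals in this setting usually take the following schematic form, with a running cost, a terminal cost, and an initial cost for each player; the names l_i, Φ_i, γ_i are illustrative assumptions:

```latex
\begin{equation*}
J_i(u_1,u_2) = \mathbb{E}\bigg[\int_0^T l_i\big(t,x(t),y(t),z(t),u_1(t),u_2(t)\big)\,dt
  + \Phi_i\big(x(T)\big) + \gamma_i\big(y(0)\big)\bigg], \qquad i = 1,2.
\end{equation*}
```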

Before giving the basic assumptions on the coefficients throughout this paper, we first introduce some abbreviations. Let be a given full-rank matrix. Denote and , where is the transpose matrix of . For all and , we denote , , .

Assumption 1. (i) The mappings , , , and are continuously differentiable with respect to and the corresponding derivatives are bounded. Moreover, , , and are bounded by , and is bounded by .
(ii) The mappings , , , and satisfy the following monotonicity conditions: or Here , , and are given nonnegative constants with , . Moreover, we have , (resp., ), when (resp., ).
(iii) The mappings , , and are continuously differentiable with respect to . Moreover, is bounded by ; the derivatives of with respect to are bounded by ; and are bounded by and , respectively; and the derivatives of and with respect to and are bounded by and , respectively.
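The monotonicity condition in Assumption 1(ii) is usually stated, in the convention of Peng and Wu [15], by writing λ = (x, y, z) and A(t, λ) = (−G⊤f, Gb, Gσ)(t, λ); this reconstruction is an assumption based on that convention, not a verbatim quote of the original display:

```latex
\begin{equation*}
\big\langle A(t,\lambda) - A(t,\bar{\lambda}),\, \lambda - \bar{\lambda} \big\rangle
  \le -\beta_1 |G\hat{x}|^2 - \beta_2\big(|G^{\top}\hat{y}|^2 + |G^{\top}\hat{z}|^2\big),
\end{equation*}
```

where x̂ = x − x̄, ŷ = y − ȳ, ẑ = z − z̄, together with a matching monotonicity condition on the terminal map φ.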
Under Assumption 1, for any given admissible control , the system (1) admits a unique solution (see [15]). We then call , or if its dependence on the admissible control is clear from the context, the state process corresponding to the control process , and we call an admissible pair.
Then we can pose the following two-person nonzero-sum stochastic differential game problem.

Problem 2. Find an open-loop admissible control such that

Any satisfying the above is called an open-loop Nash equilibrium point of Problem 2. Such an admissible control allows the two players to play individually optimal control strategies simultaneously.

3. Representation for Difference of the Cost Functional

This section is devoted to establishing a representation for the difference of the cost functional of Problem 2 in terms of the Hamiltonian and the adjoint processes.

To simplify our notation, for any admissible control , we write as the corresponding state process. We define the Hamiltonian functions by
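One common convention for the Hamiltonians of a fully coupled FBSDE game, sketched here under assumed coefficient names (b, σ, f, l_i) and adjoint variables (p_i, q_i, k_i), is the following; sign conventions differ across papers, so this should be read as an illustration rather than the paper's exact definition:

```latex
\begin{equation*}
H_i(t,x,y,z,u_1,u_2,p_i,q_i,k_i)
  = \langle q_i, b \rangle + \langle k_i, \sigma \rangle - \langle p_i, f \rangle
    + l_i(t,x,y,z,u_1,u_2), \qquad i = 1,2,
\end{equation*}
```

where b, σ, f are evaluated at (t, x, y, z, u_1, u_2).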

For any admissible pair , we define the corresponding adjoint process as the solution to the following FBSDE: where we have used the shorthand notation Similarly, we can define and .
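Schematically, and with signs and boundary conditions depending on the convention adopted, an adjoint FBSDE of this kind takes a form such as the following (an assumption in the style of [7, 15], with H_{i,x} denoting the partial derivative of H_i in x, and so on):

```latex
\begin{equation*}
\left\{
\begin{aligned}
dp_i(t) &= -H_{i,y}(t)\,dt - H_{i,z}(t)\,dW(t),\\
dq_i(t) &= -H_{i,x}(t)\,dt + k_i(t)\,dW(t),\\
p_i(0)  &= -\gamma_{i,y}\big(y(0)\big), \qquad
q_i(T)  = \Phi_{i,x}\big(x(T)\big) + \phi_x^{\top}\big(x(T)\big)\,p_i(T).
\end{aligned}
\right.
\end{equation*}
```

Note that the adjoint system is itself a fully coupled FBSDE: p_i runs forward and q_i backward.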

Under Assumption 1, it is easy to see that the above adjoint equations admit a unique solution ,  .

Now let be another admissible pair. In the following we give the representation for the difference in terms of the Hamiltonian () and the adjoint process () associated with the admissible pair , as well as other relevant expressions. We state our result as follows.

Theorem 3. Under Assumption 1, one has the following representation for the difference of the cost functional:

Proof. First, from the definition of (see (7)), it is easy to check that
From (1), we know that
Furthermore, recalling (8) and applying Itô's formula to , we deduce that On the other hand, by the definition of the Hamiltonian function (see (7)), we deduce that
Now putting (11) and (13) into (14), we deduce that (10) holds. The proof is complete.

4. A Variational Formula for Stochastic Differential Games

In this section, we obtain a directional derivative of the cost functional at a given admissible control process in a given control process direction. The choice of this direction depends on the convexity of the control domain . If the control domain is convex, a classical way of treating such a problem is the convex perturbation method. More precisely, if is a given admissible control and is an arbitrary admissible control, we can define a convex perturbed admissible control as where is a sufficiently small positive constant. One can then prove that the cost functional is Gâteaux differentiable at in the direction and obtain a local stochastic maximum principle for open-loop Nash equilibrium points; see, for example, [13, 14]. Different from [13, 14], the control domain in the present paper is not necessarily convex, so the convex perturbed control may no longer be admissible and the convex perturbation method cannot be used to obtain the corresponding variational formula and maximum principle. A classical way of treating a nonconvex control domain is the spike variation method. More precisely, let be any given admissible pair with the corresponding adjoint processes ,  . We define the following spike variations: with fixed , sufficiently small positive , and any given admissible control , .
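Concretely, the two perturbations described above can be written as follows (notation assumed: ū_i the given control, v_i the perturbing control, τ the fixed time, ε the small parameter):

```latex
% Convex perturbation (admissible only when the control domain U_i is convex):
\begin{equation*}
u_i^{\varepsilon}(t) = \bar{u}_i(t) + \varepsilon\big(v_i(t) - \bar{u}_i(t)\big),
\qquad 0 < \varepsilon \le 1.
\end{equation*}
% Spike variation (no convexity of U_i required):
\begin{equation*}
u_i^{\varepsilon}(t) =
\begin{cases}
v_i(t), & t \in [\tau, \tau + \varepsilon],\\[2pt]
\bar{u}_i(t), & t \in [0,T] \setminus [\tau, \tau + \varepsilon].
\end{cases}
\end{equation*}
```

The spike variation is always admissible because it takes values in the control domain at every time, which is exactly what makes it suitable for nonconvex domains.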

Now we state the following variational formula for the cost functional (2) associated with the spike variation (16) in a unified way.

Theorem 4. Under Assumption 1, one has the following variational formula for the cost functionals (5) and (6):
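Variational formulas of this type identify the derivative of the cost along the spike variation with an increment of the Hamiltonian evaluated at the spike. Schematically, and as an assumption about the precise form, for player 1:

```latex
\begin{equation*}
\lim_{\varepsilon \downarrow 0}
\frac{J_1(u_1^{\varepsilon}, \bar{u}_2) - J_1(\bar{u}_1, \bar{u}_2)}{\varepsilon}
= \mathbb{E}\Big[
  H_1\big(\tau, \bar{x}(\tau), \bar{y}(\tau), \bar{z}(\tau), v_1(\tau), \bar{u}_2(\tau),
          \bar{p}_1(\tau), \bar{q}_1(\tau), \bar{k}_1(\tau)\big)
- H_1\big(\tau, \bar{x}(\tau), \bar{y}(\tau), \bar{z}(\tau), \bar{u}_1(\tau), \bar{u}_2(\tau),
          \bar{p}_1(\tau), \bar{q}_1(\tau), \bar{k}_1(\tau)\big)\Big]
\end{equation*}
```

for almost every τ, with the symmetric formula for J_2 when player 2's control is spiked.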

Proof. We first prove equality (17). Let be the state process corresponding to the admissible control . Under Assumption 1, by the continuous dependence theory of FBSDEs (see Proposition 3.2 in [16]), we have In Theorem 3, replacing by , we have Combining (19) and Assumption 1, by Taylor expansions of and the dominated convergence theorem, from (20) we conclude that which implies that (17) holds.
Similarly, we can prove that (18) holds. The proof is complete.

5. Stochastic Maximum Principle

In this section, applying the variational formulas (17) and (18), we will state and prove the global maximum principle for the Nash equilibrium points of Problem 2.

Theorem 5. Under Assumption 1, let be a Nash equilibrium point of Problem 2 with the state process . Let be the unique solution of the adjoint equation (8) corresponding to . Then hold for a.e. .
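With cost functionals to be minimized, conditions (22) and (23) are typically pointwise optimality conditions on the Hamiltonians; the minimization convention below is an assumption (some authors state the same condition as a maximization, depending on the sign convention in H_i):

```latex
\begin{align*}
H_1\big(t,\bar{x}(t),\bar{y}(t),\bar{z}(t),\bar{u}_1(t),\bar{u}_2(t),
        \bar{p}_1(t),\bar{q}_1(t),\bar{k}_1(t)\big)
 &= \min_{v_1 \in U_1}
   H_1\big(t,\bar{x}(t),\bar{y}(t),\bar{z}(t),v_1,\bar{u}_2(t),
           \bar{p}_1(t),\bar{q}_1(t),\bar{k}_1(t)\big),\\
H_2\big(t,\bar{x}(t),\bar{y}(t),\bar{z}(t),\bar{u}_1(t),\bar{u}_2(t),
        \bar{p}_2(t),\bar{q}_2(t),\bar{k}_2(t)\big)
 &= \min_{v_2 \in U_2}
   H_2\big(t,\bar{x}(t),\bar{y}(t),\bar{z}(t),\bar{u}_1(t),v_2,
           \bar{p}_2(t),\bar{q}_2(t),\bar{k}_2(t)\big),
\end{align*}
```

for a.e. t ∈ [0, T], P-a.s.; each player's control is optimal against the other player's equilibrium control, which is precisely the open-loop Nash property.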

Proof. Since is a Nash equilibrium point of Problem 2, by (5) we have Using the notation of Theorem 3, for an arbitrary admissible control and , we have which implies that (22) holds.
Similarly, we can prove that (23) holds. The proof is complete.

6. An Example: Linear Quadratic Case

In this section, we work out an example of linear quadratic nonzero-sum differential games to illustrate our stochastic maximum principle. More precisely, consider the following one-dimensional linear fully coupled forward-backward stochastic system: with the quadratic cost functional where , , , , , , , , , and are one-dimensional deterministic bounded measurable functions and , , , and are constants. Also assume , , , , , , , , , where is a positive constant.
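A one-dimensional linear quadratic specification consistent with the description above (linear fully coupled dynamics, no control in the diffusion term, quadratic costs) can be sketched as follows; every coefficient name here is an illustrative assumption, not the paper's actual notation:

```latex
\begin{equation*}
\left\{
\begin{aligned}
dx(t) &= \big(A(t)x(t) + B(t)y(t) + C_1(t)u_1(t) + C_2(t)u_2(t)\big)\,dt + D(t)\,dW(t),\\
dy(t) &= -\big(E(t)x(t) + F(t)y(t) + G(t)z(t)\big)\,dt + z(t)\,dW(t),\\
x(0)  &= a, \qquad y(T) = K\,x(T),
\end{aligned}
\right.
\end{equation*}
\begin{equation*}
J_i(u_1,u_2) = \tfrac{1}{2}\,\mathbb{E}\bigg[\int_0^T
  \big(Q_i(t)\,x(t)^2 + N_i(t)\,u_i(t)^2\big)\,dt + M_i\,x(T)^2\bigg],
\qquad i = 1,2.
\end{equation*}
```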

Under the above assumptions on the coefficients of (26) and (27), it is easy to check that, for any admissible , the state system (26) has a unique solution and the corresponding stochastic differential game problem is well defined. In this case, the corresponding becomes The corresponding adjoint equation associated with an admissible pair becomes It is easy to check that this adjoint equation admits a unique solution ,   .

Suppose that is a Nash equilibrium point. By the maximum principle (see Theorem 5), putting the optimality conditions (22) and (23), the corresponding state equation (26), and the adjoint equations (28) associated with together, we obtain the following optimality system for a Nash equilibrium point:

This system is called a coupled forward-backward stochastic differential equation with double dimensions (DFBSDE for short). Note that the coupling comes from the last two relations (the maximum conditions). We refer to Yu [17] for the general theory of this kind of equation. The 8-tuple , , , , , , , of -adapted processes satisfying the above is called an adapted solution of (30). We now turn to the sufficiency of the existence of a Nash equilibrium point.

Theorem 6. Suppose that is an adapted solution to DFBSDE (30). Then is a Nash equilibrium point.

Proof. Let be an adapted solution to DFBSDE (30). For any admissible control pair , from Theorem 3, we have which implies that Similarly, we can get Therefore, is a Nash equilibrium point.

Proposition 7. Suppose that and is an adapted solution to DFBSDE (30). Then has the following representation:

Proof. From Theorem 6, we deduce that is a Nash equilibrium point. Then from and the optimality conditions (22) and (23), it follows that which imply that The proof is complete.

Remark 8. In summary, DFBSDE (30) completely characterizes the Nash equilibrium point; solving the differential game problem is therefore equivalent to solving DFBSDE (30). Moreover, a candidate equilibrium point is given by (34). We refer the reader to Yu [17] for the solvability theory of DFBSDE (30). Since the linear quadratic control problem is an important class of stochastic control problems, whose theoretical results have significant impact on a wide range of engineering, management, and financial applications, we will focus in future work on financial applications, as the reviewers suggest.

7. Conclusion

In this paper, a two-person nonzero-sum differential game is studied for a fully coupled forward-backward stochastic system in which the control process does not appear in the forward diffusion term but the control domain is not necessarily convex. For this case, we obtain a variational formula for the cost functional. As an application, the maximum principle for open-loop Nash equilibrium points is established. Finally, we work out an example of linear quadratic nonzero-sum differential games to illustrate our stochastic maximum principle. As the reviewers suggest, the system discussed in this paper may be extended to discontinuous systems such as Markov-switching jump-diffusion systems, for which optimal control problems and differential games have extensive applications in industry and finance (see, e.g., [18, 19] and the references therein). Investigations along these lines will be carried out in our future publications.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was partially supported by the National Natural Science Foundation of China (11101140, 11301177), the China Postdoctoral Science Foundation (2011M500721, 2012T50391), and the Natural Science Foundation of Zhejiang Province (nos. Y6110775 and Y6110789).

References

  1. J.-M. Bismut, “Conjugate convex functions in optimal stochastic control,” Journal of Mathematical Analysis and Applications, vol. 44, no. 3, pp. 384–404, 1973.
  2. É. Pardoux and S. G. Peng, “Adapted solution of a backward stochastic differential equation,” Systems & Control Letters, vol. 14, no. 1, pp. 55–61, 1990.
  3. S. Tang, “General linear quadratic optimal stochastic control problems with random coefficients: linear stochastic Hamilton systems and backward stochastic Riccati equations,” SIAM Journal on Control and Optimization, vol. 42, no. 1, pp. 53–75, 2003.
  4. B. Øksendal and A. Sulem, “Maximum principles for optimal control of forward-backward stochastic differential equations with jumps,” SIAM Journal on Control and Optimization, vol. 48, no. 5, pp. 2945–2976, 2009.
  5. B. Øksendal and A. Sulem, “Forward-backward stochastic differential games and stochastic control under model uncertainty,” Journal of Optimization Theory and Applications, 2012.
  6. W. S. Xu, “Stochastic maximum principle for optimal control problem of forward and backward system,” Australian Mathematical Society Journal B, vol. 37, no. 2, pp. 172–185, 1995.
  7. Z. Wu, “Maximum principle for optimal control problem of fully coupled forward-backward stochastic systems,” Systems Science and Mathematical Sciences, vol. 11, no. 3, pp. 249–259, 1998.
  8. J.-T. Shi and Z. Wu, “The maximum principle for fully coupled forward-backward stochastic control system,” Acta Automatica Sinica, vol. 32, no. 2, pp. 161–169, 2006.
  9. Q. Meng, “A maximum principle for optimal control problem of fully coupled forward-backward stochastic systems with partial information,” Science in China A, vol. 52, no. 7, pp. 1579–1588, 2009.
  10. J. Shi and Z. Wu, “Maximum principle for forward-backward stochastic control system with random jumps and applications to finance,” Journal of Systems Science & Complexity, vol. 23, no. 2, pp. 219–231, 2010.
  11. Z. Wu, “A general maximum principle for optimal control of forward-backward stochastic systems,” Automatica, vol. 49, no. 5, pp. 1473–1480, 2013.
  12. J. Yong, “Optimality variational principle for controlled forward-backward stochastic differential equations with mixed initial-terminal conditions,” SIAM Journal on Control and Optimization, vol. 48, no. 6, pp. 4119–4156, 2010.
  13. E. C. M. Hui and H. Xiao, “Maximum principle for differential games of forward-backward stochastic systems with applications,” Journal of Mathematical Analysis and Applications, vol. 386, no. 1, pp. 412–427, 2012.
  14. M. Tang, “Maximum principle for non-zero sum stochastic differential games of fully coupled forward-backward stochastic systems,” preprint.
  15. S. Peng and Z. Wu, “Fully coupled forward-backward stochastic differential equations and applications to optimal control,” SIAM Journal on Control and Optimization, vol. 37, no. 3, pp. 825–843, 1999.
  16. Q. Lin, “Optimal control of coupled forward-backward stochastic system with jumps and related Hamilton-Jacobi-Bellman equations,” http://arxiv.org/abs/1111.4642.
  17. Z. Yu, “Linear-quadratic optimal control and nonzero-sum differential game of forward-backward stochastic system,” Asian Journal of Control, vol. 14, no. 1, pp. 173–185, 2012.
  18. B. Z. Temoçin and G.-W. Weber, “Optimal control of stochastic hybrid system with jumps: a numerical approximation,” Journal of Computational and Applied Mathematics, vol. 259, pp. 443–451, 2014.
  19. N. Azevedo, D. Pinheiro, and G. W. Weber, “Dynamic programming for a Markov-switching jump-diffusion,” Journal of Computational and Applied Mathematics, vol. 267, pp. 1–19, 2014.