Partial Information Stochastic Differential Games for Backward Stochastic Systems Driven by Lévy Processes

Fu Zhang, QingXin Meng, and MaoNing Tang

Mathematical Problems in Engineering, vol. 2020, Article ID 8563790, 7 pages, 2020. https://doi.org/10.1155/2020/8563790

Academic Editor: Wenguang Yu
Received: 20 May 2020
Accepted: 15 Jun 2020
Published: 18 Sep 2020

Abstract

In this paper, we consider a partial information two-person zero-sum stochastic differential game problem, where the system is governed by a backward stochastic differential equation driven by Teugels martingales and an independent Brownian motion. A sufficient condition and a necessary one for the existence of the saddle point for the game are proved. As an application, a linear quadratic stochastic differential game problem is discussed.

1. Introduction

Consider a partial information two-person zero-sum stochastic differential game problem in which the system is governed by a nonlinear backward stochastic differential equation (BSDE), labeled (1) below, with a cost functional labeled (2). The driving noise consists of a standard multidimensional Brownian motion and the Teugels martingales associated with a Lévy process (see Section 2 for more details). The filtration generated by the underlying Brownian motion and the Lévy process serves as the complete information filtration. The meanings of the coefficients appearing in (1) and (2) are given in Assumptions 1 and 2.
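A representative form of such a controlled BSDE and cost functional, written in the spirit of the notation of Nualart and Schoutens [4, 5] (the symbols $y$, $z$, $k$, $W$, $H^{(i)}$, $f$, $l$, and $\Phi$ are illustrative and not necessarily the authors' own), is

\[
\begin{cases}
-\,dy(t) = f\big(t, y(t), z(t), k(t), v_1(t), v_2(t)\big)\,dt - z(t)\,dW(t) - \displaystyle\sum_{i\ge 1} k^{(i)}(t)\,dH^{(i)}(t), & t \in [0,T],\\
\;\;\,y(T) = \xi,
\end{cases}
\]

\[
J\big(v_1(\cdot), v_2(\cdot)\big) = \mathbb{E}\Big[\int_0^T l\big(t, y(t), z(t), k(t), v_1(t), v_2(t)\big)\,dt + \Phi\big(y(0)\big)\Big].
\]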

In the above, the two control processes are open-loop controls, which represent the controls of the two players and take values in two given nonempty convex sets. In many situations the full information is inaccessible to the players, and they can only observe partial information. Accordingly, an admissible control process for each player is defined as a predictable process with respect to a given subfiltration, taking values in the corresponding convex set and satisfying a suitable integrability condition. Here, the subfiltration represents the information available to the controllers at time t. For example, we could choose a delayed information flow, where the delay is a fixed constant.
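For instance, with a fixed information delay $\delta > 0$ (the notation below is illustrative), the subfiltration of delayed information can be written as

\[
\mathcal{E}_t = \mathcal{F}_{(t-\delta)^{+}}, \qquad t \in [0, T],
\]

where $\{\mathcal{F}_t\}$ denotes the complete information filtration.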

The set of all admissible open-loop controls for each player is denoted accordingly, and the product of the two admissible sets is called the set of open-loop admissible controls for the players. We denote the strong solution of (1) corresponding to a pair of admissible controls simply by the state process when its dependence on the controls is clear from the context, and the state process together with the pair of controls is called an admissible quintuplet.

Roughly speaking, for the zero-sum differential game, Player I seeks a control to minimize (2), while Player II seeks a control to maximize (2). An optimal open-loop control is a pair satisfying the saddle point property below for all admissible open-loop controls. We denote this partial information stochastic differential game by Problem (). We refer to such a pair as an open-loop saddle point of Problem (). The corresponding strong solution of (1) is called the saddle state process, and together with the saddle point it forms a saddle quintuplet.
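Explicitly, writing $J$ for the cost functional (2), $(u_1^*, u_2^*)$ for the candidate saddle point, and $\mathcal{A}_1$, $\mathcal{A}_2$ for the two admissible sets (symbols chosen here only for illustration), the defining saddle point inequalities read

\[
J\big(u_1^*(\cdot), v_2(\cdot)\big) \;\le\; J\big(u_1^*(\cdot), u_2^*(\cdot)\big) \;\le\; J\big(v_1(\cdot), u_2^*(\cdot)\big), \qquad \forall\, (v_1(\cdot), v_2(\cdot)) \in \mathcal{A}_1 \times \mathcal{A}_2.
\]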

Game theory has long been an active area of research and a useful tool in many applications, particularly in biology and economics. For partial information two-person zero-sum stochastic differential games, the objective is to find a saddle point when the controllers have less information than the complete information filtration. Recently, An and Øksendal [1] established a maximum principle for stochastic differential games of forward systems with Poisson jumps under the type of partial information considered in this paper. Moreover, we refer to [2, 3] and the references therein for more related results on partial information stochastic differential games.

In 2000, Nualart and Schoutens [4] obtained a martingale representation theorem for a class of Lévy processes through Teugels martingales, where the Teugels martingales are a family of pairwise strongly orthonormal martingales associated with the Lévy process. Later, Nualart and Schoutens [5] proved existence and uniqueness for BSDEs driven by Teugels martingales. These results were further extended to one-dimensional BSDEs driven by Teugels martingales and an independent multidimensional Brownian motion by Bahlali et al. [6].

Since the theory of BSDEs driven by Teugels martingales and an independent Brownian motion has been established, it is natural to apply it to stochastic optimal control problems. The full information stochastic optimal control problem related to Teugels martingales has now been treated in many works. For example, the stochastic linear quadratic problem with Lévy processes was studied by Mitsui and Tabata [7]. Motivated by [7], Meng and Tang [8] studied the general full information stochastic optimal control problem for forward stochastic systems driven by Teugels martingales and an independent multidimensional Brownian motion and proved the corresponding stochastic maximum principle. Furthermore, Tang and Zhang [9] extended [8] to backward stochastic systems and obtained the corresponding stochastic maximum principle. For the partial information case, in 2012, Bahlali et al. [10] studied the stochastic control problem for forward systems and obtained the corresponding stochastic maximum principle. In the meantime, Meng et al. [11] extended [9] to the partial information stochastic optimal control problem of backward stochastic systems and obtained the corresponding optimality conditions. For recent results on stochastic control problems or differential games, the reader is referred to [12–17] and the references therein.

However, to the best of our knowledge, there is little discussion of partial information stochastic differential games for systems driven by Teugels martingales and an independent Brownian motion, which motivates this paper. The main purpose of this paper is to establish partial information necessary and sufficient conditions for optimality for Problem () by using the results in [9]. The results obtained here can be considered a generalization of the stochastic optimal control problem to the two-person zero-sum case. As an application, a two-person zero-sum stochastic differential game of linear backward stochastic differential equations with a quadratic cost criterion under partial information is discussed, and the optimal control is characterized explicitly in terms of the adjoint processes.

The rest of this paper is organized as follows. We introduce useful notation and state the needed assumptions in Section 2. Section 3 is devoted to the sufficient condition of optimality. In Section 4, we establish the necessary condition of optimality. In Section 5, a linear quadratic stochastic differential game problem is solved by applying the theoretical results.

2. Preliminaries and Assumptions

Let the underlying probability space be complete, with a right-continuous filtration generated by a multidimensional standard Brownian motion and a one-dimensional Lévy process. The Lévy process has a characteristic function of Lévy-Khintchine form, and its Lévy measure is assumed to satisfy an exponential integrability condition away from the origin (see [4]); these settings imply that the Lévy process has moments of all orders. We consider the Teugels martingales associated with the Lévy process as in [4]: they are built from the so-called power-jump processes, and the coefficients in their definition correspond to the orthonormalization of polynomials with respect to a measure determined by the Lévy measure and the Gaussian part. The Teugels martingales are pairwise strongly orthogonal, and their predictable quadratic variation processes are deterministic. For more details on Teugels martingales, we invite the reader to consult Nualart and Schoutens [4, 5]. Denoting the predictable sub-σ-field in the usual way, we introduce the following notation used throughout this paper.
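For the reader's convenience, the standard construction of [4] can be sketched as follows (the notation $L$, $Y^{(i)}$, $Z^{(i)}$, $H^{(i)}$, $c_{ij}$, $\sigma^2$, $\nu$ is the usual one in that reference and is reproduced here only for illustration): the power-jump processes and the Teugels martingales are

\[
Y^{(1)}_t = L_t, \qquad Y^{(i)}_t = \sum_{0 < s \le t} (\Delta L_s)^i \ \ (i \ge 2), \qquad
Z^{(i)}_t = Y^{(i)}_t - \mathbb{E}\big[Y^{(i)}_t\big], \qquad
H^{(i)}_t = \sum_{j=1}^{i} c_{ij}\, Z^{(j)}_t,
\]

where the coefficients $c_{ij}$ orthonormalize the polynomials $1, x, x^2, \dots$ in $L^2(\mathbb{R}, \mu)$ with $\mu(dx) = x^2\,\nu(dx) + \sigma^2\,\delta_0(dx)$; with this normalization, $\langle H^{(i)}, H^{(j)}\rangle_t = \delta_{ij}\, t$.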

In the following, we introduce some basic spaces:
(i) : a Hilbert space with norm .
(ii) : the inner product in .
(iii) : the norm of .
(iv) : the inner product in , .
(v) : the norm of .
(vi) : the space of all real-valued sequences satisfying
(vii) : the space of all H-valued sequences satisfying
(viii) : the space of all -valued and -predictable processes satisfying
(ix) : the space of all -valued and -adapted processes satisfying
(x) : the space of all -valued and -adapted càdlàg processes satisfying
(xi) : the space of all -valued random variables on satisfying

The coefficients of the state equation (1) and the cost functional (2) are random mappings defined on the appropriate product spaces.

Throughout this paper, we impose the following basic assumptions on these coefficients.

Assumption 1. The terminal datum belongs to the appropriate square-integrable space, and the random generator of (1) is predictable w.r.t. the time variable, Borel measurable w.r.t. the other variables, and satisfies a suitable integrability condition. For almost all arguments, the generator is Fréchet differentiable w.r.t. the state and control variables, and the corresponding Fréchet derivatives are continuous and uniformly bounded.

Assumption 2. The random running cost is predictable w.r.t. the time variable, Borel measurable w.r.t. the other variables, and, for almost all arguments, Fréchet differentiable w.r.t. the state and control variables with continuous Fréchet derivatives. The random terminal cost is measurable and, for almost all arguments, Fréchet differentiable with continuous Fréchet derivative. Moreover, there exists a constant such that, for almost all arguments, a suitable growth condition on the coefficients and their derivatives holds. Under Assumption 1, we conclude from Lemma 2.3 in [9] that, for each pair of admissible controls, system (1) admits a unique strong solution. Furthermore, by Assumption 2 and a priori estimates for BSDEs driven by Teugels martingales (see Lemma 3.2 in [9]), it is easy to check that the cost functional is well defined and finite. So, Problem () is well defined.
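A typical form of the resulting estimate (stated here only to indicate why the cost functional is finite; the notation $y$, $z$, $k$ matches the illustrative BSDE written out in Section 1) is

\[
\mathbb{E}\Big[\sup_{0 \le t \le T} |y(t)|^2 + \int_0^T \|z(t)\|^2\,dt + \int_0^T \|k(t)\|_{\ell^2}^2\,dt\Big] < \infty,
\]

which, combined with the growth conditions in Assumption 2, yields $|J(v_1(\cdot), v_2(\cdot))| < \infty$ for every pair of admissible controls.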

3. A Partial Information Sufficient Maximum Principle

In this section, we want to study the sufficient maximum principle for Problem ().

In our setting, the Hamiltonian function is defined, as usual, in terms of the generator of the state equation, the running cost, and an adjoint variable.

The adjoint equation associated with system (1) and cost functional (2), corresponding to a given admissible quintuplet, is the forward stochastic differential equation (18), driven by the multidimensional Brownian motion and the Teugels martingales.
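A representative form of the Hamiltonian and the adjoint equation, following the pattern of the adjoint systems in [9] (the notation $H$, $p$, $f$, $l$, $\Phi$, $Y = (y, z, k)$ is illustrative, and the sign conventions may differ from the authors'), is

\[
H\big(t, y, z, k, v_1, v_2, p\big) = \big\langle p,\, f(t, y, z, k, v_1, v_2)\big\rangle + l(t, y, z, k, v_1, v_2),
\]

with the adjoint process $p(\cdot)$ solving a forward equation of the type

\[
dp(t) = H_y\big(t, Y(t), v_1(t), v_2(t), p(t)\big)\,dt + H_z\big(t, Y(t), v_1(t), v_2(t), p(t)\big)\,dW(t)
+ \sum_{i \ge 1} H_{k^{(i)}}\big(t, Y(t), v_1(t), v_2(t), p(t)\big)\,dH^{(i)}(t),
\]

with the initial value $p(0)$ determined by the derivative of the terminal cost $\Phi$ at $y(0)$.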

Under Assumptions 1 and 2, the forward stochastic differential equation (18) has a unique solution by Lemma 2.1 in [9].

We now come to a verification theorem for Problem ().

Theorem 1 (partial information sufficient maximum principle). Let Assumptions 1 and 2 hold. Let an admissible quintuplet be given, and let the adjoint process be the unique strong solution of the corresponding adjoint equation (18). Suppose that the Hamiltonian function satisfies the following conditional maximum principle:
(i) Suppose that, for all t, the Hamiltonian is convex in the relevant state and control variables and the associated terminal functional is convex. Then the saddle point inequality corresponding to the minimizing player (Player I) holds for all of that player's admissible controls.
(ii) Suppose that, for all t, the Hamiltonian is concave in the relevant state and control variables and the associated terminal functional is concave. Then the saddle point inequality corresponding to the maximizing player (Player II) holds for all of that player's admissible controls.
(iii) If both cases (i) and (ii) hold (which implies, in particular, that the Hamiltonian is an affine function of those variables), then the given quintuplet is an open-loop saddle point and the game admits a value (see the identity displayed after the proof).

Proof. (i) We consider the stochastic optimal control problem over Player I's admissible set in which the system is (27) and the cost functional is (28); the problem is to minimize this cost, i.e., to find an admissible control attaining the infimum. For this case, it is easy to check that the Hamiltonian coincides with the one defined above and that, for the given admissible control, the corresponding state process and adjoint process remain unchanged. The optimality condition is the conditional maximum principle assumed above. Thus, from the partial information sufficient maximum principle for optimal control (see Theorem 1 in [9]), we conclude that the given control is optimal for this auxiliary control problem, which is exactly the saddle point inequality for Player I. The proof of (i) is complete.

(ii) This statement can be proved in a similar way as (i).

(iii) If both (i) and (ii) hold, then both saddle point inequalities hold for any admissible controls, i.e., the given quintuplet is an open-loop saddle point. On the other hand, since the lower value of the game never exceeds the upper value, combining this elementary inequality with the two saddle point inequalities shows that the game admits a value. The proof of the theorem is completed.
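When both (i) and (ii) hold, the saddle point property yields the usual identification of the lower and upper values of the game (in the illustrative notation introduced in Section 1):

\[
\inf_{v_1(\cdot) \in \mathcal{A}_1}\, \sup_{v_2(\cdot) \in \mathcal{A}_2} J\big(v_1(\cdot), v_2(\cdot)\big)
\;=\; J\big(u_1^*(\cdot), u_2^*(\cdot)\big)
\;=\; \sup_{v_2(\cdot) \in \mathcal{A}_2}\, \inf_{v_1(\cdot) \in \mathcal{A}_1} J\big(v_1(\cdot), v_2(\cdot)\big).
\]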
If the admissible control processes are adapted to the full filtration, we have the following full information sufficient maximum principle.

Corollary 1. Suppose that the subfiltration of available information coincides with the complete information filtration. Moreover, suppose that, for all t, the corresponding maximum principle holds:
(i) Suppose that, for all t, the Hamiltonian is convex in the relevant state and control variables and the associated terminal functional is convex. Then the saddle point inequality for Player I holds for all of that player's admissible controls.
(ii) Suppose that, for all t, the Hamiltonian is concave in the relevant state and control variables and the associated terminal functional is concave. Then the saddle point inequality for Player II holds for all of that player's admissible controls.
(iii) If both cases (i) and (ii) hold (which implies, in particular, that the Hamiltonian is an affine function of those variables), then the quintuplet is an open-loop saddle point based on the full information flow and the game admits a value.

4. Partial Information Necessary Maximum Principle

In this section, we give a necessary maximum principle for Problem ().

Theorem 2 (a partial information necessary maximum principle). Under Assumptions 1 and 2, let an optimal control of Problem () be given, let the corresponding state process of system (1) be the one associated with this admissible control, and let the unique solution of the adjoint equation (18) correspond to it. Then the necessary optimality condition (45) holds for all admissible control values.

Proof. Since the given control is an optimal open-loop control, it is an open-loop saddle point, i.e., the defining saddle point inequalities hold. In particular, we obtain the one-sided optimality relations (48) and (49).

By (48), the first component of the saddle point can be regarded as an optimal control of the auxiliary optimal control problem whose controlled system is (27) and whose cost functional is (28). For this case, it is easy to check that the Hamiltonian coincides with the one defined above and that, for this optimal control, the corresponding optimal state process and adjoint process are the saddle state process and the solution of (18), respectively. Thus, applying the partial information necessary stochastic maximum principle for optimal control problems (see Theorem 2 in [9]), we obtain (45) for the first player. Similarly, from (49), we obtain (45) for the second player. The proof is complete.
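Condition (45) is typically a pair of conditional variational inequalities of the following type (illustrative notation: $H_{v_1}$ and $H_{v_2}$ denote the Fréchet derivatives of the Hamiltonian in the two control variables, $Y^*$ the saddle state process, and $p^*$ the adjoint process; the directions of the inequalities reflect the minimizing and maximizing roles of the two players):

\[
\big\langle \mathbb{E}\big[H_{v_1}\big(t, Y^*(t), u_1^*(t), u_2^*(t), p^*(t)\big)\,\big|\,\mathcal{E}_t\big],\; v_1 - u_1^*(t)\big\rangle \;\ge\; 0 \quad \forall\, v_1 \in U_1,
\]
\[
\big\langle \mathbb{E}\big[H_{v_2}\big(t, Y^*(t), u_1^*(t), u_2^*(t), p^*(t)\big)\,\big|\,\mathcal{E}_t\big],\; v_2 - u_2^*(t)\big\rangle \;\le\; 0 \quad \forall\, v_2 \in U_2,
\]

for almost every $t$ and almost surely.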

5. Example: Linear Quadratic Problem

In this section, we apply our stochastic maximum principles to a linear quadratic problem under partial information; that is, we consider the game problem for a quadratic cost functional over admissible controls taking values in the given control spaces, where the state process is the solution of a controlled linear backward stochastic system.
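A representative linear-quadratic specification of this type (all coefficient processes $A$, $B_1$, $B_2$, $C$, $D_i$, the weights $Q$, $L$, $N_1$, $N_2$, and the terminal datum $\xi$ below are illustrative and not necessarily the authors' own) is

\[
J\big(v_1(\cdot), v_2(\cdot)\big) = \tfrac12\,\mathbb{E}\Big[\langle Q\,y(0),\, y(0)\rangle
+ \int_0^T \big(\langle L(t)\,y(t),\, y(t)\rangle + \langle N_1(t)\,v_1(t),\, v_1(t)\rangle - \langle N_2(t)\,v_2(t),\, v_2(t)\rangle\big)\,dt\Big],
\]

\[
\begin{cases}
-\,dy(t) = \big[A(t)\,y(t) + B_1(t)\,v_1(t) + B_2(t)\,v_2(t) + C(t)\,z(t) + \textstyle\sum_{i \ge 1} D_i(t)\,k^{(i)}(t)\big]\,dt
- z(t)\,dW(t) - \textstyle\sum_{i \ge 1} k^{(i)}(t)\,dH^{(i)}(t), \quad t \in [0,T],\\[2pt]
\;\;\,y(T) = \xi.
\end{cases}
\]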

This problem is denoted by Problem (LQ). To study it, we need the following assumptions on the coefficients.

Assumption 3. The matrix-valued coefficient functions and the weighting matrix are uniformly bounded. Moreover, the relevant weight is uniformly positive, i.e., bounded from below by some positive constant.

Assumption 4. There is no further constraint imposed on the control processes; the set of all admissible control processes is simply the one defined in Section 1.

In what follows, we utilize the stochastic maximum principles to study the dual representation of the game Problem (LQ). We first define the Hamiltonian function (53); then, the adjoint equation corresponding to an admissible quintuplet can be rewritten as the forward equation (54). Under Assumption 3, for any admissible quintuplet, the adjoint equation (54) has a unique solution in view of Lemma 2.1 in [9].

We now give the dual characterization of the optimal control.

Theorem 3. Let Assumptions 3 and 4 be satisfied. Then, a necessary and sufficient condition for an admissible quintuplet to be a saddle quintuplet of Problem (LQ) is that the control has the representation (55), where the adjoint process appearing in (55) is the unique solution of the adjoint equation (54) corresponding to this admissible quintuplet.

Proof. For the necessary part, let a saddle quintuplet be given; then, by the necessary optimality condition (45) and Assumption 4, we have, a.e. and a.s., the first-order stationarity conditions. Noticing the definition of the Hamiltonian in (53), we obtain the explicit expressions of the controls in terms of the adjoint process.

So, the saddle point has the dual representation (55).

For the sufficient part, let an admissible quintuplet satisfying (55) be given. By the classical technique of completing squares, it follows from (55) that this quintuplet satisfies the optimality condition (19) in Theorem 1. Moreover, from Assumptions 3 and 4, it is easy to check that all the other conditions in Theorem 1 are satisfied. Hence, it is a saddle quintuplet by Theorem 1.
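To illustrate what a representation of type (55) looks like, under the illustrative LQ specification written out at the beginning of this section (and assuming, in the spirit of Assumption 3, that $N_1$ and $N_2$ are uniformly positive definite), stationarity of the Hamiltonian (53) in the control variables after conditioning on $\mathcal{E}_t$ would give

\[
u_1^*(t) = -\,N_1(t)^{-1} B_1(t)^{\top}\, \mathbb{E}\big[p(t)\,\big|\,\mathcal{E}_t\big], \qquad
u_2^*(t) = N_2(t)^{-1} B_2(t)^{\top}\, \mathbb{E}\big[p(t)\,\big|\,\mathcal{E}_t\big],
\]

where $p(\cdot)$ is the solution of the adjoint equation (54); this is only a sketch of the expected form and not necessarily the authors' exact formula (55).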

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 11871121 and 11701369) and Natural Science Foundation of Zhejiang Province for Distinguished Young Scholar (no. LR15A010001).

References

  1. T. T. K. An and B. Øksendal, "Maximum principle for stochastic differential games with partial information," Journal of Optimization Theory and Applications, vol. 139, no. 3, pp. 463–483, 2008.
  2. B. Øksendal and A. Sulem, "Forward-backward stochastic differential games and stochastic control under model uncertainty," Journal of Optimization Theory and Applications, vol. 161, no. 1, pp. 22–55, 2014.
  3. G. Wang and Z. Yu, "A partial information non-zero sum differential game of backward stochastic differential equations with applications," Automatica, vol. 48, no. 2, pp. 342–352, 2012.
  4. D. Nualart and W. Schoutens, "Chaotic and predictable representations for Lévy processes," Stochastic Processes and their Applications, vol. 90, no. 1, pp. 109–122, 2000.
  5. D. Nualart and W. Schoutens, "Backward stochastic differential equations and Feynman-Kac formula for Lévy processes, with applications in finance," Bernoulli, vol. 7, no. 5, pp. 761–776, 2001.
  6. K. Bahlali, M. Eddahbi, and E. Essaky, "BSDE associated with Lévy processes and application to PDIE," Journal of Applied Mathematics and Stochastic Analysis, vol. 16, no. 1, pp. 1–17, 2003.
  7. K.-I. Mitsui and Y. Tabata, "A stochastic linear-quadratic problem with Lévy processes and its application to finance," Stochastic Processes and their Applications, vol. 118, no. 1, pp. 120–152, 2008.
  8. Q. X. Meng and M. N. Tang, "Necessary and sufficient conditions for optimal control of stochastic systems associated with Lévy processes," Science China Information Sciences, vol. 52, no. 11, pp. 1982–1992, 2009.
  9. M. Tang and Q. Zhang, "Optimal variational principle for backward stochastic control systems associated with Lévy processes," Science China Mathematics, vol. 55, no. 4, pp. 745–761, 2012.
  10. K. Bahlali, N. Khelfallah, and B. Mezerdi, "Optimality conditions for partial information stochastic control problems driven by Lévy processes," Systems & Control Letters, vol. 61, no. 11, pp. 1079–1084, 2012.
  11. Q. Meng, F. Zhang, and M. Tang, "Maximum principle for backward stochastic systems associated with Lévy processes under partial information," in Proceedings of the 31st Chinese Control Conference, pp. 1–6, Hefei, China, July 2012.
  12. K. Du and Z. Wu, "Linear-quadratic Stackelberg game for mean-field backward stochastic differential system and application," Mathematical Problems in Engineering, vol. 2019, Article ID 1798585, 17 pages, 2019.
  13. J. Shi, G. Wang, and J. Xiong, "Leader-follower stochastic differential game with asymmetric information and applications," Automatica, vol. 63, pp. 60–73, 2016.
  14. G. Wang, H. Xiao, and J. Xiong, "A kind of LQ non-zero sum differential game of backward stochastic differential equation with asymmetric information," Automatica, vol. 97, pp. 346–352, 2018.
  15. J. Wu and Z. Liu, "Maximum principle for mean-field zero-sum stochastic differential game with partial information and its applications to finance," European Journal of Control, vol. 37, pp. 8–15, 2017.
  16. J. Wu and Z. Liu, "Optimal control of mean-field backward doubly stochastic systems driven by Itô-Lévy processes," International Journal of Control, vol. 93, no. 4, pp. 953–970, 2020.
  17. W. Yu, F. Wang, Y. Huang, and H. Liu, "Social optimal mean field control problem for population growth model," Asian Journal of Control, pp. 1–8, 2019.

Copyright © 2020 Fu Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

