Abstract

In this paper, we study a two-player zero-sum stochastic differential game with regime switching in the framework of forward-backward stochastic differential equations on a finite time horizon. By means of backward stochastic differential equation methods, in particular the notion of stochastic backward semigroups, we prove a dynamic programming principle for both the upper and the lower value functions of the game. Based on the dynamic programming principle, the upper and the lower value functions are shown to be the unique viscosity solutions of the associated upper and lower Hamilton–Jacobi–Bellman–Isaacs equations.

1. Introduction

A differential game concerns multiple players who make decisions in a dynamic system, each according to their own interests and in trade-off with the other players. Stochastic differential games (SDGs) have been studied extensively. Recently, Lv [1] studied two-player zero-sum SDGs in a regime-switching model with an infinite horizon. Compared with the traditional diffusion model, the regime-switching model has two notable advantages. First, the underlying Markov chain can be used to model discrete events with a larger long-term impact on the system. For instance, in financial markets, a finite-state Markov chain readily captures market trends, whereas it is difficult to incorporate such dynamics into a pure diffusion model. Second, when conducting numerical experiments, regime-switching models require very limited data input. In recent years, owing to their tractability and their capacity to characterize a wide variety of random events, regime-switching models have attracted extensive attention [1–3]. In this paper, we introduce a new method, different from that in [1]: we investigate two-player zero-sum SDGs with regime switching on a finite time horizon using backward stochastic differential equation (BSDE) methods.

Pardoux and Peng [4] first introduced nonlinear BSDEs in 1990. The theory of BSDEs was originally developed by Peng [5] for stochastic control theory, and later Hamadène and Lepeltier [6] and Hamadène et al. [7] brought this theory to SDGs. Buckdahn and Li [8] studied a recursive SDG problem and interpreted the relationship between the controlled system and the Hamilton–Jacobi–Bellman–Isaacs (HJBI) equation. The theory of BSDEs has been studied extensively and applied to many fields, such as stochastic control, SDGs, mathematical finance, and partial differential equation theory (see [5–7, 9–11] for details). Readers interested in other topics in game theory are referred to [12–15].

In this paper, let be a fixed probability space on which a d-dimensional Brownian motion and a Markov chain are defined on some sample space , where is the completed Borel -algebra over and is the Wiener measure. Here, we assume that , where . Let denote the filtration generated by the Brownian motion , and let denote the filtration generated by the Markov chain . Assume that and are independent. The Markov chain takes values in a finite state space and is observable, and its generator is given by

where is the transition rate from market regime to , , and , for every .
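Since the displayed formula for the generator is not reproduced in this version, the following is a standard sketch of the generator of a finite-state Markov chain; the symbols $Q=(q_{ij})$, the state space $\mathcal{M}=\{1,\dots,m\}$, and the chain $\alpha$ are our own labels, not necessarily the paper's.

```latex
% Generator of an observable Markov chain \alpha on \mathcal{M}=\{1,\dots,m\}
% (all notation assumed, not taken from the paper):
Q = (q_{ij})_{i,j \in \mathcal{M}}, \qquad
q_{ij} \ge 0 \ \text{for } i \ne j, \qquad
q_{ii} = -\sum_{j \ne i} q_{ij},
% so that, for j \ne i,
\mathbb{P}\big(\alpha_{t+\Delta} = j \,\big|\, \alpha_t = i\big)
  = q_{ij}\,\Delta + o(\Delta), \qquad \Delta \downarrow 0 .
```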

We investigate a two-player zero-sum SDG with regime switching in the framework of BSDEs on a finite time horizon. The dynamics of the SDG are described by the following functional stochastic differential equation (SDE): for ,

where is a fixed finite time horizon, is regarded as the initial state, and . Here, is the value of at time ; and are a pair of -adapted processes, taking values in compact metric spaces and , called the admissible controls of players I and II, respectively. Precise assumptions on the coefficients and are given in the next section.
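As the display itself is missing here, a standard sketch of a controlled regime-switching SDE of the type described reads as follows; the symbols $b$, $\sigma$, $X$, $B$, $\alpha$, $u$, $v$ are assumptions on the notation.

```latex
% Controlled dynamics with regime switching (sketch; notation assumed):
X_s^{t,x;u,v} = x
  + \int_t^s b\big(r,\, X_r^{t,x;u,v},\, \alpha_r,\, u_r,\, v_r\big)\,dr
  + \int_t^s \sigma\big(r,\, X_r^{t,x;u,v},\, \alpha_r,\, u_r,\, v_r\big)\,dB_r,
\qquad s \in [t, T],
% with initial regime \alpha_t = i \in \mathcal{M}.
```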

The cost functional is introduced by a BSDE:

where and were introduced in (2). The above BSDE has a unique solution . For given control processes and , we introduce the associated cost functional:

where is defined by BSDE (3). In the game, player I aims to maximize (4), while player II aims to minimize (4). We define the lower and the upper value functions and , respectively:
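The displays are not reproduced in this version; a standard sketch of the cost-functional BSDE and the two value functions, in the spirit of Buckdahn–Li, is given below. All symbols ($f$, $\Phi$, $Y$, $Z$, $J$, $W$, $U$, the strategies $\beta$, $\gamma$) are our own assumed notation.

```latex
% Cost functional via a BSDE (sketch; notation assumed):
Y_s = \Phi\big(X_T^{t,x;u,v}, \alpha_T\big)
    + \int_s^T f\big(r,\, X_r^{t,x;u,v},\, \alpha_r,\, Y_r,\, Z_r,\, u_r,\, v_r\big)\,dr
    - \int_s^T Z_r\,dB_r, \qquad s \in [t, T],
% associated cost functional:
J(t, x, i; u, v) := Y_t,
% lower and upper value functions (\beta, \gamma nonanticipative strategies
% of players II and I, respectively):
W(t, x, i) := \operatorname*{essinf}_{\beta}\ \operatorname*{esssup}_{u}\,
  J\big(t, x, i;\, u,\, \beta(u)\big), \qquad
U(t, x, i) := \operatorname*{esssup}_{\gamma}\ \operatorname*{essinf}_{v}\,
  J\big(t, x, i;\, \gamma(v),\, v\big).
```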

Precise definitions of and are given in the next section. In the case , we say that the game admits a value. The main objective of this paper is to show that and are, respectively, the unique viscosity solutions of the following lower and upper HJBI equations, each a system of m coupled equations:

associated with

where is defined as

for . If Isaacs' condition holds, i.e., , then (7) and (8) coincide, and the uniqueness of the viscosity solution implies , that is, the game admits a value.
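The Hamiltonians in the lower and upper HJBI equations are not reproduced above; a common convention, sketched here with assumed notation $H$, $H^-$, $H^+$, is:

```latex
% Lower and upper Hamiltonians (sketch; notation assumed):
H^-(t, x, i, y, p, A) = \sup_{u \in U}\,\inf_{v \in V}\, H(t, x, i, y, p, A, u, v),
\qquad
H^+(t, x, i, y, p, A) = \inf_{v \in V}\,\sup_{u \in U}\, H(t, x, i, y, p, A, u, v).
% Isaacs' condition: H^- \equiv H^+, under which the lower and upper
% HJBI systems coincide and the game admits a value.
```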

The paper is organized as follows. In Section 2, we introduce notations and preliminaries needed in what follows. In Section 3, we establish the dynamic programming principle. In Section 4, based on the dynamic programming principle, we prove that the upper and the lower value functions are the unique viscosity solutions of the associated upper and lower HJBI equations.

2. Preliminaries

Let us introduce the following spaces, which will be needed in what follows.

We consider the BSDE with data :

Here is such that, for any , is -progressively measurable. We make the following assumptions:
(A1) There exists a positive constant such that, for all ,
(A2) .
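A standard reading of (A1) and (A2), with assumed notation $g$ for the driver and $C$ for the constant, is the uniform Lipschitz and integrability conditions:

```latex
% (A1) sketch: uniform Lipschitz continuity of the driver g in (y, z):
|g(t, y_1, z_1) - g(t, y_2, z_2)|
  \le C\big(|y_1 - y_2| + |z_1 - z_2|\big),
\qquad t \in [0, T],\ y_1, y_2 \in \mathbb{R},\ z_1, z_2 \in \mathbb{R}^d .
% (A2) sketch: square integrability of the driver at (0, 0), e.g.
\mathbb{E}\int_0^T |g(t, 0, 0)|^2\,dt < \infty .
```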

Lemma 1. Let assumptions (A1) and (A2) hold; then, for any random variable , BSDE (12) has a unique solution:

We give the comparison theorem for solutions of BSDEs.

Lemma 2 (comparison theorem). Let and satisfy (A1) and (A2). Denote by and the solutions of the BSDEs with data and , respectively, and suppose that
(i) .
(ii) .

Then, we have , for all . Moreover, if , then , and in particular, .

With the notation of the above lemma, we assume that, for some satisfying (A1) and (A2), the drivers have the following form:

Then, we have the following lemma.

Lemma 3. The difference of the solutions and of BSDE (12) with data and , respectively, satisfies the following estimate:

where and is the Lipschitz constant in (A1).

For the proofs of the above two lemmas, readers may refer to [11, 16].

We now state the assumptions on the coefficients and . The coefficients and are two given functions, and the mappings and satisfy the following conditions:
(A3)
(i) For every fixed , and are continuous with respect to .
(ii) For any , and , there exists a positive constant such that

From (A3), we obtain the global linear growth condition for and , i.e., the existence of some such that, for all , , , , ,

Under the above assumptions, for any and , control system (2) has a unique solution , and we have the following estimates.

Lemma 4. Under the assumptions on the mappings and , there exists a positive constant such that, for any , and ,

Suppose that the two functions and the terminal cost satisfy the following conditions:
(A4)
(i) For any fixed , is continuous with respect to .
(ii) There exists such that, for all , , , , , and ,
(iii) There is a constant such that, for all , ,

Under the above conditions, (3) has a unique solution , and we have the following estimates.

Lemma 5. For all , , , and , there exists a constant such that

For the proof of this lemma, readers may refer to [17].

Now, we introduce the admissible controls and admissible strategies. Let , be two deterministic times, and .

Definition 1. An admissible control process (resp., ) for player (resp., player ) on is a process taking values in (resp., ), progressively measurable with respect to the filtration , where is the filtration generated by and .
The set of all admissible controls for player (resp., player ) on is denoted by (resp., ).

Definition 2. A nonanticipative strategy for player on is a mapping such that, for any -stopping time and any , if on , then on (with the notation ). In the same way, we define a nonanticipative strategy for player on .
The set of all nonanticipative strategies for player (resp., player ) on is denoted by (resp., ).
Now we give some properties of the lower and the upper value functions and . The following lemma was established in [8], in a slightly different setting; for the proof, readers may refer to [8].

Lemma 6. Under the assumptions (A3) and (A4), for all , the value functions and are deterministic functions.

From (4), (5), and (22), we obtain the following properties of the lower value function .

Lemma 7. Under assumptions (A3) and (A4), for all and , we have:
(i)
(ii) is -Hölder continuous with respect to :
(iii)

The same properties hold true for the function .

3. Dynamic Programming Principles

The dynamic programming principle is one of the principal and most commonly used tools for solving optimal control problems. In this section, we present the dynamic programming principle for a two-player zero-sum SDG with regime switching in the framework of BSDEs on a finite time horizon; it will be used in the next section.

We first introduce the backward stochastic semigroup. For a given initial state , a positive number , admissible control processes and , and a real-valued random variable , we define

where is the solution of the following BSDE with terminal time :

and is the solution of SDE (2). By the uniqueness of the solution of the BSDE, we observe that, for the solution of BSDE (3), we have
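Since the displays are elided, a standard sketch of the backward stochastic semigroup and its consistency property, with assumed notation $G^{t,x,i;u,v}_{s,\cdot}$, $\tilde{Y}$, $\Phi$, is:

```latex
% Backward stochastic semigroup (sketch; notation assumed):
G^{t,x,i;u,v}_{s,\, t+\delta}[\eta] := \tilde{Y}_s, \qquad s \in [t, t+\delta],
% where (\tilde{Y}, \tilde{Z}) solves the BSDE on [t, t+\delta] with
% terminal value \eta and the same driver f along X^{t,x;u,v}, \alpha.
% Consistency with BSDE (3): for its solution Y,
G^{t,x,i;u,v}_{s,\, T}\big[\Phi(X_T, \alpha_T)\big]
  = G^{t,x,i;u,v}_{s,\, t+\delta}\big[Y_{t+\delta}\big],
  \qquad s \in [t, t+\delta].
```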

We now introduce the dynamic programming principle for the value functions of SDGs with regime switching.

Proposition 1. Under the assumptions (A3) and (A4), the following dynamic programming principle holds: for all , , ,
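The statement of the principle itself is not reproduced above; in terms of the semigroup notation sketched earlier (all symbols assumed), the lower-value version typically reads:

```latex
% Dynamic programming principle for the lower value function
% (sketch; notation assumed, \beta a nonanticipative strategy of player II):
W(t, x, i) = \operatorname*{essinf}_{\beta}\ \operatorname*{esssup}_{u}\,
  G^{t,x,i;\, u,\, \beta(u)}_{t,\, t+\delta}
  \Big[ W\big(t+\delta,\; X^{t,x;u,\beta(u)}_{t+\delta},\; \alpha_{t+\delta}\big) \Big],
% with the analogous identity (esssup over strategies of player I,
% essinf over controls of player II) for the upper value function U.
```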

Put

We prove that coincides with in the following steps.

Step 1. Let be arbitrarily fixed. Then, given a , we define the restriction of to as follows:

where extends to an element of . Obviously, . From the nonanticipative property of , we deduce that is independent of the particular choice of . Thus, from the definition of ,

and we use the notation for some sequence such that

Let and set . Construct . Then forms an -partition, and . Moreover, from the nonanticipativity of , we have . According to the existence and uniqueness of solutions of the BSDEs, it follows that

for . Hence,

We now focus on the interval . Because does not depend on , we can define , for any . From , we know that belongs to . Thus, from the definition of , for any ,

From Lemmas 5 and 7, there exists a constant such that, for any , ,

We can show by approximation that

To estimate the right-hand side of the latter inequality, we note that there exists some sequence such that

Let and set . Construct . Then forms an -partition; moreover, . Therefore, from the nonanticipativity of , we have , and from the definition of , we know that . According to the existence and uniqueness of our BSDE, it follows that

Therefore,

where . From (35) and (41),

Since was chosen arbitrarily, (42) holds for all . Thus,

Then, letting , we get .

Step 2. We now deal with the other case: . From the definition of , we have

for some such that

For any , we put , , and . Then forms an -partition; moreover, . According to the existence and uniqueness of our BSDE, we conclude that

for all . Next,

for all .

We now focus on the interval . From the definition of , we deduce that, for any , there exists such that

Now consider a decomposition of , namely, such that , for each . Take any fixed and define . Clearly, we always have

everywhere on , for each . Moreover, for every and , there exists some such that (48) holds, and clearly,

Now we can define the new strategy , where . Obviously, .

Next we show that is nonanticipating. Indeed, let be an -stopping time and be such that on . Decompose into such that and . We have on , since on . On the other hand, on ; and on , we have . Thus, from the definition, on and on . This yields on , from which it follows that .

Fix arbitrarily and decompose into , . Then, from (37), (47), (i), and (49), we have

From (37), (48), (ii), and (49), it follows that

for any . Therefore, we obtain

Let , and we have .

The proof is complete.

4. Viscosity Solution of Isaacs’ Equation: Existence and Uniqueness Theorem

In this section, based on the dynamic programming principle, we prove that the lower value function introduced by (5) is a viscosity solution of (7), while the upper value function defined by (6) is a viscosity solution of (8). Moreover, if Isaacs' condition holds, i.e., , then (7) and (8) coincide, and the uniqueness of the viscosity solution implies , that is, the game admits a value.

We first recall the definition of a viscosity solution of (7); the one for (8) can be defined in a similar way.

Definition 3. A continuous function is said to be a viscosity subsolution (resp., supersolution) of (7) if (resp., ) for all , and if, for all functions and such that attains its local maximum (resp., minimum) value zero at , we have

A function is called a viscosity solution if it is both a viscosity subsolution and a viscosity supersolution.
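The displayed inequalities are elided; for regime-switching HJB-type systems, the subsolution inequality at a test point commonly takes the following form, sketched here entirely in our own assumed notation ($\varphi$ the test function, $H^-$ the lower Hamiltonian, $q_{ij}$ the transition rates):

```latex
% Viscosity subsolution inequality at (t_0, x_0) in regime i
% (sketch; notation and sign convention assumed):
\frac{\partial \varphi}{\partial t}(t_0, x_0)
  + H^-\big(t_0, x_0, i,\, W(t_0, x_0, i),\, D\varphi(t_0, x_0),\, D^2\varphi(t_0, x_0)\big)
  + \sum_{j \ne i} q_{ij}\big[ W(t_0, x_0, j) - W(t_0, x_0, i) \big] \;\ge\; 0,
% with the reversed inequality for a supersolution.
```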

Remark 1. denotes the set of real-valued functions that are continuously differentiable up to the third order and whose derivatives of orders 1 to 3 are bounded.
In the following, we prove that the lower value function is a viscosity solution of (7). We focus only on ; the results for the upper value function follow by a similar procedure.

Theorem 1. Under the assumptions (A3) and (A4), the lower value function is a viscosity solution of (7).

First we prove some auxiliary lemmas. To abbreviate notation, for some arbitrarily chosen but fixed ,

where and , and we consider the following BSDE defined on the interval :

where the process was introduced in (2), is regarded as the initial state, is the value of at time , and .

We can characterize the solution process as follows.

Lemma 8. For any , , , define by , and ; then, for , we have the following relationship:

Proof. is defined with the help of the solution of the BSDE:

by the following formula:

Thus, we only need to prove . We have

by applying Dynkin's formula to . And for ,

Therefore, for any , we get the desired result.
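The version of Dynkin's formula used here for functions of the Markov chain is standard; sketched with assumed notation $\varphi$, $Q$, $q_{ij}$, it reads:

```latex
% Dynkin's formula for a finite-state Markov chain \alpha with generator Q
% (sketch; notation assumed): for \varphi : \mathcal{M} \to \mathbb{R},
\varphi(\alpha_s) - \varphi(\alpha_t) - \int_t^s (Q\varphi)(\alpha_r)\,dr
  \quad \text{is a martingale on } [t, T],
% where
(Q\varphi)(i) = \sum_{j \ne i} q_{ij}\big[\varphi(j) - \varphi(i)\big].
```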

We consider the following simple BSDE in which is replaced by its deterministic initial value :

Then, we have the following lemma.

Lemma 9. There exists a constant , independent of the control processes , and of , such that, for every , ,

Proof. From Lemma 4, there exists a constant such that

Combined with

we have

From (56) and (62), using Lemma 2, set

It is easy to see that is Lipschitz with respect to , and for , , . Thus,

where . Therefore,

The proof is complete.

Lemma 10. Let be the solution of the following ordinary differential equation:

where

Then, .

Proof. We first introduce the function

where . And we consider the following equation: for ,

Since is Lipschitz in , for every , there exists a unique solution to (74). Then, for every ,

In fact, from Lemma 2 and the definition of , for every , we have

Moreover, there exists a measurable function such that

Let , and observe that , and

Thus, from the uniqueness of the solution of the BSDE, it follows that and , for every . Then, for all ,

Then, since , a similar argument gives

Here we use the fact that (70) can be regarded as a BSDE with solution . The proof is complete.

Lemma 11. For all , , we have

where the constant is independent of the control processes , and of .

Proof. Since has linear growth in , uniformly in , we get from Lemma 2, for some constant independent of and the controls , ,

Moreover, from equation (62),

and since

we get

Therefore,

Proof of Theorem 1. (i) It is easy to see that . We first prove that is a viscosity supersolution. For fixed , without loss of generality, we suppose that . According to Proposition 1,

where , and from and the monotonicity property of , we have

From Lemma 8,

Thus, from Lemma 9, we obtain

Then, since

we have

and Lemma 10 implies , where is the unique solution of (70). Thus,

and from the definition of , we see that is a viscosity supersolution of (7).

(ii) Now we prove that is a viscosity subsolution. For fixed , without loss of generality, we suppose that . We must prove that

We suppose that this is not true. Then, there exists some such that

and we can find a measurable function such that

Since is uniformly continuous on , there exists some such that, for all and ,

Moreover, due to Proposition 1, for all ,

and, similarly to (i), from and the monotonicity property of , we have

Therefore, from Lemma 8,

Thus,

Here, by setting , we identify as an element of . Given any > 0, we can choose such that . From Lemma 9, we get

Moreover, from (62),

From the Lipschitz property of in , (99), and Lemma 11, we get

Letting and then , we obtain , which gives a contradiction. Then,

and from the definition of , we see that is a viscosity subsolution of (7). Therefore, combining the two steps, is a viscosity solution of (7).

Remark 2. By the method of Theorem 1, we can similarly prove that U is a viscosity solution of (8).

We now study the uniqueness of the viscosity solution of Isaacs' equation (7):

where

for . The functions , and are still supposed to satisfy (A3) and (A4), respectively.

We will prove the uniqueness for (108) in the following space of continuous functions:

This growth condition was introduced in [18], where it was shown that this kind of growth condition is optimal for uniqueness and cannot, in general, be weakened. We adapt the idea developed in [18] to Isaacs' equation (108) to prove the uniqueness of the viscosity solution in . We focus only on equation (7) in ; the results for equation (8) follow by a similar procedure. For the proof of the uniqueness theorem, we need some auxiliary lemmas. Denoting by a Lipschitz constant of , we have the following.

Lemma 12. Let be a viscosity subsolution and be a viscosity supersolution of (108). Then, the function is a viscosity subsolution of the equation

The proof of this lemma follows directly from that of Lemma 3.7 in [18], with only minor modifications.

Lemma 13. Let be a viscosity subsolution and be a viscosity supersolution of (108). Then, the function is a viscosity subsolution of the equation

Proof. For , without loss of generality, we suppose that , i.e.,

where . Hence,

Obviously, , for all . Moreover, from (114), for all functions ,

On the other hand, for any such that attains its local maximum value zero at , the function attains its local maximum value zero at . Since is a viscosity subsolution of (111), we have

Consequently,

This means that is a viscosity subsolution of equation (112).

Similar to the proof of Theorem 6.1 in [8], we can prove that , i.e., . Then, we have the uniqueness theorem.

Theorem 2. Under the assumptions (A3) and (A4), the viscosity solution of (108) is unique in .

Remark 3. Since the lower value function has at most linear growth, it belongs to , and it follows that is the unique viscosity solution of (108) in . Similarly, the upper value function is the unique viscosity solution of (8) in .

Remark 4. If Isaacs' condition holds, that is, if for any , , then (8) and (108) coincide, i.e., (7) and (8) coincide, and from the uniqueness in of the viscosity solution it follows that , which means that the associated SDG has a value.

Data Availability

No data were used in this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the National Natural Science Foundation of China (nos. 12071292 and 11871121) and the Natural Science Foundation of Zhejiang Province (no. LY21A010001).