Abstract

This paper addresses a version of the linear quadratic control problem for mean-field stochastic differential equations with deterministic coefficients on time scales, which includes the discrete-time and continuous-time settings as special cases. Two coupled Riccati equations on time scales are derived, and the optimal control can be expressed as a linear state feedback. Furthermore, a numerical example is given.

1. Introduction

The linear quadratic control problem is one of the most important issues in optimal control theory. The mean-field linear quadratic optimal control problem has also received much attention [1, 2], and it has a wide range of applications in engineering and finance [3, 4]. To date, the mean-field linear quadratic control problem is well understood from both the continuous-time and the discrete-time points of view. In this paper, the mean-field linear quadratic control problem is studied in the time-scales setting.

Time scales were first introduced by Hilger [5] in 1988 in order to unite differential and difference equations into a general framework. Recently, time-scales theory has been studied extensively in many works [6–14]. It is well known that optimal control problems on time scales form an important field for both theory and applications. Since the calculus of variations on time scales was studied by Bohner [15], results on related topics and their applications have accumulated steadily. The existence of optimal controls for dynamic systems on time scales was discussed in [16–18]. Subsequently, maximum principles, which specify necessary conditions for optimality, were studied in several works [19, 20]. In addition, Bellman dynamic programming on time scales for deterministic optimal control problems was considered in [21]. At the same time, some results were obtained for linear quadratic control problems for deterministic linear systems on time scales in [22, 23]. In [24], the authors developed linear quadratic control problems for stochastic linear systems on time scales. To the best of our knowledge, optimal control problems for mean-field systems on time scales have not yet been established.

We are interested in the mean-field stochastic linear quadratic control problem on time scales (MF-SLQ for short). To deal with the well-posedness of the state equation on time scales, we use an iteration method similar to that of [25]. As in the continuous and discrete cases, we can also derive the associated Riccati equations on time scales (see [26, 27] for the continuous and discrete cases), and the optimal control can be expressed as a linear state feedback through the solutions of the coupled Riccati equations.

The organization of this paper is as follows. In Section 2, we give some preliminaries on time-scales theory and state the MF-SLQ problem. In Section 3, we establish the well-posedness of the state equation on time scales and derive the feedback representation of the optimal control via the associated Riccati equations on time scales. Finally, an example is presented.

2. Preliminaries

A time scale is a nonempty closed subset of the set of real numbers and we denote . In this paper, we always suppose that is bounded. The forward jump operator and the backward jump operator are, respectively, defined by

(supplemented by and , where denotes the empty set). If (respectively , , and ), the point is called right-dense (respectively right-scattered, left-dense, and left-scattered). Moreover, a point is called isolated if it is both left-scattered and right-scattered. For a function , we denote and . The graininess function is defined as follows:

We now present some basic concepts and properties about time scales (see [10, 11]).
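These pointwise notions are straightforward to compute when the time scale is a finite set of points. As an illustration only (the time scale below is hypothetical and not part of the paper's development), the following Python sketch evaluates the forward jump operator, the backward jump operator, and the graininess function:

```python
import bisect

def sigma(T, t):
    """Forward jump operator: the smallest point of T strictly greater
    than t, or t itself if t is the maximum of T."""
    i = bisect.bisect_right(T, t)
    return T[i] if i < len(T) else t

def rho(T, t):
    """Backward jump operator: the largest point of T strictly less
    than t, or t itself if t is the minimum of T."""
    i = bisect.bisect_left(T, t)
    return T[i - 1] if i > 0 else t

def mu(T, t):
    """Graininess function: mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

# Hypothetical time scale consisting of isolated points (sorted list)
T = [0.0, 0.5, 1.0, 2.0, 3.0]
print(sigma(T, 1.0))  # 2.0: the point 1.0 is right-scattered
print(rho(T, 1.0))    # 0.5: the point 1.0 is left-scattered
print(mu(T, 1.0))     # 1.0
```

Every point of this particular time scale is isolated, so the graininess is positive everywhere except at the maximum, where sigma(t) = t by convention.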

Definition 1. Let be a function on a time scale . Then is called a right-dense continuous function if it is continuous at every right-dense point and has finite left-sided limits at every left-dense point. Similarly, is called a left-dense continuous function if it is continuous at every left-dense point and has finite right-sided limits at every right-dense point. If is both right-dense continuous and left-dense continuous, then is called a continuous function.

Remark 1. If a function is right-dense continuous, then has an antiderivative .
Define the set as follows:

Definition 2. Let : be a function and . If for all , there exists a neighborhood of such that

then we call the derivative of at .

Remark 2. If the functions and are differentiable at , then the product is also differentiable at , and the product rule is given by

In this paper, we adopt the stochastic integral defined by Bohner et al. [25]. Let be a complete probability space with an increasing and continuous filtration . We define to be the set of all -adapted, -valued measurable processes such that .
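At a right-scattered point, the delta derivative reduces to the difference quotient (f(sigma(t)) - f(t)) / mu(t). The following Python sketch (with a hypothetical purely discrete time scale and arbitrary test functions, chosen here only for illustration) verifies the product rule of Remark 2 numerically at such a point:

```python
# A purely discrete (all points isolated) hypothetical time scale
T = [0.0, 1.0, 1.5, 3.0]

def sigma(t):
    """Forward jump operator on the finite time scale T."""
    later = [s for s in T if s > t]
    return min(later) if later else t

def delta(f, t):
    """Delta derivative at a right-scattered point:
    f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t)."""
    mu = sigma(t) - t
    assert mu > 0, "this difference-quotient form needs a right-scattered point"
    return (f(sigma(t)) - f(t)) / mu

f = lambda t: t ** 2
g = lambda t: 3 * t + 1
h = lambda t: f(t) * g(t)

t = 1.0
lhs = delta(h, t)                                        # (fg)^Delta(t)
rhs = delta(f, t) * g(t) + f(sigma(t)) * delta(g, t)     # product rule
print(abs(lhs - rhs) < 1e-12)   # True
```

Note the asymmetry of the rule: the second term evaluates f at sigma(t) rather than at t, which is exactly the correction forced by the gap between t and its successor.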
A Brownian motion indexed by a time scale was defined by Grow and Sanyal [13]. Although Brownian motion on time scales is very similar to that on continuous time, there are also some differences between them. For example, the quadratic variation of a Brownian motion on time scales (see [14]) is still an increasing process, but it is no longer deterministic. In fact, , where denotes the Lebesgue measure.
Next, we give the definition of the stochastic integral and its properties.

Definition 3 (see [25]). The random process is stochastically integrable on if the corresponding is integrable, and the integral value of is defined as

where

and the Brownian motion on the right-hand side of (7) is indexed by continuous time.
We also have the following properties.
Let and . Then,

where the integral with respect to the quadratic variation of the Brownian motion is defined as a Stieltjes integral, .

Notation 1. The following notation will be used:

: the transpose of any vector or matrix M
: =  for any matrix or vector 
: M is a positive semidefinite matrix
: the space of all symmetric matrices
: the quadratic covariation process of and 
: the space of bounded, -Lebesgue integrable, -valued functions on 
: the family of all -valued continuous functions defined on that are differentiable in 

Finally, we introduce our MF-SLQ problem. Consider the following stochastic -differential equation:

where the coefficients are all deterministic matrix-valued functions and . The cost functional is

where are symmetric matrices and are given deterministic matrix-valued functions.

Problem (MF-SLQ). For any given initial state , find a such that the cost functional is minimized. Such a is called an optimal control of the MF-SLQ problem, and the corresponding is called an optimal state process.

3. Main Results

First, we introduce the following assumptions which are necessary for the proofs of our main results.

(H1) Assume that

(H2) Assume that

and for some ,

Remark 3. Assumption (H1) guarantees the existence and uniqueness of the solution of the mean-field stochastic linear system (10). Under Assumptions (H1) and (H2), we can establish two coupled Riccati equations that yield the feedback control.
Now, we establish the well-posedness of the state equation (10) by the iteration method, in a way very similar to that of [25].

Theorem 1. Let be given and let be a standard Brownian motion. Suppose that (H1) holds; then system (10) has a unique solution for any .

Proof. For the existence, we adopt the iteration method and define

Let , and we claim that

where is a generic constant and is the generalized monomial defined in [28]. When , we obtain

Suppose that inequality (14) holds for ; then

This proves the claim.
Similarly, we have

By a martingale inequality and inequality (18) (see [25] for details), one has

where . Note that a simple probability inequality follows from Markov's inequality: , where and is a random variable. Using this probability inequality with and , we obtain

By the Borel–Cantelli lemma, this implies that

where "i.o." is the abbreviation of "infinitely often". Consequently, converges uniformly. Letting , we have

For the uniqueness, assume that and are both solutions. Then,

It follows that

By Gronwall's inequality [29], we obtain . Thus,
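On a purely discrete time scale, the Picard iteration used in the proof can be carried out explicitly. The following Python sketch (hypothetical scalar coefficients, a hypothetical isolated-point time scale, and the mean-field term E[X] replaced by a Monte Carlo average over simulated paths) illustrates the scheme for a scalar mean-field equation of the type (10); on a finite discrete time scale the iteration actually reaches its fixed point after as many sweeps as there are time steps:

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.array([0.0, 0.25, 0.5, 1.0, 2.0])   # hypothetical isolated-point time scale
mu = np.diff(T)                             # graininess at each point
A, Abar, C, Cbar = -1.0, 0.5, 0.3, 0.1      # hypothetical scalar coefficients
x0, M = 1.0, 5000                           # initial state, number of Monte Carlo paths

# Brownian increments on the time scale: Var(Delta W_k) = mu_k
dW = rng.normal(0.0, np.sqrt(mu), size=(M, len(mu)))

def picard_step(X):
    """One Picard iterate: the new trajectory is accumulated using drift and
    diffusion evaluated along the previous iterate X."""
    Xn = np.empty_like(X)
    Xn[:, 0] = x0
    for k in range(len(mu)):
        EX = X[:, k].mean()                          # Monte Carlo proxy for E[X(t_k)]
        drift = A * X[:, k] + Abar * EX
        diffusion = C * X[:, k] + Cbar * EX
        Xn[:, k + 1] = Xn[:, k] + drift * mu[k] + diffusion * dW[:, k]
    return Xn

X = np.full((M, len(T)), x0)
for _ in range(10):
    X_prev, X = X, picard_step(X)

# Column k of the iterate becomes exact after k sweeps, so after
# len(mu) sweeps successive iterates coincide:
print(np.abs(X - X_prev).max())   # 0.0
```

This finite-step convergence is special to the discrete case; when the time scale contains dense intervals, only the uniform convergence argument of the proof applies.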
We are now in a position to give the main results for the MF-SLQ optimal control problem. For this, we need a useful lemma. By some simple calculations, it is not hard to obtain the following product rule for stochastic processes on time scales, which is very similar to that of Du and Dieu [12].

Lemma 1. For any two n-dimensional stochastic processes and with

where , we have

In this case,

where is the graininess function defined in (3).

Remark 4. Another form of the above product rule is as follows:

where , , and . When , it is consistent with Itô's formula.

Remark 5. As mentioned before, because the quadratic variation of a process depends not only on the process itself but also on the structure of the time domain, it becomes a little more complicated than the classical one. For instance, the quadratic variation of a deterministic continuous process is no longer zero. Therefore, the product rule on time scales can take different forms. For example, the product rule (6) is equivalent to

Now, we use the square-completion technique to present a state feedback optimal control via two coupled Riccati equations on time scales.

Theorem 2. Let (H1) and (H2) hold; then the following Riccati equations on time scales (REs) admit a unique solution :

where and are given as

Furthermore, the optimal control of the MF-SLQ problem can be presented as

In this case, the optimal cost functional is

Proof. From the state equation, we have

Assume that and for some deterministic and differentiable functions and on the time scale . Applying Lemma 1 to and , we obtain

By property (9), integrating from 0 to and taking expectations on both sides of the above two equations (41) and (42), we see that

Moreover, the cost functional can be rewritten as

Inserting (43) and (44) into the cost functional (45) gives

If and satisfy the Riccati equations (33) and (34), then

Since and , the optimal control should satisfy

After some calculations, we obtain the optimal control (37). Substituting it into (47), the optimal cost functional can be expressed as (38). For the existence and uniqueness of the solutions to the REs, it is asserted in [24] that Riccati equation (33) admits a unique positive semidefinite solution since (H2) holds. It follows that . Using a similar method, we obtain the solvability of the RE (34).

Remark 6. When , and all vanish, we have . This recovers the result of the classical SLQ problem [24].

Remark 7. When , the coupled REs (33) and (34) reduce to the result in [26]. Conversely, when , the coupled REs are consistent with the case in [27].
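In the purely discrete special case mentioned above, a Riccati equation of this type becomes a backward recursion that can be solved directly. As a sketch only (hypothetical scalar data, no mean-field term, i.e., the classical discrete-time LQ special case rather than the paper's coupled REs), the following Python code solves the backward recursion, builds the feedback gains, and checks that the closed-loop cost equals P_0 times the squared initial state, as the completion-of-squares argument predicts:

```python
# Hypothetical scalar discrete-time LQ data: x_{k+1} = A x_k + B u_k,
# cost = sum of (Q x_k^2 + R u_k^2) plus terminal weight G x_N^2.
N = 10
A, B, Q, R, G = 0.9, 0.5, 1.0, 1.0, 1.0

P = [0.0] * (N + 1)
K = [0.0] * N
P[N] = G
for k in range(N - 1, -1, -1):
    Pn = P[k + 1]
    K[k] = -(B * Pn * A) / (R + B * Pn * B)      # feedback gain: u_k = K_k x_k
    P[k] = Q + A * Pn * A + A * Pn * B * K[k]    # backward Riccati step

# Simulate the closed loop from x_0 = 1 and accumulate the cost.
x, cost = 1.0, 0.0
for k in range(N):
    u = K[k] * x
    cost += Q * x * x + R * u * u
    x = A * x + B * u
cost += G * x * x
print(abs(cost - P[0]) < 1e-9)   # True: optimal cost is P_0 * x_0^2
```

The same backward structure underlies the time-scales REs: at right-scattered points one performs a discrete step of this kind, while on dense intervals the recursion degenerates into a differential equation.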
Similarly, we have the following theorem which can be regarded as an equivalent form of Theorem 2.

Theorem 3. Let (H1) and (H2) hold; then RE (33) and the following RE (50) admit a unique solution :

where and are given as before. For the MF-SLQ problem,

is an optimal control. Moreover, the optimal cost functional with respect to is

Proof. As stated in Theorem 2, we can obtain the solvability of the REs (33) and (50). We need only prove (51) and (52). Integrating from 0 to and taking expectations, we obtain

Consequently, by completing the square, one has

Since and , we must select such that

Therefore, the optimal control satisfies (51). In this case, the optimal cost functional is (52). The conclusions are proved.

Remark 8. The solution of the RE (34) equals the sum of and , where and are the solutions of the REs (33) and (50), respectively.

4. Example

The theorems in Section 3 tell us that we can solve the MF-SLQ problem whenever the corresponding Riccati equations can be solved. Now, we discuss a numerical example based on the method developed in the previous sections and compare the differences among the time-scales, continuous-time, and discrete-time settings.

Consider the following example with a one-dimensional state equation on the time scale :

and the cost functional

The corresponding coupled Riccati equations are

By solving the coupled Riccati equations and using Theorem 2, the optimal control can be expressed as

If we regard the time domain as the continuous time , then the optimal control is

On the other hand, if we treat the time domain as the discrete time , then the optimal control is

The comparison result of the optimal controls is shown in Figure 1.

The example implies that we should apply an impulsive control when and in the time-scales setting . Although the optimal control on the interval in the time-scales case is the same as in the continuous case, they differ on the interval . That is to say, the time gap influences not only the impulsive control but also the optimal control on the interval . We can see that the optimal control depends on the structure of the time domain. This interesting effect is hidden in the classical continuous and discrete formulations.
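The example's specific data are given above; as an independent illustration of how a Riccati equation is solved numerically on a mixed time scale, the following Python sketch uses entirely hypothetical scalar coefficients on the hypothetical time scale T = [0, 1] ∪ {2} (deterministic dynamics, no mean-field term). It applies one discrete Riccati step across the scattered gap from t = 2 back to t = 1 and then integrates the continuous Riccati ODE backward over the dense part [0, 1]:

```python
# Hypothetical scalar data on T = [0, 1] ∪ {2}:
#   dynamics  x^Delta = a x + b u,
#   cost      integral of (q x^2 + r u^2) over T, plus terminal g x(2)^2.
a, b, q, r, g = -0.5, 1.0, 1.0, 1.0, 1.0

def scattered_step(P_next, mu):
    """Backward Riccati step across a right-scattered gap of graininess mu:
    the discrete-time LQ update with A = 1 + a*mu, B = b*mu, Q = q*mu, R = r*mu."""
    A, B, Q, R = 1.0 + a * mu, b * mu, q * mu, r * mu
    return Q + A * P_next * A - (A * P_next * B) ** 2 / (R + B * P_next * B)

# Scattered part: jump from t = 2 back to t = 1 (graininess mu = 1).
P1 = scattered_step(g, 1.0)

# Dense part: integrate the continuous Riccati ODE
#   P' = -(q + 2 a P - b^2 P^2 / r)
# backward from t = 1 to t = 0 with a simple Euler scheme.
P, h = P1, 1e-4
for _ in range(int(1.0 / h)):
    P += h * (q + 2.0 * a * P - (b * P) ** 2 / r)   # backward-in-time step
print(P1, P)   # value entering the dense part, and the value at t = 0
```

The two regimes of the computation mirror the phenomenon discussed above: the scattered gap contributes a genuinely discrete update (the source of the impulsive behavior), and that update changes the terminal condition seen by the continuous part, so the control on [0, 1] differs from the purely continuous solution.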

5. Conclusions

The linear quadratic optimal control problem for mean-field stochastic differential equations on time scales has been studied. It unifies and extends the mean-field optimal control problems in the continuous-time and discrete-time formulations. Via two coupled REs on time scales, we obtain the corresponding optimal control in state feedback form. The optimal control problems established in this paper offer a more practical scheme for directly tackling problems in which continuous time and discrete time are mixed.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Key R&D Program of China (Grant no. 2018YFA0703900), the Tian Yuan Project of the National Natural Science Foundation of China (Grant no. 11626247), and the Major Project of the National Social Science Foundation of China (Grant no. 19ZDA091).