Abstract

In this paper, we investigate necessary optimality conditions for discrete stochastic optimal control problems driven by both fractional noise and white noise. The admissible control region is not necessarily convex. The corresponding variational inequalities are obtained by applying the classical variational method and Malliavin calculus. We also apply the stochastic maximum principle to a linear-quadratic optimal control problem to illustrate the main result.

1. Introduction

We consider a stochastic control problem for a state process driven by both general white noise and fractional noise with Hurst parameter $H \in (1/2, 1)$. More precisely, the state of the system is described by a stochastic difference equation whose coefficient functions $b$, $\sigma$, and $\tilde{\sigma}$ and control variables are introduced in Section 2. The cost functional is defined through a running cost $f$ and a terminal cost $h$, which are also introduced in Section 2.

Optimal control problems have a variety of applications in fields such as engineering, financial mathematics, and physics. The maximum principle is one of the main pillars of modern control theory. As a necessary condition for deterministic optimal control, it was first formulated by Pontryagin and his group [1]. It states that any optimal control, along with the optimal state trajectory, must solve a Hamiltonian system, which is a two-point boundary value problem, together with a maximum condition on the Hamiltonian. The theory was then developed extensively, and different versions of the maximum principle were derived.

With the development of optimal control theory, researchers began to work on the discrete case by following the Pontryagin maximum principle for continuous optimal control problems. However, it soon became clear that the discrete case behaves quite differently from the continuous one. By imposing a convexity requirement, some researchers [2, 3] derived a discrete maximum principle. Mardanov et al. [4] treated a discrete optimization problem without convexity and smoothness assumptions; taking into account the specific character of discrete systems, they obtained a necessary optimality condition that is not formulated in terms of the Hamilton–Pontryagin function.

Naturally, with the emergence of stochastic problems, more and more researchers extended the maximum principle to the stochastic case. Kushner [5] employed the spike variation and Neustadt's variational principle [6] to derive a stochastic maximum principle. Based on Girsanov transformations, Haussmann [7] extensively investigated necessary conditions for stochastic optimal state-feedback controls of systems with nondegenerate diffusion coefficients. Bismut [8] derived the adjoint equation via the martingale representation theorem. Peng [9] first considered the second-order term in the "Taylor expansion" of the variation and obtained a stochastic maximum principle for systems that are possibly degenerate, with control-dependent diffusions and not necessarily convex control regions. The form of his maximum principle is quite different from the deterministic one and reflects the stochastic nature of the problem. With the development of fractional calculus, Han et al. [10] obtained a maximum principle for general controlled stochastic differential systems driven by fractional Brownian motions with Hurst parameter $H$; their maximum principle involves Malliavin derivatives.

For discrete stochastic systems, however, some results on the maximum principle are analogous to those for deterministic systems and are based on the Lagrange multiplier method [11]. Recently, Lin and Zhang [12] developed a maximum principle for optimal control of discrete-time stochastic systems with a nonconvex admissible control region. To date, the existing results for the discrete stochastic setting mostly concern systems with general multiplicative noise. Inspired by this, we study the maximum principle for discrete stochastic systems driven by both fractional noise and white noise by using the classical variational method and Malliavin calculus; the admissible control region need not be convex.

The rest of this paper is organized as follows. In Section 2, we introduce some preliminaries and main assumptions needed to study the discrete stochastic control problem driven by both fractional noise and white noise. In Section 3, we derive the necessary conditions that the optimal control should satisfy. In Section 4, an example is given to illustrate the main results. In Section 5, we summarize the methods used and the results obtained.

2. Preliminaries

Let $W_t$, $t \in [0, T]$, be a $d$-dimensional standard Brownian motion, and let $H \in (1/2, 1)$ be a fixed Hurst parameter. Let
$$Z_H(t, s) = c_H\, s^{1/2 - H} \int_s^t (u - s)^{H - 3/2}\, u^{H - 1/2}\, du, \qquad 0 < s < t \le T,$$
with
$$c_H = \left( \frac{H (2H - 1)}{\beta(2 - 2H,\, H - 1/2)} \right)^{1/2},$$
where $\beta(\cdot, \cdot)$ denotes the Beta function, and define
$$B^H_t = \int_0^t Z_H(t, s)\, dW_s.$$

Then, $B^H_t$, $t \in [0, T]$, is a $d$-dimensional fractional Brownian motion.
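For numerical work, one can bypass the kernel representation above and sample $B^H$ directly from its covariance $\mathbb{E}[B^H_t B^H_s] = \frac{1}{2}\left( t^{2H} + s^{2H} - |t - s|^{2H} \right)$. The following sketch does this by Cholesky factorization; the function name and all parameter values are ours, for illustration only.

```python
import numpy as np

def fbm_path(n_steps: int, T: float, H: float, rng=None):
    """Sample a fractional Brownian motion path on (0, T] at n_steps equally
    spaced times, by Cholesky factorization of the covariance
    E[B^H_t B^H_s] = 0.5 * (t**(2H) + s**(2H) - |t - s|**(2H))."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(T / n_steps, T, n_steps)   # strictly positive times
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)                # covariance is positive definite
    return t, L @ rng.standard_normal(n_steps)

# One path with H = 0.7; for H > 1/2 the increments are positively correlated.
t, bh = fbm_path(n_steps=500, T=1.0, H=0.7)
```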

Recall the operators of Eq. (5.35) of Hu [13], which will be needed below.

Let $\{e_i\}_{i \ge 1}$ be an orthonormal basis of $L^2([0, T])$ such that $e_i$, $i = 1, 2, \dots$, are smooth functions on $[0, T]$. Let $\mathcal{P}_W$ be the set of all polynomials of the standard Brownian motion over the interval $[0, T]$; namely, $\mathcal{P}_W$ contains all elements of the form
$$F = f\!\left( \int_0^T e_1(t)\, dW_t, \int_0^T e_2(t)\, dW_t, \dots, \int_0^T e_n(t)\, dW_t \right),$$
where $f$ is a polynomial of $n$ variables. If $F$ is of the above form, then its Malliavin derivative is defined as
$$D_s F = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\!\left( \int_0^T e_1(t)\, dW_t, \dots, \int_0^T e_n(t)\, dW_t \right) e_i(s), \qquad s \in [0, T].$$
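As a simple illustration of this definition, take $n = 1$ and $f(x) = x^2$; then $F = \left( \int_0^T e_1(t)\, dW_t \right)^2$ and
$$D_s F = 2 \left( \int_0^T e_1(t)\, dW_t \right) e_1(s), \qquad s \in [0, T],$$
so that $\mathbb{E} \int_0^T |D_s F|^2\, ds = 4\, \mathbb{E}\left[ \left( \int_0^T e_1(t)\, dW_t \right)^2 \right] = 4$, since $\|e_1\|_{L^2([0,T])} = 1$.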

For any $F \in \mathcal{P}_W$, we define the norm
$$\|F\|_{1,2}^2 = \mathbb{E}|F|^2 + \mathbb{E} \int_0^T |D_s F|^2\, ds.$$

Let $\mathbb{D}^{1,2}$ denote the Banach space obtained by completing $\mathcal{P}_W$ under the norm $\|\cdot\|_{1,2}$.

Let $\{\tilde{e}_i\}_{i \ge 1}$ be an orthonormal basis of $L^2([0, T])$ such that $\tilde{e}_i$, $i = 1, 2, \dots$, are smooth functions on $[0, T]$. Let $\mathcal{P}_H$ be the set of all polynomials of the fractional Brownian motion over the interval $[0, T]$; namely, $\mathcal{P}_H$ contains all elements of the form
$$G = g\!\left( \int_0^T \tilde{e}_1(t)\, dB^H_t, \dots, \int_0^T \tilde{e}_n(t)\, dB^H_t \right),$$
where $g$ is a polynomial of $n$ variables. We define its Malliavin derivative by
$$D^H_s G = \sum_{i=1}^{n} \frac{\partial g}{\partial x_i}\!\left( \int_0^T \tilde{e}_1(t)\, dB^H_t, \dots, \int_0^T \tilde{e}_n(t)\, dB^H_t \right) \tilde{e}_i(s), \qquad s \in [0, T].$$

Similarly, we define the norm $\|\cdot\|_{1,2}^H$ and the corresponding space $\mathbb{D}_H^{1,2}$.

The following duality formula will be used later to solve the stochastic optimal control problem (15), (16), and (19).

Lemma 1. (Theorem 6.23 of [13]). Let $g : [0, T] \times \Omega \to \mathbb{R}$ be jointly measurable with $\mathbb{E} \int_0^T \int_0^T \phi(s, t)\, |g_s g_t|\, ds\, dt < \infty$, and let $F \in \mathbb{D}_H^{1,2}$. Then,
$$\mathbb{E}\left[ F \int_0^T g_t\, dB^H_t \right] = \mathbb{E} \int_0^T \int_0^T \phi(s, t)\, g_s\, D^H_t F\, ds\, dt, \qquad \text{where } \phi(s, t) = H (2H - 1)\, |s - t|^{2H - 2}.$$
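For example, taking $F = \int_0^T f_t\, dB^H_t$ with a deterministic integrand $f$, so that $D^H_t F = f_t$, Lemma 1 reduces to the well-known covariance identity for fractional Wiener integrals with $H > 1/2$:
$$\mathbb{E}\left[ \int_0^T g_s\, dB^H_s \int_0^T f_t\, dB^H_t \right] = \int_0^T \int_0^T \phi(s, t)\, g_s\, f_t\, ds\, dt.$$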

Let $(\Omega, \mathcal{F}, P)$ be a given complete probability space. Let $\mathcal{F}_k$ be the $\sigma$-field generated by $\{W_i,\, i \le k\}$ and $\{B^H_i,\, i \le k\}$, where $\{W_k\}$ is a sequence of $d$-dimensional Brownian motion.

Similarly, let $\{B^H_k\}$ be a sequence of $d$-dimensional fractional Brownian motion that satisfies the following conditions, where $H \in (1/2, 1)$:
(i) $B^H_k$ is $\mathcal{F}_k$-measurable.
(ii) The increments of $\{B^H_k\}$ are stationary and need not be independent.
(iii) For every $k$, the components of $B^H_k$ are independent real-valued Gaussian random variables.
(iv) $\mathbb{E}[B^H_k] = 0$ and $\mathbb{E}[B^H_j B^H_k] = \frac{1}{2} \left( j^{2H} + k^{2H} - |j - k|^{2H} \right)$, where $j, k = 0, 1, 2, \dots$

Let $\xi_k = W_{k+1} - W_k$ be the white noise and $\zeta_k = B^H_{k+1} - B^H_k$ be the fractional noise.
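A quick numerical check illustrates condition (ii): unlike the white noise, the fractional noise is correlated across steps. From the covariance in (iv), the autocovariance of the unit-step increments at lag $m$ is $\frac{1}{2}\left( |m + 1|^{2H} + |m - 1|^{2H} - 2 |m|^{2H} \right)$; the helper below (its name and the unit-step convention are ours) evaluates it.

```python
import numpy as np

def frac_noise_cov(m: int, H: float) -> float:
    """Autocovariance E[zeta_k zeta_{k+m}] of the unit-step fractional noise,
    computed from the fBm covariance in condition (iv)."""
    return 0.5 * (abs(m + 1)**(2 * H) + abs(m - 1)**(2 * H) - 2 * abs(m)**(2 * H))

H = 0.7
print(frac_noise_cov(0, H))  # 1.0: each zeta_k has unit variance, like xi_k
print(frac_noise_cov(1, H))  # about 0.32 > 0: successive zeta_k are correlated
```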

Let $U \subseteq \mathbb{R}^m$ be a bounded domain, and define the space of admissible controls as
$$\mathcal{U} = \left\{ u = (u_0, u_1, \dots, u_{N-1}) : u_k \ \text{is} \ \mathcal{F}_k\text{-measurable and takes values in } U \right\}.$$

For an arbitrary bounded random variable $v$ and sufficiently small $\varepsilon > 0$, we define a perturbation $\varepsilon v$ acting at some single step $j \in \{0, 1, \dots, N - 1\}$. Let $u = (u_0, \dots, u_{N-1})$ be an admissible control; we can rewrite the perturbed control as $u^\varepsilon_k = u_k + \varepsilon\, \delta_{kj}\, v$, where $\delta_{kj}$ is the Kronecker delta, i.e., $\delta_{kj} = 1$ when $k = j$ and $\delta_{kj} = 0$ when $k \neq j$.
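In implementation terms, the perturbation acts on a single coordinate of the control sequence. The following sketch (the names u, v, eps, and j are ours, and the control is taken scalar for simplicity) realizes $u^\varepsilon_k = u_k + \varepsilon\, \delta_{kj}\, v$.

```python
import numpy as np

def spike_variation(u: np.ndarray, v: float, eps: float, j: int) -> np.ndarray:
    """Return the perturbed control with u_eps[k] = u[k] + eps * delta_{kj} * v."""
    u_eps = u.copy()
    u_eps[j] += eps * v          # only step j is perturbed
    return u_eps

u = np.zeros(10)                 # a nominal admissible control
u_eps = spike_variation(u, v=1.0, eps=0.01, j=3)
```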

The controlled stochastic system is described by the following discrete stochastic difference equation:
$$x_{k+1} = x_k + b(x_k, u_k) + \sigma(x_k, u_k)\, \xi_k + \tilde{\sigma}(x_k, u_k)\, \zeta_k, \qquad k = 0, 1, \dots, N - 1, \quad x_0 = x. \qquad (15)$$

The cost functional is
$$J(u) = \mathbb{E}\left[ \sum_{k=0}^{N-1} f(x_k, u_k) + h(x_N) \right]. \qquad (16)$$
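To make (15) and (16) concrete, the sketch below simulates a scalar version of the system and estimates the cost by Monte Carlo. The particular coefficient functions, the Hurst parameter, and the path count are illustrative assumptions only, not part of the setup above.

```python
import numpy as np

def simulate_cost(u, b, sigma, sigma_tilde, f, h, H=0.7, n_paths=2000, rng=None):
    """Monte Carlo estimate of J(u) = E[sum_k f(x_k, u_k) + h(x_N)] for the
    scalar dynamics x_{k+1} = x_k + b + sigma * xi_k + sigma_tilde * zeta_k,
    where xi_k is white noise and zeta_k is unit-step fractional noise."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(u)
    # Covariance matrix of the fractional noise (zeta_0, ..., zeta_{N-1}).
    m = np.arange(N)
    d = m[:, None] - m[None, :]
    cov = 0.5 * (np.abs(d + 1)**(2 * H) + np.abs(d - 1)**(2 * H)
                 - 2 * np.abs(d)**(2 * H))
    L = np.linalg.cholesky(cov)
    total = 0.0
    for _ in range(n_paths):
        xi = rng.standard_normal(N)        # white noise
        zeta = L @ rng.standard_normal(N)  # correlated fractional noise
        x, cost = 0.0, 0.0
        for k in range(N):
            cost += f(x, u[k])
            x = x + b(x, u[k]) + sigma(x, u[k]) * xi[k] \
                  + sigma_tilde(x, u[k]) * zeta[k]
        total += cost + h(x)
    return total / n_paths

# Illustrative linear dynamics with quadratic cost.
u = np.zeros(10)
J = simulate_cost(u,
                  b=lambda x, a: 0.1 * x + a,
                  sigma=lambda x, a: 0.2,
                  sigma_tilde=lambda x, a: 0.3,
                  f=lambda x, a: x**2 + a**2,
                  h=lambda x: x**2)
```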

Assume that $b$, $\sigma$, $\tilde{\sigma}$, $f$, and $h$ satisfy the following conditions:

(H1) The functions $b$, $\sigma$, and $\tilde{\sigma}$ are continuous and differentiable with respect to the variables $x$ and $u$.
(H2) There exists a constant $C > 0$ such that
$$|b(x, u) - b(x', u')| + |\sigma(x, u) - \sigma(x', u')| + |\tilde{\sigma}(x, u) - \tilde{\sigma}(x', u')| \le C \left( |x - x'| + |u - u'| \right).$$
(H3) The functions $f$ and $h$ are continuous and differentiable with bounded derivatives.

According to equation (15) and the definition of admissible controls, $x_k$ is $\mathcal{F}_k$-measurable, and so is $u_k$. Here, all derivations are for vectors. We write $x^\varepsilon$ for the state trajectory corresponding to the perturbed control $u^\varepsilon$, and we use subscripts to denote partial derivatives, e.g., $b_x = \partial b / \partial x$ and $b_u = \partial b / \partial u$.

Our stochastic optimal control problem is to minimize the cost functional over $\mathcal{U}$, namely, to find an optimal $u^* \in \mathcal{U}$ satisfying
$$J(u^*) = \min_{u \in \mathcal{U}} J(u). \qquad (19)$$

3. The Maximum Principle

We have the following theorem as the main result of this paper.

Theorem 1. Let assumptions (H1)–(H3) hold. If $u$ is a solution to the optimal control problem (15), (16), and (19), and $x$ is the corresponding state trajectory, where $b$, $\sigma$, and $\tilde{\sigma}$ are as defined above, then the following general maximum principle holds:

In order to prove Theorem 1, we begin by estimating the first-order variation of the state variables.

Lemma 2. Let assumptions (H1)–(H3) hold. Then,
$$\mathbb{E}\left| x^\varepsilon_k - x_k - \varepsilon y_k \right|^2 = o(\varepsilon^2), \qquad (22)$$
where $y_k$ is the solution of the first-order variational equation
$$y_{k+1} = y_k + b_x(x_k, u_k)\, y_k + \sigma_x(x_k, u_k)\, y_k\, \xi_k + \tilde{\sigma}_x(x_k, u_k)\, y_k\, \zeta_k + \left[ b_u(x_k, u_k) + \sigma_u(x_k, u_k)\, \xi_k + \tilde{\sigma}_u(x_k, u_k)\, \zeta_k \right] \delta_{kj}\, v, \qquad y_0 = 0. \qquad (23)$$

Proof. Let $y^\varepsilon_k = x^\varepsilon_k - x_k - \varepsilon y_k$. Under assumption (H2), we have the following moment inequality:
$$\mathbb{E}\left| x^\varepsilon_k - x_k \right|^2 \le C \varepsilon^2. \qquad (25)$$
In fact, one step of the recursion (15) together with (H2) bounds $\mathbb{E}|x^\varepsilon_{k+1} - x_{k+1}|^2$ by $(1 + C)\, \mathbb{E}|x^\varepsilon_k - x_k|^2 + C \varepsilon^2$. We also have $x^\varepsilon_0 - x_0 = 0$. So, by repeating this step, moment inequality (25) is verified. In order to prove inequality (22), we consider the recursion satisfied by $y^\varepsilon_k$. According to (H2), its coefficients are uniformly bounded, and the remaining source terms are of order $o(\varepsilon)$ in $L^2$; it follows that $\mathbb{E}|y^\varepsilon_k|^2 = o(\varepsilon^2)$, which is (22). Through the above derivation, Lemma 2 is proved.
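The "repeating this step" argument is an instance of the discrete Gronwall inequality, which in generic notation reads: if $a_{k+1} \le (1 + C)\, a_k + b_k$ with $a_0 \ge 0$ and $b_k \ge 0$, then
$$a_k \le (1 + C)^k a_0 + \sum_{i=0}^{k-1} (1 + C)^{k-1-i} b_i \le e^{Ck} \left( a_0 + \sum_{i=0}^{k-1} b_i \right),$$
which is why the constant in (25) stays uniform over the finite horizon.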

The estimate for the variational equation of the state is a critical step toward the maximum principle, and we now solve the above stochastic difference equation explicitly.

Lemma 3. There exists a unique bounded solution to the following linear matrix-valued stochastic difference equation:
$$\Phi_{k+1} = \left[ I + b_x(x_k, u_k) + \sigma_x(x_k, u_k)\, \xi_k + \tilde{\sigma}_x(x_k, u_k)\, \zeta_k \right] \Phi_k, \qquad \Phi_0 = I. \qquad (33)$$

Proof. For the uniqueness of the solution to (33), assume that there exists another solution $\Psi_k$. Then the difference satisfies
$$\Phi_{k+1} - \Psi_{k+1} = \left[ I + b_x + \sigma_x \xi_k + \tilde{\sigma}_x \zeta_k \right] (\Phi_k - \Psi_k).$$
When $k = 0$, we have $\Phi_0 - \Psi_0 = 0$. By induction, the uniqueness of the solution to the equation is obtained. For the boundedness of the solution, it is easy to get a moment bound from assumption (H2) over the finite horizon.

According to Lemma 3, we can express $y_k$ in an implicit form in terms of $\Phi_k$, as in the following lemma.

Lemma 4. The solution of equation (23) has the following form:
$$y_k = \Phi_k\, \Phi_{j+1}^{-1} \left[ b_u(x_j, u_j) + \sigma_u(x_j, u_j)\, \xi_j + \tilde{\sigma}_u(x_j, u_j)\, \zeta_j \right] v, \qquad k > j, \qquad (36)$$
and $y_k = 0$ for $k \le j$.

Proof. According to equations (23) and (33), the processes $y_k$ and $\Phi_k$ satisfy recursions with the same homogeneous part. Then, we multiply both sides of the equation for $y$ by $\Phi_{k+1}^{-1}$, and we obtain a telescoping identity for $\Phi_k^{-1} y_k$. By the iterative method, we have
$$\Phi_k^{-1} y_k = \sum_{i=0}^{k-1} \Phi_{i+1}^{-1} \left[ b_u(x_i, u_i) + \sigma_u(x_i, u_i)\, \xi_i + \tilde{\sigma}_u(x_i, u_i)\, \zeta_i \right] \delta_{ij}\, v.$$
Equation (36) is obtained by multiplying both sides of the equality by $\Phi_k$.
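In generic terms, this is the discrete variation-of-constants (Duhamel) formula: if $\Phi_{k+1} = A_k \Phi_k$ with $\Phi_0 = I$ and each $A_k$ invertible, then the solution of $y_{k+1} = A_k y_k + c_k$, $y_0 = 0$, is
$$y_k = \sum_{i=0}^{k-1} \Phi_k\, \Phi_{i+1}^{-1}\, c_i,$$
and with the source $c_i$ supported at the single step $j$, as above, the sum collapses to the single term appearing in (36).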

Lemma 5. We can expand the cost functional as
$$0 \le J(u^\varepsilon) - J(u) = \varepsilon\, \mathbb{E}\left[ \sum_{k=0}^{N-1} f_x(x_k, u_k)\, y_k + f_u(x_j, u_j)\, v + h_x(x_N)\, y_N \right] + o(\varepsilon), \qquad (40)$$
where $u$ is the optimal control.

Proof. Since $u$ is optimal, it is natural that $J(u^\varepsilon) - J(u) \ge 0$. By Lemma 2, we have $x^\varepsilon_k - x_k = \varepsilon y_k + o(\varepsilon)$ in $L^2$, so a first-order Taylor expansion of $f$ and $h$ yields the right-hand side. Then, variational inequality (40) is derived.

According to equation (36), we substitute the solution of (23) into the variational inequality (40).

Lemma 6. To deal with the terms involving the fractional noise, we have the following duality formula of Malliavin calculus:
$$\mathbb{E}\left[ F\, \zeta_k \right] = \mathbb{E} \int_0^T \left( \int_k^{k+1} \phi(s, t)\, ds \right) D^H_t F\, dt. \qquad (45)$$

Proof. By Lemma 1, we have the duality formula for a general jointly measurable integrand $g$. Let $g_s = \mathbf{1}_{[k, k+1]}(s)$; then, the left-hand side of (45) is derived as $\mathbb{E}[F \zeta_k]$, since $\zeta_k = \int_0^T \mathbf{1}_{[k, k+1]}(s)\, dB^H_s$. The right-hand side of equality (45) follows by restricting the integration in $s$ to $[k, k+1]$. This completes the proof of Lemma 6.

By Lemma 6, we rewrite equality (43) as

For an arbitrary $v$ that is not equal to zero, we have

Then, we obtain the following general maximum principle:

This completes the proof of Theorem 1.

4. Applications to the Linear-Quadratic Problem

In this section, we apply Theorem 1 to a stochastic linear-quadratic optimal control problem.

Consider the following controlled system:
$$x_{k+1} = A_k x_k + B_k u_k + C_k \xi_k + D_k \zeta_k, \qquad x_0 = x,$$
with the cost functional
$$J(u) = \frac{1}{2}\, \mathbb{E}\left[ \sum_{k=0}^{N-1} \left( \langle Q_k x_k, x_k \rangle + \langle R_k u_k, u_k \rangle \right) + \langle G x_N, x_N \rangle \right].$$

Here, $A_k$, $B_k$, $C_k$, and $D_k$ are given bounded matrices, and $Q_k$, $R_k$, and $G$ are positive matrices. According to (23), the variational equation of the state equation is
$$y_{k+1} = A_k y_k + B_k\, \delta_{kj}\, v, \qquad y_0 = 0.$$

In this case, we get $\Phi_{k+1} = A_k \Phi_k$ with $\Phi_0 = I$. We express $y_k$ in an implicit form of $\Phi_k$ as
$$y_k = \Phi_k\, \Phi_{j+1}^{-1} B_j\, v, \qquad k > j.$$

Applying Theorem 1, we obtain the maximum principle as follows:

In this special case, we get the optimal control directly as
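For a numerical counterpart, recall that with purely additive independent noise the discrete linear-quadratic problem is solved by the classical backward Riccati recursion, and the noise does not affect the feedback gains (certainty equivalence). The correlated fractional noise is precisely what breaks this independence and brings the Malliavin terms into Theorem 1. The sketch below shows only the standard Riccati construction for comparison; all matrices and the horizon are illustrative, not those of the example above.

```python
import numpy as np

def lqr_gains(A, B, Q, R, G, N):
    """Backward Riccati recursion for the discrete LQ problem
    min E[sum_k (x_k'Q x_k + u_k'R u_k) + x_N'G x_N],
    x_{k+1} = A x_k + B u_k + w_k, with w_k independent additive noise."""
    P = G
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # u_k = -K x_k
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]   # gains[k] is the feedback matrix at step k

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R, G = np.eye(2), np.eye(1), 10 * np.eye(2)
K = lqr_gains(A, B, Q, R, G, N=20)
```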

5. Conclusion

In this paper, necessary optimality conditions for discrete stochastic optimal control problems driven by both fractional noise and white noise are derived. The admissible control region is not necessarily convex. The stochastic variational inequalities are obtained by applying the classical variational method and the iterative method, and Malliavin calculus is used to derive the maximum principle for our problems. In fact, we obtain a rather general maximum principle. We also apply it to a discrete linear-quadratic optimal control problem, where the optimal control is obtained explicitly.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was partially supported by NSFC (grant 11871244), the Special Funds of Provincial Industrial Innovation of Jilin Province, China (no. 2017C028-1), Project of Science and Technology Development of Jilin Province, China (no. 20190201302JC), and China Automobile Industry Innovation and Development Joint Fund (U1564213).