Abstract and Applied Analysis

Volume 2014 (2014), Article ID 452124, 8 pages

http://dx.doi.org/10.1155/2014/452124
Research Article

Stochastic Maximum Principle for Partial Information Optimal Control Problem of Forward-Backward Systems Involving Classical and Impulse Controls

1School of Science, Dalian Jiaotong University, Dalian 116028, China

2School of Mathematical Sciences, Dalian University of Technology, Dalian 116023, China

Received 1 January 2014; Revised 28 March 2014; Accepted 29 March 2014; Published 15 April 2014

Academic Editor: Xiaojie Su

Copyright © 2014 Yan Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We study the partial information classical and impulse controls problem for forward-backward systems driven by Lévy processes, where the control variable consists of two components, the classical stochastic control and the impulse control, and where the information available to the controller is possibly less than the full information, that is, partial information. We derive a maximum principle giving sufficient and necessary optimality conditions for the local critical points of the classical and impulse controls problem. As an application, we apply the maximum principle to a portfolio optimization problem with a piecewise consumption process and give its explicit solution.

1. Introduction

The classical and impulse controls problems have received considerable attention in recent years due to their wide applicability in different areas, such as optimal control of the exchange rate between different currencies (see, e.g., [1–3]), optimal financing and dividend control of an insurance company facing fixed and proportional transaction costs (see, e.g., [4, 5]), stochastic differential games (see, e.g., [6]), and dynamic output feedback controller design (see, e.g., [7] and the references therein).

In the existing literature, the dynamic programming principle and the maximum principle are the two main approaches to solving these problems.

Under the dynamic programming principle, the classical and impulse controls problem can be solved via a verification theorem, and the value function is a solution to certain quasi-variational inequalities. However, the dynamic programming approach relies on the assumption that the controlled system is Markovian; see, for example, [8–10].

There have been some pioneering works on deriving maximum principles for classical and impulse controls problems. For example, Wu and Zhang [11] established a maximum principle for optimal control problems of forward-backward systems involving impulse controls, and Wu and Zhang [12] gave a maximum principle for stochastic recursive optimal control problems involving impulse controls. In their control problems, the information available to the controller is the full information.

In many practical systems, the controller only gets partial information instead of full information, for example, delayed information (see, e.g., [13–16]). The partial information stochastic control problem is not of Markovian type and hence cannot be solved by dynamic programming. As a result, maximum principles have been established to solve partial information stochastic control problems, and there is already a rich literature of corresponding maximum principles. For example, Baghery and Øksendal [17] derived a maximum principle for the partial information stochastic control problem where the stochastic system is described by stochastic differential equations (SDEs hereafter). An and Øksendal [18] gave a maximum principle for stochastic differential games under partial information. Øksendal and Sulem [19] established maximum principles for stochastic control of forward-backward systems driven by Lévy processes. In all of these control problems, the control variable is just the classical stochastic control process u(·). To the best of our knowledge, there is no literature studying the maximum principle for partial information classical and impulse controls problems, which motivates our work.

In this paper, we study classical and impulse controls problems of forward-backward systems, where the stochastic systems are represented by forward-backward SDEs driven by Lévy processes, the control variable consists of two components, the stochastic control u and the impulse control ζ, and the information available to the controller is possibly partial information rather than full information. Because of the non-Markovian nature of the partial information problem, we cannot use the dynamic programming principle. Instead, we derive a maximum principle that allows us to handle the partial information case.

A similar maximum principle was studied by Wu and Zhang [11] in the complete information case and in a Brownian motion setting. There are three main differences between our paper and [11]. Firstly, we study more general cases: the forward-backward system is driven by Lévy processes and the information available to the controller is partial. Secondly, their proof differs from ours: they used a convex perturbation technique to establish the maximum principle. Thirdly, they assumed concavity of the Hamiltonian and of the utility functional so that the necessary optimality conditions become sufficient. However, the concavity conditions may not hold in many applications. Consequently, in our maximum principle formulation, we give sufficient and necessary optimality conditions for local critical points, instead of global optima, without any concavity assumption.

The paper is organized as follows. In the next section we formulate the partial information classical and impulse controls problem of forward-backward systems driven by Lévy processes. In Section 3 we derive the stochastic maximum principle for this problem. In Section 4 we apply the general results obtained in Section 3 to solve a portfolio optimization example explicitly. Finally, we conclude the paper in Section 5.

2. Problem Formulation

Let (Ω, ℱ, {ℱ_t}_{0≤t≤T}, P) be a filtered probability space and let η(t) be a Lévy process defined on it. Let B(t) be an ℱ_t-Brownian motion and let Ñ(dt, dz) = N(dt, dz) − ν(dz)dt be the compensated Poisson random measure independent of B(t), where ν is the Lévy measure of the Lévy process η with jump measure N such that ∫ z² ν(dz) < ∞. {ℱ_t} is the filtration generated by B(s) and N(ds, dz) for s ≤ t (as usual augmented with all the P-null sets). We refer to [8] for more information about Lévy processes.
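As an illustration of this setup, the following sketch simulates one Euler path of a simple Lévy process of the form η(t) = σB(t) plus compensated Poisson jumps. All parameter values (sigma, jump_rate, jump_scale) are illustrative assumptions, not quantities fixed by the paper; the jump sizes are chosen mean-zero so the compensator of the jump part vanishes.

```python
import numpy as np

def simulate_levy_path(T=1.0, n=1000, sigma=1.0, jump_rate=2.0,
                       jump_scale=0.5, seed=0):
    """One Euler path of eta(t) = sigma*B(t) + Poisson-arriving jumps.
    Jump sizes are N(0, jump_scale^2), so the mean jump is zero and
    the compensated jump part needs no extra drift correction."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = rng.normal(0.0, np.sqrt(dt), size=n)      # Brownian increments
    dN = rng.poisson(jump_rate * dt, size=n)       # jump counts per step
    dJ = np.array([rng.normal(0.0, jump_scale, k).sum() for k in dN])
    d_eta = sigma * dB + dJ
    return np.concatenate(([0.0], np.cumsum(d_eta)))

path = simulate_levy_path()
```

The returned array holds η on the grid t = 0, T/n, …, T, starting from η(0) = 0.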

Suppose that we are given a subfiltration ℰ_t ⊆ ℱ_t representing the information available to the controller at time t, 0 ≤ t ≤ T. We remark that the partial information classical and impulse controls problem is different from the classical and impulse controls problem for delay systems, where the state is described by the solution of a stochastic differential delay equation (see, e.g., [20]).

Let τ₁ ≤ τ₂ ≤ ⋯ be a given sequence of increasing ℰ_t-stopping times such that τ_j → ∞ a.s. as j → ∞. At each τ_j we are free to intervene and give the system an impulse ζ_j, where ζ_j is an ℰ_{τ_j}-measurable random variable. We define the impulse process ζ(·) by ζ(t) = Σ_{j: τ_j ≤ t} ζ_j. It is worth noting that the assumption τ_j → ∞ implies that at most finitely many impulses may occur on [0, T].
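The impulse process defined above is a pure step function of the interventions. A minimal sketch, with the intervention times and impulse sizes taken as given deterministic numbers for illustration:

```python
import numpy as np

def impulse_process(t_grid, tau, zeta):
    """Evaluate zeta(t) = sum of zeta_j over all tau_j <= t on a time
    grid, given increasing intervention times tau and impulse sizes zeta."""
    tau = np.asarray(tau, dtype=float)
    zeta = np.asarray(zeta, dtype=float)
    return np.array([zeta[tau <= t].sum() for t in t_grid])

grid = [0.0, 0.25, 0.5, 0.75, 1.0]
xi = impulse_process(grid, tau=[0.3, 0.6], zeta=[1.0, -0.5])
# xi jumps by 1.0 at t = 0.3 and by -0.5 at t = 0.6
```

With only two interventions on [0, 1], the path has exactly two jumps, matching the finitely-many-impulses observation above.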

Now we consider the forward-backward systems involving classical and impulse controls. Given T > 0 and an initial value x₀, let b, σ, γ, g, f, and h be measurable mappings, and let U be a nonempty convex subset of ℝ. Then the forward-backward system is described by a pair of forward-backward SDEs, the system (2), in the unknown processes X(t), Y(t), Z(t), and K(t, z). The result of giving the impulse ζ_j is that the state jumps from X(τ_j⁻) to X(τ_j) = X(τ_j⁻) + ζ_j. We call the pair (u, ζ) classical and impulse controls.
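The forward component of such a system can be discretized with an Euler-Maruyama scheme in which each impulse is applied on top of the diffusion step. This is a sketch under simplifying assumptions: the drift b and volatility σ are user-supplied functions, and the jump integral against the compensated Poisson measure is omitted for brevity.

```python
import numpy as np

def euler_forward_with_impulses(x0, b, sigma, u, tau, zeta,
                                T=1.0, n=1000, seed=0):
    """Euler-Maruyama scheme for a controlled forward diffusion
        dX(t) = b(t, X, u(t)) dt + sigma(t, X, u(t)) dB(t),
    with an additional jump X(tau_j) = X(tau_j-) + zeta_j at each
    intervention time tau_j."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t, x, k = 0.0, float(x0), 0
    xs = [x]
    for _ in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        x += b(t, x, u(t)) * dt + sigma(t, x, u(t)) * dB
        t += dt
        while k < len(tau) and tau[k] <= t:   # impulses in (t-dt, t]
            x += zeta[k]
            k += 1
        xs.append(x)
    return np.array(xs)

# With zero drift and volatility the path is a pure step function of the impulses.
flat = euler_forward_with_impulses(1.0, lambda t, x, v: 0.0,
                                   lambda t, x, v: 0.0, lambda t: 0.0,
                                   tau=[0.5], zeta=[2.0])
```

Starting from X(0) = 1 with a single impulse of size 2 at τ₁ = 0.5, the degenerate path ends at X(T) = 3.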

There are two different kinds of jumps in the system (2). One is the jump of X stemming from the random measure Ñ, denoted by Δ_N X(t). The other is the jump caused by the impulse ζ, given by Δ_ζ X(τ_j) = ζ_j.

Assumption 1. The two kinds of jumps do not occur simultaneously; that is, a jump of the random measure Ñ never occurs at an intervention time τ_j, j = 1, 2, ….

Let 𝒜₁ denote a given family of controls, contained in the set of ℰ_t-predictable controls u(·) such that the system (2) has a unique strong solution. We denote by ℐ the class of impulse processes ζ such that each ζ_j is an ℝ-valued ℰ_{τ_j}-measurable random variable, and we let 𝒜₂ ⊆ ℐ be the class of impulse processes for which (2) has a unique strong solution. We call 𝒜 = 𝒜₁ × 𝒜₂ the admissible control set.

Suppose that we are given a performance functional J(u, ζ), where E denotes expectation with respect to P and the component functions are given functions satisfying suitable integrability conditions. Then the classical and impulse controls problem (7) is to find the value function Φ and optimal classical and impulse controls (u*, ζ*) ∈ 𝒜 such that Φ = sup_{(u,ζ)∈𝒜} J(u, ζ) = J(u*, ζ*).

3. Maximum Principle for Partial Information Classical and Impulse Controls Problems

In this section, we derive a maximum principle for the optimal control problem (7). We give necessary and sufficient conditions for its local critical points (u*, ζ*).

Firstly, we make the following assumptions.

Assumption 2. (1) For all t₀ ∈ [0, T] and all bounded ℰ_{t₀}-measurable random variables α, the control β(t) := α·1_{[t₀,T]}(t), t ∈ [0, T], belongs to 𝒜₁.

(2) For all u, β ∈ 𝒜₁ with β bounded, there exists δ > 0 such that the control u + sβ belongs to 𝒜₁ for all s ∈ (−δ, δ).

Next we give the definition of the Hamiltonian process.

Definition 3 (see [19]). We define the Hamiltonian process H by (11), where H is Fréchet differentiable in the variables y, z, and k and ∇_k H denotes the Fréchet derivative of H with respect to k; the adjoint processes λ(t), p(t), q(t), and r(t, z) are given by a pair of forward-backward SDEs as follows.

(i) Forward system (12) in the unknown process λ(t).

(ii) Backward system (13) in the unknown processes p(t), q(t), and r(t, z), t ∈ [0, T].

For the sake of simplicity, we use shorthand notation in the following: we write H(t) for the Hamiltonian evaluated along the state and adjoint processes at time t, and similarly for the coefficients of (2) and their partial derivatives.

Theorem 4 (maximum principle). Let (u, ζ) ∈ 𝒜 with corresponding solutions X(t), (Y(t), Z(t), K(t, z)), λ(t), and (p(t), q(t), r(t, z)) of (2), (12), and (13). Assume that for all (u, ζ) ∈ 𝒜 the required growth conditions hold. Then the following statements are equivalent.

(1) (u, ζ) is a critical point of J(u, ζ), in the sense that the directional derivative of J at (u, ζ) vanishes for all bounded admissible perturbations.

(2) The first-order conditions (17) and (18) hold: (17) for a.a. t ∈ [0, T], and (18) at each stopping time τ_j, j = 1, 2, ….

Proof. Define the auxiliary quantities needed below; then we have the identity (21).

Firstly, we prove (1) ⇒ (2). Assume that (1) holds. By the Itô formula, we get (22). Now consider (23); applying the Itô formula again, we get (24). By substituting (22), (23), and (24) into (21), we obtain (25). By the definition of the Hamiltonian H, we get (26). Hence (25) simplifies to (27) for all bounded β ∈ 𝒜₁. It is obvious that the impulse part is independent of β, so we obtain from (27) that (28) holds for all bounded β ∈ 𝒜₁ and all ζ ∈ 𝒜₂.

Now we prove that (17) holds for all t. We know that (28) holds for all bounded β ∈ 𝒜₁; in particular, it holds for all bounded β of the form β(t) = α·1_{[t₀,T]}(t) for a fixed t₀ ∈ [0, T], where α is a bounded ℰ_{t₀}-measurable random variable. Then we obtain (29), which holds for all such α. As a result, we conclude that (17) holds. Moreover, since (29) holds for all bounded ℰ_{τ_j}-measurable random variables, we conclude that (18) holds. Therefore, (1) implies (2).

Conversely, each bounded β ∈ 𝒜₁ can be approximated by linear combinations of controls of the above form, so we prove (2) ⇒ (1) by reversing the above argument.

Remark 5. Let the Hamiltonian and the utility functions be concave. Then the local critical point (u*, ζ*) obtained from Theorem 4 is also a global optimum for the control problem (7).

Remark 6. Let ℰ_t = ℱ_t (full information) and remove the jump terms, so that the system is driven by Brownian motion alone. Then our maximum principle (Theorem 4) coincides with the maximum principle (Theorem 3.1) in [11].

4. Application

Example 7 (portfolio optimization problem). In a financial market, we are given a subfiltration ℰ_t ⊆ ℱ_t representing the information available to the trader at time t. Let ζ(t) = Σ_{j: τ_j ≤ t} ζ_j, t ∈ [0, T], be a piecewise consumption process (see, e.g., [11]), where τ₁ ≤ τ₂ ≤ ⋯ is a fixed sequence of increasing ℰ_t-stopping times and each ζ_j is an ℰ_{τ_j}-measurable random variable. Then the wealth process X(t) corresponding to the portfolio π(t) is given by a linear controlled SDE driven by B and Ñ, where the market coefficients and π are ℰ_t-predictable processes satisfying suitable integrability conditions.

Endowed with initial wealth x₀, an investor wants to find a portfolio strategy π and a consumption strategy ζ minimizing an expected functional which is composed of three parts: the first part is the total utility of the consumption; the second part represents the risk of the terminal wealth X(T), expressed through Y(0), the value at time 0 of the solution of a backward stochastic differential equation ([19]); and the third part is the utility derived from the consumption process ζ. More precisely, for any admissible control (π, ζ), the utility functional J(π, ζ) is defined accordingly, where E denotes the expectation with respect to the probability measure P. Therefore, the control problem (38) is to find π* and ζ* attaining the optimal value of J.
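The wealth dynamics with piecewise consumption can be sketched numerically. The drift and volatility form below (a geometric diffusion with risk-free rate r, stock drift mu, volatility sigma, and portfolio fraction pi) and all parameter values are illustrative assumptions for the sketch, not quantities taken from the paper's equations; the jump part of the market is omitted.

```python
import numpy as np

def wealth_path(x0, pi, r, mu, sigma, tau, zeta, T=1.0, n=1000, seed=0):
    """Between consumption dates the wealth follows a controlled geometric
    diffusion with constant portfolio fraction pi; at each stopping time
    tau_j the wealth is reduced by the consumption amount zeta_j."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t, x, k = 0.0, float(x0), 0
    xs = [x]
    for _ in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        x += x * ((r + pi * (mu - r)) * dt + pi * sigma * dB)
        t += dt
        while k < len(tau) and tau[k] <= t:
            x -= zeta[k]                 # piecewise consumption impulse
            k += 1
        xs.append(x)
    return np.array(xs)

# With pi = 0 and r = 0, only the consumption impulses move the wealth.
w = wealth_path(10.0, pi=0.0, r=0.0, mu=0.05, sigma=0.2,
                tau=[0.25, 0.75], zeta=[1.0, 2.0])
```

In the degenerate case shown, wealth starts at 10, drops by 1 at τ₁ = 0.25 and by 2 at τ₂ = 0.75, and ends at 7.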

The control problem (38) is a classical and impulse controls problem for a forward-backward system driven by Lévy processes under partial information ℰ_t. Next we solve (38) by Theorem 4. With the notation of the previous section, we identify the coefficients of Example 7. Then by (11) the Hamiltonian is given explicitly, where the adjoint processes p, q, and r satisfy the backward system (41) and λ is given by (12); that is, λ satisfies (42). We can easily obtain the solution of (42) as (43).

If (π*, ζ*) is a local critical point with corresponding state processes, then, by the sufficient and necessary optimality condition (17) in Theorem 4, we obtain the first-order condition for π*. Since π* is ℰ_t-adapted, taking conditional expectation with respect to ℰ_t yields the explicit expression for π* in terms of p, q, and r, which are given by (41).

On the other hand, by the sufficient and necessary optimality condition (18) in Theorem 4, we obtain the condition on the impulses; that is, for each j, we have the first-order condition for ζ_j*. Since ζ_j* is an ℰ_{τ_j}-measurable random variable, we obtain its explicit expression (48), where λ is given by (43) and the adjoint processes by (41). Consequently, we summarize the above results in the following theorem.

Theorem 8. Let p, q, and r be the solutions of (41) and let λ be the solution of (43). Then the pair (π*, ζ*), with ζ* given by (48), is the local critical point of the classical and impulse controls problem (38).

5. Conclusion

We consider the partial information classical and impulse controls problem of forward-backward systems driven by Lévy processes. The control variable consists of two components: the classical stochastic control and the impulse control. Because of the non-Markovian nature of the partial information, dynamic programming principle cannot be used to solve partial information control problems. As a result, we derive a maximum principle for this partial information problem. Because the concavity conditions of the utility functions and the Hamiltonian process may not hold in many applications, we give the sufficient and necessary optimality conditions for the local critical points of the control problem. To illustrate the theoretical results, we use the maximum principle to solve a portfolio optimization problem with piecewise consumption processes and give its explicit solutions.

In this paper, we assume that the two different kinds of jumps in our system do not occur at the same time (Assumption 1). This assumption makes the problem easier to analyze; however, it may fail in many applications. Without it, more care is required to distinguish between the two kinds of jumps. This will be explored in our subsequent work.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grant 11171050.

References

  1. G. Mundaca and B. Øksendal, “Optimal stochastic intervention control with application to the exchange rate,” Journal of Mathematical Economics, vol. 29, no. 2, pp. 225–243, 1998.
  2. A. Cadenillas and F. Zapatero, “Classical and impulse stochastic control of the exchange rate using interest rates and reserves,” Mathematical Finance, vol. 10, no. 2, pp. 141–156, 2000.
  3. R. R. Lumley and M. Zervos, “A model for investments in the natural resource industry with switching costs,” Mathematics of Operations Research, vol. 26, no. 4, pp. 637–653, 2001.
  4. H. Meng and T. K. Siu, “Optimal mixed impulse-equity insurance control problem with reinsurance,” SIAM Journal on Control and Optimization, vol. 49, no. 1, pp. 254–279, 2011.
  5. H. Meng and T. K. Siu, “On optimal reinsurance, dividend and reinvestment strategies,” Economic Modelling, vol. 28, no. 1-2, pp. 211–218, 2011.
  6. F. Zhang, “Stochastic differential games involving impulse controls,” ESAIM: Control, Optimisation and Calculus of Variations, vol. 17, no. 3, pp. 749–760, 2011.
  7. L. G. Wu, X. J. Su, and P. Shi, “Output feedback control of Markovian jump repeated scalar nonlinear systems,” IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 199–204, 2014.
  8. B. Øksendal and A. Sulem, Applied Stochastic Control of Jump Diffusions, Springer, Berlin, Germany, 2nd edition, 2007.
  9. R. C. Seydel, “Existence and uniqueness of viscosity solutions for QVI associated with impulse control of jump-diffusions,” Stochastic Processes and Their Applications, vol. 119, no. 10, pp. 3719–3748, 2009.
  10. Y. Wang, A. Song, C. Zheng, and E. Feng, “Nonzero-sum stochastic differential game between controller and stopper for jump diffusions,” Abstract and Applied Analysis, vol. 2013, Article ID 761306, 7 pages, 2013.
  11. Z. Wu and F. Zhang, “Stochastic maximum principle for optimal control problems of forward-backward systems involving impulse controls,” IEEE Transactions on Automatic Control, vol. 56, no. 6, pp. 1401–1406, 2011.
  12. Z. Wu and F. Zhang, “Maximum principle for stochastic recursive optimal control problems involving impulse controls,” Abstract and Applied Analysis, vol. 2012, Article ID 709682, 16 pages, 2012.
  13. R. Yang, Z. Zhang, and P. Shi, “Exponential stability on stochastic neural networks with discrete interval and distributed delays,” IEEE Transactions on Neural Networks, vol. 21, no. 1, pp. 169–175, 2010.
  14. R. Yang, H. Gao, and P. Shi, “Delay-dependent robust H∞ control for uncertain stochastic time-delay systems,” International Journal of Robust and Nonlinear Control, vol. 20, no. 16, pp. 1852–1865, 2010.
  15. X. Gao, K. L. Teo, and G. Duan, “An optimal control approach to robust control of nonlinear spacecraft rendezvous system with θ-D technique,” International Journal of Innovative Computing, Information and Control, vol. 9, no. 5, pp. 2099–2110, 2013.
  16. X. Su, P. Shi, L. Wu, and Y. D. Song, “A novel control design on discrete-time Takagi–Sugeno fuzzy systems with time-varying delays,” IEEE Transactions on Fuzzy Systems, vol. 21, no. 4, pp. 655–671, 2013.
  17. F. Baghery and B. Øksendal, “A maximum principle for stochastic control with partial information,” Stochastic Analysis and Applications, vol. 25, no. 3, pp. 705–717, 2007.
  18. T. T. K. An and B. Øksendal, “Maximum principle for stochastic differential games with partial information,” Journal of Optimization Theory and Applications, vol. 139, no. 3, pp. 463–483, 2008.
  19. B. Øksendal and A. Sulem, “Maximum principles for optimal control of forward-backward stochastic differential equations with jumps,” SIAM Journal on Control and Optimization, vol. 48, no. 5, pp. 2945–2976, 2009.
  20. Z. Yu, “The stochastic maximum principle for optimal control problems of delay systems involving continuous and impulse controls,” Automatica, vol. 48, no. 10, pp. 2420–2432, 2012.