Mathematical Problems in Engineering

Volume 2014, Article ID 948134, 10 pages

http://dx.doi.org/10.1155/2014/948134
Research Article

Nonlinear Stochastic H∞ Control with Markov Jumps and (x, u, v)-Dependent Noise: Finite and Infinite Horizon Cases

1College of Information and Control Engineering, China University of Petroleum (East China), Qingdao 266580, China

2College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, China

Received 8 April 2014; Accepted 1 June 2014; Published 18 June 2014

Academic Editor: Xuejun Xie

Copyright © 2014 Li Sheng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper is concerned with the H∞ control problem for nonlinear stochastic Markov jump systems with state-, control-, and external-disturbance-dependent noise. By means of inequality techniques and coupled Hamilton-Jacobi inequalities, both finite and infinite horizon H∞ control designs for such systems are developed. Two numerical examples are provided to illustrate the effectiveness of the proposed design method.

1. Introduction

H∞ control is one of the most important robust control approaches, which can efficiently attenuate the effect of exogenous disturbances [1, 2]. Since Hinrichsen and Pritchard introduced H∞ control to linear stochastic systems [3], nonlinear stochastic H∞ control and filtering problems have received considerable attention in both theory and practical applications [4–9]. In [4], nonlinear stochastic H∞ designs were first developed by solving a second-order nonlinear Hamilton-Jacobi inequality. The H∞ filtering problems for general nonlinear continuous-time and discrete-time stochastic systems were discussed in [6] and [7], respectively. In [8], the quantized H∞ control problem for a class of nonlinear stochastic time-delay network-based systems with probabilistic data missing was studied.

On the other hand, Itô stochastic systems with Markov jumps have attracted increasing attention due to their powerful modeling ability in many fields [10, 11]. For linear stochastic systems with Markov jumps, many important issues have been studied, such as stability and stabilization [12, 13], observability and detectability [14], optimal control [15], and H∞ control [16]. The control issues for nonlinear stochastic Markov jump systems (NSMJSs) have also been widely investigated. In [17], the notion of exponential dissipativity of NSMJSs was introduced and used to estimate the possible variations of the output feedback control. In [18], the stabilization of nonlinear Markov jump systems with partly unknown transition probabilities was studied via fuzzy control. In [19], the H∞ control problems of NSMJSs were studied, which extended the results of [4] to stochastic systems with Markovian jump parameters.

Most of the existing literature is concerned with stochastic Markov jump systems with state-dependent noise or with both state- and disturbance-dependent noise ((x, v)-dependent noise for short) [16, 19]. However, for most natural phenomena described by Itô stochastic systems, not only the state but also the control input or the external disturbance may be corrupted by noise. By introducing three coupled Hamilton-Jacobi equations (HJEs), the finite and infinite horizon H∞ control problems were solved for Itô stochastic systems with state-, control-, and disturbance-dependent noise ((x, u, v)-dependent noise for short) in [20] and [21], respectively. In [22], the finite/infinite horizon H∞ control of nonlinear stochastic systems with (x, u, v)-dependent noise was solved by means of a single Hamilton-Jacobi inequality (HJI) instead of three coupled HJEs. However, the H∞ control problems of nonlinear stochastic systems with Markov jumps and (x, u, v)-dependent noise have not yet been tackled and deserve further research.

In this paper, the finite and infinite horizon H∞ control problems are studied for nonlinear stochastic Markov jump systems with (x, u, v)-dependent noise. Firstly, a very useful elementary identity is proposed. Then, by using the completion-of-squares technique, a sufficient condition for finite/infinite horizon H∞ control of NSMJSs is presented based on a set of coupled HJIs. By means of linear matrix inequalities (LMIs), a sufficient condition for infinite horizon H∞ control of linear stochastic Markov jump systems is derived. Finally, two numerical examples are provided to show the effectiveness of the obtained results.

For convenience, we make use of the following notations throughout this paper. ℝⁿ is the n-dimensional Euclidean space. ℝ^(n×m) is the set of all real n × m matrices. A > 0 (A ≥ 0): A is a positive definite (positive semidefinite) symmetric matrix. A′ is the transpose of a matrix A. I is the identity matrix. |x| is the Euclidean norm of a vector x. L²_F([0, T]; ℝ^k) (resp., L²_F(ℝ₊; ℝ^k)) is the space of nonanticipative stochastic processes y(t) ∈ ℝ^k with respect to the increasing σ-algebras F_t (t ≥ 0) satisfying E∫₀^T |y(t)|² dt < ∞ (resp., E∫₀^∞ |y(t)|² dt < ∞).

2. Definitions and Preliminaries

Consider the following time-varying nonlinear stochastic Markov jump system with (x, u, v)-dependent noise: where x(t), u(t), v(t), and z(t) represent the system state, control input, exogenous input, and regulated output, respectively. W(t) is the one-dimensional standard Wiener process defined on the complete probability space (Ω, F, P), with the natural filtration F_t generated by W(s) and r(s) up to time t. The jumping process r(t) is a continuous-time discrete-state Markov process taking values in a finite set S = {1, 2, …, N}. The transition probabilities for the process are defined as P(r(t + Δ) = j | r(t) = i) = π_ij Δ + o(Δ) if i ≠ j and 1 + π_ii Δ + o(Δ) if i = j, where Δ > 0, π_ij ≥ 0 (i ≠ j) denotes the switching rate from mode i at time t to mode j at time t + Δ, and π_ii = −Σ_{j≠i} π_ij for all i ∈ S. In this paper, the processes W(t) and r(t) are assumed to be independent. For every i ∈ S, the coefficient functions of system (1) are Borel measurable functions of suitable dimensions, which guarantee that system (1) has a unique strong solution [23]. The finite horizon H∞ control for system (1) is defined as follows.

Definition 1. For a given γ > 0, the state feedback control u* is called a finite horizon H∞ control of system (1) if, for the zero initial state x(0) = 0 and any nonzero v ∈ L²_F([0, T]; ℝ^k), we have ‖z‖_[0,T] < γ‖v‖_[0,T], where 𝓛 is an operator associated with system (1) which is defined as

Consider the following time-invariant nonlinear stochastic Markov jump system with (x, u, v)-dependent noise: The infinite horizon H∞ control for system (5) is defined as follows.

Definition 2. For a given γ > 0, the control u* is called an infinite horizon H∞ control of system (5) if the following conditions hold. (i) For the zero initial state x(0) = 0 and any nonzero v ∈ L²_F(ℝ₊; ℝ^k), we have ‖z‖ < γ‖v‖, where 𝓛 is an operator associated with system (5) which is defined as (ii) System (5) is internally stable; that is, the following system is globally asymptotically stable in probability [10].
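For completeness, global asymptotic stability in probability in item (ii) is understood in the standard sense of [10]; writing x(t; x₀, i₀) for the solution of the unforced system, it comprises stability in probability together with almost sure global attractivity:

```latex
% Stability in probability:
\lim_{x_0 \to 0} \, P\Big( \sup_{t \ge 0} \lvert x(t; x_0, i_0)\rvert > \varepsilon \Big) = 0
\quad \text{for every } \varepsilon > 0,\ i_0 \in S;
% Global attractivity in probability:
P\Big( \lim_{t \to \infty} x(t; x_0, i_0) = 0 \Big) = 1
\quad \text{for all } x_0 \in \mathbb{R}^n,\ i_0 \in S.
```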

To give our main results, we need the following lemmas.

Lemma 3 (generalized Itô formula; see [10]). Let x(t) be a given ℝⁿ-valued, F_t-adapted process and let V(x, t, i) be sufficiently smooth in x and t for each i ∈ S. Then for given t ≥ 0 and i ∈ S, we have where
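In the standard form given in [10] (with generic symbol names), for dx(t) = f(x, t, r(t)) dt + g(x, t, r(t)) dW(t) with r(t) a Markov chain with generator (π_ij), the formula reads:

```latex
\mathbb{E}\,V\bigl(x(T), T, r(T)\bigr) - V(x_0, 0, i_0)
  = \mathbb{E}\int_0^T \mathcal{L}V\bigl(x(t), t, r(t)\bigr)\,dt,
\qquad
\mathcal{L}V(x, t, i)
  = \frac{\partial V}{\partial t}
  + \frac{\partial V'}{\partial x}\, f(x, t, i)
  + \frac{1}{2}\, g(x, t, i)'\, \frac{\partial^2 V}{\partial x^2}\, g(x, t, i)
  + \sum_{j=1}^{N} \pi_{ij}\, V(x, t, j).
```

The last term, coupling the value functions across modes, is what produces the *coupled* HJIs in Theorems 5 and 7.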

Lemma 4. If Q is a symmetric matrix and Q⁻¹ exists, we have

Proof. This lemma is easily proved by the completion-of-squares technique, so the proof is omitted.
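The identity of Lemma 4 is not reproduced above; the standard completion-of-squares identity it refers to is, for a symmetric invertible Q and vectors x, b of compatible dimension:

```latex
x' Q x + 2\, b' x
  = \bigl(x + Q^{-1} b\bigr)'\, Q\, \bigl(x + Q^{-1} b\bigr) - b' Q^{-1} b .
```

In particular, when Q < 0 the left-hand side is maximized at x* = −Q⁻¹b with maximum value −b′Q⁻¹b, which is exactly how the worst-case disturbance and the optimizing control are extracted in the proofs of Theorems 5 and 7.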

3. Main Results

3.1. Finite Horizon Case

The following sufficient condition is presented for the finite horizon H∞ control of system (1). For convenience, denote in this subsection.

Theorem 5. Assume that there exists a set of nonnegative functions , , and for all nonzero , . If solves the following coupled HJIs: then the resulting state feedback controller is a finite horizon H∞ control of system (1).

Proof. For any and the initial state , , applying Lemma 3, we have where denotes and Since , , we have which means that Considering the above inequality and (14), we have where

Applying Lemma 4 to and , we have where

Substituting (20) into (18) and considering (12) yields Taking the zero initial state and considering the second item of (12), (22) leads to which means that the requirement in Definition 1 holds. The theorem is proved.

Remark 6. The proof of Theorem 5 is based on an elementary identity (11), which avoids using stochastic dissipative theory as done in [4, 5]. We believe that the identity (11) will have many other applications in system analysis and synthesis.

3.2. Infinite Horizon Case

In contrast to the finite horizon case, the infinite horizon H∞ control exhibits more complexity due to the additional requirement that the closed-loop system be internally stable. The following sufficient condition is derived for the infinite horizon H∞ control of system (5). In this subsection, denote .

Theorem 7. Assume that there exists a set of nonnegative functions , , and for all nonzero , . If solves the following coupled HJIs: then the resulting controller is an infinite horizon H∞ control of system (5).

Proof. Similar to the proof of Theorem 5, it is easy to show the disturbance attenuation property under condition (24). Next, we need to prove that system (8) is globally asymptotically stable in probability. Let 𝓛 be the infinitesimal generator of system (8), which is similar to the operator in Lemma 3; then where It can be checked that The following inequality is used in the calculation of (29):

Substituting (28) and (29) into (26) and considering (24) yields which implies that (8) is globally asymptotically stable in probability by [10]. The proof is completed.

Remark 8. The methods proposed in [19, 24] cannot be applied to the H∞ problem of NSMJSs with (x, u, v)-dependent noise, although they are suitable for NSMJSs with (x, v)-dependent noise. One reason for this is that the control input u and the disturbance v are no longer separable in the conditions when they enter the diffusion term simultaneously. In the proofs of Theorems 5 and 7, we resort to Lemma 4 to overcome this difficulty.

Remark 9. When there are no Markov jump parameters, system (1)/(5) reduces to a general nonlinear stochastic system with (x, u, v)-dependent noise, which was studied in [20, 21]. However, all the H∞ control design conditions for nonlinear stochastic systems in [20, 21] were given in terms of three coupled HJEs, which are difficult to solve. In this paper, the sufficient condition for H∞ control of nonlinear stochastic systems is derived by means of a single set of coupled HJIs, which is easier to verify than the three coupled HJEs in [20, 21].

From Theorem 7, the following nonlinear stochastic bounded real lemma can be derived for the Markov jump system:

Lemma 10. For a prescribed γ > 0, system (32) is internally stable and achieves the H∞ performance level γ, if there exists a set of nonnegative functions , , and for all nonzero , , satisfying the following coupled HJIs:

Proof. Letting , , and in Theorem 7, we obtain (33) easily.

Next, we present a sufficient condition for the following linear stochastic Markov jump system with (x, u, v)-dependent noise:

Corollary 11. System (34) is internally stable and achieves the H∞ performance level γ for a given γ > 0, if there exist matrices , satisfying the following LMIs: where Moreover, the state feedback gain matrices are given by .

Proof. Firstly, consider the following unforced linear stochastic Markov jump systems:

Let ; (33) in Lemma 10 can be written as follows:

By the Schur complement, (38) is equivalent to which also implies (39). Moreover, (40) can be rewritten as which yields according to the Schur complement. Pre- and postmultiplying (42) by and denoting , we have where and are defined as in (35).

Consider the closed-loop system (34) with state feedback control , . Replacing by , , respectively, and setting in (43) yields (35). Therefore, the state feedback gain matrices can be obtained by .
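The two Schur complement steps in the proof above rest on the standard equivalence for block symmetric matrices (stated here in generic notation):

```latex
\begin{bmatrix} A & B \\ B' & C \end{bmatrix} < 0
\quad\Longleftrightarrow\quad
C < 0 \ \text{ and } \ A - B\, C^{-1} B' < 0 ,
```

which converts the quadratic (in the unknowns) matrix inequalities obtained from the HJIs into the linear form (35) that off-the-shelf LMI solvers can handle.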

Remark 12. Although the HJIs (12) in Theorem 5 or (24) in Theorem 7 can be solved by trial and error in some simple cases (see Example 13 in Section 4), they are difficult to deal with for high-dimensional systems. To avoid solving the HJIs directly, the Taylor series approach [25] or a fuzzy approach based on the Takagi-Sugeno model [24, 26] can be considered for designing the nonlinear stochastic H∞ controller.

4. Numerical Examples

In this section, two numerical examples are provided to illustrate the effectiveness of the developed results.

Example 13. Consider a one-dimensional two-mode time-invariant nonlinear stochastic Markov jump system with generator matrix , whose two subsystems are as follows: Set , , with and to be determined; then the HJIs (24) become For a given , the above inequalities have solutions and . According to Theorem 7, the infinite horizon H∞ controllers of system (44) are and .

Example 14. Consider a two-dimensional two-mode linear stochastic Markov jump system (34) with the following parameters: With the choice of , a possible solution of the LMIs (35) in Corollary 11 can be found by using the MATLAB LMI Control Toolbox: Then, the control gain matrices of system (34) are as follows:

Figure 1 shows the mode switching during the simulation, with the initial mode set to mode 1. The initial condition is chosen as and the exogenous input is . By means of the Euler-Maruyama method [27], the state responses of the unforced system (37) and the controlled system (34) are shown in Figures 2 and 3, respectively. From Figure 3, one can see that the controlled system (34) achieves stability and disturbance attenuation performance in the mean square sense under the proposed H∞ controller.
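The Euler-Maruyama discretization of [27], extended to jump systems by sampling the Markov chain on the same time grid, can be sketched as follows. The two-mode scalar system below is a hypothetical illustration only; its drift, noise intensity, and generator matrix are our own choices, not the parameters of Example 14.

```python
import numpy as np

def euler_maruyama_jump(f, g, Pi, x0, r0, T, dt, rng):
    """Euler-Maruyama scheme for a scalar Markov jump SDE
    dx = f(x, r) dt + g(x, r) dW, where r(t) is a continuous-time
    Markov chain with generator matrix Pi, sampled on the same grid
    using the first-order transition probabilities pi_ij * dt."""
    n = int(round(T / dt))
    x = np.empty(n + 1)
    r = np.empty(n + 1, dtype=int)
    x[0], r[0] = x0, r0
    for k in range(n):
        i = r[k]
        dW = rng.normal(0.0, np.sqrt(dt))          # Wiener increment
        x[k + 1] = x[k] + f(x[k], i) * dt + g(x[k], i) * dW
        p = Pi[i] * dt                             # jump probabilities over dt
        p[i] = 1.0 + Pi[i, i] * dt                 # probability of staying in mode i
        r[k + 1] = rng.choice(len(p), p=p)
    return x, r

# Hypothetical two-mode scalar system: mode-dependent stabilizing
# drift with state-multiplicative noise (illustrative only).
Pi = np.array([[-1.0, 1.0],
               [2.0, -2.0]])
f = lambda x, i: -1.0 * x if i == 0 else -2.0 * x
g = lambda x, i: 0.3 * x
x, r = euler_maruyama_jump(f, g, Pi, x0=1.0, r0=0, T=5.0, dt=0.001,
                           rng=np.random.default_rng(1))
```

Refining dt improves both the diffusion approximation and the first-order approximation of the chain's transition probabilities; for the small switching rates used here, dt = 0.001 keeps every row of the one-step transition matrix a valid probability vector.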

Figure 1: Result of the switching between modes.
Figure 2: The state responses of the unforced system (37).
Figure 3: The state responses of the controlled system (34).

5. Conclusions

In this paper, we have studied the H∞ control problem for NSMJSs with (x, u, v)-dependent noise. A sufficient condition for finite/infinite horizon H∞ control has been derived in terms of a set of coupled HJIs. Lemma 4 plays an essential role in the proofs of Theorems 5 and 7. The validity of the obtained results has been verified by two numerical examples.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by National Natural Science Foundation of China (nos. 61203053 and 61174078), China Postdoctoral Science Foundation (no. 2013M531635), Research Fund for the Doctoral Program of Higher Education of China (no. 20120133120014), Special Funds for Postdoctoral Innovative Projects of Shandong Province (no. 201203096), Fundamental Research Fund for the Central Universities (nos. 11CX04042A, 12CX02010A, and 14CX02093A), Research Fund for the Taishan Scholar Project of Shandong Province of China, and SDUST Research Fund (no. 2011KYTD105).

References

  1. G. Zames, “Feedback and optimal sensitivity: model reference transformations, multiplicative seminorms, and approximate inverses,” IEEE Transactions on Automatic Control, vol. 26, no. 2, pp. 301–320, 1981.
  2. I. R. Petersen, V. A. Ugrinovskii, and A. V. Savkin, Robust Control Design Using H∞ Methods, Springer, Berlin, Germany, 2000.
  3. D. Hinrichsen and A. J. Pritchard, “Stochastic H∞,” SIAM Journal on Control and Optimization, vol. 36, no. 5, pp. 1504–1538, 1998.
  4. W. Zhang and B.-S. Chen, “State feedback H∞ control for a class of nonlinear stochastic systems,” SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 1973–1991, 2006.
  5. N. Berman and U. Shaked, “H∞-like control for nonlinear stochastic systems,” Systems & Control Letters, vol. 55, no. 3, pp. 247–257, 2006.
  6. W. Zhang, B.-S. Chen, and C.-S. Tseng, “Robust H∞ filtering for nonlinear stochastic systems,” IEEE Transactions on Signal Processing, vol. 53, no. 2, pp. 589–598, 2005.
  7. B. Shen, Z. Wang, H. Shu, and G. Wei, “On nonlinear H∞ filtering for discrete-time stochastic systems with missing measurements,” IEEE Transactions on Automatic Control, vol. 53, no. 9, pp. 2170–2180, 2008.
  8. Z. Wang, B. Shen, H. Shu, and G. Wei, “Quantized H∞ control for nonlinear stochastic time-delay systems with missing measurements,” IEEE Transactions on Automatic Control, vol. 57, no. 6, pp. 1431–1444, 2012.
  9. B. Shen, S. X. Ding, and Z. Wang, “Finite-horizon H∞ fault estimation for linear discrete time-varying systems with delayed measurements,” Automatica, vol. 49, no. 1, pp. 293–296, 2013.
  10. X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, UK, 2006.
  11. V. Dragan, T. Morozan, and A.-M. Stoica, Mathematical Methods in Robust Control of Discrete-Time Linear Stochastic Systems, Springer, New York, NY, USA, 2010.
  12. Z.-Y. Li, B. Zhou, Y. Wang, and G.-R. Duan, “On eigenvalue sets and convergence rate of Itô stochastic systems with Markovian switching,” IEEE Transactions on Automatic Control, vol. 56, no. 5, pp. 1118–1124, 2011.
  13. L. Sheng, M. Gao, and W. Zhang, “Spectral characterisation for stability and stabilisation of linear stochastic systems with Markovian switching and its applications,” IET Control Theory & Applications, vol. 7, no. 5, pp. 730–737, 2013.
  14. L. Shen, J. Sun, and Q. Wu, “Observability and detectability of discrete-time stochastic systems with Markovian jump,” Systems & Control Letters, vol. 62, no. 1, pp. 37–42, 2013.
  15. O. L. V. Costa and A. de Oliveira, “Optimal mean-variance control for discrete-time linear systems with Markovian jumps and multiplicative noises,” Automatica, vol. 48, no. 2, pp. 304–315, 2012.
  16. L. Sheng, W. Zhang, and M. Gao, “Relationship between Nash equilibrium strategies and H2/H∞ control of stochastic Markov jump systems with multiplicative noise,” IEEE Transactions on Automatic Control, vol. 59, no. 10, 2014.
  17. P. V. Pakshin, “Exponential dissipativeness of the random-structure diffusion processes and problems of robust stabilization,” Automation and Remote Control, vol. 68, no. 10, pp. 1852–1870, 2007.
  18. L. Sheng and M. Gao, “Stabilization for Markovian jump nonlinear systems with partly unknown transition probabilities via fuzzy control,” Fuzzy Sets and Systems, vol. 161, no. 21, pp. 2780–2792, 2010.
  19. Z. Lin, Y. Lin, and W. Zhang, “A unified design for state and output feedback H∞ control of nonlinear stochastic Markovian jump systems with state and disturbance-dependent noise,” Automatica, vol. 45, no. 12, pp. 2955–2962, 2009.
  20. W. Zhang, H. Zhang, and B.-S. Chen, “Stochastic H2/H∞ control with (x,u,v)-dependent noise: finite horizon case,” Automatica, vol. 42, no. 11, pp. 1891–1898, 2006.
  21. W. Zhang and G. Feng, “Nonlinear stochastic H2/H∞ control with (x,u,v)-dependent noise: infinite horizon case,” IEEE Transactions on Automatic Control, vol. 53, no. 5, pp. 1323–1328, 2008.
  22. W. Zhang, B.-S. Chen, H. Tang, L. Sheng, and M. Gao, “Some remarks on general nonlinear stochastic H∞ control with state, control, and disturbance-dependent noise,” IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 237–242, 2014.
  23. V. Dragan, T. Morozan, and A.-M. Stoica, Mathematical Methods in Robust Control of Linear Stochastic Systems, Springer, New York, NY, USA, 2006.
  24. Z. Lin, Y. Lin, and W. Zhang, “H∞ stabilisation of non-linear stochastic active fault-tolerant control systems: fuzzy-interpolation approach,” IET Control Theory & Applications, vol. 4, no. 10, pp. 2003–2017, 2010.
  25. M. Abu-Khalaf, J. Huang, and F. L. Lewis, Nonlinear H2/H∞ Constrained Feedback Control, Springer, London, UK, 2006.
  26. L. Sheng, M. Gao, and W. Zhang, “Dissipative control for Markov jump non-linear stochastic systems based on T-S fuzzy model,” International Journal of Systems Science, vol. 45, no. 5, pp. 1213–1224, 2014.
  27. D. J. Higham, “An algorithmic introduction to numerical simulation of stochastic differential equations,” SIAM Review, vol. 43, no. 3, pp. 525–546, 2001.