Mathematical Problems in Engineering
Volume 2014, Article ID 484732, 8 pages
http://dx.doi.org/10.1155/2014/484732
Research Article

Feedback Stabilization for a Class of Nonlinear Stochastic Systems with State- and Control-Dependent Noise

1College of Information and Control Engineering, China University of Petroleum (East China), Qingdao 266580, China
2College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, China

Received 8 August 2014; Accepted 23 September 2014; Published 5 November 2014

Academic Editor: Ramachandran Raja

Copyright © 2014 Yu-Hong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper mainly studies the state feedback stabilizability of a class of nonlinear stochastic systems with state- and control-dependent noise. Some sufficient conditions on local and global state feedback stabilizations are given in linear matrix inequalities (LMIs) and generalized algebraic Riccati equations (GAREs). Some obtained results improve the previous work.

1. Introduction

Stability and stabilization are two important topics in modern control theory and are among the first issues considered in system analysis and synthesis. It is well known that stochastic control has become a very popular research area, with applications to mathematical finance [1], quantum systems [2], and so forth. Stochastic stability and stabilization have been studied by many researchers; we refer the reader to the celebrated book [1] for discussions of various notions of stability. A series of works on robust exponential stability can be found in [3–6], while pth moment stability was discussed in [7, 8]. In particular, asymptotic mean square stability has been studied for a long time; see [9–13]. The stabilizability of linear stochastic control systems has been investigated in [9, 10, 12–17]. In recent years, the stabilization of nonlinear stochastic systems has attracted great attention; the methods appearing in the study of this topic can be summarized as follows: the GARE-based method [9, 12, 18, 19]; the control Lyapunov function method [1, 3–6, 20]; the passive system method [21]; and the spectral analysis method based on generalized Lyapunov operators [13, 16, 17]. We refer the reader to [19] for the stabilization of general nonlinear stochastic systems, where a class of new Hamilton-Jacobi inequalities was presented.

It can be seen that most of the previous works deal with systems subject only to state-dependent noise. In the present paper, we deal with a class of linearized systems with both state- and control-dependent noise. Some sufficient conditions on local state feedback stabilization are given via LMIs and GAREs, respectively, which not only generalize but also improve the results of [18]. We also investigate global state feedback stabilization, and a sufficient condition is given in terms of LMIs. A numerical example verifies the effectiveness of our results.

2. Problem Setting

Consider the following stochastic control system governed by Itô’s differential equation: In the above, is called the system state and the control input. is the standard Wiener process defined on the probability space with its natural filtration . Without loss of generality, we can suppose are one-dimensional. Assume is an adapted and measurable process with respect to , , ; that is, is an equilibrium point of (1). Under very general conditions on , , and , the stochastic control system (1) has a unique strong solution for any and any initial state ; see [1, 22]. We first introduce the following definition.
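The displayed system equation did not survive extraction. As an illustration only (the drift and diffusion symbols below are assumptions, not a reconstruction of the paper’s display (1)), nonlinear stochastic systems of this kind are commonly written as

```latex
dx(t) = \bigl[f(x(t)) + g(x(t))u(t)\bigr]\,dt
      + \bigl[h(x(t)) + l(x(t))u(t)\bigr]\,dw(t),
\qquad x(0) = x_0 ,
```

with $f(0)=0$ and $h(0)=0$, so that $x=0$ is an equilibrium point of the uncontrolled system.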

Definition 1. We say that the equilibrium point of system (1) is locally asymptotically stabilizable via a linear constant state feedback if the solution of the closed-loop system is asymptotically stable in probability [1]; that is, for any and where is a constant matrix of suitable dimension. In addition, if the solution of the closed-loop system (2) is asymptotically stable in the large (see, e.g., [1]), that is, if both (3) and hold, then we say that the equilibrium point of system (1) is globally asymptotically stabilizable via a linear state feedback .

It is well known [1] that if there exist a neighborhood of the origin and a Lyapunov function , defined in a domain , which has an infinitesimal upper limit, that is, satisfying then the solution of system (2) is asymptotically stable in probability. If also admits the following property, then the solution of system (2) is asymptotically stable in the large. Here is the so-called infinitesimal generator of (2).
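For reference, the infinitesimal generator invoked here is the standard one for Itô diffusions. For a closed-loop system of the form $dx = F(x)\,dt + G(x)\,dw$ with a one-dimensional Wiener process $w$ and a $C^2$ function $V$, it reads (a textbook formula [1]; $F$ and $G$ are generic placeholders, not the paper’s notation):

```latex
\mathcal{L}V(x) \;=\; \frac{\partial V(x)}{\partial x}\,F(x)
  \;+\; \frac{1}{2}\,\operatorname{tr}\!\left[\, G(x)^{\top}\,
    \frac{\partial^{2} V(x)}{\partial x^{2}}\, G(x) \,\right].
```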

Now, suppose and can be linearized as respectively; then the linearized stochastic system of (1) reads as follows, where , , , , , are constant matrices. In what follows, we discuss the stabilization of (10).
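The constant matrices themselves were lost in extraction. A linearized form consistent with “state- and control-dependent noise,” written here with hypothetical symbols $A$, $B$, $C$, $D$ (the paper’s own labels are unknown), would be

```latex
dx(t) = \bigl[A\,x(t) + B\,u(t)\bigr]\,dt
      + \bigl[C\,x(t) + D\,u(t)\bigr]\,dw(t).
```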

3. Locally Asymptotic Stabilization

3.1. Main Results

In this section, we obtain two theorems on locally asymptotic stabilization of (10) as follows.

Theorem 2. Suppose and the following LMI has a solution , ; then the equilibrium point of system (10) is locally asymptotically stabilizable in probability with the control law
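LMI (12) itself did not survive extraction. Judging from the Schur-complement step and the substitution used in the proof below, stabilization LMIs for systems with state- and control-dependent noise typically take the following shape (the variables $X \succ 0$ and $Y$, and the matrices $A, B, C, D$, are assumptions for illustration, not the paper’s exact display):

```latex
\begin{bmatrix}
A X + X A^{\top} + B Y + Y^{\top} B^{\top} & (C X + D Y)^{\top} \\
C X + D Y & -X
\end{bmatrix} \prec 0,
\qquad \text{with feedback gain } K = Y X^{-1}.
```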

The following theorem gives another characterization of locally asymptotic stabilization in probability, via a GARE.

Theorem 3. Under condition (11), if for any , , the GARE has a positive solution , then system (10) is locally asymptotically stabilizable in probability with the control law

To prove our main results, we first consider the linear constant coefficient stochastic control system System (16) is said to be asymptotically mean square stable if, for any ,  .

Lemma 4 (see [23]). System (16) is asymptotically mean square stable if and only if the following Lyapunov-type inequality has at least one solution .

Lemma 5 (see [13]). System (16) is asymptotically mean square stable if and only if its dual system, is asymptotically mean square stable.
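Lemma 4 reduces asymptotic mean square stability to the solvability of a Lyapunov-type condition. As a numerical illustration only (the system data below are hypothetical; the equation $A^{\top}P + PA + C^{\top}PC = -Q$ is the standard generalized Lyapunov equation for a linear system with state-dependent noise $dx = Ax\,dt + Cx\,dw$), the criterion can be checked by vectorization:

```python
import numpy as np

def gen_lyapunov(A, C, Q):
    """Solve A'P + PA + C'PC = -Q via vectorization.

    Uses vec(A'P) = kron(I, A') vec(P), vec(PA) = kron(A', I) vec(P),
    and vec(C'PC) = kron(C', C') vec(P), with column-stacking vec.
    """
    n = A.shape[0]
    I = np.eye(n)
    L = np.kron(I, A.T) + np.kron(A.T, I) + np.kron(C.T, C.T)
    p = np.linalg.solve(L, -Q.reshape(-1, order="F"))
    return p.reshape((n, n), order="F")

def is_mean_square_stable(A, C):
    """Mean square stable iff the equation with Q = I yields P > 0."""
    P = gen_lyapunov(A, C, np.eye(A.shape[0]))
    P = (P + P.T) / 2  # symmetrize against round-off
    return bool(np.all(np.linalg.eigvalsh(P) > 0))

# Hypothetical data: a Hurwitz drift with mild multiplicative noise.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
C = np.array([[0.3, 0.0], [0.0, 0.2]])
print(is_mean_square_stable(A, C))  # True for this data
```

Solving with any fixed $Q \succ 0$ and testing $P \succ 0$ is equivalent to the inequality formulation for this class of systems.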

Proof of Theorem 2. By Schur’s complement, LMI (12) is equivalent to the following inequality having a pair of solutions , : Let ; then (19) becomes By Lemma 4, (20) implies that is asymptotically mean square stable, and hence, by Lemma 5, is also asymptotically mean square stable. Again by Lemma 4, there exists at least one solution satisfying Take the Lyapunov function , ; then, for system (10), Let then, by (23), . So By condition (11), for any , there exists such that, when , , . So If we take sufficiently small such that then (26) together with (27) gives for . Therefore, system (10) is locally asymptotically stabilizable in probability with the control law The proof of Theorem 2 is completed.
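The Schur-complement equivalence used at the start of the proof can be sanity-checked numerically: for a symmetric block matrix with blocks $S$, $R$, $T$ and $T \prec 0$, the whole matrix is negative definite iff $S - R^{\top}T^{-1}R \prec 0$. The sketch below uses arbitrary illustrative data, not the matrices of LMI (12):

```python
import numpy as np

def neg_definite(M):
    """True if the (symmetrized) matrix M is negative definite."""
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) < 0))

def schur_neg_definite(S, R, T):
    """Schur-complement test: [[S, R'], [R, T]] < 0 iff
    T < 0 and S - R' T^{-1} R < 0."""
    return neg_definite(T) and neg_definite(S - R.T @ np.linalg.solve(T, R))

rng = np.random.default_rng(0)
n = 3
S = -5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
S = (S + S.T) / 2
R = 0.5 * rng.standard_normal((n, n))
T = -np.eye(n)

M = np.block([[S, R.T], [R, T]])
# The two tests always agree, by Schur's complement.
print(neg_definite(M), schur_neg_definite(S, R, T))
```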

Remark 6. If there is a constant matrix of suitable dimension such that system (22) is asymptotically mean square stable, then the following control system is said to be stabilizable in the mean square sense [9, 12, 13].

Proof of Theorem 3. Note that if we let then GARE (14) can be written as Repeating the proof of Theorem 2 then establishes Theorem 3.

In fact, Theorems 2 and 3 are equivalent, as the following proposition shows.

Proposition 7. If, for some , , GARE (14) has a positive solution , then LMI (12) is feasible with at least a pair of solutions , ; conversely, if LMI (12) has a pair of solutions , , then, for any , , GARE (14) has a unique positive solution .

Proof. If, for some , GARE (14) has a positive solution , then from (33) together with Lemma 4, system (22) is asymptotically mean square stable. Accordingly, system (21) is also asymptotically mean square stable by Lemma 5. Again by Lemma 4, there exists such that Let ; then (34) becomes By Schur’s complement, and are also solutions of (12). Conversely, if (12) has a pair of solutions , , then, by the same argument as above, system (22) is asymptotically mean square stable, so (31) is stabilizable in the mean square sense. From [9, 13], for any , , GARE (14) has a unique positive solution .

Remark 8. Although Theorem 2 is equivalent to Theorem 3, Theorem 2 seems more convenient in actual use, because the feasibility of LMI (12) can easily be tested by existing convex optimization tools; see [10, 24]. However, we point out that if GARE (14) has a positive solution , then, by applying Theorem 10 of [9], must solve the following semidefinite programming problem: subject to The semidefinite programming problem (36)-(37), like LMI (12), can also be handled by convex optimization tools [10, 24].

3.2. Comparison with the Existing Results

In (10), if we let , for , then the linearized system of (1) becomes By means of the GARE-based method, the following result was obtained in [18].

Theorem 9. Suppose that, for any real matrix , there exists a constant such that Moreover, suppose , ; is controllable; and is observable with . Then system (38) is locally asymptotically stabilizable in probability with the control law where is the unique solution of the GARE Based on Theorem 9, we give the following remarks.

Remark 10. Theorem 9 is not convenient to use in practice, because condition (39) is difficult to verify for all real nonnegative symmetric matrices.

Remark 11. Checking the proof of Theorem 9 in [18], we find that Theorem 9 of [18] actually requires the smallest eigenvalue of to be larger than zero, that is, ; hence is certainly observable.

GARE (41) is a special case of (14). We point out that (39) and the controllability of are only sufficient, not necessary, conditions for the existence of a positive solution of GARE (41) with , ; see [25] and the following counterexample.

Example 12. In GARE (41), we set , , , and In this case, GARE (41) reduces to It is easy to verify that is stabilizable in the mean square sense. By [9, 13], (43) must have a unique positive definite solution . However, (39) is not satisfied, as can be seen by setting In view of Proposition 7, Theorem 2 not only has a computational advantage but also generalizes and improves Theorem 9 of [18].

Remark 13. In general, feedback stabilizing control laws are not unique; for example, in Theorem 9, besides , is another local feedback stabilizing control law for system (38).

4. Globally Asymptotic Stabilization

Theorem 14. Suppose there exists a scalar such that, for any and , and the following LMI has solutions , ; then the equilibrium point of system (10) is globally asymptotically stabilizable with the control law

Proof. As in the proof of Theorem 2, by Schur’s complement, (47) is equivalent to the existence of , such that Let ; then (49) implies that there exists a solution to Again take the Lyapunov function , ; then satisfies (8) and (24). It is well known that By (46), we conclude So Similarly, Repeating the same procedure as in the proof of Theorem 2, we can show that for all . This completes the proof.

Remark 15. Obviously, (11) and (46) do not imply each other, which motivates us to search for less conservative conditions in the future.

5. Numerical Example

In this section, we present the following numerical example to illustrate the effectiveness of our main results.

Example 1. Consider the following two-dimensional nonlinear stochastic system: with Obviously, and satisfy condition (11). According to Theorem 2, a feasible solution is derived by solving LMI (12): Therefore, the control gain matrix is
The state responses of the unforced system and the controlled system are shown in Figures 1 and 2, respectively. From Figure 2, it can be seen that the controlled system achieves stability under the proposed controller.
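The example’s matrices and gain did not survive extraction. Purely as an illustration of how such state responses can be reproduced (all data below are hypothetical, assuming a linearized model $dx = (Ax+Bu)\,dt + (Cx+Du)\,dw$ under the feedback $u = Kx$), an Euler-Maruyama simulation looks like:

```python
import numpy as np

def simulate(A, B, C, D, K, x0, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of dx = (A+BK)x dt + (C+DK)x dw."""
    rng = np.random.default_rng(seed)
    Ac, Cc = A + B @ K, C + D @ K
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(round(T / dt)):
        dw = np.sqrt(dt) * rng.standard_normal()  # scalar Wiener increment
        x = x + Ac @ x * dt + Cc @ x * dw
        traj.append(x.copy())
    return np.array(traj)

# Hypothetical data: an unstable drift stabilized by a simple gain.
A = np.array([[0.5, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.1, 0.0], [0.0, 0.1]])
D = np.array([[0.0], [0.1]])
K = np.array([[-3.0, -3.0]])
traj = simulate(A, B, C, D, K, x0=[1.0, -1.0])
print(np.linalg.norm(traj[0]), np.linalg.norm(traj[-1]))
```

For this data the closed-loop drift matrix A + BK is Hurwitz, so a sample path started away from the origin decays toward zero, mimicking the behavior reported in Figure 2.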

Figure 1: The state responses of the unforced system.
Figure 2: The state responses of the controlled system.

6. Conclusion

In this paper, we have studied the feedback stabilizability of nonlinear stochastic systems with state- and control-dependent noise. Some sufficient conditions on stabilization have been derived in terms of LMIs and GAREs. A numerical example is presented to show the validity of the obtained results.

Notations

The set of all symmetric matrices
Transpose of a matrix
Positive semidefinite (positive definite) symmetric matrix
Identity matrix
Trace of a square matrix
Class of functions twice continuously differentiable with respect to and once continuously differentiable with respect to , except possibly at the point .

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (no. 61174078), the Research Fund for the Taishan Scholar Project of Shandong Province of China and SDUST Research Fund (no. 2011KYTD105), and State Key Laboratory of Alternate Electrical Power System with Renewable Energy Sources (Grant no. LAPS13018).

References

  1. X. Mao, Stochastic Differential Equations and Their Applications, Horwood Publishing Series in Mathematics & Application, Horwood Publishing, Chichester, UK, 1997.
  2. W. Zhang and B.-S. Chen, “Stochastic affine quadratic regulator with applications to tracking control of quantum systems,” Automatica, vol. 44, no. 11, pp. 2869–2875, 2008.
  3. Q. Zhu and J. Cao, “Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 3, pp. 467–479, 2012.
  4. Q. Zhu, F. Xi, and X. Li, “Robust exponential stability of stochastically nonlinear jump systems with mixed time delays,” Journal of Optimization Theory and Applications, vol. 154, no. 1, pp. 154–174, 2012.
  5. Q. Zhu and J. Cao, “Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 41, no. 2, pp. 341–353, 2011.
  6. Q. Zhu and J. Cao, “Robust exponential stability of Markovian jump impulsive stochastic Cohen-Grossberg neural networks with mixed time delays,” IEEE Transactions on Neural Networks, vol. 21, no. 8, pp. 1314–1325, 2010.
  7. Q. Zhu, “Asymptotic stability in the pth moment for stochastic differential equations with Lévy noise,” Journal of Mathematical Analysis and Applications, vol. 416, no. 1, pp. 126–142, 2014.
  8. Q. Zhu, “pth moment exponential stability of impulsive stochastic functional differential equations with Markovian switching,” Journal of the Franklin Institute, vol. 351, no. 7, pp. 3965–3986, 2014.
  9. M. A. Rami and X. Y. Zhou, “Linear matrix inequalities, Riccati equations, and indefinite stochastic linear quadratic controls,” IEEE Transactions on Automatic Control, vol. 45, no. 6, pp. 1131–1143, 2000.
  10. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
  11. A. Haghighi and S. M. Hosseini, “Analysis of asymptotic mean-square stability of a class of Runge-Kutta schemes for linear systems of stochastic differential equations,” Mathematics and Computers in Simulation, vol. 105, pp. 17–48, 2014.
  12. J. L. Willems and J. C. Willems, “Feedback stabilizability for stochastic systems with state and control dependent noise,” Automatica, vol. 12, no. 3, pp. 277–283, 1976.
  13. W. Zhang and B.-S. Chen, “On stabilizability and exact observability of stochastic systems with their applications,” Automatica, vol. 40, no. 1, pp. 87–94, 2004.
  14. E. K. Boukas, “Stabilization of stochastic singular nonlinear hybrid systems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 64, no. 2, pp. 217–228, 2006.
  15. S. Sathananthan, C. Beane, G. S. Ladde, and L. H. Keel, “Stabilization of stochastic systems under Markovian switching,” Nonlinear Analysis: Hybrid Systems, vol. 4, no. 4, pp. 804–817, 2010.
  16. W. Zhang and B.-S. Chen, “H-representation and applications to generalized Lyapunov equations and linear stochastic systems,” IEEE Transactions on Automatic Control, vol. 57, no. 12, pp. 3009–3022, 2012.
  17. W. Zhang and L. Xie, “Interval stability and stabilization of linear stochastic systems,” IEEE Transactions on Automatic Control, vol. 54, no. 4, pp. 810–815, 2009.
  18. Z. Y. Gao and N. U. Ahmed, “Feedback stabilizability of nonlinear stochastic systems with state-dependent noise,” International Journal of Control, vol. 45, no. 2, pp. 729–737, 1987.
  19. W. Zhang, B.-S. Chen, and Z. Yan, “Feedback stabilization for nonlinear affine stochastic systems,” International Journal of Innovative Computing, Information and Control, vol. 7, no. 9, pp. 5363–5375, 2011.
  20. P. Florchinger, “Feedback stabilization of affine in the control stochastic differential systems by the control Lyapunov function method,” SIAM Journal on Control and Optimization, vol. 35, no. 2, pp. 500–511, 1997.
  21. P. Florchinger, “A passive system approach to feedback stabilization of nonlinear control stochastic systems,” SIAM Journal on Control and Optimization, vol. 37, no. 6, pp. 1848–1864, 1999.
  22. B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, Springer, New York, NY, USA, 6th edition, 2010.
  23. G. Da Prato and J. Zabczyk, Stochastic Equations in Infinite Dimensions, vol. 44 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, UK, 1992.
  24. P. Gahinet, A. Nemirovski, A. J. Laub, and M. Chilali, LMI Control Toolbox, The MathWorks, Natick, Mass, USA, 1995.
  25. W. Zhang, “A study on positive solutions of generalized algebraic Riccati equation,” Acta Automatica Sinica, vol. 27, no. 1, pp. 125–130, 2001 (in Chinese).