Abstract and Applied Analysis
Volume 2013 (2013), Article ID 323806, 8 pages
http://dx.doi.org/10.1155/2013/323806
Research Article

Finite Time Inverse Optimal Stabilization for Stochastic Nonlinear Systems

1College of Mathematics, Physics, and Information Engineering, Zhejiang Normal University, Jinhua 321004, China
2Laboratory of Intelligent Control and Robotics, Shanghai University of Engineering Science, Shanghai 201620, China

Received 30 April 2013; Accepted 20 June 2013

Academic Editor: Ljubisa Kocinac

Copyright © 2013 Xiushan Cai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper deals with finite time inverse optimal stabilization for stochastic nonlinear systems. A concept of the stochastic finite time control Lyapunov function (SFT-CLF) is presented, and a control law that renders the closed-loop system finite time stable is obtained. Furthermore, a sufficient condition is developed for finite time inverse optimal stabilization in probability, and a control law is designed to ensure that the equilibrium of the closed-loop system is finite time inverse optimal stable. Finally, an example is given to illustrate the application of the theorems established in this paper.

1. Introduction

In many engineering fields, it is desirable that trajectories of a dynamical system converge to an equilibrium in finite time. To achieve convergence in finite time, one notices immediately that finite time differential equations cannot be Lipschitz at the origin. Since all solutions reach zero in finite time, solutions through zero are nonunique in backwards time. Haimo [1] pointed out that this violates the uniqueness condition for solutions of Lipschitz differential equations. Finite time stability was studied in [1–3]. Recently, finite time stability has been further extended to switched systems by Orlov [4], time-delay systems by Moulay et al. [5], and impulsive dynamical systems by Nersesov et al. [6]. The problem of finite time stabilization has been studied by Bhat and Bernstein [7] and Hong et al. [8]. The finite-time stabilization technique has been applied to tracking control of multiagent systems by Li et al. [9] and to attitude tracking control of spacecraft by Du et al. [10, 11]. Moulay and Perruquetti [12] studied finite time stabilization of a class of continuous systems using control Lyapunov functions (CLFs). The CLF was introduced by Artstein [13] and Sontag [14] and has made a tremendous impact on stabilization theory. In particular, Sontag's universal formula [15] has played an important role in control theory. Florchinger [16] proved that the feedback control law defined in [15] can globally asymptotically stabilize stochastic nonlinear systems. This result was extended to the case where both the drift and the controlled part are corrupted by noise in the work of Chabour and Oumoun [17].

After the success of finite time stability and stabilization theory for deterministic systems, how to extend it to the case of stochastic systems naturally became an important research area. Chen and Jiao [18–20] presented a new concept of finite time stability for stochastic nonlinear systems and proved a theorem concerning finite time stability. However, to the authors' knowledge, no work on finite time inverse optimal stabilization for stochastic systems has been done at the present stage.

In this paper, for general stochastic systems affine in the control and noise inputs, a concept of the stochastic finite time control Lyapunov function (SFT-CLF) is given. Next, a sufficient condition is developed for finite time stabilization in probability, and a control law is designed. Beyond finite time stabilization of stochastic systems, an important problem is how to further design a stabilizing controller that is also optimal with respect to a meaningful cost functional, that is, inverse optimal control. In this paper, we consider finite time inverse optimal controller design. This extends the inverse optimality result of Freeman and Kokotovic [21] to finite time inverse optimal controller design for stochastic nonlinear systems. Finally, the effectiveness of the proposed design technique is illustrated by an example.

2. System Description and Preliminaries

Consider the following stochastic nonlinear system, referred to as system (1), which is affine in the control and noise inputs: its variables are the state and the control input of the system, the noise is a multidimensional independent standard Wiener process, and the coefficient functions are continuous, with the drift and the noise coefficient vanishing at the origin so that the origin is an equilibrium of the unforced system.
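For orientation, a system that is affine in the control and noise inputs, as described in the Introduction, is commonly written in the following form; the symbols x, u, w, f, g_1, and g_2 used in this sketch are our own notational assumption and need not coincide with the paper's:

dx = \bigl( f(x) + g_1(x)\,u \bigr)\,dt + g_2(x)\,dw, \qquad f(0) = 0, \quad g_2(0) = 0,

where x is the state, u is the control input, and w is an independent standard Wiener process.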

A continuous, real-valued function is said to be positive definite if it vanishes at the origin and is strictly positive at every nonzero point; it is said to be radially unbounded if it tends to infinity as the norm of its argument tends to infinity.

For any given twice continuously differentiable function associated with the stochastic system (1), the infinitesimal generator is defined in the usual Itô sense.
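For a system of the affine form sketched above, the infinitesimal generator acting on a twice continuously differentiable function V is typically given by the following Itô-type expression (same assumed notation as in the sketch above):

\mathcal{L}V(x) = \frac{\partial V}{\partial x}\bigl( f(x) + g_1(x)\,u \bigr) + \frac{1}{2}\,\mathrm{Tr}\Bigl\{ g_2^{\mathsf{T}}(x)\,\frac{\partial^2 V}{\partial x^2}\,g_2(x) \Bigr\}.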

In this paper, the standard class-𝒦 notation is used for the set of all functions that are continuous, strictly increasing, and vanish at zero.

Definitions 1 and 2 and Lemma 3 given in [19, 20] and Lemma 4 given in [22] will be useful throughout this paper.

Definition 1. Assume that the system has a unique global solution for every initial state. The stochastic settling time function is defined as the first time at which the solution starting from a given initial state reaches the origin; in particular, it is taken to be infinite if the solution never reaches the origin.
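As a symbolic sketch of Definition 1, with x(t; x_0) denoting the solution and T the settling time (our notation, following the spirit of [19, 20]):

T(x_0, w) = \inf\{\, t \ge 0 : x(t; x_0) = 0 \,\}, \qquad T(x_0, w) = +\infty \ \text{if } x(t; x_0) \neq 0 \ \text{for all } t \ge 0.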

Definition 2. For the stochastic system (3), the equilibrium is said to be globally finite time stable in probability if the following conditions hold: (i) the equilibrium is globally stable in probability, that is, for every positive tolerance there exists a class 𝒦 function of the initial state that bounds the solution for all times with probability at least one minus the tolerance; (ii) the stochastic settling time function is finite almost surely for every initial state.

Lemma 3. Assume that system (3) has a unique global solution in forward time for all initial conditions. If there exist a positive definite, twice continuously differentiable, and radially unbounded Lyapunov function and a continuously differentiable comparison function such that conditions (i)-(iii) stated in [19, 20] hold (in particular, a negative fractional-power bound on the infinitesimal generator of the Lyapunov function along (3)), then the equilibrium of system (3) is globally finite time stable in probability, and the settling time function satisfies an expectation bound in terms of the initial value of the Lyapunov function, which implies that the settling time is finite almost surely (a.s.).
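The decisive hypothesis and settling-time estimate in criteria of this kind (cf. Chen and Jiao [19, 20]) typically take the following form; V, c, \gamma, and T are illustrative symbols, and the exact conditions (i)-(iii) of the cited works may differ in detail:

\mathcal{L}V(x) \le -c\,\bigl(V(x)\bigr)^{\gamma}, \quad c > 0,\ 0 < \gamma < 1 \quad\Longrightarrow\quad \mathbb{E}\bigl[T(x_0)\bigr] \le \frac{V(x_0)^{1-\gamma}}{c\,(1-\gamma)}.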

Lemma 4. Assume that the coefficients of (3) are continuous and that, on each bounded set, their increments satisfy a modulus-of-continuity bound with a comparison function that is nonrandom, strictly increasing, continuous, and concave and satisfies an Osgood-type non-integrability condition at zero. Then, on any given finite time interval, (3) has a pathwise unique strong solution.

The stochastic finite time control Lyapunov function is defined as follows.

Definition 5. A positive definite, twice continuously differentiable, and radially unbounded function is a stochastic finite time control Lyapunov function (SFT-CLF) of system (1) if there exist a positive constant and an exponent strictly between zero and one such that, for every nonzero state, the infimum over the control of the infinitesimal generator of the function is bounded above by the negative of that constant times the corresponding fractional power of the function.
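A plausible symbolic reading of the SFT-CLF inequality, consistent with Lemma 3 and stated in the same assumed notation, is:

\inf_{u}\,\mathcal{L}V(x) \le -c\,\bigl(V(x)\bigr)^{\gamma} \quad \text{for all } x \neq 0, \qquad c > 0,\ 0 < \gamma < 1.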

An SFT-CLF is said to satisfy the small control property with respect to system (1) if, for each positive tolerance, there exists a neighborhood of the origin such that, for every nonzero state in that neighborhood, some control with norm smaller than the tolerance makes the SFT-CLF inequality hold.

3. Main Results

Consider system (1). An explicit feedback control law is designed such that the equilibrium of the closed-loop system is globally finite time stable in probability. Moreover, a control law achieving finite time inverse optimal stabilization is constructed.

3.1. Finite Time Stabilization in Probability

Theorem 6. Consider system (1). Suppose that an SFT-CLF for system (1) is given such that inequality (6) holds, and let the control law be given by (8), built from the quantities defined in (7). In addition, assume that the control law (8) is such that the closed-loop system has a unique global solution in forward time for all initial conditions. Then the equilibrium of the closed-loop system (1) and (8) is globally finite time stable in probability, and the settling time function satisfies an expectation bound of the form given in Lemma 3.
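Since the proof below appeals to Sontag's universal formula [15] and to the small control property, the feedback (8) is presumably of Sontag type. A sketch of such a construction, in our assumed notation, sets a(x) equal to the uncontrolled part of the generator plus the fractional-power term and b(x) equal to the control-direction derivative of the SFT-CLF; the exact formula (8) may of course differ:

a(x) = \frac{\partial V}{\partial x} f(x) + \frac{1}{2}\,\mathrm{Tr}\Bigl\{ g_2^{\mathsf{T}}(x)\,\frac{\partial^2 V}{\partial x^2}\,g_2(x) \Bigr\} + c\,V^{\gamma}(x), \qquad b(x) = \frac{\partial V}{\partial x}\, g_1(x),

u(x) = \begin{cases} -\dfrac{a(x) + \sqrt{a(x)^2 + |b(x)|^4}}{|b(x)|^2}\, b^{\mathsf{T}}(x), & b(x) \neq 0, \\ 0, & b(x) = 0. \end{cases}

With this choice and the SFT-CLF inequality, the closed-loop generator satisfies \mathcal{L}V \le -c\,V^{\gamma}, which is the kind of bound needed in condition (i) of Lemma 3.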

Proof. Since an SFT-CLF for system (1) is given, the SFT-CLF inequality of Definition 5 holds. Evaluating the infinitesimal generator of the SFT-CLF along system (1) under the control law (8), one deduces from (8) and (9) that, both when the control term vanishes and when it does not, the generator is bounded above by the required negative fractional power of the SFT-CLF whenever the state is nonzero. Hence condition (i) of Lemma 3 holds. It is also easy to verify that conditions (ii) and (iii) in Lemma 3 hold.
We will prove that the control law given by (8) is differentiable away from the origin and continuous at the origin. To this end, consider an open subset of the state space that excludes the origin. By the implicit function theorem, the function defining the feedback in (8) is differentiable on this set, and by (9) the control law (8) is therefore differentiable away from the origin. If (6) holds, the control law (8) satisfies the small control property by [23], and it is then continuous at the origin by [15].
In addition, under the control law (8), the closed-loop system has a unique global solution in forward time. By Lemma 3, the equilibrium of the closed-loop system (1) and (8) is globally finite time stable in probability, and the settling time function satisfies the stated bound.

3.2. Finite Time Inverse Optimal Stabilization in Probability

In this subsection, we consider finite time inverse optimal stabilization in probability. That is, a feedback control law for system (1) will be constructed such that the following properties hold: (i) the closed-loop system is finite time stable in probability at the equilibrium; (ii) the control law minimizes the cost functional (14), in which the state penalty is nonnegative, the control penalty is positive definite, the integration runs up to the settling time, and the expectation is taken over solutions starting from a given initial value.
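Cost functionals in inverse optimal designs (cf. Freeman and Kokotovic [21]) are commonly of the following shape; since (14) itself is not reproduced above, the functions l and R and the settling time T below are illustrative assumptions only:

J(u) = \mathbb{E}\left[ \int_0^{T} \Bigl( l(x(t)) + u^{\mathsf{T}}(t)\,R(x(t))\,u(t) \Bigr)\,dt \right], \qquad l(x) \ge 0, \quad R(x) = R^{\mathsf{T}}(x) > 0.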

In the inverse approach, a finite time stabilizing feedback law is designed first, and then suitable state and control penalty functions are found such that this feedback law optimizes (14). The problem is inverse because the penalty functions are determined a posteriori by the stabilizing feedback law, rather than chosen a priori by the designer.

Theorem 7. Consider system (1). Suppose that an SFT-CLF for system (1) is given such that (6) holds, and let the control law be given by (15), where the quantities involved are defined as in (7). In addition, assume that the control law (15) is such that the closed-loop system has a unique global solution in forward time for all initial conditions. Then the control law (15) solves the problem of finite time inverse optimal stabilization in probability for system (1) by minimizing the cost functional (14) with the penalty functions given in (16), and the settling time function satisfies an expectation bound of the form given in Lemma 3.

Proof. The first required property of the penalty functions in (16) is easy to verify; next, we prove the second.
Substituting the control law into the state penalty and using the fact that an SFT-CLF for system (1) is given, one finds that the state penalty is nonnegative away from the origin, and it is easily seen to be nonnegative at the origin as well. In conclusion, the cost functional (14) with the penalty functions given by (16) is meaningful.
Consider the closed-loop system (1) and (15). Since an SFT-CLF for system (1) is given, the infinitesimal generator of the SFT-CLF along the closed-loop system satisfies the required negative fractional-power bound away from the origin and vanishes at the origin. Since (6) holds, it can be deduced that the control law (15) is continuous on the whole state space. In addition, the control law (15) is such that the closed-loop system has a unique global solution in forward time. Thus the equilibrium of the closed-loop system (1) and (15) is globally finite time stable in probability, and the settling time function satisfies the bound of Lemma 3, which implies that it is finite a.s.
Let the unique global solution of the closed-loop system (1) and (15) starting from a given initial value be fixed. Since the equilibrium of the closed-loop system (1) and (15) is globally finite time stable in probability, the solution reaches the origin at the settling time. Define an increasing sequence of stopping times localizing the solution, and recall the Itô differential of the SFT-CLF along the closed-loop trajectories. Since the integrand is bounded up to each stopping time, the expectation of the stochastic integral vanishes by the properties of Itô's integral [24, page 143], and, in view of (21), one deduces the corresponding expectation identity up to each stopping time. Since the solution is unique and global, letting the stopping times tend to the settling time as in Definition 1 yields the desired identity almost surely. Next, we prove a convergence property of the SFT-CLF along the stopped trajectories.
Noting that the settling time is finite a.s. and using the continuity of the solution, the values of the SFT-CLF at the stopping times form a monotone decreasing sequence, which therefore converges to a nonnegative constant. If this constant were positive, there would exist a convergent subsequence of the stopped states bounded away from the origin a.s.; however, this contradicts conclusion (27), so the constant is zero. By the continuity of the solution, the SFT-CLF therefore converges to zero along the stopped trajectories.
Finally, we prove optimality. Substituting the penalty functions (16) into the cost functional (14) and completing the square in the control, one sees that the cost is minimized by taking the control equal to the feedback law (15), which yields the stated minimum value.

Corollary 8. Suppose that the conditions of Theorem 6 hold. Then the control law (8) achieves finite time inverse optimal stabilization in probability for system (1) by minimizing the cost functional (14) with penalty functions built from the quantities given in (7), and the settling time function satisfies the bound of Theorem 6.

Proof. The proof follows by the same arguments as in the proof of Theorem 7.

4. Simulation Example

Some designs of finite time stabilizing control laws employ cancellation and do not have satisfactory stability margins, let alone optimality properties. The inverse optimal approach is a constructive alternative to such designs, which achieves desired stability margins. Let us clarify this important issue by an example in this section.

Example 1. Consider the following first-order stochastic nonlinear system, referred to as (32). One possible finite time stabilization design is to let the control cancel the nonlinearity and add a finite time stabilizing term. This is accomplished with a control law that results in what appears to be a desirable closed-loop system (33). It is easy to verify that the equilibrium of system (33) is globally finite time stable in probability. However, because of the cancellation, this feedback control law does not have any stability margin: with a slightly perturbed feedback control law, the closed-loop system has solutions which escape to infinity in finite time for an arbitrarily small perturbation.

Let us now consider a finite time inverse optimal stabilization design for system (32). Choose a candidate Lyapunov function and compute its infinitesimal generator along (32); one can then deduce that this function is an SFT-CLF for system (32). By Theorem 7, we obtain the control law (36).

It can be verified that the closed-loop system (32) and (36) satisfies the conditions of Lemma 4. Thus it has a unique global solution in forward time for all initial conditions.

By Theorem 7, the control law (36) is such that the equilibrium of the closed-loop system (32) and (36) is finite time stable in probability, and the control law (36) minimizes the cost functional (14) with the corresponding penalty functions. Moreover, the control law (36) has two desirable properties. Where the nonlinearity is beneficial, it recognizes its effect in enhancing the negativity of the infinitesimal generator; where the nonlinearity is destabilizing, instead of cancelling it, the inverse optimal control (36) dominates it and, by doing so, achieves a stability margin.
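The following Python sketch illustrates how a closed-loop simulation of this kind can be set up: an Euler-Maruyama discretization of a scalar controlled SDE with a hypothetical destabilizing drift, a noise intensity vanishing at the origin, and a domination-style feedback with a fractional-power term for finite-time behavior. The dynamics, gains, and step size are illustrative assumptions, not the paper's system (32) or control law (36).

import numpy as np

# Hypothetical first-order stochastic system (illustration only):
#   dx = (f(x) + u) dt + g(x) dw
def f(x):
    return x**3                      # assumed destabilizing drift nonlinearity

def g(x):
    return 0.5 * x                   # assumed noise intensity, vanishing at the origin

def u(x):
    # Domination-style feedback (assumed form, not control law (36)):
    # a cubic dominating term plus a fractional-power term for finite-time behavior.
    return -2.0 * x**3 - np.sign(x) * np.abs(x) ** (1.0 / 3.0)

def euler_maruyama(x0, dt=1e-4, t_end=5.0, seed=0):
    """Simulate one sample path of the closed-loop SDE with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))                  # Wiener increment over one step
        x[k + 1] = x[k] + (f(x[k]) + u(x[k])) * dt + g(x[k]) * dw
    return x

if __name__ == "__main__":
    path = euler_maruyama(x0=1.0)
    print("final state:", path[-1])  # close to zero when the domination-style feedback stabilizes

The dominating cubic gain is taken larger than the drift coefficient, mirroring the domination (rather than cancellation) idea discussed above.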

Figures 1, 2, 3, and 4 show the state under the state feedback control law (36), the corresponding control input, and the response of the Wiener process in the closed-loop system for several initial states. One can observe that stabilization is achieved.

Figures 1, 2, 3, and 4: The response of the state, the control, and the Wiener process in Example 1 for four different initial conditions.

5. Conclusion

In this paper, we study finite time inverse optimal stabilization for stochastic nonlinear systems. First, a concept of the SFT-CLF is presented. Second, a control law that renders the closed-loop system finite time stable in probability is obtained. Furthermore, a sufficient condition is developed for finite time inverse optimal stabilization in probability, and a control law is designed to ensure that the equilibrium of the closed-loop system is finite time inverse optimal stable. Finally, a simulation result shows the effectiveness of the method.

Acknowledgments

The authors thank the anonymous reviewers for their many helpful suggestions. The authors are grateful for the support of the National Natural Science Foundation of China (Grant nos. 61074011 and 60774011).

References

  1. V. T. Haimo, "Finite time controllers," SIAM Journal on Control and Optimization, vol. 24, no. 4, pp. 760–770, 1986.
  2. S. P. Bhat and D. S. Bernstein, "Finite-time stability of continuous autonomous systems," SIAM Journal on Control and Optimization, vol. 38, no. 3, pp. 751–766, 2000.
  3. S. P. Bhat and D. S. Bernstein, "Finite-time stability of homogeneous systems," in Proceedings of the American Control Conference, pp. 2513–2514, Albuquerque, NM, USA, 1997.
  4. Y. Orlov, "Finite time stability and robust control synthesis of uncertain switched systems," SIAM Journal on Control and Optimization, vol. 43, pp. 751–766, 2005.
  5. E. Moulay, M. Dambrine, N. Yeganefar, and W. Perruquetti, "Finite-time stability and stabilization of time-delay systems," Systems & Control Letters, vol. 57, no. 7, pp. 561–566, 2008.
  6. S. G. Nersesov, C. Nataraj, and J. M. Avis, "Design of finite-time stabilizing controllers for nonlinear dynamical systems," International Journal of Robust and Nonlinear Control, vol. 19, no. 8, pp. 900–918, 2009.
  7. S. P. Bhat and D. S. Bernstein, "Continuous finite-time stabilization of the translational and rotational double integrators," IEEE Transactions on Automatic Control, vol. 43, no. 5, pp. 678–682, 1998.
  8. Y. Hong, J. Huang, and Y. Xu, "On an output feedback finite-time stabilization problem," IEEE Transactions on Automatic Control, vol. 46, no. 2, pp. 305–309, 2001.
  9. S. Li, H. Du, and X. Lin, "Finite-time consensus algorithm for multi-agent systems with double-integrator dynamics," Automatica, vol. 47, no. 8, pp. 1706–1712, 2011.
  10. H. Du, S. Li, and C. Qian, "Finite-time attitude tracking control of spacecraft with application to attitude synchronization," IEEE Transactions on Automatic Control, vol. 56, no. 11, pp. 2711–2717, 2011.
  11. H. B. Du and S. H. Li, "Finite-time attitude stabilization for a spacecraft using homogeneous method," Journal of Guidance, Control, and Dynamics, vol. 35, pp. 740–748, 2012.
  12. E. Moulay and W. Perruquetti, "Finite time stability and stabilization of a class of continuous systems," Journal of Mathematical Analysis and Applications, vol. 323, no. 2, pp. 1430–1443, 2006.
  13. Z. Artstein, "Stabilization with relaxed controls," Nonlinear Analysis, vol. 7, no. 11, pp. 1163–1173, 1983.
  14. E. D. Sontag, "A Lyapunov-like characterization of asymptotic controllability," SIAM Journal on Control and Optimization, vol. 21, no. 3, pp. 462–471, 1983.
  15. E. D. Sontag, "A 'universal' construction of Artstein's theorem on nonlinear stabilization," Systems & Control Letters, vol. 13, no. 2, pp. 117–123, 1989.
  16. P. Florchinger, "A universal formula for the stabilization of control stochastic differential equations," Stochastic Analysis and Applications, vol. 11, no. 2, pp. 155–162, 1993.
  17. R. Chabour and M. Oumoun, "On a universal formula for the stabilization of control stochastic nonlinear systems," Stochastic Analysis and Applications, vol. 17, no. 3, pp. 359–368, 1999.
  18. W. Chen and L. C. Jiao, "Finite-time stability theorem of stochastic nonlinear systems," Automatica, vol. 46, no. 12, pp. 2105–2108, 2010.
  19. W. Chen and L. C. Jiao, "Authors' reply to 'Comments on "Finite-time stability theorem of stochastic nonlinear systems"'," Automatica, vol. 46, pp. 2105–2108, 2010.
  20. W. Chen and L. C. Jiao, "Authors' reply to 'Comments on "Finite-time stability theorem of stochastic nonlinear systems"'," Automatica, vol. 47, no. 7, pp. 1544–1545, 2011.
  21. R. A. Freeman and P. V. Kokotovic, "Inverse optimality in robust stabilization," SIAM Journal on Control and Optimization, vol. 34, no. 4, pp. 1365–1391, 1996.
  22. R. Situ, Theory of Stochastic Differential Equations with Jumps and Applications, Mathematical and Analytical Techniques with Applications to Engineering, Springer, New York, NY, USA, 2005.
  23. X. Cai, L. Liu, J. Huang, and W. Zhang, "Globally asymptotical stabilisation for a class of feedback linearisable differential inclusion systems," IET Control Theory & Applications, vol. 5, no. 14, pp. 1586–1596, 2011.
  24. P. Chen, C. Z. Hou, and Y. Feng, Stochastic Mathematics, National Defence Industry Press, Beijing, China, 2008.