Journal of Applied Mathematics
Volume 2013, Article ID 251340, 10 pages
http://dx.doi.org/10.1155/2013/251340
Research Article

Output-Feedback and Inverse Optimal Control of a Class of Stochastic Nonlinear Systems with More General Growth Conditions

Department of Automation, China University of Petroleum, Beijing 102249, China

Received 8 May 2013; Accepted 3 June 2013

Academic Editor: Baocang Ding

Copyright © 2013 Liu Jianwei et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper investigates the problem of output-feedback stabilization for a class of stochastic nonlinear systems in which the nonlinear terms depend on unmeasurable states in addition to the measurable output. We extend the linear growth conditions to power growth conditions and reduce the control effort. By using the backstepping technique and choosing a high-gain parameter, an output-feedback controller is designed that ensures the closed-loop system is globally asymptotically stable in probability and achieves inverse optimal stabilization in probability. The efficiency of the output-feedback controller is demonstrated by a simulation example.

1. Introduction

The design of output-feedback controllers for stochastic nonlinear systems has developed remarkably, because output-feedback control is better suited to practical engineering systems; see, for example, [1–12] and the references therein. In recent years, this line of research has focused mainly on a special class of stochastic nonlinear systems in which the nonlinear vector terms depend on the unmeasurable states in addition to the measurable output; see, for example, [13–17] and the references therein. The work of [13] discussed output-feedback controller design by introducing the stability concept of global asymptotic stability in probability. With the aim of reducing the control effort, [14] considered the output-feedback stabilization problem.

However, in [13–17], the nonlinear vector terms are required to satisfy linear growth conditions, which greatly narrows the scope of application of those results. Naturally, an interesting and challenging question arises: can the linear growth conditions be relaxed further? To our knowledge, the existing results on this problem are those of [18–20]. In [18], the authors discussed the output-feedback stabilization problem under more relaxed growth conditions by introducing a rescaling transformation. Building on [18], the works [19, 20] further considered output-feedback controller design for high-order stochastic nonlinear systems. However, in [18–20] the observer gain is typically required to be larger than 1, and this choice can lead to a controller design that needs larger control effort. So another challenging problem arises: can this assumption be removed?

In this paper, we investigate the output-feedback stabilization problem for a class of stochastic nonlinear systems satisfying power growth conditions. Inspired by [13, 14], we determine, by using the backstepping technique, the maximal interval of observer gains for which the desired controller exists. For gains in this interval, the designed output-feedback controller ensures that the equilibrium at the origin is globally asymptotically stable in probability and that inverse optimal stabilization in probability is achieved. The main contributions of this paper are as follows. (i) We extend the linear growth conditions to power growth conditions. (ii) The gain assumption of [18–20] is removed, so that less control effort is required.

The paper is organized as follows. Section 2 provides some preliminary results. In Section 3, the problem to be investigated is presented. In Sections 4 and 5, an output-feedback controller is designed and analysed. In Section 6, the inverse optimal stabilization in probability is achieved. Section 7 provides a simulation example. Section 8 concludes this paper.

2. Notations and Preliminary Results

Throughout this paper, the following notations are used. $\mathbb{R}$ denotes the set of all real numbers; $\mathbb{R}_{+}$ denotes the set of all nonnegative real numbers; $\mathbb{R}^{n}$ denotes the real $n$-dimensional space; $\mathbb{R}^{n \times m}$ denotes the space of real $n \times m$ matrices; $\mathrm{Tr}\{X\}$ denotes the trace of a square matrix $X$; $|x|$ denotes the Euclidean norm of a vector $x$, and $\|X\|$ is the Frobenius norm of a matrix $X$ defined by $\|X\| = \sqrt{\mathrm{Tr}\{X^{T}X\}}$; for a given vector $x$, $x^{T}$ denotes its transpose; $C^{i}$ denotes the set of all functions with continuous $i$th partial derivatives; $C^{2,1}(\mathbb{R}^{n} \times \mathbb{R}_{+}; \mathbb{R}_{+})$ is the family of all nonnegative functions $V(x,t)$ on $\mathbb{R}^{n} \times \mathbb{R}_{+}$ which are $C^{2}$ in $x$ and $C^{1}$ in $t$; $\mathcal{K}$ denotes the set of all functions $\mathbb{R}_{+} \to \mathbb{R}_{+}$ which are continuous, strictly increasing, and vanish at zero; $\mathcal{K}_{\infty}$ denotes the set of all functions which are of class $\mathcal{K}$ and unbounded; $\mathcal{KL}$ denotes the set of all functions $\beta(s,t)\colon \mathbb{R}_{+} \times \mathbb{R}_{+} \to \mathbb{R}_{+}$ which are of class $\mathcal{K}$ in $s$ for each fixed $t$ and decrease to zero as $t \to \infty$ for each fixed $s$.

Lemma 1. The inequality is established for any , , , .

Proof. Under the stated assumptions, and inspired by the Hölder inequality [21], we first obtain an intermediate estimate and then the claimed inequality, which completes the proof.
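Since the displayed inequality of Lemma 1 is elided in the extracted text, the following block records, purely as a hedged reference point, the standard Young-type inequality that lemmas of this kind usually take in backstepping proofs; the symbols $x$, $y$, $m$, $n$, $c$ are illustrative and need not match the paper's exact statement.

```latex
% Standard Young-type inequality (illustrative only; not necessarily the exact
% statement of Lemma 1): for any real x, y, positive integers m, n, and any c > 0,
\[
  |x|^{m}\,|y|^{n}
  \;\le\;
  \frac{m}{m+n}\, c\, |x|^{m+n}
  \;+\;
  \frac{n}{m+n}\, c^{-m/n}\, |y|^{m+n}.
\]
```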

Consider the following stochastic system (3), where $x$ is the state of the system, $u$ is the control input, and $\omega$ is a standard Wiener process of appropriate dimension defined on a probability space. The nonlinear functions $f$ and $g$ are locally Lipschitz and vanish at the origin. For any given $V \in C^{2}$ associated with the stochastic system (3), the differential operator $\mathcal{L}$ is defined as in (4). In order to discuss the stability of stochastic nonlinear systems, we introduce the following stability notion.
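The displays defining system (3) and the operator in (4) are elided; in the usual notation of the stochastic backstepping literature, the objects being referred to have the following form (a reconstruction of the standard conventions rather than the paper's exact typography).

```latex
% Ito form of the stochastic system (3) and the associated differential
% operator (4), written in standard notation:
\[
  dx = f(x,u)\,dt + g(x)\,d\omega ,
\]
\[
  \mathcal{L}V(x)
  = \frac{\partial V}{\partial x}\, f(x,u)
  + \frac{1}{2}\,\mathrm{Tr}\!\left\{ g^{T}(x)\, \frac{\partial^{2} V}{\partial x^{2}}\, g(x) \right\},
  \qquad V \in C^{2}.
\]
```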

Definition 2 (see [22]). For the stochastic nonlinear system (3) with , , the equilibrium of (3) is said to be globally asymptotically stable (GAS) in probability if, for any , there exists a class function such that , , .
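The inequality inside Definition 2 did not survive extraction; its standard form, as given in [22], is reproduced below for the reader's convenience.

```latex
% Global asymptotic stability in probability (standard form, cf. [22]):
% for any epsilon > 0 there exists a class KL function beta such that
\[
  P\big\{\, |x(t)| < \beta(|x_{0}|,\, t) \,\big\} \;\ge\; 1 - \varepsilon,
  \qquad \forall\, t \ge 0,\ \ \forall\, x_{0} \in \mathbb{R}^{n}\setminus\{0\}.
\]
```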

The following lemmas give some sufficient conditions ensuring global asymptotic stability in probability.

Lemma 3 (see [23]). For system (3), if there exist , class functions , , and a class function such that , , then there exists an almost surely unique solution to system (3) on , and the equilibrium is globally asymptotically stable in probability.
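The two conditions named in Lemma 3 are elided; the standard stochastic Lyapunov conditions they correspond to (see, e.g., [23]) are shown below, with $\alpha_{1}$, $\alpha_{2}$ the class $\mathcal{K}_{\infty}$ functions and $\alpha_{3}$ the class $\mathcal{K}$ function of the lemma.

```latex
% Standard sufficient conditions behind Lemma 3 (reconstructed form, not the
% paper's exact display):
\[
  \alpha_{1}(|x|) \;\le\; V(x) \;\le\; \alpha_{2}(|x|),
  \qquad
  \mathcal{L}V(x) \;\le\; -\alpha_{3}(|x|), \qquad \forall\, x \in \mathbb{R}^{n}.
\]
```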

Lemma 4 (see [24]). Consider the following control law: where is a class function, , and is a matrix-valued function such that . If the control law (5) ensures that system (3) is globally asymptotically stable in probability, then the control law solves the problem of inverse optimal stabilization in probability for system (3) by minimizing the cost functional where is a positive definite radially unbounded function satisfying

3. Problem Formulation

Consider the following stochastic nonlinear system (9), where , , and are the states, the control input, and the measurable output of the system, respectively, is defined as in (3), and are the unmeasurable states. The functions , , are locally Lipschitz with and satisfy the following power growth conditions.

Assumption 5. For each , there exists a known positive constant such that , where is a positive integer.
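The inequality in Assumption 5 is elided in the extracted text. Purely as an illustration of what a power growth bound of this kind can look like, and not as the paper's exact condition, one form that collapses to a linear growth condition when $p = 1$ (consistent with Remark 6 below) is:

```latex
% Hypothetical illustration of a power growth bound; the paper's exact
% inequality is elided. It reduces to linear growth when p = 1.
\[
  |f_{i}(x)| + |g_{i}(x)| \;\le\; a_{i} \sum_{j=1}^{i} |x_{j}|^{\,p},
  \qquad i = 1, \dots, n,
\]
% where a_i > 0 are the known constants of Assumption 5 and p is a positive integer.
```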

Remark 6. Assumption 5 reduces to the linear growth conditions when . Therefore, the linear growth conditions are included in Assumption 5 as a special case. In this sense, this paper extends the previous work and obtains a more general result.

The objective of this paper is to design a smooth output-feedback controller for system (9) such that the closed-loop system is globally asymptotically stable in probability at the origin and inverse optimal stabilization in probability is achieved.

4. Output-Feedback Controller Design

Since the states are unmeasured, the following observer (10) is introduced, where is the estimate of , is the observer gain to be determined, and , , are chosen such that the matrix is asymptotically stable; thus there exists a positive definite matrix satisfying . Let , where for each . By (9) and (10), we obtain the error system (11), where .
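The observer equations (10) are elided. A hedged sketch of the high-gain observer structure commonly used for this class of systems (see, e.g., [13, 14]) is given below; whether the paper uses exactly these powers of the gain $L$ cannot be confirmed from the extracted text.

```latex
% High-gain observer of the type used in [13, 14] (a sketch under assumptions,
% not necessarily the exact form of (10)):
\[
  \dot{\hat{x}}_{i} = \hat{x}_{i+1} + L^{i} a_{i}\,(y - \hat{x}_{1}), \quad i = 1, \dots, n-1,
  \qquad
  \dot{\hat{x}}_{n} = u + L^{n} a_{n}\,(y - \hat{x}_{1}),
\]
% with scaled estimation errors such as e_i = (x_i - \hat{x}_i)/L^{i-1}, so that
% the error dynamics are driven by L times the Hurwitz matrix A formed from a_1, ..., a_n.
```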

Now we give the backstepping controller design procedure.

Step  0. Choosing the zeroth Lyapunov function , applying , , , , Lemma 1, Assumption 5, and (4), we can get where and .

We introduce a series of coordinate changes as follows: where    is the virtual control law to be designed.
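The coordinate-change display is elided; in standard output-feedback backstepping the transformation usually takes the following form (stated here as an assumption, possibly up to gain-dependent scalings that the extraction does not preserve), with $\alpha_{i-1}$ the virtual control law of the previous step.

```latex
% Customary backstepping change of coordinates (illustrative reconstruction):
\[
  z_{1} = y, \qquad
  z_{i} = \hat{x}_{i} - \alpha_{i-1}, \quad i = 2, \dots, n,
\]
% where \alpha_{i-1} depends on the measured output and the observer states.
```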

Step 1. Constructing the 1st Lyapunov function and using (3), (10), (12)–(15), and Young's inequality [22], we can obtain the first estimate. Applying (14) and Lemma 1 and choosing , we can get a further bound, which is substituted into (16) to obtain the desired estimate by choosing the 1st virtual control law

Step 2. Using (10), (14), and (18), we can get the next expression. Constructing the 2nd Lyapunov function and applying (14), (18)–(21), , Lemma 1, and Young's inequality [22], we obtain a bound; then we can get the 2nd virtual control law which satisfies

Step . Suppose that, at the ()th step, there is a set of virtual control laws of the form (25), with being independent of , such that the th Lyapunov function satisfies (27). In the sequel, we will prove that (27) still holds at the th step. Using (14) and (25), a direct calculation leads to the corresponding dynamics. Using , Lemma 1, Young's inequality [22], (14), (27), (28), and (29), we obtain a bound; then we can choose the th smooth virtual control law and get

Step . Applying the previous arguments repeatedly, at the last step we can get the corresponding estimate, where the indicated quantity is defined accordingly. At the end of the recursive procedure, choosing the controller (35), where satisfies (25) and is independent of , we can get (36), where is given by (37).

Remark 7. The item is canceled at step . Through the following analysis, we obtain the maximal interval of the observer gain that ensures the closed-loop system is globally asymptotically stable in probability at the origin.

5. Performance Analysis

Next, we give the main result in this paper.

Theorem 8. If Assumption 5 holds for the stochastic nonlinear system (9) under the controllers (10) and (35), then there always exists a constant such that, for any , (1) the closed-loop system has an almost surely unique solution on for any ; (2) the equilibrium at the origin of the closed-loop system is globally asymptotically stable in probability.

Proof. Using (18), (23), and (31), it is obvious that, if (38) holds, then conclusions (1) and (2) follow from (36), (37), and Lemma 3. In the following, we analyze (38). From (13), (25), and (31), it is easy to see that the relevant quantity depends on the observer gain. Substituting (13) into (38) leads to an inequality which can be rewritten equivalently as a polynomial condition (41) in the gain, with the real coefficients given in (42). According to the factorization theorem for real-coefficient polynomials, (41) can be further expressed in factored form, where the exponents are positive integers, the real roots are distinct, and the remaining quadratic factors have no real zeros and are therefore positive for every value of the gain. Now we discuss the choice of the threshold gain in two cases: if the polynomial has at least one positive real root, the threshold is chosen no smaller than the largest such root; otherwise any positive value will do. Thus there always exists a constant such that (38) holds for every gain exceeding it.
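The gain-selection argument above reduces to checking the sign of a real-coefficient polynomial in the observer gain. The short Python sketch below illustrates, with hypothetical coefficients (the actual coefficients of (42) are elided in the extracted text), how the threshold gain discussed in the two cases of the proof could be computed numerically.

```python
import numpy as np

def gain_threshold(coeffs):
    """Return an illustrative threshold L* such that the polynomial p(L) given by
    `coeffs` (decreasing powers, positive leading coefficient assumed) is positive
    for all L > L*. This is a numerical sketch of the two-case argument in the
    proof of Theorem 8, not the paper's procedure; the real coefficients of (42)
    are elided in the extracted text."""
    roots = np.roots(coeffs)
    positive_real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    # Case 1: at least one positive real root -> take the largest one.
    # Case 2: no positive real root -> any positive gain works; return 0.
    return max(positive_real) if positive_real else 0.0

# Hypothetical coefficients for illustration only: p(L) = L^3 - 6 L^2 + 11 L - 6,
# whose roots are 1, 2, and 3, so any observer gain L > 3 keeps p(L) positive.
L_star = gain_threshold([1.0, -6.0, 11.0, -6.0])
print(f"choose any observer gain L > L* = {L_star:.3f}")
```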

6. Inverse Optimal Controller Design

In this section, we design the inverse optimal controller on the basis of Theorem 8 so as to meet specific performance indices in addition to achieving the basic control objective.

Theorem 9. The control law solves the problem of inverse optimal stabilization in probability for (9) by minimizing the cost function where ,  , .

Proof. Systems (10) and (11) can be represented in the form required by Lemma 4, where , are identified in Theorem 9. Choosing , we can get and . Applying Lemma 4, we get the corresponding expressions. According to Theorem 8 and Lemma 4, the inverse optimal controller can be designed as follows: where .

7. Simulation Examples

In this section, we design the output-feedback controller for a numerical example by using two methods: the method introduced in this paper and the method introduced in [19, 20].

Consider the following stochastic system (49), where the nonlinearities are as given. With the notation of Assumption 5, one can take , .

In line with the design method discussed in Section 4, we can design the observer as follows: According to the design procedure in Section 4, we construct the controller as follows: where will be chosen later, , and , . With Theorem 8, one gets . According to the design procedure in Section 6, we choose and construct the inverse optimal controller as follows:

In the simulation, we choose the initial values , , , , and . Figure 1 shows the responses of the closed-loop system (49)–(53), which demonstrate the effectiveness of the control scheme.
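Because the explicit expressions of (49)–(53) are elided in the extracted text, the following Python sketch only illustrates how a closed-loop simulation of this kind is typically set up with an Euler–Maruyama discretization; the plant nonlinearities, observer coefficients, controller gains, and initial values used here are placeholders, not the paper's.

```python
import numpy as np

# Illustrative Euler-Maruyama simulation of a second-order stochastic plant with
# a high-gain observer and an output-feedback control law. Every function, gain,
# and initial value below is a placeholder; the paper's (49)-(53) are elided.
rng = np.random.default_rng(0)
dt, T = 1e-3, 10.0
steps = int(T / dt)

L, a1, a2 = 2.0, 2.0, 1.0        # hypothetical observer gain and coefficients
k1, k2 = 3.0, 3.0                # hypothetical controller gains

x = np.array([1.0, -1.0])        # plant states: x1 (measured), x2 (unmeasured)
xh = np.zeros(2)                 # observer states

for _ in range(steps):
    y = x[0]
    u = -k2 * (xh[1] + k1 * y)   # placeholder backstepping-style control law
    dw = rng.normal(scale=np.sqrt(dt), size=2)

    # placeholder drift and diffusion vanishing at the origin
    f = np.array([x[1] + 0.1 * np.sin(x[0]) * x[0], u + 0.1 * x[1]])
    g = np.array([0.1 * x[0], 0.1 * x[1]])
    x = x + f * dt + g * dw

    e = y - xh[0]
    xh = xh + np.array([xh[1] + L * a1 * e, u + L**2 * a2 * e]) * dt

print("final plant states:", x, "final observer states:", xh)
```

Plotting the stored trajectories of the states, the estimates, and the control input against time would produce response curves analogous to Figures 1 and 2.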

Figure 1: The responses of the closed-loop system (49)–(53).

If the method in [19, 20] is adopted for the same system, Figure 2 gives the corresponding responses (the controller design theory of [19, 20] is not detailed here; interested readers can consult the relevant literature).

Figure 2: The responses of the closed-loop system (49)–(53) when the method in [19, 20] is adopted.

Remark 10. Comparing the two figures, we observe that the magnitude of the control input in Figure 1 is far smaller than that in Figure 2. In other words, our method requires less control effort to ensure that the closed-loop system is globally asymptotically stable in probability, which clearly demonstrates the advantage of the proposed method.

8. Conclusions and Outlook

In this paper, we have studied output-feedback stabilization for a class of stochastic nonlinear systems. By using the backstepping design technique and choosing a high-gain parameter, we have designed an output-feedback controller that renders the equilibrium at the origin of the closed-loop system globally asymptotically stable in probability, and inverse optimal stabilization in probability is also achieved. Our main contribution is the extension of the linear growth conditions to more general power growth conditions, which makes the result more general and broadens its field of application.

There are two problems to be investigated. (1) Extending the value of in Assumption 5 from positive integers to rational numbers would further weaken the conditions imposed on the system; for such systems, the output-feedback problem deserves further research. (2) Another direction is to extend the stochastic nonlinear systems considered in this paper to time-delay systems and to study the corresponding controller design.

References

1. H. Deng and M. Krstić, “Output-feedback stochastic nonlinear stabilization,” IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 328–333, 1999.
2. H. Deng and M. Krstić, “Output-feedback stabilization of stochastic nonlinear systems driven by noise of unknown covariance,” Systems & Control Letters, vol. 39, no. 3, pp. 173–182, 2000.
3. M. Krstić and H. Deng, Stabilization of Nonlinear Uncertain Systems, Communications and Control Engineering Series, Springer, New York, NY, USA, 1998.
4. Y.-G. Liu and J.-F. Zhang, “Practical output-feedback risk-sensitive control for stochastic nonlinear systems with stable zero-dynamics,” SIAM Journal on Control and Optimization, vol. 45, no. 3, pp. 885–926, 2006.
5. Z.-J. Wu, X.-J. Xie, and S.-Y. Zhang, “Stochastic adaptive backstepping controller design by introducing dynamic signal and changing supply function,” International Journal of Control, vol. 79, no. 12, pp. 1635–1646, 2006.
6. S.-J. Liu, J.-F. Zhang, and Z.-P. Jiang, “Decentralized adaptive output-feedback stabilization for large-scale stochastic nonlinear systems,” Automatica, vol. 43, no. 2, pp. 238–251, 2007.
7. Z. Pan, Y. Liu, and S. Shi, “Output feedback stabilization for stochastic nonlinear systems in observer canonical form with stable zero-dynamics,” Science in China. Series F, vol. 44, no. 4, pp. 292–308, 2001.
8. Y. Liu, J. Zhang, and Z. Pan, “Design of satisfaction output feedback controls for stochastic nonlinear systems under quadratic tracking risk-sensitive index,” Science in China. Series F, vol. 46, no. 2, pp. 126–144, 2003.
9. Z.-J. Wu, X.-J. Xie, and S.-Y. Zhang, “Adaptive backstepping controller design using stochastic small-gain theorem,” Automatica, vol. 43, no. 4, pp. 608–620, 2007.
10. X. Yu and X.-J. Xie, “Output feedback regulation of stochastic nonlinear systems with stochastic iISS inverse dynamics,” IEEE Transactions on Automatic Control, vol. 55, no. 2, pp. 304–320, 2010.
11. W.-Q. Li and X.-J. Xie, “Inverse optimal stabilization for stochastic nonlinear systems whose linearizations are not stabilizable,” Automatica, vol. 45, no. 2, pp. 498–503, 2009.
12. W. Li, X. Liu, and S. Zhang, “Further results on adaptive state-feedback stabilization for stochastic high-order nonlinear systems,” Automatica, vol. 48, no. 8, pp. 1667–1675, 2012.
13. S.-J. Liu and J.-F. Zhang, “Output-feedback control of a class of stochastic nonlinear systems with linearly bounded unmeasurable states,” International Journal of Robust and Nonlinear Control, vol. 18, no. 6, pp. 665–687, 2008.
14. N. Duan and X.-J. Xie, “Further results on output-feedback stabilization for a class of stochastic nonlinear systems,” IEEE Transactions on Automatic Control, vol. 56, no. 5, pp. 1208–1213, 2011.
15. C. Qian, “A homogeneous domination approach for global output feedback stabilization of a class of nonlinear systems,” in Proceedings of the American Control Conference (ACC '05), pp. 4708–4715, June 2005.
16. C. Qian and W. Lin, “Non-Lipschitz continuous stabilizers for nonlinear systems with uncontrollable unstable linearization,” Systems & Control Letters, vol. 42, no. 3, pp. 185–200, 2001.
17. J. Polendo and C. Qian, “A generalized framework for global output feedback stabilization of genuinely nonlinear systems,” in Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference (CDC-ECC '05), pp. 2646–2651, December 2005.
18. X.-J. Xie and W.-Q. Li, “Output-feedback control of a class of high-order stochastic nonlinear systems,” International Journal of Control, vol. 82, no. 9, pp. 1692–1705, 2009.
19. W. Li, Y. Jing, and S. Zhang, “Output-feedback stabilization for stochastic nonlinear systems whose linearizations are not stabilizable,” Automatica, vol. 46, no. 4, pp. 752–760, 2010.
20. W. Li, X. Liu, and S. Zhang, “Output-feedback stabilization of high-order stochastic nonlinear systems with more general growth conditions,” in Proceedings of the 30th Chinese Control Conference (CCC '11), pp. 5953–5957, July 2011.
21. V. Kolmanovskii and A. Myshkis, Introduction to the Theory and Applications of Functional Differential Equations, vol. 463 of Mathematics and its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.
22. M. Krstić and H. Deng, Stabilization of Nonlinear Uncertain Systems, Communications and Control Engineering Series, Springer, New York, NY, USA, 1998.
23. R. Z. Khas’minskii, Stochastic Stability of Differential Equations, vol. 7 of Monographs and Textbooks on Mechanics of Solids and Fluids: Mechanics and Analysis, Sijthoff & Noordhoff, Alphen aan den Rijn, The Netherlands, 1980.
24. H. Deng and M. Krstić, “Stochastic nonlinear stabilization. II. Inverse optimality,” Systems & Control Letters, vol. 32, no. 3, pp. 151–159, 1997.