Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 808035, 21 pages
Nonsmooth Adaptive Control Design for a Large Class of Uncertain High-Order Stochastic Nonlinear Systems
School of Control Science and Engineering, Shandong University, Jinan 250061, China
Received 27 September 2011; Accepted 3 November 2011
Academic Editor: Xue-Jun Xie
Copyright © 2012 Jian Zhang and Yungang Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper investigates global stabilization via partial-state feedback and adaptive techniques for a class of high-order stochastic nonlinear systems with numerous uncertainties/unknowns and stochastic zero dynamics. First, two stochastic stability concepts are slightly extended to cover systems with more than one solution. Solving the problem requires overcoming substantial technical difficulties arising from the presence of severe uncertainties/unknowns, unmeasurable zero dynamics, and stochastic noise. By introducing a suitable adaptive updating law for an unknown design parameter and an appropriate control Lyapunov function, and by using the method of adding a power integrator, an adaptive continuous (nonsmooth) partial-state feedback controller without overparameterization is successfully designed, which guarantees that the closed-loop states are bounded and that the original system states eventually converge to zero, both with probability one. A simulation example is provided to illustrate the effectiveness of the proposed approach.
In the past decades, stability and stabilization of stochastic nonlinear systems have been vigorously developed [1–13]. In the early investigations [1–3], some quite fundamental notions were proposed to characterize different types of stochastic stability, and sufficient conditions were provided for each of them. More recently, the works  and [3, 5] considered stabilization problems by using Sontag’s formula and the backstepping method, respectively, and stimulated a series of subsequent works [6–13].
Control designs for classes of high-order nonlinear systems have recently received intensive investigation, which has produced the so-called method of adding a power integrator; this method is based on the idea of the stable domain  and can be viewed as the latest achievement of the traditional backstepping method . By applying this skillful method, smooth state-feedback control design can be achieved when certain severe conditions are imposed on the systems (see, e.g., [16, 17]), while, without those conditions, only nonsmooth state-feedback control can possibly be designed (see, e.g., [18–23]). As a natural extension, the output-feedback case, with less information available, was considered in ; it is a more interesting and difficult subject of intensive study. Another extension is control design for high-order stochastic nonlinear systems, which has attracted plenty of attention because of the presence of stochastic disturbance and cannot be handled by simply extending the methods for deterministic systems (see, e.g., [25–31]). To the authors’ knowledge, this issue has not been fully investigated, and many significant problems on it remain unsolved.
This paper considers the global stabilization of the high-order stochastic nonlinear systems described by (3.1) below, relaxes the assumptions imposed on the systems in [25–28], and obtains much more general results than the previous ones. Owing to the presence of system uncertainties, some nontrivial obstacles are encountered during the control design, which forces many skillful adaptive techniques to be employed in this paper. Furthermore, for the stabilization problem, finding a suitable and available control Lyapunov function is necessary and important. In this paper, a novel control Lyapunov function is first constructed, which is available for the stabilization of system (3.1) and different from those introduced in [25–28], which are unusable here. Then, by using the method of adding a power integrator, an adaptive continuous partial-state feedback controller is obtained which guarantees that, for any initial condition, the original system states are bounded and regulated to the origin almost surely.
The contributions of the paper are highlighted as follows.

(i) The systems under investigation are more general than those studied in the closely related works [25–28]. Different from , the zero dynamics of the systems are unmeasurable and disturbed by stochastic noise. Moreover, the restrictions on the system nonlinear terms are weaker than those in [25–28]; in particular, the assumption in  that the lower bounds of the unknown control coefficients are known has been removed.

(ii) The paper considerably generalizes the results in [17, 22] and, more importantly, no overparameterization problem is present in the adaptive control scheme. In fact, the paper presents the stochastic counterpart of the result in  under quite weak assumptions. In particular, the paper develops the adaptive control scheme without overparameterization (one parameter estimate is enough). Furthermore, it is easy to see that the scheme developed here can be used to eliminate the overparameterization problem in [17, 21, 22] (reducing the number of parameter estimates from  to 1).

(iii) The formulation of the zero dynamics is typical and suggestive. In fact, to make the formulation of the zero dynamics more representative, we adopt partial assumptions on zero dynamics from [8, 9]. It is worth pointing out that the formulation of the gain functions of the stochastic disturbance is somewhat more general than those in [8, 9].
The remainder of this paper is organized as follows. Section 2 presents some necessary notations, definitions, and preliminary results. Section 3 describes the systems to be studied, formulates the control problem, and presents some useful propositions. Section 4 gives the main contributions of this paper and presents the controller design scheme. Section 5 gives a simulation example to demonstrate the effectiveness of the theoretical results. The paper ends with Appendices A and B.
2. Notations and Preliminary Results
Throughout the paper, the following notations are adopted. denotes the real -dimensional space. denotes the set and are odd positive integers, and . denotes the set of all positive real numbers. For a given vector or matrix , denotes its transpose, denotes its trace when is square, and denotes the Euclidean norm when is a vector. denotes the set of all functions with continuous partial derivatives up to the th order. denotes the set of all functions from to , which are continuous, strictly increasing, and vanishing at zero, and denotes the set of all functions which are of class and unbounded.
Consider the general stochastic nonlinear system where is the system state vector with the initial condition ; drift term and diffusion term are piecewise continuous and continuous with respect to the first and second arguments, respectively, and satisfy and ; is an independent standard Wiener process defined on a complete probability space with being a sample space, a -algebra on , and a probability measure.
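The displayed equation for system (2.1) did not survive extraction. An Itô stochastic differential equation with the components named in this paragraph is conventionally written as follows; the symbols $f$, $g$, $w$ below are assumptions chosen to match the textual description (drift term, diffusion term, Wiener process), not recovered from the source:

```latex
\mathrm{d}x = f(t, x)\,\mathrm{d}t + g(t, x)\,\mathrm{d}w, \qquad x(t_0) = x_0 \in \mathbb{R}^n,
```

where $f$ is the drift term, $g$ is the diffusion term, and $w$ is the standard Wiener process.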
Since both and are only continuous, not locally Lipschitz, system (2.1) may not have the solution in the classical sense as in [7, 9]. However, the system always has weak solutions which are essentially different from the classical (or strong) solution since the former may not be unique and may be defined on a different probability space . The following definition gives the rigorous characterization of the weak solution of system (2.1), and for more details of weak solution, we refer the reader to [32, 33].
Definition 2.1. For system (2.1), if there exist a continuous stochastic process defined on a probability space with a filtration and an -dimensional Brownian motion adapted to , such that for all , the integrals below are well defined and satisfies , then is called a weak solution of system (2.1), where denotes either or the finite explosion time of the solution (i.e., ).
To characterize the stability of the origin solution of system (2.1), as well as the common statistical properties of all possible weak solutions of the system, we slightly extend the classical stochastic stability concepts of globally stable in probability and globally asymptotically stable in probability given in . This extension is inspired by the deterministic analogue in  and makes the above two stability concepts applicable to systems with more than one weak solution.
Definition 2.2. The origin solution of system (2.1) is globally stable in probability if, for all , for any weak solution which is defined on its corresponding probability space , there exists a class function such that and globally asymptotically stable in probability if it is globally stable in probability and for any weak solution ,
More importantly, we have the following theorem, which can be regarded as the version of Theorem 2.1 of  in the setting of more than one weak solution, provides the sufficient conditions for the above two extended stability concepts, and consequently will play a key role in the later development. By comparison, one can see that Theorem 2.3 preserves the main conclusion of Theorem 2.1 of  except for the uniqueness of strong solution. By some minor/trivial modifications to the proofs of Theorem 3.19 in  (or that of Lemma 2 in ) and Theorem 2.4 in , it is not difficult to prove Theorem 2.3.
Theorem 2.3. For system (2.1), suppose that there exists a function which is positive definite and radially unbounded, such that where is continuous and nonnegative. Then the origin solution of (2.1) is globally stable in probability. Furthermore, if is positive definite, then for any weak solution defined on probability space , there holds .
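The Lyapunov condition displayed in Theorem 2.3 was lost in extraction. For an Itô SDE with drift $f$ and diffusion $g$, conditions of this kind are conventionally stated via the infinitesimal generator; the notation below is an assumption consistent with the surrounding text, where $W$ denotes the continuous nonnegative function the theorem mentions:

```latex
\mathcal{L}V(x) \;=\; \frac{\partial V}{\partial x}(x)\, f(t,x)
  \;+\; \frac{1}{2}\,\operatorname{Tr}\!\left\{ g^{T}(t,x)\,\frac{\partial^{2} V}{\partial x^{2}}(x)\, g(t,x) \right\}
  \;\le\; -W(x).
```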
Proof. From Theorem 2.3 in [33, page 159], it follows that system (2.1) has at least one weak solution. We use  to denote any one of the weak solutions, which is defined on its corresponding probability space and on , where denotes either or the finite explosion time of the weak solution .
First, quite similarly to the proof of Theorem 3.19 in [35, pages 95-96] or that of Lemma 2 in , we can prove that  (namely, all weak solutions of system (2.1) are defined on ) and that the origin solution of system (2.1) is globally stable in probability.
Second, very similarly to the proof of Theorem 2.4 in [37, pages 114-115], we can show that if  is positive definite, then for any weak solution , it holds that .
We next provide three lemmas which will play an important role in the later development. In fact, Lemma 2.4 can be directly deduced from the well-known Young’s inequality, and the proofs of Lemmas 2.5 and 2.6 can be found in [19, 20].
Lemma 2.4. For any , , , there holds
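The displayed inequality of Lemma 2.4 was lost in extraction. Since the text says the lemma follows from Young's inequality, a standard parameterized form (the symbols $a, b, p, q, \varepsilon$ here are assumptions, not the source's notation) is $ab \le \varepsilon^{p} a^{p}/p + b^{q}/(q\,\varepsilon^{q})$ for $a, b \ge 0$, conjugate exponents $1/p + 1/q = 1$ with $p, q > 1$, and any $\varepsilon > 0$. A quick numeric sanity check in Python:

```python
import itertools

def young_rhs(a, b, p, eps):
    """Right-hand side of the parameterized Young inequality."""
    q = p / (p - 1)  # conjugate exponent: 1/p + 1/q = 1
    return (eps ** p) * a ** p / p + b ** q / (q * eps ** q)

# Check a*b <= rhs over a grid of nonnegative values and exponents.
ok = all(
    a * b <= young_rhs(a, b, p, eps) + 1e-12
    for a, b, p, eps in itertools.product(
        [0.0, 0.5, 1.0, 3.0], [0.0, 0.7, 2.0], [1.5, 2.0, 4.0], [0.3, 1.0, 2.5]
    )
)
print(ok)  # True
```

The free parameter eps is what makes this form useful in control design: it lets one term in a cross-product be scaled down at the cost of inflating the other.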
Lemma 2.5. For any continuous function , there are smooth functions , , , and such that
Lemma 2.6. For any , and any , , there hold and, in particular, if , .
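The displayed inequalities of Lemma 2.6 were also lost. Two power inequalities commonly stated at this point in the adding-a-power-integrator literature (the exact statement in the source may differ) are $(|x|+|y|)^p \le 2^{p-1}(|x|^p+|y|^p)$ for $p \ge 1$, and the subadditivity bound $(|x|+|y|)^p \le |x|^p+|y|^p$ for $0 < p \le 1$. A quick numeric check of both in Python:

```python
def convexity_bound(x, y, p):
    """(|x| + |y|)**p <= 2**(p - 1) * (|x|**p + |y|**p), valid for p >= 1."""
    return (abs(x) + abs(y)) ** p <= 2 ** (p - 1) * (abs(x) ** p + abs(y) ** p) + 1e-12

def subadditivity(x, y, p):
    """(|x| + |y|)**p <= |x|**p + |y|**p, valid for 0 < p <= 1."""
    return (abs(x) + abs(y)) ** p <= abs(x) ** p + abs(y) ** p + 1e-12

vals = [-2.0, -0.5, 0.0, 1.0, 3.0]
print(all(convexity_bound(x, y, p) for x in vals for y in vals for p in [1, 5 / 3, 3, 7]))
print(all(subadditivity(x, y, p) for x in vals for y in vals for p in [1 / 3, 0.5, 1]))
```

Bounds of this type are what allow powers of sums (which arise when virtual controllers are substituted) to be split term by term during the recursive design.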
3. System Model and Control Objective
In this paper, we consider the global adaptive stabilization for a class of uncertain high-order stochastic nonlinear systems in the following form: where is the unmeasurable system state vector, called the zero dynamics; and are the measurable system state vector and the control input, respectively; the system initial condition is , ; , are called the system high orders; , , and , , are unknown continuous functions, called the system drift and diffusion terms, respectively; are uncertain and continuous, called the control coefficients; is an independent standard Wiener process defined on a complete probability space with being a sample space, a -algebra on , and a probability measure. Besides, for simplicity of expression later, let and .
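The display of system (3.1) did not survive extraction. A standard lower-triangular form featuring the components this paragraph names (zero dynamics $z$, measurable states $x_i$, high orders $p_i$, control coefficients $d_i$, drift terms $f_i$, diffusion terms $g_i$, input $u$, Wiener process $w$) is typically written as follows; the exact arguments of each function are assumptions, not recovered from the source:

```latex
\begin{aligned}
\mathrm{d}z   &= f_0(z, x)\,\mathrm{d}t + g_0^{T}(z, x)\,\mathrm{d}w, \\
\mathrm{d}x_i &= \bigl( d_i(\cdot)\, x_{i+1}^{p_i} + f_i(\cdot) \bigr)\,\mathrm{d}t
                 + g_i^{T}(\cdot)\,\mathrm{d}w, \qquad i = 1, \dots, n, \quad x_{n+1} := u.
\end{aligned}
```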
Differential equations (3.1) describe a large class of uncertain high-order stochastic nonlinear systems, for which some tedious technical difficulties will be encountered in control design mainly due to the presence of the stochastic zero dynamics and the uncertainties/unknowns in the control coefficients, the system drift, and diffusion terms. In the recent works [25–28], with measurable inverse dynamics or deterministic zero dynamics and by imposing somewhat severe restrictions on ’s, ’s, ’s, and ’s in system (3.1), smooth stabilizing controllers have been designed. The purpose of this paper is to relax these restrictions and solve the stabilization problem of the more general system (3.1) under the following three assumptions.
Assumption 3.1. There exists a function such that where , are functions; , , and are continuous functions; and is an unknown constant.
Assumption 3.2. For each , and satisfy where and are known functions with and ; and are unknown constants; is some positive integer; ’s satisfy .
Assumption 3.3. For each , , its sign is known, and there are unknown constants and , known smooth functions , and such that where when .
The above three assumptions are common and similar to those usually imposed on high-order nonlinear systems (see, e.g., [17, 20]). Based on Assumption 3.2 and Lemma 2.5, we obtain the following proposition, which dominates the growth properties of ’s and ’s and will play a key role in overcoming the obstacle caused by system uncertainties/unknowns. The proof is omitted here since it is quite similar to that of Proposition 2 in .
Proposition 3.4. For each , there exist smooth functions , , and , such that where is obviously an unknown constant.
Remark 3.5. It is worth pointing out that in the recent related work , to ensure a continuously differentiable output-feedback control design, somewhat stronger assumptions have been imposed on the system drift and diffusion terms. For example, different from Proposition 1, Assumption 1 in  requires that the powers of ,  be larger than one in the upper bound estimates of and (the case of and is more evident).
Assumption 3.6. For some , there exist and which are continuous, positive, and monotone increasing functions satisfying and , such that where denotes the inverse function of .
To better understand the significance of the control problem to be studied, and in particular the generality and distinct nature of system (3.1) compared with the existing works, we make the following four remarks, corresponding to the above four assumptions, respectively.
Remark 3.7. Assumption 3.1 indicates that the unmeasurable zero dynamics possesses a stochastic ISS (input-to-state stability) type property, as in [8, 9], and the restriction on is somewhat weaker than that in [8, 9] owing to the additional term in the estimation of .
Remark 3.8. Assumption 3.2 demonstrates that the power of in must be strictly less than the corresponding system high order. This is necessary to achieve the stabilization of the system by using the domination approach of . Moreover, thanks to there being no further restrictions on ’s or ’s, Assumption 3.2 is more easily satisfied than its counterparts in [25–28].
Remark 3.9. Assumption 3.3 shows that the control coefficients ’s never vanish, since otherwise system (3.1) would be uncontrollable somewhere. Besides, from this assumption, one can easily see that the signs of ’s remain unchanged. Furthermore, the unknown constant “” makes system (3.1) more general than those studied in [25–28], where the lower bounds of the uncertain control coefficients ’s are required to be precisely known.
Remark 3.10. In fact, Assumption 3.6 is similar to the corresponding one in [8, 9]. From the above formulation of the system, it can be seen that the unwanted effects of , that is, “” in Assumption 3.1 and “” in Proposition 3.4, can only be dominated by the term “” in Assumption 3.1, and therefore some requirements should be imposed on these three terms. For the sake of stabilization, we make Assumption 3.6, which clearly includes a special case where , since in that case and can be constants and (3.6) obviously holds.
As a recent development on high-order control systems, the works [17, 21, 22] proposed a novel adaptive control technique, which is powerful enough to overcome the technical difficulties in stabilizing system (3.1) caused by the weaker conditions on the unknown control coefficients. Inspired by these works, this paper extends the stabilization results in [17, 22] from deterministic systems to stochastic ones, under considerably weaker assumptions than those in [25–28]. More importantly, rather than a simple generalization, motivated by the novel adaptive technique for deterministic nonlinear systems , we develop an adaptive control scheme without the overparameterization that occurred in [17, 21, 22]. (In fact, the number of parameter estimates is reduced from  to 1.)
Specifically, the main objective of this paper is to design a controller in the following form: where , and is a smooth function while is a continuous function, such that all closed-loop states are bounded almost surely and, furthermore, the original system is globally asymptotically stable in probability.
Finally, for the sake of the later control design, we obtain the following proposition by the technique of changing supply functions [20, 38]. The proof of Proposition 3.11 is mainly inspired by [9, 38] and placed in Appendix A.
Proposition 3.11. Define , and . Then, under Assumptions 3.1 and 3.6, one can construct a suitable which is , monotone increasing, such that (i)  is , positive definite, and radially unbounded; (ii) there exist a smooth function and an unknown constant such that
4. Partial-State Feedback Adaptive Stabilizing Control
Since the signs of ’s are known and remain unchanged, without loss of generality, suppose , . The following theorem summarizes the main result of this paper.
Theorem 4.1. Consider system (3.1) and suppose Assumptions 3.1–3.3 and 3.6 hold. Then there exists an adaptive continuous partial-state feedback controller in the form (3.7), such that (i) the origin solution of the closed-loop system is globally stable in probability; (ii) the states of the original system converge to the origin, and the other states of the closed-loop system converge to some finite value, both with probability one.
About the main theorem, we have the following remark.
Remark 4.2. From Claim (i) and the former part of Claim (ii), we easily know that the original system is globally asymptotically stable in probability.
Proof. To complete the proof, we will first construct an adaptive continuous controller in the form (3.7) for system (3.1). Then by applying Theorem 2.3, it will be shown that the theorem holds for the closed-loop system.
First, let us define , where and are the same as in Assumption 3.3 and Proposition 3.11, respectively. The estimate of is denoted by , for which the following updating law will be designed: where is a to-be-determined nonnegative smooth function which ensures that , for all .
We would like to give some inequalities on the above-defined  for later use in the control design. Noting (see Proposition 3.4) and , for all , it is clear that . Moreover, since , , there hold , , and hence and .
Remark 4.3. As will be seen, mainly because the definition of the new unknown parameter is essentially different from that in [17, 21, 22], the overparameterization problem that occurred in those works is successfully overcome.
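The single-estimate idea can be illustrated by a deterministic toy analogue; this is a sketch under assumptions of our own (scalar plant, simple cubic nonlinearity), not the paper's stochastic design. For the plant $\dot{x} = \theta x^3 + u$ with $\theta$ unknown, one estimate $\hat{\theta}$ with updating law $\dot{\hat{\theta}} = x^4$ and control $u = -x - \hat{\theta} x^3$ gives $\dot{V} = -x^2$ for $V = x^2/2 + (\theta - \hat{\theta})^2/2$, so $x \to 0$ while $\hat{\theta}$ stays bounded:

```python
def adaptive_sim(theta=2.0, x0=1.0, dt=1e-3, steps=30_000):
    """Toy deterministic analogue of a single-estimate adaptive scheme:
    plant   x' = theta*x**3 + u   (theta unknown to the controller),
    control u  = -x - hat*x**3,
    update  hat' = x**4           (hat(0) = 0, so hat stays nonnegative)."""
    x, hat = x0, 0.0
    for _ in range(steps):
        u = -x - hat * x ** 3
        x += (theta * x ** 3 + u) * dt   # forward-Euler step of the plant
        hat += (x ** 4) * dt             # parameter-estimate update
    return x, hat

x_end, hat_end = adaptive_sim()
print(abs(x_end) < 1e-3 and hat_end >= 0.0)  # True
```

Note that the estimate need not converge to the true $\theta$; boundedness of $\hat{\theta}$ together with regulation of $x$ is all the Lyapunov argument delivers, mirroring Claim (ii) of Theorem 4.1.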
Next, we introduce the following new variables: and the actual control law , where , are continuous functions satisfying , for all . In the following, a recursive design procedure is provided to construct the virtual and actual controllers ’s. For completing the control design, we also introduce a sequence of functions as follows: Similar to the corresponding proof in , it is easy to verify that, for each , is in all its arguments, when , when , and as .
Step 1. Choose to be the candidate Lyapunov function for this step, where denotes the parameter estimation error. Then, along the trajectories of system (3.1), we have
By Proposition 3.4 and Lemma 2.4, we have the following estimates: from which, together with (4.4), Proposition 3.11, and the facts , , it follows that where and . It will be seen from the later design steps that a series of nonnegative smooth functions , , is introduced so as to finally obtain the updating law of , that is, .
Mainly based on (4.6), the virtual continuous controller is chosen such that and such choice makes (4.6) become
Remark 4.4. It is necessary to mention that in the first design step, functions and have been provided with explicit expressions in order to deduce the completely explicit virtual controller . However, in the later design steps, sometimes for the sake of briefness, we will not explicitly write out the functions which are easily defined.
Inductive Steps. Suppose that the first design steps have been completed. In other words, we have found appropriate functions , , satisfying and for known nonnegative smooth functions , , , such that for the candidate Lyapunov function .
Let be the candidate Lyapunov function for step . Then, along the trajectories of system (3.1), we have
Just as in the first step, in order to design , one should appropriately estimate the last four terms on the right-hand side of the above equality and the last term on the right-hand side of (4.9), as formulated in the following proposition, whose proof is placed in Appendix B.
Proposition 4.5. There exists a nonnegative smooth function , such that
Then, by (4.9), (4.10), and Proposition 4.5, we have where .
Observing that a nonnegative smooth function can be easily constructed such that if we design the continuous virtual controller such that (obviously, is a strictly positive smooth function), then (4.12) becomes
Noting the arguments in the last design step, we choose the adaptive actual continuous controller as follows: from which, (4.15) with and the aforementioned , , it follows that where is a smooth function.
With the adaptive controller (4.16) in loop, we know that is the origin solution of the closed-loop system. Thus, from Theorem 2.3 and (4.17), it follows that the origin solution is globally stable in probability; furthermore, since is positive definite which can be deduced from the expressions of and , it follows that , and in terms of the similar proof of Theorem 3.1 in , one can see that the state converges to some finite value with probability one.
We would like to point out that the adaptive control scheme given above can be used to remove the overparameterization in the recent works [17, 21, 22], where the number of parameter estimates is not less than . To this end, it suffices to introduce another new unknown parameter like the one defined before; the design steps are quite similar to those developed earlier and need no further discussion.
5. A Simulation Example
Consider the following three-dimensional uncertain high-order stochastic nonlinear system: where is an unknown constant.
It is easy to verify that system (5.1) satisfies Assumptions 3.1 and 3.6 with , , and . Assumption 3.2 holds with , , , and . Assumption 3.3 holds with , and . Therefore, in terms of the design steps developed in Section 4, an adaptive partial-state feedback stabilizing controller can be explicitly given.
Let and the initial states be , , and . Using MATLAB, Figures 1 and 2 are obtained to exhibit the trajectories of the closed-loop system states. (To show the transient behavior more clearly, logarithmic -coordinates have been adopted.) From these figures, one can see that , , and are regulated to zero while converges to a finite value, all with probability one.
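Since the closed-loop equations of system (5.1) did not survive extraction, the following minimal Python sketch only illustrates the kind of simulation described here: Euler–Maruyama integration of a scalar stochastic system under a continuous but nonsmooth (fractional-power) feedback, with the state regulated toward zero. The plant, gain, and noise intensity are assumptions chosen for illustration, not the paper's example:

```python
import math
import random

def simulate(x0=1.0, dt=1e-3, steps=20_000, seed=1):
    """Euler-Maruyama simulation of dx = u dt + 0.1*x dw under the
    continuous but nonsmooth feedback u = -sign(x)*|x|**(1/3)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        u = -math.copysign(abs(x) ** (1 / 3), x)  # nonsmooth at x = 0
        dw = rng.gauss(0.0, math.sqrt(dt))        # Wiener increment
        x += u * dt + 0.1 * x * dw
    return x

print(abs(simulate()) < 0.05)  # True: state driven near zero
```

The multiplicative noise term 0.1*x*dw vanishes at the origin, which is why the state can be regulated to (a neighborhood of) zero rather than merely kept bounded; this mirrors the vanishing-diffusion structure assumed for the paper's class of systems.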
6. Concluding Remarks
In this paper, the partial-state feedback stabilization problem has been investigated for a class of high-order stochastic nonlinear systems under weaker assumptions than those in the existing works. By introducing a novel adaptive updating law and an appropriate control Lyapunov function, and using the method of adding a power integrator, we have designed an adaptive continuous partial-state feedback controller without overparameterization and given a simulation example to illustrate the effectiveness of the control design method. It has been shown that, with the designed controller in the loop, all the original system states are regulated to zero and the other closed-loop states are bounded almost surely for any initial condition. Along this direction, there are many other interesting research problems, such as output-feedback control for the systems studied in this paper, which are now under our further investigation.
A. The Proof of Proposition 3.11
It is easy to verify that the first assertion of Proposition 3.11 holds when is chosen to be positive, , and monotone increasing. Thus, in the rest of the proof, we will find such to guarantee the correctness of the second assertion.
(i) Case of
For this case, from (A.1), we have
Moreover, noting the above definitions of and using integration by parts, we have, for all , which, together with (A.4), implies that , for all ; therefore, is positive, , and monotone increasing on .
(ii) Case of
In this case, it is not hard to find a function and an unknown constant satisfying . Then from (A.1), we get
Choosing the same as in the first case and in view of (A.6), we have From this, (A.4), and (A.8), it follows that which shows that the second assertion of Proposition 3.11 holds for this case by letting and . (Since when , and otherwise , from the fact that and are positive and monotone increasing functions on , it follows that .)
B. The Proof of Proposition 4.5
We first prove the following proposition.
Proposition B.1. For , there exist smooth nonnegative functions , , and , such that where , , and is the same as in Proposition 3.4.
Proof. The first claim obviously holds when , because of the following inequality:
and can be easily proven in the same way as Lemma 3.4 in .
Based on Lemma 2.4 and Proposition 3.4, the proof for the last three claims is straightforward (though somewhat tedious) and quite similar to the proof of Lemma 3.5 in  and is omitted here.
From Propositions 3.4 and B.1, we have where and whereafter , are nonnegative smooth functions that can be easily obtained by Lemma 2.4; for notational convenience, their explicit expressions are omitted.
This work was supported by the National Natural Science Foundations of China (60974003, 61143011), the Program for New Century Excellent Talents in University of China (NCET-07-0513), the Key Science and Technical Foundation of Ministry of Education of China (108079), the Excellent Young and Middle-Aged Scientist Award Grant of Shandong Province of China (2007BS01010), the Natural Science Foundation for Distinguished Young Scholar of Shandong Province of China (JQ200919), and the Independent Innovation Foundation of Shandong University (2009JQ008).
- H. J. Kushner, Stochastic Stability and Control, Mathematics in Science and Engineering, Academic Press, New York, NY, USA, 1967.
- R. Z. Has’minskii, Stochastic Stability of Differential Equations, vol. 7 of Monographs and Textbooks on Mechanics of Solids and Fluids: Mechanics and Analysis, Sijthoff & Noordhoff, Alphen aan den Rijn, The Netherlands, 1980.
- M. Krstić and H. Deng, Stabilization of Nonlinear Uncertain Systems, Communications and Control Engineering Series, Springer, New York, NY, USA, 1998.
- P. Florchinger, “Lyapunov-like techniques for stochastic stability,” SIAM Journal on Control and Optimization, vol. 33, no. 4, pp. 1151–1169, 1995.
- Z. Pan and T. Başar, “Backstepping controller design for nonlinear stochastic systems under a risk-sensitive cost criterion,” SIAM Journal on Control and Optimization, vol. 37, no. 3, pp. 957–995, 1999.
- H. Deng and M. Krstić, “Output-feedback stochastic nonlinear stabilization,” IEEE Transactions on Automatic Control, vol. 44, no. 2, pp. 328–333, 1999.
- H. Deng, M. Krstić, and R. J. Williams, “Stabilization of stochastic nonlinear systems driven by noise of unknown covariance,” IEEE Transactions on Automatic Control, vol. 46, no. 8, pp. 1237–1253, 2001.
- S.-J. Liu, J.-F. Zhang, and Z.-P. Jiang, “Decentralized adaptive output-feedback stabilization for large-scale stochastic nonlinear systems,” Automatica, vol. 43, no. 2, pp. 238–251, 2007.
- S.-J. Liu, Z.-P. Jiang, and J.-F. Zhang, “Global output-feedback stabilization for a class of stochastic non-minimum-phase nonlinear systems,” Automatica, vol. 44, no. 8, pp. 1944–1957, 2008.
- Y. Liu, Z. Pan, and S. Shi, “Output feedback control design for strict-feedback stochastic nonlinear systems under a risk-sensitive cost,” IEEE Transactions on Automatic Control, vol. 48, no. 3, pp. 509–513, 2003.
- Y. Liu and J.-F. Zhang, “Reduced-order observer-based control design for nonlinear stochastic systems,” Systems & Control Letters, vol. 52, no. 2, pp. 123–135, 2004.
- Y. Liu and J.-F. Zhang, “Practical output-feedback risk-sensitive control for stochastic nonlinear systems with stable zero-dynamics,” SIAM Journal on Control and Optimization, vol. 45, no. 3, pp. 885–926, 2006.
- Z. Pan, Y. Liu, and S. Shi, “Output feedback stabilization for stochastic nonlinear systems in observer canonical form with stable zero-dynamics,” Science in China (Series F), vol. 44, no. 4, pp. 292–308, 2001.
- W. Lin and C. Qian, “Adding one power integrator: a tool for global stabilization of high-order lower-triangular systems,” Systems & Control Letters, vol. 39, no. 5, pp. 339–351, 2000.
- M. Krstić, I. Kanellakopoulos, and P. V. Kokotović, Nonlinear and Adaptive Control Design, John Wiley & Sons, New York, NY, USA, 1995.
- W. Lin and C. Qian, “Adaptive control of nonlinearly parameterized systems: the smooth feedback case,” IEEE Transactions on Automatic Control, vol. 47, no. 8, pp. 1249–1266, 2002.
- Z. Sun and Y. Liu, “Adaptive state-feedback stabilization for a class of high-order nonlinear uncertain systems,” Automatica, vol. 43, no. 10, pp. 1772–1783, 2007.
- W. Lin and C. Qian, “A continuous feedback approach to global strong stabilization of nonlinear systems,” IEEE Transactions on Automatic Control, vol. 46, no. 7, pp. 1061–1079, 2001.
- W. Lin and C. Qian, “Adaptive control of nonlinearly parameterized systems: a nonsmooth feedback framework,” IEEE Transactions on Automatic Control, vol. 47, no. 5, pp. 757–774, 2002.
- W. Lin and R. Pongvuthithum, “Nonsmooth adaptive stabilization of cascade systems with nonlinear parameterization via partial-state feedback,” IEEE Transactions on Automatic Control, vol. 48, no. 10, pp. 1809–1816, 2003.
- Z. Sun and Y. Liu, “Adaptive practical output tracking control for high-order nonlinear uncertain systems,” Acta Automatica Sinica, vol. 34, no. 8, pp. 984–988, 2008.
- Z. Sun and Y. Liu, “Adaptive stabilisation for a large class of high-order uncertain non-linear systems,” International Journal of Control, vol. 82, no. 7, pp. 1275–1287, 2009.
- J. Zhang and Y. Liu, “A new approach to adaptive control design without overparametrization for a class of uncertain nonlinear systems,” Science China Information Sciences, vol. 54, no. 7, pp. 1419–1429, 2011.
- C. Qian and W. Lin, “Recursive observer design, homogeneous approximation, and nonsmooth output feedback stabilization of nonlinear systems,” IEEE Transactions on Automatic Control, vol. 51, no. 9, pp. 1457–1471, 2006.
- X. Xie and J. Tian, “State-feedback stabilization for high-order stochastic nonlinear systems with stochastic inverse dynamics,” International Journal of Robust and Nonlinear Control, vol. 17, no. 14, pp. 1343–1362, 2007.
- J. Tian and X. Xie, “Adaptive state-feedback stabilization for more general high-order stochastic nonlinear systems,” Acta Automatica Sinica, vol. 34, no. 9, pp. 1188–1191, 2008.
- X. Xie and J. Tian, “Adaptive state-feedback stabilization of high-order stochastic systems with nonlinear parameterization,” Automatica, vol. 45, no. 1, pp. 126–133, 2009.
- W. Li and X. Xie, “Inverse optimal stabilization for stochastic nonlinear systems whose linearizations are not stabilizable,” Automatica, vol. 45, no. 2, pp. 498–503, 2009.
- X. Xie and N. Duan, “Output tracking of high-order stochastic nonlinear systems with application to benchmark mechanical system,” IEEE Transactions on Automatic Control, vol. 55, no. 5, pp. 1197–1202, 2010.
- W. Li, X. Xie, and S. Zhang, “Output-feedback stabilization of stochastic high-order nonlinear systems under weaker conditions,” SIAM Journal on Control and Optimization, vol. 49, no. 3, pp. 1262–1282, 2011.
- X. Xie, N. Duan, and X. Yu, “State-feedback control of high-order stochastic nonlinear systems with SiISS inverse dynamics,” IEEE Transactions on Automatic Control, vol. 56, no. 8, pp. 1921–1926, 2011.
- F. C. Klebaner, Introduction to Stochastic Calculus with Applications, Imperial College Press, London, UK, 2nd edition, 2005.
- N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North-Holland, Amsterdam, The Netherlands, 2nd edition, 1981.
- J. Kurzweil, “On the inversion of Lyapunov’s theorem on the stability of motion,” American Mathematical Society Translations 2, vol. 24, pp. 19–77, 1956.
- X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, UK, 2006.
- X. Yu and X. Xie, “Output feedback regulation of stochastic nonlinear systems with stochastic iISS inverse dynamics,” IEEE Transactions on Automatic Control, vol. 55, no. 2, pp. 304–320, 2010.
- X. Mao, Stochastic Differential Equations and Their Applications, Horwood Publishing Series in Mathematics & Applications, Horwood, Chichester, UK, 1997.
- E. Sontag and A. Teel, “Changing supply functions in input/state stable systems,” IEEE Transactions on Automatic Control, vol. 40, no. 8, pp. 1476–1478, 1995.