Abstract

This paper investigates global stabilization via partial-state feedback and adaptive techniques for a class of high-order stochastic nonlinear systems with considerable uncertainties/unknowns and stochastic zero dynamics. First, two stochastic stability concepts are slightly extended to cover systems with more than one solution. Solving the problem requires overcoming substantial technical difficulties arising from the severe uncertainties/unknowns, the unmeasurable zero dynamics, and the stochastic noise. By introducing a suitable adaptive update law for an unknown design parameter and an appropriate control Lyapunov function, and by using the method of adding a power integrator, an adaptive continuous (nonsmooth) partial-state feedback controller without overparameterization is successfully designed, which guarantees that the closed-loop states are bounded and that the original system states eventually converge to zero, both with probability one. A simulation example illustrates the effectiveness of the proposed approach.

1. Introduction

In the past decades, stability and stabilization for stochastic nonlinear systems have been vigorously developed [1–13]. In the early investigations in this area [1, 2], some quite fundamental notions were proposed to characterize different types of stochastic stability, and sufficient conditions for each were provided. More recently, works [4] and [3, 5] considered stabilization problems by using Sontag’s formula and the backstepping method, respectively, and stimulated a series of subsequent works [6–13].

Control design for classes of high-order nonlinear systems has received intense investigation recently and has produced the so-called method of adding a power integrator, which is based on the idea of the stable domain [14] and can be viewed as the latest achievement of the traditional backstepping method [15]. By applying this skillful method, smooth state-feedback control design can be achieved when rather severe conditions are imposed on the systems (see, e.g., [16, 17]), while, without those conditions, only nonsmooth state-feedback control can possibly be designed (see, e.g., [18–23]). As a natural extension, the output-feedback case, in which less information is available, was considered in [24] and is a more interesting and difficult subject of intensive study. Another extension is control design for high-order stochastic nonlinear systems, which attracts plenty of attention because of the presence of stochastic disturbance and cannot be solved by simply extending the methods for deterministic systems (see, e.g., [25–31]). To the authors’ knowledge, this issue has not yet been richly investigated, and many significant problems remain unsolved.

This paper considers global stabilization for the high-order stochastic nonlinear systems described by (3.1) below, relaxes the assumptions imposed on the systems in [25–28], and obtains much more general results than the previous ones. Owing to the presence of system uncertainties, some nontrivial obstacles are encountered during control design, which force many skillful adaptive techniques to be employed in this paper. Furthermore, for the stabilization problem, finding a suitable and available control Lyapunov function is necessary and important. In this paper, a novel control Lyapunov function is first successfully constructed; it is available for the stabilization of system (3.1) and differs from those introduced in [25–28], which are unusable here. Then, by using the method of adding a power integrator, an adaptive continuous partial-state feedback controller is successfully obtained which guarantees that, for any initial condition, the original system states are bounded and regulated to the origin almost surely.

The contributions of the paper are highlighted as follows.

(i) The systems under investigation are more general than those studied in the closely related works [25–28]. Different from [26], the zero dynamics of the systems are unmeasurable and disturbed by stochastic noise. Moreover, the restrictions on the system nonlinear terms are weaker than those in [25–28]; in particular, the assumption in [27] that the lower bounds of the unknown control coefficients are known has been removed.

(ii) The paper considerably generalizes the results in [17, 22], and, more importantly, no overparameterization problem is present in the adaptive control scheme. In fact, the paper presents the stochastic counterpart of the result in [22] under quite weak assumptions. In particular, the paper develops an adaptive control scheme without overparameterization (one parameter estimate is enough). Furthermore, it is easy to see that the scheme developed here can be used to eliminate the overparameterization problem in [17, 21, 22] (reducing the number of parameter estimates to one).

(iii) The formulation of the zero dynamics is typical and suggestive. In fact, to make the formulation of the zero dynamics more representative, we adopt partial assumptions on zero dynamics from [8, 9]. It is worth pointing out that the formulation of the gain functions of the stochastic disturbance is somewhat more general than those in [8, 9].

The remainder of this paper is organized as follows. Section 2 presents some necessary notations, definitions, and preliminary results. Section 3 describes the systems to be studied, formulates the control problem, and presents some useful propositions. Section 4 gives the main contributions of this paper and presents the design scheme of the controller. Section 5 gives a simulation example to demonstrate the effectiveness of the theoretical results. The paper ends with Appendices A and B.

2. Notations and Preliminary Results

Throughout the paper, the following notations are adopted. $\mathbb{R}^{n}$ denotes the real $n$-dimensional space. $\mathbb{R}_{\mathrm{odd}}^{\geq 1}$ denotes the set of ratios $p_{1}/p_{2}$ such that $p_{1}$ and $p_{2}$ are odd positive integers and $p_{1} \geq p_{2}$. $\mathbb{R}^{+}$ denotes the set of all positive real numbers. For a given vector or matrix $X$, $X^{T}$ denotes its transpose, $\mathrm{Tr}\{X\}$ denotes its trace when $X$ is square, and $|X|$ denotes the Euclidean norm when $X$ is a vector. $\mathcal{C}^{i}$ denotes the set of all functions with continuous partial derivatives up to the $i$th order. $\mathcal{K}$ denotes the set of all functions from $\mathbb{R}^{+}\cup\{0\}$ to $\mathbb{R}^{+}\cup\{0\}$ which are continuous, strictly increasing, and vanishing at zero, and $\mathcal{K}_{\infty}$ denotes the set of all functions which are of class $\mathcal{K}$ and unbounded.

Consider the general stochastic nonlinear system
$$dx = f(t, x)\,dt + g(t, x)\,d\omega, \qquad (2.1)$$
where $x \in \mathbb{R}^{n}$ is the system state vector with the initial condition $x(0) = x_{0}$; the drift term $f$ and the diffusion term $g$ are piecewise continuous and continuous with respect to the first and second arguments, respectively, and satisfy $f(t, 0) \equiv 0$ and $g(t, 0) \equiv 0$; $\omega$ is an independent standard Wiener process defined on a complete probability space $(\Omega, \mathcal{F}, P)$, with $\Omega$ being a sample space, $\mathcal{F}$ a $\sigma$-algebra on $\Omega$, and $P$ a probability measure.

Since both $f$ and $g$ are only continuous, not locally Lipschitz, system (2.1) may not have a solution in the classical sense as in [7, 9]. However, the system always has weak solutions, which are essentially different from classical (or strong) solutions since the former may not be unique and may be defined on a different probability space. The following definition gives a rigorous characterization of the weak solution of system (2.1); for more details on weak solutions, we refer the reader to [32, 33].
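For instance, even in the deterministic setting, the scalar equation $\dot{x} = 3x^{2/3}$ with $x(0) = 0$, whose right-hand side is continuous but not locally Lipschitz, admits both $x(t) \equiv 0$ and $x(t) = t^{3}$ as solutions; this illustrates why uniqueness cannot be expected under mere continuity.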

Definition 2.1. For system (2.1), if there exist a continuous stochastic process $x$ defined on a probability space $(\bar{\Omega}, \bar{\mathcal{F}}, \bar{P})$ with a filtration $\{\bar{\mathcal{F}}_{t}\}_{t \geq 0}$ and a Brownian motion $\bar{\omega}$ adapted to $\{\bar{\mathcal{F}}_{t}\}_{t \geq 0}$, such that for all $0 \leq t < \rho$ the integrals below are well defined and $x$ satisfies
$$x(t) = x(0) + \int_{0}^{t} f(s, x(s))\,ds + \int_{0}^{t} g(s, x(s))\,d\bar{\omega}(s),$$
then $x$ is called a weak solution of system (2.1), where $\rho$ denotes either $+\infty$ or the finite explosion time of the solution (i.e., $\limsup_{t \to \rho^{-}} |x(t)| = +\infty$).

To characterize the stability of the origin solution of system (2.1), as well as the common statistical properties of all possible weak solutions of the system, we slightly extend the classical stochastic stability concepts of globally stable in probability and globally asymptotically stable in probability given in [7]. This extension is inspired by the deterministic analog in [34] and makes the above two stability concepts applicable to systems with more than one weak solution.

Definition 2.2. The origin solution of system (2.1) is globally stable in probability if, for all $\varepsilon > 0$ and for any weak solution $x$ defined on its corresponding probability space $(\bar{\Omega}, \bar{\mathcal{F}}, \bar{P})$, there exists a class $\mathcal{K}$ function $\gamma(\cdot)$ such that $\bar{P}\{|x(t)| < \gamma(|x_{0}|)\} \geq 1 - \varepsilon$ for all $t \geq 0$ and $x_{0} \neq 0$, and globally asymptotically stable in probability if it is globally stable in probability and, for any weak solution $x$, $\bar{P}\{\lim_{t \to \infty} |x(t)| = 0\} = 1$.

More importantly, we have the following theorem, which can be regarded as the version of Theorem 2.1 of [7] in the setting of more than one weak solution; it provides sufficient conditions for the above two extended stability concepts and consequently will play a key role in the later development. By comparison, one can see that Theorem 2.3 preserves the main conclusion of Theorem 2.1 of [7] except for the uniqueness of the strong solution. By some minor modifications to the proofs of Theorem 3.19 in [35] (or that of Lemma 2 in [36]) and Theorem 2.4 in [37], it is not difficult to prove Theorem 2.3.

Theorem 2.3. For system (2.1), suppose that there exists a $\mathcal{C}^{1,2}$ function $V(t, x)$ which is positive definite and radially unbounded, such that $\mathcal{L}V(t, x) \leq -W(x)$, where $W(x)$ is continuous and nonnegative. Then the origin solution of (2.1) is globally stable in probability. Furthermore, if $W(x)$ is positive definite, then for any weak solution $x$ defined on its probability space $(\bar{\Omega}, \bar{\mathcal{F}}, \bar{P})$, there holds $\bar{P}\{\lim_{t \to \infty} x(t) = 0\} = 1$.
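Here $\mathcal{L}$ denotes, as usual, the infinitesimal generator associated with system (2.1): for a $\mathcal{C}^{1,2}$ function $V(t, x)$,
$$\mathcal{L}V(t, x) = \frac{\partial V}{\partial t} + \frac{\partial V}{\partial x}\, f(t, x) + \frac{1}{2}\, \mathrm{Tr}\left\{ g^{T}(t, x)\, \frac{\partial^{2} V}{\partial x^{2}}\, g(t, x) \right\}.$$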

Proof. From Theorem 2.3 in [33, page 159], it follows that system (2.1) has at least one weak solution. We use $x$ to denote any one of the weak solutions, which is defined on its corresponding probability space $(\bar{\Omega}, \bar{\mathcal{F}}, \bar{P})$ and on $[0, \rho)$, where $\rho$ denotes either $+\infty$ or the finite explosion time of the weak solution $x$.
First, quite similarly to the proof of Theorem 3.19 in [35, pages 95-96] or that of Lemma 2 in [36], we can prove that $\rho = +\infty$ (namely, all weak solutions of system (2.1) are defined on $[0, +\infty)$) and that the origin solution of system (2.1) is globally stable in probability.
Second, very similarly to the proof of Theorem 2.4 in [37, pages 114-115], we can show that if $W(x)$ is positive definite, then for any weak solution $x$, it holds that $\bar{P}\{\lim_{t \to \infty} x(t) = 0\} = 1$.

We next provide three lemmas which will play an important role in the later development. Lemma 2.4 can be directly deduced from the well-known Young’s inequality, and the proofs of Lemmas 2.5 and 2.6 can be found in [19, 20].

Lemma 2.4. For any $x, y \in \mathbb{R}$, any positive real numbers $m$ and $n$, and any $\varepsilon > 0$, there holds
$$|x|^{m} |y|^{n} \leq \frac{m}{m+n}\, \varepsilon\, |x|^{m+n} + \frac{n}{m+n}\, \varepsilon^{-m/n}\, |y|^{m+n}.$$
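As a typical instance of how Lemma 2.4 is invoked in the estimates below, taking $m = 1$, $n = 3$, and $\varepsilon = 1$ splits a cross term into pure fourth powers,
$$|x|\,|y|^{3} \leq \frac{1}{4}|x|^{4} + \frac{3}{4}|y|^{4},$$
while choosing $\varepsilon$ small makes the coefficient of $|x|^{m+n}$ arbitrarily small at the price of a larger coefficient on $|y|^{m+n}$.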

Lemma 2.5. For any continuous function $f(x, y)$, where $x \in \mathbb{R}^{m}$ and $y \in \mathbb{R}^{n}$, there are smooth functions $a(x) \geq 0$, $b(y) \geq 0$, $c(x) \geq 1$, and $d(y) \geq 1$ such that $|f(x, y)| \leq a(x) + b(y)$ and $|f(x, y)| \leq c(x)\, d(y)$.

Lemma 2.6. For any $x, y \in \mathbb{R}$ and any $p \in \mathbb{R}_{\mathrm{odd}}^{\geq 1}$, there hold $|x + y|^{p} \leq 2^{p-1} |x^{p} + y^{p}|$ and $(|x| + |y|)^{1/p} \leq |x|^{1/p} + |y|^{1/p} \leq 2^{(p-1)/p} (|x| + |y|)^{1/p}$ and, in particular, if $p$ is an odd positive integer, $|x - y|^{p} \leq 2^{p-1} |x^{p} - y^{p}|$.
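For example, with $p = 3$ the first inequality of Lemma 2.6 reads $|x + y|^{3} \leq 4|x^{3} + y^{3}|$, which follows from the factorization $x^{3} + y^{3} = (x + y)(x^{2} - xy + y^{2})$ together with $4(x^{2} - xy + y^{2}) - (x + y)^{2} = 3(x - y)^{2} \geq 0$.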

3. System Model and Control Objective

In this paper, we consider the global adaptive stabilization for a class of uncertain high-order stochastic nonlinear systems of the following form:
$$\begin{aligned} dz &= f_{0}(t, z, x)\,dt + g_{0}^{T}(t, z, x)\,d\omega,\\ dx_{i} &= \left( d_{i}(t, z, x)\, x_{i+1}^{p_{i}} + f_{i}(t, z, x) \right) dt + g_{i}^{T}(t, z, x)\,d\omega, \quad i = 1, \ldots, n, \end{aligned} \qquad (3.1)$$
where $z \in \mathbb{R}^{m}$ is the unmeasurable system state vector, called the zero dynamics; $x = [x_{1}, \ldots, x_{n}]^{T} \in \mathbb{R}^{n}$ and $u \in \mathbb{R}$ are the measurable system state vector and the control input, respectively; the system initial condition is $z(0) = z_{0}$, $x(0) = x_{0}$; $p_{i} \in \mathbb{R}_{\mathrm{odd}}^{\geq 1}$, $i = 1, \ldots, n$, are called the system high orders; $f_{i}$, $i = 0, 1, \ldots, n$, and $g_{i}$, $i = 0, 1, \ldots, n$, are unknown continuous functions, called the system drift and diffusion terms, respectively; $d_{i}$, $i = 1, \ldots, n$, are uncertain and continuous, called the control coefficients; $\omega$ is an independent standard Wiener process defined on a complete probability space $(\Omega, \mathcal{F}, P)$, with $\Omega$ being a sample space, $\mathcal{F}$ a $\sigma$-algebra on $\Omega$, and $P$ a probability measure. Besides, for simplicity of expression in later use, let $x_{n+1} = u$ and $\bar{x}_{i} = [x_{1}, \ldots, x_{i}]^{T}$, $i = 1, \ldots, n$.

Differential equations (3.1) describe a large class of uncertain high-order stochastic nonlinear systems, for which some tedious technical difficulties are encountered in control design, mainly due to the presence of the stochastic zero dynamics and the uncertainties/unknowns in the control coefficients and the system drift and diffusion terms. In the recent works [25–28], with measurable inverse dynamics or deterministic zero dynamics, and by imposing somewhat severe restrictions on the high orders, control coefficients, and drift and diffusion terms of system (3.1), smooth stabilizing controllers have been designed. The purpose of this paper is to relax these restrictions and solve the stabilization problem for the more general system (3.1) under the following three assumptions.

Assumption 3.1. There exists a function such that where , are functions; , , and are continuous functions; and is an unknown constant.

Assumption 3.2. For each , and satisfy where and are known functions with and ; and are unknown constants; is some positive integer; ’s satisfy .

Assumption 3.3. For each , , its sign is known, and there are unknown constants and , known smooth functions , and such that where when .

The above three assumptions are common and similar to the ones usually imposed on high-order nonlinear systems (see, e.g., [17, 20]). Based on Assumption 3.2 and Lemma 2.5, we obtain the following proposition, which dominates the growth properties of the drift and diffusion terms and will play a key role in overcoming the obstacles caused by the system uncertainties/unknowns. The proof is omitted here since it is quite similar to that of Proposition 2 in [22].

Proposition 3.4. For each , there exist smooth functions , , and , such that where is obviously an unknown constant.

Remark 3.5. It is worth pointing out that, in the recent related work [30], to ensure a continuously differentiable output-feedback control design, somewhat stronger assumptions have been imposed on the system drift and diffusion terms. For example, different from Proposition 3.4, Assumption 1 in [30] requires that the powers in the upper bound estimations of the drift and diffusion terms be larger than one.

Furthermore, as done in [8, 9], to ensure the stabilizability of system (3.1), it is necessary to impose the following restriction on the functions appearing in Assumption 3.1 and Proposition 3.4.

Assumption 3.6. For some , there exist and which are continuous, positive, and monotone increasing functions satisfying and , such that where denotes the inverse function of .

To better understand the academic meaning of the control problem to be studied, and in particular the generality and distinct nature of system (3.1) compared with the existing works, we make the following four remarks, corresponding to the above four assumptions, respectively.

Remark 3.7. Assumption 3.1 indicates that the unmeasurable zero dynamics possesses a stochastic ISS (input-to-state stability) type property, as in [8, 9], and the restriction imposed here is somewhat weaker than that in [8, 9] owing to the additional term in the estimation.

Remark 3.8. Assumption 3.2 demonstrates that the power of the state appearing in the bound on each drift term must be strictly less than the corresponding system high order. This is necessary in order to stabilize the system by using the domination approach of [18]. Moreover, since no further restrictions are imposed on the drift or diffusion terms, Assumption 3.2 is more easily met than those in [25–28].

Remark 3.9. Assumption 3.3 shows that the control coefficients never vanish; otherwise, system (3.1) would be uncontrollable somewhere. Besides, from this assumption, one can easily see that the signs of the control coefficients remain unchanged. Furthermore, the unknown constant lower bound makes system (3.1) more general than those studied in [25–28], where the lower bounds of the uncertain control coefficients are required to be precisely known.

Remark 3.10. In fact, Assumption 3.6 is similar to the corresponding assumption in [8, 9]. From the above formulation of the system, it can be seen that the unwanted effects of the zero dynamics, that is, the gain terms appearing in Assumption 3.1 and in Proposition 3.4, can only be dominated by the decay term in Assumption 3.1, and therefore some requirements should be imposed on these three terms. For the sake of stabilization, we make Assumption 3.6, which clearly includes the special case where the functions involved can be taken to be constants, in which case (3.6) obviously holds.

As recent developments on high-order control systems, works [17, 21, 22] proposed a novel adaptive control technique, which is powerful enough to overcome the technical difficulties in stabilizing system (3.1) caused by the weaker conditions on the unknown control coefficients. Inspired by these works, this paper extends the stabilization results in [17, 22] from deterministic systems to stochastic ones, under considerably weaker assumptions than those in [25–28]. More importantly, rather than being a simple generalization, and motivated by the novel adaptive technique for deterministic nonlinear systems [23], we develop an adaptive control scheme without the overparameterization that occurred in [17, 21, 22] (in fact, the number of parameter estimates is reduced to one).

Specifically, the main objective of this paper is to design a controller of the following form: where the parameter update law is a smooth function, while the control law is a continuous function, such that all closed-loop states are bounded almost surely and, furthermore, the original system is globally asymptotically stable in probability.

Finally, for the later control design, we establish the following proposition by the technique of changing supply functions [20, 38]. The proof of Proposition 3.11 is mainly inspired by [9, 38] and is placed in Appendix A.

Proposition 3.11. Define , and . Then, under Assumptions 3.1 and 3.6, one can construct a suitable function which is monotone increasing, such that (i) the resulting function is positive definite and radially unbounded; (ii) there exist a smooth function and an unknown constant such that
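For orientation, the core computation behind changing supply functions in the stochastic setting can be sketched as follows (a standard Itô-formula calculation, assuming Assumption 3.1 provides a Lyapunov-type function $V_{0}(z)$ and $\rho$ is a $\mathcal{C}^{1}$, positive, monotone increasing function of the kind constructed in Appendix A). Setting $\bar{V}_{0}(z) = \int_{0}^{V_{0}(z)} \rho(s)\,ds$, one finds
$$\mathcal{L}\bar{V}_{0} = \rho(V_{0})\,\mathcal{L}V_{0} + \frac{1}{2}\,\rho'(V_{0}) \left| \frac{\partial V_{0}}{\partial z}\, g_{0}^{T} \right|^{2},$$
so that a suitable choice of $\rho$ rescales the supply rate furnished by Assumption 3.1 at the cost of only a quadratic correction term.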

4. Partial-State Feedback Adaptive Stabilizing Control

Since the signs of ’s are known and remain unchanged, without loss of generality, suppose , . The following theorem summarizes the main result of this paper.

Theorem 4.1. Consider system (3.1) and suppose that Assumptions 3.1–3.3 and 3.6 hold. Then there exists an adaptive continuous partial-state feedback controller of the form (3.7) such that (i) the origin solution of the closed-loop system is globally stable in probability; (ii) the states of the original system converge to the origin, and the other states of the closed-loop system converge to some finite value, both with probability one.

About the main theorem, we have the following remark.

Remark 4.2. From Claim (i) and the former part of Claim (ii), we easily know that the original system is globally asymptotically stable in probability.

Proof. To complete the proof, we first construct an adaptive continuous controller of the form (3.7) for system (3.1). Then, by applying Theorem 2.3, it is shown that the theorem holds for the closed-loop system.
First, let us define , where and are the same as in Assumption 3.3 and Proposition 3.11, respectively. The estimate of is denoted by , for which the following update law will be designed: where is a to-be-determined nonnegative smooth function which ensures that , for all .
We now give some inequalities on the above-defined parameter for use in the later control design. Noting (see Proposition 3.4) and , for all , it is clear that . Moreover, since , , there hold , , and hence and .
Remark 4.3. As will be seen, mainly because the definition of the new unknown parameter is essentially different from that in [17, 21, 22], the overparameterization problem that occurred in those works is successfully overcome.
Next, we introduce the following new variables: and the actual control law , where , are continuous functions satisfying , for all . In the following, a recursive design procedure is provided to construct the virtual and actual controllers. To complete the control design, we also introduce a sequence of functions as follows (see the sketch after this paragraph): Similar to the corresponding proof in [18], it is easy to verify that, for each , is in all its arguments, when , when , and as .
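In adding-a-power-integrator designs of this kind (cf. [18]), such functions are typically built as integrals of the form
$$W_{k} = \int_{\alpha_{k-1}}^{x_{k}} \left( s^{q_{k}} - \alpha_{k-1}^{q_{k}} \right)^{\lambda_{k}} ds,$$
where $\alpha_{k-1}$ denotes the previous virtual controller and $q_{k}$, $\lambda_{k}$ are suitably chosen powers; the symbols $W_{k}$, $q_{k}$, and $\lambda_{k}$ are generic placeholders here, not the paper’s notation. It is precisely this integral structure that delivers the smoothness, positivity, and radial unboundedness properties just listed.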
Step 1. Choose to be the candidate Lyapunov function for this step, where denotes the parameter estimation error. Then, along the trajectories of system (3.1), we have
By Proposition 3.4 and Lemma 2.4, we have the following estimations: from which, together with (4.4), Proposition 3.11, and the facts , , it follows that where and . It will be seen from the later design steps that a series of nonnegative smooth functions , , are introduced so as to finally obtain the update law of , that is, .
Mainly based on (4.6), the virtual continuous controller is chosen such that and this choice turns (4.6) into
Remark 4.4. It is necessary to mention that, in the first design step, the functions involved have been given explicit expressions in order to deduce a completely explicit virtual controller. However, in the later design steps, sometimes for brevity, we will not explicitly write out functions which are easily defined.
Inductive Steps. Suppose that the first $k-1$ design steps have been completed. In other words, we have found appropriate functions , , satisfying and for known nonnegative smooth functions , , , such that for the candidate Lyapunov function .
Let be the candidate Lyapunov function for step $k$. Then, along the trajectories of system (3.1), we have
As in the first step, in order to design the virtual controller, one should appropriately estimate the last four terms on the right-hand side of the above equality and the last term on the right-hand side of (4.9), as formulated in the following proposition, whose proof is placed in Appendix B.
Proposition 4.5. There exists a nonnegative smooth function , such that
Then, by (4.9), (4.10), and Proposition 4.5, we have where .
Observing that a nonnegative smooth function can be easily constructed such that if we design the continuous virtual controller such that (obviously, is a strictly positive smooth function), then (4.12) becomes
In view of the arguments in the last design step, we choose the adaptive actual continuous controller as follows: from which, together with (4.15) and the aforementioned , , it follows that where is a smooth function.
With the adaptive controller (4.16) in the loop, we know that is the origin solution of the closed-loop system. Thus, from Theorem 2.3 and (4.17), it follows that the origin solution is globally stable in probability; furthermore, since is positive definite, which can be deduced from the expressions of and , it follows that , and, by a proof similar to that of Theorem 3.1 in [6], one can see that the state converges to some finite value with probability one.

We would like to point out that the adaptive control scheme given above can be used to remove the overparameterization in the recent works [17, 21, 22], where more than one parameter estimate is required. To this end, it suffices to introduce another new unknown parameter like the one defined before; the design steps are quite similar to those developed earlier and need no further discussion.

5. A Simulation Example

Consider the following three-dimensional uncertain high-order stochastic nonlinear system: where is an unknown constant.

It is easy to verify that system (5.1) satisfies Assumptions 3.1 and 3.6 with , , and . Assumption 3.2 holds with , , , and . Assumption 3.3 holds with , and . Therefore, following the design steps developed in Section 4, an adaptive partial-state feedback stabilizing controller can be explicitly given.

Let and the initial states be , , and . Figures 1 and 2, obtained using MATLAB, exhibit the trajectories of the closed-loop system states. (To show the transient behavior more clearly, logarithmic coordinates have been adopted.) From these figures, one can see that $z$, $x_{1}$, and $x_{2}$ are regulated to zero while the parameter estimate converges to a finite value, all with probability one.
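Since the paper does not reproduce the simulation code, the following is a minimal sketch of how such trajectories can be generated: a straightforward Euler–Maruyama discretization of the closed-loop stochastic differential equations. The functions closed_loop_drift and closed_loop_diffusion are hypothetical placeholders standing in for the closed-loop dynamics of system (5.1) together with the controller and update law designed in Section 4; they are not the paper’s expressions.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, T=20.0, dt=1e-4, seed=0):
    """Simulate dX = drift(X) dt + diffusion(X) dW driven by a scalar Wiener process."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.asarray(x0, dtype=float)
    traj = np.empty((n_steps + 1, x.size))
    traj[0] = x
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Wiener increment over one time step
        x = x + drift(x) * dt + diffusion(x) * dw
        traj[k + 1] = x
    return traj

# Hypothetical placeholders: the state is (z, x1, x2, theta_hat). Replace these
# with the closed-loop drift/diffusion of system (5.1) under the Section 4
# controller; the simple forms below are illustrative only.
def closed_loop_drift(state):
    z, x1, x2, theta_hat = state
    u = -(1.0 + theta_hat) * (x1 + x2)              # illustrative feedback law
    return np.array([-z + x1, x2**3, u, x1**2])     # illustrative dynamics

def closed_loop_diffusion(state):
    z, x1, x2, _ = state
    return np.array([0.1 * z, 0.1 * x1**2, 0.0, 0.0])

traj = euler_maruyama(closed_loop_drift, closed_loop_diffusion,
                      x0=[1.0, -1.0, 2.0, 0.0])
```

Plotting the columns of traj against time then corresponds to the logarithmic presentation used in Figures 1 and 2.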

6. Concluding Remarks

In this paper, the partial-state feedback stabilization problem has been investigated for a class of high-order stochastic nonlinear systems under weaker assumptions than those of the existing works. By introducing a novel adaptive update law and an appropriate control Lyapunov function, and by using the method of adding a power integrator, we have designed an adaptive continuous partial-state feedback controller without overparameterization and have given a simulation example to illustrate the effectiveness of the control design method. It has been shown that, with the designed controller in the loop, all the original system states are regulated to zero and the other closed-loop states are bounded almost surely for any initial condition. Along this direction, there are many other interesting research problems, such as output-feedback control for the systems studied in this paper, which are now under our further investigation.

Appendices

A. The Proof of Proposition 3.11

It is easy to verify that the first assertion of Proposition 3.11 holds when the function is chosen to be positive and monotone increasing. Thus, in the rest of the proof, we will find such a function to guarantee the correctness of the second assertion.

First, as defined in Proposition 3.11, , where, for simplicity, . Thus, by Assumption 3.1, we have The following proceeds in two different cases, in which is the same as in Assumption 3.6.

(i) Case of
For this case, from (A.1), we have

Let , for the same and as in Assumption 3.6, and as done in [9], denote by . Then, it is easy to see that

Moreover, noting the above definitions and using integration by parts, we have, for all , which, together with (A.4), implies that , for all ; therefore, is positive and monotone increasing on .

Furthermore, from (A.4) and the definitions of and , we obtain which, together with (A.2), results in This shows that the second assertion of Proposition 3.11 holds in this case.

(ii) Case of
In this case, it is not hard to find a function and an unknown constant satisfying . Then from (A.1), we get

Choosing the same function as in the first case and in view of (A.6), we have From this, (A.4), and (A.8), it follows that which shows that the second assertion of Proposition 3.11 holds in this case by letting and . (Since when , and otherwise , from the fact that and are positive and monotone increasing functions on , it follows that .)

B. The Proof of Proposition 4.5

We first prove the following proposition.

Proposition B.1. For , there exist smooth nonnegative functions , , and , such that where , , and is the same as in Proposition 3.4.

Proof. The first claim obviously holds when , because of the following inequality: and it can be easily proven in the same way as Lemma 3.4 in [19].
Based on Lemma 2.4 and Proposition 3.4, the proofs of the last three claims are straightforward (though somewhat tedious) and quite similar to the proof of Lemma 3.5 in [19]; they are omitted here.

Next, in view of Proposition B.1, we complete the proof of Proposition 4.5 by estimating each term on the left-hand side of (4.11).

From Propositions 3.4 and B.1, we have where and whereafter , are nonnegative smooth functions which can be easily obtained by Lemma 2.4; for notational convenience, their explicit expressions are omitted.

From Lemma 2.6, Propositions 3.4 and B.1, and the expression of given by (4.3), we have For the last term, by Lemma 2.6, we get Finally, by choosing , the proof of Proposition 4.5 is finished.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (60974003, 61143011), the Program for New Century Excellent Talents in University of China (NCET-07-0513), the Key Science and Technical Foundation of Ministry of Education of China (108079), the Excellent Young and Middle-Aged Scientist Award Grant of Shandong Province of China (2007BS01010), the Natural Science Foundation for Distinguished Young Scholars of Shandong Province of China (JQ200919), and the Independent Innovation Foundation of Shandong University (2009JQ008).