
Mathematical Problems in Engineering

Volume 2013 (2013), Article ID 560647, 10 pages

http://dx.doi.org/10.1155/2013/560647

## On the Asymptotical and Practical Stability of Stochastic Control Systems

Department of Mathematics, Universiti Putra Malaysia, 43400 Serdang, Selangor, Malaysia

Received 24 October 2012; Accepted 18 January 2013

Academic Editor: Vu Phat

Copyright © 2013 Fakhreddin Abedi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The asymptotical and practical stability in probability of stochastic control systems by means of feedback laws is studied. The main results of this work enable us to derive sufficient conditions for the existence of control Lyapunov functions, which play a leading role in the existence of stabilizing feedback laws. In particular, sufficient conditions for practical stability in probability are established, and numerical examples are given to illustrate the usefulness of our results.

#### 1. Introduction

The stabilization of various types of linear and nonlinear systems has been widely studied in the past years (see, for instance, Karafyllis and Tsinias [1], Phat et al. [2], Thuan et al. [3], Bay et al. [4], and Trinh and Fernando [5]). In these papers, the authors derived Lyapunov-Krasovskii functions and established the necessary and sufficient conditions for robust global asymptotic stability and robust stability at the equilibrium state of linear and nonlinear systems.

Stabilization of stochastic control systems (SCSs) by means of state feedback laws is important in control theory. The stochastic version of the Lyapunov theorem has been used to derive necessary and sufficient conditions for the stabilization of SCSs at their equilibrium state. In recent years, the stabilizability of various types of SCSs has been studied for different concepts of stochastic stability (see, for instance, [6–13]).

Florchinger [6] established necessary and sufficient conditions for the asymptotic stability in probability of SCSs at their equilibrium state. Under these conditions, Deng and Krstic [7] designed an inverse optimal control law for strict-feedback systems which guarantees global asymptotic stability in probability (GASP). Moreover, Deng et al. [8] developed the notion of uniform in time GASP for the problem of feedback stabilization of a class of SCSs. On the other hand, controller design and analysis for SCSs have been addressed by Xie and Tian [9], Pan and Basar [10], Lin et al. [11], and Abedi et al. [13]. Necessary and sufficient conditions for nonuniform in time GASP for a class of SCSs were derived by Abedi et al. [12].

Thus, our aim in this paper is to explore further the asymptotical stabilization in probability problem for a larger class of SCSs than that described in [6, 13]. This class of SCSs can be characterized in terms of computable control Lyapunov functions (CLFs) which depend on the system's coefficients. In addition, the paper is also intended to fill the gap in the previous works by establishing sufficient conditions for practical stability in probability for this broader class of SCSs. Our main result, Theorem 14, which is an extension of Theorem 2.1 established in Tsinias [14], asserts that this broader class of SCSs is asymptotically stable in probability (ASP) if it admits a CLF at the origin. In addition, we obtain a computable stabilizing feedback law under Propositions 15 and 16 and Theorems 18 and 19, which are extensions of Propositions 1.5 and 2.3 and Theorems 2.4 and 2.6, respectively, proved by Tsinias [14] for deterministic control systems, to the larger class of SCSs driven by a Wiener process. For background related to this paper, readers may refer to the paper by Florchinger [6] that gives necessary and sufficient conditions for ASP of a special case of our SCSs. Both the results and the methods used in this paper, however, are different from those in the references. The main tools used in this paper are, indeed, the stochastic versions of converse Lyapunov theorems established in Kushner [15].

The paper is organized as follows. In Section 2, we introduce the class of stochastic systems, some basic definitions, and results that we are dealing with in this paper. Section 3 describes a broader class of SCSs and is focused on the properties of stochastic CLFs, which play an important role in stabilization theory. In Section 4, we state and prove the main results of the paper on the asymptotical and practical stabilization of the larger class of SCSs. Finally, in Section 5, we provide some numerical examples to validate our results.

#### 2. Stochastic Stability

In the following we introduce the class of stochastic systems and recall some definitions of ASP and GASP that we are dealing with in the rest of the paper.

A detailed exposition on the subject can be found in the books of Speyer and Chung [16], Has’minskii [17], and also the paper by Abedi et al. [13].

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, and denote by $w$ a standard Wiener process defined on this space.

Consider the stochastic process solution of the Ito stochastic differential system as

where
(i) is given,
(ii) and , are locally Lipschitz functionals mapping with , and there exists a constant such that for any , the following linear growth condition holds:

*Notations*. Throughout this paper we adopt the following notation.
(i) For any and , , where , denotes the solution of the Ito stochastic differential system (1) starting from the state at time .
(ii) is the usual Euclidean norm.
(iii) For any , denotes its transpose.
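The displayed system (1) and its linear growth condition did not survive conversion, but solutions of an Ito system of the general form $dX_t = f(X_t)\,dt + \sigma(X_t)\,dW_t$ can be approximated numerically. The sketch below is a minimal Euler–Maruyama integrator; the coefficients `f` and `sigma` are hypothetical, chosen only to satisfy a linear growth bound, and are not the article's system.

```python
import numpy as np

def euler_maruyama(f, sigma, x0, T=1.0, n_steps=1000, seed=0):
    """Simulate dX_t = f(X_t) dt + sigma(X_t) dW_t by Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x[k + 1] = x[k] + f(x[k]) * dt + sigma(x[k]) * dW
    return x

# Illustrative coefficients satisfying a linear growth condition
# |f(x)| + |sigma(x)| <= C(1 + |x|):
path = euler_maruyama(f=lambda x: -x, sigma=lambda x: 0.5 * x, x0=1.0)
```

The same scheme is reused in the examples of Section 5 whenever a sample path of a closed-loop system is needed.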

*Definition 1. *The equilibrium of the system (1) is
(i) stable in probability, if for every and ,
(ii) ASP, if it is stable in probability and

*Definition 2. *A function $\alpha : [0, \infty) \rightarrow [0, \infty)$ is a
(i) class-$\mathcal{K}$ function if it is continuous, strictly increasing, and $\alpha(0) = 0$,
(ii) class-$\mathcal{K}_{\infty}$ function if it is a class-$\mathcal{K}$ function and $\alpha(r) \rightarrow \infty$ as $r \rightarrow \infty$,
(iii) positive definite function if $\alpha(r) > 0$ for all $r > 0$ and $\alpha(0) = 0$.

*Definition 3. *The equilibrium of the system (1) is
(i) globally stable in probability, if for any , there exists a class-$\mathcal{K}$ function such that
(ii) GASP, if it is globally stable in probability and
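The displayed conditions in Definitions 1 and 3 were lost in conversion. In the standard stochastic stability literature (e.g. Has'minskii [17] and Deng et al. [8]) they take the following form, where $x(t; x_0)$ denotes the solution of (1) starting from $x_0$; this is a reconstruction of the standard statements, not the article's exact display.

```latex
% Stability in probability of the equilibrium x = 0:
\lim_{x_0 \to 0} \mathbb{P}\Big\{ \sup_{t \ge 0} |x(t; x_0)| > \varepsilon \Big\} = 0
\quad \text{for every } \varepsilon > 0 .
% ASP adds local almost-sure attraction:
\lim_{x_0 \to 0} \mathbb{P}\Big\{ \lim_{t \to \infty} x(t; x_0) = 0 \Big\} = 1 .
% Global stability in probability: for every \varepsilon > 0 there exists
% \beta_\varepsilon \in \mathcal{K} such that, for all x_0 \neq 0,
\mathbb{P}\big\{ |x(t; x_0)| \le \beta_\varepsilon(|x_0|) \ \text{for all } t \ge 0 \big\}
\ge 1 - \varepsilon .
% GASP additionally requires, for all x_0,
\mathbb{P}\big\{ \lim_{t \to \infty} x(t; x_0) = 0 \big\} = 1 .
```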

#### 3. Preliminary Results

In this section, we introduce a broader class of SCSs and focus on the properties of CLF and control Lyapunov family which play an important role in asymptotical and practical stability in probability, respectively, in Section 4. A detailed exposition on the subject can be found in the paper of Abedi et al. [13].

Let $(\Omega, \mathcal{F}, P)$ be a complete probability space, and denote by $w$ a standard Wiener process defined on this space.

Consider the following stochastic process solution of the SCS:
where
(i) is given,
(ii) is an -valued control law,
(iii) and are functionals mapping satisfying the hypotheses given in Section 2,
(iv) and , are functionals mapping with , and there exists a constant such that for any , the following linear growth condition holds:

Then SCS (7) is said to be ASP at the origin if there exist a neighborhood of the origin in and a function mapping into , vanishing at the origin, such that
(i) for every , the solution of the resulting closed-loop system is uniquely defined,
(ii) the equilibrium solution of the closed-loop system (9) is ASP.

Moreover, SCS (7) is said to be practically stabilizable in probability at the origin by means of the family of feedback laws , if for any sufficiently small and for any near the open sphere of radius around origin, the corresponding trajectory of the resulting closed-loop system

enters after some time and stays in this region thereafter. The concept of practical stability was first introduced by La Salle and Lefschetz [18].
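Practical stability asks only that trajectories enter and remain in a small ball around the origin, rather than converge to it. A hypothetical scalar closed-loop system with additive noise, $dx = -x\,dt + 0.1\,dW$ (the feedback $u = -x$ applied to $dx = u\,dt + 0.1\,dW$), illustrates this: the additive noise prevents convergence to the origin, but the late portion of a sample path stays inside a small ball. The system and ball radius below are illustrative assumptions, not the article's example.

```python
import numpy as np

# Hypothetical closed loop with u = -x: additive noise keeps the state
# from reaching 0 exactly, but confines it to a small ball.
rng = np.random.default_rng(1)
dt, n_steps = 1e-3, 20_000
dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
x = 2.0
xs = np.empty(n_steps)
for k in range(n_steps):
    x = x + (-x) * dt + 0.1 * dW[k]   # Euler-Maruyama step of the closed loop
    xs[k] = x

# Second half of the horizon: the transient from x(0) = 2 has died out.
tail = np.abs(xs[n_steps // 2:])
# Stationary std of this Ornstein-Uhlenbeck process is 0.1/sqrt(2) ~ 0.07,
# so a ball of radius 0.5 should contain essentially the whole tail.
fraction_inside = np.mean(tail < 0.5)
```

The quantity `fraction_inside` being (near) 1 is the numerical signature of practical stability: the path enters the ball of radius 0.5 and stays there.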

Denote by the infinitesimal generator of the stochastic process solution of the uncontrolled part of SCS (7); that is, is the second order differential operator defined for any function in by

For any , let be the second order differential operator defined for any function in and is given by and, for any , denotes the second order differential operator defined for any function in by

We also denote as the infinitesimal generator for the stochastic process solution of the closed-loop system (9); that is, is the differential operator defined for any function in given by

Using these notations, we recall some notions of CLF as follows.

*Definition 4. *The SCS (7) is said to satisfy a stochastic Lyapunov condition at the origin if there exist a neighborhood of the origin in , a positive definite function , and a positive definite function such that for all the following condition holds:

*Definition 5. *A continuously differentiable real function is called a CLF, if it is positive definite and satisfies condition (15).

*Definition 6. *The CLF “” is said to satisfy the bounded control property, if there exists a positive real function such that is bounded on and for every there exists a control satisfying the following inequalities:

If, in addition, , then the CLF is said to satisfy the small control property.

A useful tool for studying the stabilizability of SCSs is the stochastic version of Artstein's theorem [19], established by Florchinger [6] and given in the following theorem.

Theorem 7. *(i) The SCS (7) satisfies a stochastic Lyapunov condition at the origin, if and only if it is asymptotically stabilizable by means of a feedback law , which is smooth in a neighborhood of the origin except possibly at .**(ii) The corresponding CLF, , satisfies the bounded control property, if and only if there exists a stabilizer , which is smooth in a neighborhood of the origin, except possibly at zero and satisfies
**
where is defined in (16). This implies that the function is bounded in a neighborhood of the origin and if, in addition, satisfies the small control property then as .*

In the following we introduce the stochastic version of Florchinger control law [20] established by Abedi et al. [12], which gives the property of GASP for the resulting closed-loop system (9).

Theorem 8. *Consider the SCS (7) and its corresponding closed-loop system (9). Let be a CLF associated with the system, and for any , denote by the function defined by
**
Then the feedback law
**
where is given by (19) and
**guarantees that the resulting closed-loop system (9) is GASP.*
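The explicit formulas (19)–(21) for the feedback law did not survive conversion. Their deterministic ancestor is Sontag's universal formula, which builds a stabilizer directly from the CLF data; Florchinger's stochastic construction has the same shape with the Lie derivative replaced by the corresponding generator terms. The following is a sketch under that assumption, with `a` standing for the drift part of the generator applied to the CLF and `b` for the control-direction term; it is not the article's exact formula.

```python
import numpy as np

def sontag_feedback(a, b):
    """Sontag's universal formula: given a (scalar 'drift' term of the
    CLF decay condition) and b (control-direction vector), return a
    stabilizing u; u -> 0 as x -> 0 under the small control property."""
    b = np.atleast_1d(np.asarray(b, dtype=float))
    bb = float(b @ b)
    if bb == 0.0:
        return np.zeros_like(b)  # on the set b = 0, the CLF condition forces a < 0
    return -((a + np.sqrt(a * a + bb * bb)) / bb) * b

# Hypothetical scalar example: V(x) = x^2/2 for dx/dt = x + u,
# so a = x^2 and b = x.
x = 1.0
u = sontag_feedback(a=x * x, b=x)
# Closed loop: dx/dt = x + u = -sqrt(2) * x, so V decays along solutions.
```

Note the characteristic feature shared with (20): the feedback vanishes wherever the control direction `b` vanishes, and it is smooth away from the origin.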

Under a slight change of the assumptions on the small control family introduced by Tsinias [14], we give the definitions of the small control family in the stochastic setting as follows.

*Definition 9. *The SCS (7) is said to satisfy a stochastic practical Lyapunov condition at the origin if there exist compact neighborhoods and of the origin in such that , a family of positive definite functions , where , and a positive definite function , such that for any sufficiently small , condition (15) holds with , and
where is the boundary of . Moreover, for any sequences and with and as , we have

*Definition 10. *A continuously differentiable real function is regarded as a member of the control Lyapunov family if it is positive definite and satisfies the conditions given in Definition 9.

*Definition 11. *The control Lyapunov family is said to satisfy the stochastic bounded control property, if there exists a positive real function such that is bounded on , and a family of nonnegative real numbers , where such that as and for any sufficiently small , conditions (17), (22), and (23) hold with , and for some where

If, in addition, as , then the control Lyapunov family is said to satisfy the stochastic small control property.

The following theorem presents sufficient conditions for practical stabilization. The proof of this theorem is similar to the proof of Theorem 7.

Theorem 12. *(i) If the SCS (7) satisfies the stochastic practical Lyapunov condition, then it is practically stabilizable in probability at the origin by means of a family of smooth feedback laws .**(ii) If the control Lyapunov family satisfies the stochastic bounded (small) control property, then for any the corresponding stabilizer satisfies the following inequality:
**
where and are those given in Definition 11.*

*Remark 13. *A direct result of Definitions 4 and 9 is that if the SCS (7) satisfies the stochastic Lyapunov condition, it will also satisfy the stochastic practical Lyapunov condition.

In the methodology used in this paper, we will consider the SCS (7) in the differential form as where , , and are functionals vanishing at the origin, and we will derive sufficient conditions for the existence of a CLF guaranteeing stabilization (Theorem 14) in the form of (26). We extend the decomposition used by Tsinias [14] in the deterministic case to the stochastic case. Thus, the study of asymptotical and practical stability in probability at the equilibrium state of the SCS (7) can be replaced by the study of asymptotical and practical stability in probability at the equilibrium state of the SCS (26). Special emphasis is given to asymptotical and practical stabilization in probability for SCS (26) whose control terms and are constants; that is, the system has the form where and is an -valued control law.
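The displayed forms of (26) and (27) were lost in conversion. Since the conclusion states that both the drift and diffusion terms are affine in the control, the intended shape of (26) is presumably the following (all symbol names here are assumptions introduced for illustration):

```latex
dx(t) = \Big( f(x(t)) + \sum_{i=1}^{m} u_i(t)\, g_i(x(t)) \Big)\, dt
      + \Big( \sigma(x(t)) + \sum_{i=1}^{m} u_i(t)\, h_i(x(t)) \Big)\, dW(t),
```

with $f$, $g_i$, $\sigma$, and $h_i$ vanishing at the origin; in the special case (27), the control coefficients $g_i$ and $h_i$ are constants.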

#### 4. Main Results

The aim of this section is to derive sufficient conditions for the existence of CLFs that play a leading role in the existence of stabilizing feedback laws that are smooth, except possibly at the equilibrium state of the system (Theorem 14). The formulas for the special case of SCS (7) with unity intensity noise and vanishing , without the term , were given by Abedi et al. [13], and for the system (7) affine in the noise and control , where may be nonvanishing at the origin, without the term , by Deng et al. [8]. In [12], Abedi et al. derived the necessary and sufficient conditions for nonuniform in time GASP for the system (7) in the case where and may be nonvanishing at the origin. Here, we extend Propositions 1.5 and 2.3 and Theorems 2.4 and 2.6 of Tsinias [14] in the deterministic case and obtain a computable stabilizing feedback law as Propositions 15 and 16 and Theorems 18 and 19, respectively, for a broader class of SCS (26).

Consider the following lower-dimensional subsystems of the SCS (26):

Then, we can present the following result, where the stochastic Lyapunov condition for the SCS (26) is characterized in terms of some appropriate positive functions for the SCS (28) and (29).

Theorem 14. *Suppose that there exist neighborhoods and of the origin in and , respectively, and mappings , and such that where is continuous, and is continuously differentiable, and let us denote
*(i)*Assume that for any in a neighborhood of the origin in and and the SCS (28) is ASP by means of the feedback law . Then, the SCS (26) satisfies a stochastic Lyapunov condition at the origin.*(ii)*Let be the corresponding CLF. If we further assume that is bounded, and are continuously differentiable, and there exist positive constants and such that
**for in a neighborhood of the origin in then the function satisfies the small control property.*

*Proof. *Part (i): since the equilibrium solution of the closed-loop system
is asymptotically stable, the converse Lyapunov theorem of Kushner [15] asserts that there exists a smooth Lyapunov function such that and

for near zero, where denotes the infinitesimal generator for the stochastic process solution of the closed-loop system (33). Obviously, by the assumption of the theorem, the function

is positive definite, and for any , we have

For any such that it follows by the assumption that . Hence,

and so
where is the infinitesimal generator for the stochastic process solution of the uncontrolled part of the SCS (26). Therefore, the SCS (26) satisfies a stochastic Lyapunov condition at the origin, and the function is the associated CLF.

Part (ii): let us assume that is the corresponding CLF and that (32) holds, with . Define

Clearly, is positive definite. Since is continuously differentiable, is twice differentiable, , and , there are positive constants , and such that

for in a neighborhood of the equilibrium. Let

Then, by taking into account (32)–(42), we have
and it follows that

Therefore,
for all near zero. Inequality (46) in conjunction with condition gives

Since is bounded and is continuous with , it follows that satisfies the small control property.

Proposition 15. *If the SCS (7) is asymptotically stabilizable in probability by means of the feedback , which is continuous in a neighborhood of origin, then it is practically stabilizable in probability by means of a family of smooth feedback .*

*Proof. *The proof of this proposition is a direct consequence of Theorems 7(i) and 12 and Remark 13. Indeed, suppose that the SCS (7) is asymptotically stabilizable in probability; then according to Theorem 7(i) the SCS (7) satisfies the stochastic Lyapunov condition at the origin. Therefore, by Remark 13, the SCS (7) satisfies the stochastic practical Lyapunov condition at the origin. Hence, the desired practical stability in probability for the SCS (7) is assured by Theorem 12.

In the following, we will customize Theorem 14 for a particular case of SCS (27). Assume, without loss of generality, that . In this case, the SCS (27) becomes where , is an -valued control law, and and are constant functions. The following proposition, which is an immediate consequence of Theorem 14 and is also an extension of Proposition 2.3 established in [14], provides a CLF for the SCS (48) that satisfies the small control property. In the proof of this proposition we use the stochastic version of the converse Lyapunov theorem established in [15].

Proposition 16. *Consider the SCS (48) and suppose that (28) is asymptotically stabilizable in probability by means of the feedback law , which is continuously differentiable in a neighborhood of the origin. Then (48) satisfies the stochastic Lyapunov condition and the corresponding CLF satisfies the small control property.*

*Proof. *Since the equilibrium solution for the closed-loop system
is asymptotically stable, the converse Lyapunov theorem of Kushner [15] asserts that there exists a smooth Lyapunov function defined in a neighborhood of the origin in such that

for near zero. Let

Obviously, is nonnegative definite and is continuously differentiable in a neighborhood of the origin. If we define the functional on by the following:
then
where is uniformly bounded on and such that satisfies the properties in Theorem 14 (i). In particular, (32) holds with . Therefore, the function satisfies the small control property.

As in the deterministic case (see Tsinias [14]), we can easily establish the following elementary result, which is a useful tool in proving some of our results (Theorem 18).

Lemma 17. *Let be a real mapping that satisfies the Lipschitz condition on a compact neighborhood of the origin in , and . Then there exist a neighborhood of the origin in , a constant , and a family of smooth mappings such that for every sufficiently small the following conditions hold:
**
Moreover, for any sequence and such that and , as , there exist subsequences and of and , respectively, and a positive constant satisfying
*

The following theorem is an extension of Theorem 2.4 established in [14] and provides a control Lyapunov family for the SCS (48) that depends directly on the dynamics of the SCS (33). The proof of this theorem is the stochastic analogue of the deterministic case of Tsinias [14] where the author used the converse Lyapunov theorem given in Massera [21, 22] to establish his result. In the proof of the following theorem we use the stochastic version of converse Lyapunov theorem established in Kushner [15].

Theorem 18. *Suppose that the SCS (33) is asymptotically stabilizable in probability by means of the feedback law with .*(i)*If is continuous at the origin, then SCS (48) satisfies the practical Lyapunov condition.*(ii)*If is Lipschitz continuous, then the SCS (48) satisfies the practical Lyapunov condition and, additionally, the corresponding control Lyapunov family satisfies the stochastic small control property.*

*Proof. *We prove only statement (ii) of this theorem. The proof of the first part of the theorem can be obtained using a similar argument and is therefore omitted.

Suppose that the SCS (33) is stabilizable in probability by means of the feedback law and . Then the converse Lyapunov theorem proved by Kushner [15] asserts that there exists a smooth Lyapunov function defined on a compact neighborhood of and a positive definite and strictly increasing continuous function such that and

for any . Let and be as defined in Lemma 17, where is the above Lipschitzian stabilizer defined on and taking values in . Consider the positive constants and such that for and for every belonging to

For any , such that the sphere is contained in , consider a positive where
is contained in . Since and , there exist nonnegative constants and such that
for all in a neighborhood of the origin in . Let with and

Then, by (54)–(56) and (58)–(61), we have for any and

Now, for , we can define

Obviously, is continuously differentiable and positive definite for . Let be a real mapping where

Denote by the infinitesimal generator of the stochastic process solution of the closed-loop system deduced from the SCS (48) with the function given by (65). This yields
for every . Note that both and give and by (56) that . Therefore,

Since , and are continuously differentiable, where , and because of (54) and (56), there is a positive constant which is independent of , satisfying
for every . This implies that for every sufficiently small , condition (22) holds. Indeed, suppose to the contrary that there exist sequences and such that and . Since , it follows from (57) that there exist subsequences and , and real constants and satisfying
and so , which indicates a contradiction. Similarly, we can show (23). It turns out that as defined in (64) is a control Lyapunov family which satisfies the stochastic small control property.

We will summarize our results in the following theorem.

Theorem 19. *Suppose that the SCS (33) is asymptotically stabilizable in probability by means of the feedback law with .*(i)*If is continuous at the origin, then the SCS (48) is practically stabilizable in probability at the origin.*(ii)*If is Lipschitz continuous, then the SCS (48) is practically stabilizable in probability at the origin, where the corresponding control Lyapunov family satisfies the stochastic small control property.*(iii)*If is continuously differentiable, then the SCS (48) is asymptotically stabilizable in probability at the origin by means of the feedback law, which is smooth for near the origin and continuous at the origin.*

#### 5. Applications

In this section, we illustrate our results by giving three numerical examples.

*Example 20. *Denote by the solution of the following SCS:
where is a standard real-valued Wiener process and is a real-valued measurable control law.

Consider the following Lyapunov function candidate:

Some simple calculations give

From (72), we get

Therefore, condition (15) is satisfied with , and, thus, the function is a CLF for the SCS (70). By Theorems 7 and 14(i), the SCS (70) is ASP by means of a feedback law which is smooth in a neighborhood of the origin in , except possibly at the origin.

Finally, we may invoke Theorem 8 to find an explicit formula for a feedback law. Indeed, by (20) and (72) we obtain a feedback law that guarantees that the equilibrium solution of the resulting closed-loop system deduced from the SCS (70) satisfies the ASP property.
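Example 20's display equations were lost in conversion, so its closed-loop system cannot be reproduced exactly. To illustrate how the ASP property can be checked numerically, the sketch below Monte Carlo estimates the exit probability $P\{\sup_t |x(t)| > \varepsilon\}$ for a hypothetical stabilized scalar system $dx = u\,dt + x\,dW$ with $u = -2x$; ASP requires this probability to be small when the initial state is close to the origin. The system and all parameters are assumptions introduced for illustration only.

```python
import numpy as np

# Hypothetical closed loop: dx = -2x dt + x dW (u = -2x applied to
# dx = u dt + x dW).  Estimate P{ sup_{t<=T} |x(t)| > eps } for x0 near 0.
rng = np.random.default_rng(2)
dt, n_steps, n_paths = 1e-3, 2000, 200
x0, eps = 0.01, 0.1

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
x = np.full(n_paths, x0)
sup = np.abs(x)
for k in range(n_steps):
    x = x + (-2.0 * x) * dt + x * dW[:, k]   # Euler-Maruyama step, all paths
    sup = np.maximum(sup, np.abs(x))

# Fraction of paths whose running maximum ever left the eps-ball.
exit_probability = np.mean(sup > eps)
```

For initial states this close to the origin the estimated exit probability is essentially zero, which is the numerical signature of stability in probability; shrinking `x0` further drives it down, while enlarging `x0` toward `eps` drives it up.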

*Example 21. *Let be the solution of the SCS as
where is a standard real-valued Wiener process, is a real-valued measurable control law, and is a smooth functional mapping from into such that
for any . Obviously, the SCS (75) has the form (26), and the function , defined on as
is a CLF for SCS (75). Indeed, for any with
it follows that , and therefore

Using condition , the latter equality implies . Thus, the function is a CLF for SCS (75). In fact, the feedback law asymptotically stabilizes
at zero. Therefore, by Theorems 7 and 14(i), system (75) is ASP by means of a feedback law which is smooth in a neighborhood of the origin in , except possibly at the origin.

*Example 22. *Consider the following SCS:
where is a standard real-valued Wiener process and is a real-valued measurable control law. Obviously, the SCS (82) has the form of (48), and the stochastic system

is stabilized by the continuous law . Therefore, according to Theorem 19(i), the SCS (82) is practically stabilizable in probability at the origin by means of a family of smooth feedback laws.

#### 6. Conclusion

In this paper, we have studied the problem of asymptotical and practical stabilization in probability of SCSs when both the drift and diffusion terms are affine in the control. We have used the stochastic version of Artstein's theorem and extended the asymptotical and practical stabilization results proved by Tsinias [14] to a larger class of SCSs driven by a Wiener process. Moreover, sufficient conditions for the existence of a CLF and a control Lyapunov family, which play a leading role in the existence of stabilizing feedback laws, are derived. Several numerical examples are given to validate our results. Finally, we summarize our main results as follows.
(1) The stability results provided in this paper are an extension of the deterministic results stated in Tsinias [14]. Indeed, we obtained a computable stabilizing feedback law under Propositions 15 and 16 and Theorems 18 and 19, which are extensions of Propositions 1.5 and 2.3 and Theorems 2.4 and 2.6, respectively, proved by Tsinias [14] for deterministic control systems, to the larger class of SCSs driven by a Wiener process.
(2) The stabilizability results stated in Florchinger [6, Theorem 3.1] and Abedi et al. [12, Theorem 3.1] do not yield our stabilization results (Theorem 14 and Proposition 16).
(3) The stabilizability results provided for stochastic control systems in [6, 12, 13] do not permit any conclusion about practical stability in probability (Theorem 18), whereas the results of this paper remain valid in that setting.

#### References

- I. Karafyllis and J. Tsinias, “A converse Lyapunov theorem for nonuniform in time global asymptotic stability and its application to feedback stabilization,” *SIAM Journal on Control and Optimization*, vol. 42, no. 3, pp. 936–965, 2003.
- V. N. Phat, Q. P. Ha, and H. Trinh, “Parameter-dependent ${H}_{\infty}$ control for time-varying delay polytopic systems,” *Journal of Optimization Theory and Applications*, vol. 147, no. 1, pp. 58–70, 2010.
- M. V. Thuan, V. N. Phat, and H. M. Trinh, “Dynamic output feedback guaranteed cost control for linear systems with interval time-varying delays in states and outputs,” *Applied Mathematics and Computation*, vol. 218, no. 21, pp. 10697–10707, 2012.
- N. S. Bay, N. M. Linh, and V. N. Phat, “Robust ${H}_{\infty}$ control of linear time-varying systems with mixed delays in the Hilbert space,” *Optimal Control Applications & Methods*, vol. 32, no. 5, pp. 545–557, 2011.
- H. Trinh and T. Fernando, *Functional Observers for Dynamical Systems*, vol. 420 of *Lecture Notes in Control and Information Sciences*, Springer, Berlin, Germany, 2012.
- P. Florchinger, “Lyapunov-like techniques for stochastic stability,” *SIAM Journal on Control and Optimization*, vol. 33, no. 4, pp. 1151–1169, 1995.
- H. Deng and M. Krstić, “Stochastic nonlinear stabilization. I. A backstepping design,” *Systems & Control Letters*, vol. 32, no. 3, pp. 143–150, 1997.
- H. Deng, M. Krstić, and R. J. Williams, “Stabilization of stochastic nonlinear systems driven by noise of unknown covariance,” *IEEE Transactions on Automatic Control*, vol. 46, no. 8, pp. 1237–1253, 2001.
- X.-J. Xie and J. Tian, “State-feedback stabilization for high-order stochastic nonlinear systems with stochastic inverse dynamics,” *International Journal of Robust and Nonlinear Control*, vol. 17, no. 14, pp. 1343–1362, 2007.
- Z. Pan and T. Başar, “Backstepping controller design for nonlinear stochastic systems under a risk-sensitive cost criterion,” *SIAM Journal on Control and Optimization*, vol. 37, no. 3, pp. 957–995, 1999.
- Z. Lin, J. Liu, Y. Lin, and W. Zhang, “Nonlinear stochastic passivity, feedback equivalence and global stabilization,” *International Journal of Robust and Nonlinear Control*, vol. 22, no. 9, pp. 999–1018, 2012.
- F. Abedi, M. A. Hassan, and N. M. Arifin, “Lyapunov function for nonuniform in time global asymptotic stability in probability with application to feedback stabilization,” *Acta Applicandae Mathematicae*, vol. 116, no. 1, pp. 107–117, 2011.
- F. Abedi, M. A. Hassan, and M. Suleiman, “Feedback stabilization and adaptive stabilization of stochastic nonlinear systems by the control Lyapunov function,” *Stochastics*, vol. 83, no. 2, pp. 179–201, 2011.
- J. Tsinias, “Existence of control Lyapunov functions and applications to state feedback stabilizability of nonlinear systems,” *SIAM Journal on Control and Optimization*, vol. 29, no. 2, pp. 457–473, 1991.
- H. Kushner, “Converse theorems for stochastic Liapunov functions,” *SIAM Journal on Control and Optimization*, vol. 5, pp. 228–233, 1967.
- J. L. Speyer and W. H. Chung, *Stochastic Processes, Estimation, and Control*, vol. 17 of *Advances in Design and Control*, SIAM, Philadelphia, Pa, USA, 2008.
- R. Z. Has'minskiĭ, *Stochastic Stability of Differential Equations*, vol. 7 of *Monographs and Textbooks on Mechanics of Solids and Fluids: Mechanics and Analysis*, Sijthoff & Noordhoff, Alphen aan den Rijn, The Netherlands, 1980.
- J. P. La Salle and S. Lefschetz, *Stability by Liapunov's Direct Method with Applications*, Academic Press, New York, NY, USA, 1961.
- Z. Artstein, “Stabilization with relaxed controls,” *Nonlinear Analysis: Theory, Methods & Applications*, vol. 7, no. 11, pp. 1163–1173, 1983.
- P. Florchinger, “A stochastic Jurdjevic-Quinn theorem for the stabilization of nonlinear stochastic differential systems,” *Stochastic Analysis and Applications*, vol. 19, no. 3, pp. 473–480, 2001.
- J. L. Massera, “Contributions to stability theory,” *Annals of Mathematics*, vol. 64, pp. 182–206, 1956.
- J. L. Massera, “Erratum in ‘Contributions to stability theory’,” *Annals of Mathematics*, vol. 68, no. 1, p. 202, 1958.