Mathematical Problems in Engineering

Volume 2015 (2015), Article ID 292794, 7 pages

http://dx.doi.org/10.1155/2015/292794

## Stability of a Class of Stochastic Nonlinear Systems with Markovian Switching

School of Mathematics and Statistics Science, Ludong University, Yantai 264025, China

Received 11 December 2014; Revised 13 August 2015; Accepted 13 August 2015

Academic Editor: Asier Ibeas

Copyright © 2015 Xiaohua Liu and Wuquan Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper investigates the stability of a class of stochastic nonlinear systems with Markovian switching via output-feedback. Based on the backstepping design method and homogeneous domination technique, an output-feedback controller is constructed to guarantee that the closed-loop system has a unique solution and is almost surely asymptotically stable. The efficiency of the output-feedback controller is demonstrated by a simulation example.

#### 1. Introduction

There are many real systems, such as hierarchical control of manufacturing systems, financial engineering, and wireless communication systems, whose structure and parameters may change abruptly. If the occurrence of these changes is governed by a Markov chain, such systems are called Markovian jump systems. As one branch of modern control theory, the study of Markovian jump systems has attracted considerable attention, with fruitful results achieved in the linear case, such as controllability and observability [1], stability and stabilization [2–4], control [5], control [6, 7], filtering [8], and model reduction [9]. For semilinear stochastic differential equations with Markovian switching, [10] discusses the stabilization problem, and [11] discusses the exponential stability problem for general nonlinear differential equations with Markovian switching. References [12, 13] focus on controller design for hybrid systems under a global Lipschitz condition or a linear growth condition. Based on the backstepping design method, [14–17] investigate the control of stochastic systems with Markovian switching.

Considering that the system states are often incompletely measurable, the problem of output-feedback control is more important and challenging than that of state-feedback control in practical applications. Reference [18] addresses the problem of global output-feedback and link position tracking control of robot manipulators when only link position measurements are available and the model information is incomplete. Reference [19] presents the design of output-feedback tracking controllers for an underactuated ship; it introduces global nonlinear coordinate changes to transform the ship dynamics into a system affine in the ship velocities, so that observers can be designed to globally exponentially estimate the unmeasured velocities. Reference [20] focuses on the problem of output-feedback tracking control for stochastic Lagrangian systems with unmeasurable velocity. By using the structural properties of Lagrangian systems, a reduced-order observer is skillfully constructed to estimate the velocity. Inspired by [17], this paper aims to solve the output-feedback stabilization problem for a class of stochastic nonlinear systems with Markovian switching. As demonstrated by [21], because the Jacobian linearizations of this class of systems are neither controllable nor feedback linearizable, the existing design tools are hardly applicable.

Compared with the existing results, the contributions of this paper are as follows:

(1) The results in [22] consider the stabilization problem of stochastic nonlinear systems without Markovian switching. However, systems often undergo abrupt disturbances in practice, which can be modelled as a Markov process; from both practical and theoretical points of view, the stochastic nonlinear system model without Markovian switching is therefore somewhat restrictive. This paper considers the Markovian switching version of [22].

(2) Since the drift terms and diffusion terms are all subject to Markovian switching, designing an effective observer to handle the unmeasurable states, and designing a controller that guarantees the closed-loop system has a unique solution and is almost surely asymptotically stable, are nontrivial tasks.

The remaining part of this paper is organized as follows. Section 2 offers some preliminary results. The problem investigated is described in Section 3. After that, in Section 4, the output-feedback controller is designed followed by a simulation example to show the effectiveness of the designed controller in Section 5. Finally, the paper is concluded in Section 6.

#### 2. Preliminary Results and Useful Lemmas

The following notation will be used throughout this paper. $\mathbb{R}_+$ denotes the set of all nonnegative real numbers, and $\mathbb{R}^n$ denotes the real $n$-dimensional space. For a given vector or matrix $X$, $X^T$ denotes its transpose, $\mathrm{Tr}\{X\}$ denotes its trace when $X$ is square, and $|X|$ is the Euclidean norm of a vector $X$. $C^i$ denotes the set of all functions with continuous $i$th partial derivatives.

Consider the stochastic differential equation with Markovian switching:
$$dx(t) = f(x(t), t, r(t))\,dt + g(x(t), t, r(t))\,d\omega(t), \quad t \ge t_0, \tag{1}$$
where $x(t) \in \mathbb{R}^n$ is the state of the system; $\omega(t)$ is an $m$-dimensional independent standard Wiener process defined on the complete probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is increasing and right continuous while $\mathcal{F}_0$ contains all $P$-null sets). Let $r(t)$, $t \ge 0$, be a right-continuous homogeneous Markov process on the probability space taking values in a finite state space $S = \{1, 2, \dots, N\}$ with generator $\Gamma = (\gamma_{ij})_{N \times N}$ given by
$$P\{r(t + \Delta) = j \mid r(t) = i\} = \begin{cases} \gamma_{ij}\Delta + o(\Delta), & i \ne j, \\ 1 + \gamma_{ii}\Delta + o(\Delta), & i = j, \end{cases}$$
for any $\Delta > 0$. Here $\gamma_{ij} \ge 0$ is the transition rate from $i$ to $j$ if $i \ne j$, while $\gamma_{ii} = -\sum_{j \ne i} \gamma_{ij}$. We assume that the Markov process $r(t)$ is independent of the Wiener process $\omega(t)$. The Borel measurable functions $f$ and $g$ are locally Lipschitz in $x$ for all $t \ge t_0$ and $i \in S$.
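Although not part of the paper's derivation, the behavior of such a switching signal is easy to illustrate numerically. The sketch below (function and variable names are illustrative; it assumes nothing beyond the generator definition above) simulates a continuous-time Markov chain from a generator matrix by drawing exponential holding times:

```python
import numpy as np

def simulate_markov_chain(gamma, r0, T, rng=None):
    """Simulate a continuous-time Markov chain r(t) on S = {0, ..., N-1}
    with generator matrix gamma (off-diagonal entries are transition rates;
    each row sums to zero), starting from state r0, up to time T.
    Returns the jump times and the state held from each jump onward."""
    rng = np.random.default_rng() if rng is None else rng
    times, states = [0.0], [r0]
    t, r = 0.0, r0
    while True:
        rate = -gamma[r, r]                # total rate of leaving state r
        if rate <= 0:                      # absorbing state: stay forever
            break
        t += rng.exponential(1.0 / rate)   # holding time ~ Exp(rate)
        if t >= T:
            break
        # jump to j != r with probability gamma[r, j] / rate
        probs = gamma[r].copy()
        probs[r] = 0.0
        r = rng.choice(len(probs), p=probs / rate)
        times.append(t)
        states.append(r)
    return times, states
```

Sample paths produced this way are piecewise constant and right continuous, matching the description of $r(t)$ above.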

For $V(x, t, i) \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$, introduce the infinitesimal generator $\mathcal{L}$ by
$$\mathcal{L}V(x, t, i) = V_t(x, t, i) + V_x(x, t, i) f(x, t, i) + \frac{1}{2}\mathrm{Tr}\{g^T(x, t, i) V_{xx}(x, t, i) g(x, t, i)\} + \sum_{j=1}^{N} \gamma_{ij} V(x, t, j),$$
where $V_t = \partial V / \partial t$, $V_x = (\partial V / \partial x_1, \dots, \partial V / \partial x_n)$, and $V_{xx} = (\partial^2 V / \partial x_k \partial x_l)_{n \times n}$.

*Definition 1 (see [14]).* A stochastic process $x(t)$ is said to be bounded in probability if the random variable $|x(t)|$ is bounded in probability uniformly in $t$; that is,
$$\lim_{c \to \infty} \sup_{t \ge t_0} P\{|x(t)| > c\} = 0.$$

Lemma 2 (see [12]). *For any $r > 0$, define the first exit time as*
$$\tau_r = \inf\{t \ge t_0 : |x(t)| \ge r\}.$$
*Assume that there exist a positive function $V(x, t, i) \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$ and parameters $c_1 > 0$ and $c_2 > 0$ such that*
$$\lim_{|x| \to \infty} \inf_{t \ge t_0,\, i \in S} V(x, t, i) = \infty, \qquad \mathcal{L}V(x, t, i) \le c_1 + c_2 V(x, t, i).$$
*Then for every $x_0 \in \mathbb{R}^n$ and $r_0 \in S$, there exists a solution $x(t)$, unique up to equivalence, of system (1).*

Lemma 3 (see [12]). *Let $V(x, t, i) \in C^{2,1}(\mathbb{R}^n \times \mathbb{R}_+ \times S; \mathbb{R}_+)$ and let $\tau_1, \tau_2$ be bounded stopping times such that $0 \le \tau_1 \le \tau_2$ a.s. If $V(x(t), t, r(t))$ and $\mathcal{L}V(x(t), t, r(t))$ are bounded on $t \in [\tau_1, \tau_2]$ a.s., then*
$$E V(x(\tau_2), \tau_2, r(\tau_2)) - E V(x(\tau_1), \tau_1, r(\tau_1)) = E \int_{\tau_1}^{\tau_2} \mathcal{L}V(x(t), t, r(t))\, dt.$$

#### 3. Problem Formulation

Consider the following stochastic nonlinear system:
$$\begin{aligned} dx_i &= x_{i+1}^{p_i}\,dt + f_i(\bar{x}_i, r(t))\,dt + g_i^T(\bar{x}_i, r(t))\,d\omega, \quad i = 1, \dots, n-1, \\ dx_n &= u^{p_n}\,dt + f_n(\bar{x}_n, r(t))\,dt + g_n^T(\bar{x}_n, r(t))\,d\omega, \\ y &= x_1, \end{aligned} \tag{9}$$
where $x = (x_1, \dots, x_n)^T \in \mathbb{R}^n$, $u \in \mathbb{R}$, and $y \in \mathbb{R}$ are the system state, control input, and output, respectively. The states $x_2, \dots, x_n$ are unmeasurable. Here $\bar{x}_i = (x_1, \dots, x_i)^T$, $i = 1, \dots, n$, and the powers $p_i$, $i = 1, \dots, n$, are odd positive integers. The nonlinear functions $f_i$ and $g_i$, $i = 1, \dots, n$, are assumed to be $C^1$, vanishing at the origin.

We need the following assumption.

*Assumption 4.* There are constants and such that where and . Let and , . Meanwhile, one of the following conditions should be satisfied: Condition , if or for all ; Condition , otherwise.

*Remark 5.* When , , and , , Assumption 4 reduces to the natural condition used for output-feedback controller design [14, 15]. Therefore, this assumption is general and reasonable. Condition or Condition in Assumption 4 plays an essential role in designing the locally Lipschitz controller, which guarantees the existence and uniqueness of the solution of system (9).

#### 4. Output-Feedback Stabilization of System (9)

By introducing the coordinates (11), where , , and is a constant to be designed, system (9) can be written as (12), whose nominal nonlinear system is (13).

The design of the output-feedback controller for system (9) is divided into three steps. In Step 1, we suppose that the states are available for measurement, and a state-feedback controller is designed for the nominal nonlinear system (13). Then, in Step 2, by constructing a reduced-order observer, an output-feedback controller is designed for (13). Finally, by using the homogeneous domination technique, the output-feedback stabilization problem is solved for system (9).

For simplicity, we assume . Under this assumption, we know that .

Choose to satisfy and , . With Assumption 4, is chosen in the following manner:

(a) Choose if Condition of Assumption 4 is satisfied.

(b) can be chosen as any satisfying if Condition of Assumption 4 holds.

*Step 1 (state-feedback controller design for the nominal nonlinear system (13)).* We introduce the transformation (14), where , , are positive constants to be designed later.

By choosing the Lyapunov function (15), one has (16). Note that (17), where . By (17) and Young's inequality in [14], one gets (18), where , , , and are positive constants.
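The estimate in (18) relies on a Young-type inequality. The paper's exact constants are not recoverable from the text, but the form commonly used in this line of work (stated in [14]) is, for any $x, y \in \mathbb{R}$ and any real numbers $p, q > 1$ with $1/p + 1/q = 1$,

$$xy \le \frac{\epsilon^p}{p}|x|^p + \frac{1}{q\epsilon^q}|y|^q, \qquad \epsilon > 0,$$

where the free parameter $\epsilon$ is what produces the adjustable positive constants appearing in estimates such as (18).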

Substituting (18) into (16), one gets (19). Choosing (20), one has (21).

*Step 2 (output-feedback controller design for the nominal nonlinear system (13)).* Since are unmeasurable, we construct the homogeneous observer (22), where is a constant gain which can be selected in a manner similar to [22], , and . By replacing with in , one obtains the output-feedback controller (23), where . Choose (24), where .

The following design procedure proceeds in a way similar to that in [22]. One can obtain (25), where , , and .

The construction of indicates that is positive definite and proper with respect to .

Hence, (25) implies that the closed-loop system described by the compact form (26) is globally asymptotically stable, where , .

By introducing the dilation weight (27), we know that (26) is homogeneous of degree .
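For readers unfamiliar with the terminology, we recall the standard definition (the paper's specific weights are not recoverable from the text, so generic weights $r_1, \dots, r_n > 0$ are used): given a dilation weight $\Delta = (r_1, \dots, r_n)$, the associated dilation is

$$\Delta_\epsilon(x) = (\epsilon^{r_1} x_1, \dots, \epsilon^{r_n} x_n), \qquad \epsilon > 0,$$

and a function $V$ is said to be homogeneous of degree $\tau$ with respect to $\Delta$ if

$$V(\Delta_\epsilon(x)) = \epsilon^{\tau} V(x) \qquad \text{for all } x \in \mathbb{R}^n,\ \epsilon > 0.$$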

*Step 3 (homogeneous output-feedback controller design for (9)).* For system (12), we construct the homogeneous observer (28), where are defined in (22). We use an output-feedback controller with the same structure as (23); specifically, (29). The closed-loop system (12), (28)-(29) can be written as (30), where (31).

Now, we state the main results of this paper.

*Theorem 6. If Assumption 4 holds for the stochastic high-order nonlinear system (9), with (11) and (28), under the output-feedback controller (29) with , where , , , and are positive constants, then one has the following:*

*(1) For every and , the closed-loop system has a solution , unique up to equivalence.*

*(2) For any and , the solution of the closed-loop system is almost surely asymptotically stable.*

*Proof.* By the definition of , we can conclude that (32). From (21) and (25), considering that and are homogeneous of degree and , respectively, one obtains (33) for a constant .

In view of Assumption 4, one gets (34), where , , and are positive constants. By (34), noting that is homogeneous of degree , one arrives at (35), where and is a constant.

Similarly to (35), we can obtain (36), where and are constants.

From (33), (35), and (36), noting that is independent of and , for system (30) we have (37), where .

From the definition of , one has . By checking the controller design process, one can obtain .

For any , define the first exit time (38). Let for any . Then is bounded in the interval a.s., which implies that is bounded on a.s. From (37), it follows that is also bounded on a.s.

By Lemma 3, one gets (39). By (32), (39), and Lemma 2, conclusion (1) holds.

With (37) and the definition of , by using Theorem in [13], conclusion (2) holds.

*Remark 7.* The unique features of the approaches proposed in this paper include the following:

(1) This paper presents the first result on the output-feedback control of stochastic nonlinear systems with Markovian switching and uncontrollable linearizations.

(2) Since the drift terms and diffusion terms are all subject to Markovian switching, a homogeneous domination approach is developed, which can effectively handle the Markovian switching and the uncontrollable linearizations simultaneously.

#### 5. A Simulation Example


Consider the following system with two modes. The Markov process belongs to the space $S = \{1, 2\}$ with generator given by , , , and . One gets and .

The system is described by (40), where , , , , , , , and .

Here, without detailed arguments, we only state the final results (41):

In the simulation, one chooses the initial values , , and . Figure 1 gives the responses of (40)-(41), from which the efficiency of the controller is demonstrated. Figure 2 shows a sample path of the Markov process .
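Since the specific system data in (40)-(41) are not recoverable from the text, the following sketch only illustrates how such a closed-loop simulation is typically produced: an Euler-Maruyama discretization of a scalar switched SDE driven by a two-mode Markov switching signal. All functions, gains, and parameter values below are illustrative assumptions, not the paper's.

```python
import numpy as np

def simulate_switched_sde(drift, diffusion, gamma, x0, r0, T, dt, rng=None):
    """Euler-Maruyama simulation of dx = f(x, r) dt + g(x, r) dw, where the
    mode r(t) is a continuous-time Markov chain with generator gamma.
    drift/diffusion are callables f(x, r), g(x, r); modes are 0, ..., N-1."""
    rng = np.random.default_rng() if rng is None else rng
    n_steps = round(T / dt)
    x = np.empty(n_steps + 1)
    r = np.empty(n_steps + 1, dtype=int)
    x[0], r[0] = x0, r0
    for k in range(n_steps):
        # Euler-Maruyama step for the continuous state
        dw = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + drift(x[k], r[k]) * dt + diffusion(x[k], r[k]) * dw
        # first-order approximation of the switching: jump i -> j w.p. gamma[i, j] * dt
        probs = gamma[r[k]] * dt
        probs[r[k]] = 1.0 + gamma[r[k], r[k]] * dt
        r[k + 1] = rng.choice(len(probs), p=probs)
    return x, r

# Illustrative two-mode example with mode-dependent, stabilizing dynamics
gamma = np.array([[-1.0, 1.0], [2.0, -2.0]])
f = lambda x, r: (-2.0 if r == 0 else -3.0) * x   # stabilizing drift per mode
g = lambda x, r: (0.5 if r == 0 else 0.2) * x     # noise vanishing at the origin
x, r = simulate_switched_sde(f, g, gamma, x0=1.0, r0=0, T=5.0, dt=1e-3,
                             rng=np.random.default_rng(1))
```

The step size must satisfy $\gamma_{ij}\,\Delta t \ll 1$ for the per-step switching probabilities to be valid; smaller steps also reduce the Euler-Maruyama discretization error.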