Mathematical Problems in Engineering

Volume 2016 (2016), Article ID 5185784, 11 pages

http://dx.doi.org/10.1155/2016/5185784

## Robust Stability, Stabilization, and H∞ Control of a Class of Nonlinear Discrete Time Stochastic Systems

^{1}College of Information and Control Engineering, China University of Petroleum (East China), Qingdao, Shandong 266510, China

^{2}College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao, Shandong 266590, China

Received 14 November 2015; Revised 20 March 2016; Accepted 31 March 2016

Academic Editor: Mingcong Deng

Copyright © 2016 Tianliang Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper studies robust stability, stabilization, and H∞ control for a class of nonlinear discrete time stochastic systems. Firstly, easily testable criteria for stochastic stability and stochastic stabilizability are obtained via linear matrix inequalities (LMIs). Then a robust state feedback H∞ controller is designed such that the concerned system not only is internally stochastically stabilizable but also satisfies robust H∞ performance. Moreover, previous results on nonlinearly perturbed discrete stochastic systems are generalized to systems with state-, control-, and external-disturbance-dependent noise simultaneously. Two numerical examples are given to illustrate the effectiveness of the proposed results.

#### 1. Introduction

Stochastic control has been one of the most important research topics in modern control theory. The study of stochastic stability can be traced back to the 1960s; see [1] and the well-known recent monographs [2, 3]. Stability is the first problem considered in system analysis and synthesis, while stabilization seeks a controller that stabilizes an unstable system. H∞ control is one of the most important robust control approaches, which aims to design a controller that restrains the external disturbance below a given level. We refer the reader to [4–9] for stability and stabilization of Itô-type stochastic systems and [10–14] for stability and stabilization of discrete time stochastic systems. Stochastic H∞ control of Itô-type systems started from [15] and has been extensively studied in recent years; see [16–20] and the references therein. Discrete time H∞ control with multiplicative noise can be found in [21–25].

Along with the development of computer technology, discrete time difference systems have attracted a great deal of attention and have been studied extensively; see [26, 27]. The reason is twofold: firstly, discrete time systems are ideal mathematical models in the study of satellite attitude control [28], mathematical finance [29], single degree of freedom inverted pendulums [21], and gene regulatory networks [30]. Secondly, as noted in [27], the study of discrete time systems has an advantage over continuous time differential systems from the perspective of computation; moreover, it presents a very good approach to studying differential equations and functional differential equations.

From the existing works on stability, stabilization, and H∞ control of discrete time stochastic systems with multiplicative noise, we find that, except for linear stochastic systems, for which complete results have been obtained [22–25], few works address the stability of general nonlinear discrete time stochastic systems [12] or the H∞ control of affine nonlinear discrete time stochastic systems [21]. Up to now, the results on deterministic discrete time nonlinear H∞ control [31] have not been fully generalized to the above nonlinear stochastic systems. For example, although some stability results for continuous time Itô systems [3] can be extended to nonlinear discrete stochastic systems [12], the corresponding criteria are not easily applied in practice, because the mathematical expectation of the trajectory is involved in the preconditions. In addition, [21] attempted to treat general nonlinear H∞ control of discrete time stochastic systems, but only the H∞ control of a class of norm bounded systems was completely solved, based on the linear matrix inequality (LMI) approach. As noted in [32], the general H∞ control of nonlinear discrete time stochastic systems with multiplicative noise remains unsolved. We have to admit that some research issues for discrete systems are more difficult to solve than their continuous time counterparts. For instance, a stochastic maximum principle for Itô systems was obtained in 1990 [33], while a nonlinear discrete time maximum principle was presented only recently in [34].

Recently, [7, 13] investigated the robust quadratic stability and feedback stabilization of classes of nonlinear continuous time and discrete time systems, respectively, where the nonlinear terms are quadratically bounded. Such a nonlinear constraint is of great practical importance and has been widely used in many types of systems, such as singularly perturbed systems with uncertainties [35, 36], neutral systems with nonlinear perturbations [37], impulsive Takagi-Sugeno fuzzy systems [38], and some time-delay systems [18]. It should be pointed out that the small gain theorem can also be used to examine robustness, as done in [39] for the study of a simple adaptive control system within the framework of the small gain theorem. In addition, the robustness of a class of nonlinear feedback systems with unknown perturbations was discussed based on robust right coprime factorization and the passivity property [40]. All these methods are expected to play important roles in stochastic uncertain H∞ control.

This paper deals with a class of nonlinear uncertain discrete time stochastic systems in which the system state, control input, and external disturbance depend on noise simultaneously, often called (x, u, v)-dependent noise for short [24]; that is, not only the system state, as in [21], but also the control input and external disturbance are subject to random noise. Hence, the models considered here have wider applications. The nonlinear dynamic term is unknown a priori but belongs to a class of functions with a bounded energy level, which represents a very important kind of nonlinear function and has been studied by many researchers; see, for example, [41]. For this class of nonlinear discrete time stochastic systems, stochastic stability, stabilization, and H∞ control are discussed, respectively, and easily testable criteria are obtained. The obtained results extend the previous works to more general models.

The paper is organized as follows: in Section 2, we give a description of the considered nonlinear stochastic systems and define robust stochastic stability and stabilization. Section 3 contains our main results. Section 3.1 presents a robust stability criterion which extends the result of [13] to more general stochastic systems. Section 3.2 gives a sufficient condition for robust stabilization. Section 3.3 is about H∞ control, where an LMI-based sufficient condition for the existence of a static state feedback H∞ controller is established. All our main results are expressed in terms of LMIs. In Section 4, two examples are constructed to show the effectiveness of the obtained results.

For convenience, the notations adopted in this paper are as follows.

$M^{T}$ is the transpose of the matrix or vector $M$; $M \geq 0$ ($M > 0$) means that $M$ is a positive semidefinite (positive definite) symmetric matrix; $I$ is the identity matrix; $\mathbb{R}^{n}$ is the $n$-dimensional Euclidean space; $\mathbb{R}^{m \times n}$ is the space of all $m \times n$ matrices with entries in $\mathbb{R}$; $\mathbb{N}$ is the natural number set, that is, $\mathbb{N} = \{0, 1, 2, \ldots\}$; $\mathbb{N}_{T}$ denotes the set $\{0, 1, \ldots, T\}$; $l_{w}^{2}$ denotes the set of all nonanticipative square summable stochastic processes. The $l^{2}$-norm of $v \in l_{w}^{2}$ is defined by $\|v\|_{l^{2}}^{2} = \sum_{k=0}^{\infty} \mathbb{E}\|v(k)\|^{2}$. Similarly, $l_{w}^{2}(\mathbb{N}_{T})$ and $\|v\|_{l^{2}[0,T]}$ can be defined.

#### 2. System Descriptions and Definitions

Consider the discrete stochastic iterative system (5), where $x(k)$ is the $n$-dimensional state vector and $u(k)$ is the $m$-dimensional control input. $\{w(k)\}_{k \in \mathbb{N}}$ is a sequence of one-dimensional independent white noise processes defined on the complete filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_k\}, \mathcal{P})$. Assume that $\mathbb{E}[w(k)] = 0$ and $\mathbb{E}[w(j)w(k)] = \delta_{jk}$, where $\mathbb{E}$ stands for the expectation operation and $\delta_{jk}$ is a Kronecker function defined by $\delta_{jk} = 0$ for $j \neq k$ while $\delta_{jk} = 1$ for $j = k$. Without loss of generality, the initial state $x(0) = x_{0}$ is assumed to be deterministic. The following assumption holds throughout this paper.

*Assumption 1.* The nonlinear functions $f$ and $g$ describe the parameter uncertainty of the system and satisfy the following quadratic inequalities, labeled (6) and (7): $f^{T}(x(k))\,f(x(k)) \leq \alpha^{2}\, x^{T}(k) F^{T} F\, x(k)$ and $g^{T}(x(k))\,g(x(k)) \leq \beta^{2}\, x^{T}(k) G^{T} G\, x(k)$ for all $x(k) \in \mathbb{R}^{n}$, where $\alpha$ (respectively, $\beta$) is a constant related to the function $f$ (respectively, $g$), and $F$ (respectively, $G$) is a constant matrix reflecting the structure of $f$ (respectively, $g$).

We note that inequalities (6) and (7) can be written in the matrix form (8). System (5) can be regarded as a generalized version of the systems in [13, 42]. We note that, in system (5), the system state, control input, and uncertain terms depend on noise simultaneously, which makes the model more useful in describing many practical phenomena.

*Definition 2.* The unforced system (5) with $u = 0$ is said to be robustly stochastically stable with margins $\alpha^{*}$ and $\beta^{*}$ if there exists a constant $c > 0$ such that $\sum_{k=0}^{\infty} \mathbb{E}\|x(k)\|^{2} \leq c\, \|x_{0}\|^{2}$ for all nonlinear functions satisfying (6) and (7) with $\alpha \leq \alpha^{*}$ and $\beta \leq \beta^{*}$.

Definition 2 implies $\lim_{k \to \infty} \mathbb{E}\|x(k)\|^{2} = 0$, since the terms of a convergent series of nonnegative numbers tend to zero.

*Definition 3.* System (5) is said to be robustly stochastically stabilizable if there exists a state feedback control law $u(k) = Kx(k)$ such that the closed-loop system is robustly stochastically stable for all nonlinear functions satisfying (6) and (7).

When the external disturbance $v(k)$ is present in system (5), we consider the nonlinear perturbed system (11), where $v(k)$ and $z(k)$ are, respectively, the disturbance signal and the controlled output. The disturbance $v$ is assumed to belong to $l_{w}^{2}$, so $v(k)$ is independent of $w(k)$.

*Definition 4 (H∞ control).* For a given disturbance attenuation level $\gamma > 0$, $u^{*}$ is an H∞ control of system (11) if (i) system (11) is internally stochastically stabilizable by $u^{*}$ in the absence of $v$; that is, the closed-loop system with $v = 0$ is robustly stochastically stable; (ii) the H∞ norm of system (11) is less than $\gamma$.
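The display defining the H∞ norm is missing from this version; in the standard formulation of this literature (notation assumed), condition (ii) reads:

```latex
\left\| \mathcal{L} \right\|_{\infty}
  \;=\; \sup_{\substack{v \in l_{w}^{2},\; v \neq 0 \\ x_{0} = 0}}
  \frac{\left\| z \right\|_{l^{2}}}{\left\| v \right\|_{l^{2}}} \;<\; \gamma ,
```

where $\mathcal{L}$ denotes the perturbation operator mapping the disturbance $v$ to the controlled output $z$ under zero initial state.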

#### 3. Main Results

In this section, we give our main results on stochastic stability, stochastic stabilization, and robust H∞ control via an LMI-based approach. Firstly, we introduce two lemmas which will be used in the proofs of our main results.

*Lemma 5 (Schur's lemma).* For a real symmetric matrix $S = \begin{bmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{bmatrix}$, the following three conditions are equivalent: (i) $S < 0$; (ii) $S_{11} < 0$, $S_{22} - S_{12}^{T} S_{11}^{-1} S_{12} < 0$; (iii) $S_{22} < 0$, $S_{11} - S_{12} S_{22}^{-1} S_{12}^{T} < 0$.

*Lemma 6.* For any real matrices $X$, $Y$, and $Z = Z^{T} > 0$ with appropriate dimensions, we have the inequality (14): $X^{T} Y + Y^{T} X \leq X^{T} Z X + Y^{T} Z^{-1} Y$.

*Proof.* Because $Z > 0$, inequality (14) is an immediate corollary of the well-known inequality $(Z^{1/2} X - Z^{-1/2} Y)^{T} (Z^{1/2} X - Z^{-1/2} Y) \geq 0$.
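In the scalar case $Z = \varepsilon > 0$, Lemma 6 reduces to the elementary bound $2xy \leq \varepsilon x^{2} + \varepsilon^{-1} y^{2}$. The following quick sketch (illustrative, not part of the paper) checks this special case on random samples:

```python
# Scalar special case of Lemma 6: for any real x, y and eps > 0,
#     2*x*y <= eps*x**2 + (1/eps)*y**2,
# which follows from (sqrt(eps)*x - y/sqrt(eps))**2 >= 0.
import random

random.seed(0)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    eps = random.uniform(0.01, 10.0)
    # small tolerance guards against floating-point rounding
    assert 2 * x * y <= eps * x * x + y * y / eps + 1e-9
print("scalar Lemma 6 verified on 1000 random samples")
```

The free parameter $\varepsilon$ is exactly what appears as the scalar decision variable in the LMIs below: it trades off how much of the cross term is absorbed by each quadratic term.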

*3.1. Robust Stability Criteria*

Consider the unforced stochastic discrete time system (16), where $f$ and $g$ satisfy (8). The following theorem gives a sufficient condition for robust stochastic stability of system (16).

*Theorem 7.* System (16) with margins $\alpha^{*}$ and $\beta^{*}$ is robustly stochastically stable if there exist a symmetric positive definite matrix $P$ and a scalar $\epsilon > 0$ such that the LMI (17) holds.

*Proof.* If (17) holds, we set $V(x(k)) = x^{T}(k) P x(k)$ as a Lyapunov function candidate for system (16), where $P > 0$ by (17). Note that $x(k)$ and $w(k)$ are independent, so the difference generator is given by (18). Applying Lemma 6 and inequalities (6)-(7), we obtain the estimate (19). Substituting (19) into (18), we achieve (20). By Schur's complement, the resulting condition is equivalent to a matrix inequality which holds by (17). Denote by $\lambda_{\max}(P)$ and $\lambda_{\min}(P)$ the largest and the smallest eigenvalues of the matrix $P$, respectively; then (20) yields a stepwise decay estimate on $\mathbb{E}V(x(k))$. Taking summation on both sides of this inequality from $k = 0$ to $T$, we obtain a bound on $\sum_{k=0}^{T} \mathbb{E}\|x(k)\|^{2}$, which leads to (26). Hence, the robust stochastic stability of system (16) is obtained from (26) by letting $T \to \infty$.
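As a numerical illustration of the stability notion in Theorem 7 (not the paper's example), the following sketch simulates a hypothetical scalar instance of system (16); the coefficients a, c, the margins alpha, beta, and the perturbations f, g are all assumed choices satisfying bounds of the form (6)-(7), and E|x(k)|² is estimated by Monte Carlo:

```python
# Hedged, illustrative sketch only -- a hypothetical scalar instance of the
# unforced system (16):
#     x(k+1) = a*x(k) + f(x(k)) + [c*x(k) + g(x(k))] * w(k),
# with |f(x)| <= alpha*|x| and |g(x)| <= beta*|x| as in Assumption 1.
# For the values below, (|a|+alpha)**2 + (|c|+beta)**2 < 1, which in the
# scalar case forces E|x(k)|^2 to decay geometrically.
import math
import random

random.seed(1)
a, c = 0.5, 0.3          # assumed nominal coefficients
alpha, beta = 0.1, 0.1   # assumed uncertainty margins

def f(x):
    return alpha * math.sin(x) * x   # one admissible perturbation, |f| <= alpha*|x|

def g(x):
    return beta * math.cos(x) * x    # one admissible perturbation, |g| <= beta*|x|

def mean_square(steps, runs=2000, x0=1.0):
    """Monte Carlo estimate of E|x(steps)|^2 starting from x(0) = x0."""
    total = 0.0
    for _ in range(runs):
        x = x0
        for _ in range(steps):
            w = random.gauss(0.0, 1.0)      # unit-variance white noise
            x = a * x + f(x) + (c * x + g(x)) * w
        total += x * x
    return total / runs

print("estimated E|x(30)|^2 =", mean_square(30))
```

Since the cross term vanishes in expectation (w(k) is zero-mean and independent of x(k)), the mean-square contraction factor per step is at most (|a|+α)² + (|c|+β)² = 0.52 here, so the printed estimate is very small, consistent with Definition 2.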

*Remark 8.* From Theorem 7, if LMI (17) has feasible solutions, then, for any bounded parameters $\alpha$ and $\beta$ of the uncertain perturbation satisfying $\alpha \leq \alpha^{*}$ and $\beta \leq \beta^{*}$, system (16) is robustly stochastically stable with margins $\alpha^{*}$ and $\beta^{*}$.

*3.2. Robust Stabilization Criteria*

In this subsection, a sufficient condition for robust stochastic stabilization will be given in terms of an LMI.

*Theorem 9.* System (5) with margins $\alpha^{*}$ and $\beta^{*}$ is robustly stochastically stabilizable if there exist real matrices $X > 0$ and $Y$ and a real scalar $\epsilon > 0$ such that the LMI (27) holds, with the blocks defined in (28). In this case, $u(k) = Y X^{-1} x(k)$ is a robustly stochastically stabilizing control law.

*Proof.* We consider synthesizing a state feedback controller $u(k) = K x(k)$ to stabilize system (5). Substituting $u(k) = K x(k)$ into system (5) yields the closed-loop system (29). By Theorem 7, system (29) is robustly stochastically stable if there exist a matrix $P > 0$ and a scalar $\epsilon > 0$ such that the LMI (30) holds. Let $X = P^{-1}$, pre- and postmultiply both sides of inequality (30) blockwise by $X$, and it yields (32). In order to transform (32) into a suitable LMI form, we set $Y = K X$; then the quadratic term is equivalent to (33). Combining (33) with (32) and setting the gain matrix $K = Y X^{-1}$, LMI (27) is obtained. The proof is completed.
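The linearizing change of variables in this proof can be made explicit. Assuming standard notation in which $A$ and $B$ denote the nominal state and input matrices of system (5) (the displayed formulas are missing from this version), the key identity is:

```latex
X\,(A+BK)^{T}\,P\,(A+BK)\,X
  \;=\; (AX+BY)^{T}\,X^{-1}\,(AX+BY),
\qquad X = P^{-1},\;\; Y = KX .
```

The right-hand side is affine in the new variables $X$ and $Y$ except for the inner $X^{-1}$, which Lemma 5 (Schur's complement) moves into an off-diagonal block; this is what turns the bilinear stabilization condition into the LMI (27).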

*3.3. H∞ Control*

In this subsection, the main result on robust H∞ control will be given via an LMI approach.

*Theorem 10.* Consider system (11) with margins $\alpha^{*}$ and $\beta^{*}$. For a given $\gamma > 0$, if there exist real matrices $X > 0$ and $Y$ and a scalar $\epsilon > 0$ satisfying the LMI (34), with the shorthand defined in (35), then system (11) is H∞ controllable, and the robust H∞ control law is $u^{*}(k) = Y X^{-1} x(k)$ for $k \in \mathbb{N}$.

*Proof.* When $v = 0$, by Theorem 9, system (11) is internally stabilizable via $u^{*}(k) = Y X^{-1} x(k)$, because LMI (34) implies LMI (27). Next, we only need to show that the H∞ norm of the closed-loop system is less than $\gamma$.

Take $x_{0} = 0$ and choose the Lyapunov function $V(x(k)) = x^{T}(k) P x(k)$, where $P > 0$ is to be determined; then, for the closed-loop system with $u = u^{*}$ and $v \in l_{w}^{2}$, keeping in mind that $x(k)$ and $w(k)$ are independent, we compute the difference $\mathbb{E}V(x(k+1)) - \mathbb{E}V(x(k))$. Then, for any $T \in \mathbb{N}$, we consider the finite-horizon performance index (38). Using Lemma 6, we obtain the estimate (39). Similarly, the following inequalities can also be obtained: (40). Substituting (39)-(40) into inequality (38) and considering (35), it yields (41). Letting $T \to \infty$ in (41), it is easy to see that if the resulting matrix inequality is negative definite, then the H∞ norm of system (11) is less than $\gamma$. Next, we give an LMI sufficient condition for this negative definiteness. Notice that the performance index can be written as a quadratic form in the state and disturbance; using Lemma 5, negative definiteness of its kernel matrix is equivalent to (47). It is obvious that seeking the gain matrix $K$ requires solving LMI (47). Setting $X = P^{-1}$ and $Y = K X$, pre- and postmultiplying both sides of (47) blockwise by $X$ and considering (35), (34) is obtained immediately. The proof is completed.

#### 4. Numerical Examples

This section presents two numerical examples to demonstrate the validity of the main results described above.

*Example 1.* Consider system (5) with the given parameter matrices. For the unforced system (16) with $u = 0$, the corresponding state locus diagram is shown in Figure 1. From Figure 1, it is easy to see that the state values diverge seriously through the iterations. Hence the unforced system is not stable.

To design a feedback controller such that the closed-loop system is stochastically stable, we use the Matlab LMI Toolbox and find a symmetric positive definite matrix $X$, a real matrix $Y$, and a scalar $\epsilon$ that solve LMI (27). So we get the feedback gain $K = Y X^{-1}$. Substituting $u(k) = K x(k)$ into system (5), the state trajectories of the closed-loop system are shown in Figure 2.

From Figure 2, one can see that the controlled system achieves stability under the proposed controller. Meanwhile, even in the presence of the nonlinear perturbation, the controlled system remains stabilized.
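To make the stabilization mechanism concrete, the following illustrative sketch (with hypothetical scalar parameters, not the matrices of Example 1) compares the unforced and the state-feedback-controlled versions of a scalar system of the form (5); the gain K below is simply assumed, standing in for the LMI-derived gain $K = Y X^{-1}$:

```python
# Illustrative sketch with hypothetical parameters (not Example 1's data):
# a scalar instance of system (5) with multiplicative noise,
#     x(k+1) = a*x(k) + b*u(k) + [c*x(k) + d*u(k)] * w(k),
# comparing the unforced system (u = 0, mean-square unstable since
# a**2 + c**2 > 1) with the closed-loop system under u = K*x.
import random

random.seed(2)
a, b, c, d = 1.2, 1.0, 0.3, 0.1   # assumed coefficients
K = -0.9                          # assumed gain, standing in for K = Y X^{-1}

def ms_trajectory(gain, steps=25, runs=2000, x0=1.0):
    """Monte Carlo estimate of E|x(steps)|^2 under u(k) = gain*x(k)."""
    total = 0.0
    for _ in range(runs):
        x = x0
        for _ in range(steps):
            u = gain * x
            w = random.gauss(0.0, 1.0)
            x = a * x + b * u + (c * x + d * u) * w
        total += x * x
    return total / runs

open_loop = ms_trajectory(0.0)   # grows like (a**2 + c**2)**k = 1.53**k
closed_loop = ms_trajectory(K)   # decays: (a + b*K)**2 + (c + d*K)**2 is about 0.134
print("open loop:", open_loop, " closed loop:", closed_loop)
```

The open-loop mean-square value blows up because $a^{2} + c^{2} > 1$, while the closed-loop value decays since $(a + bK)^{2} + (c + dK)^{2} < 1$, mirroring the divergent trajectories in Figure 1 versus the convergent ones in Figure 2.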

In order to show the robustness, we use different values of the perturbation bound in Example 1 below. We reset two larger values of this bound in system (5), with the corresponding trajectories shown in Figures 3 and 4, respectively. Comparing Figures 1, 3, and 4, intuitively speaking, the larger the bound, the more divergent the autonomous system (5) becomes. By using Theorem 9, the corresponding controllers for the two cases can be obtained. Substituting the two gains into system (5) in turn, the corresponding closed-loop responses are shown in Figures 5 and 6, respectively.

From the simulation results, we can see that the larger this bound is, the slower the system converges. This observation is reasonable: the larger the uncertainty in the system, the stronger the robustness required of the controller.