Mathematical Approaches in Advanced Control Theories (Special Issue)
Research Article | Open Access
Robust Adaptive Switching Control for Markovian Jump Nonlinear Systems via Backstepping Technique
This paper investigates robust adaptive switching controller design for Markovian jump nonlinear systems with unmodeled dynamics and Wiener noise. The system considered is of strict-feedback form, and the statistics of the noise are unknown owing to practical limitations. With the ordinary input-to-state stability (ISS) extended to the jump case, a stochastic Lyapunov stability criterion is proposed. Using the backstepping technique and the stochastic small-gain theorem, a switching controller is designed such that stochastic stability is ensured. Moreover, the system states converge to an attractive region whose radius can be made as small as desired by choosing appropriate control parameters. A simulation example illustrates the validity of the method.
Modern control theory was established upon the state-space analysis method introduced by Kalman in the 1960s. This method, which accurately describes the changes of internal system states by relating internal and external system variables in the time domain, has become the most important tool in system analysis. However, there remain many complex systems whose states are driven not only by continuous time but also by a series of discrete events. Such systems, whose dynamics change as abrupt events occur, are called hybrid systems. Further, if the occurrence of these events is governed by a Markov chain, the hybrid systems are called Markovian jump systems. As one branch of modern control theory, the study of Markovian jump systems has attracted considerable attention, with fruitful results achieved for the linear case, for example, stability analysis [1, 2], filtering [3, 4], and controller design [5, 6]. But the studies are far from complete, because researchers face major challenges when dealing with the nonlinear case of such complicated systems.
The difficulties in studying Markovian jump nonlinear systems (MJNSs) arise from several aspects. First of all, controller design relies largely on the specific model of the system, and it is almost impossible to find one general controller that can stabilize all nonlinear systems regardless of their form. Secondly, Markovian jump systems are used to model systems suffering sudden changes in working environment or system dynamics. For this reason, practical jump systems are usually accompanied by uncertainties, and it is hard to describe these uncertainties with a precise mathematical model. Finally, noise disturbance is an important factor to be considered. More often than not, the statistics of the noise are unknown, given the complexity of the working environment. Among the achievements for MJNSs, the form of the nonlinear system should first be taken into account. As one specific model, the nonlinear system of strict-feedback form is well studied owing to its ability to model many practical systems, for example, power converters, satellite attitude, and electrohydraulic servo systems. However, such models should be modified, since stochastic structure variations exist in these practical systems, and this specific nonlinear system has been extended to the jump case. For Markovian jump nonlinear systems of strict-feedback form, [10, 11] investigated stabilization and tracking problems, respectively, and the robust controller design for such systems with unmodeled dynamics has also been studied. However, for MJNSs suffering all the factors mentioned in this paragraph, no research work has yet been reported.
Motivated by this, this paper focuses on robust adaptive controller design for a class of MJNSs with uncertainties and Wiener noise. Compared with existing results, several practical limitations are considered, including the following: the uncertainties involve unmodeled dynamics, and the upper bound of these dynamics is not necessarily known; meanwhile, the statistics of the Wiener noise are unknown. In addition, an adaptive parameter, whose advantages have been described in the literature, is introduced into the controller design. The control strategy consists of several steps. Firstly, by applying the generalized Itô formula, the stochastic differential equation for the MJNS is deduced and the concept of JISpS (jump input-to-state practical stability) is defined. Then, with the backstepping technique and the small-gain theorem, a robust adaptive switching controller is designed for the strict-feedback system; the upper bound of the uncertainties can also be estimated. Finally, according to the stochastic Lyapunov criteria, it is shown that all signals of the closed-loop system are globally uniformly bounded in probability. Moreover, the system states converge to an attractive region whose radius can be made as small as desired by choosing appropriate control parameters.
The rest of this paper is organized as follows. Section 2 begins with some mathematical preliminaries, including the stochastic differential equation for MJNSs, and introduces the notion of JISpS and a stochastic Lyapunov stability criterion. Section 3 presents the problem description, and a robust adaptive switching controller is given based on the backstepping technique and the stochastic small-gain theorem. In Section 4, stochastic Lyapunov criteria are applied for the stability analysis. Numerical examples are given to illustrate the validity of this design in Section 5. Finally, a brief conclusion is drawn in Section 6.
2. Mathematical Notions
2.1. Stochastic Differential Equation of MJNS
Throughout the paper, unless otherwise specified, we denote by a complete probability space with a filtration satisfying the usual conditions (i.e., it is right continuous and contains all -null sets). Let stand for the usual Euclidean norm of a vector , and let stand for the supremum of a vector over the time period , that is, . The superscript denotes transpose, and we refer to as the trace of a matrix. In addition, we use to denote the space of Lebesgue square-integrable vector functions.
Consider the following Markovian jump nonlinear system: where , are the state vector and input vector of the system, respectively. , named the system regime, is a right-continuous Markov chain on the probability space taking values in the finite state space . And is a -dimensional independent Wiener process defined on the probability space, with covariance matrix , where is an unknown bounded matrix-valued function. Furthermore, we assume that the Wiener noise is independent of the Markov chain . The functions and are locally Lipschitz in for all ; namely, for any , there is a constant such that It is known that, with (2.3) holding, MJNS (2.1) has a unique solution.
Considering the right-continuous Markov chain with regime transition rate matrix , the entries are interpreted as transition rates such that where and satisfies . Here is the transition rate from regime to regime . Notice that the total probability axiom imposes negative diagonal entries and zero row sums. For each regime transition rate matrix , there exists a unique stationary distribution such that . Let denote the family of all functions on which are continuously twice differentiable in and once in . Furthermore, we give the stochastic differential equation of as where is a martingale process.
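As a concrete illustration of the regime process, the following sketch simulates a right-continuous Markov chain from a given transition rate matrix via its holding times and embedded jump chain. The two-regime generator `Q`, the initial distribution, and the horizon are hypothetical values chosen for illustration only.

```python
import numpy as np

def simulate_ctmc(Q, pi0, T, rng):
    """Simulate a right-continuous Markov chain with generator Q up to time T.

    Q[i][j] (i != j) is the transition rate from regime i to regime j;
    each row sums to zero, so Q[i][i] = -sum of the off-diagonal rates.
    Returns the jump times and the regime taken after each jump.
    """
    Q = np.asarray(Q, dtype=float)
    r = int(rng.choice(len(pi0), p=pi0))   # initial regime drawn from pi0
    t, times, regimes = 0.0, [0.0], [r]
    while True:
        rate = -Q[r, r]                    # total exit rate of current regime
        if rate <= 0.0:                    # absorbing regime: no more jumps
            break
        t += rng.exponential(1.0 / rate)   # holding time ~ Exp(rate)
        if t >= T:
            break
        probs = Q[r].copy()
        probs[r] = 0.0
        probs /= rate                      # embedded-chain jump probabilities
        r = int(rng.choice(len(probs), p=probs))
        times.append(t)
        regimes.append(r)
    return times, regimes

rng = np.random.default_rng(0)
Q = [[-1.0, 1.0], [2.0, -2.0]]             # hypothetical two-regime generator
times, regimes = simulate_ctmc(Q, [0.5, 0.5], T=10.0, rng=rng)
```

Over a long horizon, the fraction of time spent in each regime approaches the unique stationary distribution mentioned above.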
Remark 2.1. Equation (2.7) is the differential equation of MJNS (2.1); a similar result is also achieved in the literature. Compared with the differential equation of a general nonjump system, two differences come forth: the transition rates and the martingale process , both caused by the Markov chain . We will show in the following section that the martingale process also affects the controller design.
2.2. JISpS and Stochastic Small-Gain Theorem
Definition 2.2. MJNS (2.1) is JISpS in probability if for any given , there exist function , function , and a constant such that
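The defining inequality is elided above; in the ISpS-in-probability literature that this definition extends (Wu et al.), it typically takes the following form, reproduced here as a hedged reconstruction (with $\beta$ a class $\mathcal{KL}$ function, $\gamma$ a class $\mathcal{K}$ function, and $d \ge 0$):

```latex
P\left\{ |x(t)| < \beta\bigl(|x_{0}|, t\bigr)
       + \gamma\Bigl(\sup_{0 \le s \le t} |u(s)|\Bigr) + d \right\}
\ge 1 - \epsilon, \qquad \forall t \ge 0 .
```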
Remark 2.3. The definition of ISpS (input-to-state practical stability) in probability for nonjump stochastic systems was put forward by Wu et al. , and the difference between JISpS in probability and ISpS in probability lies in the expressions of the system state and control signal . For a nonjump system, the state and control signal depend only on continuous time . Jump systems, by contrast, involve both continuous time and the discrete regime . For different regimes , the control signal will differ with the sample taken even at the same time , which is why the controller is called a switching one. Based on this, the corresponding stability is called jump ISpS, an extension of ISpS. When the regime set reduces to a single mode, the definition of JISpS degenerates to ISpS.
Consider the jump interconnected dynamic system described in Figure 1: where is the state of the system and denotes exterior disturbance and/or interior uncertainty, while is independent Wiener noise of appropriate dimension. We introduce the following stochastic nonlinear small-gain theorem as a lemma, which is an extension of the corresponding result of Wu et al. .
Lemma 2.4 (stochastic small-gain theorem). Suppose that both the -system and -system are JISpS in probability with as input and as state and as input and as state, respectively; that is, for any given ,
hold with being function, and being functions, and being nonnegative constants, .
If there exist nonnegative parameters , such that the nonlinear gain functions , satisfy , then the interconnected system is JISpS in probability with as input and as state; that is, for any given , there exist a function , a function , and a parameter such that
Remark 2.5. The stochastic small-gain theorem above for jump systems is an extension of the nonjump case. The extension is achieved without mathematical difficulty, and the proof proceeds as in the nonjump case. The reason is that Lemma 2.4 only concerns the interconnection relationship between the synthetical system and its subsystems, regardless of whether the subsystems are of jump or nonjump form. If both subsystems are nonjump and ISpS in probability, the synthetical system is ISpS in probability; conversely, if both subsystems are jump and JISpS in probability, the synthetical system is correspondingly JISpS in probability.
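For orientation, the gain condition in stochastic small-gain theorems of this type typically reads as follows (a hedged sketch of the standard composed-gain condition; the nonnegative parameters of Lemma 2.4 enter as the offsets $\rho_1$, $\rho_2$):

```latex
\bigl(1 + \rho_{1}\bigr)\,\gamma_{1} \circ \bigl(1 + \rho_{2}\bigr)\,\gamma_{2}(s) \le s,
\qquad \forall s > 0,
```

so that the composed gain around the interconnection loop is strictly contractive.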
3. Problem Description and Controller Design
3.1. Problem Description
Consider the following Markovian jump nonlinear system with dynamic uncertainty and noise: where is the state vector, is the system input signal, is the unmeasured state vector, and is the output signal; is a vector of unknown adaptive parameters. The Markov chain and Wiener noise are as defined in Section 2. , are vector-valued smooth functions, and denotes the unmodeled dynamic uncertainty, which may vary with the regime taken. Both , and are locally Lipschitz, as in Section 2.
Our design objective is to find a switching controller of the form such that the closed-loop jump system is JISpS in probability and the system output remains within an attractive region around the equilibrium point. In this paper, the following assumptions are made for MJNS (3.1). (A1) The subsystem with input is JISpS in probability; namely, for any given , there exist a function , a function , and a constant such that (A2) For each , , there exists an unknown bounded positive constant such that where , are known nonnegative smooth functions for any given . Notice that this constant is not unique, since any larger one also satisfies inequality (3.3). To avoid ambiguity, we take the smallest nonnegative constant such that inequality (3.3) is satisfied.
For the design of switching controller, we introduce the following lemmas.
Lemma 3.1 (Young’s inequality ). For any two vectors , the following inequality holds where and the constants , satisfy .
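Young's inequality is used repeatedly in the derivations below. The following numerical sketch checks the usual vector form $x^{T}y \le (\epsilon^{p}/p)\,|x|^{p} + (1/(q\epsilon^{q}))\,|y|^{q}$ with $1/p + 1/q = 1$; the test vectors and the choices of $\epsilon$ and $p$ are arbitrary.

```python
import numpy as np

def young_bound(x, y, eps, p):
    """Right-hand side of the vector Young inequality
    x^T y <= (eps^p / p)|x|^p + 1/(q eps^q) |y|^q,  with 1/p + 1/q = 1."""
    q = p / (p - 1.0)
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    return (eps ** p / p) * nx ** p + (1.0 / (q * eps ** q)) * ny ** q

# spot-check on arbitrary vectors and exponents
rng = np.random.default_rng(1)
x, y = rng.normal(size=3), rng.normal(size=3)
assert float(x @ y) <= young_bound(x, y, eps=0.5, p=4.0) + 1e-12
```

The free parameter $\epsilon$ is what lets the backstepping design trade off cross terms against the damping terms chosen by the designer.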
Lemma 3.2 (martingale representation ). Let be an N-dimensional standard Wiener noise. Suppose that is an -martingale (with respect to P) and that for all . Then there exists a stochastic process such that
3.2. Controller Design
We now seek a switching controller for MJNS (3.1) such that the closed-loop system is JISpS in probability, where the parameter needs to be estimated. Denote the estimate of the adaptive parameter by and the estimate of the upper bound of the uncertainty by . Perform the new transformation For simplicity, we denote , , , , by , , , , , respectively, where , , for all , and the new coordinate is .
According to the stochastic differential equation (2.7), one has Here we define From Assumption (A2), one gets that there exists a nonnegative smooth function satisfying Inequality (3.9) can easily be deduced by using Lemma 3.1.
Considering the transformation (3.7), which contains the martingale process , according to Lemma 3.2 there exist a function and an -dimensional standard Wiener noise satisfying , where and is a positive bounded constant. Therefore we have The differential equation of the new coordinate is deduced from (3.10): the martingale process resulting from the Markov process is transformed into Wiener noise by the martingale representation theorem. To deal with this, a quartic Lyapunov function is proposed, and the Wiener noise must be accounted for in the controller design.
Choose the quartic Lyapunov function as where , are constants. and are parameter estimation errors, where and are given positive constants.
In view of (3.10) and (3.11), the infinitesimal generator of satisfies The following inequalities can be deduced by using Young's inequality and norm inequalities, with the help of changing the order of summation or exchanging summation indices: where , and , , , , are design parameters to be chosen.
Here we suggest the following adaptive laws: where , , , are design parameters to be chosen. Define the function as where , , are control parameters to be chosen, and let the virtual control signal be Thus the real control signal satisfies such that Based on Assumption (A2) and (3.9), we obtain the following inequality by applying Lemma 3.1: In (3.18), the following inequality is applied: Notice the fact that Substituting (3.18) and (3.20) into (3.12) yields Here the parameter , and the function is chosen to satisfy
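Since the design equations above are stated for the general jump case, a minimal deterministic, non-jump analogue may clarify the backstepping mechanics: for a second-order strict-feedback system, a virtual control stabilizes the first subsystem, and the real control cancels the known nonlinearity and compensates the derivative of the virtual control. All functions and gains below are hypothetical illustrations, not the paper's controller.

```python
import numpy as np

# Deterministic, non-jump illustration of backstepping (hypothetical f1, f2, gains):
#   dx1/dt = x2 + f1(x1),   dx2/dt = u + f2(x1, x2)
f1 = lambda x1: np.sin(x1)
f2 = lambda x1, x2: x1 * x2
df1 = lambda x1: np.cos(x1)              # f1', needed for the derivative of alpha1

c1, c2 = 2.0, 2.0                        # positive design gains

def control(x1, x2):
    z1 = x1
    alpha1 = -c1 * z1 - f1(x1)           # virtual control for the x1-subsystem
    z2 = x2 - alpha1
    # alpha1 depends on x1, so its time derivative enters the real control
    dalpha1 = (-c1 - df1(x1)) * (x2 + f1(x1))
    return -c2 * z2 - z1 - f2(x1, x2) + dalpha1

# Euler integration: the closed-loop error dynamics are z1' = z2 - c1 z1,
# z2' = -z1 - c2 z2, so the state converges to the origin.
x1, x2, dt = 1.0, -0.5, 1e-3
for _ in range(20000):
    u = control(x1, x2)
    x1, x2 = x1 + dt * (x2 + f1(x1)), x2 + dt * (u + f2(x1, x2))
```

In the paper's setting the same recursion is carried out with the quartic Lyapunov function, the adaptive laws for the unknown parameters, and one controller per regime.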
4. Stochastic Stability Analysis
Theorem 4.1. Considering MJNS (3.1) with Assumption (A2) holding, the -subsystem is JISpS in probability with the adaptive laws (3.14) and switching control law (3.16) adopted; meanwhile, all solutions of the closed-loop -subsystem are ultimately bounded.
Proof. Considering MJNS (3.1) with the Lyapunov function (3.11), the following equations hold:
Thus (3.21) can be written as
where positive scalar is given as
It is easily seen that is a function with given, and appropriate control parameters , , can be chosen to satisfy .
For each integer , define a stopping time as Obviously, almost surely as . Noticing that if , we can apply the generalized Itô formula to derive that, for any , Letting and applying Fatou's lemma to (4.5), we have By the mean value theorem for integration, there is According to the property of the function, the following inequality is deduced: According to (3.11), one obtains Consequently, Defining a function , a function , and a nonnegative number as and applying Chebyshev's inequality, we conclude that the -subsystem of MJNS (3.1) is JISpS in probability.
The proof is completed.
Theorem 4.2. Considering the MJNS (3.1) with Assumptions (A1), (A2) holding, the interconnected Markovian jump system is JISpS in probability with adaptive laws (3.14) and switching control law (3.16) adopted; meanwhile all solutions of closed-loop system are ultimately bounded. Furthermore, the system output could be regulated to an arbitrarily small neighborhood of the equilibrium point in probability within finite time.
Proof. From Assumption (A1), the subsystem is JISpS in probability, and it has been shown in Theorem 4.1 that the subsystem is JISpS in probability. Similar to the proof in the literature, we conclude that the entire MJNS (3.1) is JISpS in probability; that is, for any given , there exist and such that if , the output of the jump system satisfies Meanwhile, can be made as small as desired by choosing appropriate control parameters.
5. Numerical Simulation

Without loss of generality, in this section we consider a second-order Markovian jump nonlinear system with regime transition space , with unmodeled dynamics and noise, as follows: where the transition rate matrix is with stationary distribution .
Here let the noise covariance be and the system dynamics for each mode be as follows: From Assumption (A2), we have where and , and the subsystem satisfies where , , , and it can be checked that the stochastic small-gain theorem is satisfied. Thus the control law is taken as follows (here ).
Case 1. The system regime is :
Case 2. The system regime is : In the computation, we set the initial values to be , , , let the parameters be , , , , , , and take the time step as 0.05 s. For comparison, two groups of control parameters are given. First we take the parameter values , , and the simulation results are as follows. Figure 2 shows the regime transition of the jump system, Figure 3 shows the system output, which is defined as the system state , and Figure 4 shows the system state . Figure 5 shows the corresponding switching controller . Finally, Figure 6 shows the trajectory of the adaptive parameter , and Figures 7 and 8 show the trajectories of the parameters , , respectively.
Now we choose different control parameters , and repeat the simulation. The results are as follows. Figure 9 shows the regime transition of the jump system, Figure 10 shows the system output, which is defined as the system state , Figure 11 shows the system state , and Figure 12 shows the corresponding switching controller . The trajectory of the adaptive parameter is shown in Figures 13 and 14, and Figure 15 shows the trajectories of the parameters , , respectively.
Comparing the results of the two simulations, all signals of the closed-loop system are globally uniformly ultimately bounded, and the system output can be regulated to a neighborhood of the equilibrium point despite different jump samples. As can be seen from the figures, larger values of , , , help to increase the convergence speed of the system states. The reason is that increasing these parameters increases the value of , which determines the convergence speed of the system states. The adaptive parameters and , also converge faster as these parameters increase.
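The simulation loop described above can be sketched generically as follows: Euler-Maruyama integration of a scalar jump system with regime-dependent drift, noise intensity, and switching feedback gain. The transition rates, gains, and noise levels below are hypothetical placeholders, not the values of the example system.

```python
import numpy as np

# Euler-Maruyama simulation of a scalar regime-switching system
#   dx = (a[r] x + u) dt + sigma[r] dW,   u = -k[r] x  (switching feedback).
# All numerical values here are hypothetical placeholders.
rng = np.random.default_rng(0)
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])   # hypothetical transition rate matrix
a = [1.0, -0.5]                            # regime-dependent drift coefficients
sigma = [0.3, 0.1]                         # regime-dependent noise intensities
k = [5.0, 4.0]                             # regime-dependent feedback gains

dt, steps = 0.05, 2000                     # 0.05 s time step, as in the example
x, r = 1.5, 0
xs = []
for _ in range(steps):
    if rng.random() < -Q[r, r] * dt:       # first-order approximation of a jump
        probs = Q[r].copy()
        probs[r] = 0.0
        probs /= probs.sum()
        r = int(rng.choice(len(probs), p=probs))
    u = -k[r] * x                          # switching controller, gain depends on r
    x += (a[r] * x + u) * dt + sigma[r] * np.sqrt(dt) * rng.normal()
    xs.append(x)
# the state settles into a small neighborhood of the origin
```

Increasing the feedback gains shrinks the residual neighborhood and speeds up convergence, mirroring the parameter comparison discussed above.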
Remark 5.1. Much research work has been devoted to the study of nonlinear systems using the small-gain theorem [16, 19]. In contrast to those contributions, this paper considers a more general form than nonjump systems. The controller varies with the regime taken, and it differs in two aspects (see (3.16)): the coupling of regimes and , both caused by the Markovian jumps. The switching controller degenerates to an ordinary one if . This controller design method can also be applied to nonjump nonlinear systems.
6. Conclusion

In this paper, the robust adaptive switching controller design for a class of Markovian jump nonlinear systems is studied. Such MJNSs, suffering from unmodeled dynamics and noise of unknown covariance, are of strict-feedback form. With the extension of input-to-state practical stability (ISpS) to the jump case, as well as the small-gain theorem, a stochastic Lyapunov stability criterion is put forward. Using the backstepping technique, a switching controller is designed which ensures that the jump nonlinear system is JISpS in probability. Moreover, the upper bound of the uncertainties can be estimated, and the system output converges to an attractive region around the equilibrium point whose radius can be made as small as desired by choosing appropriate control parameters. Numerical examples are given to show the effectiveness of the proposed design.
Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant 60904021 and the Fundamental Research Funds for the Central Universities under Grant WK2100060004.
References

- M. Mariton, Jump Linear Systems in Automatic Control, Marcel Dekker, New York, NY, USA, 1990.
- X. Mao, “Stability of stochastic differential equations with Markovian switching,” Stochastic Processes and Their Applications, vol. 79, no. 1, pp. 45–67, 1999.
- C. E. de Souza, A. Trofino, and K. A. Barbosa, “Mode-independent filters for Markovian jump linear systems,” IEEE Transactions on Automatic Control, vol. 51, no. 11, pp. 1837–1841, 2006.
- S. Xu, J. Lam, and X. Mao, “Delay-dependent control and filtering for uncertain Markovian jump systems with time-varying delays,” IEEE Transactions on Circuits and Systems. I. Regular Papers, vol. 54, no. 9, pp. 2070–2077, 2007.
- N. Xiao, L. Xie, and M. Fu, “Stabilization of Markov jump linear systems using quantized state feedback,” Automatica, vol. 46, no. 10, pp. 1696–1702, 2010.
- T. Hou, W. Zhang, and H. Ma, “Finite horizon control for discrete-time stochastic systems with Markovian jumps and multiplicative noise,” IEEE Transactions on Automatic Control, vol. 55, no. 5, pp. 1185–1191, 2010.
- H. Sira-Ramírez, M. Rios-Bolivar, and A. S. I. Zinober, “Adaptive dynamical input-output linearization of dc to dc power converters: a backstepping approach,” International Journal of Robust and Nonlinear Control, vol. 7, no. 3, pp. 279–296, 1997.
- R. Kristiansen, P. J. Nicklasson, and J. T. Gravdahl, “Satellite attitude control by quaternion-based backstepping,” IEEE Transactions on Control Systems Technology, vol. 17, no. 1, pp. 227–232, 2009.
- C. Kaddissi, J. P. Kenne, and M. Saad, “Indirect adaptive control of an electrohydraulic Servo system based on nonlinear backstepping,” IEEE/ASME Transactions on Mechatronics, vol. 16, no. 6, pp. 1171–1177, 2010.
- Z.-J. Wu, X.-J. Xie, P. Shi, and Y.-Q. Xia, “Backstepping controller design for a class of stochastic nonlinear systems with Markovian switching,” Automatica, vol. 45, no. 4, pp. 997–1004, 2009.
- Z. J. Wu, J. Yang, and P. Shi, “Adaptive tracking for stochastic nonlinear systems with Markovian switching,” IEEE Transactions on Automatic Control, vol. 55, no. 9, pp. 2135–2141, 2010.
- J. Zhu, J. Park, K.-S. Lee, and M. Spiryagin, “Switching controller design for a class of Markovian jump nonlinear systems using stochastic small-gain theorem,” Advances in Difference Equations, vol. 2009, Article ID 896218, 23 pages, 2009.
- D. Dong, C. Chen, J. Chu, and T. J. Tarn, “Robust quantum-inspired reinforcement learning for robot navigation,” IEEE/ASME Transactions on Mechatronics, vol. 17, no. 1, pp. 86–97, 2012.
- D. W. Stroock, An Introduction to Markov Processes, vol. 230 of Graduate Texts in Mathematics, Springer, Berlin, Germany, 2005.
- C. Yuan and X. Mao, “Robust stability and controllability of stochastic differential delay equations with Markovian switching,” Automatica, vol. 40, no. 3, pp. 343–354, 2004.
- Z.-J. Wu, X.-J. Xie, and S.-Y. Zhang, “Adaptive backstepping controller design using stochastic small-gain theorem,” Automatica, vol. 43, no. 4, pp. 608–620, 2007.
- B. Øksendal, Stochastic Differential Equations, Springer, New York, NY, USA, 2000.
- M. M. Polycarpou and P. A. Ioannou, “A robust adaptive nonlinear control design,” Automatica, vol. 32, no. 3, pp. 423–427, 1996.
- Z.-P. Jiang, “A combined backstepping and small-gain approach to adaptive output feedback control,” Automatica, vol. 35, no. 6, pp. 1131–1139, 1999.
Copyright © 2012 Jin Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.