Mathematical Theories and Applications for Nonlinear Control Systems
H∞ Control for Nonlinear Infinite Markov Jump Systems
In this paper, we discuss the infinite horizon H∞ control problem for a class of nonlinear stochastic systems with state-, control-, and disturbance-dependent noise. The jumping parameters are modelled as an infinite-state Markov chain. Based on the solvability of a set of coupled Hamilton-Jacobi inequalities (HJIs), an exponentially mean square stabilizing H∞ controller for the considered nonlinear stochastic systems is obtained. A numerical example is given to show the effectiveness of the proposed design method.
During the past decades, as one of the most important robust control designs, H∞ control has been extensively studied in both theory and practical applications [1, 2]. From the time-domain viewpoint, H∞ control seeks a control law that attenuates the effect of the external disturbance below a given level. Due to their ability to model many real plants in practice, stochastic systems have gained much attention. In particular, stochastic H∞ control was first investigated in [3] for Itô systems, where a stochastic bounded real lemma was established in the form of linear matrix inequalities. References [4, 5], respectively, studied H∞ filtering and H∞ control for nonlinear stochastic systems via solving second-order nonlinear HJIs.
Stochastic systems with Markov jumps are a powerful tool for describing physical systems that may encounter abrupt changes in their dynamics. In the theoretical study of stochastic Markov jump systems, stability and observability [6–11] and robust control [12–15] have been widely investigated. Recently, stabilization and H∞ control problems for nonlinear systems have become a hot research topic [16–22]. It should be pointed out that most of the aforementioned research on Markov jump systems assumes that the Markov chain takes values in a finite set. However, Markov jump systems with infinite-state chains can describe more plants in many real scenarios [23, 24]. Therefore, infinite Markov jump systems deserve our consideration. Recently, some papers on stability [25–27] and control problems [28, 29] of linear infinite Markov jump systems have appeared. To be specific, an infinite horizon controller was obtained from four coupled algebraic Riccati equations in [30]. Nevertheless, to the best of our knowledge, the H∞ control problem for a class of nonlinear stochastic systems with infinite Markov jumps is still unsolved, let alone the case of (x, u, v)-dependent noise. This situation motivates us to carry out the present research.
This paper is concerned with the infinite horizon H∞ control problem for a class of nonlinear stochastic systems with infinite Markov jumps and (x, u, v)-dependent noise. The rest of the paper is organized as follows. Section 2 provides some useful definitions and lemmas. In Section 3, based on the generalized Itô-type formula and the technique of completing squares, an exponentially mean square stabilizing H∞ controller is designed in terms of a set of coupled HJIs, and a numerical example is provided to illustrate the applicability of the proposed design approach. Conclusions are drawn in Section 4.
Next, we adopt the following notation. ℝ denotes the set of all real numbers and ℝ₊ is the set of all nonnegative real numbers. ℝⁿ and ℝ^{m×n} stand for the n-dimensional real vector space and the vector space of all m×n real matrices, respectively. For a matrix A, Aᵀ represents the transpose, and A ≥ 0 (A > 0) denotes a positive semidefinite (definite) symmetric matrix. We also use S_n and I_n for the set of all n×n symmetric matrices and the n×n identity matrix, respectively. ‖A‖ is the operator norm of A and |x| is the Euclidean norm of x ∈ ℝⁿ. By L²_F(ℝ₊, ℝⁿ) we denote the space of ℝⁿ-valued, square integrable, F_t-measurable processes y(·) satisfying E∫₀^∞ |y(t)|² dt < ∞. The class of functions V(x, i) which are twice continuously differentiable with respect to x, except possibly at the point x = 0, will be denoted by C²(ℝⁿ \ {0}).
Consider the following stochastic nonlinear system (1) with infinite Markov jumps, where x(t), v(t), u(t), and z(t) stand for the system state, exogenous disturbance, control input, and measurement output, respectively. w(t) is a standard one-dimensional Brownian motion on a complete probability space (Ω, F, P). Assume that F_t is generated by w(s) and r_s for 0 ≤ s ≤ t together with N, where N denotes the totality of P-null sets, and that the σ-algebras generated by w(·) and r(·) are mutually independent. We denote by r_t the right continuous, homogeneous Markov process on (Ω, F, P) taking values in the countably infinite state space D = {1, 2, 3, ...} with generator Π = (π_ij), where π_ij ≥ 0, i ≠ j, is the transition rate from mode i at time t to mode j at time t + Δ, and π_ii = −Σ_{j≠i} π_ij for all i ∈ D. Suppose that the coefficients of system (1) satisfy the local Lipschitz condition and the linear growth condition for any i ∈ D, which guarantee that system (1) has a unique strong solution [13, 31]. Moreover, assume .
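Although the paper treats the jump process analytically, a chain of the kind just described can be sampled numerically by the standard jump-chain/holding-time construction. The sketch below is not part of the paper; the birth-death transition rates at the bottom and all function names are illustrative choices of ours.

```python
import random

def simulate_ctmc(rates, r0, t_end, rng=random.Random(0)):
    """Sample a continuous-time Markov chain path on [0, t_end] via the
    jump-chain / holding-time construction.

    rates(i) returns a dict {j: pi_ij} of transition rates out of mode i
    (j != i); the diagonal entry pi_ii = -sum_j pi_ij is implicit.
    Returns the list of (jump_time, mode) pairs, starting with (0, r0).
    """
    t, mode, path = 0.0, r0, [(0.0, r0)]
    while True:
        out = rates(mode)
        total = sum(out.values())
        if total == 0.0:               # absorbing mode: no more jumps
            break
        t += rng.expovariate(total)    # holding time ~ Exp(total rate)
        if t >= t_end:
            break
        u, acc = rng.random() * total, 0.0
        for j, q in out.items():       # next mode with prob pi_ij / total
            acc += q
            if u <= acc:
                mode = j
                break
        path.append((t, mode))
    return path

# illustrative birth-death rates on the infinite state space {0, 1, 2, ...}
path = simulate_ctmc(lambda i: {i + 1: 2.0, **({i - 1: 1.0} if i > 0 else {})},
                     r0=0, t_end=5.0)
```

Because each mode only needs its own outgoing rates on demand, the countably infinite state space poses no difficulty: no truncation of the rate matrix is required for simulation.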
Denote the Banach space of all sequences with the norm. Likewise, define another Banach space with the norm. Assume that all coefficients of the considered systems have a finite norm. If , will be simplified as , and so does . When and , () is written as (resp., ). For , implies that . Therefore, we have .
For each mode i ∈ D, the infinitesimal operator associated with system (1) is defined as follows [28, 31]:
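The infinitesimal operator combines a diffusion part with a jump term weighted by the transition rates π_ij. As a sanity check, it can be evaluated symbolically for a scalar system and a two-mode truncation of the chain. All data below (the candidates V, the drift f, the diffusion h, and the rate matrix) are illustrative choices of ours, not the paper's.

```python
import sympy as sp

x = sp.symbols('x', real=True)

def generator(V, f, h, pi, i):
    """Infinitesimal operator of a scalar jump diffusion
        dx = f(x, r) dt + h(x, r) dw:
        (LV)(x, i) = V_x(x, i) f(x, i) + (1/2) h(x, i)^2 V_xx(x, i)
                     + sum_j pi[i][j] V(x, j).
    V, f, h map mode -> sympy expression; pi is the rate matrix with
    rows summing to zero."""
    jump = sum(pi[i][j] * V[j] for j in pi[i])
    return sp.simplify(sp.diff(V[i], x) * f[i]
                       + sp.Rational(1, 2) * h[i] ** 2 * sp.diff(V[i], x, 2)
                       + jump)

# quadratic Lyapunov candidates V(x, i) = p_i x^2 on a two-mode truncation
V = {0: 2 * x**2, 1: x**2}
f = {0: -x, 1: -3 * x}
h = {0: x / 2, 1: x}
pi = {0: {0: -1, 1: 1}, 1: {0: 1, 1: -1}}

lv0 = generator(V, f, h, pi, 0)   # -> -9*x**2/2
lv1 = generator(V, f, h, pi, 1)   # -> -4*x**2
```

Both values are negative definite in x, which is the kind of Lyapunov-type certificate that the stability conditions appearing later require.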
To study the infinite horizon nonlinear stochastic H∞ control problem, an internal stability requirement is needed; thus we introduce the following definition.
Definition 1 (see ). The unforced stochastic system (4) with infinite Markov jumps is called exponentially mean square stable (EMSS) if there exist constants β ≥ 1 and α > 0 such that E|x(t)|² ≤ β e^{−αt} |x₀|² for all t ≥ 0 and i ∈ D.
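The EMSS property can be probed by Monte Carlo: simulate the unforced dynamics and watch the sample second moment decay in time. The sketch below uses an illustrative scalar linear jump diffusion with a two-mode chain (a crude truncation of the infinite chain) under Euler-Maruyama discretization; all parameters and names are ours, not the paper's.

```python
import math, random

def second_moment(a, sigma, q, x0, T, dt=0.005, n_paths=500,
                  rng=random.Random(1)):
    """Monte Carlo estimate of E|x(T)|^2 for the scalar jump SDE
        dx = a[r] x dt + sigma[r] x dw,
    where the mode r switches between 0 and 1, each at rate q
    (two-mode truncation; Euler-Maruyama in time)."""
    acc = 0.0
    for _ in range(n_paths):
        x, r = x0, 0
        for _ in range(int(T / dt)):
            if rng.random() < q * dt:           # mode switch
                r = 1 - r
            dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
            x += a[r] * x * dt + sigma[r] * x * dw
        acc += x * x
    return acc / n_paths

# both modes satisfy 2a + sigma^2 < 0, so E|x(t)|^2 should decay
m1 = second_moment(a=(-1.0, -2.0), sigma=(0.3, 0.2), q=1.0, x0=1.0, T=1.0)
m2 = second_moment(a=(-1.0, -2.0), sigma=(0.3, 0.2), q=1.0, x0=1.0, T=2.0)
```

For these parameters the estimate at T = 2 is markedly smaller than at T = 1, consistent with an exponential envelope β e^{−αt}|x₀|².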
Definition 2. For a given γ > 0, the control u* is said to be an infinite horizon H∞ control of system (1) if:
(i) u* stabilizes system (1) internally; i.e., when v = 0 and u = u*, the trajectory of system (1) with any initial value is EMSS;
(ii) for any nonzero v ∈ L²_F(ℝ₊, ℝ^{n_v}) and zero initial state x₀ = 0, we always have ‖z‖ ≤ γ‖v‖. (6)
Remark 3. If (6) holds, it is easy to verify that (6) is equivalent to ‖L̃‖ ≤ γ, where the perturbation operator L̃ : L²_F(ℝ₊, ℝ^{n_v}) → L²_F(ℝ₊, ℝ^{n_z}), L̃(v) = z, is defined subject to system (1) with x₀ = 0.
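For intuition about the operator norm bound in Remark 3, consider the deterministic scalar special case dx = (a x + b v) dt, z = c x with a < 0: there the perturbation operator's L²-gain is the H∞ norm of the transfer function c·b/(s − a), which can be evaluated on a frequency grid. This is only an illustrative analogue of ours, not the stochastic setting of the paper.

```python
import math

def l2_gain_scalar(a, b, c, w_max=100.0, n=10001):
    """L2-gain (H-infinity norm) of the scalar transfer function
    G(s) = c*b / (s - a), a < 0, estimated as the supremum of
    |G(i w)| = |c*b| / sqrt(w^2 + a^2) over a frequency grid.
    For a < 0 the supremum sits at w = 0 and equals |c*b| / |a|."""
    return max(abs(c * b) / math.hypot(w_max * k / (n - 1), a)
               for k in range(n))

g = l2_gain_scalar(a=-2.0, b=1.0, c=1.0)   # -> 0.5
# the attenuation requirement ||z|| <= gamma * ||v|| holds iff g <= gamma
```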
We now provide some lemmas that are necessary for deriving our main results.
Lemma 4 (see ). For , , provided exists, we have
The following lemma generalizes Theorem 5.8 in  and Corollary 3.2.3 in  to infinite Markov jump systems and nonlinear systems, respectively. Its proof follows by analogous arguments.
Lemma 5. Assume that there exist a set of positive functions and positive constants such that

and

hold for all and . Then system (4) is EMSS.
3. Infinite Horizon Nonlinear Stochastic H∞ Control
In this section, we derive a sufficient condition for the solvability of the infinite horizon nonlinear stochastic H∞ control problem of system (1).
Theorem 6. For a given disturbance attenuation level γ > 0, if there exist a set of positive functions , , and for all nonzero , with the properties of

for some positive constants, such that solves the coupled HJIs (12),

then is an infinite horizon H∞ control of system (1).
Proof. We first verify that (6) holds. For any T > 0 and initial state x₀ = 0, the generalized Itô-type formula  and (3) yield

where

Invoking , , we deduce that

which shows that

Hence,

where

Applying Lemma 4 to and , we conclude that

where

Recalling (12) and substituting (20) and (21) into (18) yields

In view of (12), for , if we choose , then it follows from (24) that

Taking the limit as T → ∞ in the above, (6) follows by Definition 2.
Next, it remains to show that when v = 0 and u = u*, the trajectory of system (1) with any initial value is EMSS. To this end, for , let be the infinitesimal operator of system (1) with v = 0, u = u*; then

where

and

By direct calculations, one obtains that

and

Substituting (29) and (30) into (16) and taking into account (11) and (12), we have

By Lemma 5, it follows that system (1) with v = 0 is EMSS. The proof is complete.
Remark 7. It should be pointed out that, in Theorem 6, if we take in (12), then system (1) is internally stable (globally asymptotically stable in probability) even without condition (11). In that case, the controller defined in (13) is still an H∞ controller.
Below, we give an example to show the effectiveness of the design method developed above.
Example 8. Consider a one-dimensional stochastic nonlinear system with infinite Markov jumps and the parameters as follows:

Let N(t) be a Poisson process with parameter λ > 0. It is obvious that N(t) is a homogeneous Markov process with a countably infinite state space, and its infinitesimal matrix is given by π_{i,i+1} = λ and π_{ii} = −λ, with all other entries zero.
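A Poisson process of this kind is straightforward to sample: with rates π_{i,i+1} = λ, the holding time in each state is Exp(λ) and the chain always moves up by one. A minimal sketch (names and seed ours):

```python
import random

def poisson_jump_times(lam, t_end, rng=random.Random(2)):
    """Jump times of a Poisson process with rate lam on [0, t_end],
    viewed as a Markov chain on {0, 1, 2, ...} whose only transitions
    are i -> i + 1, each at rate lam (pi_{i,i+1} = lam, pi_ii = -lam).
    Holding times are i.i.d. Exp(lam)."""
    t, jumps = 0.0, []
    while True:
        t += rng.expovariate(lam)
        if t > t_end:
            return jumps
        jumps.append(t)

jumps = poisson_jump_times(lam=1.0, t_end=50.0)   # N(t) = #{jumps <= t}
```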
Assume the disturbance attenuation level and . Then, setting , we solve the coupled HJIs (12); it is easy to verify that the conditions of Theorem 6 are satisfied. Thus, via Theorem 6, we have

So the controller is
With the given initial conditions and the exogenous disturbance sin(·), Figure 1 shows the state response.
4. Conclusion
For a class of nonlinear stochastic systems with infinite Markov jumps and (x, u, v)-dependent noise, a sufficient condition for the infinite horizon H∞ control problem has been obtained in terms of coupled HJIs, and the effectiveness of the proposed design method has been demonstrated by a numerical example. Further research directions include the investigation of H2/H∞ control and filtering problems for nonlinear infinite Markov jump systems.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (no. 61673013), the Natural Science Foundation of Shandong Province (no. ZR2016JL022), and the SDUST Research Fund (no. 2015TDJH105).
References
1. T. Basar and P. Bernhard, H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach, Birkhäuser, Boston, Mass, USA, 2nd edition, 1995.
2. G. Zames, "Feedback and optimal sensitivity: model reference transformations, multiplicative seminorms, and approximate inverses," IEEE Transactions on Automatic Control, vol. 26, no. 2, pp. 301–320, 1981.
3. D. Hinrichsen and A. J. Pritchard, "Stochastic H∞," SIAM Journal on Control and Optimization, vol. 36, no. 5, pp. 1504–1538, 1998.
4. W. Zhang, B. Chen, and C. Tseng, "Robust H∞ filtering for nonlinear stochastic systems," IEEE Transactions on Signal Processing, vol. 53, no. 2, part 1, pp. 589–598, 2005.
5. W. Zhang and B. Chen, "State feedback H∞ control for a class of nonlinear stochastic systems," SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 1973–1991, 2006.
6. Y. Fang and K. A. Loparo, "Stochastic stability of jump linear systems," IEEE Transactions on Automatic Control, vol. 47, no. 7, pp. 1204–1208, 2002.
7. N. Xiao, L. Xie, and M. Fu, "Stabilization of Markov jump linear systems using quantized state feedback," Automatica, vol. 46, no. 10, pp. 1696–1702, 2010.
8. C. Tan and W. H. Zhang, "On observability and detectability of continuous-time stochastic Markov jump systems," Journal of Systems Science & Complexity, vol. 28, no. 4, pp. 830–847, 2015.
9. S. Zhou, X. Liu, B. Chen, and H. Liu, "Stability analysis for a class of discrete-time nonhomogeneous Markov jump systems with multiplicative noises," Complexity, vol. 2018, Article ID 1586846, 9 pages, 2018.
10. J. Xiong, J. Lam, Z. Shu, and X. Mao, "Stability analysis of continuous-time switched systems with a random switching signal," IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 180–186, 2014.
11. L. Zhang, Y. Leng, and P. Colaneri, "Stability and stabilization of discrete-time semi-Markov jump linear systems via semi-Markov kernel approach," IEEE Transactions on Automatic Control, vol. 61, no. 2, pp. 503–508, 2016.
12. V. Dragan, T. Morozan, and A. M. Stoica, Mathematical Methods in Robust Control of Discrete-Time Linear Stochastic Systems, Springer, New York, NY, USA, 2010.
13. V. Dragan, T. Morozan, and A.-M. Stoica, Mathematical Methods in Robust Control of Linear Stochastic Systems, Springer, New York, NY, USA, 2nd edition, 2013.
14. T. Hou, W. Zhang, and H. Ma, "Finite horizon H2/H∞ control for discrete-time stochastic systems with Markovian jumps and multiplicative noise," IEEE Transactions on Automatic Control, vol. 55, no. 5, pp. 1185–1191, 2010.
15. Y. Zhao and W. Zhang, "Observer-based H∞ controller design for singular stochastic Markov jump systems with state dependent noise," Journal of Systems Science & Complexity, vol. 29, no. 4, pp. 946–958, 2016.
16. Z. Wu, S. Wang, and M. Cui, "Tracking controller design for random nonlinear benchmark system," Journal of the Franklin Institute, vol. 354, no. 1, pp. 360–371, 2017.
17. Z. Wu, "Stability criteria of random nonlinear systems and their applications," IEEE Transactions on Automatic Control, vol. 60, no. 4, pp. 1038–1049, 2015.
18. X. Xie and M. Jiang, "Output feedback stabilization of stochastic feedforward nonlinear time-delay systems with unknown output function," International Journal of Robust and Nonlinear Control, vol. 28, no. 1, pp. 266–280, 2018.
19. X. Xie, N. Duan, and C. Zhao, "A combined homogeneous domination and sign function approach to output-feedback stabilization of stochastic high-order nonlinear systems," IEEE Transactions on Automatic Control, vol. 59, no. 5, pp. 1303–1309, 2014.
20. M. Gao, L. Sheng, and W. Zhang, "Stochastic H2/H∞ control of nonlinear systems with time-delay and state-dependent noise," Applied Mathematics and Computation, vol. 266, pp. 429–440, 2015.
21. M. Cui, L. Geng, and Z. Wu, "Random modeling and control of nonlinear active suspension," Mathematical Problems in Engineering, vol. 2017, Article ID 4045796, 8 pages, 2017.
22. Y. Wang, Z. Pan, Y. Li, and W. Zhang, "H∞ control for nonlinear stochastic Markov systems with time-delay and multiplicative noise," Journal of Systems Science & Complexity, vol. 30, no. 6, pp. 1293–1315, 2017.
23. O. L. V. Costa and M. D. Fragoso, "Discrete-time LQ-optimal control problems for infinite Markov jump parameter systems," IEEE Transactions on Automatic Control, vol. 40, no. 12, pp. 2076–2088, 1995.
24. O. L. Costa and D. Z. Figueiredo, "Stochastic stability of jump discrete-time linear systems with Markov chain in a general Borel space," IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 223–227, 2014.
25. M. D. Fragoso and J. Baczynski, "Stochastic versus mean square stability in continuous time linear infinite Markov jump parameter systems," Stochastic Analysis and Applications, vol. 20, no. 2, pp. 347–356, 2002.
26. T. Hou and H. Ma, "Exponential stability for discrete-time infinite Markov jump systems," IEEE Transactions on Automatic Control, vol. 61, no. 12, pp. 4241–4246, 2016.
27. R. Song and Q. Zhu, "Stability of linear stochastic delay differential equations with infinite Markovian switchings," International Journal of Robust and Nonlinear Control, vol. 28, no. 3, pp. 825–837, 2018.
28. V. M. Ungureanu, "Optimal control for infinite dimensional stochastic differential equations with infinite Markov jumps and multiplicative noise," Journal of Mathematical Analysis and Applications, vol. 417, no. 2, pp. 694–718, 2014.
29. Y. Liu and T. Hou, "LQ optimal control for stochastic systems with infinite Markovian jumps," in Proceedings of the 2017 Chinese Automation Congress (CAC), pp. 7107–7111, Jinan, China, October 2017.
30. Y. Liu, T. Hou, and X. Bai, "Infinite horizon H2/H∞ optimal control for discrete-time infinite Markov jump systems with (x,u,v)-dependent noise," in Proceedings of the 36th Chinese Control Conference, pp. 1955–1960, 2017.
31. X. Mao and C. Yuan, Stochastic Differential Equations with Markovian Switching, Imperial College Press, London, UK, 2006.
32. W. Zhang, B. Chen, H. Tang, L. Sheng, and M. Gao, "Some remarks on general nonlinear stochastic H∞ control with state, control, and disturbance-dependent noise," IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 237–242, 2014.