Abstract

This paper addresses the robust stabilization problem for a class of stochastic Markovian jump systems with distributed delays. The systems under consideration involve Brownian motion, Markov chains, distributed delays, and parameter uncertainties. By constructing an appropriate Lyapunov–Krasovskii functional, a novel delay-dependent stabilization criterion for the stochastic Markovian jump systems is derived in terms of linear matrix inequalities. When the given linear matrix inequalities are feasible, an explicit expression of the desired state feedback controller is provided. The controller designed on the basis of the obtained criterion ensures that the resulting closed-loop system is asymptotically stable in the mean square sense. An adjustable parameter in the controller greatly enhances the flexibility of the design. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed theory.

1. Introduction

Stochastic Markovian jump systems (SMJSs) driven by continuous-time Markov chains have played an important role in many branches of science and industry. SMJSs can be used to model many practical dynamical systems that experience abrupt changes in their structure and parameters, such as failure-prone manufacturing systems, power systems, and economic systems. In the past decades, the stability and control problems of SMJSs have received a great deal of attention and many results have been reported in [1–9]. In particular, SMJSs play an important role in the field of control, for example in adaptive tracking control: designing an adaptive tracking controller based on feedback for a Markov jump nonlinear time-delay system can effectively improve the output tracking performance, which is of great practical and theoretical significance for robotic manipulator systems and aircraft trajectory tracking systems.

On the other hand, time delays are often encountered in many industrial and physical systems, and they are a major source of system instability, oscillation, and poor performance. Consequently, there has been an increasing interest in the stabilization of this class of systems [10–13]. Recently, distributed-delay systems, a particular class of time-delay systems, have also drawn much research interest [14–18]. This is mainly because distributed delays are often used to model time-lag phenomena in thermodynamics, ecology, and epidemiology, for example in predator-prey systems. In [19], the state feedback control problem for a class of discrete-time stochastic systems with distributed delays was considered by using a linear matrix inequality approach. In [20], Zhu and Cao studied the adaptive synchronization problem for a class of stochastic neural networks with time-varying delays and distributed delays.

Recently, Yan et al. investigated the guaranteed cost stabilization problem for a class of stochastic Markovian jump systems with incomplete transition rates: new design methods for state and output feedback finite-time guaranteed cost controllers were proposed, and a new N-mode optimization algorithm was given to minimize the upper bound of the cost function [21]. In [22], the robust finite-time control problem for a class of uncertain singular stochastic Markovian jump systems with partially unknown transition rates was solved via a proportional-differential control law. New sufficient conditions for the existence of the desired mode-dependent controllers were derived in terms of linear matrix inequalities, and the designed proportional-differential controller ensures finite-time stability in the mean square sense and satisfies a prescribed performance level of the resulting closed-loop system for all admissible uncertainties. Zhu and Yang investigated fault-tolerant control problems for stochastic Markovian jump systems with actuator faults, including loss of effectiveness, stuck faults, and outage [23]. Le Van Hien and Hieu Trinh studied stability analysis problems for two-dimensional Markovian jump state-delayed systems in the Roesser model with uncertain transition probabilities [24]. In addition to the above research on Markovian jump systems with time delays, there is a large body of literature on the stability and control of other systems that can serve as a reference for this paper. In [25], Na et al. proposed an adaptive fuzzy control scheme to handle the input delay of a nonlinear suspension system. Jin et al. studied the problem of robot movement in complex environments, using artificial intelligence to solve robot state control and tracking problems [26]. In [27], Zhu and Zheng studied the asymptotic stability analysis and PWA state feedback control design for a class of discrete nonlinear systems by using a smooth approximation technique. Xue-Bo Jin et al. proposed a method that jointly handles dynamic model updating and state fusion estimation to achieve real-time indoor RFID tracking [28]. In [29], Liu et al. designed a constrained generalized predictive current controller to regulate the charging current so as to keep the internal temperature of the battery within an ideal range. In [30], Wang and Na proposed a new adaptive parameter estimation method for nonlinear servo mechanisms with friction compensation, combined with a robust integral term to handle bounded disturbances, which improved the accuracy. Stochastic Markovian jump systems are widely used in various fields, and a large number of applications and research examples have been reported. In [31], Zhang et al. designed filters for a class of linear Markov jump systems with time-varying delays, and the designed filter ensures the stochastic stability of the filtering error system. In [32], Wu et al. designed a full-order filter for the filtering problem of 2D discrete Markov jump systems, which guarantees that the filtering error system is mean square asymptotically stable. However, to the best of the authors' knowledge, few works have addressed the robust control of stochastic Markovian jump systems with time-varying delays and distributed delays; the robust control problems of such systems have not been fully investigated, and there is still room for further investigation.

This paper focuses on the robust control of stochastic Markovian jump systems with distributed delays. A new delay-dependent stabilization criterion for stochastic Markovian jump systems with distributed delays is established in terms of linear matrix inequalities (LMIs). When these LMIs are feasible, an explicit expression of the desired state feedback controller, which contains an adjustable positive parameter, is given. The designed controller guarantees that the resulting closed-loop system is mean square asymptotically stable for all admissible uncertainties and distributed delays.

Notation: the notation used throughout this paper is fairly standard. ℝⁿ denotes the n-dimensional Euclidean space and ℝ^{m×n} is the set of all real m × n matrices. The notation P > 0 denotes that P is a symmetric positive definite matrix. The superscripts “T” and “−1” represent the transpose and the inverse of a matrix, respectively. The notation “I” denotes an identity matrix, and the notation “∗” denotes the transposed entries in the symmetric positions of a symmetric matrix. The notation diag{·} stands for a block-diagonal matrix.

2. Model Formulation and Preliminaries

Consider the stochastic Markovian jump system with time-varying delays and distributed delays given in (1), where x(t) ∈ ℝⁿ is the state vector and u(t) ∈ ℝᵐ is the control input vector. The nominal system matrices in (1) are known constant matrices with appropriate dimensions, while the uncertainty terms are norm-bounded real-valued matrix functions satisfying the structural assumption (2), in which the structure matrices are known matrices of appropriate dimensions and F(t) is an unknown matrix function satisfying F^T(t)F(t) ≤ I. Moreover, {r_t, t ≥ 0} denotes a right-continuous Markov process on the probability space taking values in a finite set S = {1, 2, …, N} with generator Π = (π_ij)_{N×N} given by (3), where Δ > 0 and o(Δ)/Δ → 0 as Δ → 0. Here, π_ij ≥ 0 is the transition rate from state i to state j if i ≠ j, and π_ii = −∑_{j≠i} π_ij. The delay τ(t) is a time-varying differentiable function that satisfies 0 ≤ τ(t) ≤ τ_M and dτ(t)/dt ≤ μ, where τ_M and μ are constants. The initial condition is a continuous function defined on the delay interval and taking values in ℝⁿ.
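Since the displayed equations (1)–(3) are not reproduced above, the following is only a representative sketch of the class of systems described, under assumed names for the system matrices, an assumed constant distributed-delay length h, and a scalar Brownian motion ω(t); the actual model in the paper may differ in its exact terms:

\[
\begin{aligned}
dx(t) ={} & \big[(A(r_t)+\Delta A(r_t,t))x(t) + (A_d(r_t)+\Delta A_d(r_t,t))x(t-\tau(t)) \\
& \;\; + (A_h(r_t)+\Delta A_h(r_t,t))\int_{t-h}^{t}x(s)\,ds + B(r_t)u(t)\big]\,dt \\
& + \big[(C(r_t)+\Delta C(r_t,t))x(t) + (D(r_t)+\Delta D(r_t,t))x(t-\tau(t))\big]\,d\omega(t),
\end{aligned}
\]

with the norm-bounded uncertainties factored as

\[
\big[\Delta A \;\; \Delta A_d \;\; \Delta A_h \;\; \Delta C \;\; \Delta D\big](r_t,t)
= M(r_t)\,F(t)\,\big[N_1 \;\; N_2 \;\; N_3 \;\; N_4 \;\; N_5\big](r_t),
\qquad F^{T}(t)F(t)\le I,
\]

and the generator of the Markov chain defined in the usual way by

\[
\Pr\{r_{t+\Delta}=j \mid r_t=i\}=
\begin{cases}
\pi_{ij}\Delta + o(\Delta), & i \ne j,\\
1+\pi_{ii}\Delta + o(\Delta), & i = j.
\end{cases}
\]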

The purpose of this paper is to design a state feedback controller of the form (4) such that system (1) is asymptotically stable in the mean square sense, where the controller contains an adjustable positive parameter.
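The displayed controller expression (4) is likewise not reproduced; a form consistent with the description, namely a mode-dependent state feedback gain K(r_t) scaled by an adjustable positive parameter ε (both symbols are assumptions of this sketch), would be

\[
u(t) = \varepsilon\, K(r_t)\, x(t), \qquad \varepsilon > 0 .
\]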

For the sake of simplicity, a matrix that depends on the Markov chain, for example A(r_t), is written as A_i whenever r_t = i.

Let us introduce the following lemmas which are essential in establishing our main result.

Lemma 1 (see [33]). For real matrices of appropriate dimensions, where the uncertain factor F satisfies F^T F ≤ I, and for any scalar ε > 0, the following inequalities hold:
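The displayed inequalities of Lemma 1 are not reproduced; a commonly used bounding lemma of this type (the exact statement in [33] may differ) reads as follows: for real matrices D, S, and F of appropriate dimensions with F^T F ≤ I and any scalar ε > 0,

\[
DFS + S^{T}F^{T}D^{T} \le \varepsilon\, DD^{T} + \varepsilon^{-1} S^{T}S .
\]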

Lemma 2 (see [34]). For real matrices of appropriate dimensions, where the uncertain factor F satisfies F^T F ≤ I, and for a scalar ε > 0, if the associated positive definiteness condition holds, then:
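The condition and conclusion of Lemma 2 are not reproduced either; one frequently cited variant of this kind of result (again, the statement in [34] may differ) is the following: for real matrices A, D, E, and F with F^T F ≤ I, a matrix P > 0, and a scalar ε > 0 such that P^{-1} − εDD^T > 0,

\[
(A + DFE)^{T}P(A + DFE) \le A^{T}\big(P^{-1} - \varepsilon DD^{T}\big)^{-1}A + \varepsilon^{-1}E^{T}E .
\]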

Lemma 3 (see [35]). For a given matrix M > 0, if there is a vector function ω: [a, b] → ℝⁿ such that the integrals ∫_a^b ω(s) ds and ∫_a^b ω^T(s)Mω(s) ds are well defined, then the following inequality holds:
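The displayed inequality of Lemma 3 is the standard Jensen-type integral inequality, which in the notation above reads

\[
\left(\int_{a}^{b}\omega(s)\,ds\right)^{T} M \left(\int_{a}^{b}\omega(s)\,ds\right) \le (b-a)\int_{a}^{b}\omega^{T}(s)\,M\,\omega(s)\,ds .
\]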

3. Main Results

In this section, we address the robust stabilization problem for a class of stochastic Markovian jump systems with distributed delays.

Theorem 1. Consider the stochastic Markovian jump system with distributed time delays (1). For a given positive scalar, if there exist matrices of appropriate dimensions and positive scalars such that the LMI (8), with its block entries defined accordingly, holds, and the state feedback controller is chosen as (4), then the resulting closed-loop system is mean square asymptotically stable.

Proof. To prove this theorem, let us introduce an appropriate Lyapunov–Krasovskii functional candidate composed of quadratic and integral terms. Applying the generalized Itô formula along the trajectory of system (1), and bounding the resulting terms by means of Lemma 2, we obtain inequalities (12)–(14). Combining inequalities (12)–(14), it follows that (15) holds, with the corresponding matrix terms defined accordingly. From Lemmas 1 and 2, we arrive at the inequalities in (17). Noting expressions (14), (15), and (17), we can obtain a sufficient condition for mean square asymptotic stability in the form of (18).
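The functional candidate and its components are not reproduced above; as an illustrative sketch only, a typical Lyapunov–Krasovskii candidate for a Markovian jump system with a time-varying delay and a distributed delay (the matrices P_i > 0, Q > 0, R > 0, Z > 0 and the distributed-delay length h are assumptions of this sketch) has the structure

\[
V(x_t,t,i) = x^{T}(t)P_i x(t) + \int_{t-\tau(t)}^{t} x^{T}(s)Qx(s)\,ds
+ \int_{-\tau_M}^{0}\int_{t+\theta}^{t} x^{T}(s)Rx(s)\,ds\,d\theta
+ \int_{-h}^{0}\int_{t+\theta}^{t} x^{T}(s)Zx(s)\,ds\,d\theta .
\]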

Premultiplying and postmultiplying (18) by suitable block-diagonal matrices and using the Schur complement, one can obtain the desired result; the theorem is thus proved.
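For completeness, the Schur complement equivalence invoked in this last step is the standard fact that, for symmetric matrices Q = Q^T and R = R^T,

\[
\begin{bmatrix} Q & S \\ S^{T} & R \end{bmatrix} < 0
\quad\Longleftrightarrow\quad
R < 0 \ \text{ and } \ Q - S R^{-1} S^{T} < 0 .
\]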

4. A Numerical Example

In this section, we will give a numerical example to illustrate the effectiveness of the proposed theory in this paper.

Consider the stochastic Markovian jump system with distributed delays (1) with two modes and the following parameters:

Using the MATLAB toolbox to solve the linear matrix inequality (8), we can obtain the state feedback gain matrices for the two modes as
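The computed gain matrices themselves are not reproduced above. As an illustration of how such gains can be obtained numerically, the following is a minimal sketch (written in Python with CVXPY rather than the MATLAB toolbox used in the paper) that solves a simplified, delay-free stabilization LMI for each mode and recovers the gains as K = Y X^{-1}; the numerical data and the LMI used here are placeholders and are not the paper's example or condition (8):

import numpy as np
import cvxpy as cp

# Placeholder two-mode data; these are NOT the matrices of the paper's example.
A = [np.array([[-3.0, 1.0], [0.5, -2.0]]),
     np.array([[-2.5, 0.3], [0.2, -3.5]])]
B = [np.array([[1.0], [0.5]]),
     np.array([[0.8], [1.0]])]
n, m = 2, 1

gains = []
for Ai, Bi in zip(A, B):
    X = cp.Variable((n, n), symmetric=True)   # X plays the role of P^{-1}
    Y = cp.Variable((m, n))                   # Y = K X, so K = Y X^{-1}
    S = cp.Variable((n, n), symmetric=True)   # slack expressing strict negativity
    # Simplified, delay-free stabilization LMI for one mode:
    #   A X + X A^T + B Y + Y^T B^T = -S,  with X > 0 and S > 0.
    constraints = [
        X >> 1e-6 * np.eye(n),
        S >> 1e-6 * np.eye(n),
        Ai @ X + X @ Ai.T + Bi @ Y + Y.T @ Bi.T + S == 0,
    ]
    cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)
    gains.append(Y.value @ np.linalg.inv(X.value))

for i, K in enumerate(gains, start=1):
    print(f"K{i} =\n{K}")

The change of variables X = P^{-1}, Y = KX used here is a standard linearization trick in LMI-based controller design.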

The obtained state feedback gains for the two modes satisfy the conditions of Theorem 1, which are derived from Lyapunov's second method (the asymptotic stability theorem); hence, the designed state feedback controller meets the requirement of mean square asymptotic stability of the closed-loop system.

According to the results, the state trajectories of two different states are drawn by MATLAB, as shown in Figures 1 and 2, respectively.

From the two figures, it can be seen that, under the action of the designed controller, both state trajectories become asymptotically stable over time and finally approach zero.

According to the above derivation and the state trajectory curves, the resulting closed-loop system is mean square asymptotically stable, which demonstrates the validity of the developed theory.

5. Conclusions

In this paper, the robust control problem for a class of stochastic Markovian jump systems with distributed delays has been investigated. A new delay-dependent sufficient condition for stabilization has been proposed. The designed state feedback controller guarantees that the resulting closed-loop system is mean square asymptotically stable for all admissible uncertainties and distributed delays.

The results of this paper can be applied to power systems, chemical systems, networked intelligent systems, aircraft systems, and so on, and therefore provide a useful reference for related fields.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by the Natural Science Foundation of Shandong Province (ZR2017MF048), National Natural Science Foundation of China (Grant no. 71801144), Key Research and Development Project of Shandong Province (Grant no. 2019GGX101008), and Science and Technology Plan for Colleges and Universities of Shandong Province (J17KA214 and J18KB159).