#### Abstract

This paper investigates the stabilization of networked control systems (NCSs) with random delays and random sampling periods. The sampling period randomly switches among three values corresponding to high, medium, and low network load. The sensor-to-controller (*S-C*) random delays and the random sampling periods are modeled as Markov chains whose transition probabilities need not be completely known. A state feedback controller is designed via an iterative linear matrix inequality (LMI) approach. It is shown that the designed controller is two-mode dependent: it depends not only on the current *S-C* delay but also on the most recent sampling period available at the controller node. The resulting closed-loop systems are special discrete-time jump linear systems with two modes. Sufficient conditions for stochastic stability are established. An example of a cart with an inverted pendulum illustrates the effectiveness of the theoretical results.

#### 1. Introduction

Networked control systems (NCSs) are feedback control systems whose feedback paths are implemented by a real-time network. Recently, much attention has been paid to the study of NCSs, due to their low cost, reduced weight and power requirements, simple installation and maintenance, high reliability, and so on [1, 2].

A basic problem in NCSs is the stability of the systems. In real-time control systems, three main issues are bandwidth and packet-size constraints, time delays, and packet losses; these degrade the performance of control systems and can even destabilize them, so it is important to mitigate the adverse effects of time delays, packet losses, and dynamic bandwidth [3–7].

On the other hand, the Markov chain, a discrete-time stochastic process with the Markov property, can effectively model NCSs with random time delays and random sampling periods, yielding Markovian jump linear systems (MJLSs). Recently, there have been considerable research efforts on NCSs [8–12]. For example, Xiao et al. present a *V-K* iteration algorithm to design stabilizing controllers for specially structured discrete-time jump linear systems with random but bounded delays in the feedback loop [8]. Zhang et al. propose a promising two-mode-dependent state feedback scheme to stabilize NCSs, with the current *S-C* delay and the previous controller-to-actuator (*C-A*) delay modeled as two Markov chains [9]. Shi and Yu investigate the output feedback stabilization and robust mixed control of NCSs, in which it is assumed that at each sampling instant the current *S-C* delay and the most recent available *C-A* delay can be obtained [10, 11]. Huang and Nguang discuss the stabilization problem for a class of linear uncertain continuous NCSs, in which the random *S-C* and *C-A* communication delays are modeled as two continuous-time, discrete-state Markov processes [12].

In the above references, the transition probabilities are assumed completely accessible and are treated as available knowledge for analyzing and designing the NCSs. In practice, this kind of information, which reflects the variation of time delays and packet losses, is hard to obtain. The problems of partly unknown transition probabilities were investigated in [13–16]. Zhang et al. investigate the stability of Markovian jump linear systems with partly unknown transition probabilities [13, 14]. Wang et al. study the partially mode-dependent filtering problem for discrete-time Markovian jump systems with partly unknown transition probabilities [15]. Sun and Qin investigate the stability and stabilization problems of a class of NCSs with bounded packet dropout, in which the transition probabilities are partly unknown due to the complexity of the network [16]. However, the controllers developed in [13–16] are only mode independent or one-mode dependent, so the design problem can be readily converted into a standard MJLS problem. To the best of the authors' knowledge, when the transition probabilities are only partly accessible, designing a two-mode-dependent controller that simultaneously depends on both the current *S-C* delay and the most recent available sampling period has not been fully investigated, which is the focus of this work. When both are considered, the resulting closed-loop system can only be transformed into a special MJLS, and thus the well-developed results on MJLSs with partly unknown transition probabilities [13–16] cannot be directly applied.

In this paper, the stochastic stability of NCSs with random time delays and random sampling periods is studied, in which the time delays and sampling periods are driven by two finite-state Markov chains. The paper is organized as follows. In Section 2, the NCS model with random *S-C* time delays and random sampling periods is established; it is equivalent to a class of special discrete-time jump linear systems with two modes. Sufficient and necessary conditions for stochastic stability with completely known transition probabilities are given in Section 3. Sufficient conditions for stochastic stability with partly unknown transition probabilities are given in Section 4. Section 5 presents an illustrative example, and Section 6 summarizes the paper.

Notation. In this paper, $\mathbb{R}$ is the set of all real numbers and $\mathbb{R}^{n}$ denotes the $n$-dimensional Euclidean space. $(\Omega, \mathcal{F}, \mathcal{P})$ denotes the probability space. $A^{T}$ and $A^{-1}$ denote the transpose and the inverse of a matrix $A$, respectively. $P > 0$ ($P < 0$) means that $P$ is positive definite (negative definite). $0$ and $I$ are the zero and identity matrices with appropriate dimensions, respectively. In symmetric block matrices, an asterisk $*$ represents a term that is induced by symmetry.

#### 2. Problem Formulation

The structure of the considered NCSs is shown in Figure 1, where the plant is described by the following linear system model:

$$\dot{x}(t) = Ax(t) + Bu(t), \tag{1}$$

where $x(t)$ and $u(t)$ are the system state vector and control input vector, respectively, and $A$ and $B$ are known constant matrices of appropriate dimensions. Suppose the sensor is clock-driven, while the controller and the actuator are event-driven.

Suppose bounded random delays exist only in the link from sensor to controller, as shown in Figure 1. Here, $\tau_k$ denotes the *S-C* delay at the $k$th sampling instant. The state feedback controller is to be designed.

For NCSs, the shorter the sampling period, the better the system performance; however, a short sampling period increases the possibility of network congestion. If a constant sampling period is adopted, it must be large enough to avoid congestion, so the network bandwidth cannot be fully used when the network is idle. In [17–19], a variable sampling method is used, with the sampling period switching within a finite discrete set; but when the system switches too fast, it is apt to oscillate or become unstable. In an actual network, the size of the sampling period is closely related to the network load, which is usually random [20]. Accordingly, the network load can be classified as high, low, or medium, and the sampling period in this paper correspondingly switches at random among a maximum, a medium, and a minimum value. In the following, we consider sampling periods that randomly switch among these three cases and establish an NCS model with random time delays and random sampling periods. In this way, not only can the network bandwidth be fully used, but the conservativeness of the stabilization conditions of the NCSs can also be reduced.

Suppose $h_k$ is the length of the $k$th sampling period. If the network is idle, define the sampling period as $h_{\min}$; if the network is occupied by the most users, define it as $h_{\max}$; otherwise, define it as $h_{\mathrm{mid}}$. Then the sampling period satisfies $h_k \in \{h_{\min}, h_{\mathrm{mid}}, h_{\max}\}$.

The discrete-time expression of the system (1) is

$$x(k+1) = A_d(h_k)\,x(k) + B_d(h_k)\,u(k), \tag{2}$$

where $A_d(h_k) = e^{A h_k}$ and $B_d(h_k) = \int_0^{h_k} e^{A s}\,ds\, B$.
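The discretized matrices above can be computed numerically for each candidate sampling period. The following sketch (the plant matrices and the sampling period are illustrative assumptions, not the values of the example in Section 5) uses the standard augmented-matrix trick to read both $A_d$ and $B_d$ off a single matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def discretize(A, B, h):
    """Zero-order-hold discretization: Ad = e^{A h}, Bd = (int_0^h e^{A s} ds) B.

    Both matrices are read off one augmented matrix exponential."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Md = expm(M * h)
    return Md[:n, :n], Md[:n, n:]

# Double integrator with an illustrative sampling period h = 0.1 s
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = discretize(A, B, 0.1)
```

Evaluating `discretize` once per value in $\{h_{\min}, h_{\mathrm{mid}}, h_{\max}\}$ gives the three mode-dependent pairs $(A_d, B_d)$.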

In this paper, $\tau_k$ and $h_k$ are modeled as two homogeneous Markov chains defined in $(\Omega, \mathcal{F}, \mathcal{P})$ that take values in $\mathcal{M} = \{0, 1, \ldots, \tau_{\max}\}$ and $\mathcal{N} = \{h_{\min}, h_{\mathrm{mid}}, h_{\max}\}$, with transition probability matrices $\Pi = [\pi_{ij}]$ and $\Lambda = [\lambda_{mn}]$, respectively, where

$$\pi_{ij} = \Pr(\tau_{k+1} = j \mid \tau_k = i), \qquad \lambda_{mn} = \Pr(h_{k+1} = n \mid h_k = m), \tag{3}$$

with $\pi_{ij} \ge 0$, $\lambda_{mn} \ge 0$, $\sum_{j} \pi_{ij} = 1$, and $\sum_{n} \lambda_{mn} = 1$ for all $i, j \in \mathcal{M}$ and $m, n \in \mathcal{N}$.
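For simulation studies, sample paths of the two chains can be generated directly from their transition matrices. A minimal sketch, in which the matrix entries and the three period values are illustrative assumptions:

```python
import numpy as np

# Illustrative transition matrix for the sampling-period chain over
# {h_min, h_mid, h_max}; the entries are assumptions, not from the paper.
LAMBDA = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.3, 0.6]])
PERIODS = [0.05, 0.10, 0.20]   # assumed h_min, h_mid, h_max in seconds

def simulate_chain(P, steps, state0=0, rng=None):
    """Draw a sample path of a homogeneous finite-state Markov chain."""
    rng = rng or np.random.default_rng(0)
    path = [state0]
    for _ in range(steps - 1):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

modes = simulate_chain(LAMBDA, 1000)
periods = [PERIODS[m] for m in modes]
```

The delay chain over $\mathcal{M}$ is sampled the same way from $\Pi$.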

*Remark 1.* In this paper, we assume that the controller always uses the most recent data [8]. Thus, if $x(k - \tau_k)$ is available at sampling instant $k$, then at sampling instant $k+1$, even if there are longer delays or packet losses, $x(k - \tau_k)$ is still available for use. So in our model of the system in Figure 1, the delay can increase by at most 1 at each step, that is, $\tau_{k+1} \le \tau_k + 1$.
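Remark 1 imposes a structural constraint on the delay chain: from $\tau_k = i$ the next delay can only be some $j \le i + 1$, so $\pi_{ij} = 0$ whenever $j > i + 1$. A small check routine (the example matrix is an assumption for illustration):

```python
import numpy as np

def check_delay_structure(PI):
    """Verify that a delay transition matrix obeys Remark 1:
    from tau = i the chain can only move to states j <= i + 1."""
    PI = np.asarray(PI)
    rows_ok = np.allclose(PI.sum(axis=1), 1.0)
    struct_ok = all(PI[i, j] == 0.0
                    for i in range(PI.shape[0])
                    for j in range(PI.shape[1]) if j > i + 1)
    return rows_ok and struct_ok

# Illustrative matrix for tau in {0, 1, 2} (values are assumptions)
PI = np.array([[0.7, 0.3, 0.0],
               [0.4, 0.3, 0.3],
               [0.3, 0.4, 0.3]])
```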

In NCSs, to reduce the conservativeness of the stabilization conditions, it is desirable to consider not only the time delay but also the sampling-period information in the controller design. At time instant $k$, the controller node can obtain $\tau_k$ by comparing the current time with the time stamp of the received sensor information. However, the sampling-period information cannot be received by the controller immediately, because it must be transmitted through the network from sensor to controller. So, in the presence of the delay $\tau_k$, only the sampling period of time instant $k - \tau_k$, namely $h_{k-\tau_k}$, is known at the controller node. Consequently, it is desirable to design a state feedback controller that simultaneously depends on both $\tau_k$ and $h_{k-\tau_k}$. The mode-dependent state feedback control law is

$$u(k) = K(\tau_k, h_{k-\tau_k})\, x(k - \tau_k), \tag{4}$$

where $u(k)$ is the input vector of the state feedback controller and $K(\tau_k, h_{k-\tau_k})$ is the controller gain to be designed.

In the following, we establish the closed-loop system model. Substituting (4) into (2), we obtain the following closed-loop NCSs:

Augment the plant's state variable as $z(k) = \left[x^{T}(k),\, x^{T}(k-1),\, \ldots,\, x^{T}(k-\tau_{\max})\right]^{T}$; then we have the augmented closed-loop system (7), in which the block matrix that selects the delayed state has all elements equal to zero except for the block corresponding to $x(k-\tau_k)$, which is an identity matrix.

*Remark 2.* Because the controller in (4) is two-mode dependent, the resulting closed-loop system in (7) depends on both $\tau_k$ and $h_{k-\tau_k}$ and can only be transformed into a special MJLS. In addition, the controller gain is related to both $\tau_k$ and $h_{k-\tau_k}$.

It can be seen that the closed-loop system in (7) is a jump linear system whose two modes are governed by different homogeneous Markov chains. The objective of this paper is to design the state feedback controller (4) that guarantees the stochastic stability of the NCSs in (7). For stochastic stability, we adopt the following definition [9].

*Definition 3.* The system in (7) is stochastically stable if, for every finite initial state $z_0$ and initial modes $\tau_0$ and $h_0$, there exists a finite matrix $W > 0$ such that the following holds:

$$E\left[\sum_{k=0}^{\infty} \|z(k)\|^{2} \,\Big|\, z_0, \tau_0, h_0\right] < z_0^{T} W z_0.$$
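Definition 3 can be illustrated by Monte Carlo simulation: for a stochastically stable system the expected cumulative quadratic cost stays bounded. A toy scalar sketch with i.i.d. mode switching (a degenerate Markov chain; all numbers are assumptions for illustration only):

```python
import random

# Toy scalar jump system x_{k+1} = a(r_k) x_k with two equiprobable modes.
A_MODES = [0.5, 1.1]   # one contracting and one expanding mode
# E[a^2] = (0.25 + 1.21) / 2 = 0.73 < 1, so the expected cost
# sum_k x_k^2 stays finite even though mode 1 alone is unstable.

def expected_cost(x0=1.0, horizon=200, runs=500, seed=1):
    """Sample-average estimate of E[sum_k x_k^2] from Definition 3."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        x, cost = x0, 0.0
        for _ in range(horizon):
            cost += x * x
            x *= A_MODES[rng.randrange(2)]
        total += cost
    return total / runs

cost = expected_cost()   # should hover near 1 / (1 - 0.73), about 3.7
```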

#### 3. The Case of Completely Known Transition Probabilities

In this section, we first give the sufficient and necessary conditions for the state feedback stabilization of NCSs with completely known transition probabilities in (7) and then derive equivalent conditions in the form of LMIs with nonconvex constraints.

Because $h_{k-\tau_k}$ is to be considered in the controller design, the multistep jump problem of Markov chains is involved in the system analysis. According to the Chapman-Kolmogorov (*C-K*) equation of stochastic processes [21], we give the multistep transition probability as follows.

Lemma 4. *If the one-step transition probability matrix of the sampling-period chain is $\Lambda$, then the transition probability matrix from $h_{k-\tau_k}$ to $h_{k+1-\tau_{k+1}}$ is $\Lambda^{\tau_k - \tau_{k+1} + 1}$, which is a multistep transition probability matrix of the Markov chain. In particular, when $\tau_{k+1} = \tau_k + 1$, the transition probability matrix is $\Lambda^{0} = I$.*

*Proof.* The number of steps from time instant $k - \tau_k$ to $k + 1 - \tau_{k+1}$ is $(k + 1 - \tau_{k+1}) - (k - \tau_k) = \tau_k - \tau_{k+1} + 1$. By the *C-K* equation, the multistep transition probability is obtained by chaining one-step transitions; from matrix multiplication, the transition probability matrix from $h_{k-\tau_k}$ to $h_{k+1-\tau_{k+1}}$ is therefore $\Lambda^{\tau_k - \tau_{k+1} + 1}$.
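In matrix form, Lemma 4 is the familiar fact that the $d$-step transition matrix of a homogeneous chain equals the $d$th power of the one-step matrix, with $d = 0$ giving the identity. A minimal numerical check (the matrix values are assumptions):

```python
import numpy as np

# Illustrative one-step transition matrix of the sampling-period chain
LAMBDA = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.3, 0.6]])

def multistep(P, d):
    """Transition matrix over d steps (Chapman-Kolmogorov);
    d = 0 yields the identity, matching the lemma's special case."""
    return np.linalg.matrix_power(P, d)
```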

*Remark 5.* It is noted that the above derivation relies on the memoryless (Markov) property of the chain. With Definition 3, necessary and sufficient conditions for the stochastic stability of the closed-loop system in (7) can be obtained.

Theorem 6. *The closed-loop system with completely known transition probabilities in (7) is stochastically stable if and only if there exist the matrices such that the following matrix inequalities:
**
hold for all $i \in \mathcal{M}$ and $m \in \mathcal{N}$.*

*Proof (sufficiency). *For the closed-loop system in (7), consider the Lyapunov function
Then

Define , , and . To evaluate the first term in (13), we need to apply the three probability transition matrices:
Then, (13) can be written as
Thus, if , then
where From inequality (16), we can see that for any ,
Furthermore
Let , then

From Definition 3, the closed-loop system in (7) is stochastically stable.

*Necessity.* Assume that the closed-loop system in (7) is stochastically stable. Then, we have

Define
with . Since each term is nonnegative, the sum is monotonically increasing as $T$ increases. From (20), it is upper bounded; furthermore, its limit exists and can be represented as
Since this is valid for any , we have
From (22), we have , since . Consider
By using Lemma 4, the second term in (24) can be written as
Substituting (25) into (24), we can get
Letting and noticing (22) and , we have

This completes the proof.
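For candidate Lyapunov matrices, the coupled inequalities of Theorem 6 can be checked numerically. The sketch below assumes, for simplicity, that the delay and sampling-period chains have already been merged into one joint mode set with a single transition matrix `THETA`; it illustrates the structure of the test, not the paper's exact LMI:

```python
import numpy as np

def stochastically_stable(A_modes, THETA, P_modes):
    """Check the Theorem-6-style coupled inequalities
    A_i^T (sum_j theta_ij P_j) A_i - P_i < 0 for every mode i."""
    for i, Ai in enumerate(A_modes):
        EP = sum(THETA[i, j] * P_modes[j] for j in range(len(P_modes)))
        M = Ai.T @ EP @ Ai - P_modes[i]
        if np.max(np.linalg.eigvalsh(M)) >= 0:   # not negative definite
            return False
    return True

# Toy example: two closed-loop modes, both contracting (values assumed)
A_modes = [np.array([[0.5]]), np.array([[0.3]])]
THETA = np.array([[0.5, 0.5], [0.5, 0.5]])
P_modes = [np.array([[1.0]]), np.array([[1.0]])]
```

Here mode 0 gives $0.5^2 \cdot 1 - 1 = -0.75 < 0$ and mode 1 gives $0.3^2 \cdot 1 - 1 < 0$, so the check succeeds.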

*Remark 7.* Although the proof of Theorem 6 is similar to that in [10], the problem considered is not the same: the *S-C* random delays and random sampling periods are considered in this paper, whereas the *S-C* and *C-A* random delays are considered in [10]. In addition, a continuous-time plant is considered in this paper, while discrete-time systems are considered in [10].

Theorem 6 gives sufficient and necessary conditions for the existence of the state feedback controller when the transition probabilities are completely accessible. However, the conditions in (11) are nonlinear in the controller matrices. The following theorem gives LMI conditions equivalent to (11).

Theorem 8. *There exists a controller (4) such that the closed-loop system in (7) is stochastically stable if and only if there exist matrices and , such that the following matrix inequalities
*

*with*

*hold for all $i \in \mathcal{M}$ and $m \in \mathcal{N}$.*

*Proof. *By applying the Schur complement and letting , the proof can be readily completed.
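The Schur complement step in the proof can be illustrated numerically: a symmetric block matrix is negative definite exactly when one diagonal block and the corresponding Schur complement are. A small sketch with assumed matrices:

```python
import numpy as np

def negdef(M):
    """True iff the symmetric matrix M is negative definite."""
    return bool(np.max(np.linalg.eigvalsh(M)) < 0)

def schur_negdef(A, B, C):
    """Schur complement test: [[A, B], [B^T, C]] < 0 holds iff
    C < 0 and A - B C^{-1} B^T < 0 (A, C symmetric)."""
    block = np.block([[A, B], [B.T, C]])
    return negdef(block), negdef(C) and negdef(A - B @ np.linalg.inv(C) @ B.T)

# Assumed illustrative matrices
A = np.array([[-2.0, 0.0], [0.0, -3.0]])
B = np.array([[0.5], [0.1]])
C = np.array([[-1.0]])
direct, via_schur = schur_negdef(A, B, C)
```

Both routes agree, which is exactly the equivalence used to linearize the conditions of Theorem 6.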

*Remark 9.* The conditions in Theorem 8 are a set of LMIs with nonconvex constraints, which can be solved by several existing iterative LMI algorithms. Using a cone complementarity approach [22–24], we can solve this nonconvex feasibility problem by formulating it as an optimization problem subject to LMI constraints.

Now, using the cone complementarity approach, we replace the original nonconvex feasibility problem formulated in Theorem 8 with the following nonlinear minimization problem involving LMI conditions.

Minimize
subject to (28) and

*Algorithm 10.* *Step 1.* Find a feasible point satisfying (30). If none exists, exit. Set the iteration counter $k = 0$.

*Step 2.* Solve the following LMI problem:
Set $k = k + 1$.

*Step 3.* If the conditions in (31) are satisfied, then exit with the current solution. If the conditions in (31) are not satisfied within a specified maximum number of iterations, then exit. Otherwise, update the iterates and go to Step 2.

#### 4. The Case of Partly Known Transition Probabilities

In the above section, the transition probabilities were assumed completely accessible and treated as available knowledge for analyzing the NCSs. In practice, this kind of information, including the variation of the time delay and of the actual sampling period, is hard to obtain, and this variation can degrade the control performance and even destabilize the system.

In this section, the transition probabilities are assumed to be only partially accessible; namely, some elements in the matrices $\Pi$ and $\Lambda$ in (3) are time invariant but unknown. For instance, where "?" represents an unavailable element. For notational clarity, we denote

Moreover, if a row contains unknown elements, it is further described as where represents a known element (with its index) in the corresponding row of $\Pi$ and represents an unknown element (with its index) in that row; the known and unknown elements of $\Lambda$ are denoted analogously.
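Numerically, partly known transition matrices can be represented with a placeholder for the "?" entries; the known mass of each row then bounds the unknown elements, which is what the stability conditions below exploit. A sketch (using `np.nan` as the placeholder; the matrix values are assumptions):

```python
import numpy as np

# Partly known transition matrix in the spirit of (33): np.nan marks the
# "?" (unavailable) entries. Values are illustrative assumptions.
PI = np.array([[0.7,    np.nan, np.nan],
               [np.nan, 0.3,    np.nan],
               [0.2,    np.nan, 0.4]])

def split_known(P):
    """For each row, return the indices of known/unknown columns and the
    known probability mass; the unknown entries must sum to 1 - mass."""
    rows = []
    for i in range(P.shape[0]):
        known = [j for j in range(P.shape[1]) if not np.isnan(P[i, j])]
        unknown = [j for j in range(P.shape[1]) if np.isnan(P[i, j])]
        mass = float(P[i, known].sum())
        rows.append((known, unknown, mass))
    return rows
```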

*Remark 11.* Note that the expression in (33)–(35) was first introduced for regular state-space Markovian systems [13, 14], and it covers both extreme cases: if all elements of a row are known, the corresponding transition probabilities are completely available, and if all are unknown, they are completely unavailable.

Now, the following theorem presents a sufficient condition for the stochastic stability of the system described by (7) with partially known transition probabilities (33).

Theorem 12. *The system described by (7) with partially known transition probabilities (33) is stochastically stable if there exist matrices such that the following matrix inequalities
**
hold for all $i \in \mathcal{M}$ and $m \in \mathcal{N}$, where
*

*Proof. *From Theorem 6, we know

However, from (30), we know that some elements are inaccessible in and . According to the property of Markov chains, we obtain
where , , are defined in (38).

According to Lemma 4, . We obtain , that is inaccessible in , even if there is an inaccessible element in . So we obtain , that is inaccessible in , and

Substituting the formulae (40)–(43) into (39), we can obtain (36) and (37). This completes the proof.

Theorem 12 gives sufficient conditions for the existence of the state feedback controller when the transition probabilities are partly accessible. However, the conditions in (36) and (37) are nonlinear in the controller matrices. The following theorem gives LMI conditions equivalent to (36) and (37).

Theorem 13. *Consider the system described by (7) with partially known transition probabilities (33). The corresponding system is stochastically stable if there exist matrices such that the following matrix inequalities:
**
with
*