#### Abstract

This paper is concerned with the stabilization problem for a class of discrete-time delayed systems, for which the stabilizing controller is designed to be partially delay-dependent. The distribution property of such a controller is described by a discrete-time Markov chain with two modes. The two traditional special cases of state feedback control, without and with time delay, respectively, are both included. Based on the proposed controller, new stabilization conditions depending on the mode probabilities are developed. Because the established results are in LMI form, they are further extended to the more general cases in which the transition probabilities are uncertain or totally unknown, and further applications are given in detail. Finally, numerical examples are used to demonstrate the effectiveness and superiority of the proposed methods.

#### 1. Introduction

As is well known, time delay is frequently encountered in practical dynamical systems, such as chemical systems, heating systems, biological systems, networked control systems (NCSs), and telecommunication and economic systems. The presence of time delay in such systems often leads to oscillation, instability, and poor performance. Motivated by these facts, various research topics on time-delayed systems have been studied in [1–5], covering both delay-independent and delay-dependent results. Because they make use of the information on the length of the delay, delay-dependent results are less conservative, especially when the time delay is small. During the past decades, much effort has been devoted to the derivation of delay-dependent conditions. For example, by exploiting the slack variable technique, less conservative results on systems with time-varying delays were established in [6–8], while other results based on the Jensen inequality were obtained in [9–12]. By considering the distribution of the delay, some improved delay-dependent results were presented in [13–17]. Other research topics on delayed systems are often related to networked control systems [18–21], complex networks [22–25], and Markovian jump systems [26–29]. In particular, there is a special kind of time delay for Markovian jump systems, named mode-dependent time delay, which is not only time-varying but also depends on the system mode. For this kind of time delay, some results for the continuous-time case were obtained in [30–33], while the results in [34–36] concern the discrete-time case.

A survey of the existing references on various kinds of time-delay systems shows that stabilization by state feedback is mainly realized through a controller either without or with delay, applied separately. When a state feedback controller without delay is designed for a delayed system, the current system state is assumed to be available online, which is an idealized assumption. On the contrary, the state of a delayed state feedback controller is assumed to always be delayed, which is an equally absolute assumption. More importantly, it is found in this paper that neither of the above stabilization schemes is uniformly superior in terms of conservatism. Since the underlying systems have time delays, several natural questions arise. Can a controller that uses both the current and the delayed state be designed? What is the form of such a controller? Compared with the traditional stabilizing controllers for delayed systems, does it have advantages? What is the correlation among such controllers? It is therefore necessary to consider these problems in detail. To the best of our knowledge, very few results are available on designing a feedback controller for delayed systems that depends on the state and the delayed state simultaneously. These facts motivate the current research.

In this paper, the stabilization of discrete-time delayed systems is achieved by a kind of partially delay-dependent controller. The main contributions of this paper are summarized as follows: (1) a kind of partially delay-dependent controller is developed, whose probability distribution is modeled as a discrete-time Markov chain with two modes; (2) new conditions for such a stabilizing controller are presented in terms of LMIs, where the probabilities are taken into account in the controller design; (3) the given results are further extended to the general cases in which such probabilities are uncertain or totally unknown, which are also developed in LMI form; (4) based on the proposed methods, further applications to state feedback control with partially observable states are considered.

*Notation*. denotes the -dimensional Euclidean space, and is the set of all real matrices. is the set of positive integers. denotes the mathematical expectation of . refers to the Euclidean vector norm or the spectral matrix norm. and denote the minimum and maximum eigenvalues of a matrix, respectively. In symmetric block matrices, “” is used as an ellipsis for the terms induced by symmetry, and denotes a block-diagonal matrix.

#### 2. Problem Formulation

Consider a class of discrete-time delayed systems described as follows: where is the system state, is the control input, is the time delay, and is an initial value at . , , and are known matrices with appropriate dimensions.

In this paper, a kind of partially delay-dependent controller is proposed as follows: where both and are control gains to be determined. The parameter indicates whether the delayed controller is added or not. In this paper, it is assumed to be a discrete-time homogeneous Markov chain taking values in a finite set with the following transition probability matrix: Here, the parameters and are probabilities defined as follows: They are named the recovery rate and the failure rate of the delayed state feedback controller , respectively. If , reduces to a Bernoulli-type process, whose probabilities are given accordingly. After applying controller (2) to system (1), one gets the closed-loop system, which is equivalent to the compact form with the indicated definitions. Here, is also a discrete-time homogeneous Markov chain taking values in a finite set , whose transition probability matrix is similar to (3).
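To make the switching mechanism concrete, the sketch below simulates a two-state Markov chain together with a delayed linear system under a partially delay-dependent feedback law. This is an illustrative reading of the setup, not the paper's exact equations (2) and (3): the controller form (mode 0 applies the delay-free gain, mode 1 the delayed gain), the matrix names, and the transition probabilities `p01`, `p10` are all assumptions.

```python
import numpy as np

def simulate(A, Ad, B, K1, K2, p01, p10, d, x0, steps, seed=0):
    """Simulate x(k+1) = A x(k) + Ad x(k-d) + B u(k) under an assumed
    partially delay-dependent law: u(k) = K1 x(k) in mode 0 (delay-free)
    and u(k) = K2 x(k-d) in mode 1 (delayed).  The mode r(k) is a
    two-state Markov chain with P(0 -> 1) = p01 and P(1 -> 0) = p10."""
    rng = np.random.default_rng(seed)
    hist = [np.asarray(x0, dtype=float)] * (d + 1)   # x(-d), ..., x(0)
    r = 0                                            # initial mode: delay-free
    xs, modes = [hist[-1]], [r]
    for _ in range(steps):
        x, xd = hist[-1], hist[-1 - d]
        u = K2 @ xd if r == 1 else K1 @ x            # mode-dependent feedback
        hist.append(A @ x + Ad @ xd + B @ u)
        go = p01 if r == 0 else 1.0 - p10            # one step of the chain
        r = int(rng.random() < go)
        xs.append(hist[-1])
        modes.append(r)
    return np.array(xs), np.array(modes)
```

Setting `p10 = 1 - p01` makes the mode sequence i.i.d., which recovers the Bernoulli special case mentioned above.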

*Remark 1. *Compared with state feedback controllers without or with delay added separately (see, e.g., [4, 5, 14, 15, 29]), the partially delay-dependent controller (2) is more general and bridges the above two special cases. When controller (2) is applied, the mode probabilities enter the proposed stabilization conditions and play important roles in system synthesis. Although there is a Markov chain in controller (2), the resulting system (6) or (7) is different from traditional Markovian jump systems [30–36]. In those references, the system parameters switch simultaneously according to a Markov process, while all the system parameters considered here are deterministic and only the probability distribution of the delayed controller is modeled as a Markov process. More importantly, an inspection of existing references such as [34–36] shows that stabilization there is also realized by adding either of the above special controllers separately, without taking such probabilities into account.

*Remark 2. *In this paper, a kind of partially delay-dependent controller is proposed, whose distribution probability is described by a Markov process. The key idea of controller (2) can also be applied to other systems and problems, such as singular systems, pinning control of complex networks, and networked control systems with a disordering phenomenon. For the stabilization problem of singular systems, controller (2) can be designed directly; however, the corresponding analysis and synthesis problems should be reconsidered carefully, since the singular derivative matrix is involved. Similarly, based on traditional pinning control methods for networks and controller (2), a kind of pinning controller for complex networks with time delay could be designed. Last but not least, the key idea of the proposed controller may be applied to deal with the disordering problem of networked control systems, although how to construct an efficient controller based on controller (2) to resist disordering should be considered carefully. In a word, the proposed controller (2) has further applications and could be used to handle other problems, which will be our future topics.

Before presenting our main results, a definition for system (6) or (7) is introduced here.

*Definition 3. *System (6) is stochastically stable if holds for all initial conditions and .
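Definition 3's expected-energy requirement can be probed numerically. The sketch below estimates a truncated version of the expected sum of squared state norms by Monte Carlo for a two-mode jump system with delay; a finite estimate that is insensitive to the horizon is consistent with (but of course does not prove) stochastic stability. The closed-loop matrices, transition matrix, and indexing convention are placeholder assumptions, not the paper's equations.

```python
import numpy as np

def expected_energy(Acl, d, P, x0, steps=200, trials=300, seed=1):
    """Monte Carlo estimate of E[sum_k ||x(k)||^2], truncated at `steps`,
    for x(k+1) = Acl[r][0] x(k) + Acl[r][1] x(k-d), where the mode r(k)
    evolves by the row-stochastic two-mode transition matrix P."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        hist = [np.asarray(x0, dtype=float)] * (d + 1)
        r = 0
        energy = float(hist[-1] @ hist[-1])          # include the initial term
        for _ in range(steps):
            A0, A1 = Acl[r]
            x = A0 @ hist[-1] + A1 @ hist[-1 - d]
            energy += float(x @ x)
            hist.append(x)
            r = int(rng.random() < P[r][1])          # two modes: go to 1 w.p. P[r][1]
        total += energy
    return total / trials
```

Doubling `steps` and observing an essentially unchanged estimate is the practical sign that the truncated sum has converged.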

#### 3. Main Results

Theorem 4. *Consider system (1) with a given scalar ; there exists controller (2) such that the resulting closed-loop system (6) is stochastically stable if there exist , , , and , , satisfying**where **Then, the gains of partially delay-dependent controller (2) can be computed by*

*Proof. *Choose a stochastic Lyapunov functional for system (7) as where and . For each , we have which yields the stated bound. Based on this, we can get where By the Schur complement lemma, it is known that is equivalent to where and . Then, by pre- and postmultiplying both sides of (20) with and its transpose, respectively, we have it equal to where . From (10) or (11), it is concluded that matrix is nonsingular. Then, by pre- and postmultiplying (21) with and its transpose, respectively, it is obtained that As for the nonlinear term , it is obtained that Based on (23) and taking into account representation (13), it is known that conditions (10) and (11) imply (20). Then, we conclude that where . Taking the expectation on both sides of (24) and continuing the iteration of (24), one gets a bound implying that the resulting closed-loop system (7) is stochastically stable. This completes the proof.
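As a cross-check on any gains produced by Theorem 4, one can test mean-square (hence stochastic) stability of the closed loop exactly: augmenting the delayed state into z(k) = [x(k); ...; x(k-d)] turns system (7) into a Markov jump linear system, whose second-moment stability is decided by the spectral radius of a Kronecker-product operator. This is a standard MJLS result (Costa et al.), not the paper's LMI condition, and the matrix names and controller form below are assumptions.

```python
import numpy as np

def ms_stable(A_modes, P):
    """Mean-square stability of z(k+1) = A_modes[r(k)] z(k) with a two-mode
    transition matrix P: stable iff rho((P^T kron I) blkdiag(Ai kron Ai)) < 1."""
    m = A_modes[0].shape[0] ** 2
    D = np.zeros((2 * m, 2 * m))
    D[:m, :m] = np.kron(A_modes[0], A_modes[0])
    D[m:, m:] = np.kron(A_modes[1], A_modes[1])
    L = np.kron(np.asarray(P, dtype=float).T, np.eye(m)) @ D
    return bool(np.max(np.abs(np.linalg.eigvals(L))) < 1.0)

def augment(A, Ad, B, K, d, delayed):
    """Companion-form augmentation z(k) = [x(k); ...; x(k-d)] of the closed
    loop x(k+1) = (A + B K) x(k) + Ad x(k-d) when delayed=False, or
    x(k+1) = A x(k) + (Ad + B K) x(k-d) when delayed=True (d >= 1 assumed)."""
    n = A.shape[0]
    top = [np.zeros((n, n)) for _ in range(d + 1)]
    top[0] = A if delayed else A + B @ K
    top[d] = (Ad + B @ K) if delayed else Ad
    shift = np.hstack([np.eye(n * d), np.zeros((n * d, n))])  # delay line
    return np.vstack([np.hstack(top), shift])
```

Given gains from the theorem, one would build the two augmented mode matrices with `augment` and pass them, together with the assumed transition matrix, to `ms_stable`.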

*Remark 5. *In this theorem, the relationship between such probabilities and the existence conditions for the stabilizing controller (2) is established. Because the corresponding probability distribution is considered, a stochastic Lyapunov functional depending on the Markov process is exploited. Since the probabilities are included, they are important in the controller design, and their effects will be illustrated by numerical examples. Moreover, it is found by a numerical example that neither of the two special cases is always less conservative than the other. This phenomenon further demonstrates the utility and superiority of the partially delay-dependent controller (2).

*Remark 6. *It is worth mentioning that, without loss of generality, the stochastic Lyapunov functional is selected to be (14), which is comparatively simple. Based on the methods proposed in this paper, however, there are still some possible ways to further reduce the conservatism. First of all, additional terms related to the time delay could be added to (14). In addition, other techniques such as the slack variable method and the Jensen inequality method can be used in the proof of Theorem 4. On the other hand, because the results are in LMI form, they could be extended to other systems such as singular systems, as well as to the output control or filtering problems of discrete-time delayed systems, which will be our future work.

From the form of the partially delay-dependent controller (2), it is seen that two special cases are included, described as follows, where and are the control gains to be determined. They are the traditional state feedback controllers without and with delay and are obtained by letting and , respectively. After applying such traditional controllers, by a method similar to that of this paper, we have the following corollaries.

Corollary 7. *Consider system (1) with a given scalar ; there exists controller (27) such that the resulting closed-loop system is stochastically stable if there exist , , and and satisfying**where . Then, the gain of controller (27) is computed by*

*Proof. *When only controller (27) is applied, similar to the proof of Theorem 4, it is obtained that the resulting closed-loop system is stochastically stable if the following condition holds, where , , and . By pre- and postmultiplying both sides of (31) with and its transpose, respectively, condition (31) is equivalent to where . Based on the representation of (30), it is concluded that conditions (30) and (32) are the same. This completes the proof.

Corollary 8. *Consider system (1) with a given scalar ; there exists controller (28) such that the resulting closed-loop system is stochastically stable if there exist , , and and satisfying**where . Then, the gain of controller (28) is computed by*

*Proof. *Based on the proofs of Theorem 4 and Corollary 7, one can get this corollary easily. Thus, the proof is omitted here.

From Theorem 4, it is seen that the probabilities and play an important role in the controller design and should be known exactly. In some applications, however, it is very hard or costly to obtain them exactly, and they may even be totally unknown. It is therefore natural and important to study such general cases. If there exist uncertainties in and , we use their estimates, described as follows, where and are the estimates and the admissible uncertainties satisfy with and with , respectively. Then, based on Theorem 4, we have the following theorem.
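When only estimates of the probabilities are available, a brute-force counterpart of the robust analysis is to sweep the whole uncertainty box and apply an exact mean-square stability test at each grid point. The sketch below does this for a two-mode jump system; the transition-matrix convention `P = [[1-a, a], [b, 1-b]]`, the grid density, and all numerical values are assumptions, and gridding certifies nothing between grid points, unlike the LMI conditions of Theorem 9.

```python
import numpy as np
from itertools import product

def ms_radius(A_modes, P):
    """Spectral radius of the second-moment operator of a two-mode Markov
    jump linear system z(k+1) = A_modes[r(k)] z(k); < 1 means mean-square stable."""
    m = A_modes[0].shape[0] ** 2
    D = np.zeros((2 * m, 2 * m))
    D[:m, :m] = np.kron(A_modes[0], A_modes[0])
    D[m:, m:] = np.kron(A_modes[1], A_modes[1])
    L = np.kron(np.asarray(P, dtype=float).T, np.eye(m)) @ D
    return float(np.max(np.abs(np.linalg.eigvals(L))))

def robust_ms_stable(A_modes, a_hat, b_hat, da, db, grid=5):
    """Check mean-square stability over the box a in [a_hat-da, a_hat+da],
    b in [b_hat-db, b_hat+db], assuming the convention P = [[1-a, a], [b, 1-b]]."""
    alphas = np.clip(np.linspace(a_hat - da, a_hat + da, grid), 0.0, 1.0)
    betas = np.clip(np.linspace(b_hat - db, b_hat + db, grid), 0.0, 1.0)
    return all(
        ms_radius(A_modes, [[1 - a, a], [b, 1 - b]]) < 1.0
        for a, b in product(alphas, betas)
    )
```

Such a sweep is useful for sanity-checking gains returned by the robust LMI conditions, or for visualizing how the stability region shrinks as the uncertainty radii grow.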

Theorem 9. *Consider system (1) with a given scalar ; there exists controller (2) such that the resulting closed-loop system (6) under conditions (35) and (36) is robustly stochastically stable if there exist , , , , and , , satisfying**where **The other symbols are given in Theorem 4. Then, the gains of the designed partially delay-dependent controller (2) under conditions (35) and (36) are also computed by (13).*

*Proof. *By the proof of Theorem 4, it is known that the stochastic stability of the resulting closed-loop system under the general conditions (35) and (36) can be guaranteed by , . This is equivalently described as with . First, we deal with condition (42). By condition (35), it is concluded that (42) is equivalent to where . It is implied by the following conditions, where the former is further guaranteed by the latter. Letting and applying the Schur complement lemma, we have (47) and (46) equivalent to where . From (37), it is known that is nonsingular. Then, by pre- and postmultiplying both sides of (48) with and its transpose, it is obtained that where . The nonlinear term can be handled by (23). Similarly, is handled as with . Based on these facts and taking into account (13), it follows that condition (37) implies (50). As for (49), by pre- and postmultiplying both its sides with and its transpose, one has it equal to (39). As for (43), similar to (42), it is concluded that it is equivalent to where . Similar to the above, it is implied by the following condition. By pre- and postmultiplying both sides of (53) with and its transpose, we have a condition which is equivalent to the next one. Letting and considering representation (13), similar to (37) implying (50), it is obtained that (38) implies (56). As for (54), it can be guaranteed by (40), by a process similar to the proof of (46). This completes the proof.

When the probabilities and satisfy another general case in which both of them are unknown, how to design controller (2) is also an interesting question. Similar to Theorem 9, we have the following result.

Theorem 10. *Consider system (1) with a given scalar ; there exists controller (2) with unknown and such that the resulting closed-loop system (6) is stochastically stable if there exist , , , , and , , satisfying conditions (39), (40), and**The corresponding symbols are given in Theorems 4 and 9. Then, the gains of the designed partially delay-dependent controller (2) are constructed by (13).*

*Proof. *When both and are unknown, , , is rewritten as Similar to the proof of Theorem 9, such conditions are equivalent to where and . Because and are unknown, condition (59) is guaranteed by (46) and the following condition, while condition (62) is guaranteed by (54) and the next one. Similar to the proof that conditions (37) and (39) imply (44), one can show that conditions (39) and (57) imply (61), while condition (62) is guaranteed by conditions (40) and (58) together. This completes the proof.

For delayed system (1), the implementation of state feedback controller (27) or (28) requires the corresponding state or delayed state to be fully observable. When the states of such controllers are observable only with some probabilities, the corresponding state feedback controllers without or with delay can be described as follows, respectively, where and are control gains to be determined and is also a Markov process. The corresponding state observation sets are defined as follows and are referred to as stochastic sets. Based on the proposed method for the partially delay-dependent controller, when these two sets are complementary, an improved controller based on the stochastic observation sets of the states can be constructed accordingly. After applying controllers (65), (66), and (69) to system (1), respectively, the methods exploited in this paper yield the corresponding results easily.

Corollary 11. *Consider system (1) with a given scalar ; there exists controller (65) with condition (67) such that the resulting closed-loop system is stochastically stable if there exist , , , and , , satisfying**where the corresponding symbols are given in Theorem 4. Then, the gain of controller (65) is computed as*

Corollary 12. *Consider system (1) with a given scalar ; there exists controller (66) with condition (68) such that the resulting closed-loop system is stochastically stable if there exist , , , and , , satisfying**where , and the other symbols are given in Theorem 4. Then, the gain of controller (66) is computed as*

Because controllers (2) and (69) are similar, the result based on controller (69) is similar to Theorem 4; only the meaning of the transition probability matrix is different. Thus, it is omitted here.

#### 4. Numerical Examples

*Example 1. *Consider a discrete-time delayed system described by (1), whose parameters are given as follows, where the time delay is assumed to be . When the desired controller is of form (27), the gain is obtained from Corollary 7, while there is no solution for controller (28) by Corollary 8. On the other hand, without loss of generality, when the probabilities are assumed to be and , by Theorem 4, the gains of the partially delay-dependent controller (2) are computed accordingly. For this example, Corollary 7 is less conservative than Corollary 8. However, when the system parameters of (1) are instead selected as follows, where the time delay is also assumed to be , there is no solution for controller (27) by Corollary 7, while the gain of controller (28) can be obtained by Corollary 8. For this case, Corollary 8 is less conservative than Corollary 7. Thus, we cannot conclude that either of them is always less conservative, and the methods proposed in this paper are therefore useful and advantageous. In order to further demonstrate the correlation between the stabilization region and such probabilities, the system parameters of discrete-time delayed system (1) are given as follows, where the time delay is assumed to be and and are scalars. The probabilities and are allowed to take values in , respectively. Under , the upper bound of the allowable range of obtained by Theorem 4 is listed in Table 1. On the other hand, when , one can similarly get the corresponding upper bound of , given in Table 2. Moreover, the correlations between the allowable range of with given and for different pairs are demonstrated in Figures 1 and 2, respectively. Based on these simulations, the correlation between the stabilization region and such probabilities is illustrated vividly.

*Example 2. *Consider the cart and inverted pendulum system illustrated in Figure 3.

There, and are the cart mass and the pendulum mass, is the length of the pendulum, is the cart position, is the pendulum angular position, and is the input force. The state variables are selected to be , , , and . Without loss of generality, it is assumed that , , and , and the surface is frictionless. Under the sampling period , the discretized model linearized about the upright position is obtained as follows, where the matrices are given accordingly. Because it has eigenvalues at 1, 1, 1.5569, and 0.6423, the above discrete-time system is unstable. Without loss of generality, it is assumed that the above system has a constant delay , and the corresponding matrix is given accordingly. Based on the proposed method, a partially delay-dependent controller is designed, whose transition probabilities are assumed to be and , respectively. By Theorem 4, the control gains are computed, and under the initial condition , the state response is shown in Figure 4, where the inset is the simulation of taking values in .
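The discretization step in Example 2 can be reproduced with a standard zero-order-hold computation. The sketch below linearizes a frictionless cart-pendulum about the upright position and discretizes it via the augmented-matrix exponential; the parameter values `M`, `m`, `l`, `g`, `T` and the state ordering are illustrative assumptions, since the paper's exact numbers are not reproduced here, so the resulting matrices will differ from those of the example.

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential via a truncated Taylor series (adequate for the
    small-norm matrix A*T arising here; not a general-purpose expm)."""
    E, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    return E

def cart_pendulum_zoh(M=1.0, m=0.1, l=0.5, g=9.8, T=0.02):
    """Linearized cart-pendulum about the upright position, discretized by
    zero-order hold.  State ordering (assumed): [cart position, cart
    velocity, pendulum angle, angular velocity]; parameter values are
    illustrative, not the paper's."""
    A = np.array([
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, -m * g / M, 0.0],
        [0.0, 0.0, 0.0, 1.0],
        [0.0, 0.0, (M + m) * g / (M * l), 0.0],
    ])
    B = np.array([[0.0], [1.0 / M], [0.0], [-1.0 / (M * l)]])
    # ZOH identity: expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]
    n, r = A.shape[0], B.shape[1]
    aug = np.zeros((n + r, n + r))
    aug[:n, :n] = A * T
    aug[:n, n:] = B * T
    E = expm_taylor(aug)
    return E[:n, :n], E[:n, n:]
```

As in the example, the discretized open loop is unstable: the continuous-time system has a positive eigenvalue at the upright equilibrium, so the discrete model inherits an eigenvalue of magnitude greater than one.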

When there are uncertainties in and , by Theorem 9, the corresponding control gains can be solved as follows, where with and with . However, when such probabilities are totally unknown, it is found from Theorem 10 that there is no solution for controller (2). The main reasons are that the effects of such probabilities are removed from the controller design and that the original open-loop system (80) is unstable. This simulation also demonstrates the utility of the proposed methods.

#### 5. Conclusions

In this paper, we have studied the stabilization problem for a class of discrete-time delayed systems by exploiting partially delay-dependent controllers. The designed controller combines the state feedback controllers without and with delay, and its probability distribution is described by a Markov process with two modes. Several sufficient conditions for the existence of the designed controller are given in LMI form, where the corresponding probabilities are contained and play important roles. Because the given results are LMIs, the general cases in which such probabilities are uncertain or unknown have been considered, respectively, and further applications to control with partially observable states have also been demonstrated. The effectiveness and superiority of the proposed methods have been shown by numerical examples.

#### Conflict of Interests

Guoliang Wang and Boyu Li declare that they have no competing, commercial, or financial interests.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61104066, 61374043, and 61473140, the China Postdoctoral Science Foundation funded project under Grant 2012M521086, the Program for Liaoning Excellent Talents in University under Grant LJQ2013040, and the Natural Science Foundation of Liaoning Province under Grant 2014020106.