Complexity


Research Article | Open Access

Volume 2020 | Article ID 6195162 | 13 pages | https://doi.org/10.1155/2020/6195162

State-Estimator-Based Asynchronous Repetitive Control of Discrete-Time Markovian Switching Systems

Academic Editor: Sergey Dashkovskiy
Received: 08 Oct 2019
Revised: 19 Dec 2019
Accepted: 08 Jan 2020
Published: 03 Feb 2020

Abstract

This paper investigates the problem of asynchronous repetitive control for a class of discrete-time Markovian switching systems. The control goal is to track a given periodic reference without steady-state error. To achieve this goal, an asynchronous repetitive controller that renders the overall closed-loop switched system mean square stable is proposed. To reflect realistic scenarios, the proposed approach does not assume that the system modes are available synchronously to the controller but instead designs a detector that provides estimated values of the system modes to the controller. Based on a detected-mode-dependent estimator, the plant and asynchronous repetitive controller are formulated as a closed-loop stochastic system. By utilizing tools from stochastic Lyapunov–Krasovskii stability theory, we develop sufficient conditions in terms of linear matrix inequalities (LMIs) such that the closed-loop system is mean square stable and also simultaneously establish a synthesis procedure for obtaining the gain matrices. We provide numerical simulations on an electrical circuit switched system to illustrate the approach.

1. Introduction

As a special class of hybrid dynamic systems, Markovian switching systems are modeled by a set of linear or nonlinear governing equations with the switching between systems in the set determined by Markov chains [1]. Since Markovian switching systems can describe abrupt variations caused by random component failures and sudden environmental changes, important and useful results have been reported in many practical applications, such as robot manipulators [2], power systems [3], economic systems [4], sensor networks [5], neural networks [6], multiagent systems [7], and networked control systems [8–10]. To date, a variety of results have been published on the design of stabilizing controllers for Markovian switching systems; see [11–21] and the references therein. These results on control and filtering of Markovian switching systems are based on the assumption that the mode information of the plant is fully available to the controller or estimator at every time instant, so that the switching of the controller or estimator is synchronous with that of the plant. Accordingly, the designed controller is typically referred to as mode-dependent or synchronous. This assumption restricts the applicability of such controller designs to many practical systems, because the plant mode information may not be accessible to the controller due to communication delays and/or missing measurements, which can lead to asynchronous behavior between the controller and the system modes.

Since an asynchronous controller has broader applicability than a synchronous one, the asynchronous control design problem has received wide attention in recent years [22–27]. For general switched systems, a strategy to stabilize continuous-time switched systems with asynchronous switching was provided in [22]. Asynchronous filtering was studied for discrete-time switched Takagi–Sugeno (T-S) fuzzy systems in [23]. Based on the hidden Markov model, an H2 controller design was proposed to stochastically stabilize a class of Markovian switching systems with partial mode information in [24]. A passivity-based asynchronous control problem for Markovian switching systems was considered in [25]. An H∞ filtering design for discrete-time hidden Markovian switching systems was discussed in [26]. In [27], an asynchronous sliding mode control design was proposed for a class of uncertain Markovian switching systems with time-varying delays and stochastic perturbation. Although recent published results are encouraging, many relevant open problems of importance to practical applications remain, and we consider one such essential problem in this paper.

Since the control tasks in many applications that can be modeled by such systems are often repetitive, repetitive control formulations are increasingly used in many applications, such as disk drive systems [28], rotating machinery [29], micro-/nanomanipulation [30], and power electronics systems [31]. Repetitive control strategies use error measurements from the previous period to reduce subsequent steady-state tracking errors for periodic exogenous input signals. There is a rich body of literature on repetitive control design techniques [32–40]. Most repetitive control designs in the literature are developed for deterministic systems, whereas designs for switched stochastic dynamical systems are sparse. In particular, the problem of designing a mode-dependent repetitive controller for discrete-time Markovian switching systems was first studied in [40], where a set of sufficient conditions in terms of linear matrix inequalities is derived for stabilization by combining a 2D Lyapunov functional with a singular value decomposition of the output matrix. However, the problem of asynchronous repetitive control for Markovian switching systems has not been investigated, mainly due to the complexity of addressing the asynchronous behavior between the controller modes and the Markovian switching system modes.

In this paper, we design a state-estimator-based asynchronous repetitive controller for a class of discrete-time Markovian switching systems for asymptotic tracking of a desired reference signal by the output, explicitly accounting for the asynchronous behavior between the Markovian switching system modes and the controller modes. The proposed design provides the following contributions to the existing literature: (1) The estimate of the system mode is obtained by a detector via a hidden Markov model with a mode detection probability matrix, and this estimate is utilized in the state estimator and controller. This approach relaxes the assumption, prevalent in the existing literature, that the system mode is available to the controller and that the controller and system mode switching is synchronized. (2) By employing the proposed state-estimator-based asynchronous repetitive control scheme, we formulate the closed-loop system as a hidden Markovian jump system. We show that the closed-loop system achieves mean square asymptotically stable tracking under a set of sufficient conditions that can be expressed in terms of solvable LMIs. (3) We also provide a synthesis procedure to obtain the estimator and controller design matrices; further, to facilitate the solution of the LMIs, we pose a stochastic optimization problem for determining the key gain parameters.

The rest of the paper is organized as follows. In Section 2, we describe the plant and the associated asynchronous repetitive control approach and formulate the closed-loop governing equations for the overall system. The main theoretical framework together with the synthesis procedure is provided in Section 3. Application of the approach to an RLC circuit system and numerical simulations are presented in Section 4. Section 5 provides a summary of this work and potential topics for the future.

1.1. Notations

The following notations are employed in the paper: $\mathbb{Z}^{+}$ denotes the set of non-negative integers; $\mathbb{R}^{n}$ the $n$-dimensional Euclidean space; $\mathbb{R}^{m\times n}$ the set of all $m\times n$ real matrices; $I$ the identity matrix with dimensions derived from the context; $\|x\|$ and $\|A\|$ the Euclidean norm of a vector $x$ and the induced norm of a matrix $A$; superscripts $T$ and $-1$ matrix transposition and matrix inversion; and $*$ the terms due to symmetry in a matrix. $X<Y$ ($X>Y$), where $X$ and $Y$ are both symmetric matrices, means that the matrix $X-Y$ is negative (positive) definite. $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the eigenvalues of matrix $A$ with maximum and minimum real parts, respectively. $\mathbb{E}[\cdot]$ denotes the mathematical expectation operator.

2. Problem Formulation

2.1. System Description

To model the switching process as a discrete-time homogeneous Markov chain, we consider the parameter $r_k$, which takes values from a finite set $\mathcal{S}=\{1,2,\ldots,N\}$ with the following mode transition probabilities:

$$\Pr(r_{k+1}=j \mid r_k=i)=\pi_{ij}, \tag{1}$$

where $\pi_{ij}\ge 0$ for all $i,j\in\mathcal{S}$ and $\sum_{j=1}^{N}\pi_{ij}=1$. Based on this Markov chain, we define the transition probability matrix as $\pi$, whose $(i,j)$-th element is $\pi_{ij}$.
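The mode process above can be simulated directly from a row-stochastic transition matrix. The sketch below is illustrative only; the transition probabilities are hypothetical, not taken from the paper.

```python
import numpy as np

def simulate_markov_chain(pi, r0, steps, rng):
    """Simulate a discrete-time homogeneous Markov chain with
    row-stochastic transition matrix pi, where
    pi[i, j] = Pr(r_{k+1} = j | r_k = i), starting from mode r0."""
    N = pi.shape[0]
    assert np.allclose(pi.sum(axis=1), 1.0), "rows of pi must sum to 1"
    modes = [r0]
    for _ in range(steps):
        modes.append(rng.choice(N, p=pi[modes[-1]]))
    return np.array(modes)

rng = np.random.default_rng(0)
pi = np.array([[0.7, 0.2, 0.1],   # hypothetical transition probabilities
               [0.3, 0.5, 0.2],
               [0.2, 0.3, 0.5]])
path = simulate_markov_chain(pi, r0=0, steps=1000, rng=rng)
```

Each row of `pi` is the conditional distribution of the next mode given the current one, mirroring the definition of the transition probabilities above.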

Let $x(k)\in\mathbb{R}^{n}$, $u(k)\in\mathbb{R}^{m}$, and $y(k)\in\mathbb{R}^{p}$, respectively, denote the state, control input, and system output vectors, and let $A(r_k)$, $B(r_k)$, $C(r_k)$, and $D(r_k)$ denote the system matrices with appropriate dimensions, where $r_k$ is the system mode. The discrete-time linear Markovian switching system that we consider in this paper is given by

$$x(k+1)=A(r_k)x(k)+B(r_k)u(k),\quad y(k)=C(r_k)x(k)+D(r_k)u(k). \tag{2}$$

In the following, for ease of readability, for $r_k=i$, the system matrices $A(r_k)$, $B(r_k)$, $C(r_k)$, and $D(r_k)$ are denoted by $A_i$, $B_i$, $C_i$, and $D_i$, which are assumed to be known. The relative degree of the plant is assumed to be zero, which implies that $D_i\neq 0$.

Since it is generally not possible to accurately measure the evolution of the system mode $r_k$, we consider a probabilistic detector that provides estimated values of $r_k$ with a certain probability. Let the estimated system mode signal be denoted by $\sigma_k$, which is utilized by the controller and need not be synchronized with the system mode $r_k$. We consider a hidden Markov model to characterize the asynchronous phenomenon as follows:

$$\Pr(\sigma_k=s \mid r_k=i)=\mu_{is}, \tag{3}$$

where $i\in\mathcal{S}$, $s\in\mathcal{D}=\{1,2,\ldots,M\}$, and $\mu_{is}$ is the mode detection probability. For any $i\in\mathcal{S}$, $\sum_{s=1}^{M}\mu_{is}=1$. The mode detection probability matrix is denoted as $\mu$. The augmented process can be regarded as a hidden Markov model. We assume that the cardinality of the set $\mathcal{D}$ is not greater than the cardinality of $\mathcal{S}$, i.e., $M\le N$. The asynchronous model (3) covers the mode-dependent and mode-independent cases: (i) when $M=N$ and $\mu_{is}=1$ for $s=i$, i.e., $\sigma_k=r_k$, the model (3) reduces to the synchronous mode-dependent case and (ii) when $M=1$, i.e., the mode detector has only one mode, the asynchronous model (3) becomes a mode-independent case.
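The detector model can be exercised in a few lines: each detected mode is drawn from the row of the detection probability matrix indexed by the true mode. The matrices below are hypothetical examples covering the two special cases noted above.

```python
import numpy as np

def detect_mode(r, mu, rng):
    """Sample the detected mode sigma_k from
    mu[i, s] = Pr(sigma_k = s | r_k = i), given the true mode r_k = r."""
    return rng.choice(mu.shape[1], p=mu[r])

rng = np.random.default_rng(0)

mu_sync = np.eye(3)         # (i) synchronous case: detector always returns r_k
mu_blind = np.ones((3, 1))  # (ii) mode-independent case: detector has one mode
mu_async = np.array([[0.8, 0.1, 0.1],   # hypothetical asynchronous detector
                     [0.1, 0.8, 0.1],
                     [0.1, 0.1, 0.8]])
```

With `mu_sync` the detected mode always equals the true mode, and with `mu_blind` the detector output carries no mode information, matching the two limiting cases of the model.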

2.2. Asynchronous Repetitive Control

We consider the repetitive control structure provided in Figure 1, which comprises the periodic reference input signal with period $T$ that needs to be tracked, the output of the repetitive controller, and the tracking error between the reference and the plant output. The discrete-time repetitive controller is a dynamic internal model of the $T$-periodic reference: its update law (4) adds the current tracking error to the controller output from one period earlier.
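The paper's controller equation is not recoverable from this copy; a common discrete-time repetitive controller is the internal model $v(k)=v(k-T)+e(k)$, which accumulates the tracking error once per period. A minimal sketch under that assumption:

```python
import numpy as np

class RepetitiveController:
    """Discrete internal model of a T-periodic signal:
    v(k) = v(k - T) + e(k), stored in a ring buffer of length T."""
    def __init__(self, T):
        self.buf = np.zeros(T)
        self.k = 0

    def update(self, e):
        i = self.k % len(self.buf)
        self.buf[i] += e          # v(k) = v(k - T) + e(k)
        self.k += 1
        return self.buf[i]
```

A constant error of 1 applied with T = 4 produces outputs 1, 1, 1, 1, 2, 2, 2, 2, …: the controller integrates the error at the period rate, which is what removes steady-state error for T-periodic references.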

For the controller and estimator design, only the detected value of the system mode is assumed to be known. A detected-mode-dependent estimator (5) is utilized to provide an estimate of the state of the linear Markovian switching system (2); its estimated state, estimator output, and estimator parameter matrices, which need to be designed, all switch with the detected mode. Defining the estimation error as the difference between the plant state and the estimated state, the estimator error governing equation is given by (6).

We consider the detected-mode-dependent repetitive control input (7) for the plant, parameterized by the repetitive controller gain.

Let the exogenous reference input signal be zero. Combining (2), (4), and (7), we can write the control input in terms of the estimation error as in (8), where the gain matrices in (8) are related to the control gain parameters through (9).

Remark 1. The effect of the stochastic jumps between detected modes is reflected in the asynchronous repetitive control law (7) through its gain matrices. The above formulation translates the design of these gain matrices into finding parametric matrices, which can be obtained as the solutions of a set of coupled matrix inequalities that depend on the transition and mode detection probabilities (cf. Theorem 2).

2.3. Closed-Loop Governing Equations and Preliminaries

Combining (2), (5), and (8) yields

Substituting (8) into (6) yields

For the tracking error, we know that

Substituting (8) and simplifying, we can derive that

The closed-loop governing equations follow from (10), (11), and (13) and can be expressed in the compact form (14) in terms of an augmented state vector and suitably defined closed-loop matrices.

Based on the above closed-loop governing equations, the problem considered in this paper can be stated as follows: for the discrete-time linear Markovian jump system (2), develop a design procedure to determine the estimator parameter matrices and the controller gain matrices such that the closed-loop system (14) is mean square stable.

To derive the main results, we will employ the following definition, assumption, and lemma.

Definition 1 (see [41]). The discrete-time linear Markovian switching system (2) with $u(k)\equiv 0$ is said to be mean square stable if, for every initial condition $x(0)$ and initial mode $r_0$, $\lim_{k\to\infty}\mathbb{E}\left[\|x(k)\|^{2}\right]=0$.
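For the autonomous jump dynamics $x(k+1)=A_{r_k}x(k)$, this definition is equivalent to a spectral-radius test on the second-moment operator (a classical result of Costa, Fragoso, and Marques for discrete-time Markov jump linear systems): the system is mean square stable iff $\rho\big((\pi^{T}\otimes I)\,\mathrm{blkdiag}(A_i\otimes A_i)\big)<1$. A sketch of that check:

```python
import numpy as np

def ms_stable(A_list, pi):
    """Mean square stability test for x(k+1) = A_{r_k} x(k):
    stable iff the spectral radius of
    Lambda = (pi^T kron I) @ blkdiag(A_i kron A_i) is < 1."""
    N, n2 = len(A_list), A_list[0].size
    D = np.zeros((N * n2, N * n2))
    for i, A in enumerate(A_list):
        D[i * n2:(i + 1) * n2, i * n2:(i + 1) * n2] = np.kron(A, A)
    Lam = np.kron(pi.T, np.eye(n2)) @ D
    return np.max(np.abs(np.linalg.eigvals(Lam))) < 1.0
```

For scalar modes $A_i=a_i$, $\Lambda$ reduces to $\pi^{T}\mathrm{diag}(a_i^{2})$, so, for example, two modes with $a_i=0.5$ are mean square stable under any transition matrix.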

Assumption 1. The system output matrix $C_i$, $i\in\mathcal{S}$, has full row rank, i.e., $\operatorname{rank}(C_i)=p$. The singular value decomposition (SVD) of $C_i$ can be written as $C_i=U_i\begin{bmatrix}S_i & 0\end{bmatrix}V_i^{T}$, where $S_i$ is a diagonal matrix with positive diagonal elements in descending order, $0$ is a zero matrix, and $U_i$ and $V_i$ are orthogonal matrices.

Lemma 1 (see [41]). For a given matrix $C\in\mathbb{R}^{p\times n}$ with $\operatorname{rank}(C)=p$, if $X$ is a symmetric matrix, then there exists a matrix $\hat{X}$ such that $CX=\hat{X}C$ if and only if $X=V\operatorname{diag}(X_{11},X_{22})V^{T}$, where $V$ is the right orthogonal matrix of the SVD of $C$, $X_{11}\in\mathbb{R}^{p\times p}$, and $X_{22}\in\mathbb{R}^{(n-p)\times(n-p)}$.
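The lemma can be verified numerically: for a full-row-rank $C$ with SVD $C=U[S\;\;0]V^{T}$ and a symmetric $X$ with the stated block-diagonal structure in the $V$-coordinates, the matrix $\hat{X}=U S X_{11} S^{-1} U^{T}$ satisfies $CX=\hat{X}C$. A sketch with randomly generated data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 4
C = rng.standard_normal((m, n))      # full row rank with probability one

U, s, Vt = np.linalg.svd(C)          # C = U [S 0] V^T (full matrices)
S, V = np.diag(s), Vt.T

# X = V diag(X11, X22) V^T with symmetric blocks X11 (m x m), X22 (n-m x n-m)
X11 = rng.standard_normal((m, m)); X11 = X11 @ X11.T + np.eye(m)
X22 = rng.standard_normal((n - m, n - m)); X22 = X22 @ X22.T + np.eye(n - m)
X = V @ np.block([[X11, np.zeros((m, n - m))],
                  [np.zeros((n - m, m)), X22]]) @ V.T

X_hat = U @ S @ X11 @ np.linalg.inv(S) @ U.T   # then C X = X_hat C
assert np.allclose(C @ X, X_hat @ C)
```

This identity is what allows the nonlinear products of gains and Lyapunov variables in (20) to be replaced by linear terms in the synthesis of Theorem 2.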

3. Main Results

In this section, we provide the asynchronous repetitive controller design approach for the system given by (2). First, we present a sufficient condition in the following theorem to ensure that the closed-loop system (14) is mean square stable.

Theorem 1. Let and , , denote positive scalars. Define the following matrices:

The closed-loop asynchronous repetitive control system (14) is mean square stable if there exist matrices with appropriate dimensions such that the following matrix inequality (20) holds for all system and detected mode pairs:

Proof. For any system mode, we construct a stochastic Lyapunov functional for the closed-loop system (14) as in (21), whose component functionals are defined accordingly. Letting $\Delta V$ denote the one-step difference of this functional along the trajectories of (14), we obtain (23). In the following developments, the expectation operator symbol is omitted from the right-hand side of some expressions for presentation clarity. Based on the result of [41], utilizing (23) along the trajectories of (14), we simplify the difference terms as in (25) and (26). Combining (25) and (26) and invoking the matrix inequality (20), for any mode we obtain the bound (28), where $\lambda_{\min}(\cdot)$ denotes the minimal eigenvalue of its argument. From (28), summing over any positive integer horizon shows that the expected cumulative squared norm of the closed-loop state is bounded; letting the horizon grow, this implies that the expected squared norm of the state converges to zero. Therefore, the closed-loop asynchronous repetitive control system (14) is mean square stable in the sense of Definition 1.

Remark 2. In contrast to the repetitive control designs presented in the recent literature, Theorem 1 presents a clear framework to ensure mean square stability for discrete-time Markovian switching systems with an estimator-based asynchronous repetitive controller. Further, the above approach employs a fully mode-dependent Lyapunov functional (21) in arriving at the result in Theorem 1, which leads to a less conservative result compared with the approach given in [40], which includes a mode-independent term.
The condition given in (20) contains nonlinear product terms of unknown gain matrices and is thus a nonlinear matrix inequality, which is generally difficult to solve since no established methods are currently available in the literature for such inequalities. In this paper, the full row rank assumption on the system output matrix, its SVD, and Lemma 1 are utilized to decompose the nonlinear product terms and solve the inequality.
The following theorem provides a sufficient condition that ensures mean square stability of the closed-loop asynchronous repetitive control system (14) in terms of an LMI and subsequently provides a method to synthesize the gain matrices via parametric matrices.

Theorem 2. For the given positive scalars and under Assumption 1, suppose there exist matrices with appropriate dimensions, together with auxiliary block matrices structured as described below, such that the LMI (32) is satisfied for every pair of system and detected modes. Then the closed-loop asynchronous repetitive control system (14) is mean square stable. Furthermore, the estimator parametric matrices and the related asynchronous repetitive control parametric matrices are given by (36), in terms of the SVD factors defined in Assumption 1.

Proof. The proof of this theorem follows from Theorem 1. By the Schur complement, (20) is equivalent to the matrix inequality (37). Pre-multiplying and post-multiplying (37) with a suitable block-diagonal transformation matrix, we obtain the matrix inequality (40). For any mode, partitioning the block matrices in (40) into their first and second block rows, it can be found that (40) is equivalent to (42). On the other hand, from Lemma 1, there exist matrices satisfying the structural constraint (43). By utilizing the SVD of the system output matrix given in Assumption 1, equation (43) can be expressed in terms of the SVD factors, which yields the corresponding parametrization. Moreover, for any system and detected mode, we select the parametric matrices as in (46) and define the remaining variables as in (47). Then, by substituting (43) and (47) into (42) and simplifying, we obtain the LMI (32). Concomitantly, the estimator parameter matrices and the related asynchronous repetitive control parametric matrices can be solved from (46) and (47).
Hence, if a solution to the LMI (32) exists, the closed-loop asynchronous repetitive control system (14) is mean square stable and the parametric matrices that need to be designed are obtained by using (36).

Remark 3. From (9) and (36), the design matrices can be obtained, where any required inverse is understood as a generalized inverse. Although the design matrices associated with the estimator and controller appear to depend not only on the detected mode but also on the system mode, we impose additional constraints in the process of solving LMI (32) to enforce that all system modes lead to common design matrices; the resulting matrices are then independent of the system mode. Under such circumstances, the design matrices coincide with the design structure described in (5) and (7).

Remark 4. Notice that if the relevant system matrices are independent of the system mode, i.e., they are constant matrices under all modes, then the additional constraints discussed in Remark 3 to render the gains independent of the system mode are no longer needed. However, such a requirement on the system matrices reduces the applicability of the approach to a wider class of switched systems.

Remark 5. Theorem 2 provides an LMI-based sufficient condition for mean square stability of the closed-loop asynchronous repetitive control system (14). One can use efficient interior-point methods with polynomial-time complexity to determine whether the LMI is feasible; we have used the MATLAB LMI Toolbox to solve the LMI (32). The feasibility of the LMI (32) is facilitated by the tuning parameters, which can be utilized to adjust the relative contributions of the control and learning parts, as described for deterministic systems in [32].

Remark 6. By taking into consideration both the control input and the tracking error, a cost functional can be defined that penalizes both quantities. The optimal values of the tuning parameters can then be obtained by solving the constrained stochastic optimization problem (51), which provides a framework for finding optimal values of the adjustable parameters.
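The optimization in (51) can be approximated by a direct search over the tuning parameters, with each candidate scored by a Monte Carlo estimate of the expected cost. The sketch below uses a synthetic surrogate cost in place of simulating the closed loop (14); every name and the two-parameter grid are illustrative assumptions.

```python
import numpy as np

def expected_cost(alpha, beta, rng, runs=200):
    """Stand-in for E[J(alpha, beta)]. In the actual design this would
    simulate the closed-loop system over many Markov sample paths and
    average the cost; here a noisy convex surrogate plays that role."""
    noise = rng.standard_normal(runs) * 0.01
    return (alpha - 0.3) ** 2 + (beta - 0.7) ** 2 + noise.mean()

rng = np.random.default_rng(2)
grid = np.linspace(0.1, 1.0, 10)
best = min(((a, b) for a in grid for b in grid),
           key=lambda ab: expected_cost(*ab, rng))
```

A coarse grid followed by local refinement is usually adequate here, since each LMI solve and closed-loop simulation is cheap relative to the dimensionality of the parameter space.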

4. Numerical Example and Simulations

In this section, we consider a second-order RLC circuit system as shown in Figure 2, which can be modeled as a discrete-time linear Markovian switching system (2).

By choosing the inductor current and the capacitor voltage as state variables, u as the input, and the load voltage as the output, the state-space representation of this circuit system is obtained.

Because of environmental changes and uncertainties, one can expect fluctuations in the circuit parameters. We model the underlying system with these parameter fluctuations as a Markov jump system, with the circuit parameters taking mode-dependent values.

Upon discretization of these governing equations (zero-order hold approximation with a fixed sampling period), we obtain a discrete-time model in the form of the linear Markovian switching system (2).
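The circuit's matrices are not recoverable from this copy, but the zero-order-hold step itself can be sketched. The exact ZOH pair $(A_d, B_d)$ comes from the augmented-matrix identity $\exp\!\big(\begin{bmatrix}A & B\\ 0 & 0\end{bmatrix}T_s\big)=\begin{bmatrix}A_d & B_d\\ 0 & I\end{bmatrix}$; the Taylor-series matrix exponential below is only to keep the sketch dependency-free (`scipy.linalg.expm` would normally be used).

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Plain Taylor-series matrix exponential; adequate for the small,
    well-scaled matrices in this sketch."""
    E = P = np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k
        E = E + P
    return E

def zoh_discretize(A, B, Ts):
    """Exact zero-order-hold discretization of x' = A x + B u."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    E = expm_taylor(M * Ts)
    return E[:n, :n], E[:n, n:]   # Ad, Bd
```

For the scalar system x' = −x + u with Ts = 0.1, this returns Ad = e^{−0.1} and Bd = 1 − e^{−0.1}, matching the textbook ZOH formulas.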

We assume that the system mode and detected mode Markov chains each have three modes of operation. The transition probability matrix π and the mode detection probability matrix µ are chosen correspondingly.

The evolution of the system and controller/estimator modes is illustrated in Figures 3 and 4, respectively.

For each mode, the system parameters are chosen accordingly.

The periodic reference signal is generated via sampling of the output of a linear time-invariant (LTI) exosystem. The period of the continuous-time periodic reference signal is 4 seconds; the number of samples in each period of the sampled reference is then the ratio of this period to the sampling period.
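The sampling period is not recoverable from this copy; assuming a hypothetical Ts = 0.05 s, the 4-second reference gives T = 80 samples per period, and sampling a harmonic exosystem output reduces to evaluating a sinusoid on the sample grid:

```python
import numpy as np

Ts = 0.05        # hypothetical sampling period (not given in this copy)
T_period = 4.0   # period of the continuous-time reference, in seconds
T = int(round(T_period / Ts))       # samples per reference period

omega = 2 * np.pi / T_period        # exosystem natural frequency
k = np.arange(3 * T)
r = np.sin(omega * k * Ts)          # sampled T-periodic reference
```

Because ω·T·Ts = 2π exactly, the sampled sequence is exactly T-periodic, which is the property the repetitive controller's internal model relies on.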

As stated in Remark 6, we can choose a performance index to seek the best overall control and learning performance; solving the associated stochastic optimization problem yields the tuning parameter values used in the simulations. By utilizing Theorem 2, we then directly obtain the design matrices.

With the chosen initial values for the plant state, the estimator state, and the repetitive controller, the evolution of the system state and the estimation error are plotted in Figures 5 and 6, respectively. The trajectories of the reference signal and the system output are shown in Figure 7. It can be observed that the overall closed-loop system achieves satisfactory state estimation and reference tracking performance, which demonstrates the effectiveness of the design scheme proposed in Section 3.

Remark 7. Based on the numerical simulation results, the asynchronous repetitive control method proposed in this paper has the following advantages over the methods provided in [32–39]: (1) the proposed method does not require the controller or estimator to switch synchronously with the plant; (2) the proposed asynchronous repetitive control method is more general, in the sense that a variety of more restrictive cases can be recovered from it. For example, when the detector reproduces the system mode exactly, the asynchronous repetitive control method reduces to the synchronous mode-dependent case; when the mode detector has only one mode, it reduces to the asynchronous mode-independent case; and when, in addition, the plant itself has a single mode, it reduces to the synchronous mode-independent case, i.e., the standard form of repetitive control.

5. Conclusion

In this work, we provided an asynchronous repetitive control strategy for discrete-time Markovian switching systems. A hidden Markov model was adopted to describe the asynchronous phenomenon between the system modes and the controller modes. With a state-estimator-based asynchronous repetitive controller, sufficient conditions for mean square stability of the closed-loop stochastic system were derived by using basic results from stochastic analysis and matrix inequalities. With the proposed approach, all the design matrices can be obtained by solving a set of linear matrix inequalities. To illustrate the approach and its feasibility, numerical simulations on a circuit system were presented. In this paper, we have illustrated the approach using a simple numerical example; one potential future topic is to apply the formulation to an engineering application and conduct numerical and experimental investigations, and we plan to collaborate with practicing engineers to pursue this. Another possible direction for future work is to develop both necessary and sufficient conditions for such systems, which would also open up more concrete opportunities for applying the method to practical systems.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant no. 61903296), the Natural Science Basic Research Plan in Shaanxi Province of China (Grant no. 2018ZDXM-GY-169), the Key Project of the Natural Science Basic Research Plan in Shaanxi Province of China (Grant no. 2019ZDLGY18-03), the Thousand Talents Plan of Shaanxi Province for Young Professionals, and the Science and Technology Planning Project of Xi'an Beilin District (Grant no. GX1919).

References

  1. O. L. V. Costa and M. D. Fragoso, "Discrete-time LQ-optimal control problems for infinite Markov jump parameter systems," IEEE Transactions on Automatic Control, vol. 40, no. 12, pp. 2076–2088, 1995.
  2. Y. Kang, Z. Li, Y. Dong, and H. Xi, "Markovian-based fault-tolerant control for wheeled mobile manipulators," IEEE Transactions on Control Systems Technology, vol. 20, no. 1, pp. 266–276, 2012.
  3. Z. Liu, S. Miao, Z. Fan, and J. Han, "Markovian switching model and non-linear DC modulation control of AC/DC power system," IET Generation, Transmission & Distribution, vol. 11, no. 10, pp. 2654–2663, 2017.
  4. L. E. Svensson and N. Williams, "Optimal monetary policy under uncertainty: a Markov jump-linear-quadratic approach," Review, vol. 90, no. 4, pp. 275–293, 2008.
  5. Q. Zhang and J.-F. Zhang, "Distributed parameter estimation over unreliable networks with Markovian switching topologies," IEEE Transactions on Automatic Control, vol. 57, no. 10, pp. 2545–2560, 2012.
  6. G. Chen, J. Xia, and G. Zhuang, "Delay-dependent stability and dissipativity analysis of generalized neural networks with Markovian jump parameters and two delay components," Journal of The Franklin Institute, vol. 353, no. 9, pp. 2137–2158, 2016.
  7. J. Dai and G. Guo, "Event-based consensus for second-order multi-agent systems with actuator saturation under fixed and Markovian switching topologies," Journal of The Franklin Institute, vol. 354, no. 14, pp. 6098–6118, 2017.
  8. Y. Wang, L. Cheng, W. Ren, Z.-G. Hou, and M. Tan, "Seeking consensus in networks of linear agents: communication noises and Markovian switching topologies," IEEE Transactions on Automatic Control, vol. 60, no. 5, pp. 1374–1379, 2015.
  9. X. Liu, X. Yu, G. Ma, and H. Xi, "On sliding mode control for networked control systems with semi-Markovian switching and random sensor delays," Information Sciences, vol. 337-338, pp. 44–58, 2016.
  10. X. Liu, G. Ma, P. R. Pagilla, and S. S. Ge, "Dynamic output feedback asynchronous control of networked Markovian jump systems," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018.
  11. S. Pan, J. Sun, and S. Zhao, "Stabilization of discrete-time Markovian jump linear systems via time-delayed and impulsive controllers," Automatica, vol. 44, no. 11, pp. 2954–2958, 2008.
  12. Z. Wang, Y. Liu, and X. Liu, "Exponential stabilization of a class of stochastic system with Markovian jump parameters and mode-dependent mixed time-delays," IEEE Transactions on Automatic Control, vol. 55, no. 7, pp. 1656–1662, 2010.
  13. P. Bolzern, P. Colaneri, and G. De Nicolao, "Stochastic stability of positive Markov jump linear systems," Automatica, vol. 50, no. 4, pp. 1181–1187, 2014.
  14. X. Liu, X. Yu, X. Zhou, and H. Xi, "Finite-time H∞ control for linear systems with semi-Markovian switching," Nonlinear Dynamics, vol. 85, no. 4, pp. 2297–2308, 2016.
  15. X. Liu, G. Ma, X. Jiang, and H. Xi, "H∞ stochastic synchronization for master-slave semi-Markovian switching system via sliding mode control," Complexity, vol. 21, no. 6, pp. 430–441, 2016.
  16. W. Xie and Q. Zhu, "Self-triggered state-feedback control for stochastic nonlinear systems with Markovian switching," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2018.
  17. M. Sathishkumar, R. Sakthivel, F. Alzahrani, B. Kaviarasan, and Y. Ren, "Mixed H∞ and passivity-based resilient controller for nonhomogeneous Markov jump systems," Nonlinear Analysis: Hybrid Systems, vol. 31, pp. 86–99, 2019.
  18. Z. Wang and H. Wu, "Global synchronization in fixed time for semi-Markovian switching complex dynamical networks with hybrid couplings and time-varying delays," Nonlinear Dynamics, vol. 95, no. 3, pp. 2031–2062, 2019.
  19. H. S. F. Li and S. Xu, "Fuzzy-model-based control for Markov jump nonlinear slow sampling singularly perturbed systems with partial information," IEEE Transactions on Fuzzy Systems, vol. 27, no. 10, pp. 1952–1962, 2019.
  20. F. Li, S. Xu, H. Shen, and Q. Ma, "Passivity-based control for hidden Markov jump systems with singular perturbations and partially unknown probabilities," IEEE Transactions on Automatic Control, 2019.
  21. R. Kavikumar, R. Sakthivel, O. M. Kwon, and B. Kaviarasan, "Reliable non-fragile memory state feedback controller design for fuzzy Markov jump systems," Nonlinear Analysis: Hybrid Systems, vol. 35, Article ID 100828, pp. 1–17, 2020.
  22. S. Shi, Z. Shi, and Z. Fei, "Asynchronous control for switched systems by using persistent dwell time modeling," Systems & Control Letters, vol. 133, Article ID 104523, pp. 1–8, 2019.
  23. S. Shi, Z. Fei, P. Shi, and C. K. Ahn, "Asynchronous filtering for discrete-time switched T-S fuzzy systems," IEEE Transactions on Fuzzy Systems, 2019.
  24. O. L. D. V. Costa, M. D. Fragoso, and M. G. Todorov, "A detector-based approach for the H2 control of Markov jump linear systems with partial information," IEEE Transactions on Automatic Control, vol. 60, no. 5, pp. 1219–1234, 2015.
  25. Z.-G. Wu, P. Shi, Z. Shu, H. Su, and R. Lu, "Passivity-based asynchronous control for Markov jump systems," IEEE Transactions on Automatic Control, vol. 62, no. 4, pp. 2020–2025, 2017.
  26. A. de Oliveira and O. Costa, "H∞-filtering for discrete-time hidden Markov jump systems," International Journal of Control, vol. 90, no. 3, pp. 599–615, 2017.
  27. J. Song, Y. Niu, and Y. Zou, "Asynchronous sliding mode control of Markovian jump systems with time-varying delays and partly accessible mode detection probabilities," Automatica, vol. 93, pp. 33–41, 2018.
  28. N. O. Pérez-Arancibia, T.-C. Tsao, and J. S. Gibson, "A new method for synthesizing multiple-period adaptive–repetitive controllers and its application to the control of hard disk drives," Automatica, vol. 46, no. 7, pp. 1186–1195, 2010.
  29. S.-L. Chen and T.-H. Hsieh, "Repetitive control design and implementation for linear motor machine tool," International Journal of Machine Tools and Manufacture, vol. 47, no. 12-13, pp. 1807–1816, 2007.
  30. Y. Li and Q. Xu, "Design and robust repetitive control of a new parallel-kinematic XY piezostage for micro/nanomanipulation," IEEE/ASME Transactions on Mechatronics, vol. 17, no. 6, pp. 1120–1132, 2012.
  31. K. Zhou, D. Wang, B. Zhang, Y. Wang, J. Ferreira, and S. de Haan, "Dual-mode structure digital repetitive control," Automatica, vol. 43, no. 3, pp. 546–554, 2007.
  32. M. Wu, L. Zhou, and J. She, "Design of observer-based robust repetitive-control system," IEEE Transactions on Automatic Control, vol. 56, no. 6, pp. 1452–1457, 2011.
  33. C. Hu, B. Yao, Z. Chen, and Q. Wang, "Adaptive robust repetitive control of an industrial biaxial precision gantry for contouring tasks," IEEE Transactions on Control Systems Technology, vol. 19, no. 6, pp. 1559–1568, 2011.
  34. L. Zhou, J.-H. She, and M. Wu, "Design of a discrete-time output-feedback based repetitive-control system," International Journal of Automation and Computing, vol. 10, no. 4, pp. 343–349, 2013.
  35. Z. Shao, S. Huang, and Z. Xiang, "Robust repetitive control for a class of linear stochastic switched systems with time delay," Circuits, Systems, and Signal Processing, vol. 34, no. 4, pp. 1363–1377, 2015.