International Journal of Aerospace Engineering
Volume 2018, Article ID 4865745, 12 pages
https://doi.org/10.1155/2018/4865745
Research Article

Consensus Control of Time-Varying Delayed Multiagent Systems with High-Order Iterative Learning Control

1Equipment Management and Unmanned Aerial Vehicle Engineering College, Air Force Engineering University, Xi’an 710051, China
2Theory Training Department, Air Force Harbin Flight Academy, Harbin 150001, China

Correspondence should be addressed to Xiuxia Sun; gcyxsxx@126.com

Received 17 March 2018; Revised 30 May 2018; Accepted 20 June 2018; Published 5 August 2018

Academic Editor: Giovanni Palmerini

Copyright © 2018 Xiongfeng Deng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We address the consensus control problem of time-varying delayed multiagent systems with directed communication topology. The model of each agent includes a time-varying nonlinear dynamic and an external disturbance, where the time-varying nonlinear term satisfies the global Lipschitz condition and the disturbance term satisfies a norm-bounded condition. An improved control protocol, that is, a high-order iterative learning control scheme, is applied to cope with the consensus tracking problem, where the desired trajectory is generated by a virtual leader agent. Through theoretical analysis, the improved control protocol guarantees that the tracking errors converge asymptotically to a sufficiently small interval under the given convergence conditions. Furthermore, if the bounds of the initial state difference and the disturbances tend to zero, the bound of the tracking errors also tends to zero. Finally, several cases are provided to illustrate the effectiveness of the theoretical analysis.

1. Introduction

In the past few decades, cooperative control of multiagent systems has attracted considerable attention. One of the main reasons is its wide application in various fields, such as the military, aerospace, and industrial fields [1]. The research directions include formation control, containment control, tracking control, swarm control, and flocking control [2–6]. Generally speaking, the fundamental problem in cooperative control is consensus, where the objective is to design an appropriate control law so that the states of a group of agents eventually reach an agreement.

In the process of studying the cooperative control of multiagent systems, diverse control strategies have been proposed in the literature. For example, the output regulation method and a distributed containment control law for the containment control of linear multiagent systems were investigated in [7, 8], while an advanced clustering with frequency sweeping method was used for general linear multiagent systems with communication and input delays in [9]. Nevertheless, it should be emphasized that almost all practical systems involve nonlinear dynamics. In [10, 11], a distributed adaptive control protocol and a pinning control algorithm for the consensus of a class of second-order leader-following nonlinear multiagent systems with directed topology were considered. In [12], a distributed cooperative method was applied to deal with the dynamic task planning of multiple satellites. Additionally, neural network algorithms have also been introduced to study the consensus problem of nonlinear multiagent systems [13, 14].

In practice, time delays always exist due to physical factors of the sensors. Therefore, it is of great significance to study time-delay systems. In [15], a distributed consensus control protocol with decaying gains for linear discrete-time multiagent systems with delays and noises was considered, and the cases of communication delay and input delay were studied in [16, 17]. For the problem of time-varying delays, Xiao and Wang [18] investigated the state consensus problem of multiagent systems with bounded time-varying communication delays, and Chen et al. [19] presented a robust control law for a class of time-varying delayed multiagent systems in a noisy environment.

As is well known, iterative learning control is based on the idea that the performance of a system that performs the same task repeatedly can be improved by learning from previous iterations [20]. In early works, iterative learning control was applied to various problems. In [21], an iterative learning control scheme was proposed to make the tracking error trajectory converge to a prespecified error trajectory. In [22], an adaptive iterative learning updating law was applied to solve the tracking problem of high-precision motion systems. In [23], the packet dropout problem of nonlinear systems with iterative learning control was addressed, and Wu et al. [24] solved high-precision satellite attitude tracking control by using iterative learning control. Additionally, a high-order iterative learning identification scheme was used to extract a projectile's aerodynamic drag coefficient curve from radar-measured velocity data in [25], and the stability of high-order iterative learning control for discrete-time systems and nonlinear switched systems was studied in [26, 27], respectively. In recent years, the application of iterative learning control has been extended to multiagent systems. In [28], a D-type iterative learning control scheme was presented to deal with the tracking problem of multiagent systems. In [29], a distributed adaptive fuzzy iterative learning control algorithm was designed for the coordination control problem of leader-following multiagent systems with unknown dynamics and nonrepeatable input disturbances, and Meng and Jia [30] developed a finite-time consensus control protocol with iterative learning control for a class of multiagent systems. In addition, iterative learning control has also been used to achieve formation control of multiagent systems [31, 32].

From the above literature, it is not difficult to find that few results exist on the consensus problem of time-varying delayed multiagent systems with iterative learning control. Moreover, the application of high-order iterative learning control to multiagent systems is also seldom described in the existing papers. These facts inspired the current study. In this work, we turn our attention to the consensus control of time-varying delayed multiagent systems with high-order iterative learning control. The main contributions of this work are summarized as follows: (i) a class of time-varying delayed multiagent systems with directed communication topology is considered in this paper. The dynamics of each agent contain a time-varying nonlinear term and an external disturbance, where the time-varying nonlinear term satisfies the global Lipschitz condition and the disturbance term satisfies a norm-bounded condition; (ii) different from [26, 27], an improved high-order iterative learning control scheme is applied to guarantee tracking error convergence in the iteration domain. Furthermore, we show that the bound of the tracking errors tends to zero when the bounds of the initial state difference and the disturbances tend to zero; and (iii) it is proven that the improved control protocol can effectively handle the consensus problem of multiagent systems with time-varying delays and external disturbances.

The rest of this paper is organized as follows. Some necessary preliminaries are introduced in Section 2. The problem formulation and the main results on time-varying delayed multiagent systems with high-order iterative learning control are discussed in Sections 3 and 4, respectively. In Section 5, the effectiveness of the proposed control protocol is illustrated by simulations, and conclusions are drawn in Section 6.

2. Preliminaries

In this section, the graph theory, definition, and lemma that will be used in this paper are introduced briefly.

2.1. Graph Theory

For a multiagent system with agents, the exchange of information among agents can be modeled as an interaction graph with nodes. Let denote a directed graph with a node set and a directed edge set . The adjacency matrix of the directed graph is defined by , where if and only if and otherwise. Moreover, it is assumed that . For agent , the set of neighbors is denoted by . The Laplacian matrix is denoted by , where with .
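As a numerical illustration of these graph-theoretic objects, the Laplacian L = D − A of a small directed graph can be computed as follows. The three-agent adjacency matrix here is hypothetical, not the paper's topology:

```python
import numpy as np

# Hypothetical adjacency matrix A for a directed graph with three agents:
# a_ij > 0 iff agent i receives information from agent j, and a_ii = 0.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

# In-degree matrix D = diag(sum_j a_ij) and Laplacian L = D - A.
D = np.diag(A.sum(axis=1))
L = D - A

# Every row of L sums to zero by construction, a defining
# property of the graph Laplacian used throughout consensus analysis.
assert np.allclose(L.sum(axis=1), 0.0)
```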

In this work, an augmented graph is described as which consists of agents and one virtual leader agent. The communication between agents and the virtual agent is defined by the matrix . If agent can obtain the information of the virtual agent, then and otherwise.

2.2. Definition and Lemma

Definition 1 (see [33]). For a function , the λ-norm is defined by The following property of the λ-norm holds.

Property 1. For functions , if , then we have , where .
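Definition 1 refers to the λ-norm, which is conventionally defined for a function f on [0, T] as the supremum over t of e^{-λt} times |f(t)|. A minimal numerical sketch under that conventional definition (the grid-based approximation here is an assumption for illustration):

```python
import numpy as np

def lambda_norm(f, T, lam, n=1000):
    """Approximate the lambda-norm sup_{t in [0,T]} e^{-lam*t} * |f(t)|
    on a uniform grid (a numerical sketch of the conventional definition)."""
    t = np.linspace(0.0, T, n)
    return np.max(np.exp(-lam * t) * np.abs(f(t)))

# Example: f(t) = e^t with lam = 2 gives the weighted value e^{-2t} * e^t
# = e^{-t}, which is maximized at t = 0, so the lambda-norm equals 1.
val = lambda_norm(np.exp, T=1.0, lam=2.0)
print(val)
```

The exponential weight is what lets a sufficiently large λ turn the integral operators arising in the convergence proof into contractions.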

Lemma 1 (see [34]). For a real positive series satisfying , where and , if exists, then .

3. Construction of the Control Protocol for the Consensus Problem

In this section, we formulate the consensus problem and construct the consensus control protocol. In addition, some necessary assumptions are provided.

Consider a multiagent system consisting of identical agents with time-varying delays and nonlinear dynamics; the dynamics of agent at iteration are described by where , , and are the state, control input, and output of the system, respectively; is a bounded disturbance; the time and is given; are time delays; and . The functions and are piecewise continuous in , and is differentiable in and , with and . In addition, it is assumed that if .

The matrix form of (6) is expressed as where , , , , , , and .

The dynamics of the virtual leader agent are given as which generate a given bounded desired trajectory ; that is, there exist a unique bounded input and such that when , the virtual agent has a unique bounded state and output .

Similar to the expression of (7), the matrix form of (8) is expressed as where , , , , , and ; , is dimension unit matrix, and represents the Kronecker product.

Given the desired output trajectory of the virtual agent , the goal is to find a consensus control law such that, as , the outputs of all agents track the desired output trajectory as closely as possible.

According to (7) and (9), the consensus tracking error of agent at the iteration is defined as , and we have where , , , , and .
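The neighbourhood-based consensus tracking error commonly used in leader-following consensus combines the disagreements with neighbors and a leader term. A sketch under that standard form (the names `A`, `b`, `y`, and `yd` are assumptions for illustration, since the paper's equation is not reproduced above):

```python
import numpy as np

def consensus_tracking_errors(y, yd, A, b):
    """Distributed consensus error xi_i = sum_j a_ij (y_j - y_i)
    + b_i (yd - y_i): the standard neighbourhood error for
    leader-following consensus. A is the adjacency matrix and b_i > 0
    iff agent i receives the virtual leader's information."""
    n = len(y)
    xi = np.zeros(n)
    for i in range(n):
        xi[i] = sum(A[i, j] * (y[j] - y[i]) for j in range(n)) \
                + b[i] * (yd - y[i])
    return xi
```

By construction, when every agent's output equals the leader's output, all neighbourhood errors vanish.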

Before introducing a high-order iterative learning control scheme, we give the following assumptions.

Assumption 1. The functions , , and and the partial derivatives and are uniformly globally Lipschitz in on the interval . That is, there exist constants , , and such that where .

Assumption 2. The functions , , , and are uniformly bounded with bounds , , , and which are denoted by

We now present the following improved high-order iterative learning control scheme for the multiagent systems (7) and (9): where , , is the given initial control input, the integer is the order of the iterative learning control scheme, and is a weighting parameter used to prevent large fluctuations of the input at the beginning of the iterative operation. The term may be allowed to vary with the iteration, but it is kept fixed for simplicity in this paper. is a time-varying matrix used to regulate the rate of convergence, which needs to satisfy a certain convergence condition (to be introduced below). As in [25, 27], , , and are learning matrices. In addition, it is assumed that , , and are bounded, with bounds , , and , respectively; for instance,
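The structure of such an N-th order update can be sketched generically as follows. This is a hedged illustration of a PD-type high-order ILC update with a weighting parameter `alpha`, not the paper's exact scheme (14); the gain arrays `w`, `P`, and `G` stand in for the learning matrices:

```python
import numpy as np

def high_order_ilc_update(u_hist, e_hist, de_hist, w, P, G, alpha, u0):
    """Generic N-th order PD-type ILC update (an illustrative sketch,
    not the paper's exact law (14)):

        u_{k+1} = (1 - alpha) * u0
                  + alpha * sum_{r=1}^{N} ( w[r] * u_{k+1-r}
                                            + P[r] * e_{k+1-r}
                                            + G[r] * de_{k+1-r} )

    u_hist, e_hist, de_hist hold the N most recent input, error, and
    error-derivative trajectories (index 0 = most recent iteration).
    """
    correction = sum(w[r] * u_hist[r] + P[r] * e_hist[r] + G[r] * de_hist[r]
                     for r in range(len(u_hist)))
    return (1.0 - alpha) * u0 + alpha * correction

# Toy check on the memoryless plant y = 0.5 * u tracking yd = 1:
# with N = 1, w = P = [1], G = [0], alpha = 1, the error halves
# at every iteration.
u = np.zeros(5)
u0 = np.zeros(5)
yd = np.ones(5)
for _ in range(30):
    e = yd - 0.5 * u
    u = high_order_ilc_update([u], [e], [np.zeros(5)], [1.0], [1.0], [0.0], 1.0, u0)
print(np.max(np.abs(yd - 0.5 * u)))  # tracking error shrinks toward zero
```

Increasing N lets the update blend several past iterations, which is the sense in which the high-order scheme uses more learning memory than a first-order one.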

Remark 1. If in (14), the iterative learning control scheme may be considered a PID-type control law similar to the conventional PID controller; and if , it can be seen as consisting of dual or multiple PID controllers. Since more past learning errors are taken into account, the high-order iterative learning control scheme can achieve a better control effect than a first-order one. Generally speaking, the higher the order of the control scheme, the better the expected iterative learning control performance. However, in practice the choice of is normally not more than 3.

Remark 2. As with the parameters of the traditional PID algorithm, the choice of , , and is usually based on a trial-and-error method. Considering that the purpose of is to increase the convergence speed of the system, that of is to eliminate the steady-state error of the system, and that of is to suppress oscillation and improve the stability of the system, their weights can be tuned according to the control requirements. In addition, control approaches such as fuzzy adaptive methods, neural network algorithms, and particle swarm optimization can also be applied to obtain the optimal learning matrices.

4. Convergence Analysis

In this section, we will analyze the convergence of time-varying delayed multiagent systems (7) and (9) with a high-order iterative learning control scheme (14). The convergence conditions are shown in the following theorem.

For clarity in the remaining discussion, the variable will be omitted, and the following notation will be introduced: where represents the function concerned.

The partial derivatives of are expressed by

Hence, it is easy to see that , where is the Lipschitz constant of the function on the interval . We now state the main result of this paper.

Theorem 1. Let the multiagent systems (7) and the virtual agent dynamics (9) satisfy Assumptions 1 and 2, and suppose that the initial state difference is bounded. If there exist and positive numbers satisfying and , then the bounds of the tracking errors and converge asymptotically to a sufficiently small interval as for . Furthermore, if the bounds of the initial state difference and the disturbances tend to zero, the bound of the tracking errors also tends to zero.

Proof. According to the multiagent systems (7) and (9), we have and then where .

Also, where and .

Substituting (11) into (14) gives

Furthermore, substituting (18), (20), (21), and (22) into (23) yields

Considering the condition (19) and taking norms on both sides of (24), we have where

Writing the integral expression for and taking norms, we get

In light of Definition 1 and the fact that , we have

Considering (28) and taking the norm on both sides of (27), we obtain where

Obviously, for a sufficiently large , and then we have

Similarly, taking the norm on both sides of (25), we have where

Substituting (31) into (32) yields where

Considering the condition (19) again, we have for a sufficiently large , and then . Hence, according to Lemma 1, it follows that

Furthermore, we also have

From (31), (36), and (38), it can be observed that the tracking errors , , and are all related to the initial state error and the disturbance bound . Furthermore, if , , and tend to zero, the bound of the tracking errors also tends to zero through the iterative operations of the iterative learning control algorithm. This completes the proof.

Remark 3. Under the convergence conditions (18) and (19), one finds that the convergence of the iterative learning control is not influenced by the disturbances, the initial state difference, or even the choice of . However, the bound of the final tracking errors is directly influenced by those factors. In addition, the iterative learning control algorithm, when used alone, cannot eliminate disturbances or the initial state difference, but it can achieve perfect tracking once those factors no longer appear in subsequent iterations.

5. Simulation Analysis

In this section, we provide several cases to illustrate the effectiveness of the high-order iterative learning control scheme (14). Consider a time-varying delayed multiagent system with five agents labeled 1, 2, 3, 4, and 5. The virtual leader agent is labeled 0. The communication graph with the five agents and the virtual agent is shown in Figure 1.

Figure 1: Communication topology graph.

The adjacency matrix is

The dynamics of agent at the iteration are described as where and .

The dynamics of a virtual leader agent are given as where , and (41) indicates that the desired output trajectory is obtained by a desired input and state .

The parameters are set as follows: the initial position and velocity of five agents are given as and , respectively; the initial input ; the simulation time and time-varying delays and ; the initial states of a virtual agent are and ; the desired input ; the term ; and .

According to (40), we have

In order to analyze the effectiveness of the high-order iterative learning control scheme (14), the following cases are discussed. It should be noted that the parameters , , , and are obtained by the trial-and-error method.
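The overall simulation workflow (roll out the delayed dynamics over the time interval, then update the input across iterations) can be sketched on a scalar toy model. All numbers below are illustrative assumptions, not the paper's parameters, and the learning law is a simple first-order D-type update rather than scheme (14):

```python
import numpy as np

# Toy setup: one scalar agent x'(t) = -x(t - tau) + u(t) tracking
# xd(t) = sin(t) over [0, T], improved across iterations by a
# first-order D-type ILC update u <- u + gamma * de/dt.
T, dt, tau = 2.0, 0.01, 0.1
steps = int(T / dt)
d = int(tau / dt)                 # delay expressed in time steps
t = np.arange(steps + 1) * dt
xd = np.sin(t)                    # desired trajectory

def run_agent(u):
    """Forward-Euler rollout of the delayed agent for one iteration,
    with zero history x(t) = 0 for t <= 0."""
    x = np.zeros(steps + 1)
    for k in range(steps):
        x_del = x[k - d] if k >= d else 0.0
        x[k + 1] = x[k] + dt * (-x_del + u[k])
    return x

u = np.zeros(steps + 1)
for _ in range(20):               # iteration domain of the ILC
    e = xd - run_agent(u)
    u = u + 0.8 * np.gradient(e, dt)   # D-type learning update, gain 0.8
print(np.max(np.abs(xd - run_agent(u))))  # max tracking error after learning
```

The error printed after learning is far below the initial error of about 0.91 (the peak of |sin t| on [0, 2]), mirroring how the figures in this section show the tracking error shrinking over iterations.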

Case 1. In this case, , , , , and .

Case 2. In this case, , , , , and .

Case 3. In this case, ; , , and ; , , and ; , , and ; and , , and .

Checking the convergence conditions (18) and (19) in Theorem 1, we get the following.

Case 1. .

Case 2.

Case 3. .

Thus, the convergence conditions (18) and (19) are satisfied.

The simulation results for Cases 1, 2, and 3 at the 50th iteration are shown in Figures 2, 3, 4, and 5, respectively.

Figure 2: State tracking trajectories for Case 1.
Figure 3: State tracking trajectories for Case 2.
Figure 4: State tracking trajectories for Case 3.
Figure 5: Control signal for Case 3.

Figures 2, 3, and 4 show the tracking trajectories of the time-varying delayed multiagent system under the high-order iterative learning control scheme with different orders at the 50th iteration, which indicate that consensus is achieved by using the proposed control protocol. However, compared with Figures 3 and 4, Figure 2 shows the largest fluctuations and the worst control effect. In Figure 4, it is not difficult to see that the control effect is better than in Figures 2 and 3. The results of Figures 2, 3, and 4 imply that the higher the order, the better the control effect. Figure 5 gives the control input curves of the five agents for Case 3 and shows that the inputs of the agents converge asymptotically to the desired input value.

Case 4. Comparison and analysis with another control protocol.

In order to illustrate the control effect, another control law is selected for comparison with the proposed control scheme (14) (scheme 1) of this paper. Consider the high-order iterative learning control updating law (7) (scheme 2) in [27], which did not include a specific provision for the first iteration term. Let the order ; the parameter setting is the same as in Case 2 of this paper. Checking the convergence conditions (27) and (28) of Theorem 9 in [27], we obtain , , , and , which means the conditions are satisfied. The simulation results are shown in Figures 6 and 7.

Figure 6: Position tracking trajectories.
Figure 7: Velocity tracking trajectories.

Figures 6 and 7 show the comparison results for position and velocity under the two control schemes. They imply that the consensus of multiagent systems can be achieved with both control laws. However, the control effect of the consensus law of this paper is better than that of the control law of [27].

In addition, the maximum tracking errors with different time delays for are also analyzed. The simulation results are shown in Figures 8 and 9.

Figure 8: Maximum tracking errors of position with different time delays.
Figure 9: Maximum tracking errors of velocity with different time delays.

Figures 8 and 9 give the maximum tracking errors of position and velocity with different time delays. Although the time delays differ, the consensus of the multiagent system with the high-order iterative learning control scheme is still achieved. That is to say, the convergence of the proposed control protocol is not affected by the time delays.

6. Conclusions

In this paper, we have discussed the consensus control problem of time-varying delayed multiagent systems with directed communication topology. An improved high-order iterative learning control scheme is applied to solve the consensus tracking problem of multiagent systems. The desired tracking trajectory is produced by a virtual leader agent. Under the given convergence conditions, consensus can be achieved by using the proposed control protocol. The simulation results show that the control effect can be improved by utilizing a high-order iterative learning control scheme and that the higher the order, the better the control effect. In addition, compared with another control law, the proposed control scheme achieves a better control effect. Furthermore, analysis of the maximum state tracking errors of the multiagent system with different time delays shows that the delays do not play a significant role in the convergence of high-order iterative learning control. In our future work, the consensus problem of multiagent systems with control input delays will be considered.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the Aeronautical Science Foundation of China (Grant no. 20155896025).

References

  1. G. Yang, Q. Yang, V. Kapila, D. Palmer, and R. Vaidyanathan, “Fuel optimal manoeuvres for multiple spacecraft formation reconfiguration using multi-agent optimization,” International Journal of Robust and Nonlinear Control, vol. 12, no. 2-3, pp. 243–283, 2002.
  2. R. Dong and Z. Geng, “Consensus for formation control of multi-agent systems,” International Journal of Robust and Nonlinear Control, vol. 25, no. 14, pp. 2481–2501, 2015.
  3. Z. Qiu, S. Liu, and L. Xie, “Distributed constrained optimal consensus of multi-agent systems,” Automatica, vol. 68, pp. 209–215, 2016.
  4. B. Wang, J. Wang, L. Zhang, and B. Zhang, “Robust adaptive consensus tracking for higher-order multi-agent uncertain systems with nonlinear dynamics via distributed intermittent communication protocol,” International Journal of Adaptive Control and Signal Processing, vol. 30, no. 3, pp. 511–533, 2016.
  5. W. Li and W. Shen, “Swarm behavior control of mobile multi-robots with wireless sensor networks,” Journal of Network and Computer Applications, vol. 34, no. 4, pp. 1398–1407, 2011.
  6. R. Olfati-Saber, “Flocking for multi-agent dynamic systems: algorithms and theory,” IEEE Transactions on Automatic Control, vol. 51, no. 3, pp. 401–420, 2006.
  7. Q. Ma, F. L. Lewis, and S. Xu, “Cooperative containment of discrete-time linear multi-agent systems,” International Journal of Robust and Nonlinear Control, vol. 25, no. 7, pp. 1007–1018, 2015.
  8. H. Haghshenas, M. A. Badamchizadeh, and M. Baradarannia, “Containment control of heterogeneous linear multi-agent systems,” Automatica, vol. 54, pp. 210–216, 2015.
  9. L. Zeng and G. D. Hu, “Consensus of linear multi-agent systems with communication and input delays,” Acta Automatica Sinica, vol. 39, no. 7, pp. 1133–1140, 2013.
  10. H. Du, Y. Cheng, Y. He, and R. Jia, “Second-order consensus for nonlinear leader-following multi-agent systems via dynamic output feedback control,” International Journal of Robust and Nonlinear Control, vol. 26, no. 2, pp. 329–344, 2016.
  11. Q. Song, J. Cao, and W. Yu, “Second-order leader-following consensus of nonlinear multi-agent systems via pinning control,” Systems & Control Letters, vol. 59, no. 9, pp. 553–562, 2010.
  12. C. Wang, J. Li, N. Jing, J. Wang, and H. Chen, “A distributed cooperative dynamic task planning algorithm for multiple satellites based on multi-agent hybrid learning,” Chinese Journal of Aeronautics, vol. 24, no. 4, pp. 493–505, 2011.
  13. J. Feng and G. X. Wen, “Adaptive NN consensus tracking control of a class of nonlinear multi-agent systems,” Neurocomputing, vol. 151, pp. 288–295, 2015.
  14. J. Liu, Z. Chen, X. Zhang, and Z. Liu, “Neural-networks-based distributed output regulation of multi-agent systems with nonlinear dynamics,” Neurocomputing, vol. 125, pp. 81–87, 2014.
  15. S. Liu, L. Xie, and H. Zhang, “Distributed consensus for multi-agent systems with delays and noises in transmission channels,” Automatica, vol. 47, no. 5, pp. 920–934, 2011.
  16. Y. Qian, X. Wu, J. Lü, and J.-a. Lu, “Consensus of second-order multi-agent systems with nonlinear dynamics and time delay,” Nonlinear Dynamics, vol. 78, no. 1, pp. 495–503, 2014.
  17. J. Xu, H. Zhang, and L. Xie, “Input delay margin for consensusability of multi-agent systems,” Automatica, vol. 49, no. 6, pp. 1816–1820, 2013.
  18. F. Xiao and L. Wang, “State consensus for multi-agent systems with switching topologies and time-varying delays,” International Journal of Control, vol. 79, no. 10, pp. 1277–1284, 2006.
  19. Y. Chen, J. H. Lü, and X. H. Yu, “Robust consensus of multi-agent systems with time-varying delays in noisy environment,” Science China Technological Sciences, vol. 54, no. 8, pp. 2014–2023, 2011.
  20. D. A. Bristow, M. Tharayil, and A. G. Alleyne, “A survey of iterative learning control,” IEEE Control Systems Magazine, vol. 26, no. 3, pp. 96–114, 2006.
  21. M.-X. Sun and Q.-Z. Yan, “Error tracking of iterative learning control systems,” Acta Automatica Sinica, vol. 39, no. 3, pp. 251–262, 2013.
  22. I. Rotariu, M. Steinbuch, and R. Ellenbroek, “Adaptive iterative learning control for high precision motion systems,” IEEE Transactions on Control Systems Technology, vol. 16, no. 5, pp. 1075–1082, 2008.
  23. X. Bu, F. Yu, Z. Hou, and F. Wang, “Iterative learning control for a class of nonlinear systems with random packet losses,” Nonlinear Analysis: Real World Applications, vol. 14, no. 1, pp. 567–580, 2013.
  24. B. Wu, D. Wang, and E. K. Poh, “High precision satellite attitude tracking control via iterative learning control,” Journal of Guidance, Control, and Dynamics, vol. 38, no. 3, pp. 528–534, 2015.
  25. Y. Chen, C. Wen, J.-X. Xu, and M. Sun, “High-order iterative learning identification of projectile’s aerodynamic drag coefficient curve from radar measured velocity data,” IEEE Transactions on Control Systems Technology, vol. 6, no. 4, pp. 563–570, 1998.
  26. X. Bu, Z. Hou, and F. Yu, “Stability of first and high order iterative learning control with data dropouts,” International Journal of Control, Automation and Systems, vol. 9, no. 5, pp. 843–849, 2011.
  27. X. Bu, F. Yu, Z. Fu, and F. Wang, “Stability analysis of high-order iterative learning control for a class of nonlinear switched systems,” Abstract and Applied Analysis, vol. 2013, Article ID 684642, 13 pages, 2013.
  28. S. Yang, J.-X. Xu, D. Huang, and Y. Tan, “Optimal iterative learning control design for multi-agent systems consensus tracking,” Systems & Control Letters, vol. 69, pp. 80–89, 2014.
  29. J. Li and J. Li, “Adaptive fuzzy iterative learning control with initial-state learning for coordination control of leader-following multi-agent systems,” Fuzzy Sets and Systems, vol. 248, pp. 122–137, 2014.
  30. D. Meng and Y. Jia, “Iterative learning approaches to design finite-time consensus protocols for multi-agent systems,” Systems & Control Letters, vol. 61, no. 1, pp. 187–194, 2012.
  31. Y. Liu and Y. Jia, “An iterative learning approach to formation control of multi-agent systems,” Systems & Control Letters, vol. 61, no. 1, pp. 148–154, 2012.
  32. D. Meng, Y. Jia, J. Du, and J. Zhang, “On iterative learning algorithms for the formation control of nonlinear multi-agent systems,” Automatica, vol. 50, no. 1, pp. 291–295, 2014.
  33. C. Liu, J. Xu, and J. Wu, “On iterative learning control with high-order internal models,” International Journal of Adaptive Control and Signal Processing, vol. 24, no. 9, pp. 731–742, 2010.
  34. G. Heinzinger, D. Fenwick, B. Paden, and F. Miyazaki, “Robust learning control,” in Proceedings of the 28th IEEE Conference on Decision and Control, pp. 436–440, Tampa, Florida, USA, December 1989.