Special Issue: Advanced Control, Navigation, and Signal Processing Methods for Multiple Autonomous Unmanned Systems
A Novel Fast Iterative Learning Control for Linear Discrete Systems with Parametric Disturbance and Measurement Noise
Precision industrial control constantly demands accurate and robust control. For a typical linear system, the error convergence of a conventional iterative learning control strategy is slow. To address this issue, this study develops a fast iterative learning control law: a new PD-type iterative learning control approach, grounded in backward error association and control-parameter rectification, for a class of linear discrete time-invariant (LDTI) systems. We consider a repetitive system subject to parametric disturbance and measurement noise. First, we derive the form of the faster learning law and fully explain how the algorithm's control factors are generated. Then, using the vector method together with spectral radius theory, sufficient conditions for the algorithm's convergence are established in four scenarios: parameters determined without noise, parameters uncertain without noise, parameters determined with measurement noise, and parameters uncertain with measurement noise. The analysis shows that convergence depends on the control law's learning factor, the correction term, the association factor, and the learning interval. Finally, simulation results indicate that the suggested approach achieves faster error convergence than the classical PD algorithm.
Iterative learning control (ILC) [1, 2] is a widely used control method owing to its simple structure, which does not require precise model information. Over a finite interval, it can make the controlled object's performance meet the expected signal after sufficiently many iterations. Because of these characteristics, this learning procedure has been widely employed in industrial and commercial robotics, batch processes in the process industry, aerodynamic systems, traffic control systems, electrical and power systems, and other areas. However, most scholars focus on nonsystem control problems, and research on system iterative learning control problems remains limited [7, 8].
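A minimal sketch of this core idea, using a simple P-type update on a toy static plant (the plant gain, learning gain, and trajectory below are illustrative assumptions, not from the paper):

```python
import numpy as np

def run_trial(u, plant_gain=0.8):
    # Toy static SISO plant: y(t) = plant_gain * u(t) (illustrative only)
    return plant_gain * u

N = 50
y_d = np.sin(np.linspace(0.0, 2.0 * np.pi, N))  # desired trajectory
u = np.zeros(N)                                  # initial control input
gamma = 0.9                                      # learning gain

for k in range(30):                              # repeat the task over trials
    e = y_d - run_trial(u)                       # tracking error of trial k
    u = u + gamma * e                            # P-type learning update

final_error = float(np.max(np.abs(y_d - run_trial(u))))
```

Here the per-trial error shrinks by the factor |1 - plant_gain * gamma| = 0.28, the scalar analogue of the spectral radius conditions derived in Section 3.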
In industrial applications, the controlled system's parameters are usually time-varying, so classical PID control, even combined with other control schemes, is largely fixed and struggles when the system contains uncertain factors [9–11]. In addition, the analysis and design of some existing modern control schemes are complex and difficult. To address these problems, the control algorithm and structure should be simple and easy to implement, and the control scheme should offer nonlinearity, robustness, flexibility, and learning ability. With the rapid development of intelligent control technology for handling the uncertainty and complexity of the controlled object, neural network models and neural network training schemes have been applied to the design of system controllers [13, 14]. For example, Plett discussed how a neural network, acting as a feedforward controller, learns to imitate the inverse of the controlled object. However, neural networks suffer from slow learning and weak generalization, and there is no systematic method for determining their topology. Without timely, stable control and compensation, system noise and random interference appear at the controller's input, which greatly reduces the stability of the adaptive process and seriously degrades control accuracy. Adaptive filtering has therefore been widely developed [16, 17], and neural networks are the most commonly used tool in nonlinear filtering, although they are highly nonlinear in their parameters [18–20].
The scholars mentioned above study model uncertainty in different fields, such as model prediction, system identification, fault detection, motor control, and nonlinear control. Still, no specific control algorithm achieves satisfactorily fast error convergence while accounting for system coupling, uncertainty, time-varying characteristics, measurement noise, and other factors. Adaptive control strategies proposed in [25–27] can compensate to some extent; adaptive control is mainly used to deal with complex nonlinear systems with unknown parameters. H. Tao and his team proposed a novel PD scheme for systems with multiple delays, a point-to-point ILC, and a PD-type iterative learning control algorithm for a class of discrete spatially interconnected systems. Based on Lyapunov stability theory, parameter adaptation laws have been designed to achieve system stabilization and asymptotic tracking of a target trajectory [31, 32]. Both certain nonlinear systems that are linear in their parameters [33, 34] and nonlinear systems with general structures have seen remarkable development. For systems that cannot be modeled or that contain unmodeled dynamics, [36, 37] proposed model-free adaptive control theory. However, these adaptive control methods cannot solve the problem of complete tracking over a finite time interval.
Arimoto first proposed the D-type iterative learning algorithm, and other scholars later proposed P-type, PD-type, PID-type, and higher-order learning algorithms [40–43]. Building on linear learning algorithms, a series of nonlinear learning algorithms, such as Newton-type and secant-type, were proposed, which can significantly accelerate convergence. In classical control theory, system stability and algorithm characteristics are often analyzed in the frequency domain, and iterative learning control is no exception; some scholars analyze and design iterative learning control algorithms from the frequency-domain perspective. In frequency-domain analysis, the convergence conditions of the system can be relaxed from an infinite frequency band to a limited frequency band, which makes the learning control more robust, so frequency-domain analysis methods are widely used in practical applications. For a class of linear systems with disturbances, Norrlof [45, 46] proposed a new learning control method, using frequency-domain analysis to obtain the system's convergence conditions and to analyze how different filter choices affect system stability. The robustness of the iterative learning algorithm still needs further exploration in the sense of practical applications.
Most real systems are nonlinear. Therefore, applying iterative learning control to nonlinear systems has important theoretical significance and a practical foundation. Pi Daoying [47–50] and others have done extensive research on the application of iterative learning to nonlinear systems. One study first analyzed the shortcomings of purely closed-loop and purely open-loop forms and then focused on a class of discrete nonlinear systems. For linear systems, an open-closed-loop P-type iterative learning control algorithm was proposed and its convergence proved; the combined open-closed-loop algorithm was shown to outperform either a single closed-loop or open-loop algorithm. Liu Changliang proposed an open-closed-loop PD-type iterative learning algorithm for general nonlinear discrete systems and proved its convergence.
In the analysis of nonlinear systems, the Lyapunov method is a very important tool; it is widely implemented for analyzing controllers of nonlinear systems and is central to the theoretical analysis and design of nonlinear uncertain systems. Lyapunov's second method is a qualitative method: instead of solving the system equations, it judges the stability of the system directly through a Lyapunov energy function, which greatly simplifies the analysis of nonlinear systems. Inspired by Lyapunov's second method, energy functions for iterative learning control have been studied in both the time domain and the iteration domain [52, 53], providing a new way to design ILC and analyze its convergence in the iteration domain. ILC based on an energy function in the iteration domain has been discussed, and some scholars have developed new robust control methods and adaptive learning control methods on this basis; these can be applied to controller design for nonlinear systems with parametric or nonparametric uncertainty. In recent years, a combined energy function representing the system's energy in both the time domain and the iteration domain has also been applied in iterative learning control. This method ensures that the tracking error converges asymptotically in the iteration domain and remains bounded in the time domain. By exploiting point-to-point tracking performance and making the control input converge monotonically over the entire iteration interval, the scheme suits a class of nonlinear systems that do not satisfy a globally uniform Lipschitz condition.
Through the use and promotion of the energy function method, many new control theories and methods, such as nonlinear optimization methods and inversion design methods, have been applied to iterative learning control as new system design schemes.
The iterative learning control algorithm is essentially a process with two-dimensional characteristics: the time domain direction t and the iterative direction k. These two directions are independent of each other, so the iterative learning control system itself is a two-dimensional system.
Some scholars analyzed the iterative learning control algorithm using the more mature two-dimensional system theory, discussing the stability of iterative learning along the time axis and its convergence along the iteration axis. The stability theory of two-dimensional systems provides a very effective method for designing iterative learning control algorithms and proving their convergence. The Roesser model of two-dimensional system theory has become the most basic model in iterative learning algorithm analysis. Concerning the application of the Roesser model to iterative learning control, Li Xiaodong, Fang Yong [61–65], and others studied linear continuous and linear discrete systems and put forward a large number of attractive theoretical results in the ILC field. Literatures [61–63] analyzed system convergence through two-dimensional system theory for LDTI and LTV systems and theoretically obtained the conditions under which the system achieves complete tracking after one iteration. However, this method cannot be applied directly in practice; instead, corresponding estimated values are used as approximations, and simulation results show that this approximation also achieves good convergence quality and speed. One study used a two-dimensional continuous-discrete model to analyze the performance of linear continuous systems and obtained meaningful academic results. Another addressed linear multivariable discrete systems with uncertainty or varying initial state values; through two-dimensional system analysis, convergence conditions were obtained, so that convergence of the iterative learning control algorithm is guaranteed even when the control system's parameters suffer minor disturbances or the initial state conditions change in each iteration.
Cichy, Galkowski, and their team [66, 67] investigated and designed iterative learning controllers based on linear matrix inequality theory for two-dimensional systems. They analyzed not only the boundary value conditions that make the system stable but also the convergence and robustness of the iterative learning algorithm. Theoretical problems remain in maximum-principle-based optimal control of nonlinear dynamic systems, and a scheme applying the linear matrix inequality method to the stability analysis and controller design of ILC for continuous-discrete systems is still under development.
The iterative learning algorithms in the above research results all have linear structures, and basically all adopt PID-type control methods. Is there a nonlinear learning law that allows the system to converge? If so, can it speed up the convergence of the system? In response to these questions, some scholars have successively proposed nonlinear iterative learning control algorithms.
Tian Senping [68–71] and others analyzed iterative learning algorithms from a geometric perspective and proposed three new nonlinear control algorithms in the form of vector triangles, opening up a new way of thinking for research on iterative learning control algorithms. Togai applied optimization methods to the design of iterative learning control: for a performance index containing a squared error term, the steepest descent, Newton-Raphson, and Gauss-Newton methods yield three different nonlinear learning laws. In addition, nonlinear methods for iterative learning control law design include input penalty term analysis, norm optimization methods [74–76], parameter optimization methods, and so on.
The remainder of the article is organized as follows. Section 2 briefly formulates the problem. Section 3 presents the convergence analysis, the hypervector and spectral radius theory, and the main conditions for error convergence. Section 4 gives a numerical example demonstrating the validity of the proposed algorithm. Finally, Section 5 summarizes the results of this paper.
2. Problem Formulation
To state the problem clearly, we consider a single-input single-output (SISO) linear discrete time-invariant (LDTI) system with repetitive parameter perturbation and measurement noise over a finite period, in which the iteration number, the state, the control input, and the output are denoted as in the equation above; the system matrices are constant matrices of the corresponding dimensions satisfying the stated condition. The remaining terms are the measurement noise of the system and the uncertainty matrices of the state and input at time t, such that
Here, the structure matrices of the uncertain state matrix and the uncertain input matrix are constant matrices of the corresponding dimensions satisfying the stated condition, and the remaining factors are unknown matrices satisfying the given bounds.
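Since the displayed equations are not reproduced here, the following sketch simulates one trial of such a perturbed system using conventional state-space symbols (A, B, C for the nominal matrices, dA, dB for the perturbations, w for the measurement noise); all names and numerical values are illustrative assumptions, not the paper's:

```python
import numpy as np

def simulate_trial(A, B, C, u, dA=None, dB=None, w=None, x0=None):
    # One trial of x(t+1) = (A + dA(t)) x(t) + (B + dB(t)) u(t),
    # y(t) = C x(t) + w(t); dA, dB, w default to zero.
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    y = np.zeros(len(u))
    for t in range(len(u)):
        y[t] = float(C @ x) + (w[t] if w is not None else 0.0)
        At = A + (dA[t] if dA is not None else 0.0)
        Bt = B + (dB[t] if dB is not None else 0.0)
        x = At @ x + Bt * u[t]
    return y

# Illustrative nominal matrices (not the paper's values)
A = np.array([[0.5, 0.1], [0.0, 0.6]])
B = np.array([1.0, 0.5])
C = np.array([1.0, 0.0])
y = simulate_trial(A, B, C, u=np.ones(10))
```

Each iteration of the learning process reruns this trial with an updated input over the same finite horizon.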
In the iterative learning process of system (1), the expected trajectory is set in advance and unchanged over iterations, with a corresponding expected state and a corresponding predicted control input. The following assumptions are made.

Assumption 1. The ideal initial state is assumed: in every iteration, the initial state equals the desired initial state.

Assumption 2. The required trajectory over the whole period is given in advance and is independent of the number of iterations.
Assumption 3. For any given desired value, there exist an expected signal and an expected control such that

Based on system (1), define the corresponding error quantities; the output signal of iteration k at time t can then be represented as follows:

For ease of description, write the above expression in hypervector form by introducing the following hypervector:

The above equation can then be described as

where the remaining term represents an uncertain value determined by the dynamics and uncertain parameters of system (1).
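The hypervector (lifted) form can be sketched as follows: stacking the inputs and outputs of one trial gives a relation Y = G U + d with a lower-triangular Toeplitz matrix G built from the Markov parameters. The symbols and matrices below are conventional illustrations, not the paper's values:

```python
import numpy as np

def lifted_matrix(A, B, C, N):
    # Lower-triangular Toeplitz matrix of Markov parameters C A^j B,
    # so that the stacked trial satisfies Y = G U + d.
    markov = []
    Aj = np.eye(A.shape[0])
    for j in range(N):
        markov.append((C @ Aj @ B).item())
        Aj = A @ Aj
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = markov[i - j]
    return G

A = np.array([[0.5, 0.1], [0.0, 0.6]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
G = lifted_matrix(A, B, C, 4)
# G is lower triangular with the first Markov parameter C B on its diagonal
```

This lifted representation is what makes the spectral radius analysis of Section 3 a finite-dimensional matrix question.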
Under Assumptions 1–3, system (1) is considered with a control rule that combines backward error association with correction of the subsequent control quantities.
The correction applied by the errors before time t to the control quantity at the current time is as follows:

The PD-type iterative learning control rule is as follows:

Figure 1 shows in detail how the control magnitude is corrected by (8a). The learning procedure of the kth iteration can be elaborated as follows.
Firstly, the error at point 1 corrects the control amounts of the subsequent moments in the kth iteration; the control values of this iteration and the N parameters are listed in Table 1.

Next, the error correction term at point 2 modifies the control amounts of the subsequent instants of the same kth iteration, as shown in Figure 2; the rectification magnitudes for this iteration are presented in Table 2.

Proceeding in this way for a given trial, as shown in Figure 2, each error term yields a corresponding rectified control amount at the later moments of the iteration, and the corrected factor of the control input is accumulated accordingly. The convergence behavior is displayed in Figure 3.
According to the above analysis, the correction of each error to the control quantities of the following moments can be plotted. The total correction is the accumulation of the corrections from all previous moments (see Table 3) as

which is consistent with Equation (8a).
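Since the displayed forms of the law are not reproduced here, the following is a plausible sketch under assumed forms: a standard PD term plus a forward "association" correction whose influence on later moments decays with distance. The gains kp, kd, alpha and the kernel base lam are illustrative assumptions:

```python
import numpy as np

def pd_assoc_update(u, e, kp=0.5, kd=0.3, alpha=0.2, lam=0.5):
    # kp, kd: PD learning gains; alpha: association (correction) factor;
    # lam: base of a monotonically decreasing kernel -- all illustrative.
    N = len(u)
    u_new = u.astype(float).copy()
    for t in range(N):
        de = e[t] - (e[t - 1] if t > 0 else 0.0)       # backward error difference
        u_new[t] += kp * e[t] + kd * de                # classical PD-type term
        for s in range(t + 1, N):                      # pre-correct later moments:
            u_new[s] += alpha * lam ** (s - t) * e[t]  # influence decays with s - t
    return u_new

u1 = pd_assoc_update(np.zeros(5), np.ones(5))
```

Each error thus contributes once through the PD term at its own moment and once, with geometrically decaying weight, to every later moment of the same iteration, matching the accumulation described above.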
3. Convergence Analysis
Lemma 1. Under the stated conditions, the limit holds.

Proof. By the triangle inequality for norms, the quantity is squeezed between two sequences with the same limit; hence, by the squeeze criterion, the limit follows, where the norm is a matrix norm on the given space. In particular, the special case stated in the lemma holds.
Lemma 2. A matrix is called convergent if its successive powers tend to the zero matrix; a necessary and sufficient condition for convergence is that its spectral radius be strictly less than one.

Proof (necessity). Let the matrix be convergent. By the properties of the spectral radius, the spectral radius is bounded above by any matrix norm; since the powers tend to zero, the spectral radius must be less than one.

Sufficiency. Since the spectral radius is less than one, there exists a positive number ε such that the spectral radius plus ε is still less than one. Therefore, there exists a matrix norm whose value on the matrix is less than one; since that norm of the matrix is less than one, its powers tend to zero.
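Lemma 2 can be checked numerically: powers of a matrix vanish exactly when its spectral radius is below one. The matrices below are illustrative (triangular, so their eigenvalues can be read off the diagonal):

```python
import numpy as np

def spectral_radius(M):
    # rho(M) = largest eigenvalue magnitude
    return float(np.max(np.abs(np.linalg.eigvals(M))))

A_conv = np.array([[0.5, 0.4], [0.0, 0.3]])   # triangular: rho = 0.5 < 1
A_div  = np.array([[1.2, 0.0], [0.7, 0.9]])   # triangular: rho = 1.2 >= 1

P_conv = np.linalg.matrix_power(A_conv, 60)   # powers shrink toward zero
P_div  = np.linalg.matrix_power(A_div, 60)    # powers blow up
```

In the ILC setting, the matrix in question is the error-transition matrix of the lifted iteration, so this criterion decides convergence of the learning process.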
3.1. Case of a Determined Model without Measurement Noise
Theorem 1. Consider the LDTI SISO system (1) under Assumptions 1–3, with the system model determined and no measurement noise. When the PD-type fast ILC law (5) with the respective improvement is implemented, if the nominated learning-parameter matrix satisfies the following:

then the output of the system tracks the reference signal: as the iteration number tends to infinity, the output approaches the desired value over the given time interval.
Proof. According to the ILC algorithm (5), in the kth iteration the control quantity at each moment of the interval can be represented as follows:

If we introduce the following hypervector:

then we have

where

Since the model is determined and there is no measurement noise, the uncertainty and noise terms vanish, and (4) can be rewritten accordingly. Combining Assumption 1 and equation (13), the error sequence can be derived as

Finally, we invoke Lemma 2, for which the necessary and sufficient condition is that the spectral radius, the largest magnitude of the eigenvalues, be less than one. The matrix is lower triangular as follows:

Then we obtain the error convergence from the above derivation:

Hence, the theorem is completely proved.
In the above formulas, K is the number of repeated operations of the system; the state vector, output vector, and input vector of the system are of appropriate dimensions, and in iterative learning control the expected state, expected output, and expected input of the system are denoted accordingly.
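Because the error-transition matrix in Theorem 1 is lower triangular, its eigenvalues are exactly its diagonal entries, so the spectral-radius test reduces to a scalar condition on the repeated diagonal term. A small numerical sketch, with an assumed diagonal term of the standard lifted form 1 - CB·gamma (all values illustrative):

```python
import numpy as np

CB = 1.0      # first Markov parameter C B (assumed nonzero; illustrative)
gamma = 0.9   # learning gain (illustrative)
diag = 1.0 - CB * gamma   # repeated diagonal entry, here 0.1

N = 5
rng = np.random.default_rng(0)
T = np.tril(rng.uniform(-1.0, 1.0, (N, N)), k=-1)  # arbitrary strictly-lower part
T = T + diag * np.eye(N)                            # constant diagonal

# Since T is lower triangular, its eigenvalues are its diagonal entries,
# so rho(T) = |diag| < 1 and the lifted error e_{k+1} = T e_k dies out:
P = np.linalg.matrix_power(T, 40)
```

The strictly lower part, however wild, never affects the spectral radius; only the diagonal condition matters for convergence.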
3.2. Case of the Undetermined Model without Measurement Noise
Theorem 2. Consider the SISO linear discrete time-invariant system (1) under Assumptions 1–3, where the system model is uncertain but there is no measurement noise. When the PD-type fast ILC law (5) with association correction is adopted, the following parameter condition must be satisfied:

so that the output signal completely tracks the desired signal: as the number of iterations tends to infinity, the output approaches the desired trajectory over the given time interval.
Proof. The control-rule equation (13) is still valid. Since the system model is uncertain but there is no measurement noise, equation (5) can be rewritten accordingly. Combining Assumption 1 and equation (13), the error sequence can be derived as

From Lemma 2, the necessary and sufficient conditions are expressed as follows:

where the spectral radius is the largest magnitude of the eigenvalues of the matrix, which is lower triangular as follows:

The necessary and sufficient condition for the convergence of the system is

The theorem is proved.
3.3. Case of the Determined Model with Measurement Noise
If the model is determined and has measurement noise, i.e., , , . Equation (5) can be written as . Combining (Assumption 1) and equation (13), the error sequence can be derived as
Let , then we have .
When , .
When , .
When , .
For the repetitive perturbation, we obtain from Lemma 2 that the necessary and sufficient condition is that the spectral radius of the relevant matrix, the largest magnitude of its eigenvalues, be less than one. Referring to the proof of Theorem 1, the necessary and sufficient condition for system convergence is

When this condition holds, the system output converges uniformly, and hence the error tends to the stated bound.

For nonrepetitive perturbations, assume that the perturbations between iterations are bounded: there is a positive real number such that the perturbations in any two iterations satisfy the stated bound.
Theorem 3. Consider the SISO linear discrete time-invariant system (1) under Assumptions 1–3, with nonrepetitive measurement noise. If the chosen learning-parameter matrix meets the following conditions, then under the PD-type accelerated iterative learning control method (7) with association correction, the system's output converges to a certain neighborhood of the expected trajectory as the iteration number tends to infinity.
Proof. Equation (24) is still valid. For the nonrepetitive perturbations, there is a positive real number bounding the perturbations in any two iterations.

Taking the norm of both sides of equation (24) gives

If the stated condition holds, then according to Lemmas 1 and 2 the sequence contracts. Define the stated quantity, which is bounded since the perturbations are bounded. Thus, we obtain the above inequality.
According to the above analysis, the sufficient condition for system convergence isand the error will converge to a boundary, which is .
Theorem 3 is proved.
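The structure of Theorem 3's bound can be seen in a scalar sketch: a contraction of the form e_{k+1} ≤ rho·e_k + eps with rho < 1 drives the error into the ball of radius eps/(1 - rho) rather than to zero. The values of rho and eps below are illustrative:

```python
# Worst-case error recursion under bounded non-repetitive noise:
#   e_{k+1} <= rho * e_k + eps   (rho < 1, eps the noise bound),
# whose fixed point eps / (1 - rho) is the residual error level.
rho, eps = 0.6, 0.02
bound = eps / (1.0 - rho)   # limiting error radius, 0.05 here

e = 1.0                     # initial tracking error
for k in range(200):
    e = rho * e + eps       # contract, then absorb fresh noise
```

This is why, with measurement noise, the output converges only to a neighborhood of the expected trajectory, with the neighborhood shrinking as the contraction factor decreases.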
3.4. Case of Undetermined Model with Measurement Noise
If the system model is undetermined and contains measurement noise, so that the system matrices concerned are all nonzero, equation (5) can be rewritten accordingly. Combined with equation (13), the error sequence can be derived as
Defining , then we have .
When , ;
When , ;
When , .
For the repetitive perturbation, consider Lemma 2, for which the sufficient condition is that the spectral radius, the largest magnitude of the eigenvalues of the matrix, be less than one. The necessary and sufficient condition for the convergence of the system is then

For nonrepetitive perturbations, assume that the perturbations between iterations are bounded; that is, there is a positive real number such that the perturbations in any two iterations satisfy the stated bound. In all cases, the noise is expressed as stated, added to the system at the initial time in the particular case considered.
Theorem 4. Consider the SISO linear discrete time-invariant system (1) under Assumptions 1–3, with nonrepetitive measurement noise present. If the chosen learning-parameter matrix meets the requirements of the accelerated iterative learning control method (7), then the output of the system converges to a certain neighborhood of the expected trajectory as the iteration number tends to infinity.
Proof. Equation (31) still holds; taking the norm of both sides of the equation gives

If the stated condition holds, then according to Lemmas 1 and 2 the sequence contracts. Define the stated quantity, which is bounded since the perturbations are bounded. Thus, we obtain the above inequality.

We can get the convergence condition such that

and the error will converge to the stated bound.
Theorem 4 is proved.
Mimicking human associative thinking, this paper proposes a new associative iterative learning control algorithm which, with the help of a kernel function (a monotonically decreasing function), uses the information of the present time to predict and correct the future control inputs within the current iteration. The information of the current time corrects the subsequent, not-yet-learned times: the closer a time is to the current one, the greater the influence, and the farther, the smaller. The kernel function thus makes the associative iterative learning algorithm more reasonable. In the theoretical proof of convergence, the kernel function is eliminated, so it does not appear in the convergence condition; the associative algorithm and the traditional ILC therefore share the same convergence conditions, but the simulation results in the fourth part of the paper show that the algorithm converges much faster than the traditional iterative learning algorithm.
4. Simulation of Numerical Examples
To validate the associative correction learning rule proposed in this article, a SISO LDTI system (1) with repetitive parameter perturbation and measurement noise over a finite time period is considered:

in which
4.1. Case of Determined Model without Measurement Noise
If the system model is determined and there is no measurement noise, i.e., , , according to Theorem 1, the sufficient and necessary condition of system convergence is
Let the iterative proportional gain, the differential gain, the association factor, the correction factor, and the discrete time take the listed values. The calculation results show that

satisfies the convergence condition.
When the correction term is removed, the mentioned algorithm degenerates to a typical PD-type ILC algorithm with the corresponding convergence condition. By the spectral radius argument, an iterative learning algorithm converges more quickly when its spectral radius is smaller.
The expected trajectory for this particular system is shown in Figure 4 over the given interval, with the initial state and the primary control input as specified. Using the accelerated PD-type learning rule proposed in this paper, the variation trend from the first learning iteration to the 50th learning iteration is shown in Figure 5; the process guarantees that the error converges to 0. Figure 6 shows the system output at the first, fourth, seventh, and 11th iterations, where the convergence of the algorithm can be seen in more detail.
Let the iterative learning rate and differential gain remain unaltered throughout the learning process when the typical PD-type learning rule is used. The variation trend from the first to the fifth learning iteration is given in Figure 7, which also depicts the variation trend of the acceleration method described in this study. For the first specified permitted error, the typical PD-type approach requires 13 iterations, but the accelerated PD-type iterative learning process requires just six; for a tighter allowable error, the standard PD-type technique requires 25 iterations, whereas the accelerated PD-type approach requires just 11. It is straightforward to observe that the PD-type accelerated ILC method suggested in this research greatly accelerates the system's convergence rate.
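The comparison above can be sketched on a toy plant. Whether the association correction accelerates convergence depends on the gains, so the plant, gains, and trajectory below are illustrative assumptions, not the Section 4 values; the sketch only shows both laws converging on the same task:

```python
import numpy as np

def trial(u, a=0.3, b=1.0, c=1.0):
    # Toy stable first-order SISO plant (illustrative, not Section 4's model)
    x = 0.0
    y = np.zeros(len(u))
    for t in range(len(u)):
        x = a * x + b * u[t]
        y[t] = c * x
    return y

def ilc(assoc, N=40, iters=15, kp=0.5, kd=0.1, alpha=0.1, lam=0.3):
    y_d = np.sin(np.linspace(0.0, np.pi, N))   # desired trajectory
    u = np.zeros(N)
    errs = []
    for k in range(iters):
        e = y_d - trial(u)
        errs.append(float(np.max(np.abs(e))))
        for t in range(N):
            de = e[t] - (e[t - 1] if t > 0 else 0.0)
            u[t] += kp * e[t] + kd * de          # classical PD-type term
            if assoc:                            # forward "association" correction
                for s in range(t + 1, N):
                    u[s] += alpha * lam ** (s - t) * e[t]
    return errs

err_pd = ilc(assoc=False)    # classical PD-type ILC
err_fast = ilc(assoc=True)   # PD-type with association correction
```

With these gains the lifted error-transition matrices of both laws are ∞-norm contractions, so both error histories decay monotonically toward zero over the trials.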
4.2. Case of the Undetermined Model with Measurement Noise
If the system model is undetermined and contains measurement noise, with the relevant parameters all nonzero, then according to Theorem 4 the sufficient condition for the system output to converge to a neighborhood of the expected trajectory is
The matrix pairs and are selected as
Assume the uncertainty structures are as follows:

where the perturbations are generated by random functions in the simulation and the measurement noise is also randomly generated. The parameters of algorithm (5) are as follows: the iterative proportional gain, differential gain, association factor, correction factor, and discrete time take the listed values. The result of the simulation indicates that

meets the convergence condition.
The desired signal is taken the same as before, over the same particular time period, and the other parameters, namely the system states and the input, are also taken as before. Deploying the faster PD-type ILC designed in this article, the changing trend of the error from the first iteration to the 50th iteration is shown in Figure 8; the algorithm ensures that the error converges to 0.
To observe the convergence process of the output trajectory, Figure 9 shows comparison plots of the output and desired trajectories for the first, fourth, seventh, and 11th iterations.
If the correction term is removed, the aforementioned law reduces to the traditional PD-type ILC law with the corresponding convergence condition. In view of the spectral radius concept, convergence is faster for the proposed law; therefore, the association-correction ILC law proposed in this paper converges faster. The changing trend of the error from the first iteration to the 50th iteration under the two algorithms is shown in Figure 10.
Figure 10 shows that the system tracking error does not converge to 0 but remains bounded, with the bound described by Theorem 4. As illustrated in Figure 10, the system's convergence performance is greatly improved when the PD-type enhanced ILC method developed in this study is used.
According to Table 4, the error value for the P, D, PD, and the suggested accelerated PD-type ILC laws is 1.1217316 at the first iteration. After 15 rounds, the P-type algorithm's error is 0.062823, the typical D-type law's error is 0.07538, and the PD algorithm's error is 0.024335, whereas the error of the suggested faster PD algorithm is 0.003683; the tracking error of all ILC laws decreases progressively as the iteration number increases. Based on the horizontal data in Table 4, the suggested accelerated PD law has the lowest tracking error compared with the other ILC laws (P, D, PD) at the same iteration number. It is thus clear from Table 4 that the suggested accelerated PD law in this article has a substantially faster convergence rate than the other standard algorithms.
The autoassociative ILC extends the typical ILC by utilizing current knowledge to make accurate predictions of the input. In contrast with typical ILC, in a given trial the not-yet-learned times are pre-corrected using the present data. The approach may reduce the number of trials and thereby increase convergence speed. The method suggested in this paper differs from existing ones for the following reasons:

(1) Although the method in this paper resembles the standard closed-loop iterative learning technique in design, the idea is fundamentally distinct from the typical PD-type law. The typical closed-loop ILC law alters the current control input directly using the error from the previous trial, whereas the technique suggested in this study leverages the current-time error to pre-correct the control at later times.

(2) Although the correlated ILC law suggested in this study is operationally similar to standard higher-order discrete-time ILC, there is a modification: the novel iterative learning method offered in this paper corrects the control input values of later times within the same trial.
This paper investigates discrete linear time-invariant systems with parameter perturbation and measurement noise. Sufficient conditions for convergence of a PD-type ILC law with relative adjustment are proposed for four cases: parameters determined without measurement noise, parameters undetermined without noise, parameters determined with measurement noise, and parameters undetermined with measurement noise. We have shown that, under the same simulation conditions, the proposed law achieves a smaller convergence (spectral) radius than the traditional PD-type ILC algorithm, and we have proven convergence theoretically with the help of hypervector and spectral-radius theory. Numerical simulation demonstrates the efficiency of the proposed control. Ultimately, the results show that the control tracks the expected trajectory completely within a finite interval when the system parameters are uncertain; when measurement noise is present, the system's output converges to a neighborhood of the expected trajectory under the proposed algorithm. In future studies, the same method can be applied to nonlinear discrete systems with parameter perturbations and measurement noise, and convergence under arbitrary bounded changes of the initial conditions can be derived.
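The hypervector (supervector) convergence test referred to above can be illustrated numerically. Stacking one trial's samples gives the iteration-domain error map e_{k+1} = (I - G L) e_k, where G is the lower-triangular Toeplitz matrix of Markov parameters CB, CAB, CA²B, …; convergence then reduces to a spectral-radius check. The sketch below assumes a scalar plant and an illustrative P-type gain (the paper's systems and gains may differ):

```python
# Illustrative hypervector/supervector convergence check; the scalar plant
# (A, B, C) and the gain kp are assumptions, not the paper's values.
import numpy as np

A = np.array([[0.3]])
B = np.array([[1.0]])
C = np.array([[1.0]])
T, kp = 20, 0.5  # trial length and illustrative P-type learning gain

# Markov parameters C A^i B, i = 0..T-1 (the first entry is CB)
g = [float(C @ np.linalg.matrix_power(A, i) @ B) for i in range(T)]

# Lower-triangular Toeplitz lifted plant matrix G
G = np.zeros((T, T))
for i in range(T):
    for j in range(i + 1):
        G[i, j] = g[i - j]

# Iteration-domain error map: e_{k+1} = (I - kp G) e_k.
M = np.eye(T) - kp * G
# M is lower triangular, so its eigenvalues are exactly its diagonal entries;
# the spectral radius is |1 - kp * CB|, here |1 - 0.5| = 0.5.
rho = float(np.max(np.abs(np.diag(M))))
# rho < 1  =>  the tracking error converges to zero as k -> infinity
```

The check makes visible why the convergence conditions in the theorems hinge on the learning gains: only the product of the gain with the first Markov parameter CB enters the diagonal, and hence the spectral radius, of the iteration matrix.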
Data Availability
Data are cited in the main manuscript.
Conflicts of Interest
The authors confirm that there are no conflicts of interest regarding the publication of this manuscript. All authors approved the manuscript for publication.
Acknowledgments
This work was financially supported by the Foundation for Advanced Talents of Xijing University under grant number XJ17B03.
References
S. Arimoto, S. Kawamura, and F. Miyazaki, “Bettering operation of robots by learning,” Journal of Robotic Systems, vol. 1, no. 2, pp. 123–140, 1984.
S. Riaz, H. Lin, M. Mahsud, D. Afzal, A. Alsinai, and M. Cancan, “An improved fast error convergence topology for PDα-type fractional-order ILC,” Journal of Interdisciplinary Mathematics, vol. 24, no. 7, pp. 2005–2019, 2021.
J. Liu, Y. Wang, H. T. Ting, and R. P. S. Han, “Iterative learning control based on radial basis function network for exoskeleton arm,” Advanced Materials Research, Trans Tech Publications, Bäch SZ, Switzerland, 2012.
N. Sanzida and Z. K. Nagy, “Iterative learning control for the systematic design of supersaturation controlled batch cooling crystallisation processes,” Computers & Chemical Engineering, vol. 59, pp. 111–121, 2013.
K. K. Tan, S. Y. Lim, T. H. Lee, and H. Dou, “High-precision control of linear actuators incorporating acceleration sensing,” Robotics and Computer-Integrated Manufacturing, vol. 16, no. 5, pp. 295–305, 2000.
Z. Hou, J. Yan, J. X. Xu, and Z. Li, “Modified iterative-learning-control-based ramp metering strategies for freeway traffic control with iteration-dependent factors,” IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 2, pp. 606–618, 2012.
X. Ruan and J. Zhao, “Convergence monotonicity and speed comparison of iterative learning control algorithms for nonlinear systems,” IMA Journal of Mathematical Control and Information, vol. 30, no. 4, pp. 473–486, 2013.
S. Riaz, H. Lin, M. Waqas, F. Afzal, K. Wang, and N. Saeed, “An accelerated error convergence design criterion and implementation of lebesgue-p norm ILC control topology for linear position control systems,” Mathematical Problems in Engineering, vol. 2021, Article ID 5975158, 12 pages, 2021.
S. Srivastava and V. Pandit, “A PI/PID controller for time delay systems with desired closed loop time response and guaranteed gain and phase margins,” Journal of Process Control, vol. 37, pp. 70–77, 2016.
J. Zhang, “Design of a new PID controller using predictive functional control optimization for chamber pressure in a coke furnace,” ISA Transactions, vol. 67, pp. 208–214, 2017.
K. Vijaya Chandrakala and S. Balamurugan, “Simulated annealing based optimal frequency and terminal voltage control of multi source multi area system,” International Journal of Electrical Power & Energy Systems, vol. 78, pp. 823–829, 2016.
S. Zhou, M. Chen, C. J. Ong, and P. C. Y. Chen, “Adaptive neural network control of uncertain MIMO nonlinear systems with input saturation,” Neural Computing & Applications, vol. 27, no. 5, pp. 1317–1325, 2016.
K. Patan and M. Patan, “Neural-network-based iterative learning control of nonlinear systems,” ISA Transactions, vol. 98, pp. 445–453, 2020.
S. Riaz, H. Lin, F. Afzal, and A. Maqbool, “Design and implementation of novel LMI-based iterative learning robust nonlinear controller,” Complexity, vol. 2021, Article ID 5577241, 2021.
G. L. Plett, “Adaptive inverse control of linear and nonlinear systems using dynamic neural networks,” IEEE Transactions on Neural Networks, vol. 14, no. 2, pp. 360–376, 2003.
H. H. Afshari, S. A. Gadsden, and S. Habibi, “Gaussian filters for parameter and state estimation: a general review of theory and recent trends,” Signal Processing, vol. 135, pp. 218–238, 2017.
Y. Hua, N. Wang, and K. Y. Zhao, “Simultaneous unknown input and state estimation for the linear system with a rank-deficient distribution matrix,” Mathematical Problems in Engineering, vol. 2021, Article ID 6693690, 11 pages, 2021.
X. Feng, Q. Li, and K. Wang, “Waste plastic triboelectric nanogenerators using recycled plastic bags for power generation,” ACS Applied Materials & Interfaces, vol. 13, no. 1, pp. 400–410, 2021.
C. Liu, Q. Li, and K. Wang, “State-of-charge estimation and remaining useful life prediction of supercapacitors,” Renewable and Sustainable Energy Reviews, vol. 150, no. 2, Article ID 111408, 2021.
K. Wang, C. Liu, J. Sun et al., “State of charge estimation of composite energy storage systems with supercapacitors and lithium batteries,” Complexity, vol. 2021, Article ID 8816250, 2021.
J. Liu, X. Ruan, and Y. Zheng, “Iterative learning control for discrete-time systems with full learnability,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, 2020.
J. Zhang and K. Huang, “Fault diagnosis of coal-mine-gas charging sensor networks using iterative learning-control algorithm,” Physical Communication, vol. 43, Article ID 101175, 2020.
C. Zhang and H.-S. Yan, “Inverse control of multi-dimensional Taylor network for permanent magnet synchronous motor,” COMPEL - The International Journal for Computation and Mathematics in Electrical and Electronic Engineering, vol. 36, 2017.
W. Chen, J. Hu, Z. Wu, X. Yu, and D. Chen, “Finite-time memory fault detection filter design for nonlinear discrete systems with deception attacks,” International Journal of Systems Science, vol. 51, no. 8, pp. 1464–1481, 2020.
S. A. U. Islam, T. E. Nguyen, I. V. Kolmanovsky, and D. S. Bernstein, “Adaptive control of discrete-time systems with unknown, unstable zero dynamics,” in Proceedings of the 2020 American Control Conference (ACC), IEEE, Denver, CO, USA, July 2020.
S. Xiong and Z. Hou, “Model-free adaptive control for unknown MIMO nonaffine nonlinear discrete-time systems with experimental validation,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, 2020.
K. Abidi, H. J. Soo, and I. Postlethwaite, “Discrete-time adaptive control of uncertain sampled-data systems with uncertain input delay: a reduction,” IET Control Theory & Applications, vol. 14, no. 13, pp. 1681–1691, 2020.
H. Tao, X. Li, W. Paszke, V. Stojanovic, and H. Yang, “Robust PD-type iterative learning control for discrete systems with multiple time-delays subjected to polytopic uncertainty and restricted frequency-domain,” Multidimensional Systems and Signal Processing, vol. 32, no. 2, pp. 671–692, 2021.
H. Tao, J. Li, Y. Chen, V. Stojanovic, and H. Yang, “Robust point-to-point iterative learning control with trial-varying initial conditions,” IET Control Theory & Applications, vol. 14, no. 19, pp. 3344–3350, 2020.
L. Zhou, H. Tao, W. Paszke, V. Stojanovic, and H. Yang, “PD-type iterative learning control for uncertain spatially interconnected systems,” Mathematics, vol. 8, no. 9, p. 1528, 2020.
M. T. Shahab and D. E. Miller, “Adaptive control of a class of discrete-time nonlinear systems yielding linear-like behavior,” Automatica, vol. 130, Article ID 109691, 2021.
S. Riaz, H. Lin, and M. P. Akhter, “Design and implementation of an accelerated error convergence criterion for norm optimal iterative learning controller,” Electronics, vol. 9, no. 11, p. 1766, 2020.
L. Liu, Y. J. Liu, A. Chen, S. Tong, and C. L. P. Chen, “Integral barrier Lyapunov function-based adaptive control for switched nonlinear systems,” Science China Information Sciences, vol. 63, no. 3, pp. 132203–132214, 2020.
S. Riaz, H. Lin, A. M. Bilal, and H. Ali, “Design of PD-type second-order ILC law for PMSM servo position control,” Journal of Physics: Conference Series, vol. 1707, no. 1, Article ID 012002, 2020.
R. Moghadam, P. Natarajan, K. Raghavan, and S. Jagannathan, “Online optimal adaptive control of a class of uncertain nonlinear discrete-time systems,” in Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, Glasgow, UK, September 2020.
R. Chi, Y. Hui, S. Zhang, B. Huang, and Z. Hou, “Discrete-time extended state observer-based model-free adaptive control via local dynamic linearization,” IEEE Transactions on Industrial Electronics, vol. 67, no. 10, pp. 8691–8701, 2020.
Y. Li and H. Dankowicz, “Adaptive control designs for control-based continuation in a class of uncertain discrete-time dynamical systems,” Journal of Vibration and Control, vol. 26, no. 21-22, pp. 2092–2109, 2020.
S. Riaz, H. Lin, and H. Elahi, “A novel fast error convergence approach for an optimal iterative learning controller,” Integrated Ferroelectrics, vol. 213, no. 1, pp. 103–115, 2021.
S. Arimoto, “Learning control theory for robotic motion,” International Journal of Adaptive Control and Signal Processing, vol. 4, no. 6, pp. 543–564, 1990.
Y. Chen and K. L. Moore, “An optimal design of PD-type iterative learning control with monotonic convergence,” in Proceedings of the IEEE International Symposium on Intelligent Control, IEEE, Vancouver, Canada, October 2002.
K.-H. Park, “A study on the robustness of a PID-type iterative learning controller against initial state error,” International Journal of Systems Science, vol. 30, no. 1, pp. 49–59, 1999.
H.-S. Ahn, K. L. Moore, and Y. Chen, “Monotonic convergent iterative learning controller design based on interval model conversion,” IEEE Transactions on Automatic Control, vol. 51, no. 2, pp. 366–371, 2006.
Y. Chen, Z. Gong, and C. Wen, “Analysis of a high-order iterative learning control algorithm for uncertain nonlinear systems with state delays,” Automatica, vol. 34, no. 3, pp. 345–353, 1998.
B. Stürmer and H. Leuthold, “Control over response priming in visuomotor processing: a lateralized event-related potential study,” Experimental Brain Research, vol. 153, no. 1, pp. 35–44, 2003.
M. Norrlöf and S. Gunnarsson, “Time and frequency domain convergence properties in iterative learning control,” International Journal of Control, vol. 75, no. 14, pp. 1114–1126, 2002.
M. Norrlöf and S. Gunnarsson, “Disturbance aspects of iterative learning control,” Engineering Applications of Artificial Intelligence, vol. 14, no. 1, pp. 87–94, 2001.
D. Pi, D. Seborg, J. Shou, Y. Sun, and Q. Lin, “Analysis of current cycle error assisted iterative learning control for discrete nonlinear time-varying systems,” in Proceedings of the 2000 IEEE International Conference on Systems, Man and Cybernetics (SMC 2000), IEEE, Nashville, TN, USA, August 2000.
G. Ji and Q. Luo, “An open-closed-loop PID-type iterative learning control algorithm for uncertain time-delay systems,” in Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, IEEE, Guangzhou, China, November 2005.
F. Ma and C. Li, “Open-closed-loop PID-type iterative learning control for linear systems with initial state error,” Journal of Vibration and Control, vol. 17, no. 12, pp. 1791–1797, 2011.
Z. Feng, Z. Zhang, and D. Pi, “Open-closed-loop PD-type iterative learning controller for nonlinear systems and its convergence,” in Proceedings of the 5th World Congress on Intelligent Control and Automation (IEEE Cat. No. 04EX788), IEEE, Hangzhou, China, June 2004.
L. Changliang and J. Wangen, “Study for convergence of open-closed-loop PD-type iterative learning control of nonlinear discrete systems,” Electric Power Science and Engineering, vol. 6, 2012.
M. French and E. Rogers, “Non-linear iterative learning by an adaptive Lyapunov technique,” International Journal of Control, vol. 73, no. 10, pp. 840–850, 2000.
J.-X. Xu and Y. Tan, “A composite energy function-based learning control approach for nonlinear systems with time-varying parametric uncertainties,” IEEE Transactions on Automatic Control, vol. 47, no. 11, pp. 1940–1945, 2002.
T.-Y. Kuc, J. S. Lee, and K. Nam, “An iterative learning control theory for a class of nonlinear dynamic systems,” Automatica, vol. 28, no. 6, pp. 1215–1221, 1992.
J.-X. Xu, “A survey on iterative learning control for nonlinear systems,” International Journal of Control, vol. 84, no. 7, pp. 1275–1294, 2011.
J.-X. Xu and B. Viswanathan, “Adaptive robust iterative learning control with dead zone scheme,” Automatica, vol. 36, no. 1, pp. 91–99, 2000.
J.-X. Xu, Y. Tan, and T.-H. Lee, “Iterative learning control design based on composite energy function with input saturation,” Automatica, vol. 40, no. 8, pp. 1371–1377, 2004.
B. J. Driessen, N. Sadegh, and K. S. Kwok, “Optimization-based drift prevention for learning control of underdetermined linear and weakly nonlinear time-varying systems,” in Proceedings of the 2001 American Control Conference (Cat. No. 01CH37148), IEEE, Arlington, VA, USA, June 2001.
Y.-P. Tian and X. Yu, “Robust learning control for a class of nonlinear systems with periodic and aperiodic uncertainties,” Automatica, vol. 39, no. 11, pp. 1957–1966, 2003.
J. E. Kurek and M. B. Zaremba, “Iterative learning control synthesis based on 2-D system theory,” IEEE Transactions on Automatic Control, vol. 38, no. 1, pp. 121–125, 1993.
X.-D. Li, T. Chow, and J. Ho, “Iterative learning control for linear time-variant discrete systems based on 2-D system theory,” IEE Proceedings - Control Theory and Applications, vol. 152, no. 1, pp. 13–18, 2005.
Y. Fang and T. W. Chow, “Iterative learning control of linear discrete-time multivariable systems,” Automatica, vol. 34, no. 11, pp. 1459–1462, 1998.
X.-D. Li and J. K. Ho, “Further results on iterative learning control with convergence conditions for linear time-variant discrete systems,” International Journal of Systems Science, vol. 42, no. 6, pp. 989–996, 2011.
T. W. Chow and Y. Fang, “An iterative learning control method for continuous-time systems based on 2-D system theory,” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 45, no. 6, pp. 683–689, 1998.
Y. Fang and T. W. Chow, “2-D analysis for iterative learning controller for discrete-time systems with variable initial conditions,” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 50, no. 5, pp. 722–727, 2003.
L. Hladowski, K. Galkowski, Z. Cai, E. Rogers, C. T. Freeman, and P. L. Lewin, “Experimentally supported 2D systems based iterative learning control law design for error convergence and performance,” Control Engineering Practice, vol. 18, no. 4, pp. 339–348, 2010.
B. Cichy, K. Galkowski, P. Dabkowski, and H. Aschemann, “A new procedure for the design of iterative learning controllers using a 2D systems formulation of processes with uncertain spatio-temporal dynamics,” Control and Cybernetics, vol. 42, no. 1, pp. 9–26, 2013.
X.-J. Fan, S.-P. Tian, and H.-P. Tian, “Iterative learning control of distributed parameter system based on geometric analysis,” in Proceedings of the 2009 International Conference on Machine Learning and Cybernetics, IEEE, Baoding, China, August 2009.
S.-L. Xie, S.-P. Tian, and Z.-D. Xie, “Fast algorithm of iterative learning control based on geometric analysis,” Control Theory & Applications, vol. 3, 2003.
S.-L. Xie, S.-P. Tian, and Z.-D. Xie, “Iterative learning control nonlinear algorithms based on vector plots analysis,” Control Theory & Applications (China), vol. 21, no. 6, pp. 951–955, 2004.
D. Xi-sheng, L. Zheng, and T. Sen-ping, “Iterative learning control for distributed parameter systems based on vector-plot analysis,” Control Theory & Applications, vol. 26, no. 6, pp. 619–623, 2009.
M. Togai and O. Yamano, “Analysis and design of an optimal learning control scheme for industrial robots: a discrete system approach,” in Proceedings of the 24th IEEE Conference on Decision and Control, IEEE, Fort Lauderdale, FL, USA, December 1985.
K. S. Lee and J. H. Lee, “Design of quadratic criterion-based iterative learning control,” in Iterative Learning Control, pp. 165–192, Springer, Berlin, Germany, 1998.
N. Amann, D. H. Owens, and E. Rogers, “Iterative learning control for discrete-time systems with exponential rate of convergence,” IEE Proceedings - Control Theory and Applications, vol. 143, no. 2, pp. 217–224, 1996.
K.-S. Lee, W.-C. Kim, and J.-H. Lee, “Model-based iterative learning control with quadratic criterion for linear batch processes,” Journal of Institute of Control, Robotics and Systems, vol. 2, no. 3, pp. 148–157, 1996.
S. Riaz, H. Lin, and M. P. Akhter, “Design and implementation of an accelerated error convergence criterion for norm optimal iterative learning controller,” Electronics, vol. 9, no. 11, p. 1766, 2020.
D. H. Owens and K. Feng, “Parameter optimization in iterative learning control,” International Journal of Control, vol. 76, no. 11, pp. 1059–1069, 2003.