Precise industrial control constantly demands accurate and robust control. For a typical linear system, error convergence under a conventional iterative learning control strategy is slow. To address this issue, this study develops a fast iterative learning control law: a new PD-type iterative learning control approach grounded in backward error association and control-parameter correction for a class of linear discrete time-invariant (LDTI) systems. The repetitive system considered is subject to bounded parameter perturbation and measurement noise. First, the fast learning law is formulated, together with a full explanation of how the algorithm’s control factors are generated. Then, using the hypervector method in conjunction with spectral radius theory, sufficient conditions for the algorithm’s convergence are derived in four scenarios: a determined model without noise, an undetermined model without noise, a determined model with measurement noise, and an undetermined model with measurement noise. The analysis shows that convergence depends on the control law’s learning gains, the correction term, the association factor, and the learning interval. Finally, simulation results indicate that the suggested approach achieves faster error convergence than the classical PD-type algorithm.

1. Introduction

Iterative learning control (ILC) [1, 2] is a widely used control scheme owing to its simple structure, which does not require precise model information. After sufficiently many iterations, it can make the output of the controlled object meet the expected signal over a finite interval. Owing to these characteristics, this learning procedure has been widely employed in the control of industrial and commercial robotic applications [3], batch processes in the process industry [4], aerodynamic systems [5], traffic control systems [6], electrical and power systems, and other areas. However, most scholars focus on other classes of control problems, and research on iterative learning control for such systems remains limited [7, 8].

In industrial applications, the controlled system parameters are usually time-varying, so the classical PID controller, alone or in combination with other control schemes, is of limited use for controlling systems with uncertain factors [9–11]. In addition, the analysis and design of some existing modern control schemes [12] are complex and difficult. To address these problems, the control algorithm and structure should be simple and easy to implement, while the control scheme should possess nonlinearity, robustness, flexibility, and learning ability. With the rapid development of intelligent control technology for handling the uncertainty and complexity of the controlled object, neural network models and training schemes have been applied to the design of system controllers [13, 14]. For example, Plett [15] discussed how a neural network acting as a feedforward controller learns to imitate the inverse of the controlled object. However, neural networks suffer from slow learning speed and weak generalization ability, and there is no systematic method to determine their topology. If stable control and compensation are not applied in a timely manner, system noise and random interference will appear at the input of the controller, which greatly reduces the stability of the adaptive process and seriously affects control accuracy. Adaptive filtering has been widely developed [16, 17], and neural networks are the most commonly used tool for all kinds of nonlinear filtering; however, they are highly nonlinear in their parameters [18–20].

The scholars mentioned above study model uncertainty in different fields such as model prediction, system identification [21], fault detection [22], motor control [23], and nonlinear control [24]. Still, there is no specific control algorithm that achieves satisfactorily fast error convergence while accounting for system coupling, uncertainty, time-varying characteristics, measurement noise, and other factors. Adaptive control strategies proposed in [25–27] can compensate to some extent; adaptive control is mainly used to deal with complex nonlinear systems with unknown parameters. H. Tao and his team proposed a novel PD scheme for systems with multiple delays [28], a point-to-point ILC scheme [29], and a PD-type iterative learning control algorithm for a class of discrete spatially interconnected systems [30]. Based on Lyapunov stability theory, parameter update laws have been designed to achieve system stabilization and asymptotic tracking of the target trajectory [31, 32]. Remarkable progress has been made both for special nonlinear systems that are linear in the parameters [33, 34] and for nonlinear systems with general structures [35]. For systems that cannot be modeled or that contain unmodeled states, [36, 37] proposed the model-free adaptive control theory. However, these adaptive control methods cannot solve the problem of complete tracking over a finite time interval [38].

Arimoto [39] first proposed the D-type iterative learning algorithm, and other scholars subsequently proposed P-type, PD-type, PID-type, and higher-order learning algorithms [40–43]. On the basis of linear learning algorithms, a series of nonlinear learning algorithms such as Newton-type and secant-type [44] have been proposed, which can significantly accelerate the convergence speed of the learning algorithm. In classical control theory, system stability and algorithm characteristics are often analyzed from the frequency-domain perspective, and iterative learning control is no exception; some scholars analyze and design iterative learning control algorithms in the frequency domain [45]. In frequency-domain analysis, the convergence conditions of the system can be relaxed from an infinite frequency band to a limited one, which makes learning control more robust, so frequency-domain analysis methods are widely used in practical applications of iterative learning. For a class of linear systems with disturbances, Norrlof [45, 46] proposed a new learning control method, using a frequency-domain analysis to obtain the convergence conditions of the system and to analyze the effect of different filter choices on system stability. The influence of such choices and the robustness of the iterative learning algorithm still need further exploration in practical applications.

Most real systems are nonlinear, so the application of iterative learning control to nonlinear systems has important theoretical significance and a practical foundation. Pi Daoying [47–50] and others have done a great deal of research on the application of iterative learning to nonlinear systems. Literature [47] first analyzed the shortcomings of a purely closed-loop or purely open-loop form and then, focusing on a class of discrete nonlinear systems, proposed an open-closed-loop P-type iterative learning control algorithm and proved its convergence, concluding that the open-closed-loop algorithm outperforms a single closed-loop or open-loop algorithm. Liu Changliang [51] proposed an open-closed-loop PD-type iterative learning algorithm for general nonlinear discrete systems and proved the convergence of the algorithm.

In the analysis of nonlinear systems, the Lyapunov method is a very important tool; it is widely implemented for the analysis of controllers of nonlinear systems and is the principal method for the theoretical analysis and design of nonlinear uncertain systems. The second method of Lyapunov is a qualitative one: instead of solving the system equations, it judges the stability of the system directly through a Lyapunov energy function, which greatly simplifies the analysis of nonlinear systems. Inspired by the second method of Lyapunov, the energy function of the iterative learning control scheme in the time domain and the iterative domain has been studied [52, 53], providing a new way to design the scheme and analyze its convergence in the iterative domain. ILC based on the energy function in the iterative domain is discussed in [54], and on this basis some scholars have developed new robust control methods [55] and adaptive learning control methods [56], which can be used to design controllers for nonlinear systems with parametric or nonparametric uncertainty. In recent years, a combined energy function representing the energy of the system in both the time domain and the iterative domain has also been applied to iterative learning control [57]. This method ensures that the tracking error signal achieves asymptotic convergence in the iterative domain and remains bounded in the time domain. By exploiting point-to-point tracking performance and making the control input converge monotonically over the entire iterative interval, this scheme is suitable for a class of nonlinear systems that do not satisfy a globally uniform Lipschitz condition.
Through the use and promotion of the energy function method, many new control theories and methods, such as nonlinear optimization methods [58] and inversion design methods [59], have been applied to iterative learning control as a new system design scheme.

The iterative learning control algorithm is essentially a process with two-dimensional characteristics: the time domain direction t and the iterative direction k. These two directions are independent of each other, so the iterative learning control system itself is a two-dimensional system.
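This two-axis structure can be sketched as a generic skeleton, with the time axis traversed inside one trial and the learning axis traversed across trials; all names and the toy plant below are illustrative, not taken from the paper:

```python
import numpy as np

# Generic ILC skeleton: t runs inside simulate() (one complete trial),
# k runs across trials; the two indices are independent, hence the
# two-dimensional view of the learning process.
def run_ilc(simulate, update, u0, yd, n_iters):
    u = u0
    for k in range(n_iters):   # iteration (learning) axis k
        y = simulate(u)        # time axis t: one full trial
        e = yd - y             # trial-wise tracking error
        u = update(u, e)       # learning law: trial k -> trial k+1
    return u

# Toy usage: plant y = 0.5 u with P-type update u <- u + e; the input
# fixed point is u = 2 * yd, approached geometrically with ratio 0.5.
yd = np.ones(5)
u = run_ilc(lambda u: 0.5 * u, lambda u, e: u + e, np.zeros(5), yd, 50)
```

The skeleton makes the independence of the two directions explicit: `simulate` touches only the time axis, while the outer loop touches only the iteration axis.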

Some scholars [60] analyzed the iterative learning control algorithm using the more mature two-dimensional system theory and discussed the stability of iterative learning along the time axis and its convergence along the iteration axis. The stability theory of two-dimensional systems provides a very effective method for the design and convergence analysis of iterative learning control algorithms, and the Roesser model has become the most basic model in such analyses. On the application of the two-dimensional Roesser model in iterative learning control, Li Xiaodong, Fang Yong [61–65], and others have studied linear continuous and linear discrete systems and put forward a large number of attractive theoretical results in the ILC field. Literatures [61–63] analyzed the convergence of LDTI and LTV systems through two-dimensional system theory and theoretically obtained conditions under which the system achieves complete tracking after one iteration. However, this method cannot be applied directly in practice; instead, a corresponding estimated value is used as an approximation, and simulation results show that this approximation also achieves good convergence behavior and speed. Literature [64] used a two-dimensional continuous-discrete model to analyze the performance of linear continuous systems and obtained some very meaningful academic results. Literature [65] addressed linear multivariable discrete systems with uncertainty or variable initial state values; through two-dimensional system analysis, conditions for convergence were obtained, so that the convergence of the iterative learning control algorithm is guaranteed even when the parameters of the control system are subject to minor disturbances or the initial state conditions change between iterations.

Cichy, Galkowski, and their team [66, 67] investigated and designed iterative learning controllers based on the theory of linear matrix inequalities in two-dimensional systems, analyzing not only the boundary-value conditions that make the system stable but also the convergence and robustness of the iterative learning algorithm. There remain theoretical problems in optimal control algorithms for nonlinear dynamic systems based on the maximum principle, and the application of the linear matrix inequality method to the stability analysis and controller design of iterative learning control for continuous-discrete systems is still under development.

The iterative learning algorithms in the above research results all have linear structures and essentially all adopt PID-type control methods. Is there a nonlinear learning law that allows the system to converge? If so, can it speed up the convergence of the system? In response to these questions, some scholars have successively proposed nonlinear iterative learning control algorithms.

Tian Senping [68–71] and others analyzed iterative learning algorithms from a geometric perspective and proposed three new nonlinear control algorithms in the form of vector triangles, opening up a new way of thinking for research on iterative learning control algorithms. Togai [72] applied optimization methods to the design of iterative learning control: for a performance index containing a squared error term, the steepest descent method, the Newton-Raphson method, and the Gauss-Newton method yield three different nonlinear learning laws. In addition, nonlinear iterative learning control laws include input penalty term analysis [73], the norm optimization method [74–76], the parameter optimization method [77], and so on.

The rest of the article is organized as follows. The problem formulation is briefly described in Section 2. The convergence analysis, including hypervector theory, the spectral radius, and the main conditions for error convergence, is elaborated in Section 3. Section 4 presents a numerical example demonstrating the validity of the proposed algorithm. Finally, Section 5 concludes the paper.

2. Problem Formulation

To explain clearly, we consider a single-input single-output (SISO) linear discrete time-invariant (LDTI) system with repetitive parameter perturbation and measurement noise over a finite period:in which , , the notation  is the number of iterations,  is the state,  is the control input, and  is the output, respectively. , , and  are constant matrices of the corresponding dimensions satisfying the condition.  is the measurement noise of the system, and  and  are the uncertainty matrix of the system and the uncertainty input matrix at time , such that

Here, and are constant matrices of the corresponding dimensions that define the structure of the uncertain state matrix and the uncertain input matrix; and are unknown matrices, satisfying and .

In the iterative learning process of system (1), the expected trajectory is set as , which is invariant across iterations; the corresponding expected state is , and the corresponding expected control input is . The following assumptions are made.

Assumption 1. The initial state is ideal; that is, it is equal to the desired initial state, such that .

Assumption 2. The required trajectory over the whole period is given in advance and is independent of the number of iterations.

Assumption 3. For any given desired value , there exist an expected signal  and an expected control , so thatBased on system (1), define , , and the output signal of iteration  at time  can be represented as follows:For ease of description, the above expression is written in hypervector form by introducing the following hypervectors:The above equation can then be described aswhere  represents an uncertain value determined by the dynamics and uncertain parameters of system (1).
For system (1), under the condition that Assumptions 1–3 are satisfied, consider a control law with backward error association and subsequent control quantity correction.
The correction of the error before time t to the control quantity at the current time is as follows:The PD-type iterative learning control law is as follows:Figure 1 clarifies the correction of the control magnitude (8a) in detail. The learning procedure of the th iteration can be elaborated as follows.
Firstly, the error at point 1 is taken as correcting the control amounts of the subsequent moments in the th iteration, while the control value of this iteration and the N parameters are presented in Table 1.
The error correction term at point 2 modifies the control amounts for the subsequent instants of the th iteration, as shown in Figure 2. For this iteration, the rectification magnitudes are presented in Table 2.
Continuing in this manner for a given trial, as shown in Figure 2, the error term  and its corrected control amounts in the th iteration are given in detail, and the convergence is displayed in Figure 3. The corrected factor of the control input is .
According to the above analysis, the correction quantity contributed by each error to the control quantities at the following moments can be plotted. The total correction is the accumulation of the corrections from all previous moments (see Table 3) aswhich is consistent with Equation (8a).
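Since the exact law (8a) and its gains appear only in the equations above, the following is merely a hedged sketch of the mechanism it describes: a PD learning term combined with a forward "association" correction in which the error at each time also pre-corrects the control at later instants, weighted by a decreasing kernel. The gains `kp`, `kd`, `mu`, the kernel `lam**(s - t)`, and the function name are all hypothetical placeholders, not the paper’s actual parameters:

```python
import numpy as np

# Hedged sketch of a PD-type ILC update with backward-error association:
# the error at time t additionally pre-corrects the control at later
# instants s > t, weighted by a monotonically decreasing kernel
# lam ** (s - t).  All gains here are illustrative.
def pd_ilc_association(u, e, kp=0.8, kd=0.3, mu=0.2, lam=0.5):
    N = len(u)
    de = np.diff(e, prepend=e[0])        # backward difference of the error
    u_next = u + kp * e + kd * de        # classical PD-type learning term
    for t in range(N):                   # association (pre-correction) term
        for s in range(t + 1, N):
            u_next[s] += mu * lam ** (s - t) * e[t]
    return u_next
```

As in the accumulation above, the correction received by time s is the sum of kernel-weighted contributions from all earlier errors, with nearer errors contributing more.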

3. Convergence Analysis

Lemma 1. Let , , then .

Proof. By the triangle inequality of norms, ; when , , so by the squeeze criterion , and hence , where  is a matrix norm on . In particular, when , we have .

Lemma 2. Let ; if , then  is called a convergent matrix, and the necessary and sufficient condition for its convergence is .

Proof (necessity). Let  be a convergent matrix. By the properties of the spectral radius, , where  is a matrix norm on . So  and .
Sufficiency. Since , there exists a positive number  such that . Therefore, there exists a matrix norm on , say , such that . Since , it follows that , so .
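Lemma 2 is easy to check numerically; the matrix below is a hypothetical example, not one from the paper:

```python
import numpy as np

# A is convergent (A^k -> 0 as k -> infinity) if and only if its
# spectral radius rho(A) = max |eigenvalue| is strictly less than 1.
A = np.array([[0.5, 0.3],
              [0.0, 0.4]])

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius: here 0.5
A50 = np.linalg.matrix_power(A, 50)    # a high power of A, numerically zero
```

Here `rho = 0.5 < 1`, and `A50` is zero to machine precision, consistent with the lemma.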

3.1. Case of a Determined Model without Measurement Noise

Theorem 1. Consider the LDTI SISO system (1), for which Assumptions 1–3 are fulfilled, the system model is determined, and there is no measurement noise;  and . When the PD-type fast ILC law (5) with association correction is implemented, if the nominated learning parameter matrix satisfies the following:

Then the output of the system completely tracks the reference signal: as , the output  approaches  over the time interval .

Proof. According to the ILC algorithm (5), in the  iteration, the control quantity at each moment in the interval  can be represented as follows:If we introduce the following hypervector:then we havewhereSince the model is determined and has no measurement noise, the system matrix  and also  and . From (4), it can be written as . Combining Assumption 1 and equation (13), the error sequence can be derived asFinally, by Lemma 2, the necessary and sufficient condition for  is , where  is the spectral radius of  and  are the eigenvalues of .  is a lower triangular matrix as follows:Then the error convergence condition follows from the above derivation:Hence, the theorem is completely proved.
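A key step above is that the transition matrix is lower triangular, so its eigenvalues are exactly its diagonal entries and the spectral radius condition reduces to a condition on those entries alone. A numerical check with a hypothetical matrix (diagonal and sub-diagonal values chosen arbitrarily for illustration):

```python
import numpy as np

# For a triangular matrix the eigenvalues are the diagonal entries, so
# rho(L) = max |d_i|; the sub-diagonal entries have no effect on rho.
d = np.array([0.2, -0.3, 0.1, 0.25])            # illustrative diagonal
L = np.diag(d) + np.tril(np.ones((4, 4)), -1)   # arbitrary entries below
rho = max(abs(np.linalg.eigvals(L)))            # = max |d_i| = 0.3
```

This is why, in the proof, checking the diagonal of the lower triangular matrix suffices for convergence.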
In the above formula, K is the number of repeated operations of the system. , , and  represent the state vector, output vector, and input vector of the system, respectively.  is a vector function of appropriate dimensions. In iterative learning control, , , and  generally denote the expected state, expected output, and expected input of the system.

3.2. Case of the Undetermined Model without Measurement Noise

Theorem 2. Consider the SISO linear discrete time-invariant system (1), for which Assumptions 1–3 are satisfied; the system model is uncertain but there is no measurement noise, such that , , and . When the PD-type fast ILC law (5) with association correction is adopted, the following parameters must be satisfied:so that the output signal completely tracks the desired signal: as the iterations increase, , the output satisfies  over the time interval , where .

Proof. The control law of equation (13) is still available. Since the system model is uncertain and there is no measurement noise, i.e., , , , equation (5) can be written as . Combining Assumption 1 and equation (13), the error sequence can be derived asFrom Lemma 2, the necessary and sufficient conditions can be expressed as follows:
where  is the spectral radius of  and  are the eigenvalues of matrix . The matrix  is a lower triangular matrix as follows:The necessary and sufficient condition for the convergence of the system isThe theorem is proved.

3.3. Case of the Determined Model with Measurement Noise

If the model is determined and has measurement noise, i.e., , , , equation (5) can be written as . Combining Assumption 1 and equation (13), the error sequence can be derived as

Let , then we have .

When , .

When , .

When , .

For the repetitive perturbation, , , we obtain from Lemma 2 that the necessary and sufficient condition for  is , where  represents the spectral radius of matrix  and  are the eigenvalues of matrix . Referring to the proof of Theorem 1, the necessary and sufficient condition for system convergence is

When , the system output uniformly converges; that is, the error .

For nonrepetitive perturbations, assume that the perturbations in two successive iterations are bounded; that is, there is a positive real number  such that the perturbations satisfy .

Theorem 3. Consider the SISO linear discrete time-invariant system (1) with nonrepetitive measurement noise , for which Assumptions 1–3 are satisfied. If the chosen learning parameter matrix meets the following conditions when the PD-type accelerated iterative learning control method (7) with association correction is used:, the system’s output converges to a certain neighborhood of the expected trajectory; that is, when , , .

Proof. Equation (24) is still valid. For the nonrepetitive perturbations, there is a positive real number  such that the perturbations in two successive iterations satisfy .
Taking the norm of both sides of equation (24) givesIf , then according to Lemmas 1 and 2, , . Define ; since ,  is bounded. Thus, the above inequality gives .
According to the above analysis, the sufficient condition for system convergence isand the error will converge to a bound, which is .
Theorem 3 is proved.
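The bounded-convergence argument used here has a simple scalar analogue: if the error norm satisfies a contraction plus a bounded noise term, it settles into a ball of radius c/(1 − ρ). A numeric check with illustrative constants (not the paper’s values):

```python
# If ||e_{k+1}|| <= rho * ||e_k|| + c with rho < 1 and c the bounded
# noise contribution, the iteration converges to the bound c / (1 - rho).
rho, c = 0.6, 0.05        # illustrative contraction factor and noise bound
e = 1.0                   # initial error norm
for _ in range(100):
    e = rho * e + c
bound = c / (1 - rho)     # limiting error level, here 0.125
```

After 100 steps the recursion sits at the bound to machine precision, mirroring how the tracking error converges to a neighborhood of zero rather than to zero itself.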

3.4. Case of Undetermined Model with Measurement Noise

If the system model is undetermined and contains measurement noise, so that the system matrices  are all nonzero, equation (5) can be written as . Combined with equation (13), the error sequence can be derived as

Defining , we then have .

When , ;

When , ;

When , .

For the repetitive perturbation , , it follows from Lemma 2 that the sufficient condition for  is , where  and  are the eigenvalues of matrix . The necessary and sufficient condition for system convergence is

For nonrepetitive perturbations, assume that the perturbations in two successive iterations are bounded; that is, there is a positive real number  such that the perturbations satisfy . In all cases, the noise is expressed as , that is, added to the system at the initial time for the particular case.

Theorem 4. Consider the SISO linear discrete time-invariant system (1) with nonrepetitive measurement noise , for which Assumptions 1–3 are satisfied. Assume that the chosen learning parameter matrix meets the following requirements when the accelerated iterative learning control method (7) is used:, then the output of the system converges to a certain neighborhood of the expected trajectory; that is, when , , .

Proof. Equation (31) still holds; taking the norm of both sides of the equation givesIf , then according to Lemmas 1 and 2, , so . Define ; since ,  is bounded, . Thus, the above inequality gives .
We can get the convergence condition such thatand the error will converge to a bound, which is .
Theorem 4 is proved.
Inspired by human associative thinking, this paper proposes a new associative iterative learning control algorithm which, with the help of a kernel function (a monotonically decreasing function), uses the information at the present time to predict and correct future control inputs within the current iteration. The information at the current time corrects the subsequent unlearned times: the closer to the current time, the greater the influence, and the farther away, the smaller. Obviously, the kernel function makes the associative iterative learning algorithm more reasonable. In the theoretical proof of convergence, the kernel function is eliminated, so it is not reflected in the convergence condition. The associative algorithm and the traditional ILC thus share the same convergence conditions, but the simulation results in Section 4 show that the algorithm has a much faster convergence speed than the traditional iterative learning algorithm.

4. Simulation of Numerical Examples

To validate the associative correction learning law proposed in this article, the LDTI SISO system (1) with repetitive parameter perturbation and measurement noise over a finite time period is considered:in which ,

4.1. Case of Determined Model without Measurement Noise

If the system model is determined and there is no measurement noise, i.e., , , then according to Theorem 1, the necessary and sufficient condition for system convergence is

Let the iterative proportional gain , the differential gain , the association factor , the correction factor , and the discrete time . The calculation shows thatsatisfies the convergence condition.

For , the proposed algorithm degenerates to a typical PD-type ILC algorithm, whose convergence condition is , . As expected, an iterative learning algorithm converges more quickly when its spectral radius is smaller.

The expected trajectory for this system is shown in Figure 4, over the interval . The initial state is , , and the initial control input is . Using the accelerated PD-type learning law proposed in this paper, the variation trend from the first to the 50th learning iteration is shown in Figure 5; the process guarantees that  converges to 0. Figure 6 shows the system output at the first, fourth, seventh, and 11th iterations, where the convergence of the algorithm can be seen in more detail.

Let the iterative proportional gain and differential gain remain unaltered throughout the learning process when the typical PD-type learning law is used. The variation trend from the first to the fifth learning iteration is given in Figure 7, which also depicts the variation trend of the acceleration method described in this study. When the permitted error  is specified, the typical PD-type approach requires 13 iterations, whereas the accelerated PD-type iterative learning process requires just six. Given the allowable error , the standard PD-type technique requires 25 iterations, but the accelerated PD-type approach requires just 11. It is straightforward to observe that the PD-type accelerated ILC method suggested in this research greatly accelerates the system’s convergence rate.
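Since the paper’s system matrices and gains appear in the equations above, the following is only a hedged end-to-end sketch of the kind of experiment reported here: a PD-type ILC law driving a hypothetical SISO LDTI plant toward a reference, with the peak tracking error shrinking across iterations. All matrices, gains, and the reference are illustrative, not the paper’s example:

```python
import numpy as np

# Hypothetical SISO LDTI plant x(t+1) = A x(t) + B u(t), y(t) = C x(t).
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
N, iters = 50, 30
yd = np.sin(np.linspace(0, 2 * np.pi, N))       # illustrative reference

def simulate(u):
    x = np.zeros((2, 1))                        # identical initial state
    y = np.zeros(N)                             # every trial (Assumption 1)
    for t in range(N):
        x = A @ x + B * u[t]
        y[t] = (C @ x).item()
    return y

u = np.zeros(N)
errs = []
for k in range(iters):                          # learning axis
    e = yd - simulate(u)
    de = np.diff(e, prepend=e[0])
    u = u + 0.8 * e + 0.1 * de                  # PD-type law, toy gains
    errs.append(np.max(np.abs(e)))
```

With these toy gains the lifted error map is a contraction, so `errs` decays geometrically across iterations, which is the qualitative behavior Figures 5–7 report for the paper’s own system.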

4.2. Case of the Undetermined Model with Measurement Noise

If the system model is undetermined and contains measurement noise, we consider  as well as the other parameters to be nonzero. According to Theorem 4, the sufficient condition for the system output to converge to a neighborhood of the expected trajectory is

The matrix pairs and are selected as

Assume  and  arewhere , , . In the simulation,  and  are generated by a random function, and the measurement noise  is randomly generated. The parameters of algorithm (5) are as follows: iterative proportional gain , differential gain , association factor , correction factor , and discrete time . The simulation indicates thatmeets the convergence condition, while .

The desired signal is the same as before, over the particular time period , and the other parameters are likewise unchanged: system states , , and input  as before. Deploying the faster PD-type ILC designed in this article, the changing trend of  from the first to the 50th iteration is shown in Figure 8; the algorithm ensures that  converges to 0.

To observe the convergence process of the output trajectory, Figure 9 shows comparison plots between the output and the desired trajectory for the first, fourth, seventh, and 11th iterations.

If we take , the aforementioned law reduces to the traditional PD-type ILC law, whose convergence condition is  and . From the spectral radius viewpoint, convergence is considerably faster for the proposed law; therefore, the association correction ILC law proposed in this paper converges faster. The changing trend of  from the first to the 50th iteration under the two algorithms is shown in Figure 10.

Figure 10 shows that the system tracking error does not converge to 0 but remains bounded; Theorem 4 describes the error bound as , . As illustrated in Figure 10, the system’s convergence performance is greatly improved when the PD-type enhanced ILC method developed in this study is used.

According to Table 4, the error value for the P-, D-, PD-, and the proposed accelerated PD-type ILC laws is 1.1217316 in the first iteration. After 15 iterations, the P-type algorithm’s error is 0.062823, the typical D-type law’s error is 0.07538, and the PD algorithm’s error is 0.024335, whereas the error of the suggested faster PD algorithm is 0.003683; the tracking error of all ILC laws decreases progressively as the iteration number increases. Based on the horizontal data in Table 4, the suggested accelerated PD law has the lowest tracking error compared with the other ILC laws (P, D, PD) at the same iteration number [34]. As a result, it is clear from Table 4 that the suggested accelerated PD law has a substantially faster convergence rate than the other standard algorithms.

The autoassociative ILC extends the typical ILC by utilizing current knowledge to make accurate predictions of the input. In contrast with typical ILC, it can be described as follows: in a specific trial, the prelearning times are precorrected using the present data. The approach may reduce the number of trials and thus increase convergence speed. The method suggested in this paper differs from existing schemes for the following reasons:
(1) Although the method in this paper resembles the standard closed-loop iterative learning technique by design, the idea is fundamentally distinct from the original typical PD-type scheme. The typical closed-loop ILC law alters the current control input directly with the error from the previous trial, whereas the technique suggested in this study leverages the current-time error to predict the amount of control required at subsequent times, thereby serving as a precorrection.
(2) Though the correlated ILC law suggested in this study is operationally similar to standard higher-order discrete-time control, there is a modification: the novel iterative learning method offered in this paper corrects the control input values at the corresponding times within the same trial.

5. Conclusions

This paper investigates discrete linear time-invariant systems with parameter perturbation and measurement noise. It proposes sufficient conditions for the convergence of a PD-type ILC law with association correction under four circumstances: parameters determined without measurement noise, parameters undetermined without noise, parameters determined with measurement noise, and parameters undetermined with measurement noise. We have shown that, under the same simulation conditions, the proposed law has a smaller spectral radius, and hence faster convergence, than the traditional PD-type ILC algorithm, and we have proven convergence theoretically with the help of hypervector and spectral radius theory. Numerical simulation demonstrates the efficiency of the proposed control. The results show that the control can track the expected trajectory completely within finite intervals when the system parameters are uncertain; when measurement noise exists, the system’s output converges to a neighborhood of the expected trajectory using the algorithm proposed in this paper. In future studies, the same method can be applied to nonlinear discrete systems with parameter perturbations and measurement noise, and the convergence under arbitrary bounded changes of initial conditions can be derived.

Data Availability

Data are cited in the main manuscript.

Conflicts of Interest

The authors confirm that there are no conflicts of interest existing in submitting this manuscript.

Authors’ Contributions

All authors approved the manuscript for publication.


Acknowledgments

This work was financially supported by the Foundation for Advanced Talents of Xijing University under grant number XJ17B03.