Research Article | Open Access

Volume 2021 |Article ID 6629820 | https://doi.org/10.1155/2021/6629820

Qibing Jin, Youliang Ye, Wu Cai, Zeyu Wang, "Recursive Identification for Fractional Order Hammerstein Model Based on ADELS", Mathematical Problems in Engineering, vol. 2021, Article ID 6629820, 16 pages, 2021. https://doi.org/10.1155/2021/6629820

Recursive Identification for Fractional Order Hammerstein Model Based on ADELS

Academic Editor: Rohit Salgotra
Received: 01 Dec 2020
Revised: 04 Feb 2021
Accepted: 15 Feb 2021
Published: 23 Feb 2021

Abstract

This paper deals with the identification of the fractional order Hammerstein model by using the proposed adaptive differential evolution with local search strategy (ADELS) algorithm together with the steepest descent method and the overparameterization-based auxiliary model recursive least squares (OAMRLS) algorithm. The parameters of the static nonlinear block and the dynamic linear block of the model are all unknown, including the fractional order. The initial values of the parameters are obtained by the proposed ADELS algorithm. The main innovation of ADELS is to adaptively generate the next generation based on the fitness function values within the population through scoring rules and to introduce the Chebyshev map into the newly generated population for local search. Based on the steepest descent method, fractional order identification using these initial values is derived. The remaining parameters are estimated through the OAMRLS algorithm. With the initial values obtained by ADELS, the identification results of the algorithm are more accurate. Simulation results illustrate the significance of the proposed algorithm.

1. Introduction

Currently, systems in industrial processes, such as chemical plants and robotic arms, have become increasingly complex, which requires more accurate mathematical models. Therefore, many parameter identification methods have been studied for system modelling and identification, covering bilinear, linear, and nonlinear systems [1–4]. Chen et al. proposed a recursive identification method to solve the problem of bilinear-parameter model identification [5]. An adaptive data filtering-based gradient iterative algorithm was presented for system parameter identification [6]. In [7], based on Bayesian optimization, a serial two-level decomposition structure was introduced into a deep hybrid model.

As a widely used mathematical model, the nonlinear model has attracted more and more attention. The Hammerstein model is a typical nonlinear model composed of a static nonlinear block and a dynamic linear block. It can capture the characteristics of a process and describe a series of nonlinear processes such as neutralization processes [8, 9], lithium-ion battery thermal systems [10, 11], heat dissipation systems [12], and so on.

Due to the wide application of the Hammerstein model, identifying accurate mathematical models has become a major research direction. At present, several methods that can effectively identify the Hammerstein model have been proposed, such as the least squares algorithm [13] and the maximum likelihood algorithm [14, 15]. There are also improved algorithms based on the traditional ones, such as support vector machines [16, 17] and multi-innovation stochastic gradient methods [18, 19].

In recent years, many scholars have studied the static nonlinear module of the Hammerstein model, hoping to obtain mathematical models with higher accuracy and wider applicability. Several methods to describe the nonlinear part have been proposed, such as radial basis functions [20], neural fuzzy networks [21, 22], polynomials [23], etc. However, these studies rarely focus on the dynamic linear module, which is usually of integer order. Compared with integer-order differential operators, fractional order operators express heredity, memory, and related phenomena more accurately. Fractional calculus extends the order of calculus from integers to real and even complex numbers. The additional degrees of freedom enable fractional order systems to describe some complex physical phenomena more accurately, such as fuel cells [24], voltammetric E-tongue systems [25], electric vehicle Li-ion batteries [26], etc. Experiments show that fractional order systems can model actual systems more accurately. Therefore, research on the identification of the fractional order Hammerstein model is very meaningful.

At present, several identification methods for fractional order systems have been studied. The enhanced response sensitivity approach can reduce the sensitivity of the identified parameters to measurement noise [27]. Block pulse functions can identify fractional order systems with initial conditions [28]. The recursive instrumental variable algorithm has acceptable accuracy for fractional order models in the absence of noise [29]. However, there are few studies on the identification of the fractional order Hammerstein model. In [30], the parameters of the fractional Hammerstein model were identified by using the particle swarm optimization algorithm, which does not consider the internal relationship between system parameters. In [31], the Levenberg–Marquardt algorithm combined with two decomposition principles was developed to identify the parameters of the fractional Hammerstein model. However, when the system is disturbed by noise, the identification results are not satisfactory.

The intelligent optimization algorithm has attracted increasing attention from scholars because of its generality, simple parameter setting, and easy programming, and it has been applied to the field of parameter optimization. In [32], a modified particle swarm optimization algorithm was presented and applied to the parameter identification of fractional order PID controllers. In [33], an innovative metaheuristic called the hybrid gray wolf optimization algorithm was proposed and applied to search for the best parameters of a fuel cell. However, the results of intelligent optimization algorithms may be inaccurate because they fall into local optima.

Based on the above background, this paper adopts a new method to parameterize the nonlinear and linear coefficients and the fractional order of the fractional Hammerstein model, which has rarely been considered before. The identification of each parameter has a corresponding mathematical derivation. In the proposed method, the ADELS algorithm adds a local search strategy and improves the setting of the algorithm parameters. The parameters of the fractional order Hammerstein model, including the fractional order, are identified. The identification result provides a relatively accurate initial value for the subsequent algorithms and mitigates the dependence of most algorithms on the initial value. Then, a fractional order identification method is proposed by using the principle of steepest gradient descent, and the auxiliary model recursive least squares algorithm is used to estimate the coefficients of the fractional order Hammerstein model. Numerical simulations prove the effectiveness of the proposed methods.

The main contributions are to propose an adaptive differential evolution with local search strategy (ADELS) algorithm for identifying the initial parameters of the fractional order Hammerstein system and to develop new recursive identification methods, based on the auxiliary model, for identifying the fractional order and the coefficients of the fractional order nonlinear system. Compared with classic DE, the proposed ADELS algorithm has higher estimation accuracy because of its adaptive strategy and the introduction of the Chebyshev sequence. The self-adaptive operators and mutation strategies let each individual take a search strategy that matches its fitness function value, and the Chebyshev sequence has the characteristics of periodicity, randomness, and ergodicity, which makes up for the tendency of DE to fall into local optima. Based on the auxiliary model, a recursive algorithm for the fractional order Hammerstein model is given by using overparameterized least squares and the gradient descent algorithm; both the fractional order and the coefficients of the system are considered. The proposed ADELS algorithm can be applied to many fields such as robust control, multi-objective optimization, and economic dispatch problems [34–36]. The proposed recursive identification can be extended to other systems such as linear, bilinear, and nonlinear systems [37–39].

In this paper, the mathematical background of identifying the fractional Hammerstein model is discussed in Section 2. In Section 3, an improved DE algorithm is proposed. The overparameterized auxiliary model recursive least squares algorithm and fractional order identification algorithm are discussed in Section 4. In Section 5, numerical simulations prove the significance of the proposed method. Finally, Section 6 gives some conclusions.

2. Mathematical Background

2.1. Fractional Differentiation

Fractional calculus is the generalization of integer-order calculus. So far, many definitions of fractional calculus have been given, and in some cases different definitions are completely equivalent. The rationality of these definitions has been proven in the literature. Among them, three definitions are widely used: the Grünwald–Letnikov (GL), Riemann–Liouville (RL), and Caputo definitions [40].

In this article, considering the ease of calculation, the GL definition is used [41]. The α-order fractional derivative of f(t) is defined as

$${}_{a}D_{t}^{\alpha} f(t) = \lim_{h \to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{[(t-a)/h]} (-1)^{j} \binom{\alpha}{j} f(t - jh),$$

where a and t are the lower and upper limits, h is the sampling time, and [.] stands for the integer part. Newton's binomial coefficient can be calculated by using Euler's Gamma function as

$$\binom{\alpha}{j} = \frac{\Gamma(\alpha + 1)}{\Gamma(j + 1)\,\Gamma(\alpha - j + 1)}.$$

By using $\omega_{j}^{\alpha} = (-1)^{j} \binom{\alpha}{j}$ to replace the binomial coefficient, the fractional derivative of f(t) can be approximated as [42]

$$D^{\alpha} f(t) \approx \frac{1}{h^{\alpha}} \sum_{j=0}^{[t/h]} \omega_{j}^{\alpha} f(t - jh),$$

where $D^{\alpha}$ denotes ${}_{a}D_{t}^{\alpha}$ when a = 0.
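Numerically, the GL approximation reduces to a discrete convolution whose weights obey the recursion $\omega_0^{\alpha} = 1$, $\omega_j^{\alpha} = \omega_{j-1}^{\alpha}\,(1 - (\alpha+1)/j)$. A minimal sketch (function names are illustrative, not from the paper):

```python
def gl_weights(alpha, n):
    """Weights w_j = (-1)^j * C(alpha, j) via the stable recursion
    w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w


def gl_derivative(samples, alpha, h):
    """Approximate the order-alpha GL derivative at each sample point,
    given samples f(0), f(h), ..., f(n*h) and step size h."""
    n = len(samples) - 1
    w = gl_weights(alpha, n)
    return [sum(w[j] * samples[k - j] for j in range(k + 1)) / h ** alpha
            for k in range(n + 1)]
```

For α = 1 the weights collapse to (1, −1, 0, …), recovering the backward difference quotient, and for α = 0 the operator is the identity, as expected.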

2.2. Differential Evolution

The differential evolution (DE) algorithm has good stability and global search ability. The population of each generation of DE is composed of NP parameter vectors of dimension D. Each individual is represented as $x_{i,G} = (x_{i,G}^{1}, \dots, x_{i,G}^{D})$, $i = 1, 2, \dots, NP$, where i is the index of the individual in the population, G denotes the number of evolutionary generations, and NP is the size of the population.

In DE, random initial populations are generated by a uniform probability distribution. After initialization, the population evolves by using three steps: mutation, crossover, and selection.

2.2.1. Mutation

For each target vector $x_{i,G}$ at generation G, the mutation vector is generated as

$$v_{i,G+1} = x_{r_1,G} + F\,(x_{r_2,G} - x_{r_3,G}),$$

where $x_{r_1,G}$, $x_{r_2,G}$, and $x_{r_3,G}$ are different individuals randomly selected from the population except the individual $x_{i,G}$, and F is the mutation operator that defines the amplitude of the difference vector. When the generated mutation vector exceeds the boundary value, it is replaced by a newly generated vector according to the boundary update rule. Some other mutation strategies have also been widely used in the DE algorithm [43–45].

2.2.2. Crossover

To increase the diversity of trial vectors, the crossover operation is introduced as

$$u_{i,G+1}^{j} = \begin{cases} v_{i,G+1}^{j}, & \text{if } \mathrm{rand}_{j} \le CR \text{ or } j = j_{\mathrm{rand}}, \\ x_{i,G}^{j}, & \text{otherwise}, \end{cases}$$

where CR is the crossover operator that controls the probability of accepting the mutation in each dimension, $\mathrm{rand}_{j}$ is a random number generated by a uniform distribution in [0, 1], j = 1, …, D indexes the dimension of the individual, and $j_{\mathrm{rand}}$ is a uniformly distributed random integer in {1, …, D} that guarantees at least one component is taken from the mutation vector.

2.2.3. Selection

The trial vector and the target vector are compared according to the greedy criterion, and the vector with the better fitness value enters the next generation population. The selection operation for minimizing the fitness function f can be expressed as

$$x_{i,G+1} = \begin{cases} u_{i,G+1}, & \text{if } f(u_{i,G+1}) \le f(x_{i,G}), \\ x_{i,G}, & \text{otherwise}. \end{cases}$$

Repeat the three steps above until the termination condition is met, such as the total number of iterations or the accuracy requirement of the algorithm.
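The three steps above can be sketched as a minimal DE/rand/1/bin loop; the parameter defaults and the sphere objective in the usage below are illustrative:

```python
import random


def de_minimize(f, bounds, np_=20, F=0.5, CR=0.9, gens=100, seed=1):
    """Classic DE: mutation (rand/1), binomial crossover, greedy selection."""
    rng = random.Random(seed)
    D = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            # mutation: v = x_r1 + F * (x_r2 - x_r3), clipped to the bounds
            v = [min(max(pop[r1][d] + F * (pop[r2][d] - pop[r3][d]),
                         bounds[d][0]), bounds[d][1]) for d in range(D)]
            # binomial crossover with a guaranteed dimension j_rand
            j_rand = rng.randrange(D)
            u = [v[d] if rng.random() <= CR or d == j_rand else pop[i][d]
                 for d in range(D)]
            # greedy selection
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]
```

For example, `de_minimize(lambda x: sum(t * t for t in x), [(-100.0, 100.0)] * 3)` drives the sphere function toward its minimum at the origin.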

2.3. Fractional Order Hammerstein Model

The fractional order Hammerstein model is shown in Figure 1.

The system input is u(t), v(t) is the Gaussian white noise, x(t) is the noise-free output, and y(t) = x(t) + v(t) is the measured output. The model consists of a static nonlinear part and a dynamic linear part. The fractional order differential equation of the linear part can be expressed as

$$\sum_{i=0}^{n} a_{i} D^{i\alpha} x(t) = \sum_{j=0}^{m} b_{j} D^{j\alpha} \bar{u}(t).$$

The fractional order transfer function of the linear part is defined as

$$G(s) = \frac{\sum_{j=0}^{m} b_{j} s^{j\alpha}}{\sum_{i=0}^{n} a_{i} s^{i\alpha}},$$

where $a_{i}$ and $b_{j}$ are the coefficients and α is the commensurate fractional order. They are unknown and need to be identified. Without loss of generality, assume $a_{n} = 1$. The input $\bar{u}(t)$ of the linear block is generated by the system input through the nonlinear block, which can be expressed as

$$\bar{u}(t) = \sum_{k=1}^{r} c_{k} f_{k}(u(t)),$$

where $c_{k}$ are unknown coefficients to be identified and $f_{k}(\cdot)$ are a series of known basis functions.

In summary, the fractional Hammerstein system discussed in this article can be expressed as

$$y(t) = G(s) \sum_{k=1}^{r} c_{k} f_{k}(u(t)) + v(t).$$

3. Adaptive Differential Evolution with Local Search Strategy

The DE algorithm is an intelligent search algorithm that works through cooperation and competition among individuals in the population. It follows the population-based global search strategy of evolutionary algorithms. As an efficient parallel search algorithm, DE has strong global convergence and robustness, and it is worthy of theoretical and applied research.

The adaptive differential evolution with local search strategy (ADELS) algorithm proposed in this paper introduces three improvements. The mutation and crossover operators in the DE algorithm control the amplification of the difference vector and the probability that the trial vector inherits from the mutation vector, and thus play a pivotal role. Therefore, the operators in the proposed algorithm adapt according to the individual fitness values, which improves the optimization efficiency of the algorithm. The choice of mutation strategy also affects the accuracy of parameter identification; in this paper, different mutation strategies are selected according to the fitness value, which further improves the efficiency of the algorithm. Finally, the local search ability of the algorithm is improved by using the Chebyshev chaotic sequence, which makes up for the tendency of the DE algorithm to fall into local optima.

3.1. Parameters Adaption

The mutation operator and crossover operator affect the optimization ability and convergence speed of the algorithm. Generally, these operators are fixed constants. In this paper, each individual in each generation receives a score $S_{i,G} \in [0, 1]$ based on its fitness value through a nonlinear scoring rule, and the operators are determined from the score of each individual, which makes them adaptive. The scoring rule for a minimized objective function is built from $f_{\min,G}$, $f_{\max,G}$, and $f_{\mathrm{avg},G}$, the minimum, maximum, and average fitness function values in the current generation population, together with $f_{i,G}$, the fitness function value of individual i in generation G; individuals with fitness function values below the average get higher scores. Then, the mutation operator and crossover operator can be expressed as

$$F_{i,G} = F_{\max} - (F_{\max} - F_{\min})\,S_{i,G},$$
$$CR_{i,G} = CR_{\max} - (CR_{\max} - CR_{\min})\,S_{i,G},$$

where $F_{i,G}$ and $CR_{i,G}$ are the i-th mutation operator and the i-th crossover operator at generation G; $F_{\min}$ and $F_{\max}$ are the limits of the mutation operator; and $CR_{\min}$ and $CR_{\max}$ are the limits of the crossover operator.

It can be seen from the above formulas that the value of each operator changes with the individual's fitness score. When the individual's fitness value is low, the score is high according to equation (13), and the values of $F_{i,G}$ and $CR_{i,G}$ decrease. When the individual's fitness value is higher than the average, the score is low, and the values of $F_{i,G}$ and $CR_{i,G}$ increase accordingly. An individual with a low fitness value is close to the optimal solution, so the required mutation range and crossover probability are reduced, and the operators are decreased accordingly to refine the optimal value. On the contrary, when the individual's fitness value is high, the operators are increased to search for the optimal solution over a larger range.

3.2. Mutation Strategy

In each iteration, the individuals are sorted according to their fitness values, and the population is divided into two subpopulations with different fitness levels. The two subpopulations adopt different mutation strategies:

$$v_{i,G+1} = x_{\mathrm{best},G} + F_{i,G}\,(x_{r_1,G} - x_{r_2,G}), \quad f_{i,G} \le f_{\mathrm{avg},G},$$
$$v_{i,G+1} = x_{i,G} + F_{i,G}\,(x_{\mathrm{best},G} - x_{i,G}) + F_{i,G}\,(x_{r_1,G} - x_{r_2,G}), \quad f_{i,G} > f_{\mathrm{avg},G},$$

where $x_{r_1,G}$ and $x_{r_2,G}$ are different individuals randomly selected from the population except for the individual $x_{i,G}$, and $x_{\mathrm{best},G}$ represents the best individual of the previous generation population. From equation (16), it is worth noticing that at least four individuals in the population are needed.

3.3. Local Search Strategy

To avoid the algorithm falling into a local optimum, a Chebyshev chaotic sequence is introduced to perform a local search around the optimal individual. Chaotic systems have the characteristics of periodicity, randomness, and ergodicity, which can increase the diversity of the population. The family of Chebyshev polynomials can be expressed as

$$T_{m}(x) = \cos(m \arccos x), \quad x \in [-1, 1].$$

Then, the Chebyshev map can be expressed as

$$z_{k+1} = \cos(m \arccos z_{k}),$$

where m is the order of the polynomial. In this paper, to obtain better ergodicity of the mapping, the order of the Chebyshev map is set to m = 4, with the initial value $z_{0} \in (-1, 1)$.
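The Chebyshev map can be iterated directly; the sketch below generates a chaotic sequence confined to [−1, 1] (the initial value in the usage is illustrative):

```python
import math


def chebyshev_sequence(z0, m=4, n=50):
    """Iterate the Chebyshev map z_{k+1} = cos(m * arccos(z_k)) on [-1, 1]."""
    seq = [z0]
    for _ in range(n - 1):
        seq.append(math.cos(m * math.acos(seq[-1])))
    return seq
```

Because the map's output is always a cosine, every iterate stays inside [−1, 1], so the recursion never leaves the domain of `acos`.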

Then, a dual search strategy is introduced to search for the best gene in each dimension of the optimal individual. Owing to the randomness and ergodicity of chaos, chaotic mapping improves the diversity of the population better than a uniform probability distribution. In the local search, two candidate individuals $x'_{\mathrm{best}}$ and $x''_{\mathrm{best}}$ are obtained by perturbing the optimal individual with the chaotic sequence, and they are compared with the current optimal individual based on the fitness value. If a new optimal individual is obtained, it is randomly copied to ten individuals of the next generation population.

Because of the adaptive operators, the different mutation strategies, and the local search based on the Chebyshev chaotic map, the optimization ability and convergence speed of the algorithm are significantly improved, as shown by the following numerical simulations. The effectiveness of the ADELS algorithm is demonstrated in the numerical simulations in Section 5. The proposed ADELS procedure is summarized in Algorithm 1.

(1) Define the objective function f(x);
(2) Initialize the parameters of the Chebyshev map: m and $z_{0}$;
(3) Initialize the individuals $x_{i,1}$, i = 1, …, NP;
(4) Evaluate all the individuals in the population by the objective function;
(5) Initialize the iteration number k = 1;
(6) While (k < max number of iterations N)
(7) For each individual
(8) Update the operators adaptively (equations (13)–(15));
(9) Obtain the mutation vector by mutation (equation (16));
(10) If the generated mutation vector exceeds the boundary, generate a new mutation vector randomly until it is within the boundary;
(11) Obtain the trial vector by equation (7);
(12) Obtain the best individual by greedy selection (equation (8));
(13) Find the current best according to the local search strategy (equations (17)–(19));
(14) If a new optimal individual is obtained, randomly copy it to ten individuals of the next generation population;
(15) End
(16) k = k + 1;
(17) End while;
(18) Postprocess the results and visualize them.
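Algorithm 1 can be sketched in code. Several details below are simplifying assumptions, not the paper's exact formulas: a linear normalized score stands in for the nonlinear scoring rule, the subpopulations are split at the mean fitness, the local-search radius of 10% of each variable's range is arbitrary, and the copy-to-ten step is omitted for brevity:

```python
import math
import random


def adels_minimize(f, bounds, np_=40, gens=100,
                   f_lim=(0.1, 0.9), cr_lim=(0.1, 0.9), m=4, seed=1):
    """Sketch of ADELS: adaptive F/CR from a fitness score, two mutation
    strategies, and a Chebyshev-map local search around the best individual."""
    rng = random.Random(seed)
    D = len(bounds)
    clip = lambda x: [min(max(x[d], bounds[d][0]), bounds[d][1])
                      for d in range(D)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    z = rng.uniform(-1.0, 1.0)          # Chebyshev chaotic state
    for _ in range(gens):
        b = min(range(np_), key=lambda i: fit[i])
        fmin, fmax = min(fit), max(fit)
        favg = sum(fit) / np_
        for i in range(np_):
            # score in [0, 1]: lower fitness -> higher score (stand-in rule)
            s = (fmax - fit[i]) / (fmax - fmin) if fmax > fmin else 0.5
            Fi = f_lim[1] - (f_lim[1] - f_lim[0]) * s
            CRi = cr_lim[1] - (cr_lim[1] - cr_lim[0]) * s
            r1, r2 = rng.sample([j for j in range(np_) if j != i], 2)
            if fit[i] <= favg:   # better subpopulation: exploit around best
                v = [pop[b][d] + Fi * (pop[r1][d] - pop[r2][d])
                     for d in range(D)]
            else:                # worse subpopulation: move toward best
                v = [pop[i][d] + Fi * (pop[b][d] - pop[i][d])
                     + Fi * (pop[r1][d] - pop[r2][d]) for d in range(D)]
            v = clip(v)
            jr = rng.randrange(D)
            u = [v[d] if rng.random() <= CRi or d == jr else pop[i][d]
                 for d in range(D)]
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
        # Chebyshev local search: perturb each gene of the current best
        b = min(range(np_), key=lambda i: fit[i])
        for d in range(D):
            z = math.cos(m * math.acos(z))
            cand = list(pop[b])
            cand[d] += 0.1 * z * (bounds[d][1] - bounds[d][0])
            cand = clip(cand)
            fc = f(cand)
            if fc < fit[b]:
                pop[b], fit[b] = cand, fc
    b = min(range(np_), key=lambda i: fit[i])
    return pop[b], fit[b]
```

The design point worth noting is the split: individuals already near the optimum draw small F/CR values and exploit around the best individual, while poor individuals draw large values and explore, matching the behavior described in Section 3.1.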

4. The Identification Algorithm of Fractional Order Hammerstein Model

In the fractional order Hammerstein model, the polynomial coefficients of the static nonlinear block, the coefficients of the dynamic linear block, and the fractional order all need to be determined, which has rarely been considered before. In this paper, a set of input-output data is first used to obtain the initial values of the coefficients and the fractional order through the proposed ADELS algorithm. Then, these initial values are used to obtain the final parameter identification results of the fractional order Hammerstein model through the overparameterization-based auxiliary model recursive least squares (OAMRLS) algorithm and the steepest descent method.

4.1. Parameter Estimation Using Overparameterized Auxiliary Model Recursive Least Squares (OAMRLS) Algorithm

The OAMRLS algorithm requires initial values of the parameters, and these have been calculated by the ADELS algorithm. According to equations (1) and (2), the fractional Hammerstein system can be expressed in discrete form through the GL approximation.

The input-output relation can be written in the regression form

$$y(t) = \varphi^{\mathrm{T}}(t)\theta + v(t),$$

where $x(t) = \varphi^{\mathrm{T}}(t)\theta$ is the noise-free output of the system and $\varphi(t)$ is the information vector built from the input-output data and their fractional-order differences.

The estimated output is given by $\hat{x}(t) = \hat{\varphi}^{\mathrm{T}}(t)\hat{\theta}(t)$, where $\hat{\theta}(t)$ denotes the estimate of the parameter vector θ at time t.

It should be noted that the information vector $\varphi(t)$ contains the unknown inner variable $\bar{u}(t)$, which makes it impossible to use the least squares algorithm or the gradient descent method directly. In view of this, an auxiliary model is used to estimate the unknown variable $\bar{u}(t)$. It can be seen from Figure 2 that u(t) is used as the input to build the auxiliary model. The main idea of the auxiliary model is that the real output of the system is replaced by the output of the auxiliary model; then, the parameter identification problem can be solved with the measurable input-output data and the auxiliary model output. The parameters of the auxiliary model eventually approximate the real ones after iteration.

According to Figure 2, the auxiliary system contains a nonlinear block and a linear block mirroring the real system. The auxiliary model regression form can be written as

$$\hat{x}(t) = \hat{\varphi}^{\mathrm{T}}(t)\hat{\theta}(t-1),$$

where the estimate $\hat{\varphi}(t)$ of $\varphi(t)$ is used as the auxiliary model information vector and the parameter estimate $\hat{\theta}(t-1)$ is used as the auxiliary model parameter vector.

Then, the output of the auxiliary model is used to represent the estimate of the noise-free output x(t).

Define the criterion function as

$$J(\theta) = \sum_{j=1}^{t} \left[ y(j) - \hat{\varphi}^{\mathrm{T}}(j)\theta \right]^{2}.$$

Then, the estimate $\hat{\theta}(t)$ can be obtained by minimizing the criterion function. Provided that the inverse exists, $\hat{\theta}(t)$ is given by the least squares estimate

$$\hat{\theta}(t) = \left[ \sum_{j=1}^{t} \hat{\varphi}(j)\hat{\varphi}^{\mathrm{T}}(j) \right]^{-1} \sum_{j=1}^{t} \hat{\varphi}(j) y(j).$$

Then, a recursive version is given by

$$\hat{\theta}(t) = \hat{\theta}(t-1) + P(t)\hat{\varphi}(t)\left[ y(t) - \hat{\varphi}^{\mathrm{T}}(t)\hat{\theta}(t-1) \right],$$
$$P^{-1}(t) = P^{-1}(t-1) + \hat{\varphi}(t)\hat{\varphi}^{\mathrm{T}}(t),$$

where the adaptation gain matrix P is generally initialized as $P(0) = p_{0} I$ with a large constant $p_{0}$.
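The recursion above is the standard recursive least squares update; a minimal sketch follows (the two-parameter data-generating model in the usage is illustrative):

```python
def rls_identify(phis, ys, p0=1e6):
    """Recursive least squares: for each (phi, y) pair,
    theta <- theta + P*phi*(y - phi'theta) / (1 + phi'P*phi),
    started from theta(0) = 0 and P(0) = p0 * I."""
    n = len(phis[0])
    theta = [0.0] * n
    P = [[p0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for phi, y in zip(phis, ys):
        Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = 1.0 + sum(phi[i] * Pphi[i] for i in range(n))
        gain = [x / denom for x in Pphi]
        err = y - sum(phi[i] * theta[i] for i in range(n))
        theta = [theta[i] + gain[i] * err for i in range(n)]
        # P <- P - gain * (phi' P); P stays symmetric
        P = [[P[i][j] - gain[i] * Pphi[j] for j in range(n)] for i in range(n)]
    return theta
```

With a large $p_{0}$ and noise-free data, the recursion converges to the batch least squares solution up to a bias of order $1/p_{0}$.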

Because of the overparameterization, to ensure the uniqueness of the parameter identification, the first coefficient of the nonlinear block can be assumed to be 1 without loss of generality, i.e., $c_{1} = 1$. Then, the values of $b_{j}$ and $c_{k}$ can be recovered from the estimated products.

Based on the identification result of the ADELS algorithm, the OAMRLS algorithm cooperates with the following order identification method to complete the identification of the system parameters.

4.2. Fractional Order Identification Based on Steepest Descent Method

The initial value of the fractional order α has been given by the ADELS algorithm; then, iterative optimization of the fractional order is performed according to the steepest descent method. Define the criterion function as

$$J(\alpha) = \frac{1}{2} \sum_{j=1}^{t} \left[ y(j) - \hat{y}(j, \alpha) \right]^{2}.$$

The criterion function can be minimized by the steepest descent method:

$$\hat{\alpha}_{k+1} = \hat{\alpha}_{k} - \mu_{k} \left. \frac{\partial J(\alpha)}{\partial \alpha} \right|_{\alpha = \hat{\alpha}_{k}},$$

where the step size satisfies $\mu_{k} > 0$ and $\partial J(\alpha)/\partial \alpha$ is the partial derivative of J with respect to α, obtained by differentiating the model output with respect to the order.

According to formula (10), the sensitivity of the output with respect to the order is defined through the linear block. The denominator polynomial can be replaced by its rational approximation [46, 47], and the required inverse Laplace transform can then be evaluated term by term. Consequently, the partial derivative $\partial J(\alpha)/\partial \alpha$ can be obtained from equations (39)–(45).
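The order update follows the generic steepest-descent pattern. The sketch below fits a scalar order by gradient descent; a central-difference gradient stands in for the analytic derivative of equations (39)–(45), and the power-law model in the usage is purely illustrative:

```python
def fit_order(ts, ys, model, a0=0.5, mu=0.05, iters=300):
    """Steepest descent on J(a) = sum (y - model(t, a))^2 for a scalar
    order a, using a central-difference gradient."""
    eps = 1e-6
    a = a0
    cost = lambda a: sum((y - model(t, a)) ** 2 for t, y in zip(ts, ys))
    for _ in range(iters):
        grad = (cost(a + eps) - cost(a - eps)) / (2.0 * eps)
        a -= mu * grad
    return a
```

For example, fitting the exponent of $y = t^{a}$ from noise-free samples generated with a = 0.7 recovers the true order when the step size keeps the iteration stable.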

Combined with the ADELS and OAMRLS algorithms, the identification process of the fractional Hammerstein model is summarized in Algorithm 2.

5. Experimental Results

5.1. Benchmark Functions

Firstly, 15 benchmark functions are used to verify the effectiveness of the ADELS algorithm, and the results are compared with other intelligent optimization algorithms proposed in recent years. These functions include unimodal and multimodal functions. The search ranges of the functions are shown in Table 1.


Function | Range
F1 | [−100, 100]
F2 | [−10, 10]
F3 | [−100, 100]
F4 | [−500, 500]
F5 | [−10, 10]
F6 | [−100, 100]
F7 | [−100, 100]
F8 | [−100, 100]
F9 | [−100, 100]
F10 | [−100, 100]
F11 | [−32, 32]
F12 | [−100, 100]
F13 | [−100, 100]
F14 | [−100, 100]
F15 | [−5.12, 5.12]

In this paper, the ADELS algorithm will be compared with three algorithms including JADE [48], GWO [49], and WOA [50]. Their parameter settings are given in Table 2.


Algorithm | Parameter settings
JADE | CR = 0.5, F = 0.5, size = 100
GWO | size = 50
WOA | size = 50
ADELS | size = 50

In this section, the population size of the four algorithms is set to 50, the number of iterations is 500, and the simulation dimensions of the benchmark function are 10 and 30. The simulation results are shown in Tables 3 and 4.


F | JADE Ave | JADE Std | GWO Ave | GWO Std | WOA Ave | WOA Std | ADELS Ave | ADELS Std

F1 | 2.55E–22 | 2.43E–22 | 5.46E–33 | 4.78E–33 | 3.35E–86 | 2.52E–83 | 0 | 0
F2 | 7.65E–11 | 5.09E–11 | 7.26E–20 | 6.93E–20 | 2.13E–54 | 5.88E–52 | 1.80E–190 | 0
F3 | 2.01E–22 | 1.98E–22 | 0.4587 | 0.20 | 0.086 | 6.7E–02 | 0 | 0
F4 | −11684.98 | 142.47 | −6281.58 | 61.64 | −11535.27 | 1685.05 | −12391.83 | 76.85
F5 | 1.96E–21 | 4.00E–21 | 3.46E–32 | 7.43E–32 | 3.12E–85 | 2.68E–84 | 0 | 0
F6 | 1.01E–15 | 9.11E–16 | 1.14E–27 | 2.34E–27 | 3.15E–78 | 7.32E–79 | 0 | 0
F7 | 17.69 | 2.06 | 26.67 | 0.87 | 27.54 | 0.33 | 0.2391 | 19.20
F8 | 2.3E–02 | 0.10 | 0.94 | 0.22 | 0.49 | 0.28 | 1.50E–32 | 5.62E–48
F9 | 1.78E–17 | 6.95E–18 | 3.52E–30 | 1.63E–29 | 1.79E–80 | 8.55E–82 | 0 | 0
F10 | 5.25E–21 | 5.43E–21 | 7.65E–33 | 4.43E–33 | 2.44E–83 | 1.12E–80 | 0 | 0
F11 | 3.05E–12 | 4.89E–12 | 4.42E–14 | 4.53E–15 | 3.73E–15 | 2.55E–15 | 1.15E–15 | 3.63E–15
F12 | −675.49 | 10.04 | −476.51 | 66.40 | −870.00 | 1.02E–12 | −870.00 | 3.61E–13
F13 | 9.60E–10 | 4.28E–09 | 1.4E–03 | 7.8E–03 | 5.1E–03 | 0.0078 | 0 | 0
F14 | 9.11E–12 | 6.81E–13 | 3.68E–11 | 5.30E–12 | 0 | 0 | 0 | 0
F15 | 26.51 | 3.41 | 2.69 | 3.38 | 0 | 0 | 0 | 0


F | JADE Ave | JADE Std | GWO Ave | GWO Std | WOA Ave | WOA Std | ADELS Ave | ADELS Std

F1 | 9.16E–86 | 3.47E–85 | 8.14E–70 | 3.06E–69 | 8.55E–89 | 2.07E–88 | 0 | 0
F2 | 6.74E–56 | 2.22E–55 | 2.88E–40 | 2.65E–40 | 2.43E–57 | 9.34E–57 | 5.77E–191 | 0
F3 | 1.46E–04 | 1.78E–04 | 2.56E–06 | 7.58E–07 | 1.99E–04 | 2.2E–04 | 0 | 0
F4 | −3542.12 | 672.55 | −2806.47 | 322.09 | −3439.87 | 567.21 | −4189.83 | 60.45
F5 | 2.74E–88 | 1.22E–87 | 1.01E–68 | 4.31E–68 | 1.34E–89 | 4.04E–89 | 0 | 0
F6 | 1.42E–80 | 6.29E–80 | 1.94E–64 | 4.90E–64 | 1.79E–84 | 5.36E–84 | 0 | 0
F7 | 6.67 | 6.38E–01 | 6.53 | 5.31E–01 | 6.60 | 4.79E–01 | 1.99E–03 | 2.61E–03
F8 | 6.81E–02 | 1.01E–01 | 1.02E–01 | 7.3E–02 | 4.77E–02 | 8.06E–02 | 1.50E–32 | 5.62E–48
F9 | 5.18E–85 | 2.15E–84 | 2.08E–67 | 4.60E–67 | 1.58E–84 | 3.50E–84 | 0 | 0
F10 | 6.55E–87 | 2.32E–86 | 7.91E–69 | 1.83E–68 | 6.55E–87 | 2.91E–86 | 0 | 0
F11 | 3.02E–15 | 2.13E–15 | 6.75E–15 | 1.74E–15 | 3.73E–15 | 2.47E–15 | 4.80E–14 | 1.09E–14
F12 | −90.00 | 0 | −82.63 | 7.16 | −90.00 | 0 | −90.00 | 0
F13 | 6.80E–02 | 1.02E–01 | 1.8E–02 | 2.7E–02 | 5.59E–02 | 8.67E–02 | 3.11E–02 | 2.67E–02
F14 | 0 | 0 | 2.16E–11 | 1.85E–11 | 0 | 0 | 0 | 0
F15 | 0 | 0 | 5.99E–01 | 1.85 | 0 | 0 | 0 | 0

The convergence curves of ADELS, GWO, WOA, and JADE are shown in Figures 3 and 4; ADELS is highly competitive with the other metaheuristic algorithms due to its adaptive operators and its local search with the Chebyshev chaotic map.

The unimodal functions can be used to evaluate the exploitation capability of an optimization algorithm, whereas the multimodal functions are very effective for evaluating its exploration capability. The results reported in Tables 3 and 4 indicate that the exploitation and exploration capabilities of ADELS are significantly improved compared with the other algorithms.

5.2. Identification of Fractional Hammerstein Model

In this section, several different algorithms will be compared to prove the effectiveness of the proposed algorithm. The fractional Hammerstein model is presented as follows:

The parameter vector to be identified is

The input u(t) is a persistently exciting signal sequence with zero mean and unit variance, and v(t) is stochastic Gaussian noise with zero mean. The output y(t) is then generated by the corresponding fractional Hammerstein model.

The ADELS algorithm is used to estimate the initial parameter values of the nonlinear and linear parts of the system. The population size of the algorithm is set to 40, the minimum and maximum values of the mutation operator are 0.1 and 0.9, the minimum and maximum values of the crossover operator are 0.1 and 0.9, and the number of iterations of the algorithm is set to 200. The parameters estimated by the optimization algorithm converge to the true values, and the relative quadratic error is defined as

$$\mathrm{Rqe} = \frac{\lVert \hat{\theta} - \theta \rVert^{2}}{\lVert \theta \rVert^{2}},$$

where θ is the true parameter vector and $\hat{\theta}$ is the estimated parameter vector. The simulation results compared with other approaches are shown in Table 5. They reveal that more accurate initial values can be obtained by ADELS than by other heuristic algorithms.


No. | Approach | Parameter estimates | Rqe

1 | GWO | 1.0863, 1.7769, 1.8239, 1.8615, 0.5159, 0.2947, 0.0932, 0.2841 | 0.1459
2 | JADE | 0.3828, 1.9053, 1.6346, 1.5920, 0.5399, 0.2728, 0.0817, 0.2603 | 0.3875
3 | ADELS | 1.1730, 1.5398, 1.8516, 1.7289, 0.5065, 0.2882, 0.0950, 0.2970 | 0.0539
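The relative quadratic error can be computed directly, assuming the squared-norm-ratio definition:

```python
def rqe(theta_true, theta_hat):
    """Relative quadratic error: ||theta_hat - theta||^2 / ||theta||^2."""
    num = sum((a - b) ** 2 for a, b in zip(theta_true, theta_hat))
    den = sum(a ** 2 for a in theta_true)
    return num / den
```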

Simulation results obtained by different algorithms are given in Table 6. For the same initial values obtained by ADELS, the accuracy of OAMRLS is effectively improved compared with KTSG.


No. | Approach | Parameter estimates | Rqe

1 | GWO & OAMRLS | 1.3233, 1.4821, 1.8995, 1.7173, 0.5039, 0.2912, 0.0953, 0.3022 | 0.0133
2 | JADE & OAMRLS | 1.3699, 1.4582, 1.9044, 1.7339, 0.5065, 0.2914, 0.0946, 0.3063 | 0.0332
3 | ADELS & KTSG | 1.1932, 1.5674, 1.8867, 1.7289, 0.5065, 0.2882, 0.0950, 0.2969 | 0.0486
4 | ADELS & OAMRLS | 1.3060, 1.4899, 1.8977, 1.7100, 0.5029, 0.2912, 0.0956, 0.3007 | 0.0069

The performance of the intelligent optimization algorithms in searching for the initial values can be measured by the root mean square error (RMSE) and the mean square error (MSE), defined as

$$\mathrm{MSE} = \frac{1}{N} \sum_{t=1}^{N} \left[ y(t) - \hat{y}(t) \right]^{2}, \qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}},$$

where y(t) is the system output and $\hat{y}(t)$ is the estimated system output. The output estimation errors of the different intelligent optimization algorithms are depicted in Figure 5.
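These error measures translate directly into code:

```python
def mse(y, y_hat):
    """Mean square error between measured and estimated outputs."""
    n = len(y)
    return sum((a - b) ** 2 for a, b in zip(y, y_hat)) / n


def rmse(y, y_hat):
    """Root mean square error."""
    return mse(y, y_hat) ** 0.5
```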

Using the initial estimates, OAMRLS and the steepest descent method are combined to estimate the accurate parameter vector. Figure 6 shows the actual outputs and the outputs identified by Algorithm 2, and the output error of the system identified by Algorithm 2 is shown in Figure 7. It can be seen from Figure 8 that the identified model can stably track the real output of the system under a step response. Figure 9 illustrates the convergence curve of the estimated fractional order.

(1) Collect the input/output data {u(t), y(t)};
(2) Obtain the initial values of the unknown parameters by using Algorithm 1;
(3) While (t < max number of iterations N)
(4) Estimate the system coefficients according to equation (31);
(5) Update the fractional order according to equation (38);
(6) If the criterion function value satisfies the error accuracy
(7) Break;
(8) End;
(9) t = t + 1;
(10) End while;
(11) Postprocess the results and visualize them.

6. Summary

This paper outlines identification methods for fractional order Hammerstein models. To improve the accuracy of identification, a heuristic algorithm is used to search for the initial values. Simulation results show that standard heuristic algorithms easily fall into local optima. To solve this problem, the ADELS algorithm is proposed. In this algorithm, combined with the Chebyshev map, genes are adaptively searched in each generation, which leads to faster convergence and more accurate results. The initial values of the coefficients of the linear and nonlinear parts, as well as the fractional order of the transfer function, are obtained by ADELS. Then, the coefficient estimation method for the fractional order Hammerstein model is proposed and compared with other algorithms. With these initial values, the parameters can be accurately identified by the OAMRLS algorithm, and the fractional order is iterated by the derived steepest descent algorithm. Comparisons with other algorithms in simulation show the effectiveness of the proposed approach. The methods proposed in this paper can be applied to other problems [51–56] such as parameter identification of different systems, engineering applications, fault diagnosis, and so on.

Data Availability

The code used in this paper was written in MATLAB 2018a; the data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61673004) and the Fundamental Research Funds of the Central Universities of China (XK1802-4).

References

  1. F. Ding, L. Lv, J. Pan, X. Wan, and X.-B. Jin, “Two-stage gradient-based iterative estimation methods for controlled autoregressive systems using the measurement data,” International Journal of Control, Automation and Systems, vol. 18, no. 4, pp. 886–896, 2020.
  2. X. Zhang, F. Ding, L. Xu, and E. Yang, “Highly computationally efficient state filter based on the delta operator,” International Journal of Adaptive Control and Signal Processing, vol. 33, no. 6, pp. 875–889, 2019.
  3. X. Zhang, F. Ding, and E. Yang, “State estimation for bilinear systems through minimizing the covariance matrix of the state estimation errors,” International Journal of Adaptive Control and Signal Processing, vol. 33, no. 7, pp. 1157–1173, 2019.
  4. M. Li, X. Liu, and F. Ding, “Filtering-based maximum likelihood gradient iterative estimation algorithm for bilinear systems with autoregressive moving average noise,” Circuits, Systems, and Signal Processing, vol. 37, no. 11, pp. 5023–5048, 2018.
  5. M. Chen, F. Ding, R. Lin, T. Y. Ng, Y. Zhang, and W. Wei, “Maximum likelihood least squares-based iterative methods for output-error bilinear-parameter models with colored noises,” International Journal of Robust and Nonlinear Control, vol. 30, no. 15, pp. 6262–6280, 2020.
  6. X. Jin, H. X. Wang, X. Y. Wang et al., “Adaptive gradient-based iterative algorithm for multivariable controlled autoregressive moving average systems using the data filtering technique,” Complexity, vol. 2018, Article ID 9598307, 11 pages, 2018.
  7. X.-B. Jin, H.-X. Wang, X.-Y. Wang, Y.-T. Bai, T.-L. Su, and J.-L. Kong, “Deep-learning prediction model with serial two-level decomposition based on Bayesian optimization,” Complexity, vol. 2020, Article ID 4346803, 14 pages, 2020.
  8. Z. Zou, D. Zhao, X. Liu et al., “Pole-placement self-tuning control of nonlinear Hammerstein system and its application to pH process control,” Chinese Journal of Chemical Engineering, vol. 23, no. 8, pp. 1364–1368, 2015.
  9. J. G. Smith, S. Kamat, and K. P. Madhavan, “Modeling of pH process using wavenet based Hammerstein model,” Journal of Process Control, vol. 17, no. 6, pp. 551–561, 2007.
  10. R. Quachio and C. Garcia, “MPC relevant identification method for Hammerstein and Wiener models,” Journal of Process Control, vol. 80, pp. 78–88, 2019.
  11. K.-K. Xu, H.-D. Yang, and C.-J. Zhu, “A novel extreme learning machine-based Hammerstein-Wiener model for complex nonlinear industrial processes,” Neurocomputing, vol. 358, pp. 246–254, 2019.
  12. H. O. Garcés, J. M. Palma, A. J. Rojas, and V. Valdebenito, “Model analysis of heat transfer by Hammerstein systems and optical instrumentation,” IFAC-PapersOnLine, vol. 52, no. 1, pp. 442–447, 2019.
  13. S. Dong, L. Yu, W.-A. Zhang, and B. Chen, “Robust extended recursive least squares identification algorithm for Hammerstein systems with dynamic disturbances,” Digital Signal Processing, vol. 101, 2020.
  14. L. Ma and X. Liu, “Recursive maximum likelihood method for the identification of Hammerstein ARMAX system,” Applied Mathematical Modelling, vol. 40, no. 13-14, pp. 6523–6535, 2016.
  15. G. Giordano and J. Sjöberg, “Maximum likelihood identification of Wiener-Hammerstein system with process noise,” IFAC-PapersOnLine, vol. 51, no. 15, pp. 401–406, 2018.
  16. M. A. Dhaifallah and D. T. Westwick, “Support vector machine identification of output error Hammerstein models,” IFAC Proceedings Volumes, vol. 44, no. 1, pp. 13948–13953, 2011.
  17. M. Al Dhaifallah and D. T. Westwick, “Identification of NARX Hammerstein models based on support vector machines,” IFAC Proceedings Volumes, vol. 41, no. 2, pp. 4999–5004, 2008.
  18. S. Cheng, Y. Wei, D. Sheng, Y. Chen, and Y. Wang, “Identification for Hammerstein nonlinear ARMAX systems based on multi-innovation fractional order stochastic gradient,” Signal Processing, vol. 142, pp. 1–10, 2018.
  19. L. Li, X. Ren, and F. Guo, “Modified multi-innovation stochastic gradient algorithm for Wiener-Hammerstein systems with backlash,” Journal of the Franklin Institute, vol. 355, no. 9, pp. 4050–4075, 2018.
  20. W. Mi, H. Rao, T. Qian, and S. Zhong, “Identification of discrete Hammerstein systems by using adaptive finite rational orthogonal basis functions,” Applied Mathematics and Computation, vol. 361, pp. 354–364, 2019.
  21. L. Jia, X. Li, and M.-S. Chiu, “The identification of neuro-fuzzy based MIMO Hammerstein model with separable input signals,” Neurocomputing, vol. 174, pp. 530–541, 2016.
  22. F. Li, L. Jia, D. Peng, and C. Han, “Neuro-fuzzy based identification method for Hammerstein output error model with colored noise,” Neurocomputing, vol. 244, pp. 90–101, 2017.
  23. W. Zhao, “Recursive identification for Hammerstein systems with diminishing excitation signals,” IFAC-PapersOnLine, vol. 52, no. 24, pp. 151–157, 2019.
  24. M. A. Taleb, O. Béthoux, and E. Godoy, “Identification of a PEMFC fractional order model,” International Journal of Hydrogen Energy, vol. 42, no. 2, pp. 1499–1509, 2017.
  25. S. Kumar and A. Ghosh, “Identification of fractional order model for a voltammetric E-tongue system,” Measurement, vol. 150, Article ID 107064, 2020.
  26. Q. Zhang, Y. Shang, Y. Li, N. Cui, B. Duan, and C. Zhang, “A novel fractional variable-order equivalent circuit model and parameter identification of electric vehicle Li-ion batteries,” ISA Transactions, vol. 97, pp. 448–457, 2020.
  27. G. Liu, L. Wang, W. L. Luo, J. K. Liu, and Z. R. Lu, “Parameter identification of fractional order system using enhanced response sensitivity approach,” Communications in Nonlinear Science and Numerical Simulation, vol. 67, pp. 492–505, 2019.
  28. Y. Lu, Y. Tang, X. Zhang, and S. Wang, “Parameter identification of fractional order systems with nonzero initial conditions based on block pulse functions,” Measurement, vol. 158, 2020.
  29. M. Chetoui and M. Aoun, “Instrumental variables based methods for linear systems identification with fractional models in the EVI context,” in Proceedings of the 16th International Multi-Conference on Systems, Signals & Devices, pp. 90–95, IEEE, New York, NY, USA, March 2019.
  30. K. Hammar, T. Djamah, and M. Bettayeb, “Fractional Hammerstein system identification using particle swarm optimization,” in Proceedings of the International Conference on Modelling, Jeju Island, South Korea, September 2015.
  31. K. Hammar, T. Djamah, and M. Bettayeb, “Fractional Hammerstein system identification based on two decomposition principles,” IFAC-PapersOnLine, vol. 52, no. 13, pp. 206–210, 2019.
  32. L. Xu, B. Song, M. Cao, and Y. Xiao, “A new approach to optimal design of digital fractional-order PIλDμ controller,” Neurocomputing, vol. 363, pp. 66–77, 2019.
  33. D. Miao, W. Chen, W. Zhao, and T. Demsas, “Parameter estimation of PEM fuel cells employing the hybrid grey wolf optimization method,” Energy, vol. 193, 2020.
  34. Y. Dai, D. Wu, S. Yu, and Y. Yan, “Robust control of underwater vehicle-manipulator system using grey wolf optimizer-based nonlinear disturbance observer and H-infinity controller,” Complexity, vol. 2020, Article ID 6549572, 17 pages, 2020.
  35. M. Du, Z. Cheng, Y. Zhang, and S. Wang, “Multiobjective optimization of tool geometric parameters using genetic algorithm,” Complexity, vol. 2018, Article ID 9692764, 14 pages, 2018.
  36. A. Wu and Z.-L. Yang, “An elitist transposon quantum-based particle swarm optimization algorithm for economic dispatch problems,” Complexity, vol. 2018, Article ID 7276585, 15 pages, 2018.
  37. P. Gong, W.-Q. Wang, F. Li, and H. C. So, “Sparsity-aware transmit beamspace design for FDA-MIMO radar,” Signal Processing, vol. 144, pp. 99–103, 2018.
  38. J. Pan, X. Jiang, X. Wan, and W. Ding, “A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems,” International Journal of Control, Automation and Systems, vol. 15, no. 3, pp. 1189–1197, 2017.
  39. J. Pan, W. Li, and H. Zhang, “Control algorithms of magnetic suspension systems based on the improved double exponential reaching law of sliding mode control,” International Journal of Control, Automation and Systems, vol. 16, no. 6, pp. 2878–2887, 2018.
  40. I. Podlubny, L. Dorcak, and I. Kostial, “On fractional derivatives, fractional-order dynamic systems and PIλDμ-controllers,” in Proceedings of the 36th IEEE Conference on Decision and Control, vol. 5, pp. 4985–5499, San Diego, CA, USA, 1997.
  41. A. Dzieliński and D. Sierociuk, “Stability of discrete fractional order state-space systems,” IFAC Proceedings Volumes, vol. 39, no. 11, pp. 505–510, 2006.
  42. I. Podlubny, Fractional Differential Equations, Academic Press, Cambridge, MA, USA, 1999.
  43. S. Birogul, “Hybrid harris hawk optimization based on differential evolution (HHODE) algorithm for optimal power flow problem,” IEEE Access, vol. 7, pp. 184468–184488, 2019.
  44. K. Miao and Z. Wang, “Neighbor-induction and population-dispersion in differential evolution algorithm,” IEEE Access, vol. 7, pp. 146358–146378, 2019.
  45. Y.-X. Zhang and J. Gou, “Adaptive differential evolution algorithm based on restart mechanism and direction information,” IEEE Access, vol. 7, pp. 166803–166814, 2019.
  46. A. D. Polyanin and A. V. Manzhirov, Handbook of Integral Equations, CRC Press, Washington, DC, USA, 2nd edition, 2008.
  47. J. Wang, Y. Wei, T. Liu, A. Li, and Y. Wang, “Fully parametric identification for continuous time fractional order Hammerstein systems,” Journal of the Franklin Institute, vol. 357, no. 1, pp. 651–666, 2020.
  48. J. Zhang and A. C. Sanderson, “Adaptive differential evolution with optional external archive,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 945–958, 2009.
  49. S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46–61, 2014.
  50. S. Mirjalili and A. Lewis, “The whale optimization algorithm,” Advances in Engineering Software, vol. 95, pp. 51–67, 2016.
  51. J. Ding, “The hierarchical iterative identification algorithm for multi-input-output-error systems with autoregressive noise,” Complexity, vol. 2017, Article ID 5292894, 11 pages, 2017.
  52. J. Ding, J. Chen, J. Lin, and L. Wan, “Particle filtering based parameter estimation for systems with output-error type model structures,” Journal of the Franklin Institute, vol. 356, no. 10, pp. 5521–5540, 2019.
  53. X. Zhao, Z. Lin, B. Fu, L. He, and N. Fang, “Research on automatic generation control with wind power participation based on predictive optimal 2-degree-of-freedom PID strategy for multi-area interconnected power system,” Energies, vol. 11, Article ID 3325, 2018.
  54. L. Wang, H. Liu, Le Van Dai, and Y. Liu, “Novel method for identifying fault location of mixed lines,” Energies, vol. 11, 2018.
  55. R. Hong, C. Guorong, L. Yao, and H. Hongli, “Research on RFID fault diagnosis method based on SVM and PSO algorithm,” in Proceedings of the 2018 IEEE International Conference of Safety Produce Informatization (IICSPI), pp. 122–126, Chongqing, China, December 2018.
  56. B. Li, L. Zhang, B. Zhang, W. Wang, and Z. Hao, “Fault diagnosis method based on-IBP neural network,” in Proceedings of the 2019 CAA Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS), pp. 93–97, Xiamen, China, July 2019.

Copyright © 2021 Qibing Jin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

