Abstract

The eddy-current displacement sensor (ECS) has been widely applied in modern industrial production because of its high sensitivity, good reliability, strong anti-interference capability, and noncontact measurement. However, it cannot be used where severe temperature drift occurs at high temperature, and traditional compensation methods that neglect the nonlinearity of this drift struggle to achieve good performance. Hence, it is essential to propose a better method for temperature compensation. A novel temperature compensation approach for the ECS problem using an improved sparrow search algorithm (ISSA) and a radial basis function neural network (RBFNN) is proposed in this article. In the ISSA, a chaos strategy is introduced to avoid local optima, and an elite opposition-based learning strategy is integrated to promote the global search ability of the ISSA with high efficiency. The RBFNN is chosen to model the temperature drift, and its parameters are determined by the proposed ISSA. The proposed method compensates the significant deterministic errors caused by temperature variation over a wide temperature range. Experimental schemes based on data fusion technology were designed to verify the effectiveness of the proposed method. The various test results obtained confirm the potential and effectiveness of the proposed approach compared with some traditional temperature compensation methods presented in the literature.

1. Introduction

Displacement sensors have been widely utilized in modern industry for detecting the movement and position of objects and other physical quantities. The eddy-current displacement sensor (ECS) is one of the most commonly used types of noncontact displacement sensor in modern industry [1]. Eddy-current testing plays an irreplaceable role in numerous fields such as nuclear power testing and aviation manufacturing due to its unique advantages. Since the magnetic field is insensitive to the physical presence of nonconductive substances such as dust, dirt, and oil, the ECS operates well even in polluted and harsh environments [2]. The ECS is preferred when an absolute position measurement is needed under confined conditions. Moreover, the ECS is stable, accurate, and low-cost [3].

As is well known, the performance of the ECS depends directly on its detection accuracy. Although the key components are optimized from multiple aspects during sensor design, detection errors inevitably arise from the coupling of the sensor preparation process, detection methods, environmental factors, and other comprehensive factors. Hence, it is important to establish an effective error compensation mechanism and develop high-precision error compensation methods to increase the detection accuracy of sensors. This has long been a focus and a difficult topic in the field of eddy-current testing.

In engineering practice, however, the ECS is often calibrated with a linear model that neglects the nonlinearity, which leads to low sensitivity and serious linearity error. To obtain better performance from the eddy-current displacement sensor, various compensation methods have been proposed in the literature to improve the sensitivity and perform high-accuracy calibration. Compensation methods can be divided mainly into two categories: hardware methods and software methods. The former are temperature-controlling methods based on machining and experimental techniques [4]. They can deliver better stability and high accuracy, but they require complex designs and incur high maintenance costs. Kumar et al. [5] proposed a signal conditioning circuit that keeps the ECS output linear over the full range. Zhao et al. [6] proposed a self-temperature-compensation approach using an analog multiplier, which decreases the circuit thermal drift by more than an order of magnitude. Zheng et al. [7] proposed a novel differential circuit to eliminate the exponential hysteresis temperature drift of eddy-current displacement sensors used at high temperature. The latter are phenomenological methods, called temperature compensation, that rely on a mathematical model; their complexity and usage cost are significantly lower than those of the hardware methods. Zheng et al. [8] proposed temperature compensation of the ECS based on a temperature-voltage model using a curve fitting method. Lei et al. [9] proposed a temperature compensation method using a binary regression method. Li et al. [10] presented an adaptive mutation particle swarm optimization-optimized support vector regression (AMPSO-SVR) integrated with the AdaBoost.RT algorithm to compensate a silicon piezoresistive pressure sensor. Wu et al. [11] presented temperature compensation of the ECS based on a genetically optimized wavelet neural network. Apparently, the temperature-induced errors can be eliminated by mathematical models using software technology. Therefore, an improved temperature compensation mathematical model and its solver are employed in this paper.

In the proposed algorithm, the temperature compensation model of the ECS is considered a mathematical approximation problem, and the ECS is described by a function of the temperature and the output measurement. Considering the shortcomings of traditional temperature compensation models for the ECS, this paper adopts data fusion technology to analyze the data from multiple sensors. Meanwhile, an improved sparrow search algorithm is applied to determine the model parameters, and a high-accuracy temperature compensation model for the ECS is obtained. Moreover, various tests using different methods under different temperature scenarios were carried out in order to evaluate the performance of the proposed method.

This paper is organized as follows. Section 2 introduces four frameworks for temperature compensation. In Section 3, the original sparrow search algorithm and the improved sparrow search algorithm are described. Section 4 provides a brief introduction to the data acquisition experiment and the preliminary analysis and processing of the collected data. Section 5 presents a comparative study of the improved method against the original sparrow search algorithm, particle swarm optimization, and multiple polynomial regression in temperature compensation experiments on the collected data. Finally, Section 6 gives conclusions and suggestions for future work.

2. Temperature Calibration Methods

A temperature compensation problem for the ECS can be considered a type of mathematical approximation problem. Four types of modeling methods from the available literature are given to deal with the problem.

2.1. Multiple Polynomial Regression Method

In general, the polynomial regression approach is widely utilized to fit the mathematical model of the ECS empirically to data. Statistically, polynomial regression refers to constructing a relationship model between independent and dependent variables. Linear regression is the most basic multiple polynomial regression, in which the expression between the output and input variables is written as an $n$th-order polynomial. Here, the temperature compensation approach using multiple polynomial regression based on least squares is realized. The polynomial expression of the $n$-th degree with two variables is presented as follows:

$$ f(x_1, x_2) = \sum_{i=0}^{n} \sum_{j=0}^{n-i} a_{ij}\, x_1^{i} x_2^{j}, \quad (1) $$

where $a_{ij}$ are the coefficients of the polynomial and $x_1$, $x_2$ are the different input variables.

In the temperature calibration of the ECS, the input variable is the temperature $T$, the ECS output is denoted as $U$, and the temperature-induced error term is defined as the dependent variable. In particular, the method is called binary polynomial regression (BPR).

For the polynomial regression approach, it is crucial to select the optimal degree of the polynomial to match the data. Generally, the sum of squared residuals decreases as the order of the polynomial increases. However, a higher-degree polynomial does not always give a better fit at every sample point; it may overfit the data and add to the computational load. Hence, the degree of the polynomial depends on the characteristics of the sample and can be determined statistically. In this circumstance, the optimal fit to the sample is obtained in terms of the minimum sum of squared residuals.
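To make the procedure concrete, the following minimal sketch (not the authors' implementation; the synthetic data and function names are placeholders) fits a two-variable polynomial of the form in equation (1) by least squares:

```python
import numpy as np

def fit_poly2d(x1, x2, y, degree):
    """Least-squares fit of a two-variable polynomial of the given degree.

    Builds the design matrix with all monomials x1^i * x2^j, i + j <= degree,
    and solves for the coefficients a_ij in one linear least-squares step.
    """
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([x1**i * x2**j for (i, j) in terms])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return terms, coeffs

def eval_poly2d(terms, coeffs, x1, x2):
    return sum(c * x1**i * x2**j for (i, j), c in zip(terms, coeffs))

# Placeholder data: temperature and sensor voltage as inputs, displacement as output.
rng = np.random.default_rng(0)
T = rng.uniform(20, 55, 200)           # temperature, deg C
U = rng.uniform(0, 5, 200)             # ECS output voltage, V
d = 0.3 * U + 0.01 * T * U + 0.02 * T  # synthetic displacement, mm
terms, coeffs = fit_poly2d(T, U, d, degree=3)
residual = d - eval_poly2d(terms, coeffs, T, U)
print("RMS residual:", np.sqrt(np.mean(residual**2)))
```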

2.2. Least Square Support Vector Machine

Based on a kernel function and the support vector machine (SVM), the least square support vector machine (LSSVM) implements the basic principle of mapping the inputs from a nonlinearly inseparable space to a higher-dimensional space where they can be better divided. The model can be expressed as follows [12]:

$$ y(x) = w^{T} \varphi(x) + b, \quad (2) $$

where $\varphi(\cdot)$ is the nonlinear mapping induced by a kernel function that meets the Mercer condition, $w$ is the weight vector in the feature space, and $b$ is the bias term.

The regression problem of equation (2) can equally be regarded as a convex constrained optimization problem with a regularization term, as in expression (3):

$$ \min_{w, b, e} J(w, e) = \frac{1}{2} w^{T} w + \frac{\gamma}{2} \sum_{i=1}^{N} e_i^{2}, \quad \text{s.t.} \; y_i = w^{T} \varphi(x_i) + b + e_i, \; i = 1, \ldots, N, \quad (3) $$

where $e_i$ is the error variable at sample $i$, $\gamma$ is a regularization constant, and $y_i$ is the output value corresponding to $x_i$.

Then, a Lagrange function is constructed, and according to the Karush-Kuhn-Tucker (KKT) conditions its derivatives with respect to all the variables are set to zero. The derived result is obtained as follows:

$$ \begin{cases} w = \sum_{i=1}^{N} \alpha_i \varphi(x_i), \\ \sum_{i=1}^{N} \alpha_i = 0, \\ \alpha_i = \gamma e_i, \\ w^{T} \varphi(x_i) + b + e_i - y_i = 0, \end{cases} \quad (4) $$

where $\alpha = [\alpha_1, \ldots, \alpha_N]^{T}$ is the vector of Lagrange multipliers.

Consequently, by eliminating $w$ and $e$, equation (4) can be converted into the system of linear equations (5) from the vector calculation perspective:

$$ \begin{bmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{bmatrix} \begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}, \quad (5) $$

where $\mathbf{1} = [1, \ldots, 1]^{T}$ is a column vector in which each element is 1, and $\Omega$ is an $N \times N$ matrix composed of the kernel mapping evaluated at the data points:

$$ \Omega_{ij} = \varphi(x_i)^{T} \varphi(x_j) = K(x_i, x_j), \quad i, j = 1, \ldots, N, \quad (6) $$

where $K(\cdot, \cdot)$ is called the positive definite kernel function.

Since $\Omega + \gamma^{-1} I$ is a symmetric positive definite matrix, the solution of equation (5) can be written as

$$ b = \frac{\mathbf{1}^{T} \left( \Omega + \gamma^{-1} I \right)^{-1} y}{\mathbf{1}^{T} \left( \Omega + \gamma^{-1} I \right)^{-1} \mathbf{1}}, \qquad \alpha = \left( \Omega + \gamma^{-1} I \right)^{-1} \left( y - b \mathbf{1} \right), \quad (7) $$

where $I$ denotes the unit matrix.

Finally, the regression model of the LSSVM algorithm is obtained as follows:

$$ y(x) = \sum_{i=1}^{N} \alpha_i K(x, x_i) + b. \quad (8) $$
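The derivation above can be condensed into a few lines of code. The following is a minimal sketch of LSSVM training and prediction under the assumption of a Gaussian kernel; it solves the linear system of equation (5) directly and applies the model of equation (8), and all names and data are illustrative:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LSSVM KKT linear system of equation (5) for b and alpha."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # b, alpha

def lssvm_predict(X_train, alpha, b, sigma, X_new):
    """Regression model of equation (8): y(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage with placeholder data.
X = np.linspace(0, 1, 50)[:, None]
y = np.sin(4 * X[:, 0]) + 0.05 * np.random.default_rng(1).standard_normal(50)
b, alpha = lssvm_fit(X, y, gamma=100.0, sigma=0.2)
print(np.max(np.abs(lssvm_predict(X, alpha, b, 0.2, X) - y)))
```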

2.3. Backpropagation Neural Network

An empirical model based on sample fitting is established to avoid describing the complex underlying mechanism, since it is hard to build an accurate mechanistic model under multivariate and nonlinear conditions. The neural network behaves well when dealing with nonlinear mappings and has been widely used in the function approximation of multidimensional nonlinear modeling.

In theory, an arbitrary nonlinear mapping from input to output can be implemented by a backpropagation neural network (BPNN) trained with the error backpropagation algorithm [13]. The BPNN is a multilayer feedforward network that usually has one or more hidden layers. In the hidden layers, the sigmoid transfer function is usually used as the basis function, and in the output layer, a purely linear function is adopted.

In the application of the neural network, function approximation is solved by supervised learning. The pairs of inputs and corresponding outputs are defined as the training set. The first procedure is to train the network, during which the error between the actual and desired outputs decreases continually through adjustment of the weights and biases of the network neurons [14]. The training process terminates when the termination condition is met; the network parameters obtained at that point are kept for later use. In the prediction stage, the saved trained network is evaluated to obtain the outputs corresponding to given unlabeled inputs.

The BPNN model of the ECS has two hidden layers and one output layer. The one-dimensional output is the displacement value to be measured, and the two-dimensional input consists of the voltage outputs of the ECS and the temperature sensor. The hyperbolic tangent sigmoid function in equation (9) is used as the transfer function in the hidden layers:

$$ f(x) = \frac{2}{1 + e^{-2x}} - 1. \quad (9) $$

The BP neural network is trained by adjusting the weights and biases to obtain the optimal network parameters. In both the training and testing stages, the mean squared error (MSE) objective function is calculated to guide the optimization process.
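As a sketch of this setup, the code below packs the weights and biases of a 2-7-4-1 network (the architecture used in Section 5) into one flat vector, evaluates the forward pass with the tansig transfer function of equation (9) and a linear output layer, and returns the MSE objective. It is an illustration rather than the authors' implementation; the flat parameter layout is chosen so that a swarm optimizer can tune the network directly:

```python
import numpy as np

SIZES = [2, 7, 4, 1]  # (temperature, ECS voltage) -> two hidden layers -> displacement

def n_params(sizes=SIZES):
    return sum(sizes[i + 1] * sizes[i] + sizes[i + 1] for i in range(len(sizes) - 1))

def forward(theta, X, sizes=SIZES):
    """Forward pass: tansig (equation (9)) in hidden layers, linear output layer."""
    a, k = X, 0
    for i in range(len(sizes) - 1):
        n_in, n_out = sizes[i], sizes[i + 1]
        W = theta[k:k + n_out * n_in].reshape(n_out, n_in); k += n_out * n_in
        b = theta[k:k + n_out]; k += n_out
        z = a @ W.T + b
        a = np.tanh(z) if i < len(sizes) - 2 else z  # tanh(x) == 2/(1+e^(-2x)) - 1
    return a[:, 0]

def mse_objective(theta, X, y):
    """Fitness used when an optimizer searches the weight space."""
    return np.mean((forward(theta, X) - y) ** 2)

theta0 = 0.5 * np.random.default_rng(2).standard_normal(n_params())
X = np.random.default_rng(3).uniform(0, 1, (20, 2))
print(mse_objective(theta0, X, X[:, 0] + X[:, 1]))
```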

2.4. Radial Basis Function Neural Network

The radial basis function neural network (RBFNN) is another kind of neural network. The RBFNN and the multilayer perceptron (MLP) are the models most often used for the function approximation problem. In particular, the RBFNN has stronger robustness and a faster training speed than the MLP for noisy sample data. In addition, the MLP is easily trapped in local optima when searching for the global minimum in a complex search space [15].

The RBFNN is a feedforward architecture comprising three layers: the input, hidden, and output layers. The hidden layer establishes a nonlinear mapping between the input and output spaces. Its activation function, called a radial basis function, decreases monotonically with the distance from the center point. Generally, the Gaussian function is selected:

$$ \phi_i(x) = \exp\!\left( -\frac{\lVert x - c_i \rVert^{2}}{2 r_i^{2}} \right), \quad (10) $$

where $c_i$ and $r_i$ represent the center and the radius of the activation function and $\lVert \cdot \rVert$ is the Euclidean distance.

Consequently, the output layer is a linear combination of the hidden-layer outputs:

$$ y(x) = \sum_{i=1}^{m} w_i \phi_i(x), \quad (11) $$

where $w_i$ represents the weight between the output layer and the $i$-th neuron in the hidden layer and $m$ denotes the number of neurons in the hidden layer.

Generally, the RBFNN approximates the goal function by a linear superposition of RBFs. The training process of the RBFNN first determines the centers and widths of the hidden layer; then, the weights of the hidden neurons are adjusted by minimizing the cost function shown in equation (12):

$$ E = \frac{1}{2} \sum_{k=1}^{N} \left( y_k - \hat{y}_k \right)^{2} + \lambda \sum_{i=1}^{m} w_i^{2}, \quad (12) $$

where $y_k$ and $\hat{y}_k$ are the desired and actual outputs for the $k$-th sample and $\lambda$ is the regularization factor used to prevent overfitting of the network; the term containing $\lambda$ constrains the overall complexity of the model.
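A minimal sketch of this training scheme is given below; choosing the centers by random subsampling of the data is an assumption made for brevity (k-means or other clustering would serve equally), and the weights come from the regularized least-squares problem of equation (12):

```python
import numpy as np

def rbf_design(X, centers, radius):
    """Gaussian activations of equation (10) for every (sample, center) pair."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * radius**2))

def rbfnn_fit(X, y, m, radius, lam, seed=0):
    """Pick m centers at random and solve the regularized problem of equation (12)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), m, replace=False)]
    Phi = rbf_design(X, centers, radius)
    # Ridge solution: (Phi^T Phi + lam I) w = Phi^T y
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
    return centers, w

def rbfnn_predict(X, centers, radius, w):
    """Linear combination of hidden outputs, equation (11)."""
    return rbf_design(X, centers, radius) @ w

X = np.random.default_rng(4).uniform(0, 1, (100, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
centers, w = rbfnn_fit(X, y, m=15, radius=0.3, lam=1e-3)
print(np.mean((rbfnn_predict(X, centers, 0.3, w) - y) ** 2))
```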

3. Improved Sparrow Search Algorithm Method

The sparrow search algorithm (SSA) is a novel swarm intelligence algorithm presented by Xue and Shen in 2020 [16]. It is derived from research on the foraging behavior of sparrows. SSA requires few parameters, follows simple principles, and is easy to implement. Moreover, SSA achieves satisfactory results on many optimization problems. However, it still has several disadvantages, such as easily falling into local optima. In a swarm intelligence algorithm, integrating a chaos strategy can give the algorithm the ability to escape from local optima and better global search ability, while applying elite opposition-based learning [17] can improve the search ability of the discoverer at less computational cost than general opposition-based learning. Hence, the chaos strategy and the elite opposition-based learning strategy are introduced to improve the algorithm for better performance.

3.1. Original Sparrow Search Algorithm

The mathematical description of the SSA is as follows. The positions of the sparrows are expressed by the following matrix:

$$ X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix}, \quad (13) $$

where $n$ refers to the total number of sparrows and $d$ is the dimension of the variables.

The fitness matrix is as follows:

$$ F_X = \begin{bmatrix} f([x_{1,1}, x_{1,2}, \ldots, x_{1,d}]) \\ f([x_{2,1}, x_{2,2}, \ldots, x_{2,d}]) \\ \vdots \\ f([x_{n,1}, x_{n,2}, \ldots, x_{n,d}]) \end{bmatrix}. \quad (14) $$

There are two different sparrow roles in the population: discoverer and follower. While the algorithm runs, each is responsible for different behaviors and strategies. Discoverers play the guiding role in the population: they steer the other individuals in searching for food, so the discoverer is the core component of the population. The position update for the discoverer is described below:

$$ x_{i,j}^{t+1} = \begin{cases} x_{i,j}^{t} \cdot \exp\!\left( \dfrac{-i}{\alpha \cdot iter_{\max}} \right), & R_2 < ST, \\ x_{i,j}^{t} + Q \cdot L, & R_2 \ge ST, \end{cases} \quad (15) $$

where $\alpha$ is a positive random number not greater than 1, $Q$ is a random value obeying the normal distribution, and $L$ is a $1 \times d$ matrix filled with 1. $t$ represents the current iteration, and $iter_{\max}$ refers to the maximum number of iterations. The value at the $j$-th dimension of the $i$-th sparrow is written as $x_{i,j}^{t}$. $R_2$ and $ST$ denote, respectively, the alert value and the safety threshold, with ranges $R_2 \in [0, 1]$ and $ST \in [0.5, 1]$. For $R_2 < ST$, it is determined that no predators exist; hence, the discoverers can search over a large scope. On the other hand, when $R_2$ reaches $ST$ or goes even higher, some individuals have sent out a warning of predators; consequently, all the individuals adjust their positions to defend against them.

During the searching process, the followers track the discoverers closely in order to obtain better-quality food, and some followers monitor the discoverers with the best predation to increase their own share. The follower position is updated by

$$ x_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\!\left( \dfrac{x_{worst}^{t} - x_{i,j}^{t}}{i^{2}} \right), & i > n/2, \\ x_{P}^{t+1} + \left| x_{i,j}^{t} - x_{P}^{t+1} \right| \cdot A^{+} \cdot L, & \text{otherwise}, \end{cases} \quad (16) $$

where $x_{P}^{t+1}$ is the current optimal position found by the discoverers in the $(t+1)$-th iteration and $x_{worst}^{t}$ denotes the current worst position in the $t$-th iteration. $A$ is a $1 \times d$ matrix whose elements take values of only 1 or -1, and $A^{+} = A^{T} (A A^{T})^{-1}$. For $i > n/2$, the $i$-th sparrow with worse fitness tends to change its position to avoid starvation.

The sparrow with low fitness quickly moves toward a better position when it detects danger, while the sparrow with the best fitness moves randomly to stay away from danger, which increases the diversity of the group. The detailed mathematical formulation is given as follows:

$$ x_{i,j}^{t+1} = \begin{cases} x_{best}^{t} + \beta \cdot \left| x_{i,j}^{t} - x_{best}^{t} \right|, & f_i > f_g, \\ x_{i,j}^{t} + K \cdot \left( \dfrac{\left| x_{i,j}^{t} - x_{worst}^{t} \right|}{(f_i - f_w) + \varepsilon} \right), & f_i = f_g, \end{cases} \quad (17) $$

where $x_{best}^{t}$ is the current global optimum in the $t$-th iteration, $\beta$ is the step-size control parameter obeying a normal distribution, and $K \in [-1, 1]$ is a random number controlling the movement direction and step size of the current sparrow. $f_i$ indicates the current fitness value of the sparrow, and $f_g$ and $f_w$ are the best and worst fitness values in the population so far, respectively. $\varepsilon$ is a small enough positive value that prevents a zero denominator. For $f_i > f_g$, the current sparrow is not the best one and has perceived danger, so its position needs to be adjusted for safety. Meanwhile, for $f_i = f_g$, the sparrow at the current population optimum is in danger and needs to move toward another sparrow's position to avoid it.
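The three update rules can be summarized in code. The sketch below performs one SSA iteration for a minimization problem; boundary handling is omitted, and the proportions `pd` and `sd` of discoverers and watchers are illustrative defaults rather than values from the original paper:

```python
import numpy as np

def ssa_step(X, fit, iter_max, pd=0.2, sd=0.1, st=0.8, eps=1e-12, rng=None):
    """One iteration of the original SSA update rules, equations (15)-(17).

    X   : (n, d) positions; fit : (n,) fitness values (minimization).
    """
    rng = rng or np.random.default_rng()
    n, d = X.shape
    order = np.argsort(fit)
    X, fit = X[order].copy(), fit[order]
    n_disc = max(1, int(pd * n))

    # Discoverers, equation (15): R2 = rng.random(), alpha = rng.random().
    for i in range(n_disc):
        if rng.random() < st:
            X[i] *= np.exp(-i / ((rng.random() + eps) * iter_max))
        else:
            X[i] += rng.standard_normal() * np.ones(d)

    # Followers, equation (16).
    best_disc, worst = X[0].copy(), X[-1].copy()
    for i in range(n_disc, n):
        if i > n / 2:
            X[i] = rng.standard_normal() * np.exp((worst - X[i]) / (i**2))
        else:
            A = rng.choice([-1.0, 1.0], d)
            step = (np.abs(X[i] - best_disc) @ A) / d  # |x - xP| A^T (A A^T)^(-1)
            X[i] = best_disc + step * np.ones(d)

    # Watchers (danger-aware sparrows), equation (17).
    f_best, f_worst = fit[0], fit[-1]
    for i in rng.choice(n, max(1, int(sd * n)), replace=False):
        if fit[i] > f_best:
            X[i] = X[0] + rng.standard_normal() * np.abs(X[i] - X[0])
        else:
            K = rng.uniform(-1, 1)
            X[i] += K * np.abs(X[i] - worst) / ((fit[i] - f_worst) + eps)
    return X

X0 = np.random.default_rng(8).uniform(-5, 5, (20, 3))
f = lambda x: np.sum(x**2)
X1 = ssa_step(X0, np.array([f(x) for x in X0]), iter_max=100)
```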

3.2. Improved Sparrow Search Algorithm Method

In the SSA, the discoverer plays a guiding role in the development of solutions. Once the discoverer falls into a local optimum, the performance deteriorates and it is difficult to escape. Therefore, it is of great importance to improve the search method of the discoverers. Since the search scope of the discoverer in SSA is fixed and its strategy is single, the exploration and exploitation capacities cannot be adjusted as the algorithm proceeds. Hence, multiple strategies are added to the SSA so that the discoverer finds better solutions instead of getting stuck in a local optimum. The first strategy is the chaos strategy: an improved tent chaotic sequence [18] is applied to initialize the population in order to improve the randomness and robustness of the SSA, and when the algorithm enters a stagnation state, an iterative chaotic map with infinite collapses (ICMIC) [19] is used to disturb the population so that new solutions are generated to drive the algorithm out of the local optimum. In addition, the elite opposition-based learning strategy [17] is introduced in the discoverer stage of the proposed SSA to increase the diversity of the group.

In sum, the improved sparrow search algorithm combines the chaos strategy and the elite opposition-based learning strategy to balance the local and global optimization abilities, and a shrink operation is added to accelerate convergence. Consequently, the ISSA removes the disadvantages of SSA without sacrificing its performance.

3.2.1. Chaos Strategy

Chaos is a common nonlinear phenomenon in nature, and chaotic variables have the properties of randomness, ergodicity, and regularity. Applying chaos in swarm intelligence can not only keep the population diverse enough and effectively improve its quality but also drive the algorithm out of local optima and deliver better global search ability. Chaotic sequences with high randomness improve the diversity and convergence of solutions. These characteristics have been verified in many studies in the relevant literature [20].

Chaos initialization can effectively generate an initial population of better quality. Therefore, an improved tent chaos [18], which has uniform ergodicity and fast convergence, is selected for population initialization. The improved tent chaos overcomes the problems of short periods and unstable periodic points in the original tent map and has stronger randomness. The one-dimensional self-mapping expression of the improved tent chaos after the Bernoulli shift transformation is

$$ x_{n+1} = \left( 2 x_n + \operatorname{rand}(0, 1) \cdot \frac{1}{N} \right) \bmod 1, \quad (18) $$

where $N$ is the number of points in the chaotic sequence and $\operatorname{rand}(0, 1)$ is a uniform random number.

New solutions are generated by applying a chaotic disturbance to the algorithm to avoid being trapped in local optima and to improve the optimization accuracy. Therefore, chaotic perturbation is introduced in this paper, and the chaotic map used is ICMIC [19]. This map shows strong chaotic characteristics due to its high Lyapunov exponent, and the generated sequences are distributed in the symmetric region $[-1, 0) \cup (0, 1]$. The one-dimensional self-mapping expression of ICMIC is given as follows:

$$ x_{n+1} = \sin\!\left( \frac{a}{x_n} \right), \quad x_n \neq 0, \quad (19) $$

where $a$ is an adjustable parameter. A series of chaotic maps with good performance can be obtained by adjusting it.

The resulting chaotic sequence is used as a parameter to adjust the amplitude and direction of the chaotic disturbance. The mathematical description of the chaotic disturbance is

$$ X_i' = (1 + z) \cdot X_i, \quad (20) $$

where $X_i$ is the vector consisting of the coordinates of the $i$-th individual and $z$ is the one-dimensional ICMIC sequence value generated by equation (19).
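The following sketch implements the two maps and the disturbance as reconstructed in equations (18)-(20); the exact forms, bounds handling, and constants are assumptions consistent with the cited maps rather than the authors' exact code:

```python
import numpy as np

def tent_init(n, d, rng=None):
    """Population initialization with the improved tent map of equation (18);
    scaling into the actual search bounds is handled elsewhere."""
    rng = rng or np.random.default_rng()
    seq = np.empty((n, d))
    x = rng.random(d)
    for i in range(n):
        x = (2.0 * x + rng.random(d) / n) % 1.0  # n plays the role of N
        seq[i] = x
    return seq

def icmic(x, a=2.0, steps=1):
    """Iterative chaotic map with infinite collapses, equation (19): x <- sin(a/x)."""
    for _ in range(steps):
        x = np.sin(a / np.where(x == 0.0, 1e-12, x))
    return x

def chaotic_disturb(X_best, z):
    """Perturb a good solution per equation (20); z sets amplitude and sign."""
    return (1.0 + z) * X_best

pop = tent_init(30, 5)
z = icmic(np.random.default_rng(5).uniform(-1, 1, 5), steps=3)
print(chaotic_disturb(pop[0], z))
```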

An adaptive weight is introduced to obtain a better balance between the global and local search abilities. The adaptive weight decreases as the iterations proceed:

$$ \omega = \omega_{\max} - \left( \omega_{\max} - \omega_{\min} \right) \cdot \frac{t}{iter_{\max}}. \quad (21) $$

The discoverer position update equation with the adaptive weight is given as follows:

$$ x_{i,j}^{t+1} = \begin{cases} \omega \cdot x_{i,j}^{t} \cdot \exp\!\left( \dfrac{-i}{\alpha \cdot iter_{\max}} \right), & R_2 < ST, \\ \omega \cdot x_{i,j}^{t} + Q \cdot L, & R_2 \ge ST, \end{cases} \quad (22) $$

where $\alpha$ and $Q$ are random numbers obeying the same distributions as in equation (15).
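A short sketch of the weighted discoverer update, assuming the linearly decreasing weight of equation (21) (one common choice; the paper's exact weight schedule may differ):

```python
import numpy as np

def adaptive_weight(t, iter_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing weight (one common choice, cf. equation (21))."""
    return w_max - (w_max - w_min) * t / iter_max

def discoverer_update(x, i, t, iter_max, st=0.8, rng=None):
    """Weighted discoverer update of equation (22)."""
    rng = rng or np.random.default_rng()
    w = adaptive_weight(t, iter_max)
    if rng.random() < st:
        return w * x * np.exp(-i / ((rng.random() + 1e-12) * iter_max))
    return w * x + rng.standard_normal() * np.ones_like(x)

x = np.array([0.5, -1.2, 3.0])
print(discoverer_update(x, i=2, t=10, iter_max=100))
```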

3.2.2. Elite Opposition-Based Learning Strategy

The opposition-based learning strategy [21] performs well in improving some optimization algorithms. It increases the diversity of the group by generating reverse solutions in the search space. Elite opposition-based learning [17] costs less computation than general opposition-based learning, yet its improvement of algorithm performance is also excellent. To improve the search ability of the discoverer, the elite opposition-based learning strategy is applied to the position update of the discoverer. Its principle is described as follows:

$$ \bar{x}_{i,j} = k \cdot \left( a_j + b_j \right) - x_{i,j}, \quad (23) $$

where $k$ is a positive random number not greater than 1 and $a_j$, $b_j$ are the maximum and minimum of the coordinate values of all individuals in the $j$-th dimension, respectively.

The elite opposition-based learning strategy is added in the discoverer stage of SSA to improve the diversity of the population, which makes the follower search more flexible.
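A minimal sketch of this step follows; per the description above, the per-dimension bounds are taken over all individuals, and only the elite subset is mirrored:

```python
import numpy as np

def elite_opposition(X, elite_idx, rng=None):
    """Elite opposition-based solutions per equation (23).

    a_j and b_j are the per-dimension maximum and minimum over all
    individuals; opposite points are generated only for the elite subset.
    """
    rng = rng or np.random.default_rng()
    a = X.max(axis=0)
    b = X.min(axis=0)
    k = rng.random()  # positive random number not greater than 1
    return k * (a + b) - X[elite_idx]

X = np.random.default_rng(6).uniform(-5, 5, (10, 3))
print(elite_opposition(X, np.arange(3)))
```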

3.3. Implementation of ISSA

There are always many undetermined parameters in the methods mentioned above. These parameters are significant for the performance of the method, and empirically chosen parameters are not always the best. However, it is complicated and time-consuming to search the whole value space exhaustively for the best parameters. SSA can solve this problem efficiently without searching the whole value space, and by introducing the strategies above, ISSA surpasses SSA in global search ability without losing SSA's high efficiency. Therefore, ISSA is implemented to find the best parameters of the RBF model. The algorithm is described as follows, and a compact sketch of the loop is given after the list:

(1) Initialize the parameters (number of sparrows $n$, maximum iteration $iter_{\max}$, proportion of discoverers PD, proportion of followers, proportion of elite individuals ED, and alert threshold ST), and initialize the population positions composed of the model parameters with equation (18).
(2) Generate the corresponding model from the model parameters, and calculate the error between the model compensation result and the actual value as the fitness value. Rank the population by fitness, and find the individuals with the current best and worst fitness.
(3) Select the sparrows with high fitness values as the discoverers, and update their positions using equation (22).
(4) Rank the population by fitness value, and select elite individuals by the proportion ED to form the elite set. Carry out elite opposition-based learning according to equation (23).
(5) The remaining individuals act as followers. Update the positions of the followers according to equation (16).
(6) Randomly choose some sparrows as watchers, and update their positions according to equation (17).
(7) Calculate the average fitness and the variance, and determine from the variance whether to disturb the population. If so, generate a chaotic sequence according to equation (19), and chaotically disturb the better solutions in the group according to equation (20). The new solutions are accepted according to the greedy rule.
(8) Perform the shrink operation.
(9) Check whether the termination condition is satisfied. If not, return to step (2); otherwise, output the best position and the minimum cost.
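The compact sketch below strings the earlier helper sketches (`tent_init`, `discoverer_update`, `elite_opposition`, `icmic`, `chaotic_disturb`) into the loop of steps (1)-(9); the stagnation criterion based on the fitness variance and the omitted follower/watcher details are assumptions, so this is a skeleton rather than the authors' program:

```python
import numpy as np

def issa_minimize(objective, dim, n=100, iter_max=100, pd=0.2, ed=0.3,
                  var_tol=1e-8, rng=None):
    """Skeleton of the ISSA loop following steps (1)-(9). The helpers are the
    sketches given earlier in this section."""
    rng = rng or np.random.default_rng()
    X = tent_init(n, dim, rng)                        # step (1)
    best_x, best_f = None, np.inf
    for t in range(iter_max):
        fit = np.array([objective(x) for x in X])     # step (2)
        order = np.argsort(fit)
        X, fit = X[order], fit[order]
        if fit[0] < best_f:
            best_x, best_f = X[0].copy(), fit[0]
        for i in range(max(1, int(pd * n))):          # step (3): discoverers
            X[i] = discoverer_update(X[i], i, t, iter_max, rng=rng)
        elite_idx = np.arange(max(1, int(ed * n)))    # step (4): elite opposition
        opp = elite_opposition(X, elite_idx, rng)
        for row, i in enumerate(elite_idx):
            if objective(opp[row]) < objective(X[i]):
                X[i] = opp[row]
        # steps (5)-(6): follower and watcher updates per equations (16)-(17),
        # as in the ssa_step sketch of Section 3.1 (omitted here for brevity).
        if np.var(fit) < var_tol:                     # step (7): assumed stagnation test
            z = icmic(rng.uniform(-1, 1, dim), steps=3)
            cand = chaotic_disturb(X[0], z)
            if objective(cand) < fit[0]:              # greedy acceptance
                X[0] = cand
        # step (8): shrink operation (omitted)
    return best_x, best_f                             # step (9)

best_x, best_f = issa_minimize(lambda x: np.sum(x**2), dim=4, n=30, iter_max=50)
print(best_x, best_f)
```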

The flow chart of ISSA is described in Figure 1.

4. Response of ECS under Different Temperature and Displacement Conditions

4.1. Thermal Tests

There are two thermal testing methods for collecting data: the soak method and the ramp method. In the soak method, measurements are recorded each time the target temperature is reached and the stabilization time has elapsed. In the ramp method, by contrast, the temperature changes linearly, and the whole test is much faster than with the soak method. Both methods were employed by Araghi and Landry [22]. Since the sensor needs a considerably long time to stabilize, the soak method gives more accurate results than the ramp method. Hence, the soak method is adopted for the thermal tests in this paper.

Following data fusion technology, an ECS and a temperature sensor are mounted inside the same thermal chamber, and their outputs are recorded under stable conditions once each temperature point is reached. A series of voltages, comprising the outputs of the ECS and the temperature sensor, is thus obtained through the thermal test, and the correlations between the outputs and the temperature variations can be analyzed mathematically. The ECS was tested over the temperature range from 20°C to 55°C at 5°C intervals and over displacements from 0 mm to 2.4 mm at 0.2 mm intervals. Additionally, the average of three repeated test results is taken as the credible result to eliminate the influence of chance factors.

4.2. Response and Analysis

Figure 2 shows the responses of the ECS and the temperature sensor under different temperature and displacement conditions. As Figure 2 shows, the output voltage of the ECS changes nonlinearly with displacement and temperature, and the temperature drift is nonlinear. Figure 3 shows the variation of the relative error $e_r$ with temperature at certain displacements before compensation. At the reference temperature $T_0$, $e_r$ is calculated for each displacement as

$$ e_r = \frac{U_T - U_{T_0}}{U_{T_0}} \times 100\%, \quad (24) $$

where $U_{T_0}$ refers to the ECS output voltage at the reference temperature and $U_T$ is the output voltage obtained at temperature $T$. It can be seen that the error varies nonlinearly with temperature, and the maximum absolute value of $e_r$ reaches about 20% before compensation. The drift caused by temperature variations is thus impossible to ignore.

To describe the influence caused by temperature, three evaluating indicators are introduced: the zero temperature coefficient (ZTC) $\alpha_0$, the temperature sensitivity coefficient (TSC) $\alpha_s$, and the additional temperature error (ATE) $e_A$. They are calculated by the following equations:

$$ \alpha_0 = \frac{\Delta U_0}{U_{FS} \cdot \Delta T}, \quad (25) $$

$$ \alpha_s = \frac{\Delta U_{\max}}{U_{FS} \cdot \Delta T}, \quad (26) $$

$$ e_A = \frac{\Delta U_{\max}}{U_{FS}} \times 100\%, \quad (27) $$

where $\Delta U_0$ is the maximum output difference when the displacement is 0, $\Delta U_{\max}$ is the maximum output difference at a certain displacement, $\Delta T$ is the temperature change of the ECS, and $U_{FS}$ is the full-range output. The ZTC and TSC were calculated accordingly, and the ATE is 40.43%.
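For illustration, the three indicators can be computed from the measured voltage grid as follows (a sketch with an assumed data layout and an assumed definition of the full-range output):

```python
import numpy as np

def temperature_indicators(U, dT):
    """ZTC, TSC, ATE from equations (25)-(27).

    U  : (n_temp, n_disp) output voltages over the temperature/displacement
         grid, with displacement 0 in column 0 (assumed layout).
    dT : temperature span of the test, deg C.
    """
    U_fs = U.max() - U.min()          # full-range output (assumed definition)
    dU0 = np.ptp(U[:, 0])             # max difference at zero displacement
    dU_max = np.ptp(U, axis=0).max()  # max difference over any displacement
    ztc = dU0 / (U_fs * dT)
    tsc = dU_max / (U_fs * dT)
    ate = dU_max / U_fs * 100.0
    return ztc, tsc, ate

U = np.random.default_rng(7).uniform(1.0, 4.0, (8, 13))  # 8 temperatures x 13 displacements
print(temperature_indicators(U, dT=35.0))
```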

Traditionally, least-squares-based linear calibration is applied to decrease the error. After linear calibration, both the ZTC and the TSC are reduced, and the ATE falls to 10.17%. A traditional simple linear calibration model cannot deliver acceptable results in real-world applications, because the deterministic error is still nonnegligible. Hence, there is room for a better compensation method.

5. Data Compensation Experiment and Result Analysis

5.1. Response and Analysis

To verify the optimization ability of the proposed approach, various approaches suggested in the literature were investigated in the experiment: second-order BPR, third-order BPR, fourth-order BPR, LSSVM, BP, RBF, particle swarm optimization-based least square support vector machine (PSO-LSSVM) [23], particle swarm optimization-based backpropagation (PSO-BP) [24], particle swarm optimization-based radial basis function (PSO-RBF) [25], SSA-optimized LSSVM (SSA-LSSVM), SSA-optimized BPNN (SSA-BP), SSA-optimized RBFNN (SSA-RBF), ISSA-optimized LSSVM (ISSA-LSSVM), ISSA-optimized BPNN (ISSA-BP), and ISSA-optimized RBFNN (ISSA-RBF). Since the optimization algorithms concerned are population-based techniques, each parameter set is defined as an individual in the group. The mean squared error (MSE) between the ideal displacement and the compensated displacement is regarded as the fitness in the optimization process. For the BPNN [19], the network contains two hidden layers: the first hidden layer has 7 neurons, and the second has 4 neurons.

The parameters to be determined for LSSVM are the penalty coefficient $\gamma$ and the kernel function coefficient $\sigma$; for BPNN and RBFNN, they are the weights and biases of the neurons. All the parameters to be determined for each model constitute the position of an individual, and the algorithms run to find the best position. When an algorithm ends, the best model parameters are read from the position of the best individual it has found. For fairness, all the algorithms were run with the same population size and the same number of iterations; specifically, the population size is 100, and the maximum iteration is 100. The parameter settings of PSO, SSA, and ISSA are listed in Table 1. In the ISSA, the ED value is determined by the performance analysis below.

The whole LSSVM program is realized with the LS-SVMlab toolbox and the existing LIBSVM toolbox in MATLAB [26]. For training, the ratio of the training set to the test set is 4:1; both sets are formed randomly, and their union is the whole data set. Each algorithm was run 30 times in order to eliminate the influence of occasional results. Meanwhile, a cross-validation experiment was conducted to prevent overfitting [27].
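A minimal harness reflecting this protocol (random 4:1 split, repeated runs, MSE statistics) might look as follows; `fit_fn` and `predict_fn` are placeholders for any of the compared models:

```python
import numpy as np

def evaluate(fit_fn, predict_fn, X, y, runs=30, ratio=0.8, seed=0):
    """Random 4:1 train/test splits repeated over several runs; MSE per run."""
    rng = np.random.default_rng(seed)
    mses = []
    for _ in range(runs):
        idx = rng.permutation(len(y))
        cut = int(ratio * len(y))
        tr, te = idx[:cut], idx[cut:]
        model = fit_fn(X[tr], y[tr])
        pred = predict_fn(model, X[te])
        mses.append(np.mean((pred - y[te]) ** 2))
    m = np.array(mses)
    return m.min(), m.max(), m.mean(), m.var()

# Toy usage with a plain linear least-squares model.
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
pred = lambda w, X: X @ w
X = np.random.default_rng(9).uniform(0, 1, (120, 2))
y = X @ np.array([1.5, -0.7])
print(evaluate(fit, pred, X, y))
```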

5.2. Analysis of the ISSA Parameters

Some key parameters directly affect the performance and efficiency of the optimization approach. Hence, the effect of the proportion of elite individuals ED on the proposed ISSA was tested in this set of experiments. To find out how this parameter affects performance, a series of experiments was performed while the other settings, such as population size and number of runs, were kept the same. Since it is impossible to evaluate every possible ED value, ED was varied from 5% to 95%. The results of the experiments for the different parameter values are demonstrated in Table 2 and Figure 4.

As seen from Figure 4 and Table 2, the mean value of the MSE is the smallest when ED = 30%. At the same time, the variance is relatively low, and the minimum of the MSE is also the smallest. As ED increases, the computational cost increases, but there is no significant improvement in performance when ED is larger than 45%. Hence, the proportion of elite individuals ED is taken as 30% in the following experiments.

5.3. Response of Experiment and Analysis

In the experiment, the average values over multiple runs are taken as the final compensation result to better evaluate the performance of the related algorithms. The compensation results of the different algorithms are given in Table 3.

As shown in Table 3, increasing the order of BPR provides only a slight performance improvement for the sensor, and the error remains unacceptable. Among all the models, RBFNN generally describes the nonlinearity of the temperature drift better than the other two models. Moreover, better model parameters can be found by using a swarm intelligence algorithm instead of the gradient descent algorithm; the performance indexes are thereby improved by one or two orders of magnitude. In particular, the ISSA presents much better performance than both SSA and PSO, and satisfactory compensation results are obtained by the optimization algorithms built on the RBFNN framework. Hence, outstanding learning ability and generalization performance are achieved by the proposed ISSA-optimized RBFNN, which makes the best of the advantages of both ISSA and RBFNN. The results indicate that the proposed method decreases the ATE to 0.07%, whereas the ATE decreases at best to 7.25% with the polynomial regression methods, to 0.50% with the BP-based methods, and to 0.03% with the LSSVM-based methods. The proposed method also achieves the lowest ZTC and TSC. In terms of the variance of the results, ISSA-RBF is the lowest, reduced to about half that of SSA-RBF and about two-thirds that of PSO-RBF, and much better than the others, which means that the robustness of the proposed method is the best among the mentioned methods.

The convergence curves of the MSE yielded by the eleven algorithms other than BPR and LSSVM are demonstrated in Figure 5. The ISSA searches for better solutions than PSO and SSA within the same model, and when the other algorithms stagnate, the ISSA can still find new solutions to further reduce the error.

Table 4 shows a robustness analysis of the approaches mentioned above. The minimum, maximum, mean, and variance of the MSE values obtained before are used as indicators. Among the methods involved, ISSA-RBF obtains the best result on all the indicators.

To better evaluate the generalization performance of the proposed ISSA, the compensation results on the training set and the test set obtained with ISSA-RBF are shown in Figure 6. They show that the ISSA-RBF method fits the samples closely without overfitting.

6. Conclusions

An improved compensation optimization method, ISSA-RBF, based on ISSA and RBFNN is put forward in this article to compensate for the nonlinear error caused by temperature variation in the ECS problem. In the temperature compensation model, the RBFNN compensates for the error caused by temperature variation, with the ISSA used to obtain its best parameters. We first introduced the chaos strategy and the elite opposition-based learning strategy into the ISSA to overcome its tendency to become trapped in local optima and to increase its global search ability. Next, the key parameters of the ISSA were analyzed, and suitable parameter settings were obtained. Finally, comparative experiments with alternatives were conducted on actual sample data to assess the optimization ability of the proposed method combining ISSA and RBFNN. The results show that the proposed ISSA approach obtains better results in both accuracy and robustness than the other methods considered in this paper and achieves a remarkable improvement over the original SSA algorithm.

In the future, the stochastic model of the ECS noise will be established in order to provide better performance and decrease the remaining error after compensation. The proposed ISSA will be employed to obtain optimal parameters for the novel compensation model and provide continuous high-accuracy dynamic compensation for the ECS problem.

Data Availability

In this research, we measured our own data to develop the algorithm and used the data to validate our algorithm (contact [email protected]).

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

This work was supported by the Anhui Provincial Natural Science Foundation under Grant nos. 2008085MF197 and 1708085ME132, Key Project of Natural Science Research of Anhui Provincial Department of Education under Grant no. KJ2016A431, and Graduate Innovation Project of Anqing Normal University under Grant no. 2021yjsXSCX104.