Abstract

Nonlinear function fitting is an essential research issue. At present, the main function fitting methods are statistical methods and artificial neural networks, but statistical methods impose many strict limits in application, and the widely used back propagation (BP) neural network has too many parameters to optimize. To address the gaps in existing research, the FOA-GRNN model was proposed and compared with the GRNN, GA-BP, PSO-BP, and BP models on three nonlinear functions, ordered from simple to complex, to verify the accuracy and robustness of the FOA-GRNN. The experimental results showed that the FOA-GRNN had the best fitting precision and the fastest convergence speed; meanwhile, its predictions were stable and reliable on the Mexican Hat and Rastrigin functions. On the most complex function, the Griewank function, the predictions of the FOA-GRNN became unstable and the model did not outperform the GRNN adopting the equal-step-length search method, but its performance remained superior to that of GA-BP, PSO-BP, and BP. The paper presents a new approach to optimizing the parameter of GRNN and also provides a new nonlinear function fitting method with better fitting precision, faster calculation speed, fewer parameters to adjust, and a stronger ability to process small samples. The capacity of FOA for handling highly complex nonlinear functions needs to be further improved and developed in future studies.

1. Introduction

In various fields of science and technology, the relationship between different variables is usually described by a function. Some functional relations can be derived from classical theoretical analysis, which not only provides a theoretical basis for further analysis and research but also facilitates the solution of practical engineering problems. However, for many engineering problems it is difficult to derive the functional expression between variables directly, or, even if an expression can be obtained, the formula is very complex and not conducive to further analysis and calculation. When research requires the functional relationship between such variables, a fitting method can be used to obtain an approximate functional expression by combining the known experimental data with mathematical methods.

According to the fitting problem, fitting methods are generally divided into linear and nonlinear methods. For linear data fitting problems, linear methods usually approximate the experimental data with a set of simple, linearly independent basis functions. However, linear fitting methods impose many strict assumptions in practical applications: the independent variables must be normally distributed, uncorrelated, and independent, and the functional form relating the independent variables to the dependent variable must be specified in advance and be linear [1]. Nonlinear data fitting problems are usually handled in one of two ways. One is to transform the problem into a linear one by variable substitution and then solve it. The other, for problems that cannot be linearized and are more troublesome to process, is to use methods such as artificial neural networks. An artificial neural network has strong mapping and generalization abilities; it can self-organize, self-learn, and fit an arbitrary nonlinear relationship between the dependent variable and the independent variables without an accurate mathematical model [2, 3]. The back propagation (BP) neural network is one of the most popular artificial neural networks, but it needs a large number of data samples for training and also has too many parameters to optimize.

The generalized regression neural network (GRNN) has excellent anti-interference performance, good approximation ability, fast convergence, and the ability to learn autonomously. In addition, the GRNN performs better when treating small-sample or unstable data [4, 5]. Currently, GRNN is widely used in engineering fields [6–10]. This study contributes to existing work by applying GRNN to nonlinear function approximation and comparing it with the linear regression model and the BP neural network to assess the strengths and weaknesses of GRNN.

Although the GRNN does not need a preexisting function model, its SPREAD parameter has a significant influence on prediction performance. Therefore, to select an appropriate SPREAD parameter, researchers have proposed multiple evolutionary algorithms for optimizing neural networks, such as the genetic algorithm (GA) [11, 12], particle swarm optimization (PSO) [13, 14], the tabu search algorithm (TSA) [15, 16], and the ant colony algorithm (ACA) [17, 18].

However, traditional evolutionary algorithms share common disadvantages, such as complex programs, difficulty of understanding, and slow convergence, so the scholar Pan proposed the fruit fly optimization algorithm (FOA) in 2012 [19]. Currently, few studies have applied FOA to the optimization of GRNN. Although an FOA-GRNN model was designed by Yongli Zhang et al. in 2018, it was applied directly to the research issue of technological innovation and was not compared with other algorithms or models to verify its advantages in algorithm performance, such as nonlinear fitting ability and predictive stability [20]. Therefore, this study assesses various intelligent algorithms to test whether FOA is the optimization algorithm with the best fitting precision and the fastest calculation speed, filling this research gap.

This paper utilizes the fruit fly optimization algorithm (FOA) to optimize the SPREAD parameter of GRNN and establishes the FOA-GRNN model. It then designs three nonlinear functions, ordered from simple to complex, and compares FOA-GRNN with the GRNN, GA-BP, PSO-BP, and BP models under the same experimental conditions to verify the accuracy and robustness of the FOA-GRNN model.

The rest of this study is organized as follows. The principles of GRNN, FOA, GA, PSO, and BP are described in Section 2. The simulation experiments are given in Section 3. Finally, conclusions are drawn and future research is suggested in Section 4.

2. Generalized Regression Neural Network

2.1. The Basic Theory of GRNN

In 1991, the American scholar Donald F. Specht proposed the generalized regression neural network (GRNN), a special form of radial basis function (RBF) neural network [21]. GRNN has strong nonlinear mapping ability and is suitable for solving various nonlinear problems. In addition, when the sample data are few, GRNN still has a good prediction effect. At present, GRNN has been widely applied in various fields, such as enterprise operation management, bioengineering, energy demand forecasting, structural analysis, and control and decision-making systems [5, 10, 22–24].

The network structure of GRNN is composed of an input layer, a pattern layer, a summation layer, and an output layer, as shown in Figure 1, where X denotes the network input vector and Y denotes the network output vector.

The number of neurons in the input layer is equal to the dimension of the input vector in the learning samples. Each neuron is a simple distribution unit that passes the input variables directly to the pattern layer.

The number of neurons in the pattern layer equals the number n of learning samples, and each neuron corresponds to one learning sample. The transfer function of the ith neuron in the pattern layer is as follows:

P_i = \exp\left[ -\frac{(X - X_i)^{T}(X - X_i)}{2\sigma^{2}} \right], \quad i = 1, 2, \ldots, n    (1)

In formula (1), P_i is the output of the ith neuron in the pattern layer, σ is the smoothing factor, X is the network input vector, and X_i is the learning sample corresponding to the ith neuron.

The summation layer contains a denominator summation unit and numerator summation units. The denominator summation unit performs an arithmetic summation over the outputs of all neurons in the pattern layer; the connection weights between all pattern-layer neurons and this summation neuron are 1, and the transfer function is shown in formula (2):

S_D = \sum_{i=1}^{n} P_i    (2)

The numerator summation units perform a weighted summation of the outputs of the pattern-layer neurons; the element j of the output sample Y_i serves as the connection weight between pattern-layer neuron i and summation-layer neuron j, as shown in formula (3):

S_{Nj} = \sum_{i=1}^{n} y_{ij} P_i, \quad j = 1, 2, \ldots, k    (3)

where y_{ij} is the jth element of Y_i and k is the dimension of the output vector.

Each neuron j of the output layer divides the output of the corresponding numerator summation unit by the output of the denominator summation unit, yielding the jth element of the network output Y, as shown in formula (4):

y_j = \frac{S_{Nj}}{S_D}, \quad j = 1, 2, \ldots, k    (4)
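
To make formulas (1)–(4) concrete, the following is a minimal NumPy sketch of the GRNN forward pass; the function name and array conventions are illustrative assumptions (the paper's experiments use MATLAB), not the paper's implementation.

```python
import numpy as np

def grnn_predict(X_train, Y_train, X_query, spread):
    """Minimal GRNN forward pass following formulas (1)-(4).
    X_train: (n, m) learning samples; Y_train: (n, k) output samples;
    X_query: (q, m) inputs to predict; spread: smoothing factor sigma."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)   # (X - X_i)^T (X - X_i)
        P = np.exp(-d2 / (2.0 * spread ** 2))     # formula (1): pattern layer
        S_D = np.sum(P)                           # formula (2): denominator summation
        S_N = Y_train.T @ P                       # formula (3): numerator summations
        preds.append(S_N / S_D)                   # formula (4): output layer
    return np.array(preds)
```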

2.2. FOA-GRNN Model

When using GRNN for modeling, once the training samples are determined, the corresponding network structure and the connection weights between neurons are also determined, so the GRNN is very stable; its training process is actually the process of optimizing the smoothing parameter σ. By changing σ, the transfer function of the hidden layer is adjusted to obtain the best regression estimation result. If the value of σ is not appropriate, the GRNN cannot achieve the expected training effect. Because the simulation tool used in this paper is MATLAB, the parameter SPREAD is used to denote the smoothing parameter σ.

In order to obtain the ideal parameter SPREAD and give the prediction model good generalization ability, the fruit fly optimization algorithm (FOA) is adopted to optimize the smoothing parameter SPREAD of the GRNN model.

The fruit fly optimization algorithm (FOA), proposed by the scholar Pan Wen-Chao in 2012, is a global optimization algorithm that simulates the foraging behavior of fruit flies; it has the advantages of a simple structure, low computational complexity, and high execution efficiency [19]. The process of searching for the SPREAD parameter of the GRNN model with FOA is shown in Figure 2.

The specific optimization steps of FOA-GRNN are as follows.

Step 1. Randomly initialize the fruit fly positions and set the iteration number and population size.

Step 2. Randomly set the direction and distance of each fruit fly's flight.

Step 3. Calculate the distance between each fruit fly and the origin, D_i = \sqrt{X_i^{2} + Y_i^{2}}, and the flavor concentration determination value S_i = 1/D_i.

Step 4. Establish the fitness function. Each flavor concentration determination value S represents one fruit fly position and is regarded as one candidate SPREAD parameter of GRNN; it is substituted into the fitness function to obtain the fitness of that position. In this study, the reciprocal of the root mean square error (RMSE) between the predicted and actual values is taken as the fitness function of the fruit fly.

Step 5. Regenerate a new fruit fly population. The best fruit fly with the highest fitness is picked out, and new fruit flies are generated around it; the direction and distance of each new fruit fly are adjusted based on the position of the best fruit fly of the previous generation.

Step 6. Repeat Steps 3 to 5; the fruit flies iteratively search for the optimal parameter, checking whether the flavor concentration is better than that of the previous iteration.

Step 7. Judge whether the terminating condition is met; finally, obtain the optimal SPREAD parameter and the corresponding GRNN model.
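
A minimal Python sketch of Steps 1–7 follows. The swarm geometry, step size, and function names are illustrative assumptions (the paper's implementation is in MATLAB); `fitness` is expected to return 1/RMSE of a GRNN built with the candidate SPREAD, as described in Step 4.

```python
import numpy as np

def foa_optimize_spread(fitness, pop_size=20, n_iter=20, step=1.0):
    # Step 1: random initial position of the swarm center
    x_axis, y_axis = np.random.rand(2)
    best_smell, best_spread = -np.inf, None
    for _ in range(n_iter):                                  # Step 6: iterate
        # Step 2: random flight direction and distance for each fly
        xs = x_axis + step * (2 * np.random.rand(pop_size) - 1)
        ys = y_axis + step * (2 * np.random.rand(pop_size) - 1)
        # Step 3: distance to origin and smell concentration S = 1/D
        d = np.sqrt(xs ** 2 + ys ** 2)
        s = 1.0 / d                                          # candidate SPREADs
        # Step 4: evaluate fitness (1/RMSE) of each candidate SPREAD
        smells = np.array([fitness(si) for si in s])
        k = int(np.argmax(smells))
        # Step 5: move the swarm toward the best fly found so far
        if smells[k] > best_smell:
            best_smell, best_spread = smells[k], s[k]
            x_axis, y_axis = xs[k], ys[k]
    # Step 7: return the optimal SPREAD once iterations are exhausted
    return best_spread
```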

2.3. Other Intelligent Algorithms
2.3.1. BP Neural Network

The back propagation (BP) neural network is a multilayer feedforward network proposed by Rumelhart and McClelland in 1986 and is one of the most widely applied neural network models. It can self-learn, self-organize, and fit any nonlinear function. During training, the BP neural network takes the minimum sum of squared errors as the learning goal and continuously adjusts the weights and thresholds via the steepest descent method to approach the desired output [25]. Because its learning rule is the steepest descent method, the BP neural network easily falls into local optima and may be unable to obtain the global optimum [26].
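
For reference, the steepest descent learning rule described above can be written in its textbook form, where η is the learning rate and d_k and y_k denote the desired and actual outputs (this is the standard formulation, not a detail reproduced from the paper):

E = \frac{1}{2}\sum_{k}\left(d_k - y_k\right)^{2}, \qquad w \leftarrow w - \eta \, \frac{\partial E}{\partial w}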

2.3.2. Genetic Algorithm

The genetic algorithm (GA) is an evolutionary algorithm proposed by the American professor Holland in 1962. Following biological inheritance and the evolutionary mechanism of "survival of the fittest", it searches for the global optimal solution. During optimization, the genetic algorithm first encodes the problem solutions as population individuals and initializes the population randomly, with each individual representing one candidate solution. It then calculates the fitness of each individual, preserves superior individuals with high fitness, and eliminates inferior individuals with low fitness through selection, crossover, and mutation operations; evolution repeats until the best individual is found, which is decoded to obtain the optimal solution [20, 26].
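
As an illustration of this select–crossover–mutate cycle, here is a minimal, hedged Python sketch; the real-valued encoding, truncation selection, and Gaussian perturbation are assumptions rather than the paper's exact scheme, though the rates 0.2 and 0.1 echo the crossover and mutation probabilities reported later in Table 1.

```python
import random

def genetic_algorithm(fitness_fn, init_population, n_generations=20,
                      crossover_rate=0.2, mutation_rate=0.1):
    population = init_population                 # list of chromosomes (lists of floats)
    for _ in range(n_generations):
        # selection: keep the fitter half of the population
        population.sort(key=fitness_fn, reverse=True)
        survivors = population[:len(population) // 2]
        # crossover and mutation produce the next generation
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))    # single-point crossover
            child = a[:cut] + b[cut:] if random.random() < crossover_rate else a[:]
            if random.random() < mutation_rate:
                i = random.randrange(len(child))
                child[i] += random.gauss(0, 0.1)  # small Gaussian perturbation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness_fn)       # best individual found
```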

2.3.3. Particle Swarm Optimization Algorithm

Particle swarm optimization (PSO) is an evolutionary computation technique based on swarm intelligence. It stems from a simulation of bird flocks searching for food and was proposed by Kennedy and Eberhart in 1995. The PSO algorithm maps the solution space to the particle swarm, and each particle is characterized by a fitness, a position, and a velocity. The fitness distinguishes superior from inferior particles; the velocity determines the direction and distance of the particle's movement. The best position a particle has passed is called its "personal best position", and the best position found by the entire swarm is called the "global best position". As particles move in the search space, each particle's movement is guided by its personal best position as well as the global best position; better particles are found around these positions, which are updated until the optimal solution, represented by the global best position, is obtained [13, 27].
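
In the standard textbook formulation (not reproduced from the paper), the velocity and position of particle i at iteration t are updated as follows, where ω is the inertia weight, r_1 and r_2 are random numbers in [0, 1], c_1 and c_2 are the acceleration constants, p_i is the personal best position, and g is the global best position:

v_i^{t+1} = \omega v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g - x_i^{t}\right), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1}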

The weights and thresholds of the BP neural network have a great influence on fitting precision. Therefore, to enhance fitting precision, the genetic algorithm and the particle swarm optimization algorithm are introduced into the BP neural network to obtain the optimal weights and thresholds; finally, the BP neural network optimized by the genetic algorithm (GA-BP) and the BP neural network optimized by particle swarm optimization (PSO-BP) are established.

3. Simulation Experiments

Three nonlinear functions were designed in order from simple to complex, and each prediction model was set to the same experimental conditions. For each nonlinear function, 10,000 data points were randomly generated; the first 8,000 were used as training data and the remaining 2,000 as testing data.
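
The data protocol can be sketched as follows; the helper name, the seed, and the sampling range `lo, hi` (which depends on the test function) are illustrative assumptions.

```python
import numpy as np

def make_dataset(f, lo, hi, n=10_000, n_train=8_000, seed=0):
    """Randomly generate n points for test function f and split them
    into the first n_train training points and the remaining test points."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n, 2))   # random inputs (x, y)
    z = f(X[:, 0], X[:, 1])                # function values
    return (X[:n_train], z[:n_train]), (X[n_train:], z[n_train:])
```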

3.1. Parameters Setting and Performance Criteria

To verify the accuracy and robustness of the FOA-GRNN model, FOA-GRNN is compared with GRNN, GA-BP, PSO-BP, and BP. The experimental conditions are set to be equal: the input and output variables, training and testing datasets, fitness function, population size, and number of loops are the same.

In FOA-GRNN, the SPREAD parameter of GRNN is the optimization target of FOA; one fruit fly position represents one candidate parameter. In GA-BP and PSO-BP, the weights and thresholds of BP are regarded as the population individuals of GA and PSO. The reciprocal of the root mean square error (RMSE) between predicted and actual values is taken as the individual fitness function. The population size and number of evolution generations of each evolutionary algorithm are set to 20 and 20, respectively.

Moreover, the crossover and mutation probabilities of the genetic algorithm are set to 0.2 and 0.1. In PSO, the particle maximum, particle minimum, velocity maximum, velocity minimum, and acceleration constants c1 and c2 are set to 0.55, 0.05, 1, -1, and 1.49445, respectively. For the BP neural network, the network topological structure is set to 2-4-1, the convergence criterion is an RMSE less than or equal to 0.0001, and the iteration maximum, learning rate, and training function are 100, 0.1, and traingd (Table 1).

The root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) are used to assess the fitting precision of the models. The smaller the value of RMSE, MAE, or MAPE, the smaller the forecasting error and the better the model fit, and vice versa.

The formulae of the three performance criteria are as follows:

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}    (5)

\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|    (6)

\mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right| \times 100\%    (7)

where y_i denotes the actual output value, \hat{y}_i denotes the predicted output value, and n is the sample size.
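
Formulas (5)–(7) translate directly into code; the following NumPy sketch (function names are illustrative) computes all three criteria.

```python
import numpy as np

# Direct implementations of formulas (5)-(7); y holds the actual
# output values and y_hat the predicted output values.
def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def mape(y, y_hat):
    return np.mean(np.abs((y - y_hat) / y)) * 100.0
```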

3.2. Experiment 1: Mexican Hat Function

The first nonlinear function for testing is the Mexican Hat function [28, 29], shown in formula (8) and Figure 3.

In formula (8), the range of the variables x and y is .
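
Since formula (8) itself is not reproduced in the text, the following implementation assumes the standard two-dimensional sombrero form z = sin(r)/r with r = √(x² + y²); this is an assumption, not a form confirmed by the paper.

```python
import numpy as np

# Assumed standard sombrero form of the Mexican Hat function;
# formula (8) is not reproduced in the text, so this is illustrative.
def mexican_hat(x, y):
    r = np.maximum(np.sqrt(x ** 2 + y ** 2), 1e-12)  # avoid 0/0 at the origin
    return np.sin(r) / r
```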

To search for the best population individual, FOA-GRNN, GRNN, GA-BP, and PSO-BP evolved for 20 generations; the fitness value of the best individual in each generation appears in Figure 4.

As shown in Figure 4, FOA-GRNN and PSO-BP begin to converge at the 4th and 16th generations, respectively; the GRNN adopting the equal-step-length search method cannot capture the optimal SPREAD parameter, and the GA-BP model converges slowly. The comparison results are shown in Table 2.

In Table 2, the most accurate predictions are given by FOA-GRNN (Total Error = 57.2698, RMSE = 0.0412, MAE = 0.0286, and MAPE = 0.1468), followed by GRNN (Total Error = 138.3150, RMSE = 0.0961, MAE = 0.0692, and MAPE = 0.3719). Owing to the various local extrema of the testing function, the GA-BP (Total Error = 833.1369, RMSE = 0.5256, MAE = 0.4166, and MAPE = 2.0451), PSO-BP (Total Error = 768.0228, RMSE = 0.4824, MAE = 0.3840, and MAPE = 1.7451), and BP (Total Error = 861.4234, RMSE = 0.5436, MAE = 0.4307, and MAPE = 2.3267) models perform poorly. Figure 4 and Table 2 prove that FOA-GRNN has the fastest convergence speed and the best fitting precision.

Moreover, with the same training and testing data, the programs of the FOA-GRNN, GRNN, GA-BP, PSO-BP, and BP models were run repeatedly several times. Each run of the FOA-GRNN and GRNN models returned the same result, but the predicted results of the GA-BP, PSO-BP, and BP models differed within the same test scenario. This indicates that the FOA-GRNN and GRNN models were trained sufficiently and reached the optimal solution, so their predictions were stable and reliable. However, for the GA-BP, PSO-BP, and BP models the sample size was small, thorough training was not achieved, and the optimal solution could not be obtained; these models exhibited an unsaturated and unstable state.

3.3. Experiment 2: Rastrigin Function

The second nonlinear function for testing is the Rastrigin function [30, 31]; its formula and graph are shown in formula (9) and Figure 5.

In formula (9), the range of the variables x and y is .
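
Since formula (9) is not reproduced in the text, the sketch below assumes the standard two-dimensional Rastrigin form; both the form and the typical search domain of [-5.12, 5.12] are assumptions rather than details confirmed by the paper.

```python
import numpy as np

# Assumed standard two-dimensional Rastrigin function; formula (9)
# is not reproduced in the text, so this is illustrative.
def rastrigin(x, y):
    return (20 + x ** 2 - 10 * np.cos(2 * np.pi * x)
               + y ** 2 - 10 * np.cos(2 * np.pi * y))
```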

Figure 6 reveals that FOA-GRNN has the fastest convergence speed and the highest accuracy, while the GRNN adopting the equal-step-length search method cannot capture the optimal SPREAD parameter. The detailed predictive comparison is shown in Table 3.

Table 3 certifies that the most accurate predictions come from FOA-GRNN, followed by GRNN, PSO-BP, GA-BP, and BP. Meanwhile, the FOA-GRNN and GRNN models were stable; PSO-BP, GA-BP, and BP were unsaturated and unstable.

3.4. Experiment 3: Griewank Function

The third nonlinear function for testing is the Griewank function [32, 33]. It is described in formula (10) and Figure 7.

In formula (10), the range of the variables x and y is .
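
Since formula (10) is not reproduced in the text, the sketch below assumes the standard two-dimensional Griewank form; both the form and the typical search domain of [-600, 600] are assumptions rather than details confirmed by the paper.

```python
import numpy as np

# Assumed standard two-dimensional Griewank function; formula (10)
# is not reproduced in the text, so this is illustrative.
def griewank(x, y):
    return (x ** 2 + y ** 2) / 4000.0 - np.cos(x) * np.cos(y / np.sqrt(2.0)) + 1.0
```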

The third testing function is more complex, with more local extreme values. Figure 8 and Table 4 reveal that, in convergence speed and fitting precision, FOA-GRNN is superior to GA-BP and PSO-BP, but it has no advantage over the GRNN adopting the equal-step-length search method; the FOA needs further improvement and development.

4. Conclusion and Discussion

In response to the gaps and shortcomings of current prediction methods, this study established the FOA-GRNN model and compared it with four prediction models raised by previous scholars (GRNN, GA-BP, PSO-BP, and BP). Simulation results showed that the proposed FOA-GRNN model has advantageous properties such as better fitting precision, faster calculation speed, fewer parameters to adjust, and stronger nonlinear fitting capability. The main conclusions are summarized as follows.

First, compared with the BP model, the GRNN model has fewer parameters to adjust and better fitting precision and is more suitable for small samples. The parameters sought by the BP neural network are the whole network's weights and thresholds, so the number of parameters is large. For example, in this study the BP neural network has 2 input nodes, 4 hidden nodes, and 1 output node, so 17 parameters must be sought. Therefore, the BP neural network is difficult to optimize globally, easily falls into local extrema, and converges relatively slowly. Moreover, BP needs a larger training dataset; small or incomplete sample data will lead to insufficient training and uncertain predictions. In contrast, the GRNN model has only one parameter to adjust, so it achieves better fitting precision and faster convergence and is more suitable for small-sample data.

Second, FOA can reach the global optimum more rapidly and greatly enhances the convergence rate of the GRNN model. FOA is an artificial intelligence algorithm put forward from research on and observation of the foraging behavior of fruit fly swarms. Compared with random assignment or step-wise loop searching for the parameter, FOA obtains the best SPREAD parameter faster and improves the convergence speed of the GRNN model.

Third, the fitting precision of the optimized BP neural network models is better than that of the plain BP neural network model. In GA and PSO, population individuals are filtered according to fitness value: superior individuals with high fitness are retained and inferior individuals with low fitness are eliminated; each new generation inherits from, and is also superior to, the previous generation, and the cycle of evolution repeats until the termination conditions are met and the fittest individual is obtained. The GA and PSO algorithms obtain the optimal weights and thresholds of the BP neural network more easily and quickly than a random search. Therefore, the predictions of GA-BP and PSO-BP are more accurate than those of the BP neural network model.

Nevertheless, more research is needed to create opportunities for better predictions. As the nonlinear function becomes more and more complex, the number of local extrema increases and the improvement brought by FOA becomes smaller and smaller, which indicates that the capacity of FOA for treating complex nonlinear functions needs to be further improved and developed. The same problem also exists in GA and PSO. Moreover, the FOA-GRNN model should be further tested and developed through application in other fields of science, mathematics, technology, and engineering.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This research was funded by the National Social Science Fund of China (Grant no. 17BGL202).