Research Article  Open Access
Hongping Hu, Yangyang Li, Yanping Bai, Juping Zhang, Maoxing Liu, "The Improved Antlion Optimizer and Artificial Neural Network for Chinese Influenza Prediction", Complexity, vol. 2019, Article ID 1480392, 12 pages, 2019. https://doi.org/10.1155/2019/1480392
The Improved Antlion Optimizer and Artificial Neural Network for Chinese Influenza Prediction
Abstract
The antlion optimizer (ALO) is a recent swarm-based metaheuristic algorithm for optimization, which mimics the hunting mechanism of antlions in nature. To address the shortcoming that ALO balances exploration and exploitation poorly on some complex optimization problems, and inspired by particle swarm optimization (PSO), the position-update rule for antlions in the elitism operator of ALO is improved, yielding the improved ALO (IALO). The proposed IALO is compared against the sine cosine algorithm (SCA), PSO, the moth-flame optimization algorithm (MFO), the multi-verse optimizer (MVO), and ALO on 23 classic benchmark functions. The experimental results show that the proposed IALO outperforms SCA, PSO, MFO, MVO, and ALO in terms of average values and convergence speed. The proposed IALO is then used to optimize the parameters of a BP neural network for predicting Chinese influenza, and the resulting prediction model, written as IALO-BPNN, is compared against the models BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN. IALO-BPNN has smaller errors than the other six prediction models, which illustrates that IALO can effectively optimize the weights and biases of a BP neural network for predicting Chinese influenza. Therefore, the proposed IALO is an effective and efficient algorithm for optimization problems.
1. Introduction
Optimization problems exist in scientific research and engineering areas [1–3], such as statistical physics [4, 5], computer science [6], artificial intelligence [7], and pattern recognition [8].
For every optimization problem there is at least one global optimal solution, and there may also be several local optimal solutions. Researchers generally seek the global optimum when solving optimization problems, and many methods have been created for this purpose; swarm intelligence algorithms in particular provide strong support. The genetic algorithm (GA) proposed by Holland in 1992 [9] simulates Darwinian evolution, and particle swarm optimization (PSO), proposed in 1995 [10], simulates the behavior of bird flocks. Both GA and PSO have been continually improved and applied in many areas, such as complex systems [11], hyperelastic materials [12], radiation detectors [13], and the reaction kinetic parameters of biomass pyrolysis [14].
Since then, swarm intelligence algorithms have been continually proposed and widely applied to find the global optimum of optimization problems in various fields. For example, the moth-flame optimization algorithm (MFO) [15] mimics moths converging towards a light source. The multi-verse optimizer (MVO) [16] was proposed on the basis of three concepts in cosmology: the white hole, the black hole, and the wormhole. The sine cosine algorithm (SCA) [17] creates multiple initial random candidate solutions and requires them to fluctuate outwards or towards the best solution using a mathematical model based on sine and cosine functions. The whale optimization algorithm (WOA) [18] mimics the social behavior of humpback whales; an improvement of WOA for global optimization combines three strategies: a chaotic initialization phase, Gaussian mutation, and a chaotic local search with a "shrinking" strategy [19]. The antlion optimizer (ALO) [20] mimics the hunting mechanism of antlions in nature and has been improved and applied to automatic generation control [21], cluster analysis [22], photovoltaic cells [23], power systems [24], and parameter estimation of photovoltaic models [25]. The Harris hawks optimizer (HHO) [26], proposed in 2019, is a novel population-based, nature-inspired optimization paradigm whose main inspiration is the cooperative behavior and chasing style of Harris hawks in nature, called the surprise pounce. HHO has been used for function optimization and real-world engineering problems.
Swarm intelligence algorithms can also be applied to feature selection (FS): examples include the gravitational search algorithm (GSA), inspired by Newton's law of gravity and combined with evolutionary crossover and mutation operators [27]; an efficient optimizer based on the simultaneous use of the grasshopper optimization algorithm (GOA), selection operators, and evolutionary population dynamics (EPD) [28]; the binary dragonfly algorithm (BDA) using time-varying transfer functions [29]; and a binary salp swarm algorithm (SSA) with asynchronous updating rules and a new leadership structure [30].
Swarm intelligence algorithms have also been applied to optimize the weights and biases of artificial neural networks for prediction and classification. For example, SCA and GA have each been used to optimize the weights and biases of an artificial neural network for predicting the direction of a stock market index [31, 32]. An improved dynamic particle swarm optimization with the AdaBoost algorithm has been used to optimize the parameters of a generalized radial basis function neural network for stock market prediction [33], and an improved exponentially decreasing inertia weight PSO algorithm has been utilized to optimize the parameters of a radial basis function neural network for air quality index (AQI) prediction [34]. The artificial tree (AT) algorithm was improved and applied to optimize the parameters of an artificial neural network for predicting influenza-like illness [35]. The MVO algorithm was combined with PSO to optimize the parameters of an Elman neural network for classifying endometrial carcinoma from gene expression data [36]. Based on Gaussian mutation and a chaotic local search, employed respectively to increase the population diversity of MFO and to better exploit the locality of solutions in the flame-updating process, the CLSGMFO approach [37] has been used for function optimization and combined with a hybrid kernel extreme learning machine (KELM) model for financial prediction. Based on opposition-based learning (OBL) and an analysis of the drawbacks of the grey wolf optimizer (GWO), OBLGWO [38] was proposed to tune the parameters of KELM for two real-world problems: the second major selection (SMS) problem and the thyroid cancer diagnosis (TCD) problem.
In this paper, inspired by PSO, the position-update rule of antlions in the elitism operator of ALO is improved, and the improved ALO (IALO) is obtained. The proposed IALO is compared against SCA, PSO, MFO, MVO, and ALO on 23 classical benchmark functions, and the results show that IALO is superior to these five algorithms in terms of average values and convergence speed. IALO is then used to optimize the parameters of a BP neural network for predicting Chinese influenza, and the resulting prediction model, written as IALO-BPNN, is compared with the models BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN. The results illustrate that IALO can effectively optimize the weights and biases of a BP neural network for predicting Chinese influenza. Therefore, the proposed IALO is an effective and efficient algorithm for optimization.
The remainder of the paper is organized as follows. The original ALO is described in Section 2, and the proposed IALO in Section 3. Section 4 presents the comparison of SCA, PSO, MFO, MVO, ALO, and IALO on 23 benchmark functions, showing better search performance and faster convergence for IALO, and also applies IALO to optimize the parameters of a BP neural network (BPNN) for predicting Chinese influenza. The discussion is presented in Section 5, and the conclusion and future directions are given in Section 6.
2. The Antlion Optimizer
The antlion optimizer (ALO), proposed by Seyedali Mirjalili in 2015, is a novel nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. Hunting prey in ALO involves five main steps: the random walk of ants, building traps, entrapment of ants in traps, catching prey, and rebuilding traps. The ALO algorithm mimics the interaction between antlions and ants in the trap, where ants move stochastically in search of food in the search space and antlions hunt the ants using traps. The matrices M_Ant and M_Antlion save the positions of the ants and the antlions, respectively.
Let f be the fitness (objective) function during optimization. The matrices M_OA and M_OAL save the fitness values of the ants and the antlions, respectively.
In ALO algorithm, there exist six operators as follows:
(i) Random Walks of Ants. Ants update their positions with a random walk at every step of optimization. A random walk is based on

X(t) = [0, cumsum(2r(t_1) − 1), cumsum(2r(t_2) − 1), …, cumsum(2r(T) − 1)],

where cumsum calculates the cumulative sum, T is the maximum number of iterations, t shows the current iteration, and r(t) is a stochastic function defined as r(t) = 1 if rand > 0.5 and r(t) = 0 otherwise, with rand a random number generated with uniform distribution in the interval [0, 1]. Since every search space has a boundary (range of variables), in order to keep the random walks inside the search space, the walks are normalized using the following min–max normalization before updating the positions of the ants:

X_i^t = (X_i^t − a_i)(d_i^t − c_i^t) / (b_i − a_i) + c_i^t,

where a_i is the minimum of the random walk of the i-th variable, b_i is the maximum of the random walk of the i-th variable, c_i^t is the minimum of the i-th variable at the t-th iteration, and d_i^t indicates the maximum of the i-th variable at the t-th iteration.
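The random walk and its min–max normalization can be sketched as follows (a minimal illustration in Python with NumPy; the function names and the interval bounds are ours, not the paper's):

```python
import numpy as np

def random_walk(max_iter):
    """Cumulative-sum random walk of one variable: each step is
    2*r(t) - 1, i.e. +1 when rand > 0.5 and -1 otherwise."""
    steps = 2.0 * (np.random.rand(max_iter) > 0.5) - 1.0
    return np.concatenate(([0.0], np.cumsum(steps)))

def normalize_walk(walk, c_t, d_t):
    """Min-max normalization of the walk into the interval [c_t, d_t]
    so that the ant stays inside the current search boundaries."""
    a, b = walk.min(), walk.max()
    if b == a:                      # degenerate walk: collapse to c_t
        return np.full_like(walk, c_t)
    return (walk - a) * (d_t - c_t) / (b - a) + c_t

walk = random_walk(1000)
scaled = normalize_walk(walk, -5.0, 5.0)
```

Note that the normalization maps the walk's own minimum and maximum exactly onto the boundary interval, so the rescaled walk never leaves the search space.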
(ii) Trapping in Antlion’s Pits. The random walks of ants are affected by antlions’ traps. The equations

c_i^t = Antlion_j^t + c^t,   d_i^t = Antlion_j^t + d^t

show that ants randomly walk in a hypersphere defined by the vectors c and d around a selected antlion, where c^t is the vector of the minima of all variables at the t-th iteration, d^t indicates the vector of the maxima of all variables at the t-th iteration, c_i^t is the minimum of all variables for the i-th ant, d_i^t is the maximum of all variables for the i-th ant, and Antlion_j^t shows the position of the selected j-th antlion at the t-th iteration.
(iii) Building Trap. A roulette wheel in ALO algorithm is employed to select the fitter antlions for catching ants based on the fitness value during optimization.
(iv) Sliding Ants towards Antlions. Antlions build traps in proportion to their fitness values, and ants move randomly. Once an ant is in the trap, the antlion shoots sand outwards from the center of the pit. This behaviour slides down the trapped ant that is trying to escape; correspondingly, the radius of the ant’s random-walk hypersphere is decreased adaptively. The equations

c^t = c^t / I,   d^t = d^t / I,      (10), (11)

shrink the radius of the ants’ position updates and mimic the sliding of the ant inside the pit, where I is a ratio, c^t is the vector of the minima of all variables at the t-th iteration, and d^t indicates the vector of the maxima of all variables at the t-th iteration.
In (10) and (11), I = 10^w · t/T, where T is the maximum number of iterations, t is the current iteration, and w is a constant defined based on the current iteration (w = 2 when t > 0.1T, w = 3 when t > 0.5T, w = 4 when t > 0.75T, w = 5 when t > 0.9T, and w = 6 when t > 0.95T, shown in Figure 1). Basically, the constant w adjusts the accuracy level of exploitation.
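The shrinking ratio can be computed as below (a sketch; the threshold schedule follows the description above, and we additionally assume I = 1 before the first threshold, as in the public ALO reference implementation):

```python
def shrink_ratio(t, T):
    """Ratio I = 10^w * t / T: the exponent w grows stepwise with the
    iteration count, tightening the walk boundaries c_t = c_t / I and
    d_t = d_t / I as the search proceeds."""
    if t <= 0.10 * T:          # assumption: no shrinking in early iterations
        return 1.0
    w = 2
    for frac, w_val in ((0.50, 3), (0.75, 4), (0.90, 5), (0.95, 6)):
        if t > frac * T:
            w = w_val
    return (10 ** w) * t / T

early, mid, late = shrink_ratio(50, 1000), shrink_ratio(600, 1000), shrink_ratio(1000, 1000)
```

The ratio grows by orders of magnitude towards the end of the run, which is what forces the late random walks into ever smaller neighbourhoods of the antlions.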
(v) Catching Prey and Rebuilding the Pit. When an ant reaches the bottom of the pit, it is caught by an antlion. The antlion then updates its position to the latest position of the hunted ant to enhance its chance of catching new prey:

Antlion_j^t = Ant_i^t   if Ant_i^t becomes fitter than Antlion_j^t,      (12)

where Antlion_j^t shows the position of the selected j-th antlion at the t-th iteration and Ant_i^t indicates the position of the i-th ant at the t-th iteration.
(vi) Elitism. Elitism is an important characteristic of evolutionary algorithms for maintaining the best solution(s) obtained at any stage of the optimization process. In the ALO algorithm, the best antlion is considered the elite. Every ant randomly walks around both an antlion selected by the roulette wheel and the elite simultaneously:

Ant_i^t = (R_A^t + R_E^t) / 2,      (13)

where R_A^t is the random walk around the antlion selected by the roulette wheel at the t-th iteration, R_E^t is the random walk around the elite at the t-th iteration, and Ant_i^t indicates the position of the i-th ant at the t-th iteration.
Let A be a function that generates the random initial solutions, B manipulate the initial population provided by A, and C return true when the end criterion is satisfied. Based on the operators above, the ALO algorithm is defined as a three-tuple ALO(A, B, C), where the functions A, B, and C are defined as

A: φ → {M_Ant, M_OA, M_Antlion, M_OAL},
B: {M_Ant, M_Antlion} → {M_Ant, M_Antlion},
C: {M_Ant, M_Antlion} → {true, false},

and M_Ant is the matrix of the positions of ants, M_Antlion includes the positions of antlions, M_OA contains the corresponding fitness of ants, and M_OAL has the fitness of antlions.
3. The Improved ALO Algorithm
In PSO, there are two update rules: the velocity update and the position update of the particle. The updated velocity and position of every particle in PSO are

v_i^{t+1} = w v_i^t + c_1 r_1 (pbest_i^t − x_i^t) + c_2 r_2 (gbest^t − x_i^t),      (18)
x_i^{t+1} = x_i^t + v_i^{t+1},      (19)

where v_i^t and x_i^t are the current velocity and position of the i-th particle at the t-th iteration, c_1 and c_2 are acceleration coefficients that control the influence of pbest and gbest on the search process, respectively, r_1 and r_2 are random numbers in [0, 1], pbest_i^t is the current best position of the i-th particle at the t-th iteration, gbest^t is the best position found among all particles over all iterations, and w is the inertia weight, a nonnegative constant less than 1.
The structure of (18) in PSO is employed as the update rule of the elitism operator of ALO: the particle velocity and position are replaced by the ant position Ant_i^t, and pbest and gbest are replaced by the random walks R_A^t and R_E^t of (13), respectively. The improved elitism operator of ALO is thus obtained as

Ant_i^{t+1} = w Ant_i^t + c_1 r_1 (R_A^t − Ant_i^t) + c_2 r_2 (R_E^t − Ant_i^t).      (20)

The improved ALO is named IALO.
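A sketch of the improved update in Python with NumPy (the function name and vector shapes are our own illustration of the substitution described above):

```python
import numpy as np

def ialo_update(ant, R_A, R_E, w, c1, c2, rng=np.random):
    """PSO-style elitism update in the spirit of eq. (20): the ant is
    pulled towards the walk around the roulette-selected antlion (R_A)
    and the walk around the elite (R_E), damped by the inertia weight w."""
    r1 = rng.rand(*ant.shape)
    r2 = rng.rand(*ant.shape)
    return w * ant + c1 * r1 * (R_A - ant) + c2 * r2 * (R_E - ant)

ant = np.zeros(4)
new_ant = ialo_update(ant, np.ones(4), np.ones(4), w=0.7, c1=2.0, c2=2.0)
```

Compared with the plain average of (13), the two stochastic attraction terms let the step size towards the elite and the selected antlion vary independently per dimension.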
The concrete steps of the IALO algorithm are as follows.
Step 1. Initialize the populations of ants and antlions randomly. Calculate the fitness of the ants and antlions. Find the best antlion and take it as the elite (the current optimum).
Step 2. For every ant, select an antlion using the roulette wheel, update c^t and d^t using (10) and (11), create a random walk and normalize it using (5) and (7), and update the position of the ant using (20).
Step 3. Calculate the fitness of all ants. Replace an antlion with its corresponding ant if the ant becomes fitter, as in (12). Update the elite if an antlion becomes fitter than the elite.
Step 4. If the end criterion is satisfied, return the elite. Otherwise, return to Step 2.
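The steps above can be sketched as a compact loop (a simplified illustration, not the authors' implementation: the bounded cumulative-sum walks of (5)–(11) are approximated by uniform sampling inside the shrinking interval around each guiding antlion, and the parameter values, inertia schedule, and random seed are our own assumptions):

```python
import numpy as np

def ialo(fitness, dim, n_agents=20, max_iter=200, lb=-5.0, ub=5.0,
         c1=2.0, c2=2.0):
    """Minimal IALO sketch following Steps 1-4 for a minimization task."""
    rng = np.random.default_rng(0)
    antlions = rng.uniform(lb, ub, (n_agents, dim))
    ants = rng.uniform(lb, ub, (n_agents, dim))
    fit = np.array([fitness(a) for a in antlions])
    elite = antlions[fit.argmin()].copy()              # Step 1: best antlion

    for t in range(1, max_iter + 1):
        # shrinking ratio I = 10^w * t / T (operator (iv) of Section 2)
        w_exp = 1 + sum(t > f * max_iter for f in (0.1, 0.5, 0.75, 0.9, 0.95))
        radius = (ub - lb) / ((10 ** w_exp) * t / max_iter)
        # roulette-wheel weights: fitter (smaller) fitness -> larger weight
        p = fit.max() - fit + 1e-12
        p = p / p.sum()
        w_in = 0.9 - 0.5 * t / max_iter                # decreasing inertia weight
        for i in range(n_agents):                      # Step 2
            j = rng.choice(n_agents, p=p)
            R_A = antlions[j] + rng.uniform(-radius, radius, dim)
            R_E = elite + rng.uniform(-radius, radius, dim)
            r1, r2 = rng.random(dim), rng.random(dim)
            ants[i] = np.clip(w_in * ants[i] + c1 * r1 * (R_A - ants[i])
                              + c2 * r2 * (R_E - ants[i]), lb, ub)
        ant_fit = np.array([fitness(a) for a in ants])  # Step 3
        better = ant_fit < fit
        antlions[better] = ants[better]
        fit[better] = ant_fit[better]
        if fit.min() < fitness(elite):
            elite = antlions[fit.argmin()].copy()
    return elite, fitness(elite)                        # Step 4: return elite

best, best_val = ialo(lambda x: float(np.sum(x ** 2)), dim=5)
```

Because the elite is only ever replaced by a fitter antlion, the returned fitness is monotone non-increasing over the run.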
4. Experiments
4.1. Experiments on 23 Classic Benchmark Functions
4.1.1. Benchmark Functions
In this section, 23 classic benchmark functions are selected to evaluate the IALO algorithm. Table 1 lists 9 unimodal functions, 7 multimodal functions, and 7 fixed-dimension multimodal benchmark functions, together with their concrete expressions, dimensions, and minimum function values. The dimension of the 16 variable-dimension benchmark functions is set to 30, the dimensions of the fixed-dimension functions are fixed, and the minimum values of these classic benchmark functions are all 0 except for a few functions whose minima are given in Table 1.

As their names imply, every unimodal function has a single optimum, whereas every multimodal function has more than one; one of these is the global optimum and the rest are local optima. Solving an optimization problem therefore means seeking the global optimum while avoiding the local optima. From Table 1, the global minima are 0 for all functions except a few, whose nonzero minima (one of which depends on the number of variables) are listed in Table 1. Figure 2 shows typical 2D plots of six of the benchmark functions.
To verify the validity of the proposed algorithm, SCA, PSO, MFO, MVO, and ALO are employed for comparison with IALO. All algorithms in this section are run under the same conditions. Each algorithm is run 30 times independently on the 23 classic benchmark functions with a maximum of 1000 iterations per run, and the average value and the standard deviation of the best approximate solution in the last iteration are taken as the criteria.
In (20), we take the parameters as in (21), where t is the current iteration and T is the maximum number of iterations.
In the PSO algorithm, the inertia weight w is also taken as in (21).
4.1.2. Results on Benchmark Functions
Based on the above settings, we perform the algorithms: SCA, PSO, MFO, MVO, ALO, and IALO for comparison on the 23 classic benchmark functions shown in Table 1.
We calculate the average value (Avg.) and the standard deviation (Std.) of the final iterations over the 30 runs. The comparative results of SCA, PSO, MFO, MVO, ALO, and IALO on the 23 classic benchmark functions are shown in Table 2.

From Table 2, the average values over 30 runs obtained by the IALO algorithm are the smallest among the six algorithms on all unimodal functions except one. That is, IALO obtains the best solutions on eight of the nine unimodal functions. Hence, IALO is better than the comparative algorithms SCA, PSO, MFO, MVO, and ALO in dealing with the 9 unimodal functions.
Table 2 also shows that the average values obtained by IALO are the smallest among the six algorithms on all multimodal functions except one. That is, IALO obtains the best solutions on six of the seven multimodal functions. Hence, IALO is better than SCA, PSO, MFO, MVO, and ALO in dealing with the 7 multimodal functions.
For the 7 fixed-dimension multimodal functions in Table 1, Table 2 shows that IALO obtains the best solution on one function, while PSO and MFO attain the minimum average values on several of the remaining functions and MVO attains the minimum average value on one.
Overall, IALO compares favourably with SCA, PSO, MFO, MVO, and ALO for function optimization. In addition, to compare the performance of IALO with the other five algorithms visually, the convergence curves of SCA, PSO, MFO, MVO, ALO, and IALO on 15 benchmark functions are plotted in Figure 3, which shows that IALO converges fastest to the minimum function value.
To sum up, the proposed IALO outperforms the compared algorithms SCA, PSO, MFO, MVO, and ALO. The results verify the performance of IALO in solving various benchmark functions and show that the proposed algorithm is valid.
4.2. Prediction of Chinese Influenza Based on Optimized Neural Network
4.2.1. Data Sources
Influenza is an acute respiratory infection caused by the influenza virus and is a highly infectious, fast-spreading disease. It is mainly transmitted through droplets in the air, contact between people, or contact with contaminated objects. Reliable estimates of the number of influenza patients and the incidence of influenza help hospitals provide the corresponding medical services for influenza patients. Therefore, it is essential to predict the number of influenza patients and the incidence of influenza every year.
Here, we adopt the influenza data for all of China from January 2004 to December 2016, obtained from the website http://www.phsciencedata.cn/Share/ky_sjml.jsp. The dataset provides, for every month, the number of influenza patients, the number of deaths, the incidence of influenza, and the influenza mortality rate. In this paper, the number of influenza patients and the incidence of influenza are used for prediction.
4.2.2. BP Neural Network Optimized by IALO
We use the IALO algorithm to optimize the weights and biases of a BP neural network to create the prediction model, written as IALO-BPNN. In IALO-BPNN, every ant or antlion is mapped onto the parameters (weights and biases) of the BP neural network, which determines the dimension of every ant or antlion. For comparison, SCA, PSO, MFO, MVO, and ALO are also employed to optimize the weights and biases of the BP neural network; the corresponding models are SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN. The fitness function in the SCA, PSO, MFO, MVO, ALO, and IALO algorithms is defined as

fitness = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)^2,

where n is the number of data points, y_i is the actual value, and ŷ_i is the predicted value. For convenience, we call BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, ALO-BPNN, and IALO-BPNN model 1 to model 7, respectively. We use four evaluation criteria for the models: the mean absolute error (MAE), the mean squared error (MSE), the relative mean squared error (RMSE), and the mean absolute percentage error (MAPE), defined as

MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|,
MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)^2,
RMSE = sqrt( (1/n) Σ_{i=1}^{n} ((y_i − ŷ_i) / y_i)^2 ),
MAPE = (100%/n) Σ_{i=1}^{n} |(y_i − ŷ_i) / y_i|,

where n is the number of data points and y_i and ŷ_i denote the actual value and the predicted value of the i-th data point.
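The four criteria can be sketched directly in Python; the relative form of RMSE (errors scaled by the actual values before squaring) is our reading of the paper's "relative mean squared error" and matches the magnitudes reported later:

```python
import numpy as np

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))

def mse(y, yhat):
    return float(np.mean((y - yhat) ** 2))

def rmse_rel(y, yhat):
    # assumed relative form: scale each error by the actual value
    return float(np.sqrt(np.mean(((y - yhat) / y) ** 2)))

def mape(y, yhat):
    return float(100.0 * np.mean(np.abs((y - yhat) / y)))

y = np.array([100.0, 200.0, 400.0])      # toy actual values (not paper data)
yhat = np.array([110.0, 180.0, 400.0])   # toy predictions
```

Both RMSE (in this relative form) and MAPE are scale-free, which is why they remain comparable across the incidence series and the much larger patient-count series.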
4.2.3. Experimental Results on Influenza Prediction
We use the previous three months' influenza data to predict the fourth month's. The influenza data from January 2004 to June 2016 are used as the training data, and the data from July 2016 to December 2016 as the test data.
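The windowing scheme can be illustrated as follows (the function name is ours, and a toy series stands in for the monthly influenza data):

```python
import numpy as np

def sliding_window(series, n_lags=3):
    """Turn a 1-D series into (X, y) pairs where n_lags consecutive
    values form the input and the following value is the target."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

X, y = sliding_window(range(10))   # 10 "months" of toy data
```

A series of length m yields m − 3 input–target pairs, so the three-lag window fixes the BP network's input layer at three nodes.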
We run model 1 to model 7 to predict the incidence of influenza and the number of influenza patients. In model 1 and in the BPNN parts of models 2 to 7, the number of nodes in the hidden layer is set to 10 and the number of iterations to 5000. In models 2 to 7, the SCA, PSO, MFO, MVO, ALO, and IALO algorithms run for 500 iterations to optimize the weights and biases of the BP neural network.
These 7 models are first run once, and the predicted numbers of influenza patients and incidences of influenza are shown in Tables 3 and 4, respectively. The prediction model IALO-BPNN is better than the other prediction models: BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN.


Further, we run each of these 7 models 10 times independently. The average MAE, MSE, RMSE, and MAPE for predicting the incidence of influenza and for predicting the number of influenza patients are shown in Tables 5 and 6, respectively.


From Table 5, the average MAE, MSE, RMSE, and MAPE for predicting the incidence of influenza obtained by IALO-BPNN are the smallest, at 0.3254, 0.2158, 0.0925, and 23.4146%, respectively. From Table 6, the average RMSE and MAPE for predicting the number of influenza patients obtained by IALO-BPNN are the smallest, at 0.1050 and 24.6722%, respectively, while the average MAE and MSE for predicting the number of influenza patients obtained by MVO-BPNN are the smallest, at 4056.4442 and 3.0765E+07, respectively. Therefore, the prediction model IALO-BPNN outperforms the other prediction models BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN, and the results show that the IALO algorithm can effectively optimize the weights and biases of a BP neural network for predicting influenza.
5. Discussion
The results of the previous sections show that the proposed IALO gives superior results for multidimensional functions and fixed-dimension functions in comparison with five swarm intelligence algorithms, SCA, PSO, MFO, MVO, and ALO, in Section 4.1. IALO is then applied to optimize the parameters of a BP neural network, and the prediction model IALO-BPNN is built; IALO-BPNN yields better predictions than the other models BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN in Section 4.2.
The elitism operator of the proposed IALO is built from the elitism operator of ALO under the inspiration of PSO, as shown in (20). The improved elitism operator has an immediate impact on IALO's ability to balance exploration and exploitation in function optimization and influenza prediction. Equation (20) involves several parameters: the inertia weight w and the acceleration coefficients c_1 and c_2. In the experiments, we take w as a decreasing function of the iteration number. However, the inertia weight can be chosen in different ways, which leads to different optimal solutions and different convergence curves in optimization problems, and to different results in prediction problems. In addition, the BP neural network optimized by IALO in this paper could be replaced by another kind of neural network, which would possibly change the results.
6. Conclusion and Future Direction
In this paper, inspired by PSO, the elitism operator of ALO is improved and the improved ALO, written as IALO, is created. IALO is shown to be suitable for function optimization and influenza prediction. We conduct numerical tests on 23 classic benchmark functions to examine the exploration, exploitation, and convergence behavior of the proposed IALO, whose effectiveness is validated by comparison with SCA, PSO, MFO, MVO, and ALO. The comparative results illustrate that the proposed IALO is superior to SCA, PSO, MFO, MVO, and ALO, and similar conclusions follow from the convergence curves. To further confirm IALO's optimization capability, we use IALO to optimize the weights and biases of a BP neural network for predicting the incidence of influenza and the number of influenza patients, respectively, building the prediction model IALO-BPNN, which is compared with the other prediction models BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN. The experimental results show that the proposed IALO-BPNN has the smallest MAE, MSE, RMSE, and MAPE for predicting the incidence of influenza and the smallest RMSE and MAPE for predicting the number of influenza patients. Therefore, the proposed IALO can serve as a powerful tool both for the classic benchmark functions and, in combination with artificial neural networks, for prediction and classification.
Many worthwhile swarm intelligence algorithms remain to be explored and will continue to be proposed. For example, IALO can be combined with one or more other swarm intelligence algorithms to form new hybrid algorithms that further improve its performance, and more than two swarm intelligence algorithms can likewise be combined. Such hybrid algorithms can then be used for function optimization and to solve real-world problems effectively.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no financial conflicts of interest.
Acknowledgments
This work was supported by Shanxi Natural Science Foundation [grant numbers 201801D121026, 201701D121012, 201801D121008, 201701D221121]; the National Natural Science Foundation of China [grant numbers 61774137, 11571324]; the Fund for Shanxi “1331KIRT”; and Shanxi Scholarship Council of China [grant number 2016088].
References
[1] J. H. Holland, Adaptation in Natural and Artificial Systems, The MIT Press, Cambridge, Mass, USA, 1992.
[2] X. Yao, Evolutionary Computation: Theory and Applications, World Scientific Publishing, Singapore, 1999.
[3] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.
[4] H. Sang, L. Gao, and Q. Pan, “Discrete artificial bee colony algorithm for lot-streaming flowshop with total flowtime minimization,” Chinese Journal of Mechanical Engineering, vol. 25, no. 5, pp. 990–1000, 2012.
[5] S. K. Nseef, S. Abdullah, A. Turky, and G. Kendall, “An adaptive multi-population artificial bee colony algorithm for dynamic optimisation problems,” Knowledge-Based Systems, vol. 104, pp. 14–23, 2016.
[6] V. Ho-Huu, T. Nguyen-Thoi, T. Vo-Duy, and T. Nguyen-Trang, “An adaptive elitist differential evolution for optimization of truss structures with discrete design variables,” Computers & Structures, vol. 165, pp. 59–75, 2016.
[7] G. Bartsch, A. P. Mitra, S. A. Mitra et al., “Use of artificial intelligence and machine learning algorithms with gene expression profiling to predict recurrent nonmuscle invasive urothelial carcinoma of the bladder,” The Journal of Urology, vol. 195, no. 2, pp. 493–498, 2016.
[8] C.-M. Lai, W.-C. Yeh, and Y.-C. Huang, “Entropic simplified swarm optimization for the task assignment problem,” Applied Soft Computing, vol. 58, pp. 115–127, 2017.
[9] J. H. Holland, “Genetic algorithms,” Scientific American, vol. 267, no. 1, pp. 66–73, 1992.
[10] R. Eberhart and J. Kennedy, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, IEEE, Perth, Australia, 1995.
[11] A. V. Mokshin, V. V. Mokshin, and L. M. Sharnin, “Adaptive genetic algorithms used to analyze behavior of complex system,” Communications in Nonlinear Science and Numerical Simulation, vol. 71, pp. 174–186, 2019.
[12] J. Fernández, J. López-Campos, A. Segade, and J. Vilán, “A genetic algorithm for the characterization of hyperelastic materials,” Applied Mathematics and Computation, vol. 329, pp. 239–250, 2018.
[13] V. Sanchez-Tembleque, V. Vedia, L. M. Fraile, S. Ritt, and J. M. Udias, “Optimizing time-pickup algorithms in radiation detectors with a genetic algorithm,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 927, pp. 54–62, 2019.
[14] Y. M. Ding, W. L. Zhang, L. Yu, and K. H. Lu, “The accuracy and efficiency of GA and PSO optimization schemes on estimating reaction kinetic parameters of biomass pyrolysis,” Energy, vol. 176, pp. 582–588, 2019.
[15] S. Mirjalili, “Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm,” Knowledge-Based Systems, vol. 89, pp. 228–249, 2015.
[16] S. Mirjalili, S. M. Mirjalili, and A. Hatamlou, “Multi-verse optimizer: a nature-inspired algorithm for global optimization,” Neural Computing and Applications, vol. 27, no. 2, pp. 495–513, 2016.
[17] S. Mirjalili, “SCA: a sine cosine algorithm for solving optimization problems,” Knowledge-Based Systems, vol. 96, pp. 120–133, 2016.
[18] S. Mirjalili and A. Lewis, “The whale optimization algorithm,” Advances in Engineering Software, vol. 95, pp. 51–67, 2016.
[19] J. Luo, H. Chen, A. A. Heidari, Y. Xu, Q. Zhang, and C. Li, “Multi-strategy boosted mutative whale-inspired optimization approaches,” Applied Mathematical Modelling, vol. 73, pp. 109–123, 2019.
[20] S. Mirjalili, “The ant lion optimizer,” Advances in Engineering Software, vol. 83, pp. 80–98, 2015.
[21] M. Raju, L. C. Saikia, and N. Sinha, “Automatic generation control of a multi-area system using ant lion optimizer algorithm based PID plus second order derivative controller,” International Journal of Electrical Power & Energy Systems, vol. 80, pp. 52–63, 2016.
[22] S. K. Majhi and S. Biswal, “Optimal cluster analysis using hybrid K-means and ant lion optimizer,” Karbala International Journal of Modern Science, vol. 4, no. 4, pp. 347–360, 2018.
[23] Z. Wu, D. Yu, and X. Kang, “Parameter identification of photovoltaic cell model based on improved ant lion optimizer,” Energy Conversion and Management, vol. 151, pp. 107–115, 2017.
[24] S. Mouassa, T. Bouktir, and A. Salhi, “Ant lion optimizer for solving optimal reactive power dispatch problem in power systems,” Engineering Science and Technology, an International Journal, vol. 20, no. 3, pp. 885–895, 2017.
[25] H. L. Chen, S. Jiao, A. A. Heidari, M. J. Wang, X. Chen, and X. H. Zhao, “An opposition-based sine cosine approach with local search for parameter estimation of photovoltaic models,” Energy Conversion and Management, vol. 195, pp. 927–942, 2019.
[26] A. A. Heidari, S. Mirjalili, H. Faris, I. Aljarah, M. Mafarja, and H. Chen, “Harris hawks optimization: algorithm and applications,” Future Generation Computer Systems, vol. 97, pp. 849–872, 2019.
[27] M. Taradeh, M. Mafarja, A. A. Heidari et al., “An evolutionary gravitational search-based feature selection,” Information Sciences, vol. 497, pp. 219–239, 2019.
[28] M. Mafarja, I. Aljarah, A. A. Heidari et al., “Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems,” Knowledge-Based Systems, vol. 145, pp. 25–45, 2018.
[29] M. Mafarja, I. Aljarah, A. A. Heidari et al., “Binary dragonfly optimization for feature selection using time-varying transfer functions,” Knowledge-Based Systems, vol. 161, pp. 185–204, 2018.
[30] I. Aljarah, M. Mafarja, A. A. Heidari, H. Faris, Y. Zhang, and S. Mirjalili, “Asynchronous accelerating multi-leader salp chains for feature selection,” Applied Soft Computing, vol. 71, pp. 964–979, 2018.
[31] H. P. Hu, L. Tang, S. H. Zhang, and H. Y. Wang, “Predicting the direction of stock markets using optimized neural networks with Google Trends,” Neurocomputing, vol. 285, pp. 188–195, 2018.
[32] M. Qiu and Y. Song, “Predicting the direction of stock market index movement using an optimized artificial neural network model,” PLoS ONE, vol. 11, no. 5, Article ID e0155133, 2016.
 J. Lu, H. Hu, and Y. Bai, “Generalized radial basis function neural network based on an improved dynamic particle swarm optimization and AdaBoost algorithm,” Neurocomputing, vol. 152, pp. 305–315, 2015. View at: Publisher Site  Google Scholar
 J. Lu, H. Hu, and Y. Bai, “Radial basis function neural nerwork based on an improved exponential decreasing inertia weightparticle swarm optimizatin algorithm for AQI prediction,” Abstract and Applied Analysis, vol. 2014, Article ID 178313, 9 pages, 2014. View at: Publisher Site  Google Scholar
 H. Hu, H. Wang, F. Wang, D. Langley, A. Avram, and M. Liu, “Prediction of influenzalike illness based on the improved artificial tree algorithm and artificial neural network,” Scientific Reports, vol. 8, no. 1, article no. 4895, 2018. View at: Publisher Site  Google Scholar
 H. Hu, H. Wang, Y. Bai, and M. Liu, “Determination of endometrial carcinoma with gene expression based on optimized Elman neural network,” Applied Mathematics and Computation, vol. 341, pp. 204–214, 2019. View at: Publisher Site  Google Scholar
 Y. Xu, H. Chen, A. A. Heidari et al., “An efficient chaotic mutative mothflameinspired optimizer for global optimization tasks,” Expert Systems with Applications, vol. 129, pp. 135–155, 2019. View at: Publisher Site  Google Scholar
 A. A. Heidari, R. Ali Abbaspour, and H. Chen, “Efficient boosted grey wolf optimizers for global search and kernel extreme learning machine training,” Applied Soft Computing, vol. 81, article 105521, 2019. View at: Publisher Site  Google Scholar
Copyright
Copyright © 2019 Hongping Hu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.