Complexity


Research Article | Open Access

Volume 2019 | Article ID 1480392 | 12 pages | https://doi.org/10.1155/2019/1480392

The Improved Antlion Optimizer and Artificial Neural Network for Chinese Influenza Prediction

Academic Editor: Michele Scarpiniti
Received: 22 Feb 2019
Revised: 20 Jun 2019
Accepted: 11 Jul 2019
Published: 05 Aug 2019

Abstract

The antlion optimizer (ALO) is a swarm-based metaheuristic algorithm that mimics the hunting mechanism of antlions in nature. To address the shortcoming that ALO balances exploration and exploitation poorly on some complex optimization problems, and inspired by particle swarm optimization (PSO), the position update of antlions in the elitism operator of ALO is modified, yielding the improved ALO (IALO). The proposed IALO is compared against the sine cosine algorithm (SCA), PSO, the moth-flame optimization algorithm (MFO), the multi-verse optimizer (MVO), and ALO on 23 classic benchmark functions. The experimental results show that IALO outperforms SCA, PSO, MFO, MVO, and ALO in both average values and convergence speed. IALO is then used to optimize the parameters of a BP neural network for predicting Chinese influenza, giving the prediction model IALO-BPNN, which is compared with the models BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN. IALO-BPNN yields smaller errors than the other six models, which indicates that IALO can effectively optimize the weights and biases of a BP neural network for predicting Chinese influenza. Therefore, the proposed IALO is an effective and efficient algorithm for optimization problems.

1. Introduction

Optimization problems exist throughout scientific research and engineering [1–3], in areas such as statistical physics [4, 5], computer science [6], artificial intelligence [7], and pattern recognition [8].

Every optimization problem has at least one global optimal solution and may have several local optimal solutions besides. Researchers usually wish to find the global optimum, and many methods have been created and applied for this purpose; swarm intelligence algorithms in particular give strong support. The genetic algorithm (GA), proposed by Holland in 1992 [9], simulates Darwinian evolution, and particle swarm optimization (PSO), proposed in 1995 [10], simulates the behavior of bird flocks. Both GA and PSO have been constantly improved and applied in many areas, such as complex systems [11], hyperelastic materials [12], radiation detectors [13], and reaction kinetic parameters of biomass pyrolysis [14].

Since then, swarm intelligence algorithms have been constantly proposed and widely applied to find the global optimum of optimization problems in various fields. For example, the moth-flame optimization algorithm (MFO) [15] mimics moths converging towards a light source. The multi-verse optimizer (MVO) [16] is based on three concepts from cosmology: the white hole, the black hole, and the wormhole. The sine cosine algorithm (SCA) [17] creates multiple initial random candidate solutions and makes them fluctuate outwards or towards the best solution using a mathematical model based on sine and cosine functions. The whale optimization algorithm (WOA) [18] mimics the social behavior of humpback whales, and an improvement of it for global optimization consists of three strategies: a chaotic initialization phase, Gaussian mutation, and a chaotic local search with a "shrinking" strategy [19]. The antlion optimizer (ALO) [20] mimics the hunting mechanism of antlions in nature and has been improved and applied to automatic generation control [21], cluster analysis [22], photovoltaic cells [23], power systems [24], and parameter estimation of photovoltaic models [25]. The Harris hawks optimizer (HHO) [26], proposed in 2019, is a population-based, nature-inspired optimization paradigm whose main inspiration is the cooperative behavior and chasing style of Harris hawks in nature, called the surprise pounce; HHO has been used for function optimization and real-world engineering problems.

Swarm intelligence algorithms can also be applied to feature selection (FS). Examples include the gravitational search algorithm (GSA), inspired by Newton's law of gravity and combined with evolutionary crossover and mutation operators [27]; an efficient optimizer based on the simultaneous use of the grasshopper optimization algorithm (GOA), selection operators, and evolutionary population dynamics (EPD) [28]; the binary dragonfly algorithm (BDA) using time-varying transfer functions [29]; and a binary salp swarm algorithm (SSA) with asynchronous updating rules and a new leadership structure [30].

Swarm intelligence algorithms have also been applied to optimize the weights and biases of artificial neural networks for prediction and classification. For example, SCA and GA have been used to optimize the weights and biases of artificial neural networks for predicting the direction of a stock market index [31, 32]. An improved dynamic particle swarm optimization with the AdaBoost algorithm has been used to optimize the parameters of a generalized radial basis function neural network for stock market prediction [33], and an improved exponential decreasing inertia weight particle swarm optimization algorithm has been utilized to optimize the parameters of a radial basis function neural network for air quality index (AQI) prediction [34]. The artificial tree (AT) algorithm was improved and applied to optimize the parameters of an artificial neural network for predicting influenza-like illness [35]. The MVO algorithm was combined with PSO to optimize the parameters of an Elman neural network for classification of endometrial carcinoma from gene expression [36]. Based on Gaussian mutation, which increases the population diversity of MFO, and a chaotic local search, which better exploits the locality of solutions in the flame updating process, the CLSGMFO approach [37] performs function optimization and is combined with a hybrid kernel extreme learning machine (KELM) model for financial prediction. Based on opposition-based learning (OBL) and addressing the drawbacks of GWO, OBLGWO [38] was proposed to tune the parameters of KELM for two real-world problems: the second major selection (SMS) problem and thyroid cancer diagnosis (TCD).

In this paper, inspired by PSO, the position update of antlions in the elitism operator of ALO is improved, giving the improved ALO (IALO). The proposed IALO is compared against SCA, PSO, MFO, MVO, and ALO on 23 classical benchmark functions; IALO proves superior according to the average values and the convergence speeds. IALO is then used to optimize the parameters of a BP neural network for predicting Chinese influenza, and the resulting prediction model, written as IALO-BPNN, is compared with the models BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN. The results illustrate that IALO can effectively optimize the weights and biases of a BP neural network for predicting Chinese influenza. Therefore, the proposed IALO is an effective and efficient algorithm for optimization.

The remainder of the paper is organized as follows. The original ALO is described in Section 2 and the proposed IALO in Section 3. Section 4 compares SCA, PSO, MFO, MVO, ALO, and IALO on 23 benchmark functions, showing better search performance and faster convergence for IALO, and also applies IALO to optimize the parameters of a BP neural network (BPNN) for predicting Chinese influenza. The discussion is presented in Section 5, and the conclusion and future directions are given in Section 6.

2. The Antlion Optimizer

The antlion optimizer (ALO), proposed by Seyedali Mirjalili in 2015, is a nature-inspired algorithm that mimics the hunting mechanism of antlions in nature. There are five main steps of hunting prey in ALO: the random walk of ants, building traps, entrapment of ants in traps, catching prey, and rebuilding traps. In fact, the ALO algorithm mimics the interaction between antlions and ants in the trap, where ants are allowed to move stochastically for food in the search space and antlions hunt the ants using the traps. The matrices M_Ant and M_Antlion save the positions of ants and antlions, respectively.

Let f be the fitness (objective) function during optimization. The matrices M_OA and M_OAL save the fitness values of ants and antlions, respectively.

In ALO algorithm, there exist six operators as follows:

(i) Random Walks of Ants. Ants update their positions with a random walk at every step of optimization. A random walk is based on

X(t) = [0, cumsum(2r(t_1) - 1), cumsum(2r(t_2) - 1), ..., cumsum(2r(t_T) - 1)],

where cumsum calculates the cumulative sum, T is the maximum number of iterations, t shows the current iteration, and r(t) is a stochastic function that equals 1 if rand > 0.5 and 0 otherwise, with rand a random number generated with uniform distribution in the interval [0, 1]. Since every search space has a boundary (range of variable), the random walks are normalized with the following min-max normalization before the position of an ant is updated, so that they stay inside the search space:

X_i^t = (X_i^t - a_i)(d_i^t - c_i^t) / (b_i - a_i) + c_i^t,

where a_i is the minimum of the random walk of the i-th variable, b_i is the maximum of the random walk of the i-th variable, c_i^t is the minimum of the i-th variable at the t-th iteration, and d_i^t indicates the maximum of the i-th variable at the t-th iteration.
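As an illustration, the cumulative-sum random walk and its min-max normalization can be sketched as follows (a minimal sketch; the function names and the 1000-step walk length are illustrative, not from the paper):

```python
import numpy as np

def random_walk(T, rng):
    """ALO-style random walk: cumulative sum of 2*r(t) - 1 over T steps,
    where r(t) is 1 if a uniform draw exceeds 0.5 and 0 otherwise."""
    steps = np.where(rng.random(T) > 0.5, 1.0, -1.0)
    return np.concatenate(([0.0], np.cumsum(steps)))

def normalize_walk(X, c, d):
    """Min-max normalize a walk X into the interval [c, d]."""
    a, b = X.min(), X.max()
    return (X - a) * (d - c) / (b - a) + c

rng = np.random.default_rng(0)
walk = random_walk(1000, rng)
bounded = normalize_walk(walk, -100.0, 100.0)  # kept inside a [-100, 100] range
```

The normalization maps the walk's own minimum and maximum exactly onto the current bounds, which is what keeps ants inside the search space.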

(ii) Trapping in Antlion's Pits. Random walks of ants are affected by antlions' traps. The equations

c_i^t = Antlion_j^t + c^t,    d_i^t = Antlion_j^t + d^t

show that ants randomly walk in a hypersphere defined by the vectors c and d around a selected antlion, where c^t is the vector of the minimum of all variables at the t-th iteration, d^t indicates the vector including the maximum of all variables at the t-th iteration, c_i^t is the minimum of all variables for the i-th ant, d_i^t is the maximum of all variables for the i-th ant, and Antlion_j^t shows the position of the selected j-th antlion at the t-th iteration.

(iii) Building Trap. A roulette wheel in ALO algorithm is employed to select the fitter antlions for catching ants based on the fitness value during optimization.
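A minimal sketch of roulette-wheel selection in a minimization setting (the inversion used to turn smaller-is-better fitness into selection weights is an assumption for illustration; the paper does not spell out its transform):

```python
import numpy as np

def roulette_select(fitness, rng):
    """Fitness-proportionate selection for minimization: invert the fitness
    so that fitter (smaller) antlions get larger selection probability.
    The inversion 1 / (1 + f - f_min) is one simple, assumed choice."""
    fitness = np.asarray(fitness, float)
    weights = 1.0 / (1.0 + fitness - fitness.min())
    p = weights / weights.sum()
    return rng.choice(len(fitness), p=p)

idx = roulette_select(np.array([0.0, 10.0, 10.0]), np.random.default_rng(0))
```

With the fitness values above, the first antlion (fitness 0) is selected far more often than the other two.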

(iv) Sliding Ants towards Antlions. According to their fitness values, antlions build proportionally larger traps, and ants move randomly. Once an ant is in the trap, the antlion shoots sand outward from the center of the pit. This behaviour makes the trapped ant, which is trying to escape, slide down; it is modelled by adaptively decreasing the radius of the ants' random-walk hypersphere. The equations

c^t = c^t / I,    d^t = d^t / I

shrink the radius of the ants' position updates and mimic the sliding of an ant inside the pit, where I is a ratio, c^t is the minimum of all variables at the t-th iteration, and d^t indicates the vector including the maximum of all variables at the t-th iteration.

In (10) and (11), I = 10^w · t/T, where T is the maximum number of iterations and w is a constant defined based on the current iteration (w = 2 when t > 0.1T, w = 3 when t > 0.5T, w = 4 when t > 0.75T, w = 5 when t > 0.9T, and w = 6 when t > 0.95T, shown in Figure 1). Basically, the constant w can adjust the accuracy level of exploitation.
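The stepwise schedule of w and the resulting ratio I can be written out directly (the early-phase value I = 1 before t > 0.1T is an assumption carried over from the original ALO implementation, since this text only quotes the w values):

```python
def shrink_ratio(t, T):
    """Trap-shrinking ratio I = 10^w * t / T used to divide the bounds c and d.
    The w schedule follows the values quoted in the text; I = 1 in the early
    phase is an assumption from the reference ALO code."""
    if t <= 0.10 * T:
        return 1.0
    w = 2
    if t > 0.50 * T: w = 3
    if t > 0.75 * T: w = 4
    if t > 0.90 * T: w = 5
    if t > 0.95 * T: w = 6
    return (10 ** w) * t / T
```

Because I grows with t, dividing c and d by I makes the search hypersphere shrink as iterations proceed, which is how ALO tightens exploitation around promising antlions.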

(v) Catching Prey and Rebuilding the Pit. When an ant reaches the bottom of the pit, it is caught by the antlion. The antlion then updates its position to the latest position of the hunted ant to enhance its chance of catching new prey:

Antlion_j^t = Ant_i^t   if f(Ant_i^t) > f(Antlion_j^t),

where Antlion_j^t shows the position of the selected j-th antlion at the t-th iteration and Ant_i^t indicates the position of the i-th ant at the t-th iteration.

(vi) Elitism. Elitism is an important characteristic of evolutionary algorithms that maintains the best solution(s) obtained at any stage of the optimization process. In the ALO algorithm, the best antlion is considered the elite. Every ant randomly walks around an antlion selected by the roulette wheel and around the elite simultaneously:

Ant_i^t = (R_A^t + R_E^t) / 2,

where R_A^t is the random walk around the antlion selected by the roulette wheel at the t-th iteration, R_E^t is the random walk around the elite at the t-th iteration, and Ant_i^t indicates the position of the i-th ant at the t-th iteration.

Let A be a function that generates the random initial solutions, B be a function that manipulates the initial population provided by A, and C be a function that returns true when the end criterion is satisfied. Based on the above operators, the ALO algorithm is defined as a three-tuple ALO(A, B, C) with

{M_Ant, M_OA, M_Antlion, M_OAL} = A(∅),
{M_Ant, M_Antlion} = B({M_Ant, M_Antlion}),
{true, false} = C({M_Ant, M_Antlion}),

where M_Ant is the matrix of the positions of ants, M_Antlion includes the positions of antlions, M_OA contains the corresponding fitness of ants, and M_OAL has the fitness of antlions.

3. The Improved ALO Algorithm

In PSO, there are two update rules: the velocity update and the position update of a particle. For every particle they read

v_i^{t+1} = w · v_i^t + c_1 r (pbest_i^t - x_i^t) + c_2 r (gbest^t - x_i^t),
x_i^{t+1} = x_i^t + v_i^{t+1},

where v_i^t and x_i^t are the current velocity and position of the i-th particle at the t-th iteration, c_1 and c_2 are acceleration coefficients that control the influence of pbest_i^t and gbest^t on the search process, respectively, r is a random number in [0, 1], pbest_i^t is the best position found by the i-th particle up to the t-th iteration, gbest^t is the best position found among all particles over all iterations so far, and w is the inertia weight, a nonnegative constant less than 1.
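A compact sketch of one PSO iteration under these two update rules (the values w = 0.7 and c1 = c2 = 2.0 are illustrative defaults, not the paper's settings):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration: inertia-weighted velocity update followed by the
    position update. w, c1, c2 here are illustrative defaults."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

# A particle at rest on both its personal and the global best does not move.
x, v = pso_step(np.zeros(3), np.zeros(3), np.zeros(3), np.zeros(3))
```

Both attraction terms vanish when the particle already sits on pbest and gbest, so only the inertia term (here zero) remains; this is the fixed point the swarm contracts towards.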

The structure of the velocity update (18) in PSO is employed as the update rule of the elitism operator of ALO: the particle velocity and position are replaced by the ant position, and pbest and gbest are replaced by the random walks R_A^t and R_E^t of (13), respectively. The improved elitism operator is thus

Ant_i^t = w · Ant_i^{t-1} + c_1 r (R_A^t - Ant_i^{t-1}) + c_2 r (R_E^t - Ant_i^{t-1}),   (20)

and the resulting improved ALO is named IALO.

The concrete steps of the IALO algorithm are as follows.

Step 1. Initialize the populations of ants and antlions randomly. Calculate the fitness of ants and antlions. Find the best antlion and take it as the elite (the determined optimum).

Step 2. For every ant, select an antlion using the roulette wheel, update c and d using (10) and (11), create a random walk and normalize it using (5) and (7), and update the position of the ant using (20).

Step 3. Calculate the fitness of all ants. Replace an antlion with its corresponding ant if the ant becomes fitter, as in (12). Update the elite if an antlion becomes fitter than the elite.

Step 4. If the end criterion is satisfied, return the elite. Otherwise, return to Step 2.
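Steps 1-4 can be sketched end to end. This is a deliberately simplified skeleton, not the paper's implementation: the bounded random walks are approximated by uniform sampling in a shrinking hypersphere, roulette-wheel selection is replaced by uniform selection, a single fixed w is used in the shrink ratio, and the ant update is the original ALO average of the two guided walks rather than the PSO-style rule of Section 3:

```python
import numpy as np

def ialo_sketch(obj, dim, n, T, lb, ub, seed=0):
    """Simplified skeleton of Steps 1-4 (assumptions noted in the lead-in)."""
    rng = np.random.default_rng(seed)
    ants = rng.uniform(lb, ub, (n, dim))
    antlions = rng.uniform(lb, ub, (n, dim))          # Step 1: init + fitness
    fit = np.apply_along_axis(obj, 1, antlions)
    elite = antlions[fit.argmin()].copy()

    for t in range(1, T + 1):
        I = 10 ** 4 * t / T                           # simplified shrink ratio
        radius = (ub - lb) / I
        for i in range(n):                            # Step 2: guided walks
            j = rng.integers(n)                       # stand-in for roulette wheel
            walk_a = antlions[j] + rng.uniform(-radius, radius, dim)
            walk_e = elite + rng.uniform(-radius, radius, dim)
            ants[i] = np.clip((walk_a + walk_e) / 2, lb, ub)
        ant_fit = np.apply_along_axis(obj, 1, ants)   # Step 3: replace if fitter
        better = ant_fit < fit
        antlions[better], fit[better] = ants[better], ant_fit[better]
        if fit.min() < obj(elite):
            elite = antlions[fit.argmin()].copy()
    return elite, obj(elite)                          # Step 4: return the elite

best, val = ialo_sketch(lambda x: float(np.sum(x * x)), dim=5, n=20, T=200,
                        lb=-100.0, ub=100.0)
```

Even this crude sketch drives the sphere function well below its random starting value, because the elite never worsens and the search radius contracts over the iterations.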

4. Experiments

4.1. Experiments on 23 Classic Benchmark Functions
4.1.1. Benchmark Functions

In this section, 23 classic benchmark functions are selected to evaluate the IALO algorithm. Table 1 lists the 9 unimodal functions (F1–F9), 7 multimodal functions (F10–F16), and 7 fixed-dimension multimodal benchmark functions (F17–F23), together with their dimensions, ranges, and minimum function values. The dimensions of the 16 scalable benchmark functions F1–F16 are all taken as 30, their minimum values are all 0 except for functions F10 and F16, and the dimensions of functions F17–F23 are fixed.


Function   Dim   Range            f_min

F1         30    [-100, 100]      0
F2         30    [-10, 10]        0
F3         30    [-100, 100]      0
F4         30    [-100, 100]      0
F5         30    [-30, 30]        0
F6         30    [-100, 100]      0
F7         30    [-1.28, 1.28]    0
F8         30    [-10, 10]        0
F9         30    [-10, 10]        0
F10        30    [-500, 500]      -418.9829 × Dim
F11        30    [-5.12, 5.12]    0
F12        30    [-32, 32]        0
F13        30    [-600, 600]      0
F14        30    [-50, 50]        0
F15        30    [-50, 50]        0
F16        30    [-10, 10]        -1
F17        2     [-65, 65]        1
F18        4     [-5, 5]          0.0003
F19        2     [-5, 5]          -1.0316
F20        2     [-5, 5]          0.398
F21        2     [-2, 2]          3
F22        3     [1, 3]           -3.86
F23        6     [0, 1]           -3.32

As their names imply, every unimodal function has a single optimum, while every multimodal function has more than one optimum; one of the optima is the global optimum and the rest are local optima. Solving an optimization problem therefore means seeking the global optimum while avoiding the local optima. From Table 1, the global optima are 0 except for functions F10, F16, and the fixed-dimension functions: the minimum value of F10 changes with the number of variables (n), the minimum value of F16 is -1, the dimensions of F17–F23 are fixed, and the minimum values of F17–F23 are 1, 0.0003, -1.0316, 0.398, 3, -3.86, and -3.32, respectively. Figure 2 shows typical 2D plots of six of the benchmark functions.

To verify the validity of the proposed algorithm, SCA, PSO, MFO, MVO, and ALO are employed for comparison with IALO. All algorithms in this section are run under the same conditions: each algorithm is run 30 times independently on the 23 classic benchmark functions with a maximum of 1000 iterations per run, and the average value and the standard deviation of the best approximate solution in the last iteration are taken as the criteria.

In (20), we take the parameters as specified in (21), where t is the current iteration and T is the maximum iteration; in particular, the inertia weight w decreases with the iteration number.

In the PSO algorithm, the inertia weight is likewise taken as in (21).

4.1.2. Results on Benchmark Functions

Based on the above settings, we perform the algorithms: SCA, PSO, MFO, MVO, ALO, and IALO for comparison on the 23 classic benchmark functions shown in Table 1.

We calculate the average value (Avg.) and the standard deviation (Std.) of the final-iteration results over the 30 independent runs. The comparative results of SCA, PSO, MFO, MVO, ALO, and IALO on the 23 classic benchmark functions are shown in Table 2.


Function      SCA          PSO          MFO          MVO          ALO          IALO

F1   Avg.  2.4833E-02   1.7029E-07   6.6667E+02   3.5320E-01   1.0361E-05   5.6713E-09
     Std.  8.4745E-02   3.3987E-07   2.5371E+03   9.0486E-02   9.2902E-06   6.8327E-09

F2   Avg.  2.7355E-05   5.6670E+00   3.5009E+01   3.8667E-01   4.0298E+01   2.1869E-06
     Std.  7.0204E-05   6.7889E+00   1.7179E+01   1.0953E-01   4.5819E+01   1.7256E-06

F3   Avg.  4.4292E+03   2.1475E+01   1.4187E+04   4.0091E+01   1.1158E+03   8.1060E-08
     Std.  3.4067E+03   8.6854E+00   1.2322E+04   1.6256E+01   6.9499E+02   6.7920E-08

F4   Avg.  2.2828E+01   7.0944E-01   6.7633E+01   9.3517E-01   1.2097E+01   1.9699E-05
     Std.  1.0885E+01   1.6423E-01   9.8845E+00   3.8576E-01   3.4911E+00   1.3800E-05

F5   Avg.  3.1495E+02   5.9107E+01   1.8190E+04   2.4310E+02   2.1312E+02   6.0874E+00
     Std.  7.8295E+02   4.7783E+01   3.6553E+04   4.7649E+02   3.9933E+02   7.0979E+00

F6   Avg.  4.4585E+00   2.7838E-07   1.3334E+03   3.0566E-01   8.2847E-06   4.3582E-03
     Std.  4.5527E-01   5.3494E-07   3.4577E+03   6.4923E-02   6.6377E-06   4.5995E-03

F7   Avg.  3.6431E-02   4.0953E+00   7.1591E+00   2.0177E-02   9.9794E-02   2.4135E-04
     Std.  3.2811E-02   5.0086E+00   1.4622E+01   9.8788E-03   2.9363E-02   2.2411E-04

F8   Avg.  2.2980E+00   8.3949E+01   4.2009E+04   2.5972E+00   1.8502E+00   3.5681E-01
     Std.  2.9273E+00   1.2624E+02   7.8237E+04   2.8904E+00   2.0096E+00   1.5848E-01

F9   Avg.  1.4830E-02   1.0333E+02   5.0351E+02   3.8931E-01   1.4806E+00   6.5564E-10
     Std.  4.1114E-02   1.4016E+02   5.7668E+02   3.5894E-01   2.4219E+00   9.0072E-10

F10  Avg.  -3.9223E+03  -7.0089E+03  -8.7739E+03  -7.7335E+03  -5.6870E+03  -1.2465E+04
     Std.  2.0066E+02   7.4483E+02   1.0097E+03   7.5803E+02   7.2459E+02   3.1132E+02

F11  Avg.  2.3813E+01   8.5902E+01   1.6498E+02   1.0670E+02   7.9663E+01   4.2863E-09
     Std.  2.9957E+01   3.3029E+01   3.3619E+01   2.7246E+01   2.0180E+01   4.8343E-09

F12  Avg.  1.2218E+01   1.1481E-01   1.3742E+01   1.1243E+00   2.1638E+00   2.1607E-05
     Std.  9.6075E+00   3.5248E-01   8.1575E+00   6.8869E-01   4.8964E-01   1.6013E-05

F13  Avg.  3.0681E-01   1.1976E-02   2.7139E+01   5.7658E-01   1.4141E-02   1.0333E-08
     Std.  2.5543E-01   1.4464E-02   5.3798E+01   8.2168E-02   1.2955E-02   1.0276E-08

F14  Avg.  3.2960E+00   2.6382E-09   7.6727E-01   1.4587E+00   1.0616E+01   1.3128E-03
     Std.  6.3376E+00   5.2924E-09   1.0428E+00   9.7475E-01   4.4172E+00   1.7484E-03

F15  Avg.  2.2449E+03   4.0291E-03   1.3669E+07   6.6053E-02   1.5288E+00   2.0314E-02
     Std.  1.2265E+04   5.3850E-03   7.4867E+07   3.3510E-02   8.0972E+00   1.9561E-02

F16  Avg.  1.4802E-261  0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   -9.9996E-01
     Std.  0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   3.8017E-05

F17  Avg.  1.5607E+00   3.7235E+00   2.8078E+00   9.9800E-01   1.5594E+00   1.4625E+00
     Std.  8.9065E-01   2.7626E+00   2.2124E+00   7.7165E-12   1.0912E+00   9.6225E-01

F18  Avg.  9.3314E-04   3.1081E-03   1.7669E-03   4.0776E-03   4.7203E-03   3.5287E-04
     Std.  3.8535E-04   5.8663E-03   3.5356E-03   7.4136E-03   7.9585E-03   3.4786E-05

F19  Avg.  -1.0316E+00  -1.0316E+00  -1.0316E+00  -1.0316E+00  -1.0316E+00  -1.0316E+00
     Std.  2.1357E-05   0.0000E+00   0.0000E+00   5.7709E-08   8.1190E-14   1.3166E-04

F20  Avg.  3.9870E-01   3.9789E-01   3.9789E-01   3.9789E-01   3.9789E-01   4.0284E-01
     Std.  9.6371E-04   0.0000E+00   0.0000E+00   1.5989E-07   4.0846E-14   7.3331E-03

F21  Avg.  3.0000E+00   3.0000E+00   3.0000E+00   3.0000E+00   3.0000E+00   3.0295E+00
     Std.  1.0124E-05   0.0000E+00   4.8014E-15   4.8579E-07   2.8150E-13   3.3891E-02

F22  Avg.  -3.8551E+00  -3.8625E+00  -3.8628E+00  -3.8628E+00  -3.8628E+00  -3.8541E+00
     Std.  2.6592E-03   1.4390E-03   2.7101E-15   4.9816E-07   2.9913E-14   1.4000E-02

F23  Avg.  -2.8128E+00  -3.2222E+00  -3.2220E+00  -3.2582E+00  -3.2705E+00  -3.1590E+00
     Std.  4.8513E-01   9.4865E-02   5.3410E-02   6.0678E-02   5.9929E-02   1.0387E-01

From Table 2, it can be seen that for all unimodal functions except F6, the average values obtained by IALO over 30 runs are the least among the six algorithms: 5.6713E-09, 2.1869E-06, 8.1060E-08, 1.9699E-05, 6.0874E+00, 2.4135E-04, 3.5681E-01, and 6.5564E-10 for F1–F5 and F7–F9, respectively. That is, IALO obtains the best solutions on F1–F5 and F7–F9. Hence, IALO is better than the comparative algorithms SCA, PSO, MFO, MVO, and ALO in dealing with the 9 unimodal functions.

Table 2 also shows that for all multimodal functions except F14 and F15, the average values obtained by IALO over 30 runs are the least among the six algorithms: -1.2465E+04, 4.2863E-09, 2.1607E-05, 1.0333E-08, and -9.9996E-01 for F10–F13 and F16, respectively. That is, IALO obtains the best solutions on F10–F13 and F16. Hence, IALO is better than the comparative algorithms SCA, PSO, MFO, MVO, and ALO in dealing with the 7 multimodal functions.

In dealing with the 7 fixed-dimension multimodal functions in Table 1, IALO obtains the best solution on function F18 (3.5287E-04), as shown in Table 2. We also observe from Table 2 that PSO attains the minimum average values on F19, F20, and F21 (-1.0316E+00, 3.9789E-01, and 3.0000E+00), that MFO attains the minimum average value on F22 (-3.8628E+00), and that MVO attains the minimum average value on F17 (9.9800E-01).

Therefore, IALO is better overall than SCA, PSO, MFO, MVO, and ALO for function optimization. In addition, to visually compare the performance of IALO with the other five algorithms, the convergence curves of SCA, PSO, MFO, MVO, ALO, and IALO on 15 benchmark functions are plotted in Figure 3. From Figure 3, it can be seen that IALO converges most rapidly to the minimum function value.

To sum up, the proposed IALO outperforms the compared algorithms SCA, PSO, MFO, MVO, and ALO; the results on these various benchmark functions verify that IALO is effective.

4.2. Prediction of Chinese Influenza Based on Optimized Neural Network
4.2.1. Data Sources

Influenza is an acute respiratory infection caused by the influenza virus and is a highly infectious, fast-spreading disease. It is mainly transmitted through droplets in the air, contact between people, or contact with contaminated objects. Knowing the number of influenza patients and the incidence of influenza in advance helps hospitals supply the corresponding medical services for influenza patients. Therefore, it is essential to predict the number of influenza patients and the incidence of influenza every year.

Here, we adopt the influenza data of the whole of China from January 2004 to December 2016, obtained from the website http://www.phsciencedata.cn/Share/ky_sjml.jsp. From this dataset, we can obtain the monthly number of influenza patients, the number of deaths, the incidence of influenza, and the influenza mortality rate. In this paper, the number of influenza patients and the incidence of influenza are used to perform the predictions.

4.2.2. BP Neural Network Optimized by IALO

We use the IALO algorithm to optimize the weights and biases of a BP neural network to create the prediction model, written as IALO-BPNN. In IALO-BPNN, every ant or antlion is mapped to the parameters (weights and biases) of the BP neural network, which determines the dimension of every ant or antlion. For comparison, SCA, PSO, MFO, MVO, and ALO are also employed to optimize the weights and biases of the BP neural network, giving the corresponding models SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN. The fitness function in the SCA, PSO, MFO, MVO, ALO, and IALO algorithms is the mean squared prediction error

fitness = (1/n) Σ (y_i - ŷ_i)²,

where n is the number of the data, y_i is the actual value, and ŷ_i is the predicted value. For convenience, we call BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, ALO-BPNN, and IALO-BPNN model 1, model 2, model 3, model 4, model 5, model 6, and model 7, respectively. We use four evaluation criteria for the models: mean absolute error (MAE), mean squared error (MSE), relative mean squared error (RMSE), and mean absolute percentage error (MAPE):

MAE = (1/n) Σ |y_i - ŷ_i|,
MSE = (1/n) Σ (y_i - ŷ_i)²,
RMSE = (1/n) Σ ((y_i - ŷ_i) / y_i)²,
MAPE = (100%/n) Σ |(y_i - ŷ_i) / y_i|,

where n is the number of data and y_i and ŷ_i denote the actual value and the predicted value of the i-th datum.
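The four criteria can be sketched as follows (the squared-relative-error form used for RMSE is an assumption matching the name "relative mean squared error"; the paper does not spell out that formula):

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """MAE, MSE, relative MSE, and MAPE (in percent).
    The relative-MSE definition is an assumption, as noted in the lead-in."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.mean((err / y_true) ** 2)          # assumed "relative MSE"
    mape = np.mean(np.abs(err / y_true)) * 100.0
    return mae, mse, rmse, mape
```

Note that MAPE and the relative form of RMSE divide by the actual values, so they are only meaningful when no actual value is zero, which holds for the influenza series used here.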

4.2.3. Experimental Results on Influenza Prediction

We use the previous three months' influenza data to predict the fourth month's. We choose the influenza data from January 2004 to June 2016 as the training data and the influenza data from July 2016 to December 2016 as the test data.
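The sliding-window construction described above can be sketched as follows (the series here is a stand-in, not the influenza data):

```python
import numpy as np

def make_windows(series, lag=3):
    """Turn a monthly series into (X, y) pairs: `lag` consecutive values
    form the input and the next value is the prediction target."""
    series = np.asarray(series, float)
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return X, y

series = np.arange(10.0)   # stand-in for the monthly influenza series
X, y = make_windows(series)
```

Each row of X holds three consecutive months and the matching entry of y holds the fourth, which is exactly the input/target pairing the BP neural network is trained on.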

We run model 1 to model 7 to predict the incidence of influenza and the number of influenza patients. In model 1 and in the BPNN parts of models 2 to 7, the number of nodes in the hidden layer is set to 10 and the number of training iterations is set to 5000. In models 2 to 7, the SCA, PSO, MFO, MVO, ALO, and IALO algorithms are run for 500 iterations to optimize the weights and biases of the BP neural network.

These 7 models are each run once, and the predicted values of the number of influenza patients and the incidence of influenza are shown in Tables 3 and 4, respectively. The predictions of IALO-BPNN are closer to the actual values than those of the other models: BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN.


Actual       7893   11309   14254   21953   38319

BPNN         3371   10322   14614   22982    8202
SCA-BPNN    12258   10556   15551   30951   32987
PSO-BPNN     4731    1454   17101   28559   55681
MFO-BPNN    11711   11597   15070   31986   25361
MVO-BPNN    12661    9816   13931   31863   49451
ALO-BPNN     3794   12461   17566   27023   28565
IALO-BPNN   10045    9117   15643   33590   35124


Actual      0.5793   0.8300   1.0462   1.6113   2.8125

BPNN        0.9826   0.8887   1.0951   3.7006   3.4527
SCA-BPNN    0.8857   0.7253   1.1556   2.2747   3.4892
PSO-BPNN    1.0597   0.7585   1.3530   2.3815   2.3470
MFO-BPNN    1.1822   0.8733   0.8771   2.4030   2.4213
MVO-BPNN    0.3419   0.7514   0.9569   2.7613   2.6413
ALO-BPNN    0.2586   0.8221   1.2174   2.4559   2.5905
IALO-BPNN   0.7879   0.8667   1.1750   2.1211   2.0030

Further, we run these 7 models 10 times independently, respectively. The average MAE, MSE, RMSE, and MAPE of predicting the incidence of influenza and the average MAE, MSE, RMSE, and MAPE of predicting the number of influenza patients are obtained, as shown in Tables 5 and 6, respectively.


        Model 1   Model 2   Model 3   Model 4   Model 5   Model 6   Model 7

MAE      0.4950    0.3615    0.5333    0.3639    0.3620    0.3984    0.3254
MSE      0.5869    0.3098    0.5902    0.2686    0.2586    0.3327    0.2158
RMSE     0.1899    0.1419    0.2761    0.1687    0.1592    0.1940    0.0925
MAPE    32.7250   25.7252   39.7732   28.9608   28.3542   30.5593   23.4146


        Model 1      Model 2      Model 3      Model 4      Model 5      Model 6      Model 7

MAE     7448.0414    5746.8041    5968.9764    5748.0832    4056.4442    4648.9527    4659.0447
MSE     1.3140E+08   6.4134E+07   6.7473E+07   6.6312E+07   3.0765E+07   4.2187E+07   4.9102E+07
RMSE    0.3424       0.2004       0.2291       0.1939       0.1427       0.1504       0.1050
MAPE    40.4356      32.0196      34.8283      31.8759      25.1653      27.4949      24.6722

From Table 5, it is observed that the average MAE, MSE, RMSE, and MAPE of predicting the incidence of influenza obtained by IALO-BPNN are the least, at 0.3254, 0.2158, 0.0925, and 23.4146%, respectively. From Table 6, the average RMSE and MAPE of predicting the number of influenza patients obtained by IALO-BPNN are the least, at 0.1050 and 24.6722%, while the average MAE and MSE of predicting the number of influenza patients are the least for MVO-BPNN, at 4056.4442 and 3.0765E+07, respectively. Overall, the prediction model IALO-BPNN outperforms the other models: BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN. The results show that the IALO algorithm can effectively optimize the weights and biases of a BP neural network for predicting influenza.

5. Discussion

The results of the previous sections show that the proposed IALO is superior on both multidimensional functions and fixed-dimension functions in comparison with five swarm intelligence algorithms (SCA, PSO, MFO, MVO, and ALO) in Section 4.1. IALO is then applied to optimize the parameters of a BP neural network, producing the prediction model IALO-BPNN, which in Section 4.2 gives better predictions than the other models: BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN.

The elitism operator of the proposed IALO is built by borrowing the update structure of PSO, as shown in (20). This improved elitism operator has an immediate impact on the ability of IALO to balance exploration and exploitation in function optimization and influenza prediction. Equation (20) involves several parameters: the inertia weight w and the acceleration factors c_1 and c_2. In the experiments, we take an inertia weight w that decreases with the iteration number. However, the inertia weight can be chosen in different ways, which leads to different optimal solutions and convergence curves in optimization problems and to different results in prediction problems. In addition, a BP neural network is the network optimized by IALO in this paper, but there are many other kinds of neural networks; if the BP neural network is replaced by a different neural network, the results may well change.

6. Conclusion and Future Direction

In this paper, inspired by PSO, the elitism operator in ALO is improved and the improved ALO, written as IALO, is created. IALO has been shown to be suitable for function optimization and influenza prediction. We conduct numerical tests on 23 classic benchmark functions to examine the exploration, exploitation, and convergence behavior of the proposed IALO, whose effectiveness is validated by comparison with SCA, PSO, MFO, MVO, and ALO. The comparative results illustrate that the proposed IALO is superior to SCA, PSO, MFO, MVO, and ALO, and similar conclusions follow from the convergence curves. To further confirm IALO's optimization capability, we use IALO to optimize the weights and biases of a BP neural network for predicting the incidence of influenza and the number of influenza patients, respectively; this yields the prediction model IALO-BPNN, which is compared with the other models: BPNN, SCA-BPNN, PSO-BPNN, MFO-BPNN, MVO-BPNN, and ALO-BPNN. The experimental results show that IALO-BPNN has the least MAE, MSE, RMSE, and MAPE for predicting the incidence of influenza and the least RMSE and MAPE for predicting the number of influenza patients. Therefore, the proposed IALO can serve as a powerful tool both on classic benchmark functions and in combination with artificial neural networks for prediction and classification.

New swarm intelligence algorithms worth exploring will continue to be proposed. For example, IALO can be combined with one or more other swarm intelligence algorithms to form new hybrid algorithms and further improve its performance; more than two swarm intelligence algorithms can also be combined to create hybrid algorithms. Such hybrid algorithms can be applied to function optimization and, further, to solving real-world problems effectively.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no financial conflicts of interest.

Acknowledgments

This work was supported by Shanxi Natural Science Foundation [grant numbers 201801D121026, 201701D121012, 201801D121008, 201701D221121]; the National Natural Science Foundation of China [grant numbers 61774137, 11571324]; the Fund for Shanxi “1331KIRT”; and Shanxi Scholarship Council of China [grant number 2016-088].

References

  1. J. H. Holland, Adaptation in Natural and Artificial Systems, The MIT Press, Cambridge, Mass, USA, 1992.
  2. X. Yao, Evolutionary Computation: Theory and Applications, World Scientific Publishing, Singapore, 1999.
  3. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.
  4. H. Sang, L. Gao, and Q. Pan, “Discrete artificial bee colony algorithm for lot-streaming flowshop with total flowtime minimization,” Chinese Journal of Mechanical Engineering, vol. 25, no. 5, pp. 990–1000, 2012.
  5. S. K. Nseef, S. Abdullah, A. Turky, and G. Kendall, “An adaptive multi-population artificial bee colony algorithm for dynamic optimisation problems,” Knowledge-Based Systems, vol. 104, pp. 14–23, 2016.
  6. V. Ho-Huu, T. Nguyen-Thoi, T. Vo-Duy, and T. Nguyen-Trang, “An adaptive elitist differential evolution for optimization of truss structures with discrete design variables,” Computers & Structures, vol. 165, pp. 59–75, 2016.
  7. G. Bartsch, A. P. Mitra, S. A. Mitra et al., “Use of artificial intelligence and machine learning algorithms with gene expression profiling to predict recurrent nonmuscle invasive urothelial carcinoma of the bladder,” The Journal of Urology, vol. 195, no. 2, pp. 493–498, 2016.
  8. C.-M. Lai, W.-C. Yeh, and Y.-C. Huang, “Entropic simplified swarm optimization for the task assignment problem,” Applied Soft Computing, vol. 58, pp. 115–127, 2017.
  9. J. H. Holland, “Genetic algorithms,” Scientific American, vol. 267, no. 1, pp. 66–73, 1992.
  10. R. Eberhart and J. Kennedy, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, IEEE, Perth, Australia, 1995.
  11. A. V. Mokshin, V. V. Mokshin, and L. M. Sharnin, “Adaptive genetic algorithms used to analyze behavior of complex system,” Communications in Nonlinear Science and Numerical Simulation, vol. 71, pp. 174–186, 2019.
  11. A. V. Mokshin, V. V. Mokshin, and L. M. Sharnin, “Adaptive genetic algorithms used to analyze behavior of complex system,” Communications in Nonlinear Science and Numerical Simulation, vol. 71, pp. 174–186, 2019. View at: Publisher Site | Google Scholar
  12. J. Fernández, J. López-Campos, A. Segade, and J. Vilán, “A genetic algorithm for the characterization of hyperelastic materials,” Applied Mathematics and Computation, vol. 329, pp. 239–250, 2018. View at: Publisher Site | Google Scholar
  13. V. Sanchez-Tembleque, V. Vedia, L. M. Fraile, S. Ritt, and J. M. Udias, “Optimizing time-pickup algorithms in radiation detectors with a genetic algorithm,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 927, pp. 54–62, 2019. View at: Google Scholar
  14. Y. M. Ding, W. L. Zhang, L. Yu, and K. H. Lu, “The accuracy and efficiency of GA and PSO optimization schemes on estimating reaction kinetic parameters of biomass pyrolysis,” Energy, vol. 176, pp. 582–588, 2019. View at: Google Scholar
  15. S. Mirjalili, “Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm,” Knowledge-Based Systems, vol. 89, pp. 228–249, 2015. View at: Publisher Site | Google Scholar
  16. S. Mirjalili, S. M. Mirjalili, and A. Hatamlou, “Multi-verse optimizer: a nature-inspired algorithm for global optimization,” Neural Computing and Applications, vol. 27, no. 2, pp. 495–513, 2016. View at: Publisher Site | Google Scholar
  17. S. Mirjalili, “SCA: a sine cosine algorithm for solving optimization problems,” Knowledge-Based Systems, vol. 96, pp. 120–133, 2016. View at: Google Scholar
  18. S. Mirjalili and A. Lewis, “The whale optimization algorithm,” Advances in Engineering Software, vol. 95, pp. 51–67, 2016. View at: Publisher Site | Google Scholar
  19. J. Luo, H. Chen, A. A. Heidari, Y. Xu, Q. Zhang, and C. Li, “Multi-strategy boosted mutative whale-inspired optimization approaches,” Applied Mathematical Modelling, vol. 73, pp. 109–123, 2019. View at: Publisher Site | Google Scholar
  20. S. Mirjalili, “The ant lion optimizer,” Advances in Engineering Software, vol. 83, pp. 80–98, 2015. View at: Publisher Site | Google Scholar
  21. M. Raju, L. C. Saikia, and N. Sinha, “Automatic generation control of a multi-area system using ant lion optimizer algorithm based PID plus second order derivative controller,” International Journal of Electrical Power & Energy Systems, vol. 80, pp. 52–63, 2016. View at: Publisher Site | Google Scholar
  22. S. K. Majhi and S. Biswal, “Optimal cluster analysis using hybrid K-Means and Ant Lion Optimizer,” Karbala International Journal of Modern Science, vol. 4, no. 4, pp. 347–360, 2018. View at: Publisher Site | Google Scholar
  23. Z. Wu, D. Yu, and X. Kang, “Parameter identification of photovoltaic cell model based on improved ant lion optimizer,” Energy Conversion and Management, vol. 151, pp. 107–115, 2017. View at: Publisher Site | Google Scholar
  24. S. Mouassa, T. Bouktir, and A. Salhi, “Ant lion optimizer for solving optimal reactive power dispatch problem in power systems,” Engineering Science and Technology, an International Journal, vol. 20, no. 3, pp. 885–895, 2017. View at: Publisher Site | Google Scholar
  25. H. L. Chen, S. Jiao, A. A. Heidari, M. J. Wang, X. Chen, and X. H. Zhao, “An opposition-based sine cosine approach with local search for parameter estimation of photovoltaic models,” Energy Conversion and Management, vol. 195, pp. 927–942, 2019. View at: Publisher Site | Google Scholar
  26. A. A. Heidari, S. Mirjalili, H. Faris, I. Aljarah, M. Mafarja, and H. Chen, “Harris hawks optimization: algorithm and applications,” Future Generation Computer Systems, vol. 97, pp. 849–872, 2019. View at: Publisher Site | Google Scholar
  27. M. Taradeh, M. Mafarja, A. A. Heidari et al., “An evolutionary gravitational search-based feature selection,” Information Sciences, vol. 497, pp. 219–239, 2019. View at: Publisher Site | Google Scholar
  28. M. Mafarja, I. Aljarah, A. A. Heidari et al., “Evolutionary population dynamics and grasshopper optimization approaches for feature selection problems,” Knowledge-Based Systems, vol. 145, pp. 25–45, 2018. View at: Publisher Site | Google Scholar
  29. M. Mafarja, I. Aljarah, A. A. Heidari et al., “Binary dragonfly optimization for feature selection using time-varying transfer functions,” Knowledge-Based Systems, vol. 161, pp. 185–204, 2018. View at: Publisher Site | Google Scholar
  30. I. Aljarah, M. Mafarja, A. A. Heidari, H. Faris, Y. Zhang, and S. Mirjalili, “Asynchronous accelerating multi-leader salp chains for feature selection,” Applied Soft Computing, vol. 71, pp. 964–979, 2018. View at: Publisher Site | Google Scholar
  31. H. P. Hu, L. Tang, S. H. Zhang, and H. Y. Wang, “Predicting the direction of stock markets using optimized neural networks with Google Trends,” Neurocomputing, vol. 285, pp. 188–195, 2018. View at: Publisher Site | Google Scholar
  32. M. Qiu and Y. Song, “Predicting the direction of stock market index movement using an optimized artificial neural network model,” PLoS ONE, vol. 11, no. 5, Article ID e0155133, 2016. View at: Publisher Site | Google Scholar
  33. J. Lu, H. Hu, and Y. Bai, “Generalized radial basis function neural network based on an improved dynamic particle swarm optimization and AdaBoost algorithm,” Neurocomputing, vol. 152, pp. 305–315, 2015. View at: Publisher Site | Google Scholar
  34. J. Lu, H. Hu, and Y. Bai, “Radial basis function neural nerwork based on an improved exponential decreasing inertia weight-particle swarm optimizatin algorithm for AQI prediction,” Abstract and Applied Analysis, vol. 2014, Article ID 178313, 9 pages, 2014. View at: Publisher Site | Google Scholar
  35. H. Hu, H. Wang, F. Wang, D. Langley, A. Avram, and M. Liu, “Prediction of influenza-like illness based on the improved artificial tree algorithm and artificial neural network,” Scientific Reports, vol. 8, no. 1, article no. 4895, 2018. View at: Publisher Site | Google Scholar
  36. H. Hu, H. Wang, Y. Bai, and M. Liu, “Determination of endometrial carcinoma with gene expression based on optimized Elman neural network,” Applied Mathematics and Computation, vol. 341, pp. 204–214, 2019. View at: Publisher Site | Google Scholar
  37. Y. Xu, H. Chen, A. A. Heidari et al., “An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks,” Expert Systems with Applications, vol. 129, pp. 135–155, 2019. View at: Publisher Site | Google Scholar
  38. A. A. Heidari, R. Ali Abbaspour, and H. Chen, “Efficient boosted grey wolf optimizers for global search and kernel extreme learning machine training,” Applied Soft Computing, vol. 81, article 105521, 2019. View at: Publisher Site | Google Scholar

Copyright © 2019 Hongping Hu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
