Computational Intelligence and Neuroscience


Xingwang Huang, Chaopeng Li, Yunming Pu, Bingyan He, "Gaussian Quantum Bat Algorithm with Direction of Mean Best Position for Numerical Function Optimization", Computational Intelligence and Neuroscience, vol. 2019, Article ID 5652340, 18 pages, 2019. https://doi.org/10.1155/2019/5652340

Gaussian Quantum Bat Algorithm with Direction of Mean Best Position for Numerical Function Optimization

Academic Editor: Rodolfo E. Haber
Received: 19 Sep 2019
Revised: 23 Oct 2019
Accepted: 28 Oct 2019
Published: 16 Nov 2019

Abstract

The quantum-behaved bat algorithm with mean best position directed (QMBA) is a novel variant of the bat algorithm (BA) with good performance. However, QMBA generates all stochastic coefficients with a uniform probability distribution, which provides only a relatively small search range, so it still suffers a certain degree of premature convergence. To help bats escape from local optima, this article proposes a novel Gaussian quantum bat algorithm with mean best position directed (GQMBA), which applies the Gaussian probability distribution to generate random number sequences. Replacing the uniform distribution with the Gaussian distribution when generating random coefficients in GQMBA is an effective technique for avoiding premature convergence. In this article, the combination of QMBA and the Gaussian probability distribution is applied to numerical function optimization. Nineteen benchmark functions are employed to evaluate the accuracy and performance of GQMBA against other algorithms. The experimental results show that, in most cases, the proposed GQMBA algorithm provides better search performance.

1. Introduction

Optimization problems are frequently encountered in a host of real-world areas such as artificial intelligence, computer science, pattern recognition, and information theory. Many actual optimization problems are NP-hard, and searching for their optimal solutions is very difficult: it simply takes too long to solve these NP-hard optimization problems with traditional optimization methods. Therefore, alternative optimization techniques, especially bioinspired metaheuristic or swarm intelligence (SI) optimization algorithms, have attracted growing interest from researchers over the past twenty years, and various optimization algorithms have been proposed or developed, such as particle swarm optimization (PSO) [1, 2], the gravitational search algorithm (GSA) [3], ant colony optimization (ACO) [4], cuckoo search (CS) [5], and the bat algorithm (BA) [6]. These algorithms have been shown to be well suited for problems such as feature selection [7, 8], task scheduling [9], unit commitment [10], artificial neural networks [11], fuzzy control [12], parameter selection and optimization [13, 14], and numerical function optimization [15]. They have also been applied to multiobjective optimization problems [16]. Compared with traditional optimization techniques, these algorithms provide better generalization ability and parallelism. Hence, in many high-dimensional optimization problems, they outperform traditional optimization techniques.

The bat algorithm is a relatively new nature-inspired, swarm-based optimization algorithm that was proposed by Yang in 2010 [6]. The algorithm mimics the foraging behavior of bats to search for optima, and it successfully combines, in a structured way, the merits of several well-known algorithms such as PSO, simulated annealing (SA) [17], and the genetic algorithm (GA) [18]. BA also inherits the simplicity of PSO and has been shown to be more efficient than its predecessors PSO and GA, especially in low-dimensional cases. BA is also easy to implement in various computer languages, so it has been applied to many engineering optimization problems [19]. However, due to its low population diversity, it may suffer premature convergence and become trapped in local optima when solving high-dimensional optimization problems [20]. To address this deficiency, many BA variants have been proposed to improve BA performance, such as CLBA [21], DLBA [22], IBA [23], and HSBA [24]. QMBA [15] is also a variant of BA; it introduces quantum behavior to improve population diversity, and with the direction of the mean best position in the later phase of the search, QMBA converges more quickly. QMBA has been verified to be superior to the original BA and four other BA variants, namely IBA, HSBA, MBA [25], and CBA [26].

However, all stochastic coefficients in the QMBA algorithm are generated using the uniform probability distribution, which provides only a relatively small search range; therefore, QMBA still faces a certain degree of premature convergence. To solve this issue, this article presents a Gaussian-distribution-based variant of the quantum-behaved BA directed by the mean best position, called GQMBA, for numerical function optimization. Employing Gaussian random generation instead of uniform random generation in QMBA is an effective method of enhancing its ability to avoid local solutions. To verify the performance of the Gaussian QMBA (GQMBA) technique, nineteen classical benchmark functions were employed, and the experimental results obtained by GQMBA over 30 trials were compared with those of other algorithms from the literature.

The remainder of this article is structured as follows: Section 2 describes the original bat algorithm, and Section 3 gives an overview of QMBA. Section 4 presents the Gaussian quantum bat algorithm with the direction of mean best position. Section 5 provides the simulation results and a comparison of the proposed technique. Finally, Section 6 draws conclusions and presents future research directions.

2. The Original BA Algorithm

The bat algorithm is a nature-inspired optimization algorithm modeled on the foraging behavior of bats [6]. It is a simple, easy-to-implement, and efficient swarm-based stochastic optimization method. When bats forage, they search for prey and avoid obstacles using echolocation. The original BA employs a frequency-tuning approach to increase the diversification of the swarm while, at the same time, adopting an automatic zooming method that balances global search and local search during the search procedure by simulating the variations in pulse loudness and emission rate of foraging bats. Based on three idealized rules [6], the foraging behavior of bats can be transformed into the bat algorithm. The following paragraphs present the details of BA.

In the original BA, each bat flies toward the prey, that is, toward the current global best position. The frequency f_i, velocity v_i, and position x_i of the artificial bats are updated during the iteration process using the following equations:

f_i = f_min + (f_max − f_min)·β,  (1)
v_i^t = v_i^(t−1) + (x_i^(t−1) − x_*)·f_i,  (2)
x_i^t = x_i^(t−1) + v_i^t,  (3)

where β is a uniform random number in the range [0, 1], f_min and f_max indicate the minimum and maximum frequency, respectively, and x_* denotes the current global best solution. With these equations, the global search capacity of BA is guaranteed.

For the local search, a local random walk strategy is employed to produce a new solution for each bat once a solution has been chosen from among the current best solutions. This strategy can be described as follows:

x_new = x_old + ε·A^t,  (4)

where ε is a uniform random number in the range [−1, 1] that decides the direction of the new solution, and A^t is the average loudness value of all bats at the tth iteration.

During the foraging process, bats gradually adapt their loudness and pulse emission rate in order to locate the prey. The loudness A_i and pulse emission rate r_i are updated in each cycle as follows:

A_i^(t+1) = α·A_i^t,  (5)
r_i^(t+1) = r_i^0·[1 − exp(−γt)],  (6)

where r_i^0 indicates the initial pulse emission rate of the ith artificial bat, and α and γ are constants, with α in the range [0, 1] and γ a positive number (γ > 0). Like the cooling coefficient in SA, α determines the convergence of BA. For simplicity, α = γ = 0.9 is usually adopted in the literature.
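
The loudness and pulse-rate schedules of equations (5) and (6) can be sketched in a few lines of Python (an illustrative sketch: the initial values A0 and r0 are assumptions for demonstration, not prescribed settings):

```python
import math

def loudness_and_rate(t, A0=1.0, r0=0.5, alpha=0.9, gamma=0.9):
    """Evaluate the schedules of equations (5)-(6) after t iterations.

    A0 and r0 are illustrative initial values. Applying equation (5)
    t times gives A^t = A0 * alpha**t; equation (6) gives r directly.
    """
    A_t = A0 * alpha ** t                     # geometric decay of loudness
    r_t = r0 * (1.0 - math.exp(-gamma * t))   # rate saturates toward r0
    return A_t, r_t
```

As t grows, the loudness shrinks toward zero (narrowing the local random walk of equation (4)) while the pulse rate rises toward its initial ceiling, which is exactly the automatic zooming behavior described above.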

The basic procedure of BA is described as the pseudocode illustrated in Algorithm 1.

Initialize the bat population x_i (i = 1, 2, ..., n) and velocities v_i;
 Define pulse frequency f_i, pulse rate r_i, and the loudness A_i;
while t < maximum number of iterations do
  Generate new solutions by adjusting frequency and updating velocities and positions using equations (1)–(3);
  if rand > r_i then
   Select a solution among the best solutions randomly;
   Generate a local solution around the selected best solution using equation (4);
  end if
  if rand < A_i and f(x_i) < f(x_*) then
   Accept the new solutions;
   Increase r_i and reduce A_i using equations (5) and (6);
  end if
  Rank the bats and find the current best x_*;
  t = t + 1;
end while
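
Putting the pieces together, Algorithm 1 can be sketched as a minimal, self-contained Python implementation (a hedged sketch: the sphere objective, bounds, population size, and iteration budget are illustrative assumptions, not the paper's experimental setup):

```python
import random
import math

def sphere(x):
    """Illustrative objective: the sphere function, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def bat_algorithm(obj, dim=5, n=20, iters=200, lb=-10.0, ub=10.0,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=1):
    """Minimal sketch of Algorithm 1 (the original BA)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    A = [1.0] * n                                    # loudness A_i
    r0 = [rng.uniform(0.1, 0.9) for _ in range(n)]   # initial pulse rates r_i^0
    r = list(r0)
    fit = [obj(x) for x in X]
    best = min(range(n), key=lambda i: fit[i])
    x_best, f_best = list(X[best]), fit[best]
    for t in range(1, iters + 1):
        for i in range(n):
            f = f_min + (f_max - f_min) * rng.random()       # eq. (1)
            V[i] = [V[i][d] + (X[i][d] - x_best[d]) * f for d in range(dim)]
            cand = [X[i][d] + V[i][d] for d in range(dim)]   # eqs. (2)-(3)
            if rng.random() > r[i]:                          # local walk, eq. (4)
                A_mean = sum(A) / n
                cand = [x_best[d] + rng.uniform(-1, 1) * A_mean
                        for d in range(dim)]
            cand = [min(max(c, lb), ub) for c in cand]       # keep in bounds
            f_cand = obj(cand)
            if rng.random() < A[i] and f_cand < fit[i]:      # acceptance rule
                X[i], fit[i] = cand, f_cand
                A[i] *= alpha                                # eq. (5)
                r[i] = r0[i] * (1 - math.exp(-gamma * t))    # eq. (6)
            if f_cand < f_best:
                x_best, f_best = list(cand), f_cand
    return x_best, f_best
```

For example, `bat_algorithm(sphere)` returns a near-zero best fitness on this low-dimensional sphere, consistent with the observation that BA is efficient in low-dimensional cases.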

3. The QMBA Algorithm

The original bat algorithm is simple, easy to implement, and quick to converge; hence, it has been applied to many optimization problems. However, BA performs poorly in multimodal cases due to its low population diversity. Through an analysis of the trajectories of artificial bats, Zhu et al. [15] proposed the quantum-behaved bat algorithm with mean best position directed. In QMBA, the quantum-behaved mutation operator increases the diversity of the swarm and helps to avoid premature convergence. Additionally, the mean best position used in the later phase speeds up the convergence of the algorithm. The following paragraphs describe the details of QMBA [15].

QMBA is constructed on the basis of the original BA: the decreasing coefficient A and the increasing coefficient r control the global search and local search, respectively. However, the method used to generate new candidate solutions differs from the original BA. In the new method (equations (7) and (8)), η denotes a random number uniformly distributed in the range [0, 1], the distance between the dth dimension of the current global best position in the swarm and the dth dimension of the position of the ith bat is computed, and rand is a uniform random number in [0, 1]. If the distance is smaller than the threshold TH, the ith bat flies randomly; if the distance is larger than the threshold TH, the ith bat flies toward the current global best position.

For the local search, the random walk strategy is no longer employed. Instead, with a certain mutation probability P_m, some of the bats are mutated with a quantum-behaved operator, which can be described as follows:

x_(i,d)^(t+1) = mbest_d ± μ·|mbest_d − x_(i,d)^t|·ln(1/U),  (9)

where U is a random number in the range [0, 1] generated by the uniform distribution and μ is a self-adaptive, linearly decreasing coefficient defined as

μ = μ_0 − (μ_0 − μ_1)·t/t_max,  (10)

where μ_0 and μ_1 are the initial and final values of μ adopted in QMBA [15]. The mean best position mbest denotes the average of the personal best positions of all artificial bats:

mbest = (1/M)·∑_(i=1)^M p_i,  (11)

where p_i represents the present best position of the ith bat, M indicates the size of the swarm, and D represents the dimension of the problem.
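
The mean best position and the quantum-behaved mutation described above can be sketched as follows (a hedged sketch: the mutation is written in the common QPSO form with a log-scaled step; the exact QMBA equations are given in [15]):

```python
import math
import random

def mean_best(pbest):
    """Mean best position in the sense of equation (11): the coordinate-wise
    average of all personal best positions (pbest is an M x D list of lists)."""
    M, D = len(pbest), len(pbest[0])
    return [sum(p[d] for p in pbest) / M for d in range(D)]

def quantum_mutation(x, mbest, mu, rng=None):
    """Quantum-behaved mutation of one position vector.

    Each coordinate contracts toward (or jumps past) mbest with a step
    scaled by mu and by ln(1/U), U uniform in (0, 1]; the sign is random.
    """
    rng = rng or random.Random()
    new = []
    for d in range(len(x)):
        u = max(rng.random(), 1e-300)   # guard against log(1/0)
        step = mu * abs(mbest[d] - x[d]) * math.log(1.0 / u)
        new.append(mbest[d] + step if rng.random() < 0.5 else mbest[d] - step)
    return new
```

Because ln(1/U) is unbounded, occasional long jumps occur, which is what gives the quantum-behaved operator its diversity-preserving effect.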

If a bat does not mutate with the quantum-behaved operator described above during the local search, its position is instead guided by the mean best position:

x_(i,d)^(t+1) = x_(i,d)^t + ϕ·(mbest_d − x_(i,d)^t),  (12)

where ϕ also denotes a uniformly distributed random number in the range [0, 1].

The pseudocode of QMBA is presented in Algorithm 2.

Initialize the bat population x_i (i = 1, 2, ..., n) and velocities v_i;
Define pulse frequency f_i, pulse rate r_i, and the loudness A_i;
while t < maximum number of iterations do
 for i = 1 to n do
  Generate new solutions by calculating the distance between the bat and the current global best position, updating positions using equations (7) and (8);
  if rand > r_i then
   if rand < P_m then
    Bats fly with quantum behavior, updating positions using equations (9)–(11);
   else
    The mean best position is used to guide the other bats, and positions are updated using equations (11) and (12);
   end if
  end if
  if rand < A_i and f(x_i) < f(x_*) then
   Accept the new solutions;
   Increase r_i and reduce A_i using equations (5) and (6);
  end if
  Rank the bats and find the current best x_*;
 end for
 t = t + 1;
end while

4. The GQMBA Algorithm

Various novel variants of BA have been developed in recent years to improve the performance of the original BA. Most of these variants generate random numbers with a uniform probability distribution. However, several studies have shown that other probability distributions, such as the Gaussian (normal) distribution, can be a good choice for improving the performance of heuristic algorithms [27–29]. In fact, any long-tailed distribution helps increase the step size and distance of the random walk.

In this section, following the same line of research, we present a combination of QMBA and the Gaussian probability distribution, called Gaussian QMBA (GQMBA).

A random generator based on the Gaussian probability distribution with a mean of 0 and a standard deviation of 1 is utilized to initialize the stochastic coefficients of GQMBA. This generator offers a good trade-off between a high probability of numerous small-amplitude moves near the present position and a small probability of higher-amplitude moves. Such random generation allows bats to fly away from the present position and jump out of local optima; it not only promotes the accuracy of the solutions but also improves the robustness of the optimization technique. As described in Section 3, there are three uniformly distributed random sequences in the search process of QMBA. Therefore, with the application of Gaussian random generation, the GQMBA algorithm can cover a wider search space, and the performance of QMBA may be improved.

In this article, stochastic numbers in GQMBA are generated as the absolute value of a Gaussian probability distribution with a mean of zero and a standard deviation of one, that is, G = |N(0, 1)|. The one-dimensional probability density function of G is defined by

f(x) = √(2/π)·exp(−x²/2), x ≥ 0.  (13)
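
A quick way to check this half-normal construction is to sample G = |N(0, 1)| and compare against its density (a minimal sketch using only Python's standard library):

```python
import math
import random

def abs_gauss(rng):
    """Sample G = |N(0, 1)|, the half-normal variate GQMBA draws from."""
    return abs(rng.gauss(0.0, 1.0))

def half_normal_pdf(x):
    """Density of |N(0,1)| per equation (13): sqrt(2/pi)*exp(-x^2/2), x >= 0."""
    return math.sqrt(2.0 / math.pi) * math.exp(-x * x / 2.0) if x >= 0 else 0.0
```

The mean of this half-normal distribution is √(2/π) ≈ 0.80, so a large sample average of `abs_gauss` draws lands close to that value, while `half_normal_pdf` vanishes for negative arguments.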

The combination of QMBA and the Gaussian probability distribution is simple but effective: only three equations need to be modified. The three modifications are described below.

Firstly, the parameter η of equation (7) is modified according to the following equation:

η = |N(0, 1)|,  (14)

where N(0, 1) denotes a standard normal random number.

Secondly, U in equation (9) is also replaced with the absolute value of a Gaussian random number with a mean of zero and unit standard deviation, G = |N(0, 1)|. The quantum-behaved mutation operator is now updated according to

x_(i,d)^(t+1) = mbest_d ± μ·|mbest_d − x_(i,d)^t|·ln(1/G).  (15)

Note that, according to equation (13), G > 0; therefore, 1/G satisfies the domain requirement of the logarithmic function (1/G > 0).

Thirdly, the random number ϕ in equation (12) is modified according to the following equation:

ϕ = |N(0, 1)|.  (16)

Overall, in GQMBA, the current global best solution guides the exploration phase to guarantee convergence, while the Gaussian quantum-behaved mutation operator and the mean best position contribute to the exploitation phase, helping bats escape from local optima and preventing premature convergence.
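
The claim that Gaussian coefficients widen the search can be illustrated numerically: a uniform [0, 1] coefficient never exceeds 1, whereas |N(0, 1)| exceeds 1 roughly 31.7% of the time, so larger jumps occur regularly (an illustrative sketch, not part of the algorithm itself):

```python
import random

def tail_fraction(sampler, threshold, n, rng):
    """Fraction of n draws from sampler(rng) that exceed threshold."""
    return sum(sampler(rng) > threshold for _ in range(n)) / n

rng = random.Random(0)
# Uniform [0, 1] coefficients never exceed 1 ...
uniform_tail = tail_fraction(lambda r: r.random(), 1.0, 50000, rng)
# ... while a substantial fraction of |N(0,1)| draws do, enabling longer jumps
gauss_tail = tail_fraction(lambda r: abs(r.gauss(0.0, 1.0)), 1.0, 50000, rng)
```

This longer tail is precisely what lets the three modified coefficients of equations (14)-(16) push bats farther from the current position than their uniform counterparts ever could.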

Based on the above description, the pseudocode of the GQMBA algorithm is summarized in Algorithm 3.

Initialize the bat population x_i (i = 1, 2, ..., n) and velocities v_i;
Define pulse frequency f_i, pulse rate r_i, and the loudness A_i;
while t < maximum number of iterations do
 for i = 1 to n do
  Generate new solutions by calculating the distance between the bat and the current global best position, updating positions using equations (8) and (14);
  if rand > r_i then
   if rand < P_m then
    Bats fly with quantum behavior, updating positions using equations (10), (11), and (15);
   else
    The mean best position is used to guide the other bats, and positions are updated using equations (11) and (16);
   end if
  end if
  if rand < A_i and f(x_i) < f(x_*) then
   Accept the new solutions;
   Increase r_i and reduce A_i using equations (5) and (6);
  end if
  Rank the bats and find the current best x_*;
 end for
 t = t + 1;
end while

5. Experiments and Discussion

In this section, the nineteen classical benchmark functions illustrated in Tables 1–3 are adopted to test the performance of the GQMBA algorithm. These benchmark functions are commonly employed to evaluate numerical optimization techniques [15, 30–32]. In this article, the nineteen benchmark functions are grouped into three categories. The first category includes seven unimodal functions, which have only one optimal solution and are effective for verifying metaheuristic optimization techniques in terms of convergence speed and exploitation capability. The second category includes six multimodal functions, which have an exponentially increasing number of local minima; these functions are therefore suitable for examining the local-optimum avoidance and exploration capability of algorithms. The third category includes six composite functions, which are very complex combinations of rotated, shifted, and biased multimodal test functions. These composite functions closely resemble real applications and are suitable for benchmarking methods in terms of the balance between global search and local search. In these tables, D denotes the dimension of the solution space, Range is the boundary of the search space, and the global optimum is given in the last column. Thirty independent tests are completed for every benchmark function. All the tests in this study are performed on a PC with an Intel(R) Core(TM) i5-6500 3.20 GHz CPU and 8.0 GB of RAM, and the codes are implemented in MATLAB 2014a.


Table 1: Unimodal benchmark functions.

Function   D    Range           f_min
F1         30   [−100, 100]     0
F2         30   [−10, 10]       0
F3         30   [−100, 100]     0
F4         30   [−100, 100]     0
F5         30   [−30, 30]       0
F6         30   [−100, 100]     0
F7         30   [−1.28, 1.28]   0


Table 2: Multimodal benchmark functions.

Function   D    Range            f_min
F8         30   [−500, 500]
F9         30   [−5.12, 5.12]    0
F10        30   [−32, 32]        0
F11        30   [−600, 600]      0
F12        30   [−50, 50]        0
F13        30   [−50, 50]        0


Table 3: Composite benchmark functions.

Function   D    Range       f_min
F14        30   [−5, 5]     0
F15        30   [−5, 5]     0
F16        30   [−5, 5]     0
F17        30   [−5, 5]     0
F18        30   [−5, 5]     0
F19        30   [−5, 5]     0

Since metaheuristic algorithms are stochastic optimization methods, they must be run at least 10 independent times to produce meaningful statistical results. Besides the mean and standard deviation, statistical tests such as the Wilcoxon rank-sum test should be conducted to verify the significance of the results over all independent runs. In this article, nonparametric Wilcoxon rank-sum tests are conducted to verify whether there is a statistical difference between the results obtained by GQMBA and those obtained by the other algorithms. A p value of less than 0.05 (p < 0.05) means that there is a statistical difference between the performances of the two algorithms, while a p value greater than 0.05 (p ≥ 0.05) denotes that the performances are statistically similar.
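
For reference, the rank-sum test used here can be sketched with the standard library alone (a hedged sketch using the normal approximation without the tie-variance correction; in practice a library routine such as scipy.stats.ranksums would be used):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    A plain-stdlib sketch: ranks the pooled samples (tied values share
    their average rank), forms the rank sum of sample a, and converts
    the standardized statistic to a two-sided p value.
    """
    n1, n2 = len(a), len(b)
    combined = sorted(a + b)
    # Assign average ranks so that tied values share the same rank.
    rank_of = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        rank_of[combined[i]] = (i + j + 1) / 2.0  # mean of ranks i+1 .. j
        i = j
    w = sum(rank_of[v] for v in a)                # rank sum of sample a
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    # Two-sided p value from the standard normal survival function.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

Given two 30-run result vectors, a return value below 0.05 indicates a statistically significant difference, matching the decision rule stated above.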

Considering that QMBA has been verified to be more efficient than other variants of BA [15], GQMBA is compared with the original BA and QMBA as well as a recent metaheuristic algorithm, MFO [30], to verify its efficiency. MFO is a novel nature-inspired heuristic algorithm with highly competitive global search ability on multimodal functions and local search ability on unimodal functions, and it balances global and local search properly. Compared with PSO, GSA, BA, FPA, SMS, FA, and GA, it provides promising and competitive performance [30], so MFO is selected as the comparison algorithm. For all of the swarm-based algorithms mentioned above, the maximum number of iterations is set to 1000 for unimodal and multimodal functions and to 100 for composite functions due to their high complexity, and the population size is set to 50. Table 4 shows the other parameter settings of each algorithm. As presented in Tables 1–3, nineteen classical benchmark functions are employed. The experimental results are presented in Tables 5 and 6 and in Figures 1–19. Note that the best mean (Mean) and best standard deviation (SD) obtained by the four methods for each function are shown in bold.


Table 4: Parameter settings of the algorithms.

Algorithm   Parameter design
BA
QMBA
MFO         Identical to the values in the original article
GQMBA

rand denotes a uniform random number in the range [0, 1].

Table 5: Results obtained by the four algorithms on the benchmark functions.

F      BA                      QMBA                    MFO                     GQMBA
       Mean       SD           Mean       SD           Mean       SD           Mean       SD
F1     3.25e+04   9.53e+03     8.13e−04   2.87e−03     1.67e+03   4.61e+03     1.37e+00   4.07e+00
F2     2.64e+08   1.36e+09     4.70e−02   2.06e−01     3.27e+01   2.27e+01     2.39e−02   3.77e−02
F3     8.15e+04   3.28e+04     7.99e+03   4.97e+03     1.61e+04   8.92e+03     2.62e+03   1.43e+03
F4     6.45e+01   9.17e+00     1.65e+01   5.47e+00     5.77e+01   1.38e+01     4.81e−01   7.13e−01
F5     6.67e+07   3.21e+07     9.72e+01   1.31e+02     2.69e+06   1.46e+07     1.09e+02   1.30e+02
F6     3.14e+04   8.06e+03     1.85e+00   5.70e+00     1.66e+03   3.78e+03     1.43e+00   3.63e+00
F7     6.18e+01   3.69e+01     5.07e−02   2.41e−02     2.12e+00   5.01e+00     5.14e−02   4.52e−02
F8     −3.82e+03  1.41e+03     −6.66e+03  1.61e+03     −8.70e+03  8.74e+02     −6.99e+03  7.49e+02
F9     3.64e+02   3.34e+01     4.25e+01   1.68e+01     1.47e+02   3.23e+01     3.47e+01   1.19e+01
F10    1.99e+01   1.78e−01     2.06e+00   1.29e+00     1.30e+01   9.12e+00     1.53e−02   4.69e−02
F11    2.83e+02   8.03e+01     1.71e−01   3.45e−01     1.51e+01   4.17e+01     8.08e−02   2.31e−01
F12    1.01e+08   8.36e+07     5.93e−01   8.74e−01     2.51e−01   3.44e−01     3.88e−02   8.50e−02
F13    2.38e+08   1.46e+08     5.46e−01   1.56e+00     4.49e−02   1.64e−01     8.59e−02   2.31e−01
F14    1.22e+03   1.12e+02     4.06e+02   2.59e+02     5.08e+02   1.76e+02     3.27e+02   1.91e+02
F15    1.22e+03   1.24e+02     4.98e+02   1.92e+02     6.35e+02   2.46e+02     4.25e+02   1.83e+02
F16    1.59e+03   1.66e+02     1.25e+03   1.96e+02     1.02e+03   2.02e+02     8.66e+02   1.60e+02
F17    1.44e+03   1.00e+02     1.10e+03   7.58e+01     1.16e+03   1.61e+02     9.80e+02   1.46e+02
F18    1.45e+03   1.29e+02     7.44e+02   3.69e+02     5.83e+02   3.46e+02     4.00e+02   2.61e+02
F19    1.39e+03   8.79e+01     1.03e+03   4.32e+01     1.18e+03   6.28e+01     1.04e+03   5.40e+01


Table 6: p values of the Wilcoxon rank-sum tests over all runs.

F      GQMBA         MFO           QMBA          BA
F1     0.0150        0.0176        N/A           3.0199e−11
F2     N/A           5.5727e−10    0.0207        3.0199e−11
F3     N/A           8.8411e−07    3.8053e−07    3.0199e−11
F4     N/A           3.0199e−11    3.0199e−11    3.0199e−11
F5     0.4376        3.3679e−04    N/A           3.0199e−11
F6     N/A           0.3255        0.0133        3.0199e−11
F7     0.2707        5.5611e−04    N/A           3.0199e−11
F8     7.1186e−09    N/A           0.0168        4.1997e−10
F9     N/A           3.0199e−11    0.1188        3.0199e−11
F10    N/A           2.3897e−08    7.7725e−09    1.4110e−09
F11    N/A           0.4290        0.0251        3.0199e−11
F12    N/A           3.5708e−06    4.0840e−05    3.0199e−11
F13    0.0850        N/A           9.7917e−05    3.0199e−11
F14    N/A           5.2640e−04    0.2340        3.0199e−11
F15    N/A           0.0023        0.1055        3.0199e−11
F16    N/A           0.0021        2.1947e−08    3.6897e−11
F17    N/A           4.0840e−05    8.1200e−04    3.0199e−11
F18    N/A           0.0083        1.0407e−04    3.3384e−11
F19    0.3871        2.2273e−09    N/A           4.0772e−11

N/A means not applicable; p values greater than 0.05 have been italicized.

As can be seen from Table 5, GQMBA provides the best performance on most test functions, followed by the QMBA and MFO algorithms, and all three are much better than the BA algorithm. To some extent, these results also demonstrate that BA easily becomes trapped in local minima when the dimensionality of the search space is high.

The GQMBA algorithm obtains the best mean on 14 of the 19 benchmark functions. At the same time, GQMBA provides the most stable solutions on fourteen benchmark functions. The p values in Table 6 illustrate that the superiority of the GQMBA algorithm is statistically significant on 9 benchmark functions, covering the unimodal, multimodal, and composite categories. On most of the remaining functions, GQMBA and QMBA perform statistically similarly, with no significant difference between the two algorithms. From the analysis above, it can be concluded that introducing the Gaussian probability distribution into QMBA is an effective mechanism and that GQMBA has a significant advantage over the other three algorithms in terms of accuracy, stability, and avoidance of local minima.

Figures 1–19 show the average fitness curves obtained by the four algorithms on the nineteen functions. The values presented in these curves are the average function fitness values over 30 independent tests. These figures show that the original BA and MFO converge quickly within a few iterations but easily become trapped in local optima in many cases. They also demonstrate that the convergence speed of GQMBA is similar to that of QMBA, while GQMBA provides better accuracy and prevents premature convergence on most benchmark functions.

To compare performance in terms of computational time, the average computational time of each approach on each test function over 30 independent runs is provided in Table 7. As can be seen from this table, for unimodal and multimodal benchmark functions, MFO is the fastest, and the average computational time of GQMBA is slightly greater than that of QMBA. For composite benchmark functions, the computational time of GQMBA is similar to that of the other approaches. The obtained results confirm that GQMBA achieves better results at a slightly higher computational cost.


Table 7: Average computational time (in seconds) over 30 runs.

Function   BA        MFO        QMBA       GQMBA
F1         1.10036   0.469359   0.889615   0.929227
F2         1.04243   0.524513   0.921171   0.980138
F3         3.0765    2.43098    2.88777    2.95636