Abstract

Bat Algorithm (BA) is a swarm intelligence algorithm which has been intensively applied to solve academic and real-life optimization problems. However, due to the lack of a good balance between exploration and exploitation, BA sometimes fails to find the global optimum and is easily trapped in local optima. In order to overcome the premature convergence problem and improve the local search ability of BA for optimization problems, we propose an improved BA called OBMLBA. In the proposed algorithm, a modified search equation that uses more information from the search experience is introduced to generate candidate solutions, and a Lévy Flight random walk is incorporated into BA in order to avoid being trapped in local optima. Furthermore, the concept of opposition based learning (OBL) is embedded into BA to enhance the diversity and convergence capability. To evaluate the performance of the proposed approach, 16 benchmark functions have been employed. The experimental results demonstrate the effectiveness and efficiency of OBMLBA for global optimization problems. Comparisons with other BA variants and state-of-the-art algorithms show that the proposed approach significantly improves the performance of BA. The performance of the proposed algorithm on large-scale and real-world optimization problems is not discussed in this paper and will be studied in future work.

1. Introduction

With the development of natural science and technology, a large number of complicated optimization problems arise in a variety of fields, such as engineering, manufacturing, information technology, and economic management. These real-world problems usually have an objective function with many decision variables and constraints and often exhibit discontinuous and nonconvex features [1]. Optimization methods are therefore needed to obtain the optimal solutions of these problems.

Optimization algorithms proposed by researchers can be categorized into two groups: deterministic algorithms and stochastic algorithms. Deterministic algorithms usually require gradient information. They are effective for problems with one global optimum, but they may fail on problems with several local optima or problems for which gradient information is unavailable. Compared with deterministic algorithms, stochastic algorithms require only the information of the objective function [2]. They have been successfully used on many optimization problems characterized as nondifferentiable and nonconvex.

Swarm intelligence algorithms, regarded as a subset of stochastic algorithms, have been widely used to solve optimization problems during the past decades. They are inspired by the collective behavior of swarms in nature. Researchers have observed such behaviors of animals, plants, or humans, analyzed the driving forces behind the phenomena, and then proposed various types of algorithms. For example, the Genetic Algorithm (GA) [3] and Differential Evolution (DE) [4] were developed by drawing inspiration from biological evolution and natural selection. Particle Swarm Optimization (PSO) [5] was proposed by simulating the social and cognitive behavior of fish or bird swarms. Ant Colony Optimization (ACO) [6] and the Artificial Bee Colony (ABC) algorithm [7] were inspired by the foraging behavior of ants and bees. Cat Swarm Optimization (CSO) [8], the Cuckoo Search algorithm (CS) [9], and the Bat Algorithm (BA) [10] were likewise inspired by intelligent features in the behavior of swarms.

The Bat Algorithm, proposed by Yang in 2010, is a swarm intelligence algorithm inspired by the echolocation behavior of bats. Echolocation is a type of sonar used by bats to detect prey, hunt, and avoid obstacles. With the help of echolocation, bats can not only detect the distance to prey but also identify its shape, position, and angle [10]. Compared with other existing swarm intelligence algorithms, BA has advantages such as fewer control parameters, excellent global optimization ability, and implementation simplicity, and it has shown excellent performance on some classes of optimization problems. Due to its simplicity, convergence speed, and population-based nature, it has been intensively used to solve academic and real-life problems, such as multiobjective optimization [11], engineering optimization [12], cluster analysis [13], scheduling problems [14], structure optimization [15], image processing [16], manufacturing design [17], and various other problems.

It has been shown that BA is an excellent and powerful algorithm for global optimization problems, discrete optimization problems, and constrained optimization problems. However, due to the lack of a good balance between exploration and exploitation in basic BA, the algorithm sometimes fails to find the global optimum and is easily trapped in local optima. Much effort has been made to improve the performance of BA. These improvements or modifications can be divided into two categories.

Introduction of New Search Mechanisms for Generating Candidate Solutions. Inspired by DE, Xie et al. [18] presented a modified approach, DLBA, with a new solution update mechanism combining the concept of a differential operator and a Lévy Flight trajectory. Li and Zhou [19] embedded a complex-valued encoding mechanism into the Bat Algorithm to improve its diversity and convergence performance. Gandomi and Yang [20] introduced chaotic maps into the population initialization and the position update equation and proposed a modified algorithm. Bahmani-Firouzi and Azizipanah-Abarghooee [21] proposed four improved velocity update equations in order to achieve a better balance between exploration and exploitation. Jaddi et al. [22] divided the swarm population into two subpopulation groups, with each group generating solutions according to a different equation. Yilmaz and Küçüksille [23] defined two modified structures of the velocity equation inspired by PSO and then hybridized the modified BA with the Invasive Weed Optimization algorithm. Their velocity equations used information from the best-so-far solution and the neighborhood solutions, which can effectively enhance the search ability of BA.

Hybridization of BA with Other Operators. For example, Khan and Sahai [24] proposed a hybrid approach involving PSO, Harmony Search (HS), and Simulated Annealing (SA) for generating new solutions. Wang and Guo [25] introduced a mutation operator into BA using SA. He et al. [26] developed a hybrid BA which combines SA and a Gaussian distribution. Sadeghi et al. [27] developed a hybrid BA with local search based on PSO. Lin et al. [28] developed a chaotic BA with Lévy Flights and parameters estimated by chaotic maps. Meng et al. [29] incorporated the bats' habitat selection and their self-adaptive compensation for the Doppler effect in echoes into the basic BA and proposed a new self-adaptive modified algorithm, NBA.

The study is not limited to the above two aspects and more work can be seen in [30–34] and so on.

As science and technology develop rapidly, more and more complicated optimization problems need to be solved by advanced numerical methods. As BA is an important heuristic optimization algorithm, research on how to improve its convergence accuracy and efficiency is of great significance both for advancing heuristic optimization theory and for solving real-world optimization problems.

Exploration and exploitation are two critical characteristics of the updating process of an algorithm. Exploration is the ability of the algorithm to search for the global optimum in different areas of the search space, while exploitation refers to the capability of the algorithm to find better solutions using previous experience. Research in the literature shows that, for a swarm intelligence algorithm, the exploration capability should be employed first so as to search the whole space, while the exploitation capability should be emphasized later to improve the quality of the solutions in the local search process [23].

To achieve a better balance between the exploration and exploitation behaviors of BA, it is necessary to modify the global search approach to explore the search region and to develop a local search method to exploit unvisited regions. Accordingly, modifications of the search equation and local search strategies for BA are studied in this research, and a modified algorithm called OBMLBA is proposed. The main difference between OBMLBA and other BA variants is the interaction behavior between bats. In the proposed approach, a bat explores the search space with the help of the best-so-far solution and the neighborhood solutions by using echolocation, which balances exploration and exploitation well. The new population is generated by using a new modified position update equation with the frequency defined by a sinusoidal function. A Lévy Flight random walk is used to exploit the local search space, whereas opposition based learning is used to introduce an opposite population OP. Individuals of the original population and OP are merged together, and the best individuals are used as the new population for the next generation. The proposed method has been tested on 16 benchmark functions. Comparison with variants of BA and other state-of-the-art algorithms demonstrates the effectiveness and superiority of OBMLBA in solving optimization problems.

The paper is organized as follows. After the introduction, the basic concept of BA is introduced in Section 2. The proposed OBMLBA is presented in Section 3. Section 4 evaluates the performance of the proposed algorithm by comparing results with other algorithms. Finally, conclusions and future work are discussed in Section 5.

2. Basic Bat Algorithm

The optimization problems considered in this paper are single-objective, unconstrained continuous problems to be minimized.

The Bat Algorithm is a heuristic algorithm proposed by Yang in 2010. It is based on the echolocation capability of microbats, which guides their foraging behavior. In BA, the position of a bat represents a possible solution of the given optimization problem. The position of the food source found by the $i$th bat can be expressed as $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$. The fitness of $x_i$ corresponds to the quality of the position the bat occupies.

2.1. Initialization

At the beginning, the bats do not know the locations of food sources, so a randomly distributed population of $N$ solutions is generated, where $N$ denotes the number of bats in the population. Each solution can be produced within the search space as follows:

$$x_{ij} = x_j^{\min} + \mathrm{rand}(0, 1)\,\bigl(x_j^{\max} - x_j^{\min}\bigr), \quad (1)$$

where $i = 1, 2, \ldots, N$ and $j = 1, 2, \ldots, D$. $x_j^{\max}$ and $x_j^{\min}$ denote the upper and lower bounds of the solution in dimension $j$, respectively, and $\mathrm{rand}(0, 1)$ is a uniformly distributed value generated in the range $[0, 1]$.
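
To make the initialization concrete, the following is a minimal NumPy sketch of (1); the function and parameter names (N bats, D dimensions, lower/upper bound vectors) are our own illustrative choices rather than the authors' code.

```python
import numpy as np

def initialize_population(N, D, lower, upper, rng=np.random.default_rng()):
    """Randomly place N bats in the D-dimensional box [lower, upper], Eq. (1)."""
    lower = np.asarray(lower, dtype=float)   # shape (D,)
    upper = np.asarray(upper, dtype=float)   # shape (D,)
    # x_ij = x_j^min + rand(0,1) * (x_j^max - x_j^min)
    positions = lower + rng.random((N, D)) * (upper - lower)
    velocities = np.zeros((N, D))            # velocities start at zero
    return positions, velocities
```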

2.2. Generation of New Solutions

Bats modify their solutions based on information about their current location and the best food source location. They navigate by adjusting their flying directions using their own and other swarm members' best experience to find the optimum of the problem.

At this stage, for each food source position $x_i$, a new position $x_i^t$ is generated as follows:

$$f_i = f_{\min} + (f_{\max} - f_{\min})\,\beta, \quad (2)$$

$$v_i^t = v_i^{t-1} + (x_i^{t-1} - x_*)\,f_i, \quad (3)$$

$$x_i^t = x_i^{t-1} + v_i^t, \quad (4)$$

where $i = 1, 2, \ldots, N$ indexes the bats in the population and $t$ denotes the $t$th iteration. $x_i^t$ and $v_i^t$ are the position and velocity of the $i$th bat at the $t$th iteration. $f_i$ denotes the pulse frequency that affects the velocity of the $i$th bat, and $f_{\max}$ and $f_{\min}$ are the maximum and minimum values of $f_i$. $\beta$ is a random number uniformly distributed in $[0, 1]$, and $x_*$ is the best position found so far by the whole population.
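
The sketch below applies (2)–(4) to the whole population at once. It is an illustration only; the default frequency range [0, 2] is an assumption (a common choice in BA implementations), not a value taken from the paper.

```python
import numpy as np

def generate_new_solutions(positions, velocities, x_best,
                           f_min=0.0, f_max=2.0,
                           rng=np.random.default_rng()):
    """One application of Eqs. (2)-(4) of the basic BA to all bats."""
    N, _ = positions.shape
    beta = rng.random((N, 1))                              # Eq. (2): beta in [0, 1]
    freq = f_min + (f_max - f_min) * beta                  # pulse frequency f_i
    velocities = velocities + (positions - x_best) * freq  # Eq. (3)
    new_positions = positions + velocities                 # Eq. (4)
    return new_positions, velocities
```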

2.3. Local Search

Once the new solutions are generated, a local search is invoked through the bats' random walk. If the pulse emission rate $r_i$ of the $i$th bat is smaller than a random number, a solution is selected from the current population and a new position is generated as follows:

$$x_{new} = x_{old} + \varepsilon\, A^t, \quad (5)$$

where $x_{old}$ represents a solution chosen from the current population by some mechanism, $\varepsilon$ is a random vector drawn from a uniform distribution (typically over $[-1, 1]$), and $A^t$ is the average loudness of all bats at time step $t$.
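
A minimal sketch of the random walk (5), assuming the random vector ε is drawn uniformly from [-1, 1]:

```python
import numpy as np

def local_random_walk(x_selected, mean_loudness, rng=np.random.default_rng()):
    """Eq. (5): random walk around a selected solution, scaled by the mean loudness."""
    eps = rng.uniform(-1.0, 1.0, size=x_selected.shape)   # random vector epsilon
    return x_selected + eps * mean_loudness
```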

2.4. Solutions, Loudness, and Pulse Emission Rate Updating

If a random number is greater than the loudness $A_i$ and $f(x_{new}) < f(x_i)$, the new solution $x_{new}$ is accepted. At the same time, the loudness is decreased while the pulse emission rate is increased as follows:

$$A_i^{t+1} = \alpha\, A_i^t, \qquad r_i^{t+1} = r_i^0\,\bigl[1 - \exp(-\gamma t)\bigr], \quad (6)$$

where $\alpha$ and $\gamma$ are constants. The initial loudness $A_i^0$ and the initial pulse emission rate $r_i^0$ are randomly generated numbers within their respective predefined ranges.
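
The acceptance test and the updates in (6) for a single bat could look as follows. The condition mirrors the text above, and the default values of α and γ are illustrative assumptions.

```python
import numpy as np

def accept_and_update(x_old, x_new, f_old, f_new,
                      loudness, pulse_rate, r0, t,
                      alpha=0.9, gamma=0.9,
                      rng=np.random.default_rng()):
    """Acceptance test and Eq. (6) updates for one bat.

    The acceptance condition follows the text of Section 2.4 (random number
    greater than the loudness and an improved objective value); alpha, gamma,
    and the initial pulse rate r0 correspond to the constants described there.
    """
    if rng.random() > loudness and f_new < f_old:
        x_old, f_old = x_new, f_new                      # accept the new solution
        loudness = alpha * loudness                      # A_i^{t+1} = alpha * A_i^t
        pulse_rate = r0 * (1.0 - np.exp(-gamma * t))     # r_i^{t+1} = r_i^0 [1 - exp(-gamma t)]
    return x_old, f_old, loudness, pulse_rate
```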

The pseudocode of Bat Algorithm is presented in Algorithm 1.

(1) Input the parameters.
(2) Set $t = 1$. Initialize the position and velocity of each individual in the population.
(3) Evaluate the objective function and related parameter values of each individual.
(4) While $t \le t_{\max}$ do
(5)   Generate new solutions using Eq. (2)–Eq. (4).
(6)   if rand > $r_i$
(7)     Generate a new local solution around the selected solution using Eq. (5).
(8)   end if
(9)   if rand > $A_i$ and $f(x_{new}) < f(x_i)$
(10)    Update the solution, loudness, and pulse emission rate by Eq. (6).
(11)  end if
(12) end while
(13) Output the individual with the smallest objective function value in the population.

3. Enhanced Bat Algorithm

3.1. Location Update Equation Based on DE

Yilmaz and Küçüksille [23] pointed out that the velocity update equation in basic BA provides local search only under the guidance of the best-so-far solution, which makes BA easily trapped in a local minimum. In order to improve the search ability, they proposed a modified velocity update equation in which, besides the best-so-far solution, a solution randomly chosen from the population also guides the search. The two guidance terms are weighted by learning factors ranging from 0 to 1, and an additional weight changes nonlinearly from its initial value over the iterations, controlled by the maximal number of iterations, the current iteration number, and a nonlinear index.

The second term in the modified search equation affects the velocity of the solution under the guidance of the best solution, which emphasizes exploitation. The third term affects the velocity with the help of the randomly selected solution, which emphasizes exploration. The effects of the best solution and the $k$th (randomly selected) solution are adjusted by changing the values of the learning factors. Experimental results indicate that the modified algorithm can outperform BA on most test functions.

Xie et al. [18] proposed an improved BA based on the Differential Evolution operator, in which the position update equations combine the global best location in the current population with randomly chosen solutions from the population, scaled by two frequencies $f_1$ and $f_2$. Each frequency varies between its own minimum and maximum values according to a formula involving a fixed parameter, the iteration number, and a random vector drawn from a uniform distribution. The modified algorithm has shown better performance than the classical BA.

Inspired by the methods of Yilmaz and Küçüksille [23] and Xie et al. [18], we propose a modified location update equation in which the candidate solution is guided by the best-so-far solution together with individuals randomly selected from the population, weighted by two frequencies $f_1$ and $f_2$.

As with the scaling factor in the mutation operation of DE, the frequency is an important parameter that affects the convergence performance of the algorithm. Its value changes according to a predefined formula: in basic BA it is defined by (2), and in DLBA by (11) and (12). Here we use a sinusoidal formulation [35] that permits certain flexibility in changing the direction of the frequency. The frequencies $f_1$ and $f_2$ are defined by sinusoidal schedules whose argument involves the iteration number and a parameter representing the frequency of the sinusoidal function.
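
Because the exact sinusoidal expressions (Eqs. (14) and (15)) are not reproduced above, the sketch below shows one plausible pair of sinusoidal schedules in the spirit of the sinusoidal parameter adaptation of [35]. The specific formulas and the default value of the sinusoidal frequency are assumptions for illustration and may differ from the authors' definitions.

```python
import numpy as np

def sinusoidal_frequencies(t, t_max, omega=0.25):
    """Assumed sinusoidal schedules for f1 and f2 (illustrative only).

    omega is the frequency of the sinusoidal function; the exact forms of
    Eqs. (14)-(15) in the paper may differ from the ones sketched here.
    """
    phase = np.sin(2.0 * np.pi * omega * t)
    f1 = 0.5 * (phase * (t / t_max) + 1.0)              # amplitude grows over time
    f2 = 0.5 * (phase * ((t_max - t) / t_max) + 1.0)    # amplitude shrinks over time
    return f1, f2
```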

3.2. Local Search Based on Lévy Flight

Lévy Flight, which is based on the flight behavior of animals and insects, has been widely studied in the literature [36]. It is defined as a non-Gaussian stochastic process, a random walk whose step sizes are drawn from a Lévy stable distribution.

There are many versions of the Lévy distribution. Mantegna's algorithm [37] has proven to be an efficient method for generating Lévy-distributed values, and it has been adopted in many evolutionary algorithms in order to escape from local minima.

When a new solution $x_{new}$ is generated for the $i$th solution $x_i$ by performing a Lévy Flight, the new candidate is defined as follows:

$$x_{new} = x_i + \alpha \oplus \mathrm{Levy}(\beta), \quad (16)$$

where $\alpha$ is a random step size parameter, $\beta$ is the Lévy Flight distribution parameter, and $\oplus$ denotes entry-wise multiplication.

In Mantegna's algorithm, the step size $s$ can be defined as follows:

$$s = \frac{u}{|v|^{1/\beta}}, \quad (17)$$

where $u$ and $v$ are obtained from normal distributions; that is,

$$u \sim N(0, \sigma_u^2), \qquad v \sim N(0, \sigma_v^2), \quad (18)$$

with

$$\sigma_u = \left\{\frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma[(1+\beta)/2]\,\beta\,2^{(\beta-1)/2}}\right\}^{1/\beta}, \qquad \sigma_v = 1. \quad (19)$$
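
A compact NumPy sketch of the Mantegna step generation (17)–(19) together with the Lévy move (16); the default values of α and β are illustrative assumptions.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(D, beta=1.5, rng=np.random.default_rng()):
    """Mantegna's algorithm, Eqs. (17)-(19): draw a D-dimensional Levy step."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=D)     # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, size=D)         # v ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)       # s = u / |v|^(1/beta)

def levy_flight_candidate(x_i, alpha=0.01, beta=1.5, rng=np.random.default_rng()):
    """Eq. (16): perturb x_i with a scaled Levy step (alpha assumed small)."""
    return x_i + alpha * levy_step(x_i.size, beta, rng)
```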

The pseudocode of Lévy Flight Search is shown in Algorithm 2.

(1) Input the optimization function $f$ and the distribution parameter $\beta$.
(2) Select the individual $x_i$ to be modified by Lévy Flight.
(3) Compute $\sigma_u$ using Eq. (19).
(4) Compute the step size $s$ using Eq. (17) and Eq. (18).
(5) Generate a new candidate solution $x_{new}$ using Eq. (16).
(6) Calculate $f(x_{new})$.
(7) if $f(x_{new}) < f(x_i)$
(8)   $x_i = x_{new}$.
(9) end if

3.3. Opposition Based Learning

Opposition based learning was introduced in 2005 [38]. It has been successfully used in machine learning and evolutionary computing [39]. For an optimization problem, if the candidate solutions are near the global optimum, it is easy to find the ideal solutions with a fast convergence speed. But if the candidate solutions are far away from the global optimum, it will take more computational effort to obtain the required solutions. The concept of OBL is to consider the candidate solutions and their opposite counterpart solutions simultaneously in an optimization algorithm. A greedy selection is then applied between them according to their objective function values. With the help of OBL, the probability of finding the global optimum is increased.

In basic OBL, let $x = (x_1, x_2, \ldots, x_D)$ be a $D$-dimensional solution in the search space, with component variables $x_j \in [a_j, b_j]$, $j = 1, 2, \ldots, D$. Then the opposite point $\breve{x} = (\breve{x}_1, \breve{x}_2, \ldots, \breve{x}_D)$ is defined by its coordinates

$$\breve{x}_j = a_j + b_j - x_j, \quad j = 1, 2, \ldots, D.$$

OBL can be used not only for population initialization but also to improve the population diversity during the evolutionary process. Once the opposite population is generated, it is combined with the original population for selection. Individuals whose fitness ranks in the top half of all candidate solutions are selected as the current population. The pseudocode of OBL is given in Algorithm 3.

(1)  Input the original population $P$, population size $N$, the dimension of solution $D$, and the optimization function $f$.
(2) Generate the opposite population OP as follows:
(3) for $i = 1$ to $N$
(4)   for $j = 1$ to $D$
(5)     $OP_{ij} = a_j + b_j - P_{ij}$;
(6)   end for
(7) end for
(8) Merge the original population $P$ and the opposite population OP together.
(9) Calculate the fitness of the merged population $\{P \cup OP\}$.
(10) Choose the top half of $\{P \cup OP\}$ as the current population.
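
A vectorized sketch of Algorithm 3 (our own illustration): `lower` and `upper` play the roles of $a_j$ and $b_j$ in the opposite-point definition, and the objective is assumed to be minimized, as stated in Section 2.

```python
import numpy as np

def opposition_based_selection(population, fitness_fn, lower, upper):
    """Algorithm 3: build the opposite population and keep the best half."""
    # Opposite point of each individual: OP_ij = a_j + b_j - P_ij
    opposite = lower + upper - population
    merged = np.vstack([population, opposite])            # merge P and OP
    fitness = np.apply_along_axis(fitness_fn, 1, merged)  # evaluate all candidates
    best_half = np.argsort(fitness)[: population.shape[0]]
    return merged[best_half], fitness[best_half]          # top half becomes new population
```
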
3.4. Proposed Modified Bat Algorithm

Based on the basic BA and the above considerations, we propose a modified method called OBMLBA. Local search with Lévy Flight improves the exploitation ability, and incorporating the OBL strategy enhances the population diversity. Inspired by DLBA, the new location update equation takes advantage of the information of both the global best solution and randomly chosen solutions in the neighborhood of the candidate solution, so the modified BA achieves a good balance between exploration and exploitation. The main steps of the algorithm are summarized in Algorithm 4.

(1)  Input the parameters.
(2) Set $t = 1$. Initialize the position and velocity of each individual in the population.
(3) Evaluate the objective function and related parameter values of each individual.
(4) While $t \le t_{\max}$ do
(5)   Apply the Lévy Flight local search strategy using Algorithm 2.
(6)   Generate new solutions using Eq. (13).
(7)   if rand > $r_i$
(8)     Apply opposition based learning using Algorithm 3.
(9)     Generate a new local solution around the selected solution using Eq. (5).
(10)  end if
(11)  if rand > $A_i$ and $f(x_{new}) < f(x_i)$
(12)    Update the solution, loudness, and pulse emission rate.
(13)  end if
(14) end while
(15)  Output the individual with the smallest objective function value in the population.
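
To show how the pieces fit together, the skeleton below wires up the helper sketches from the previous subsections in roughly the order of Algorithm 4. It is illustrative only: `update_position_eq13` is a hypothetical stand-in for the modified update equation (13), the OBL trigger is simplified to a population-level test applied once per iteration, and the constants mirror the assumptions made in the earlier sketches.

```python
import numpy as np

def update_position_eq13(positions, best, f1, f2, rng):
    """Hypothetical stand-in for Eq. (13): guidance by the best-so-far solution
    plus a differential term between randomly chosen individuals.
    The exact form used in the paper may differ."""
    N, _ = positions.shape
    r1, r2 = rng.integers(N, size=2)
    return positions + f1 * (best - positions) + f2 * (positions[r1] - positions[r2])

def obmlba_skeleton(fitness_fn, lower, upper, N=30, D=30, t_max=1000,
                    rng=np.random.default_rng()):
    """High-level skeleton of Algorithm 4, reusing the helper sketches above
    (initialize_population, levy_flight_candidate, sinusoidal_frequencies,
    opposition_based_selection). Illustrative only."""
    positions, _ = initialize_population(N, D, lower, upper, rng)
    fitness = np.apply_along_axis(fitness_fn, 1, positions)
    loudness = np.ones(N)                    # initial loudness (assumed)
    r0 = rng.random(N)                       # initial pulse emission rates
    pulse_rate = r0.copy()

    for t in range(1, t_max + 1):
        best = positions[np.argmin(fitness)]

        # Step (5): Levy Flight local search on one selected individual (Algorithm 2)
        i = rng.integers(N)
        cand = levy_flight_candidate(positions[i], rng=rng)
        cand_fit = fitness_fn(cand)
        if cand_fit < fitness[i]:
            positions[i], fitness[i] = cand, cand_fit

        # Step (6): candidate solutions from the modified update (stand-in for Eq. (13))
        f1, f2 = sinusoidal_frequencies(t, t_max)
        trial = np.clip(update_position_eq13(positions, best, f1, f2, rng),
                        lower, upper)

        # Steps (11)-(13): acceptance test, loudness and pulse-rate updates
        trial_fit = np.apply_along_axis(fitness_fn, 1, trial)
        accept = (trial_fit < fitness) & (rng.random(N) > loudness)
        positions[accept], fitness[accept] = trial[accept], trial_fit[accept]
        loudness[accept] *= 0.9                                   # A <- alpha * A
        pulse_rate[accept] = r0[accept] * (1.0 - np.exp(-0.9 * t))

        # Steps (7)-(10): opposition based learning, here triggered once per
        # iteration via a simplified population-level test instead of per bat
        if rng.random() > pulse_rate.mean():
            positions, fitness = opposition_based_selection(
                positions, fitness_fn, lower, upper)

    return positions[np.argmin(fitness)], float(fitness.min())
```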

4. Experimental Results and Discussion

In order to evaluate the performance of the proposed algorithm, 16 standard benchmark functions are selected to investigate its efficiency. These functions are presented in Table 1. The function set comprises unimodal and multimodal functions, which can be used to test the convergence speed and the local premature convergence problem, respectively. The set includes continuous unimodal functions, a discontinuous step function, a noisy quartic function, and high-dimensional multimodal functions. These functions are categorized into low-, middle-, and high-dimension functions, the latter two corresponding to dimensions of 30 and 60, respectively.

Related parameters of the basic BA, DLBA, and OBMLBA are presented in Table 2. The maximum number of iterations is set to 500, 1000, and 2000 for the low-, middle-, and high-dimension cases, respectively.

Experiments have been carried out on the test functions with different dimensions. The computational results are summarized in Tables 3–5 in terms of the mean and standard deviation of the function error values. In addition, some convergence characteristic graphs are shown in Figure 1.

As seen from Tables 3–5, OBMLBA reaches the global optimum on a number of the benchmark functions for all tested dimensions, and on the remaining functions the errors of the solutions it obtains are very small. The results indicate that OBMLBA can obtain solutions extremely close to the global optimal values on most of the functions.

Solutions obtained by the BA variants are compared with the results of MLBA and OBMLBA. As shown in Tables 3–5, OBMLBA performs significantly better than basic BA, LBA, and DLBA on most of the functions. All algorithms except basic BA and LBA perform well on several of the functions, and OBMLBA is comparable to DLBA and MLBA on a few of them.

Compared with MLBA, OBMLBA provides better performance on several functions, while on others the results obtained by the two algorithms are quite close to each other; both MLBA and OBMLBA reach the global optimum on a number of functions.

From the convergence characteristic graphs, it can be clearly observed that OBMLBA is the fastest among all the considered BA variants. MLBA is slower than OBMLBA but faster than DLBA, LBA, and basic BA. OBMLBA exhibits the best accuracy and achieves the fastest convergence speed. These results indicate the advantage of the modified position update equation and the OBL strategy.

On the whole, the algorithms proposed in this study, MLBA and OBMLBA, work better than the other BA variants considered, and OBMLBA is more effective than MLBA.

In Table 6, OBMLBA is compared with some state-of-the-art algorithms, namely, ABC, PSO, OCABC, CLPSO, and OLPSO-G. The results of these algorithms are taken directly from the corresponding literature [40]. The results show that OBMLBA outperforms the other methods on 6 of the 7 functions; it performs worse than ABC and OCABC only on the Rosenbrock function.

According to the analyses above, it can be clearly observed that OBMLBA provides outstanding performance with fast convergence speed and high convergence accuracy on most of the test functions.

5. Conclusions

The Bat Algorithm, a recently proposed metaheuristic algorithm, has been successfully used to solve different types of optimization problems. However, it has some deficiencies in its exploration and exploitation capabilities.

In this study, a new variant of the Bat Algorithm called OBMLBA is presented. The exploration and exploitation abilities of BA are balanced and enhanced in the search process by combining information from the best-so-far solution and the neighborhood solutions in the search equation. At the same time, the proposed equation uses a sinusoidal function to define the bat pulse frequency, allowing flexibility in changing the direction of the frequency. Furthermore, a Lévy Flight random walk is employed to escape from local optima, and the concept of opposition based learning is introduced to improve the diversity and convergence capability. The proposed approach has been evaluated on 16 benchmark functions. Results and comparisons with the basic BA, DLBA, and other state-of-the-art algorithms show that OBMLBA outperforms the considered approaches on most of the functions. However, these experiments involve only small-scale optimization problems; the performance of the proposed algorithm on large-scale and real-world optimization problems should be investigated in depth. Future work may focus on the mathematical foundations and applications of the proposed algorithm.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Nature Science Foundation of China under Grant 61402534, by the Shandong Provincial Natural Science Foundation, China, under Grant ZR2014FQ002, and by the Fundamental Research Funds for the Central Universities under Grant 16CX02010A.