Research Article  Open Access
Huda I. Ahmed, Eman T. Hamed, Hamsa Th. Saeed Chilmeran, "A Modified Bat Algorithm with Conjugate Gradient Method for Global Optimization", International Journal of Mathematics and Mathematical Sciences, vol. 2020, Article ID 4795793, 14 pages, 2020. https://doi.org/10.1155/2020/4795793
A Modified Bat Algorithm with Conjugate Gradient Method for Global Optimization
Abstract
Metaheuristic algorithms are used to solve many optimization problems. The firefly algorithm, particle swarm optimization, harmony search, and the bat algorithm are search algorithms used to find optimal solutions in a problem domain. In this paper, we investigate and analyze a new scaled conjugate gradient algorithm and its implementation, based on the strong Wolfe line search conditions and the Powell restart criterion. The new spectral conjugate gradient algorithm is a modification of the Birgin and Martínez method that overcomes the possible lack of positive definiteness of the matrix defining the search direction. Preliminary computational results on a set of 30 unconstrained optimization test problems show that the new spectral conjugate gradient method outperforms a standard conjugate gradient method in this field. We then embed the proposed spectral conjugate gradient algorithm in the bat algorithm to reach the lowest possible goal of the bat algorithm. The newly proposed approach, the directional bat algorithm (CGBAT), has been tested on several standard and nonstandard benchmarks from the CEC’2005 benchmark suite against five other algorithms and evaluated with nonparametric statistical tests; the statistical results show the superiority of the directional bat algorithm. We have also adopted the performance profiles of Dolan and Moré, which likewise show the superiority of the new algorithm (CGBAT).
1. Introduction
In 2010, Yang proposed a new optimization algorithm, the bat algorithm (BA), based on swarm intelligence and inspired by observing bats. Although the original BA presents better experimental results than PSO, the performance and accuracy of the original BA still leave room for improvement. The algorithm exploits the so-called echolocation of bats.
Bats use sonar echoes to detect and avoid obstacles. It is generally known that bats emit sound pulses at a given frequency, which reflect from obstacles. Bats can measure the time delay from emission to reflection and use it for navigation. They typically emit short, loud sound impulses, at a pulse rate of usually 10 to 20 times per second. After the pulse hits an object and reflects, the bats transform the echo into useful information to gauge how far away the prey is. Bats use wavelengths that vary in the range 0.7 to 17 mm, corresponding to frequencies of 20–500 kHz. To implement the algorithm, the pulse frequency and rate have to be defined. The pulse rate can simply be defined in the range from 0 to 1, where 0 means no emission and 1 means maximum emission [1].
The bat-inspired algorithm is a recent swarm-based intelligent system which mimics the echolocation system of microbats. In the bat-inspired algorithm, the bats randomly fly around the best bat locations found during the search so as to improve their hunting of prey. In practice, one bat location from a set of best bat locations is selected. Thereafter, that best bat location is used by a local search with a random walk strategy to inform other bats about the prey location. This selection mechanism can be improved using other natural selection mechanisms adopted from other advanced algorithms such as the genetic algorithm. Accordingly, six selection mechanisms have been studied for choosing the best bat location: global-best, tournament, proportional, linear rank, exponential rank, and random. Consequently, six versions of the bat-inspired algorithm have been proposed and studied: the global-best bat-inspired algorithm (GBA), tournament bat-inspired algorithm (TBA), proportional bat-inspired algorithm (PBA), linear rank bat-inspired algorithm (LBA), exponential rank bat-inspired algorithm (EBA), and random bat-inspired algorithm (RBA). Using two sets of global optimization functions, the bat-inspired versions are evaluated and the sensitivity of each version to its parameters is studied [2]. The success of an algorithm always depends on a good balance of these components. The aim of this study is to improve the performance of the standard bat algorithm by increasing its exploration and exploitation abilities along the main line of the BA. In this paper, two improvement strategies are presented. The first is the development of a spectral conjugate gradient technique, which can be used to guide the search process; the second is to improve the bat algorithm using the conjugate gradient method to arrive at the best solution for the current iteration, which can be used to enhance the local search ability.
The newly proposed optimally directional bat algorithm (CGBAT) will be tested on several benchmark problems chosen from the well-known CEC’2005 benchmark set and compared with several other swarm and evolutionary algorithms. This study is organized as follows. A new scalar in a spectral conjugate gradient method is described in Section 2. Global convergence is established in Section 3, and the standard bat algorithm is presented in Section 4. The enhanced bat algorithm is presented in Section 5. Finally, the results of the numerical experiments are presented in Section 6, followed by the conclusions in Section 7. Too much exploration with too little exploitation may prevent an algorithm from converging towards optimal solutions. The conjugate gradient technique can be a helpful procedure to search for the minimum value of a nonlinear function, i.e., to solve

$$\min_{x \in \mathbb{R}^n} f(x), \qquad (1)$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a real-valued function. The iterative formula is given by

$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$

where $\alpha_k$ is a step length to be computed by a line search procedure [3]. The search direction is defined as

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0, \qquad (3)$$

where $g_k$ is the gradient of $f$ at $x_k$ and $\beta_k$ is a parameter enforcing the conjugacy condition. Some famed formulas for this parameter are

$$\beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \quad \beta_k^{PR} = \frac{g_{k+1}^T y_k}{\|g_k\|^2}, \quad \beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \quad \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k}, \qquad (4)$$

where $y_k = g_{k+1} - g_k$. These are referred to as Hestenes–Stiefel (HS) [4], Polak–Ribière (PR) [5], Fletcher–Reeves (FR) [6], and Dai–Yuan (DY) [7], respectively. Many authors have studied the convergence of the above formulas over the years [8–11].
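As an illustration, the conjugate gradient iteration and the classical β formulas described above can be sketched in Python. This is a minimal sketch only: the simple backtracking (Armijo) line search here is a stand-in for the Wolfe line search discussed below, and the function and starting point are illustrative.

```python
import numpy as np

def cg_minimize(f, grad, x0, beta_rule="FR", tol=1e-8, max_iter=200):
    """Basic nonlinear conjugate gradient sketch.

    beta_rule selects one of the classical conjugacy parameters
    (HS, PR, FR, DY); the step length is found here by simple
    backtracking rather than a full Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, c1 = 1.0, 1e-4
        for _ in range(60):                 # capped backtracking (Armijo)
            if f(x + alpha * d) <= f(x) + c1 * alpha * (g @ d):
                break
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                       # y_k = g_{k+1} - g_k
        if beta_rule == "HS":
            beta = (g_new @ y) / (d @ y)
        elif beta_rule == "PR":
            beta = (g_new @ y) / (g @ g)
        elif beta_rule == "FR":
            beta = (g_new @ g_new) / (g @ g)
        else:                               # DY
            beta = (g_new @ g_new) / (d @ y)
        d = -g_new + beta * d               # d_{k+1} = -g_{k+1} + beta_k d_k
        x, g = x_new, g_new
    return x
```

For instance, minimizing the convex quadratic `f(x) = x @ x` from a nonzero starting point drives the iterate toward the origin.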
To prove the convergence of the conjugate gradient technique, the following weak Wolfe conditions are used:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad (5)$$
$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k. \qquad (6)$$

The strong Wolfe conditions consist of (5) and

$$|g(x_k + \alpha_k d_k)^T d_k| \le \sigma |g_k^T d_k|. \qquad (7)$$

The constants satisfy $0 < \delta < \sigma < 1$; additional details can be found in [3]. The sufficient descent property is defined as

$$g_k^T d_k \le -c \|g_k\|^2, \qquad (8)$$

where $\|\cdot\|$ denotes the Euclidean norm, provided that $c$ is any positive constant [7].
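The Wolfe conditions (5)–(7) are easy to verify numerically for a given step. The following sketch checks them for a candidate step length; the constants `delta` and `sigma` are illustrative defaults, not values taken from the paper.

```python
import numpy as np

def wolfe_status(f, grad, x, d, alpha, delta=1e-4, sigma=0.1):
    """Check the weak and strong Wolfe conditions for a step alpha
    along direction d, with 0 < delta < sigma < 1.
    Returns (weak_ok, strong_ok)."""
    g0d = grad(x) @ d
    # condition (5): Armijo sufficient decrease
    armijo = f(x + alpha * d) <= f(x) + delta * alpha * g0d
    gad = grad(x + alpha * d) @ d
    weak_curv = gad >= sigma * g0d            # condition (6)
    strong_curv = abs(gad) <= sigma * abs(g0d)  # condition (7)
    return armijo and weak_curv, armijo and strong_curv
```

On the quadratic `f(x) = x @ x` with the steepest descent direction, the exact minimizing step `alpha = 0.5` satisfies both the weak and strong conditions, while a full step `alpha = 1.0` violates the Armijo condition.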
2. A New Scalar in the Spectral CG Method
Birgin and Martínez (SS) [12] introduced a spectral conjugate gradient technique defined by

$$d_{k+1} = -\theta_k g_{k+1} + \beta_k s_k, \qquad (9)$$

where $s_k = x_{k+1} - x_k$. The parameter $\beta_k$ has the following form:

$$\beta_k = \frac{(\theta_k y_k - s_k)^T g_{k+1}}{s_k^T y_k}, \qquad (10)$$

where the spectral parameter $\theta_k$ in [13] is determined by the two-point step size

$$\theta_k = \frac{s_k^T s_k}{s_k^T y_k}. \qquad (11)$$
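Under the standard form of the Birgin–Martínez (SS) method, with the Barzilai–Borwein spectral step, one direction update can be sketched as follows. This is an assumption-laden sketch: the displayed formulas in the paper were not preserved, so the expressions below follow the usual statement of the SS method in the literature.

```python
import numpy as np

def spectral_cg_direction(g_new, s, y):
    """One spectral CG direction in the Birgin-Martinez (SS) form:
    theta_k from the Barzilai-Borwein two-point step size and
    d_{k+1} = -theta_k * g_{k+1} + beta_k * s_k.
    (Formulas follow the standard SS method; the paper's stripped
    equations may differ in detail.)"""
    theta = (s @ s) / (s @ y)                   # BB spectral step
    beta = ((theta * y - s) @ g_new) / (s @ y)  # SS conjugacy parameter
    return -theta * g_new + beta * s
```

For example, with `g_new = [1, 0]`, `s = [0.5, 0]`, and `y = [1, 0]`, the spectral step is 0.5, the conjugacy parameter vanishes, and the direction reduces to scaled steepest descent.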
In this section, we derive a new spectral CG method as follows:
The matrix is symmetric and positive definite, and the scalar is defined by Al-Bayati and Salah [14].
By equating (9) and (12), we get
Multiplying both sides of (13) by , we get
Since and , we get
Since ,
If we use an exact line search, then the new scalar is equal to one.
The new direction is defined by the following equation:
Theorem 1. Let the line search in (2) satisfy the strong Wolfe conditions; then the new search direction given by (17) is a sufficient descent direction.
Proof. After some algebraic operations, the direction of (17) can be written as in (18). Multiplying both sides of (18) by the gradient $g_{k+1}^T$ and applying the bounds noted above, and letting $c$ denote the resulting positive constant, we obtain the sufficient descent condition (8).
3. Global Convergence
In this section, the following assumption, commonly used in proving the global convergence of conjugate gradient methods, is stated.
Assumption 1 (see [15]). (i) The level set $S = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded; that is, there exists a constant $z > 0$ such that $\|x\| \le z$ for all $x \in S$. (ii) In a neighborhood $N$ of $S$, the function $f$ is continuously differentiable and its gradient is Lipschitz continuous; i.e., there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L \|x - y\|$ for all $x, y \in N$. Under assumptions (i) and (ii) on $f$, we can deduce that there exists $\gamma > 0$ such that $\|g(x)\| \le \gamma$ for all $x \in S$.
Lemma 1 (see [16]). Suppose that Assumption 1 holds and that, for any conjugate gradient method, $d_k$ is a descent direction and the step size $\alpha_k$ satisfies conditions (5) and (7). If

$$\sum_{k \ge 1} \frac{1}{\|d_k\|^2} = \infty,$$

then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$
Theorem 2. Suppose that Assumption 1 holds, the direction $d_{k+1}$ defined by (17) is a descent direction, and $\alpha_k$ is computed using (5) and (7); then $\liminf_{k \to \infty} \|g_k\| = 0$.
Proof. Applying some algebraic operations to (17) and taking absolute values, and using the bounds from Assumption 1, we obtain an upper bound on $\|d_{k+1}\|$. It follows that $\sum_{k \ge 1} 1/\|d_k\|^2 = \infty$, so by Lemma 1, $\liminf_{k \to \infty} \|g_k\| = 0$, and the proof is complete.
4. Standard Bat Algorithm
The bat algorithm proposed by Yang [17] is an intelligent optimization algorithm inspired by the echolocation behavior of bats. When flying and hunting, bats emit short, ultrasonic pulses into the environment and listen to their echoes. Studies show that the information from the echoes enables bats to build a precise image of their surroundings and determine precisely the distance, shape, and location of prey. The echolocation capability of microbats is fascinating, as these bats can find their prey and discriminate between different types of insects even in complete darkness [17]. Earlier studies showed that BA can solve unconstrained optimization problems with much more efficiency and robustness compared to GA and PSO [18, 19].
The idealized rules used in the bat algorithm are as follows:
(a) All bats use echolocation to sense distance, and the location x_{i} of a bat is encoded as a solution to the optimization problem under consideration.
(b) Bats fly randomly with velocity v_{i} at position x_{i} with a varying frequency (from a minimum f_{min} to a maximum f_{max}) or a varying wavelength λ and loudness A to search for prey. They can automatically adjust the wavelengths (or frequencies) of their emitted pulses and the rate of pulse emission r depending on the proximity of the target.
(c) Loudness varies from a large positive value A_{0} to a minimum constant value A_{min} [17].
For each bat (i), its position x_{i} and velocity v_{i} in a d-dimensional search space should be defined; x_{i} and v_{i} are subsequently updated during the iterations. The rules for updating the position and velocity of a virtual bat (i) are given as in [17]:

$$f_i = f_{min} + (f_{max} - f_{min})\,\mathrm{rand},$$
$$v_i^t = v_i^{t-1} + (x_i^{t-1} - x_*) f_i,$$
$$x_i^t = x_i^{t-1} + v_i^t,$$

where rand ∈ [0, 1] is a random number drawn from a uniform distribution. Here, $x_*$ is the current global best location (solution), which is located after comparing all solutions among all n bats. A new solution for each bat is generated locally using a random walk given by

$$x_{new} = x_{old} + \varepsilon A^t,$$

where ε ∈ [−1, 1] is a random number, while $A^t$ is the average loudness of all the bats at this time step.
The loudness $A_i$ and the rate of pulse emission $r_i$ are updated as the iterations proceed: the loudness decreases and the pulse rate increases as the bat gets closer to its prey. The equations for updating the loudness and the pulse rate are given by

$$A_i^{t+1} = \alpha A_i^t, \qquad r_i^{t+1} = r_i^0 \left[1 - \exp(-\gamma t)\right],$$

where $0 < \alpha < 1$ and $\gamma > 0$ are constants. As $t \to \infty$, we have $A_i^t \to 0$ and $r_i^t \to r_i^0$.
The initial loudness A_{0} can typically be A_{0} ∈ [1, 2], while the initial emission rate r^{0} ∈ [0, 1].
The basic steps of the standard bat algorithm are summarized in the pseudocode as shown in Algorithm 1.
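The update rules above can be sketched in a compact Python implementation. This is a minimal sketch of the standard BA on a box-constrained minimization problem; the bounds, population size, and iteration budget are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bat_algorithm(obj, dim, n=20, t_max=200, f_min=0.0, f_max=2.0,
                  A0=0.9, r0=0.1, alpha=0.9, gamma=0.9, lb=-5.0, ub=5.0):
    """Minimal sketch of the standard bat algorithm (minimization)."""
    x = rng.uniform(lb, ub, (n, dim))      # bat positions
    v = np.zeros((n, dim))                 # bat velocities
    A = np.full(n, A0)                     # loudness per bat
    r = np.full(n, r0)                     # pulse emission rate per bat
    fit = np.array([obj(xi) for xi in x])
    best = x[fit.argmin()].copy()
    for t in range(1, t_max + 1):
        for i in range(n):
            f_i = f_min + (f_max - f_min) * rng.random()   # frequency
            v[i] += (x[i] - best) * f_i                    # velocity update
            cand = np.clip(x[i] + v[i], lb, ub)            # position update
            if rng.random() > r[i]:
                # local random walk around the best solution,
                # scaled by the average loudness
                cand = np.clip(best + 0.01 * rng.normal(size=dim) * A.mean(),
                               lb, ub)
            f_cand = obj(cand)
            if f_cand <= fit[i] and rng.random() < A[i]:
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha                              # loudness decreases
                r[i] = r0 * (1 - np.exp(-gamma * t))       # pulse rate increases
        best = x[fit.argmin()].copy()
    return best, float(fit.min())
```

Running the sketch on the 2-D sphere function quickly drives the best fitness down from its random initial value.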

5. Enhanced Bat Algorithm
This paper attempts to improve the bat algorithm from a perspective different from previous improvements, by hybridizing the bat method with numerical optimization techniques: an optimal (cubic) step size for the local move and an optimal spectral conjugate gradient search direction for the echolocation-guided move. First, local movements can be improved by controlling the optimum step sizes; second, the bat movement should be directed by other bats and the best local moves toward the optimal movement. More specifically, two different adjustments are made to improve the efficiency of the bat algorithm.
5.1. The First Modification (Optimal Step Size)
The first modification concerns the local search mechanism: in the standard algorithm, bats are allowed to move from their current locations to new random locations using a local random walk. In the modified algorithm, bats are allowed to move from their current locations to new locations optimally using a local optimal walk: the step is adjusted to an optimal size, where the step length is calculated by performing a line search [1].
5.2. The Second Modification Using New Spectral Conjugate Gradient Method
A bat emits two pulses in two different directions: one toward the bat with the best position (the best solution, in the steepest descent sense) and the other toward the new conjugate gradient bat. From the echoes, the bat can determine whether food exists around these two bats. The best position is determined by the objective fitness, while, around the optimally selected bat, it depends on its fitness value: if it has a better fitness value than the actual bat, the food is considered to exist; otherwise, there is no food source in the neighborhood. If the food is confirmed to exist around the two bats (Choice 1), the current bat moves in a direction within the surrounding neighborhood of the two bats where the food is supposed to be plentiful. If not (Choice 2), it moves toward the best bat.
The mathematical formulas of the bats’ movements are thus given by
The directions of movement generated by equation (17) are directed towards the bat with the best position. This mechanism allows the BA to exploit more around the best position; however, if the best bat is not near the global optimum, there is a risk that the solutions generated by such moves could be trapped in local optima. The newly proposed movement in equation (17) has the ability to diversify the movement directions, which enhances the exploration capability, especially at the initial stages of the iterations, and can thus avoid premature convergence. Furthermore, as the iteration process approaches its end, the bats tend to gather around the best bats with stronger exploitation, which in turn reduces the distances between them, enhances the speed of convergence, and gives stability to the algorithm. The new CGBAT algorithm is illustrated by the following algorithm and flowchart.
5.2.1. CGBAT Algorithm
In this section, we develop the movement of the bat algorithm to reach the goal by using the new direction defined in equation (17):
(1) Objective function f(x)
(2) Initialize the bat population x_k and v_k for k = 1, …, n
(3) Define the pulse frequency f_k at x_k
(4) Initialize the pulse rates r_k and the loudness A_k
(5) While (t ≤ t_max)
(6)   Adjust frequency, equation (1)
(7)   Update locations/solutions, equation (2)
(8)   Update velocities, equation (3)
(9)   if (rand > r_k)
(10)    Generate a new search movement using equation (17)
(11)  end if
(12)  Generate a new solution by flying with the optimal step length using equation (39)
(13)  if (rand < A_k)
(14)    Accept the new solutions
(15)    Increase r_k, equation (37)
(16)    Reduce A_k, equation (38)
(17)  end if
(18)  Rank the bats and find the current best x_*
(19) end while
(20) Output results for postprocessing
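The distinctive step of CGBAT is step (10), where the random local move is replaced by a descent-guided one. Since the paper's equation (17) is not reproduced here, the sketch below uses plain steepest descent on a finite-difference gradient as a hedged stand-in for the spectral CG direction; `num_grad`, `cg_guided_move`, and the step size are illustrative names and values, not the paper's.

```python
import numpy as np

def num_grad(obj, x, h=1e-6):
    """Forward-difference gradient (a stand-in; the paper's method
    uses the gradient inside its spectral CG direction, eq. (17))."""
    g = np.zeros_like(x)
    fx = obj(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        g[j] = (obj(x + e) - fx) / h
    return g

def cg_guided_move(obj, x_best, step=0.1):
    """Hedged sketch of CGBAT's guided local move: instead of a
    random walk, the bat steps from the best position along a
    descent direction (steepest descent here, standing in for the
    paper's spectral CG direction)."""
    d = -num_grad(obj, x_best)   # descent direction
    return x_best + step * d
```

On a smooth objective, the guided move improves the fitness of the best position, which is exactly the effect the modified step (10) aims for.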
5.2.2. CGBAT Flowchart
In this section, we describe the movement of CGBAT algorithm to reach the goal by using the flowchart of new search direction which is defined in equation (17) (Figure 1).
6. Experimental Results and Comparisons
To demonstrate the efficiency of the newly proposed algorithms, two comparison experiments have been conducted. The first is a comparison between the new spectral conjugate gradient method and the standard algorithms in this field, and the second is a comparison between the new bat algorithm (CGBAT) and cuckoo search, the firefly algorithm, and particle swarm optimization.
6.1. Experimental Results and Comparisons in New CG
In this section, we report numerical experiments performed on a set of 30 unconstrained optimization test problems to analyze the efficiency of the new method. Details of these test problems, with their given initial points, can be found in [20]. In our comparisons below, we employ the following algorithms:
(i) SS: the spectral scalar of the Birgin and Martínez algorithm with the Wolfe line search
(ii) HS: the Hestenes–Stiefel algorithm with the Wolfe line search
(iii) New: the new algorithm using equation (17) with the Wolfe line search
Table 1 shows the numerical results of the newly proposed CG algorithm against other well-known CG algorithms. We use the following well-known measures, normally used for this type of comparison of CG algorithms:
NOI = the total number of iterations
NOF = the total number of function evaluations
TIME = the total CPU time required for the processor to execute the CG algorithm and reach the minimum value of the objective function

To evaluate the modified conjugate gradient technique, it is analyzed and tested on a set of numerical test problems (see [20]); to demonstrate the performance of these methods, we applied the performance profiles of Dolan and Moré [21], a tool for analyzing the efficiency of algorithms.
They introduced the notion of a performance profile as a means to evaluate and compare the performance of a set of solvers S on a test set P. Assuming there are $n_s$ solvers and $n_p$ problems, for each problem $p$ and solver $s$ they define $t_{p,s}$ as the computing time (or the number of function evaluations, or another measure) required to solve problem $p$ by solver $s$.
Requiring a baseline for comparisons, they compare the performance of solver $s$ on problem $p$ with the best performance by any solver on this problem, using the performance ratio

$$r_{p,s} = \frac{t_{p,s}}{\min\{t_{p,s} : s \in S\}}.$$

A parameter $r_M \ge r_{p,s}$ for all $p, s$ is chosen, and $r_{p,s} = r_M$ if and only if solver $s$ does not solve problem $p$ (Figure 2).
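The performance ratio just described leads directly to the profile curve: for each solver, the fraction of problems whose ratio falls below a threshold τ. A small sketch, with failures encoded as infinite cost rather than an explicit r_M parameter:

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile sketch.

    T[p, s] = cost (e.g. NOF or CPU time) of solver s on problem p,
    with np.inf marking a failure. Returns rho[s, i] = fraction of
    problems whose performance ratio r_{p,s} is <= taus[i]."""
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)    # best solver on each problem
    ratios = T / best                      # r_{p,s}; inf stays inf
    return np.array([[float(np.mean(ratios[:, s] <= tau)) for tau in taus]
                     for s in range(T.shape[1])])
```

For example, with three problems and two solvers, where solver 0 fails on the third problem, the profile at τ = 1 gives each solver's win fraction and at larger τ approaches each solver's success rate.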
Figure 3 shows the Dolan–Moré performance profile for these methods with respect to the number of function evaluations, compared with the basic methods, and Figure 4 shows the Dolan–Moré performance profile measured by CPU time. From the figures presented, we deduce that the new method is very suitable for solving problems of many dimensions.
6.2. Experimental Results and Comparisons in CGBAT
To validate the performance of the proposed optimally directional bat algorithm, we carried out various numerical experiments on several standard and nonstandard benchmarks from the CEC’2005 benchmark suite, which can be summarized as two comparison experiments. The first is a comparison, on the classical benchmark functions, between the new directional bat algorithm and the standard algorithms, including the bat algorithm, cuckoo search, the firefly algorithm, and particle swarm optimization; in the second, the comparison is evaluated with nonparametric statistical tests and with the performance profiles of Dolan and Moré [21], a tool for analyzing the efficiency of algorithms.
6.2.1. Benchmarking and Parameter Settings
Thirty popular benchmark functions, shown in Tables 2–4, were used to verify the performance of the new bat algorithm (CGBAT), compared with standard BA, FA, CS, and PSO. The descriptions and parameter settings of these algorithms are as follows:
(1) CGBAT: an extensive analysis was performed to tune the parameter settings of BA; for best practice, we recommend the following settings: r_{0} = 0.1, r_{∞} = 0.7, A_{0} = 0.9, A_{∞} = 0.6, f_{min} = 0, and f_{max} = 2.
(2) BA: the standard bat algorithm was implemented as described in [17], with r_{0} = 0.1, A_{0} = 0.9, α = γ = 0.9, f_{min} = 0, and f_{max} = 2.
(3) FA: the firefly algorithm, where A_{0} = 0.9 is the intensity at the source point, as described in [15].
(4) CS: the cuckoo search via Lévy flights described in [22], with the probability of discovery of an alien egg p_a = 0.25.
(5) PSO: a classical particle swarm optimization model [23, 24]. The parameter settings are c_{1} = 1.5 and c_{2} = 1.2, and the inertia coefficient is a monotonically decreasing function from 0.9 to 0.4.



For a fair comparison, the common parameters are kept the same. The population size was set to N = 50, and the number of function evaluations was fixed at 15,000 (not counting the initial evaluations); all algorithms were initialized randomly in a similar manner. Therefore, we set t_{max} = 500, except for CS: because the CS algorithm uses 2N function evaluations at each iteration, we adjusted t_{max} to 250 in that case. The dimensionality of all benchmark functions is D = 30.
6.2.2. The First Experiment
For meaningful statistical analysis, each algorithm was run 51 times using a different initial population at each turn. The global minimum obtained after each trial was recorded for further statistical analysis. Subsequently, the mean value of the global minimum, the standard deviation (SD), the best solution, the median, and the worst solution values were computed and are presented in Tables 2–4. From the results presented in Tables 2–4, the new directional bat algorithm achieved better results for 20 functions (F1, F2, F3, F4, F8, F10, F12, F13, F14, F15, F16, F18, F19, F20, F22, F23, F24, F25, F28, and F29), while the BA obtained better results for 3 functions (F7, F26, and F27). The FA has better scores for 2 functions (F5 and F17). CS obtained the best results for F9, F21, and F30, and PSO for F11. The results also indicate that the monotonically decreasing loudness function is more suitable and gives stability to the algorithm.
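The per-function statistics reported in Tables 2–4 are straightforward to compute from the 51 recorded trial minima; a small helper, with illustrative data rather than the paper's:

```python
import numpy as np

def run_statistics(values):
    """Summary statistics over repeated runs, as reported in
    Tables 2-4: mean, sample SD, best, median, and worst of the
    global minima found across the independent trials."""
    v = np.asarray(values, dtype=float)
    return {"mean": float(v.mean()),
            "sd": float(v.std(ddof=1)),     # sample standard deviation
            "best": float(v.min()),
            "median": float(np.median(v)),
            "worst": float(v.max())}
```

For example, `run_statistics([1, 2, 3, 4, 5])` yields mean 3, best 1, median 3, worst 5, and sample variance 2.5.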
6.2.3. The Second Experiment (Nonparametric Statistical Tests)
In this section, to evaluate CGBAT performance, nonparametric statistical tests were carried out: Friedman's test and pairwise comparisons. Table 5 shows the descriptive statistics for the five algorithms: the number of values studied, the mean, the standard deviation, and the highest and lowest values for each method. Note that CGBAT has the smallest arithmetic mean, 72929.28687, and a standard deviation of 372879.0388, which is lower than those of the rest of the algorithms, together with the lowest minimum and maximum values among the methods.

Table 6 presents the Friedman rank test. In this test, an algorithm is considered better if it has a lower rank. From the results, CGBAT has the lowest rank in both tests, which means that it is the best performing algorithm in the comparison. In addition, the last two rows present the test statistic and the p value. The statistic is distributed according to the chi-square distribution with 4 degrees of freedom. The low p values of the different tests indicate the existence of significant differences among the considered algorithms at the α = 0.01 level of significance.

To highlight the differences between CGBAT and each of the other algorithms, Table 7 presents the pairwise comparison results using the Friedman test, with CGBAT as the control method. The analysis of the Friedman rank test shows significant differences between CGBAT and the four algorithms (BA, FA, CS, and PSO), with p values all below the α = 0.05 level of significance. Since the CGBAT algorithm has the minimum rank in all pairwise comparisons (1.28 with BA, 1.24 with FA, 1.24 with CS, and 1.10 with PSO), the results reveal that the CGBAT algorithm is significantly superior to the BA, FA, CS, and PSO algorithms.
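The Friedman statistic used in Tables 6 and 7 ranks the algorithms within each benchmark function and tests whether the rank sums differ. A self-contained sketch without tie correction, on hypothetical scores (illustrative numbers only, not the paper's data):

```python
import numpy as np

def friedman_statistic(scores):
    """Friedman chi-square statistic (no tie correction).
    scores[p, s] = score of algorithm s on problem p, lower is better;
    the statistic follows a chi-square law with k - 1 degrees of freedom."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    # rank algorithms within each problem: rank 1 = best
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1.0
    R = ranks.sum(axis=0)                  # per-algorithm rank sums
    return 12.0 / (n * k * (k + 1)) * float((R ** 2).sum()) - 3.0 * n * (k + 1)

# Hypothetical mean errors of three algorithms on six functions.
scores = np.array([
    [0.10, 0.40, 0.30],
    [0.20, 0.50, 0.60],
    [0.05, 0.30, 0.20],
    [0.30, 0.60, 0.50],
    [0.12, 0.45, 0.40],
    [0.08, 0.20, 0.25],
])
stat = friedman_statistic(scores)
# Compare stat against the chi-square critical value 5.991
# (df = k - 1 = 2 at alpha = 0.05) to decide significance.
```

Here the statistic exceeds the critical value, so the null hypothesis of equal performance would be rejected, which is the same decision logic applied in Tables 6 and 7.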

6.2.4. The Third Experiment (Convergence Curve Analysis)
The convergence curve is an important indicator of algorithm performance; through it, we can see the convergence speed and the ability of the new algorithm to reach the optimum. To evaluate the modified CGBAT, the technique is analyzed and tested on several numerical tests and, to illustrate the performance of these methods, we applied the performance profiles of Dolan and Moré [21] to analyze the efficiency of the algorithm. Performance profiles based on the mean performance, the standard deviation (SD), and the best solution are shown in Figures 5–7.
7. Conclusions
In this study, we have presented new spectral CG methods. A crucial property of the proposed CG methods is that they secure sufficient descent directions. Under mild conditions, we have demonstrated that the new algorithms are globally convergent for both uniformly convex and general functions using the strong Wolfe line search conditions. The preliminary numerical results show that the new algorithms perform very well. In addition, an improved version of the standard bat algorithm, called the new directional bat algorithm (CGBAT), has been proposed and presented. Two modifications were embedded in the BA to increase its exploitation and exploration capabilities, and they have significantly enhanced the BA performance. Three sets of experiments were carried out to demonstrate the superiority of the proposed CGBAT. The performance is compared on thirty test functions under eight optimization algorithms (SS, HS, New, CGBAT, BA, CS, FA, and PSO). The comparison results show that the enhanced algorithms (New and CGBAT) are better than the original algorithms and have relatively stable performance in both optimization ability and convergence speed.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
The research was supported by College of Computer Sciences and Mathematics, University of Mosul, Republic of Iraq, under Project no. 4795793.
References
[1] I. Fister Jr., D. Fister, and X.-S. Yang, “A hybrid bat algorithm,” Elektrotehniški Vestnik, vol. 80, no. 1-2, pp. 1–7, 2013.
[2] M. A. Al-Betar, M. A. Awadallah, H. Faris, X.-S. Yang, A. Tajudin Khader, and O. A. Alomari, “Bat-inspired algorithms with natural selection mechanisms for global optimization,” Neurocomputing, vol. 273, pp. 448–465, 2018.
[3] P. Wolfe, “Convergence conditions for ascent methods. II: some corrections,” SIAM Review, vol. 13, no. 2, pp. 185–188, 1971.
[4] M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952.
[5] E. Polak and G. Ribière, “Note sur la convergence de méthodes de directions conjuguées,” ESAIM: Mathematical Modelling and Numerical Analysis, vol. 3, no. 16, pp. 35–43, 1969.
[6] R. Fletcher and C. M. Reeves, “Function minimization by conjugate gradients,” The Computer Journal, vol. 7, no. 2, pp. 149–154, 1964.
[7] Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
[8] L. Guanghui, H. Jiye, and Y. Hongxia, “Global convergence of the Fletcher-Reeves algorithm with inexact line search,” Applied Mathematics—A Journal of Chinese Universities, vol. 10, no. 1, pp. 75–82, 1995.
[9] H. Liu, “A new conjugate gradient method for unconstrained optimization,” Far East Journal of Mathematical Sciences (FJMS), vol. 40, pp. 145–152, 2010.
[10] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, Berlin, Germany, 1999.
[11] L. Zhang, W. Zhou, and D. Li, “Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search,” Numerische Mathematik, vol. 104, no. 4, pp. 561–572, 2006.
[12] E. G. Birgin and J. M. Martínez, “A spectral conjugate gradient method for unconstrained optimization,” Applied Mathematics and Optimization, vol. 43, no. 2, pp. 117–128, 2001.
[13] J. Barzilai and J. M. Borwein, “Two-point step size gradient methods,” IMA Journal of Numerical Analysis, vol. 8, no. 1, pp. 141–148, 1988.
[14] A. Y. Al-Bayati and M. Salah, New Variable Metric Method for Unconstrained Nonlinear Optimization, Academic Press, London, UK, 1994.
[15] H. Yabe and M. Takano, “Global convergence properties of nonlinear conjugate gradient methods with modified secant condition,” Computational Optimization and Applications, vol. 28, no. 2, pp. 203–225, 2004.
[16] E. T. Hamed, H. I. Ahmed, and A. Y. Al-Bayati, “A new hybrid algorithm for convex nonlinear unconstrained optimization,” Journal of Applied Mathematics, vol. 2019, Article ID 8728196, 6 pages, 2019.
[17] X.-S. Yang, “A new metaheuristic bat-inspired algorithm,” in Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), J. González, D. Pelta, C. Cruz et al., Eds., vol. 284, pp. 65–74, Springer, Berlin-Heidelberg, Germany, 2010.
[18] A. H. Gandomi, X.-S. Yang, A. H. Alavi, and S. Talatahari, “Bat algorithm for constrained optimization tasks,” Neural Computing and Applications, vol. 22, no. 6, pp. 1239–1255, 2013.
[19] X.-S. Yang and A. Hossein Gandomi, “Bat algorithm: a novel approach for global engineering optimization,” Engineering Computations, vol. 29, no. 5, pp. 464–483, 2012.
[20] N. Andrei, “An unconstrained optimization test functions collection,” Advanced Modeling and Optimization, vol. 10, pp. 147–161, 2008.
[21] E. D. Dolan and J. J. Moré, “Benchmarking optimization software with performance profiles,” Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.
[22] X.-S. Yang and S. Deb, “Cuckoo search via Lévy flights,” in Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Salamanca, Spain, October 2009.
[23] R. C. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, October 1995.
[24] R. C. Eberhart and S. Yuhui, “Particle swarm optimization: developments, applications and resources,” in Proceedings of the 2001 Congress on Evolutionary Computation, Seoul, Korea, May 2001.
Copyright
Copyright © 2020 Huda I. Ahmed et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.