Abstract

Metaheuristic algorithms are used to solve many optimization problems. The firefly algorithm, particle swarm optimization, harmony search, and the bat algorithm are used as search algorithms to find the optimal solution in a given problem field. In this paper, we investigate and analyze a new spectral conjugate gradient algorithm and its implementation, based on the strong Wolfe line search conditions and the Powell restart criterion. The new spectral conjugate gradient algorithm is a modification of the Birgin and Martínez method, designed to overcome the lack of positive definiteness of the matrix defining the search direction. Preliminary computational results on a set of 30 unconstrained optimization test problems show that this new spectral conjugate gradient method outperforms a standard conjugate gradient method in this field. We have then applied the newly proposed spectral conjugate gradient algorithm within the bat algorithm so as to reach the lowest possible objective value. The newly proposed approach, namely, the directional bat algorithm (CG-BAT), has been tested on several standard and nonstandard benchmarks from the CEC’2005 benchmark suite against five other algorithms and evaluated with nonparametric statistical tests; the statistical results show the superiority of the directional bat algorithm. We have also adopted the performance profiles of Dolan and Moré, which confirm the superiority of the new algorithm (CG-BAT).

1. Introduction

In 2010, Yang proposed a new optimization algorithm, namely, the bat algorithm (BA), based on swarm intelligence and inspired by observing the behavior of bats. Although the original BA produces better results in experiments than PSO, we note that the performance and accuracy of the original BA still leave room for improvement. The algorithm exploits the so-called echolocation of bats.

Bats use sonar echoes to detect and avoid obstacles. It is generally known that sound pulses are emitted at a certain frequency and reflected by obstacles. Bats can measure the time delay from emission to reflection and use it for navigation. They typically emit short, loud sound impulses, at a pulse rate of usually 10 to 20 pulses per second. After the pulses hit an object and are reflected, the bats convert the echoes into useful information to gauge how far away the prey is. Bats use wavelengths in the range of 0.7 to 17 mm, corresponding to frequencies of 20–500 kHz. To implement the algorithm, the pulse frequency and pulse rate have to be defined. The pulse rate can simply be defined in the range from 0 to 1, where 0 means that there is no emission and 1 means that the emission rate is maximal [1].

The bat-inspired algorithm is a recent swarm-based intelligent system which mimics the echolocation system of microbats. In the bat-inspired algorithm, the bats randomly fly around the best bat locations found during the search so as to improve their hunting of prey. In practice, one bat location from a set of best bat locations is selected. Thereafter, that best bat location is used by a local search with a random walk strategy to inform other bats about the prey location. This selection mechanism can be improved using other natural selection mechanisms adopted from other advanced algorithms such as the genetic algorithm. Therefore, six selection mechanisms have been studied for choosing the best bat location: global-best, tournament, proportional, linear rank, exponential rank, and random. Consequently, six versions of the bat-inspired algorithm have been proposed and studied: the global-best bat-inspired algorithm (GBA), tournament bat-inspired algorithm (TBA), proportional bat-inspired algorithm (PBA), linear rank bat-inspired algorithm (LBA), exponential rank bat-inspired algorithm (EBA), and random bat-inspired algorithm (RBA). Using two sets of global optimization functions, the bat-inspired versions were evaluated and the sensitivity of each version to its parameters was studied [2].

The success of an algorithm always depends on a good balance between exploration and exploitation: too much exploration with too little exploitation may prevent the algorithm from converging towards optimal solutions. The aim of this study is to improve the performance of the standard bat algorithm by increasing its exploration and exploitation abilities along the main lines of the BA. In this paper, two improvement strategies are presented. The first is the development of a spectral conjugate gradient technique, which can be used to guide the search process, and the second is to improve the bat algorithm using the conjugate gradient method to arrive at the best solution for the current iteration, which enhances its local search ability. The newly proposed optimally directional bat algorithm (CG-BAT) will be tested on several benchmark problems chosen from the well-known CEC’2005 benchmark set and compared with several other swarm and evolutionary algorithms. This study is organized as follows. A new scalar in a spectral conjugate gradient is described in Section 2. Global convergence is described in Section 3, and the standard bat algorithm is presented in Section 4. Then, the enhanced bat algorithm is presented in Section 5. Finally, the results of the numerical experiments are presented in Section 6, followed by the conclusions in Section 7.

The conjugate gradient technique is a helpful procedure for finding the minimum value of a nonlinear function:

$$\min f(x), \quad x \in \mathbb{R}^n, \tag{1}$$

where $f:\mathbb{R}^n \to \mathbb{R}$ is a real-valued function. The iterative formula is given by

$$x_{k+1} = x_k + \alpha_k d_k, \tag{2}$$

where $\alpha_k$ is a step length to be computed by a line search procedure [3]. The search direction is defined as follows:

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \quad d_0 = -g_0, \tag{3}$$

where $g_k$ is the gradient of $f$ at $x_k$ and $\beta_k$ is a parameter chosen to satisfy a conjugacy condition. Some well-known formulas for this parameter are the following:

$$\beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad
\beta_k^{PR} = \frac{g_{k+1}^T y_k}{g_k^T g_k}, \qquad
\beta_k^{FR} = \frac{g_{k+1}^T g_{k+1}}{g_k^T g_k}, \qquad
\beta_k^{DY} = \frac{g_{k+1}^T g_{k+1}}{d_k^T y_k}, \tag{4}$$

where $y_k = g_{k+1} - g_k$; these are referred to as the Hestenes–Stiefel (HS) [4], Polak–Ribière (PR) [5], Fletcher–Reeves (FR) [6], and Dai–Yuan (DY) [7] formulas, respectively. Many authors have studied the convergence of the above formulas over the years [8–11].
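To make the classical iteration (1)–(4) concrete, the following minimal Python sketch implements a nonlinear conjugate gradient loop with the Fletcher–Reeves coefficient. It is only an illustrative sketch: the objective used in the example, the simple Armijo backtracking rule standing in for the Wolfe line search of [3], and all function names are assumptions, not the implementation used in this paper.

import numpy as np

def fletcher_reeves_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Nonlinear CG sketch with the Fletcher-Reeves coefficient; a simple
    Armijo backtracking rule stands in for the Wolfe line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:          # gradient-norm stopping rule
            break
        alpha, fx, gd = 1.0, f(x), g @ d
        while alpha > 1e-12 and f(x + alpha * d) > fx + 1e-4 * alpha * gd:
            alpha *= 0.5                      # backtrack until the sufficient decrease condition (5) holds
        x = x + alpha * d                     # x_{k+1} = x_k + alpha_k d_k, eq. (2)
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves beta, eq. (4)
        d = -g_new + beta * d                 # d_{k+1} = -g_{k+1} + beta_k d_k, eq. (3)
        g = g_new
    return x

# Example: minimize the convex quadratic f(x) = x^T x
x_min = fletcher_reeves_cg(lambda z: float(z @ z), lambda z: 2 * z, [3.0, -2.0])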

To prove the convergence analysis of the conjugate gradient technique, the following weak Wolfe conditions are used:

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \tag{5}$$

$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k. \tag{6}$$

The strong Wolfe conditions consist of (5) and

$$\left| g(x_k + \alpha_k d_k)^T d_k \right| \le -\sigma g_k^T d_k. \tag{7}$$

The constants satisfy $0 < \delta < \sigma < 1$, and additional details can be found in [3]. The sufficient descent property is defined as follows:

$$g_k^T d_k \le -c \|g_k\|^2, \tag{8}$$

where $\|\cdot\|$ denotes the Euclidean norm and c is any positive constant [7].
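As a small illustration of conditions (5)–(7), the following Python helper checks whether a given trial step length satisfies the weak or strong Wolfe conditions; the helper name and the quadratic example are assumptions made only for illustration.

import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, delta=1e-4, sigma=0.9, strong=True):
    """Check the Wolfe conditions for a trial step length alpha:
    sufficient decrease (5), weak curvature (6), strong curvature (7),
    with constants 0 < delta < sigma < 1."""
    gxd = grad(x) @ d                    # g_k^T d_k
    x_new = x + alpha * d
    decrease = f(x_new) <= f(x) + delta * alpha * gxd        # condition (5)
    gnd = grad(x_new) @ d                # g(x_k + alpha_k d_k)^T d_k
    if strong:
        curvature = abs(gnd) <= -sigma * gxd                 # condition (7)
    else:
        curvature = gnd >= sigma * gxd                       # condition (6)
    return decrease and curvature

# Example: a steepest-descent step on f(x) = x^T x
x = np.array([1.0, 1.0])
d = -2 * x                                                   # -grad f(x)
print(satisfies_wolfe(lambda z: float(z @ z), lambda z: 2 * z, x, d, alpha=0.25))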

2. A New Scalar in the Spectral CG Method

Birgin and Martínez (SS) [12] suggested a spectral conjugate gradient technique defined by

$$d_{k+1} = -\theta_{k+1} g_{k+1} + \beta_k s_k, \tag{9}$$

where $s_k = x_{k+1} - x_k$.

The parameter $\beta_k$ has the following form:

$$\beta_k^{SS} = \frac{(\theta_{k+1} y_k - s_k)^T g_{k+1}}{s_k^T y_k}, \tag{10}$$

where the spectral parameter $\theta_{k+1}$ in [13] is determined by using the following equation:

$$\theta_{k+1} = \frac{s_k^T s_k}{s_k^T y_k}. \tag{11}$$
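A minimal numerical sketch of the direction (9) with the parameters (10) and (11) is given below, assuming the commonly cited form of the Birgin–Martínez spectral method; the helper name is illustrative, and the formulas should be checked against [12, 13].

import numpy as np

def birgin_martinez_direction(g_new, s, y):
    """Spectral CG direction in the form commonly attributed to Birgin and
    Martinez: theta = s^T s / s^T y, beta = (theta*y - s)^T g_new / s^T y,
    d = -theta * g_new + beta * s."""
    sy = s @ y
    theta = (s @ s) / sy                       # spectral parameter, eq. (11)
    beta = ((theta * y - s) @ g_new) / sy      # conjugacy parameter, eq. (10)
    return -theta * g_new + beta * s           # search direction, eq. (9)

# Example with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k from two iterates
s = np.array([0.5, -0.25])
y = np.array([1.0, -0.5])
g_new = np.array([2.0, -1.0])
print(birgin_martinez_direction(g_new, s, y))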

In this section, we will derive a new spectral CG method as follows:

The matrix is a symmetric and positive definite matrix, and the scalar is defined as in Al-Bayati and Salah [14].

By equating (9) and (12), we get

Multiplying both sides of (13) by , we get

Since and , we get

Since ,

If we use exact line search, then the new scalar is equal to one.

The new direction is defined by the following equation:

Theorem 1. Let the line search in (2) satisfy the strong Wolfe conditions; then the new search direction given by (17) is a sufficient descent direction.

Proof. After some algebraic manipulation, the direction in (17) can be written as follows:Now, multiplying both sides of (18) by , we getSince and , we get Since ,Let (where is a positive constant); then

3. Global Convergence

In this section, the following assumption, which is commonly used in proving the global convergence of conjugate gradient methods, is stated.

Assumption 1. (see [15]).

(i) The level set $S = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded; that is, there exists a constant $z > 0$ such that $\|x\| \le z$ for all $x \in S$.

(ii) In a neighborhood $N$ of $S$, the function $f$ is continuously differentiable and its gradient is Lipschitz continuous; i.e., there exists a constant $L > 0$ such that

$$\|g(x) - g(y)\| \le L \|x - y\| \quad \text{for all } x, y \in N.$$

Under assumptions (i) and (ii) on $f$, we can deduce that there exists a constant $\Gamma > 0$ such that

$$\|g(x)\| \le \Gamma \quad \text{for all } x \in S.$$

Lemma 1. (see [16]). Assume that Assumption 1 holds and suppose that, for any conjugate gradient method, $d_k$ is a descent direction and the step size $\alpha_k$ satisfies conditions (5) and (7). If

$$\sum_{k \ge 1} \frac{1}{\|d_k\|^2} = \infty,$$

then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Theorem 2. Suppose that Assumption 1 holds, the direction $d_k$ defined by (17) is a descent direction, and $\alpha_k$ is computed using (5) and (7); then

$$\liminf_{k \to \infty} \|g_k\| = 0.$$

Proof. Applying some algebraic operations to (17) and taking the absolute value, we getSince and , we getSince ,Since , we getthat is, , and the proof is complete.

4. Standard Bat Algorithm

The bat algorithm proposed by Yang [17] is an intelligent optimization algorithm inspired by the echolocation behavior of bats. When flying and hunting, bats emit short ultrasonic pulses to the environment and listen to their echoes. Studies show that the information carried by the echoes enables bats to build a precise image of their surroundings and determine precisely the distance, shape, and location of their prey. The echolocation capability of microbats is fascinating, as these bats can find their prey and discriminate between different types of insects even in complete darkness [17]. Earlier studies showed that BA can solve unconstrained optimization problems with much more efficiency and robustness than GA and PSO [18, 19].

The idealized rules used in the bat algorithm are as follows:

(a) All bats use echolocation to sense distance, and the location of a bat xi is encoded as a solution to the optimization problem under consideration.

(b) Bats fly randomly with velocity vi at position xi with a varying frequency (from a minimum fmin to a maximum frequency fmax) or a varying wavelength λ and loudness A to search for prey. They can automatically adjust the wavelengths (or frequencies) of their emitted pulses and the rate of pulse emission r depending on the proximity of the target.

(c) Loudness varies from a large positive value A0 to a minimum constant value Amin [17].

For each bat (i), its position (xi) and velocity (vi) in a d-dimensional search space should be defined; xi and vi are subsequently updated during the iterations. The rules for updating the position and velocity of a virtual bat (i) are given as in [17]:

$$f_i = f_{min} + (f_{max} - f_{min})\, \mathrm{rand}, \tag{33}$$

$$v_i^{t} = v_i^{t-1} + (x_i^{t-1} - x_*) f_i, \tag{34}$$

$$x_i^{t} = x_i^{t-1} + v_i^{t}, \tag{35}$$

where rand ∈ [0, 1] is a random vector drawn from a uniform distribution. Here, $x_*$ is the current global best location (solution), which is located after comparing all solutions among all the n bats. A new solution for each bat is generated locally using a random walk given by

$$x_{new} = x_{old} + \varepsilon A^{t}, \tag{36}$$

where ε ∈ [−1, 1] is a random number, while $A^{t}$ is the average loudness of all the bats at this time step.

The loudness and the rate of pulse emission are updated as the iterations proceed. The loudness decreases and the pulse rate increases as the bat gets closer to its prey. The equations for updating the pulse rate and the loudness are given by

$$r_i^{t+1} = r_i^{0}\left[1 - \exp(-\gamma t)\right], \tag{37}$$

$$A_i^{t+1} = \alpha A_i^{t}, \tag{38}$$

where α and γ > 0 are constants. As t ⟶ ∞, we have $A_i^{t} \to 0$ and $r_i^{t} \to r_i^{0}$.

The initial loudness A0 can typically be A0 ∈ [1, 2], while the initial emission rate r0 ∈ [0, 1].

The basic steps of the standard bat algorithm are summarized in the pseudocode as shown in Algorithm 1.

(1) Define the objective function F(x), x = (x1, ..., xd)
(2) Initialize the bat population xk and vk for k = 1, ..., n
(3) Define pulse frequency fi at xi
(4) Initialize pulse rates ri and the loudness Ai
(5) While (t ≤ tmax)
(6) Adjust frequency, equation (33)
(7) Update velocities, equation (34)
(8) Update locations/solutions, equation (35)
(9) if (rand > ri)
(10) Select a solution among the best solutions
(11) Generate a local solution around the selected best solution, equation (36)
(12) end if
(13) Generate a new solution by flying randomly
(14) if (rand < Ai & F(xi) < F(x*))
(15) Accept the new solutions
(16) Increase ri, equation (37)
(17) Reduce Ai, equation (38)
(18) end if
(19) Rank the bats and find the current best x*
(20) end while
(21) Output results for post-processing
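To make Algorithm 1 concrete, the following compact Python sketch implements the standard bat iteration with equations (33)–(38). It is a sketch under stated assumptions: the search bounds, the random seed, the 0.01 scaling of the local random walk, and the sphere test function are illustrative choices, not the settings used in this paper.

import numpy as np

def bat_algorithm(f, dim, n_bats=50, t_max=500, lb=-10.0, ub=10.0,
                  f_min=0.0, f_max=2.0, A0=0.9, r0=0.1, alpha=0.9, gamma=0.9):
    """Compact sketch of the standard bat algorithm (Yang, 2010)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (n_bats, dim))        # bat positions
    v = np.zeros((n_bats, dim))                   # bat velocities
    A = np.full(n_bats, A0)                       # loudness A_i
    r = np.full(n_bats, r0)                       # pulse rate r_i
    fit = np.array([f(xi) for xi in x])
    best = x[fit.argmin()].copy()                 # current global best x_*

    for t in range(1, t_max + 1):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()            # eq. (33)
            v[i] = v[i] + (x[i] - best) * freq                       # eq. (34)
            x_new = np.clip(x[i] + v[i], lb, ub)                     # eq. (35)
            if rng.random() > r[i]:                                  # local random walk
                x_new = np.clip(best + 0.01 * rng.standard_normal(dim) * A.mean(),
                                lb, ub)                              # eq. (36)
            f_new = f(x_new)
            if rng.random() < A[i] and f_new < fit[i]:               # accept new solution
                x[i], fit[i] = x_new, f_new
                r[i] = r0 * (1 - np.exp(-gamma * t))                 # increase r_i, eq. (37)
                A[i] = alpha * A[i]                                  # reduce A_i, eq. (38)
            if f_new < f(best):
                best = x_new.copy()                                  # track the current best
    return best, f(best)

# Example: 10-dimensional sphere function
best_x, best_f = bat_algorithm(lambda z: float(np.sum(z ** 2)), dim=10)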

5. Enhanced Bat Algorithm

This paper attempts to improve the bat algorithm from a perspective different from that of previous improvements, by hybridizing the bat method with gradient-based optimization techniques: the optimal step size of a (cubic) line search and the optimal search direction of the conjugate gradient method are used to guide the echolocation movements. First, the local movements are improved by controlling the optimum step sizes; second, the bat movement is directed by the other bats and by the best local moves toward an optimal movement. More specifically, two different adjustments will be made to improve the efficiency of the bat algorithm.

5.1. The First Modification (Optimal Step Size)

The first modification concerns the local search mechanism: in the standard bat algorithm, bats are allowed to move from their current locations to new random locations using a local random walk. In the modified algorithm, bats are allowed to move from their current locations to new locations optimally, using a local optimal walk in which the step is adjusted to the optimal size by an optimization technique, namely, an optimal step length computed by performing a line search [1].

5.2. The Second Modification Using the New Spectral Conjugate Gradient Method

A bat emits two pulses in two different directions, one in the direction of the bat with the best position (the best solution, corresponding to steepest descent) and the other in the direction of the new conjugate-gradient bat. From the echoes, the bat can determine whether food exists around these two bats or not. The existence of food around the best position is determined by the objective fitness, while, around the optimally selected bat, it depends on its fitness value: if it has a better fitness value than the current bat, then food is considered to exist; otherwise, there is no food source in the neighborhood. If the food is confirmed to exist around the two bats (Choice 1), the current bat moves in a direction within the surrounding neighborhood of the two bats where the food is supposed to be plentiful. If not (Choice 2), it moves toward the best bat.

The mathematical formulas of the bats’ movements are thus given by

The directions of movement generated by equation (17) are directed towards the bat with the best position. This mechanism allows the BA to exploit the region around the best position more intensively; however, if the best bat is not near the global optimum, there is a risk that the solutions generated by such moves could be trapped in local optima. The newly proposed movement in equation (17) has the ability to diversify the movement directions, which enhances the exploration capability, especially at the initial stages of the iterations, and can thus avoid premature convergence. Furthermore, as the iteration process approaches its end, the bats tend to gather around the best bats with stronger exploitability, which in turn reduces the distances between them and thus enhances the speed of convergence and gives stability to the algorithm. The new algorithm CG-BAT is illustrated by the algorithm and flowchart presented below.

5.2.1. CG-BAT Algorithm

In this section, we develop the movement of the bat algorithm to reach the goal by using the new direction defined in equation (17); a code sketch of the guided movement is given after this list.

(1) Objective function F(x), x = (x1, ..., xd)
(2) Initialize the bat population xk and vk for k = 1, ..., n
(3) Define pulse frequency fi at xi
(4) Initialize pulse rates ri and the loudness Ai
(5) While (t ≤ tmax)
(6) Adjust frequency, equation (1)
(7) Update locations/solutions, equation (2)
(8) Update velocities, equation (3)
(9) if (rand > ri)
(10) Generate a new search movement using equation (17)
(11) end if
(12) Generate a new solution by flying with the optimal step length using equation (39)
(13) if (rand < Ai & F(xi) < F(x*))
(14) Accept the new solutions
(15) Increase ri, equation (37)
(16) Reduce Ai, equation (38)
(17) end if
(18) Rank the bats and find the current best x*
(19) end while
(20) Output results for postprocessing
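Only the structure of the modified movement can be illustrated here, since the exact new direction is given by equation (17) in Section 2. In the following Python sketch, a generic spectral CG direction is used as a stand-in for equation (17) and a simple Armijo backtracking rule as a stand-in for the optimal step length of equation (39); all names and formulas in this sketch are assumptions, not the authors' exact expressions.

import numpy as np

def backtracking_step(f, x, d, g, delta=1e-4, alpha=1.0, shrink=0.5, max_iter=30):
    """Armijo backtracking rule used here as a stand-in for the optimal
    step length of equation (39)."""
    fx, gd = f(x), g @ d
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + delta * alpha * gd:
            break
        alpha *= shrink
    return alpha

def cg_bat_move(f, grad_f, x_i, x_prev, g_prev):
    """One gradient-guided bat move (assumed structure of CG-BAT)."""
    g = grad_f(x_i)
    s, y = x_i - x_prev, g - g_prev
    sy = s @ y
    if abs(sy) < 1e-12:
        d = -g                              # fall back to steepest descent
    else:
        theta = (s @ s) / sy
        beta = ((theta * y - s) @ g) / sy
        d = -theta * g + beta * s           # spectral CG direction (stand-in for eq. (17))
    if g @ d >= 0:
        d = -g                              # enforce a descent direction
    step = backtracking_step(f, x_i, d, g)  # optimal step length (stand-in for eq. (39))
    return x_i + step * d

# Example: one guided move on the sphere function
f = lambda z: float(np.sum(z ** 2))
grad_f = lambda z: 2 * z
x_prev = np.array([2.5, -1.5])
x_curr = np.array([2.0, -1.0])
print(cg_bat_move(f, grad_f, x_curr, x_prev, grad_f(x_prev)))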

5.2.2. CG-BAT Flowchart

In this section, we describe the movement of the CG-BAT algorithm towards the goal using the flowchart of the new search direction defined in equation (17) (Figure 1).

6. Experimental Results and Comparisons

To demonstrate the efficiency of the newly proposed algorithms, two comparison experiments have been conducted. The first is a comparison between the new spectral conjugate gradient method and the standard algorithms in this field, and the second is a comparison between the new bat algorithm (CG-BAT) and cuckoo search, the firefly algorithm, and particle swarm optimization.

6.1. Experimental Results and Comparisons in New CG

In this section, we report some numerical experiments performed on a set of 30 unconstrained optimization test problems to analyze the efficiency of the new method; the details of these test problems, with their given initial points, are available in the literature. The termination criterion used in our experiments is a fixed tolerance on the gradient norm. In our comparisons below, we employ the following algorithms:

(i) SS: the spectral scalar of the Birgin and Martínez algorithm with the Wolfe line search

(ii) HS: the Hestenes–Stiefel algorithm with the Wolfe line search

(iii) New: the new algorithm using equation (17) with the Wolfe line search

Table 1 shows the numerical results of the newly proposed CG algorithm against other well-known CG algorithms. To check their performance, we have used the following well-known measures, normally employed for this type of comparison of CG algorithms:

NOI = the total number of iterations

NOF = the total number of function evaluations

TIME = the total CPU time required for the processor to execute the CG algorithm and reach the minimum value of the objective function

To evaluate the modified conjugate gradient technique, it has been analyzed and tested on a set of numerical test problems (see [20]), and to demonstrate the performance of these methods, we applied the performance profiles of Dolan and Moré [21], a tool for analyzing the efficiency of algorithms.

They introduced the notion of a performance profile as a means to evaluate and compare the performance of a set of solvers S on a test set P. Assuming that there exist $n_s$ solvers and $n_p$ problems, for each problem p and solver s they defined $t_{p,s}$ as the computing time (the number of function evaluations, or another measure) required to solve problem p by solver s.

Requiring a baseline for comparisons, they compared the performance of solver s on problem p with the best performance by any solver on this problem, based on the performance ratio

$$r_{p,s} = \frac{t_{p,s}}{\min\{t_{p,s} : s \in S\}}.$$

Suppose that a parameter $r_M \ge r_{p,s}$ for all p, s is chosen; then $r_{p,s} = r_M$ if and only if solver s does not solve problem p (Figure 2).
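The following Python sketch computes the performance ratios r_{p,s} and the resulting Dolan–Moré profile values from a cost matrix; the solver costs in the example are hypothetical numbers used only to illustrate the construction.

import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile from a cost matrix T, where T[p, s]
    is the cost (iterations, function evaluations, or CPU time) of solver s
    on problem p and np.inf marks a failure.  Returns rho[s, j], the fraction
    of problems for which r_{p,s} <= taus[j]."""
    best = np.min(T, axis=1, keepdims=True)          # best cost per problem
    ratios = T / best                                # r_{p,s} = t_{p,s} / min_s t_{p,s}
    n_probs, n_solvers = T.shape
    rho = np.empty((n_solvers, len(taus)))
    for j, tau in enumerate(taus):
        rho[:, j] = np.sum(ratios <= tau, axis=0) / n_probs
    return rho

# Hypothetical costs for 3 solvers on 4 problems (np.inf = not solved)
T = np.array([[10.0, 12.0, np.inf],
              [ 5.0,  4.0,  6.0],
              [ 8.0,  8.0, 20.0],
              [ 3.0,  9.0,  3.0]])
print(performance_profile(T, taus=[1.0, 2.0, 4.0]))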

Figure 3 shows the Dolan–Moré performance profile for these methods relative to the baseline methods, while Figure 4 shows the profile measured by CPU time. From the three profiles presented, we deduce that the new method is very suitable for solving problems with many dimensions.

6.2. Experimental Results and Comparisons in CG-BAT

To validate the performance of the proposed optimally directional bat algorithm, we have carried out various numerical experiments using several standard and nonstandard benchmarks from the CEC’2005 benchmark suite, which can be summarized as two comparison experiments. The first is a comparison, on the classical benchmark functions, between the new directional bat algorithm and the standard algorithms, including the bat algorithm, cuckoo search, the firefly algorithm, and particle swarm optimization; the second is an analysis based on nonparametric statistical tests together with the performance profiles of Dolan and Moré [21], a tool for analyzing the efficiency of algorithms.

6.2.1. Benchmarking and Parameter Settings

Thirty popular benchmark functions, shown in Tables 2–4, have been used to verify the performance of the new bat algorithm (CG-BAT), compared with that of the standard BA, FA, CS, and PSO. The descriptions and parameter settings of these algorithms are as follows:

(1) CG-BAT: an extensive analysis was performed to tune the parameter settings of BA; for best practice, we recommend the following settings: r0 = 0.1, r = 0.7, A0 = 0.9, A = 0.6, fmin = 0, and fmax = 2.

(2) BA: the standard bat algorithm was implemented as described in [17] with r0 = 0.1, A0 = 0.9, α = γ = 0.9, fmin = 0, and fmax = 2.

(3) FA: the firefly algorithm, where A0 = 0.9 is the intensity at the source point, as described in [15].

(4) CS: the cuckoo search via Lévy flights described in [22] is considered, with the probability of discovery of an alien egg pa = 0.25.

(5) PSO: a classical particle swarm optimization [23, 24] model has been considered. The parameter settings are c1 = 1.5 and c2 = 1.2, and the inertia coefficient is a monotonically decreasing function from 0.9 to 0.4.

For a fair comparison, the common parameters are set to the same values. The population size was set to N = 50, and the number of function evaluations was fixed at 15000 (without counting the initial evaluations); all algorithms were initialized randomly in a similar manner. Therefore, we set tmax = 500, except for CS: because the CS algorithm uses 2N function evaluations at each iteration, we adjust tmax to 250 in this case. The dimensionality of all benchmark functions is D = 30.

6.2.2. The First Experiment

For a meaningful statistical analysis, each algorithm was run 51 times using a different initial population at each run. The global minimum obtained after each trial was recorded for further statistical analysis. Subsequently, the mean value of the global minimum, the standard deviation (SD), the best solution, the median, and the worst solution were computed and are presented in Tables 2–4. From the results presented in Tables 2–4, the new directional bat algorithm achieved better results for 20 functions (F1, F2, F3, F4, F8, F10, F12, F13, F14, F15, F16, F18, F19, F20, F22, F23, F24, F25, F28, and F29), while the BA obtained better results for 3 functions (F7, F26, and F27). The FA has better scores for 2 functions (F5 and F17). CS obtained the best results for F9, F21, and F30, and PSO for F11. These results also suggest that the monotonically decreasing function used is suitable and gives stability to the algorithm.

6.2.3. The Second Experiment (Nonparametric Statistical Tests)

In this section, to evaluate the performance of CG-BAT, nonparametric statistical tests were carried out. We performed Friedman's test and pairwise comparisons. Table 5 shows the descriptive statistics for the five algorithms: the number of values studied, the mean, the standard deviation, and the highest and lowest values for each method. Note that CG-BAT has the smallest arithmetic mean, 72929.28687, and a standard deviation of 372879.0388, which are lower than those of the rest of the algorithms, along with the lowest minimum and maximum values among the methods.

Table 6 presents the Friedman rank test. For this test, an algorithm is considered better if it has a lower rank. From the results, CG-BAT has the lowest rank for the two tests, which means that it is the best-performing algorithm in the comparison. In addition, the last two rows present the test statistic and the p value. The statistic is distributed according to the chi-square distribution with 4 degrees of freedom. The low p values of the different tests indicate the existence of significant differences among the considered algorithms at the α = 0.01 level of significance.

To highlight the differences between CG-BAT and each of the other algorithms, Table 7 presents the pairwise comparison results using the Friedman test, with CG-BAT as the control method. The analysis of the Friedman rank test shows significant differences between CG-BAT and the four algorithms (BA, FA, CS, and PSO), since the p values of the chi-square statistics are all less than the α = 0.05 level of significance; and since the CG-BAT algorithm has the minimum rank in all pairwise comparisons (1.28 with BA, 1.24 with FA, 1.24 with CS, and 1.10 with PSO), the results reveal that the CG-BAT algorithm is significantly superior to the BA, FA, CS, and PSO algorithms.
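As an illustration of this kind of analysis, the following Python sketch runs a Friedman rank test and pairwise comparisons on a small set of hypothetical results using SciPy; the numbers are invented, and the Wilcoxon signed-rank test is used here only as a stand-in for the pairwise Friedman procedure of Table 7.

import numpy as np
from scipy import stats

# Hypothetical mean errors of five algorithms on five benchmark functions
# (columns: CG-BAT, BA, FA, CS, PSO); the values are made up for illustration.
results = np.array([
    [1.2e-8, 3.4e-5, 2.1e-4, 5.6e-6, 7.8e-3],
    [4.5e-7, 1.1e-3, 9.8e-4, 2.2e-5, 3.3e-2],
    [6.7e-9, 8.9e-6, 4.4e-5, 1.0e-6, 2.5e-4],
    [3.1e-6, 5.5e-4, 7.7e-3, 9.9e-5, 1.4e-1],
    [2.0e-8, 6.0e-5, 3.0e-4, 4.0e-6, 5.0e-3],
])
names = ["CG-BAT", "BA", "FA", "CS", "PSO"]

# Friedman rank test over all algorithms (lower rank = better)
stat, p_value = stats.friedmanchisquare(*results.T)
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.4f}")
print("mean ranks:", dict(zip(names, stats.rankdata(results, axis=1).mean(axis=0))))

# Pairwise comparisons of CG-BAT against each competitor
for j in range(1, results.shape[1]):
    w, p = stats.wilcoxon(results[:, 0], results[:, j])
    print(f"CG-BAT vs {names[j]}: p = {p:.4f}")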

6.2.4. The Third Experiment (Convergence Curve Analysis)

The convergence curve is an important indicator of the performance of an algorithm, through which we can see the convergence speed and the ability of the new algorithm to reach the optimum. In order to evaluate the modified CG-BAT, this technique was analyzed and tested on several numerical test problems, and to illustrate the performance of these methods, we applied the performance profiles of Dolan and Moré [21] to analyze the efficiency of the algorithm. Performance profiles based on the mean performance, standard deviation (SD), and best solution are shown in Figures 5–7.

7. Conclusions

In this study, we have presented new spectral CG methods. A crucial property of the proposed CG methods is that they guarantee sufficient descent directions. Under mild conditions, we have demonstrated that the new algorithms are globally convergent for both uniformly convex and general functions using the strong Wolfe line search conditions. The preliminary numerical results show that the new algorithms perform very well. In addition, an improved version of the standard bat algorithm, called the new directional bat algorithm (CG-BAT), has been proposed and presented. Two modifications have been embedded into the BA to increase its exploitation and exploration capabilities, and they have significantly enhanced the BA performance. Three sets of experiments have been carried out to demonstrate the superiority of the proposed CG-BAT. The performance has been compared on thirty test functions using eight optimization algorithms (SS, HS, New, CG-BAT, BA, CS, FA, and PSO). The comparison results show that the enhanced algorithms (New and CG-BAT) are better than the original algorithms and have relatively stable performance in both optimization ability and convergence speed.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The research was supported by College of Computer Sciences and Mathematics, University of Mosul, Republic of Iraq, under Project no. 4795793.