Scientific Programming, vol. 2021, Article ID 5557259, 14 pages, 2021. https://doi.org/10.1155/2021/5557259

Research Article | Open Access

A New Design of Metaheuristic Search Called Improved Monkey Algorithm Based on Random Perturbation for Optimization Problems

Mustafa Tunay

Academic Editor: Roberto Natella
Received: 22 Feb 2021
Revised: 28 Mar 2021
Accepted: 29 Apr 2021
Published: 07 May 2021

Abstract

The aim of this paper is to present the design of a metaheuristic search called the improved monkey algorithm (MA+), which provides suitable solutions for optimization problems. The proposed algorithm extends the original method with a random perturbation (RP) applied through two control parameters (p1 and p2) so that it can solve a wide variety of optimization problems. A novel RP is defined to improve the control parameters and is built into the proposed algorithm. The main advantage of the control parameters is that they generally prevent the proposed algorithm from getting stuck in local optima. Many optimization problems, even at the maximum allowable number of iterations, can otherwise end in an inferior local optimum. The search strategy of the proposed algorithm, however, has proven to reach the global optimum, to converge reliably, and to perform much better than the original monkey algorithm on many optimization problems within a smaller number of iterations. All details of the improved monkey algorithm are presented in this study. The performance of the proposed algorithm was first evaluated using 12 benchmark functions on different dimensions, classified into three types: low-dimensional (30), medium-dimensional (60), and high-dimensional (90). In addition, the performance of the proposed algorithm was compared with that of several metaheuristic algorithms on these benchmark functions across the different dimensions. Experimental results show that the improved monkey algorithm is clearly superior to the original monkey algorithm, as well as to other well-known metaheuristic algorithms, in terms of obtaining the best optimal value and accelerating convergence.

1. Introduction

In this section, background on optimization in calculus, mathematical optimization, heuristics, and metaheuristic approaches is given. Research on optimization [1–3] seeks solutions iteratively for problems whose analytical solutions are impractical to obtain. Against this background, the design of an improved monkey algorithm for multivariate systems is introduced.

Fermat and Lagrange were the first to propose calculus-based formulas for determining optima, and Newton and Gauss were the first to propose iterative methods for finding the best solution. In calculus, optimization in this sense concerns a point on a function of one variable that gives the best value (the maximum or minimum of the function). Many optimization problems are primarily about finding the best solution within certain boundaries, that is, the best available values of an objective function in applied mathematics. Formally, "linear programming," also called linear optimization (LO), was initiated by Kantorovich in 1939. LO is a technique for finding the best solution in a mathematical model whose requirements are represented by linear relationships; linear optimization is therefore a special case of mathematical optimization. The first well-known approach in mathematical optimization was the simplex method, which Dantzig developed in 1947 for solving linear programming problems. Since then, many optimization methods and techniques have been developed, including the quasi-Newton method [4], the steepest descent method [5], the method of possible directions [6], the Newton method [7], penalty methods [8, 9], and quadratic programming [10]. The Karush–Kuhn–Tucker conditions are first-derivative tests for a solution of a nonlinear programming (nonlinear optimization) problem to be optimal. Kuhn and Tucker studied these first-derivative tests in 1951, and Karush had already stated the necessary conditions for a constrained optimum in his Master's thesis in 1939. The Karush–Kuhn–Tucker conditions of nonlinear programming generalize the method of Lagrange multipliers, which allows only equality constraints. Mathematical programming is, briefly, the selection of a best element from some set of available alternatives. Optimization problems of this sort arise in all quantitative disciplines, from computer science, engineering, and operations research to economics and industry, and solutions have been of interest in mathematics for centuries. Mathematical programming is therefore a rising trend in many fields; its main branches are linear programming [11–13], nonlinear programming [14, 15], objective programming [16, 17], and dynamic programming [18, 19]. For nonlinear optimization, that is, problems with at least one nonlinear objective or constraint function, the known approaches have encountered many difficulties. Unfortunately, almost all tasks in engineering design are nonlinear.

Heuristic methods were first used in philosophy and mathematics for finding solutions to complex problems. Heuristics are problem-dependent methods: they are usually adapted to a specific problem and try to exploit its features fully. However, they are often too greedy, tend to fall into local-optimum traps, and generally cannot guarantee a global optimal solution. Heuristics in human decision-making were studied in the 1970s and 1980s by Tversky and Kahneman. In the 1980s, metaheuristic approaches attracted the attention of engineers, who applied them to all kinds of optimization. Metaheuristics are high-level, problem-independent methods that provide a set of strategies for developing heuristic optimization algorithms. In general, they are not greedy; in fact, they can even accept a temporary deterioration of the solution, which allows them to explore the solution space more deeply and thus obtain a better solution. One of the most well-known approaches is the genetic algorithm (GA); Holland studied the principle of "survival of the fittest" in the 1960s. Subsequently, simulated annealing (SA) was published in 1983 and applied to solving optimization problems.

SA is typically formulated with an objective function of many variables subject to several constraints; in practice, the constraints can be penalized as part of the objective function when searching for the best solution. The history of metaheuristics can be divided into five periods:
(1) Until about 1940: the pretheoretical period
(2) From 1940 to 1980: the early period
(3) From 1980 to 2000: the method-centric period
(4) From 2000 onwards: the framework-centric period
(5) The scientific period (the future)

Nowadays, there are many optimization algorithms designed to find global optimal solutions to optimization problems. Metaheuristic algorithms, in particular, can be efficiently used to escape local minima and determine global solutions of optimization problems. The set of metaheuristic algorithms includes ant colony optimization (ACO) [20, 21], the ant lion optimizer (ALO) [22], the bat algorithm (BAT) [23], cuckoo search (CS) [24], elephant herding optimization (EHO) [25], particle swarm optimization (PSO) [26], krill herd (KH) [27], moth-flame optimization (MFO) [28], monarch butterfly optimization (MBO) [29, 30], mussels wandering optimization (MWO) [31], the moth search algorithm (MSA) [32], and the whale optimization algorithm (WOA) [33], all aimed at finding good solutions to optimization problems. Even today, new methods are being developed as new metaheuristics are invented. Other metaheuristic research has been done on designs based on evolutionary theory, such as biogeography-based optimization (BBO) [34], differential evolution (DE) [35], evolution strategies (ES) [36], the genetic algorithm (GA) [37, 38], harmony search (HS) [39], the gravitational search algorithm (GSA) [40], the sine cosine algorithm (SCA) [41], the dragonfly algorithm (DA), and the hybrid ABC/DA (HAD) [42].

Moreover, the improved monkey algorithm (MA+), which finds the best solution and solves optimization problems, is designed in this study. The proposed algorithm is a new metaheuristic search for the optimization of multivariate systems. The original monkey algorithm has notable shortcomings in how it explores its solution search area, which may bring about premature convergence and low search accuracy when solving complex multivariate optimization problems. In addition, since the monkey algorithm converges very slowly, a random perturbation method can be used to preserve the diversity of the monkey algorithm and guard against premature convergence. The design of a random perturbation applied through two parameters when the search stagnates helps the best monkey position to jump out of possible local optima and further increases the performance of the proposed algorithm (MA+). Thus, the search strategy of the proposed algorithm has proven to reach the global optimum, to converge reliably, and to perform much better on many complex optimization problems within a smaller number of iterations.

This paper is organized as follows: Section 2 describes the proposed algorithm and clearly explains the design of the random perturbation applied through two parameters. Section 3 presents the experimental results and discussion: the twelve benchmark functions are introduced, and the performance of the proposed algorithm is evaluated and compared with many metaheuristic and modified comparative algorithms on functions of different dimensions. Finally, the conclusion is summarized in Section 4.

2. The Improved Monkey Algorithm (MA+)

The aim of this paper is to present the design of a new optimization method called the improved monkey algorithm (MA+) that finds good solutions for the optimization of multivariate systems. The proposed algorithm (MA+) is a new metaheuristic search method for optimization problems inspired by the movement behavior of monkeys. The original monkey algorithm (MA) mainly consists of four processes, namely, the initialization process, the climb process, the watch-jump process, and the somersault process. The monkey algorithm is improved here by adding a random perturbation (RP) process to these four original processes. All processes of the proposed algorithm are outlined in Figure 1.

Step A. Initialization Process
The proposed algorithm begins by randomly generating a position for each monkey. The population size (number of monkeys) is M, and the position of the ith monkey is the vector

xi = (xi1, xi2, …, xin), i = 1, 2, …, M. (1)

Each monkey's position is evaluated with the objective function and must lie in the search area between the lower boundary (lb) and the upper boundary (ub); that is, all solutions (monkey positions) stay within [lb, ub].
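
As a concrete illustration, the following minimal Python sketch shows one way to implement this initialization step. The function name, the NumPy-based population layout, and the uniform sampling are assumptions made for illustration, not the author's code:

```python
import numpy as np

def initialize_population(M, D, lb, ub, rng=None):
    """Generate M monkey positions, each a D-dimensional vector drawn
    uniformly from the search area between lb and ub."""
    rng = np.random.default_rng() if rng is None else rng
    return lb + (ub - lb) * rng.random((M, D))

# Example: 50 monkeys in a 30-dimensional search space bounded by [-100, 100].
X = initialize_population(M=50, D=30, lb=-100.0, ub=100.0)
```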

Step B. Climb Process
The climb process changes the monkeys' positions step by step from their initial positions to new ones that improve the objective function. The step length parameter a governs how the monkeys' positions are updated. With M monkeys in total, the ith position is xi = ((xi1, xi2, …, xin), i = 1, 2, …, M), and a random step vector Δxi = (Δxi1, Δxi2, …, Δxin), whose components equal a or −a with equal probability, is generated. The position update of the climb process is

yj = xij + a · sign(f′ij(xi)), j = 1, 2, …, n, (2)

where a (a positive number) is the step length of the climb process, here a = 10−3. Each ith monkey position is evaluated for an improvement in the objective function over the climb number of iterations (Nc). The function f′ij(xi) is called the pseudogradient of the objective function and is expressed as

f′ij(xi) = [f(xi + Δxi) − f(xi − Δxi)] / (2Δxij), j = 1, 2, …, n. (3)

The step length in the climb process plays a crucial role in the precision of the approximation of the local solution, so the climb process only accepts feasible solutions: y is a feasible candidate position for each monkey, and xi is updated to y when y is feasible; otherwise xi does not change.
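
The climb step can be sketched in Python as below. This is a hedged reading of equations (2) and (3) for a minimization problem; the helper name, the default bounds, and the acceptance rule (feasible and non-worsening) are assumptions rather than the paper's exact implementation:

```python
import numpy as np

def climb(x, f, a=1e-3, Nc=50, lb=-100.0, ub=100.0, rng=None):
    """Climb process for one monkey position x (minimization).
    A random +/-a step vector gives a pseudogradient estimate (eq. (3)),
    and the position moves by a along its sign (eq. (2))."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float).copy()
    for _ in range(Nc):
        dx = a * rng.choice([-1.0, 1.0], size=x.shape)      # Delta x_ij = +a or -a with equal probability
        pseudo_grad = (f(x + dx) - f(x - dx)) / (2.0 * dx)   # pseudogradient of the objective
        y = x - a * np.sign(pseudo_grad)                     # minus sign: minimization variant of eq. (2)
        if np.all((lb <= y) & (y <= ub)) and f(y) <= f(x):   # keep only feasible, non-worsening moves
            x = y
    return x
```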

Step C. Watch-Jump Process
This process checks each monkey's position after the climb process; in other words, it checks whether the position has reached the top. Each monkey looks around to see whether there is a position higher than its current one. If there is, the monkey jumps from its current position; otherwise, its position has not yet reached the top. Naturally this mainly concerns the monkeys with the best positions (close to or at the top). Each monkey may move at most a maximal distance from its current position, given by the eyesight parameter b. The watch-jump process draws a candidate point uniformly at random within eyesight,

yj ∈ [xij − b, xij + b], j = 1, 2, …, n, (4)

with b = 0.5 as the eyesight. If the candidate y is feasible and improves the objective, xi is updated to y; otherwise, equation (4) is sampled again until an appropriate point y is reached. The climb process is then repeated starting from y. Thus, each monkey moves at most a maximal distance from its current position.
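
A corresponding sketch of the watch-jump step is given below, again for a minimization problem; the cap on the number of attempts is an assumption added so the loop always terminates:

```python
import numpy as np

def watch_jump(x, f, b=0.5, lb=-100.0, ub=100.0, max_tries=50, rng=None):
    """Watch-jump process: sample a point within eyesight b of the current
    position (eq. (4)) and jump to it if it is feasible and better."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    for _ in range(max_tries):
        y = rng.uniform(x - b, x + b)                        # candidate in [x_ij - b, x_ij + b]
        if np.all((lb <= y) & (y <= ub)) and f(y) < f(x):
            return y                                         # jump; the climb process then restarts from y
    return x                                                 # nothing better within eyesight: stay put
```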

Step D. Somersault Process
This process enables the monkeys to find new positions (search domains) relative to the barycentre of all monkeys' current positions, which is defined as the pivot. The monkeys somersault along the direction pointing to the pivot. All somersault factors lie in the somersault interval [c, d], set here to [−1, 1]; the feasible region the monkeys can reach grows as |c| and d increase. In this process, a real number s is generated randomly in the somersault interval [−1, 1], and the new position is

yj = xij + s (pj − xij), j = 1, 2, …, n, (5)

where p is the somersault pivot, the barycentre of the current positions,

pj = (1/M) Σ (i = 1, …, M) xij. (6)

If y is feasible, xi is updated to y; otherwise, equations (5) and (6) are repeated until a feasible solution y is found.
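
The somersault step can be sketched for the whole population as follows; the vectorized pivot computation, the retry cap, and the feasibility-only acceptance are assumptions consistent with equations (5) and (6):

```python
import numpy as np

def somersault(X, c=-1.0, d=1.0, lb=-100.0, ub=100.0, rng=None):
    """Somersault process: every monkey moves along the direction pointing to
    the barycentre (pivot) of all current positions (eqs. (5) and (6))."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float).copy()
    pivot = X.mean(axis=0)                                   # p_j = (1/M) * sum_i x_ij
    for i in range(X.shape[0]):
        for _ in range(50):                                  # retry (capped here) until a feasible somersault
            s = rng.uniform(c, d)                            # somersault factor in [c, d]
            y = X[i] + s * (pivot - X[i])                    # y_j = x_ij + s (p_j - x_ij)
            if np.all((lb <= y) & (y <= ub)):
                X[i] = y
                break
    return X
```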

Step E. Random Perturbation (RP) Process
This process controls the monkeys' current positions, which can become stuck at a local optimum. After the somersault process, a novel random perturbation (RP) process is built into the proposed algorithm through two control parameters, p1 = 0.5 and p2 = 0.2. The parameter p1 improves a monkey's current position if it is stuck at a local minimum: when the same or a worse value is found in the search space repeatedly, a tolerance number (tolX) bounds how many consecutive iterations are spent improving the monkey's position (one coordinate at a time). The parameter p2 perturbs the position along the direction pointing to the current position, with a different perturbation, so that the monkey can escape a possible local minimum or search for other (and better) minima. The pseudocode of the proposed algorithm is shown in Algorithm 1.
The proposed algorithm for solving global numerical optimization problems starts from the population size (M), the boundaries (lb, ub), the eyesight (b), the climb number (Nc), the somersault interval c, d ∈ [−1, 1], and the control parameters (p1, p2); all of these input parameters are set before the algorithm runs. In the first perturbation, a random dimension is selected as ceil(rand × D), and the corresponding coordinate is multiplied by a factor built from a random scalar drawn from the standard normal distribution and scaled by the parameter p1. This prevents the proposed algorithm from getting stuck in local optima while controlling the monkeys' positions. The second perturbation uses uniformly distributed random numbers (rand[1 × D]) scaled by the parameter p2; the resulting factors are combined elementwise with the position vector x. In this way, the monkeys' positions are improved along the direction of their current positions with a different perturbation, so they can escape a possible local minimum or search for other (and better) minima. The detailed steps of the improved control parameters with RP are described in Algorithm 1, and a minimal Python sketch of the same two stages follows the pseudocode.

Algorithm 1: The random perturbation (RP) process (Step E), applied to the population produced by Steps A–D.

Input: population x (M monkeys, D dimensions) from Steps A–D; boundaries lb, ub; control parameters p1, p2; tolerance tolX
global_min ← −1    // non-positive flag: the minimization branch below is used
for i = 1 to M do
  for j = 1 to tolX do
    d ← ceil(rand × D)                    // pick a random dimension
    yi ← xid (1 + p1 randn)               // perturb that coordinate with a p1-scaled normal factor
    if yi is outside (lb, ub) then
      continue                            // discard infeasible candidates
    end if
    if global_min > 0 then
      if f(yi) > f(xij) then xij ← yi end if
    else
      if f(yi) < f(xij) then xij ← yi end if
    end if
  end for
end for
for i = 1 to M do
  for j = 1 to n do
    yi ← xij (·) (1 + p2 rand[1, D])      // perturb the whole vector with p2-scaled uniform factors
    if yi is outside (lb, ub) then
      continue                            // discard infeasible candidates
    end if
    if global_min > 0 then
      if f(yi) > f(xij) then xij ← yi end if
    else
      if f(yi) < f(xij) then xij ← yi end if
    end if
  end for
end for
Output: the perturbed population x
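
For readers who prefer executable code, here is a minimal Python sketch of the same two-stage random perturbation for a minimization problem. The function name, the NumPy layout, and the greedy acceptance rule are assumptions made for illustration; only the two stages themselves (one randomly chosen coordinate scaled by a p1-weighted normal factor, then all coordinates scaled by p2-weighted uniform factors) follow Algorithm 1:

```python
import numpy as np

def random_perturbation(X, f, lb, ub, p1=0.5, p2=0.2, tolX=10, rng=None):
    """Random perturbation (RP) process of MA+ (minimization sketch).
    Stage 1 perturbs one randomly chosen coordinate of each monkey with a
    normal factor scaled by p1; stage 2 perturbs the whole position vector
    with uniform factors scaled by p2. Feasible improvements are accepted."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.asarray(X, dtype=float).copy()
    M, D = X.shape
    for i in range(M):                                       # stage 1: single-coordinate perturbation
        for _ in range(tolX):
            j = rng.integers(D)                              # random dimension, as in ceil(rand x D)
            y = X[i].copy()
            y[j] = X[i, j] * (1.0 + p1 * rng.standard_normal())
            if np.all((lb <= y) & (y <= ub)) and f(y) < f(X[i]):
                X[i] = y
    for i in range(M):                                       # stage 2: whole-vector perturbation
        y = X[i] * (1.0 + p2 * rng.random(D))
        if np.all((lb <= y) & (y <= ub)) and f(y) < f(X[i]):
            X[i] = y
    return X
```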

3. Results and Discussion

3.1. Benchmark Functions

The improved monkey algorithm (MA+) was implemented in Matlab (2017). The computer used for the experiments has the following specifications:
(1) CPU: i5–6200U
(2) CPU speed: 2.30 GHz–2401 MHz
(3) RAM: 4.00 GB
(4) OS: Microsoft Windows 10

The information for the 12 benchmark functions is listed in Table 1, which gives the name of each function, its equation, and its range. The improved monkey algorithm (MA+) was evaluated on these 12 benchmark functions, namely, the sphere function (F1), Schwefel 2.22 function (F2), Schwefel 1.2 function (F3), Rosenbrock function (F4), Ackley function (F5), Griewank function (F6), sum squares function (F7), Dixon-Price function (F8), Bent Cigar function (F9), sum of different powers function (F10), Holzman function (F11), and hyperellipsoid function (F12); two of these functions are sketched in Python after Table 1 for illustration. The performance of the improved monkey algorithm and the performance of the comparative (metaheuristic) algorithms are evaluated on these 12 benchmark functions in the next sections.


Table 1

Name of function | Equation | Range
Sphere
Schwefel 2.22
Schwefel 1.2
Rosenbrock
Ackley
Griewank
Sum squares
Dixon-Price
Bent Cigar
Sum of different powers
Holzman
Hyperellipsoid
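
To make the test setup concrete, two of the listed benchmarks are shown below in Python using their standard textbook definitions. The exact constants and search ranges used in the paper are not reproduced in this extraction, so these serve only as illustrative examples:

```python
import numpy as np

def sphere(x):
    """F1, sphere function: sum of squared coordinates; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """F5, Ackley function with its usual constants; global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    d = x.size
    return float(-a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
                 - np.exp(np.sum(np.cos(c * x)) / d) + a + np.e)
```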

3.2. The Performance of Improved Monkey Algorithm on Different Dimensional Functions

The performance of the improved monkey algorithm and that of the original monkey algorithm (MA) were evaluated on the 12 benchmark functions with dimensions classified into three types: low-dimensional (30D), medium-dimensional (60D), and high-dimensional (90D). Both algorithms use the same population size and maximum number of iterations, and their maximum numbers of function evaluations are equal. Each algorithm is run 100 times independently under all conditions. The parameters and shared settings of both algorithms are as follows: population size (M) = 50, number of iterations (Ite.) = 50, and dimension (D) = 30, 60, and 90. The best mean and the best standard deviation of the experimental results are highlighted for each function.
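
The evaluation protocol described above can be summarized by the following Python sketch. The optimizer interface (a callable returning the best objective value found) and the result bookkeeping are hypothetical, chosen only to make the protocol concrete:

```python
import numpy as np

def run_experiment(optimizer, benchmarks, dims=(30, 60, 90),
                   M=50, iterations=50, runs=100, seed=0):
    """Run `optimizer` independently `runs` times on every (function, dimension)
    pair and report the best, mean, and standard deviation of the final values."""
    rng = np.random.default_rng(seed)
    results = {}
    for name, (f, lb, ub) in benchmarks.items():
        for D in dims:
            finals = np.array([optimizer(f, D, lb, ub, M=M, iterations=iterations, rng=rng)
                               for _ in range(runs)])
            results[(name, D)] = (finals.min(), finals.mean(), finals.std())
    return results
```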

The first part of this experiment was conducted on the 30, 60, and 90 dimensions. The experimental results are shown in Table 2 for the 12 benchmark optimization functions (F1–F12); the improved monkey algorithm (MA+) achieves the best optimization results in terms of the best, mean, and standard deviation values. The experimental results on all dimensions therefore show that the performance of the improved monkey algorithm is much better than that of the original monkey algorithm (MA) on all-dimensional functions. The results are demonstrated more intuitively by the convergence plots and global search ability of the two algorithms (MA and MA+); the convergence plots of both algorithms on the 30-dimensional functions are given in Figures 2(a)–2(l) for all benchmark functions.


Table 2: Best, mean, and standard deviation obtained by MA and MA+ on F1–F12 for D = 30, 60, and 90 after 100 runs.

F | D | MA Best | MA Mean | MA STD | MA+ Best | MA+ Mean | MA+ STD

F1 | 30 | 8.61E−01 | 9.90E−01 | 6.35E−02 | 9.95E−112 | 1.03E−40 | 1.69E−40
F1 | 60 | 8.40E+00 | 9.30E+00 | 5.40E−01 | 7.79E−79 | 8.20E−29 | 1.13E−28
F1 | 90 | 3.10E+01 | 3.49E+01 | 1.65E+00 | 1.70E−72 | 1.16E−23 | 1.51E−23

F2 | 30 | 7.85E+00 | 9.25E+00 | 7.70E−01 | 2.05E−89 | 1.09E−23 | 1.18E−23
F2 | 60 | 4.35E+01 | 6.35E+01 | 3.61E+01 | 6.60E−65 | 8.35E−18 | 7.35E−18
F2 | 90 | 2.44E+02 | 7.32E+09 | 1.39E+10 | 1.40E−49 | 2.52E−14 | 3.71E−14

F3 | 30 | 5.15E+03 | 5.75E+03 | 4.75E+02 | 1.10E−122 | 1.22E−36 | 1.00E−36
F3 | 60 | 1.03E+05 | 1.10E+05 | 5.65E+03 | 3.09E−90 | 1.14E−24 | 1.07E−24
F3 | 90 | 5.20E+05 | 5.75E+05 | 3.09E+04 | 3.16E−62 | 1.45E−19 | 1.80E−19

F4 | 30 | 9.82E+03 | 1.23E+04 | 2.00E+03 | 2.69E+01 | 2.69E+01 | 2.04E−01
F4 | 60 | 3.99E+05 | 5.29E+05 | 7.25E+04 | 5.70E+01 | 5.71E+01 | 5.74E−02
F4 | 90 | 4.10E+06 | 4.79E+06 | 5.59E+05 | 8.70E+01 | 8.71E+01 | 5.40E−02

F5 | 30 | 5.20E+00 | 5.75E+00 | 3.02E−01 | 8.88E−16 | 9.55E−15 | 9.70E−15
F5 | 60 | 9.19E+00 | 9.40E+00 | 1.81E−01 | 8.88E−16 | 3.53E−14 | 5.40E−14
F5 | 90 | 1.21E+01 | 1.26E+01 | 2.09E−01 | 4.44E−15 | 1.33E−11 | 2.69E−11

F6 | 30 | 3.20E+00 | 3.75E+00 | 6.29E−01 | 0.00E+00 | 0.00E+00 | 0.00E+00
F6 | 60 | 3.19E+01 | 3.36E+01 | 1.75E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
F6 | 90 | 1.09E+02 | 1.16E+02 | 5.35E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00

F7 | 30 | 5.0E+01 | 5.6E+01 | 4.19E+00 | 1.2E−141 | 4.68E−38 | 4.70E−38
F7 | 60 | 1.02E+03 | 1.09E+03 | 4.88E+01 | 1.15E−92 | 5.54E−28 | 8.75E−28
F7 | 90 | 5.32E+03 | 5.86E+03 | 3.03E+02 | 4.30E−55 | 2.20E−21 | 1.53E−21

F8 | 30 | 8.09E+01 | 1.09E+02 | 2.09E+01 | 6.67E−01 | 6.67E−01 | 2.55E−05
F8 | 60 | 5.59E+03 | 7.65E+03 | 9.22E+02 | 6.67E−01 | 6.67E−01 | 4.70E−05
F8 | 90 | 1.03E+05 | 1.13E+05 | 5.76E+03 | 6.67E−01 | 6.67E−01 | 1.25E−04

F9 | 30 | 3.30E+08 | 3.70E+08 | 2.97E+07 | 1.25E−144 | 1.40E−31 | 1.75E−31
F9 | 60 | 2.70E+09 | 3.34E+09 | 3.92E+08 | 9.20E−115 | 5.59E−21 | 5.40E−21
F9 | 90 | 1.18E+10 | 1.32E+10 | 8.63E+08 | 3.29E−65 | 6.76E−15 | 7.40E−15

F10 | 30 | 3.05E−10 | 9.75E−09 | 7.80E−09 | 0.00E+00 | 1.9E−196 | 0.00E+00
F10 | 60 | 2.12E−08 | 5.73E−08 | 3.68E−08 | 0.00E+00 | 9.3E−190 | 0.00E+00
F10 | 90 | 1.01E−08 | 1.35E−07 | 7.80E−08 | 0.00E+00 | 1.3E−165 | 0.00E+00

F11 | 30 | 1.35E+01 | 1.75E+01 | 2.39E+00 | 5.25E−165 | 3.49E−54 | 5.20E−54
F11 | 60 | 1.59E+03 | 1.65E+03 | 6.02E+01 | 2.49E−82 | 4.49E−31 | 1.07E−30
F11 | 90 | 2.19E+04 | 2.50E+04 | 1.69E+03 | 2.59E−65 | 2.90E−23 | 3.65E−23

F12 | 30 | 2.10E+02 | 2.56E+02 | 3.50E+01 | 1.00E−124 | 1.15E−37 | 1.67E−37
F12 | 60 | 8.50E+03 | 1.06E+04 | 1.05E+03 | 5.33E−97 | 1.35E−26 | 1.53E−26
F12 | 90 | 7.92E+04 | 8.61E+04 | 3.69E+03 | 2.88E−65 | 1.97E−20 | 2.91E−20

Table 2 shows that, on the 30-dimensional functions, the best mean values obtained by the proposed improved algorithm for F1–F12 are 1.03E−40, 1.09E−23, 1.22E−36, 2.69E+01, 9.55E−15, 0.00E+00, 4.68E−38, 6.67E−01, 1.40E−31, 1.9E−196, 3.49E−54, and 1.15E−37, respectively. Additionally, Figures 2(a) to 2(l) reveal that the original monkey algorithm performs much worse on the 30-dimensional functions, while the proposed improved algorithm still shows distinguished search ability, reaches the global optimal solution, and converges quickly on all functions.

In addition, on the twelve 60-dimensional functions the best mean values obtained by the proposed improved algorithm are 8.20E−29, 8.35E−18, 1.14E−24, 5.71E+01, 3.53E−14, 0.00E+00, 5.54E−28, 6.67E−01, 5.59E−21, 9.3E−190, 4.49E−31, and 1.35E−26, respectively. Finally, on the twelve 90-dimensional functions the best mean values obtained by the proposed algorithm are 1.16E−23, 2.52E−14, 1.45E−19, 8.71E+01, 1.33E−11, 0.00E+00, 2.20E−21, 6.67E−01, 6.76E−15, 1.3E−165, 2.90E−23, and 1.97E−20, respectively.

3.3. Comparison of MA+ with Metaheuristic Algorithms on Different Dimensions

The improved monkey algorithm (MA+) was compared with many metaheuristic optimization algorithms on the 12 benchmark optimization functions over different dimensions. The information on the benchmark functions is listed in Table 1. All algorithms use the same initial parameters, the same dimensions, and the same number of iterations, and their maximum numbers of function evaluations are equal [30, 42]. The best experimental comparative results are highlighted for each function, and all details are shown in Tables 3–6.


Table 3: The best mean and the best standard deviation for comparing the performances of some metaheuristic algorithms (ABC, DA, and HAD) with MA+ after 100 runs.

F | D | ABC Mean | ABC STD | DA Mean | DA STD | HAD Mean | HAD STD | MA+ Mean | MA+ STD

F1 | 30 | 2.98E+01 | 1.02E+01 | 1.76E+00 | 4.34E+00 | 2.61E−14 | 6.53E−14 | 1.03E−40 | 1.69E−40
F1 | 60 | 1.97E+02 | 2.30E+01 | 2.29E+00 | 4.68E+00 | 3.46E−10 | 8.08E−10 | 8.20E−29 | 1.13E−28
F1 | 90 | 4.07E+02 | 3.72E+01 | 8.53E+00 | 1.27E+01 | 2.05E−09 | 3.08E−09 | 1.16E−23 | 1.51E−23

F2 | 30 | 7.54E+00 | 1.19E+00 | 7.46E+00 | 5.38E+00 | 3.40E−08 | 6.41E−08 | 1.09E−23 | 1.18E−23
F2 | 60 | 6.19E+01 | 1.17E+01 | 2.80E+01 | 2.77E+01 | 4.66E−06 | 4.04E−06 | 8.35E−18 | 7.35E−18
F2 | 90 | 1.59E+02 | 2.97E+01 | 3.10E+01 | 2.50E+01 | 4.89E−05 | 4.85E−05 | 2.52E−14 | 3.71E−14

F3 | 30 | 3.82E+04 | 8.56E+03 | 1.24E+03 | 2.23E+03 | 4.31E−05 | 7.84E−05 | 1.22E−36 | 1.00E−36
F3 | 60 | 1.57E+05 | 3.43E+04 | 1.09E+04 | 9.04E+03 | 3.32E−05 | 5.79E−05 | 1.14E−24 | 1.07E−24
F3 | 90 | 3.31E+05 | 3.78E+04 | 4.48E+04 | 4.85E+04 | 3.92E−05 | 5.64E−05 | 1.45E−19 | 1.80E−19

F4 | 30 | 1.36E+07 | 1.14E+07 | 1.15E+05 | 3.47E+05 | 2.75E+01 | 3.29E−01 | 2.69E+01 | 2.04E−01
F4 | 60 | 2.09E+08 | 5.78E+07 | 5.71E+05 | 1.69E+06 | 5.77E+01 | 9.94E−02 | 5.70E+01 | 5.74E−02
F4 | 90 | 5.48E+08 | 8.12E+07 | 7.18E+05 | 1.59E+06 | 8.77E+01 | 1.52E−01 | 8.71E+01 | 5.40E−02

F5 | 30 | 1.70E+01 | 5.08E−01 | 1.76E+00 | 3.83E+00 | 5.89E−08 | 8.64E−08 | 9.55E−15 | 9.70E−15
F5 | 60 | 1.95E+01 | 1.62E−01 | 7.41E+00 | 3.91E+00 | 7.18E−06 | 9.71E−06 | 3.53E−14 | 5.40E−14
F5 | 90 | 2.00E+01 | 1.66E−01 | 7.00E+00 | 2.22E+00 | 3.08E−05 | 4.34E−05 | 1.33E−11 | 2.69E−11

F6 | 30 | 9.22E+01 | 4.22E+01 | 7.64E+00 | 1.79E+01 | 4.79E−12 | 1.20E−11 | 0.00E+00 | 0.00E+00
F6 | 60 | 6.93E+02 | 1.11E+02 | 1.74E+01 | 2.90E+01 | 8.68E−09 | 1.48E−08 | 0.00E+00 | 0.00E+00
F6 | 90 | 1.43E+03 | 1.07E+02 | 2.27E+01 | 5.99E+01 | 2.72E−07 | 4.38E−07 | 0.00E+00 | 0.00E+00

F7 | 30 | 1.32E+03 | 4.45E+02 | 5.26E+01 | 1.03E+02 | 2.01E−15 | 2.67E−15 | 4.68E−38 | 4.70E−38
F7 | 60 | 2.04E+04 | 5.32E+03 | 8.19E+02 | 2.27E+03 | 1.18E−09 | 3.50E−09 | 5.54E−28 | 8.75E−28
F7 | 90 | 7.00E+04 | 6.98E+03 | 2.23E+03 | 4.78E+03 | 8.01E−10 | 1.35E−09 | 2.20E−21 | 1.53E−21

F8 | 30 | 7.68E+04 | 7.07E+04 | 1.42E+03 | 3.91E+03 | 6.67E−01 | 8.29E−08 | 6.67E−01 | 2.55E−05
F8 | 60 | 2.84E+06 | 9.97E+05 | 2.14E+04 | 5.99E+04 | 6.67E−01 | 1.22E−05 | 6.67E−01 | 4.70E−05
F8 | 90 | 1.25E+07 | 1.73E+06 | 5.42E+04 | 1.30E+05 | 6.67E−01 | 1.23E−04 | 6.67E−01 | 1.25E−04

F9 | 30 | 7.47E+09 | 2.77E+09 | 1.51E+08 | 3.15E+08 | 3.72E−12 | 7.07E−12 | 1.40E−31 | 1.75E−31
F9 | 60 | 7.19E+10 | 1.07E+10 | 1.23E+09 | 2.31E+09 | 3.11E−05 | 4.24E−05 | 5.59E−21 | 5.40E−21
F9 | 90 | 1.55E+11 | 1.30E+10 | 2.36E+09 | 4.40E+09 | 1.03E−02 | 2.23E−02 | 6.76E−15 | 7.40E−15

F10 | 30 | 7.62E−03 | 5.74E−03 | 1.35E−05 | 4.26E−05 | 4.40E−14 | 1.25E−13 | 1.9E−196 | 0.00E+00
F10 | 60 | 1.35E−01 | 6.36E−02 | 9.39E−06 | 2.57E−05 | 2.99E−11 | 9.21E−11 | 9.3E−190 | 0.00E+00
F10 | 90 | 3.98E−01 | 2.20E−01 | 1.80E−06 | 4.81E−06 | 5.91E−11 | 1.50E−10 | 1.3E−165 | 0.00E+00

F11 | 30 | 1.19E+04 | 1.18E+04 | 6.30E+01 | 1.97E+02 | 8.72E−18 | 1.12E−17 | 3.49E−54 | 5.20E−54
F11 | 60 | 8.35E+05 | 1.29E+05 | 4.42E+03 | 1.24E+04 | 5.34E−15 | 8.87E−15 | 4.49E−31 | 1.07E−30
F11 | 90 | 2.77E+06 | 4.33E+05 | 2.58E+04 | 7.14E+04 | 1.61E−14 | 3.13E−14 | 2.90E−23 | 3.65E−23

F12 | 30 | 2.36E+02 | 1.14E+02 | 4.98E+00 | 1.07E+01 | 5.16E−14 | 1.54E−13 | 1.15E−37 | 1.67E−37
F12 | 60 | 7.72E+03 | 1.09E+03 | 1.41E+02 | 1.73E+02 | 1.16E−10 | 1.38E−10 | 1.35E−26 | 1.53E−26
F12 | 90 | 3.85E+04 | 2.49E+03 | 6.56E+02 | 1.01E+03 | 2.09E−08 | 4.14E−08 | 1.97E−20 | 2.91E−20


Table 4: The best mean for comparing the performances of some metaheuristic algorithms (ACO, BAT, BBO, DE, GA, and PSO) with MA+ on 30, 60, and 90 dimensions after 100 runs.

F | D | ACO | BAT | BBO | DE | GA | PSO | MA+

F1 | 30 | 1.63E+02 | 1.67E+02 | 5.73E+00 | 2.79E+01 | 9.58E+01 | 5.12E+01 | 1.03E−40
F1 | 60 | 3.76E+02 | 3.91E+02 | 3.09E+01 | 1.74E+02 | 2.86E+02 | 2.13E+02 | 8.20E−29
F1 | 90 | 6.02E+02 | 6.19E+02 | 7.44E+01 | 3.80E+02 | 4.65E+02 | 4.29E+02 | 1.16E−23

F2 | 30 | 1.13E+02 | 2.95E+12 | 1.19E+01 | 5.38E+01 | 8.60E+01 | 1.14E+02 | 1.09E−23
F2 | 60 | 2.48E+02 | 2.29E+28 | 4.65E+01 | 1.71E+02 | 2.03E+02 | 2.49E+02 | 8.35E−18
F2 | 90 | 3.88E+02 | 6.75E+43 | 9.55E+01 | 2.97E+02 | 3.23E+02 | 3.89E+02 | 2.52E−14

F3 | 30 | 6.01E+04 | 1.28E+05 | 2.72E+04 | 6.13E+04 | 4.83E+04 | 1.90E+05 | 1.22E−36
F3 | 60 | 2.52E+05 | 4.84E+05 | 1.11E+05 | 2.38E+05 | 1.85E+05 | 8.30E+05 | 1.14E−24
F3 | 90 | 5.61E+05 | 1.10E+06 | 2.41E+05 | 5.40E+05 | 4.13E+05 | 2.18E+06 | 1.45E−19

F4 | 30 | 1.06E+08 | 2.32E+08 | 8.62E+05 | 1.59E+07 | 4.09E+07 | 2.25E+07 | 2.69E+01
F4 | 60 | 5.87E+08 | 5.97E+08 | 1.11E+07 | 2.12E+08 | 3.24E+08 | 2.11E+08 | 5.70E+01
F4 | 90 | 1.00E+09 | 9.99E+08 | 3.96E+07 | 5.81E+08 | 6.66E+08 | 7.39E+08 | 8.71E+01

F5 | 30 | 1.85E+01 | 1.99E+01 | 8.82E+00 | 1.87E+01 | 1.77E+01 | 1.87E+01 | 9.55E−15
F5 | 60 | 1.90E+01 | 1.99E+01 | 1.18E+01 | 1.90E+01 | 1.86E+01 | 1.90E+01 | 3.53E−14
F5 | 90 | 1.91E+01 | 1.99E+01 | 1.37E+01 | 1.91E+01 | 1.88E+01 | 1.91E+01 | 1.33E−11

F6 | 30 | 8.57E+01 | 5.77E+02 | 2.12E+01 | 9.38E+01 | 1.27E+02 | 1.69E+02 | 0.00E+00
F6 | 60 | 4.32E+02 | 1.33E+03 | 1.09E+02 | 6.02E+02 | 4.64E+02 | 7.27E+02 | 0.00E+00
F6 | 90 | 7.13E+02 | 2.14E+03 | 2.51E+02 | 1.31E+03 | 8.87E+02 | 1.62E+03 | 0.00E+00

F7 | 30 | 9.37E+03 | 9.27E+03 | 3.12E+02 | 1.29E+03 | 5.03E+03 | 2.30E+03 | 4.68E−38
F7 | 60 | 4.39E+04 | 4.29E+04 | 3.09E+03 | 1.65E+04 | 3.05E+04 | 1.62E+04 | 5.54E−28
F7 | 90 | 1.03E+05 | 1.03E+05 | 1.15E+04 | 5.51E+04 | 8.04E+04 | 5.06E+04 | 2.20E−21

F8 | 30 | 1.65E+06 | 1.66E+06 | 7.07E+03 | 9.65E+04 | 3.79E+05 | 2.28E+05 | 6.67E−01
F8 | 60 | 8.84E+06 | 8.63E+06 | 1.54E+05 | 2.54E+06 | 4.67E+06 | 2.58E+06 | 6.67E−01
F8 | 90 | 2.19E+07 | 2.15E+07 | 7.95E+05 | 1.03E+07 | 1.34E+07 | 9.36E+06 | 6.67E−01

F9 | 30 | 3.26E+10 | 6.09E+10 | 2.14E+09 | 9.29E+09 | 1.82E+10 | 1.55E+10 | 1.40E−31
F9 | 60 | 8.60E+10 | 1.46E+11 | 1.11E+10 | 6.46E+10 | 8.06E+10 | 5.24E+10 | 5.59E−21
F9 | 90 | 1.32E+11 | 2.37E+11 | 2.60E+10 | 1.42E+11 | 1.56E+11 | 1.02E+11 | 6.76E−15

F10 | 30 | 8.84E+00 | 7.17E−01 | 6.15E−34 | 1.81E−02 | 8.20E−01 | 7.49E−01 | 1.9E−196
F10 | 60 | 2.11E+01 | 1.00E+00 | 1.35E−06 | 4.62E−01 | 9.82E+00 | 1.93E+00 | 9.3E−190
F10 | 90 | 3.42E+01 | 1.26E+00 | 2.44E−05 | 2.25E+00 | 2.16E+01 | 2.97E+00 | 1.3E−165

F11 | 30 | 4.19E+05 | 4.19E+05 | 1.60E+03 | 2.51E+04 | 1.02E+05 | 6.73E+04 | 3.49E−54
F11 | 60 | 2.24E+06 | 2.13E+06 | 3.98E+04 | 6.43E+05 | 1.16E+06 | 9.31E+05 | 4.49E−31
F11 | 90 | 5.50E+06 | 5.24E+06 | 1.85E+05 | 2.65E+06 | 3.39E+06 | 3.19E+06 | 2.90E−23

F12 | 30 | 2.34E+03 | 1.76E+03 | 1.61E−02 | 1.75E+02 | 1.20E+02 | 2.79E+02 | 1.15E−37
F12 | 60 | 2.28E+04 | 1.66E+04 | 2.11E+01 | 4.88E+03 | 7.93E+03 | 3.44E+03 | 1.35E−26
F12 | 90 | 8.42E+04 | 5.92E+04 | 3.99E+02 | 2.61E+04 | 4.28E+04 | 1.48E+04 | 1.97E−20


Table 5: The best mean optimization results for comparing the performances of other metaheuristic algorithms (EHO, KH, MFO, MSA, SCA, and WOA) with MA+ for D = 30, 60, and 90 after 100 runs.

F | D | EHO | KH | MFO | MSA | SCA | WOA | MA+

F1 | 30 | 2.49E−07 | 4.63E−01 | 6.57E+01 | 2.30E−08 | 2.32E+01 | 2.42E−09 | 1.03E−40
F1 | 60 | 6.44E−07 | 4.75E+00 | 2.70E+02 | 3.67E−07 | 1.15E+02 | 4.67E−09 | 8.20E−29
F1 | 90 | 1.06E−06 | 9.04E+00 | 4.97E+02 | 1.17E−06 | 2.26E+02 | 7.45E−09 | 1.16E−23

F2 | 30 | 4.12E−03 | 1.14E+01 | 4.66E+02 | 1.78E−04 | 1.52E+01 | 3.76E−05 | 1.09E−23
F2 | 60 | 9.34E−03 | 2.45E+14 | 1.13E+17 | 9.50E−04 | 4.20E+01 | 9.19E−05 | 8.35E−18
F2 | 90 | 1.46E−02 | 3.56E+27 | 1.37E+32 | 1.63E−03 | 6.81E+01 | 1.52E−04 | 2.52E−14

F3 | 30 | 2.30E−04 | 6.71E+03 | 4.61E+04 | 1.59E−07 | 4.07E+04 | 2.25E+02 | 1.22E−36
F3 | 60 | 9.56E−04 | 1.05E+05 | 1.79E+05 | 7.08E−06 | 1.74E+05 | 1.33E+03 | 1.14E−24
F3 | 90 | 2.08E−03 | 2.46E+05 | 3.84E+05 | 4.86E−05 | 4.14E+05 | 2.25E+03 | 1.45E−19

F4 | 30 | 2.89E+01 | 8.69E+03 | 4.74E+07 | 2.86E+01 | 3.30E+07 | 2.87E+01 | 2.69E+01
F4 | 60 | 5.89E+01 | 2.28E+04 | 3.61E+08 | 5.89E+01 | 2.25E+08 | 5.85E+01 | 5.70E+01
F4 | 90 | 8.89E+01 | 3.85E+04 | 7.55E+08 | 8.89E+01 | 5.21E+08 | 8.83E+01 | 8.71E+01

F5 | 30 | 1.94E−03 | 4.84E+00 | 1.85E+01 | 6.84E−05 | 1.59E+01 | 1.21E−04 | 9.55E−15
F5 | 60 | 2.22E−03 | 7.16E+00 | 2.01E+01 | 1.88E−04 | 1.89E+01 | 1.19E−04 | 3.53E−14
F5 | 90 | 2.33E−03 | 7.80E+00 | 2.04E+01 | 3.57E−04 | 1.83E+01 | 1.41E−04 | 1.33E−11

F6 | 30 | 1.44E−04 | 3.03E+00 | 2.28E+02 | 1.09E−09 | 8.46E+01 | 6.10E−02 | 0.00E+00
F6 | 60 | 2.14E−04 | 6.83E+00 | 9.28E+02 | 7.15E−09 | 4.15E+02 | 3.88E−02 | 0.00E+00
F6 | 90 | 2.58E−04 | 7.77E+00 | 1.70E+03 | 3.90E−08 | 8.37E+02 | 3.47E−02 | 0.00E+00

F7 | 30 | 1.16E−05 | 4.21E+01 | 3.09E+03 | 1.95E−07 | 1.13E+03 | 1.30E−07 | 4.68E−38
F7 | 60 | 6.21E−05 | 5.33E+02 | 2.63E+04 | 1.24E−05 | 1.10E+04 | 7.04E−07 | 5.54E−28
F7 | 90 | 1.55E−04 | 1.55E+03 | 7.70E+04 | 4.37E−05 | 3.34E+04 | 9.83E−07 | 2.20E−21

F8 | 30 | 9.51E−01 | 4.69E+01 | 3.21E+05 | 6.71E−01 | 2.10E+05 | 7.59E−01 | 6.67E−01
F8 | 60 | 9.89E−01 | 6.05E+02 | 4.55E+06 | 7.88E−01 | 2.98E+06 | 9.29E−01 | 6.67E−01
F8 | 90 | 9.95E−01 | 1.86E+03 | 1.45E+07 | 9.95E−01 | 1.03E+07 | 9.73E−01 | 6.67E−01

F9 | 30 | 7.35E+01 | 2.16E+06 | 2.23E+10 | 1.74E−02 | 8.21E+09 | 2.21E+00 | 1.40E−31
F9 | 60 | 1.94E+02 | 7.53E+08 | 1.01E+11 | 3.61E−01 | 4.19E+10 | 1.52E+00 | 5.59E−21
F9 | 90 | 3.19E+02 | 3.25E+09 | 1.88E+11 | 1.52E+00 | 8.77E+10 | 2.76E+00 | 6.76E−15

F10 | 30 | 4.29E−12 | −7.30E+02 | 4.82E−02 | 9.39E−15 | 8.48E−02 | 7.48E−17 | 1.9E−196
F10 | 60 | 5.48E−12 | −1.34E+03 | 2.30E−01 | 2.12E−14 | 4.85E−01 | 5.51E−16 | 9.3E−190
F10 | 90 | 5.40E−12 | −1.85E+03 | 4.53E−01 | 1.13E−14 | 8.17E−01 | 7.91E−17 | 1.3E−165

F11 | 30 | 8.13E−13 | −2.21E+08 | 8.48E+04 | 2.46E−14 | 4.50E+04 | 5.38E−11 | 3.49E−54
F11 | 60 | 5.90E−12 | −6.49E+12 | 1.14E+06 | 6.19E−12 | 7.49E+05 | 2.28E−12 | 4.49E−31
F11 | 90 | 1.59E−11 | −1.65E+17 | 3.76E+06 | 7.59E−11 | 2.36E+06 | 9.29E−10 | 2.90E−23

F12 | 30 | 4.21E−06 | 4.27E+00 | 4.60E+02 | 3.33E−06 | 1.55E+02 | 2.85E−08 | 1.15E−37
F12 | 60 | 4.57E−05 | 1.96E+02 | 8.62E+03 | 1.44E−04 | 3.33E+03 | 3.31E−07 | 1.35E−26
F12 | 90 | 1.82E−04 | 9.54E+02 | 3.97E+04 | 9.07E−04 | 1.59E+04 | 1.89E−06 | 1.97E−20


Table 6: The mean and standard deviations obtained by MBO, GCMBO, OPMBO, and MA+ on the test optimization functions after 30 runs.

F | D | MBO Mean | MBO STD | GCMBO Mean | GCMBO STD | OPMBO Mean | OPMBO STD | MA+ Mean | MA+ STD

F1 | 20 | 1.02E+01 | 2.41E+01 | 4.03E−09 | 6.14E−09 | 1.99E−10 | 5.68E−10 | 6.67E−52 | 8.17E−52
F1 | 50 | 2.09E+02 | 1.39E+02 | 1.11E−09 | 2.15E−09 | 8.85E−09 | 7.50E−09 | 2.40E−32 | 3.00E−32
F1 | 100 | 4.83E+02 | 3.27E+02 | 1.00E+01 | 4.01E+01 | 4.87E+00 | 2.52E+01 | 1.70E−22 | 1.94E−22

F2 | 20 | 1.95E+01 | 2.47E+01 | 2.27E+00 | 5.83E+00 | 1.72E−06 | 1.80E−06 | 8.80E−29 | 1.10E−28
F2 | 50 | 1.30E+02 | 7.90E+01 | 2.36E+01 | 3.56E+01 | 2.73E−05 | 1.65E−05 | 1.99E−20 | 3.92E−20
F2 | 100 | 2.66E+02 | 1.75E+02 | 5.61E+01 | 7.16E+01 | 7.60E−02 | 4.13E−01 | 1.90E−13 | 1.65E−13

F3 | 20 | 9.91E+03 | 7.05E+03 | 5.47E+03 | 3.63E+03 | 1.29E+01 | 2.54E+01 | 4.04E−45 | 4.67E−45
F3 | 50 | 7.17E+04 | 5.12E+04 | 2.81E+04 | 1.75E+04 | 1.98E+03 | 8.64E+03 | 1.39E−27 | 1.45E−27
F3 | 100 | 3.63E+05 | 1.78E+05 | 1.12E+05 | 6.38E+04 | 9.42E+03 | 2.84E+04 | 9.10E−18 | 1.12E−17

F4 | 20 | 6.36E+02 | 1.01E+03 | 7.10E+01 | 9.60E+01 | 1.41E+01 | 7.96E+00 | 1.65E+01 | 2.90E−01
F4 | 50 | 5.93E+03 | 6.24E+03 | 2.43E+02 | 3.53E+02 | 4.60E+01 | 5.35E+01 | 4.72E+01 | 1.19E−01
F4 | 100 | 1.35E+04 | 1.43E+04 | 1.00E+03 | 2.06E+03 | 4.32E+02 | 1.25E+03 | 9.75E+01 | 4.20E−01

F5 | 20 | 8.92E+00 | 7.96E+00 | 2.61E−01 | 1.08E+00 | 6.94E−06 | 5.00E−06 | 8.59E−15 | 8.69E−15
F5 | 50 | 1.59E+01 | 6.11E+00 | 1.96E+00 | 4.66E+00 | 4.03E−05 | 2.24E−05 | 2.80E−14 | 1.95E−14
F5 | 100 | 1.86E+01 | 3.99E+00 | 8.63E+00 | 9.13E+00 | 3.93E−02 | 1.89E−01 | 4.03E−10 | 7.08E−11

Tables 3 to 5 show the experimental comparative results for all functions (F1–F12) on the 30, 60, and 90 dimensions. Each algorithm is run 100 times independently for all 12 benchmark optimization functions on all these dimensions in these tables. However, Table 6 shows the experimental comparative results for some functions (F1–F5) on 20, 50, and 100 dimensions. Each algorithm is run 30 times independently for each of 5 benchmark optimization functions on all these dimensions in Table 6.

The initial parameters and conditions were the same for all algorithms: the population size is set to 50, and each algorithm runs until it reaches the maximum number of iterations (50).

In the first stage, the performance of the proposed algorithm is compared with that of a selected collection of comparative algorithms: ABC, DA, and HAD. The best mean and the best standard deviation of the experimental results are shown in Table 3 for each function. Table 3 shows that the proposed algorithm has an outstanding performance in the majority of the evaluation cases for the benchmark functions F1–F7 and F9–F12. On the Dixon-Price function (F8), however, the HAD algorithm matches the proposed algorithm for all dimensions, and the best standard deviation on that function was obtained by the HAD algorithm. All details are shown in Table 3.

In the second stage, the performance of the proposed algorithm is compared with that of several metaheuristic optimization algorithms: ACO, BAT, BBO, DE, GA, and PSO. The best mean of the experimental results is listed in Table 4 for each function.

In the third stage, the performance of the proposed algorithm is compared with that of other metaheuristic optimization algorithms: EHO, KH, MFO, MSA, SCA, and WOA. The best mean of the experimental results is listed in Table 5 for each function.

Finally, the performance of the proposed algorithm was also compared with the performances of three algorithms, namely, the monarch butterfly optimization (MBO) algorithm, MBO with a greedy strategy and self-adaptive crossover operator (GCMBO), and MBO with opposition-based learning and random local perturbation (OPMBO), using five benchmark functions; all details are listed in Table 6.

To sum up, the experimental comparative results show that the proposed algorithm reaches much better solutions and has the best convergence performance in escaping local optima when compared with ACO, BAT, BBO, DE, GA, PSO, EHO, KH, MFO, MSA, SCA, WOA, MBO, GCMBO, and OPMBO. All of these comparative results show an outstanding performance of the proposed algorithm in the majority of the evaluation cases. All details are listed in Tables 4–6.

4. Conclusions

This paper presented a novel metaheuristic search and cognitively inspired algorithm based on the monkey algorithm. The proposed algorithm can be employed for solving various kinds of optimization problems and was evaluated extensively on 12 benchmark optimization functions with several dimensions for each function. A new random perturbation was defined to improve the control parameters and was built into the proposed algorithm. The main advantage of the control parameters is that they efficiently prevent the improved monkey algorithm from getting stuck in local optima; the algorithm found the global optimal solution for 8 benchmark functions, namely, the sphere function (F1), Schwefel 2.22 function (F2), Schwefel 1.2 function (F3), sum squares function (F7), Bent Cigar function (F9), sum of different powers function (F10), Holzman function (F11), and hyperellipsoid function (F12), as shown in Figures 2(a)–2(c), 2(g), and 2(i)–2(l), respectively. Moreover, the Ackley function (F5) in Figure 2(e) and the Griewank function (F6) in Figure 2(f) reached 8.88E−16 and 0.00E+00, respectively, within the lowest numbers of iterations. These experimental results are the best solutions for these functions, and they reached the global optimum early without getting stuck in local optima. However, on the Rosenbrock function (F4) in Figure 2(d) and the Dixon-Price function (F8) in Figure 2(h), the search got stuck in local optima and performed poorly at low iteration counts, although the proposed algorithm still obtained the best solution for these functions within the maximum allowable number of iterations. Briefly, compared with the original monkey algorithm, the search strategy of the proposed algorithm has generally proven to reach the global optimum, to converge reliably, and to perform much better on many optimization problems within a smaller number of iterations.

The performance of the improved monkey algorithm was compared with that of many metaheuristic optimization algorithms, a collection of 18 optimizers in total. The comparative results included simple statistics on the best and mean values as well as convergence plots. All of these comparative results showed that the proposed algorithm has an outstanding performance in the majority of the evaluation cases.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author declares that he has no conflicts of interest.

Acknowledgments

The author would like to acknowledge Faculty of Engineering and Architecture, Department of Computer Engineering, Istanbul Gelisim University, Avcılar-Istanbul, Turkey.

References

1. R. H. Abiyev and M. Tunay, "Optimization of high dimensional functions through hypercube evaluation," Computational Intelligence and Neuroscience, vol. 2015, Article ID 967320, 11 pages, 2015.
2. M. Tunay, "Evolutionary search algorithm based on hypercube optimization for high-dimensional functions," International Journal of Computational and Experimental Science and Engineering (IJCESEN), vol. 6, no. 1, pp. 42–62, 2020.
3. R. H. Abiyev and M. Tunay, "Optimization search using hypercubes," in Proceedings of the 2020 4th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), pp. 1–8, Istanbul, Turkey, October 2020.
4. L. Marini, B. Morini, and M. Porcelli, "Quasi-Newton methods for constrained nonlinear systems: complexity analysis and applications," Computational Optimization and Applications, vol. 71, no. 1, pp. 147–170, 2018.
5. S. F. Husin, M. Mamat, and M. A. H. Ibrahim, "A modification of steepest descent method for solving large-scaled unconstrained optimization problems," International Journal of Engineering & Technology, vol. 7, no. 3, pp. 72–75, 2018.
6. O. O. Kryazhych, O. M. Trofymchuk, and O. V. Kovalenko, "The algorithm for determining the starting point in the simulation by the method of possible directions," Radio Electronics, Computer Science, Control, vol. 3, pp. 40–46, 2019.
7. S. Wang, E. Roosta-Khorasani, P. Xu, and M. W. Mahoney, "Giant: globally improved approximate Newton method for distributed optimization," in Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 2332–2342, Montreal, Canada, December 2018.
8. M. Tunay, "A new intense stochastic search method based on hypercube evaluation for examination timetabling problems," in Proceedings of the 2020 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), pp. 1–5, Istanbul, Turkey, June 2020.
9. C. Han, L. Ming, and Z. Dinghua, "Optimization of varying-parameter drilling for multi-hole parts using metaheuristic algorithm coupled with self-adaptive penalty method," Applied Soft Computing, vol. 95, p. 106489, 2020.
10. M. A. Akbay, C. B. Kalayci, and O. Polat, "A parallel variable neighborhood search algorithm with quadratic programming for cardinality constrained portfolio optimization," Knowledge-Based Systems, vol. 198, p. 105944, 2020.
11. U. M. Diwekar, Introduction to Applied Optimization, vol. 22, Springer Nature, Basingstoke, UK, 2020.
12. P. Kuendee and U. Janjarassuk, "A comparative study of mixed-integer linear programming and genetic algorithms for solving binary problems," in Proceedings of the 2018 5th International Conference on Industrial Engineering and Applications (ICIEA), pp. 284–288, Singapore, April 2018.
13. Z. Wang and H. Li, "A novel multi-objective evolutionary algorithm based on linear programming," in Proceedings of the 2018 14th International Conference on Computational Intelligence and Security (CIS), pp. 345–348, Hangzhou, China, November 2018.
14. S. Agarwal, A. P. Singh, and N. Anand, "Evaluation performance study of Firefly algorithm, particle swarm optimization and artificial bee colony algorithm for non-linear mathematical optimization functions," in Proceedings of the 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), pp. 1–8, Tiruchengode, India, July 2013.
15. K. Schittkowski and C. Zillober, "Nonlinear programming: algorithms, software, and applications," IFIP Advances in Information and Communication Technology, vol. 166, pp. 73–107, 2006.
16. N. Andreasson, M. Patriksson, and A. Evgrafov, An Introduction to Continuous Optimization: Foundations and Fundamental Algorithms, Courier Dover Publications, Garden City, NY, USA, 2020.
17. R. W. Sebesta, Concepts of Programming Languages, Pearson, Boston, MA, USA, 2012.
18. E. V. Denardo, Dynamic Programming: Models and Applications, Courier Corporation, Chelmsford, MA, USA, 2012.
19. D. P. Bertsekas, Abstract Dynamic Programming, Athena Scientific, Nashua, NH, USA, 2018.
20. M. Dorigo and T. Stützle, "Ant colony optimization: overview and recent advances," Handbook of Metaheuristics, Springer, Boston, MA, USA.
21. D. Ke-Lin and M. N. S. Swamy, "Ant colony optimization," Search and Optimization by Metaheuristics, Birkhäuser, Basel, Switzerland, 2016.
22. S. Mirjalili, "The ant lion optimizer," Advances in Engineering Software, vol. 83, pp. 80–98, 2015.
23. X.-S. Yang, "A new metaheuristic bat-inspired algorithm," Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Springer, Berlin, Germany, 2010.
24. X.-S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), pp. 210–214, Coimbatore, India, December 2009.
25. G.-G. Wang, S. Deb, and L. S. Coelho, "Elephant herding optimization," in Proceedings of the 3rd International Symposium on Computational and Business Intelligence (ISCBI), pp. 1–5, Bali, Indonesia, December 2015.
26. Q. Zhao and C. Li, "Two-stage multi-swarm particle swarm optimizer for unconstrained and constrained global optimization," IEEE Access, vol. 8, pp. 124905–124927, 2020.
27. A. H. Gandomi and A. H. Alavi, "Krill herd: a new bio-inspired optimization algorithm," Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 12, pp. 4831–4845, 2012.
28. S. Mirjalili, "Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm," Knowledge-Based Systems, vol. 89, pp. 228–249, 2015.
29. G.-G. Wang, S. Deb, and Z. Cui, "Monarch butterfly optimization," Neural Computing and Applications, vol. 31, pp. 1995–2014, 2019.
30. L. Sun, S. Chen, J. Xu, and Y. Tian, "Improved monarch butterfly optimization algorithm based on opposition-based learning and random local perturbation," Complexity, vol. 2019, Article ID 4182148, 20 pages, 2019.
31. J. An, Q. Kang, L. Wang, and Q. Wu, "Mussels wandering optimization: an ecologically inspired algorithm for global optimization," Cognitive Computation, vol. 5, no. 2, pp. 188–199, 2013.
32. G.-G. Wang, "Moth search algorithm: a bio-inspired metaheuristic algorithm for global optimization problems," Memetic Computing, vol. 10, pp. 151–164, 2018.
33. S. Mirjalili and A. Lewis, "The whale optimization algorithm," Advances in Engineering Software, vol. 95, pp. 51–67, 2016.
34. L. Wee Loon, W. Antoni, D. Mohammad, and H. Habibollah, "A biogeography-based optimization algorithm hybridized with tabu search for the quadratic assignment problem," Computational Intelligence and Neuroscience, vol. 2016, Article ID 5803893, 12 pages, 2016.
35. R. Storn and K. Price, "Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
36. M. Emmerich, O. M. Shir, and H. Wang, "Evolution strategies," in Handbook of Heuristics, R. Martí, P. Panos, and M. Resende, Eds., pp. 1–31, Springer, Berlin, Germany, 2018.
37. A. Lambora, K. Gupta, and K. Chopra, "Genetic algorithm-a literature review," in Proceedings of the 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), pp. 380–384, Faridabad, India, February 2019.
38. M. Tunay and R. H. Abiyev, "Hybrid local search based genetic algorithm and its practical application," International Journal of Soft Computing and Engineering, vol. 5, no. 2, pp. 21–27, 2015.
39. Z. W. Geem, J. H. Kim, and G. V. Loganathan, "A new heuristic optimization algorithm: harmony search," Simulation, vol. 76, no. 2, pp. 60–68, 2001.
40. E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, "GSA: a gravitational search algorithm," Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009.
41. S. Mirjalili, "SCA: a sine cosine algorithm for solving optimization problems," Knowledge-Based Systems, vol. 96, pp. 120–133, 2016.
42. W. A. H. M. Ghanem and A. Jantan, "A cognitively inspired hybridization of artificial bee colony and dragonfly algorithms for training multi-layer perceptrons," Cognitive Computation, vol. 10, pp. 1096–1134, 2018.

Copyright © 2021 Mustafa Tunay. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
