
Amit Kumar Bairwa, Sandeep Joshi, Dilbag Singh, "Dingo Optimizer: A Nature-Inspired Metaheuristic Approach for Engineering Problems", Mathematical Problems in Engineering, vol. 2021, Article ID 2571863, 12 pages, 2021. https://doi.org/10.1155/2021/2571863

Dingo Optimizer: A Nature-Inspired Metaheuristic Approach for Engineering Problems

Academic Editor: Yann Favennec
Received: 23 Jul 2020
Revised: 08 Oct 2020
Accepted: 16 Dec 2020
Published: 10 Jun 2021

Abstract

Optimization is on researchers' minds whenever they approach engineering problems. This paper presents a new metaheuristic named the dingo optimizer (DOX), which is motivated by the behavior of the dingo (Canis familiaris dingo). The overall concept is to develop the method around the collaborative and social behavior of dingoes. The developed algorithm is based on the hunting behavior of dingoes, which comprises exploration, encircling, and exploitation. All of these prey-hunting steps are modeled mathematically and implemented in a simulator to test the performance of the proposed algorithm. Comparative analyses are drawn between the proposed approach and the grey wolf optimizer (GWO) and particle swarm optimizer (PSO). Several well-known test functions are used for the comparative study. The results reveal that the dingo optimizer performs significantly better than the other nature-inspired algorithms.

1. Introduction

The challenges of the modern world comprise various goals that must be optimized at the same time. Optimization is a process that seeks one or more solutions to a problem that lead to the extreme values of one or more objectives [1]. Optimization can therefore be performed with respect to single or multiple objective functions [2–4].

With this in mind, there is a need for new metaheuristic-based solutions that reduce the burden of model design. The objective of this paper is to develop a nature-inspired algorithm called the dingo optimizer, abbreviated DOX. It is based on the dingo's social hierarchy and prey-hunting behavior.

Metaheuristic algorithms are remarkably popular owing to their flexibility, simplicity, low mathematical complexity, and avoidance of local optima. Flexibility means that such algorithms can be applied to a wide variety of engineering problems, and they provide satisfactory results for many complex problems [5]. They are simple because they are inspired by nature, such as animal behavior in accomplishing a particular task, physical phenomena, and other evolutionary behaviors.

One of the main reasons to use metaheuristics in real-life problems is that almost all such optimization methods start from random solutions, so no good initial estimate of the optimum is required. Metaheuristic algorithms are also powerful at escaping local optima compared with traditional optimization algorithms. Searching the real search space of a real-world problem is complicated because it is riddled with local optima, which is why metaheuristic algorithms are well suited to such challenging problems.

Many metaheuristic algorithms are proposed every year, and they show promising results on engineering problems. However, new applications keep introducing new kinds of complexity and new challenges, and no single algorithm is guaranteed to solve every problem. This motivated us to develop a new metaheuristic algorithm, the dingo optimizer (DOX). The method, which is mathematically modeled and inspired by the social hierarchy of dingoes, is also intended for solving real-time engineering problems.

This paper is arranged as follows. Section 2 provides a literature survey and explains the basic principles behind DOX, focused on pack hunting. The proposed DOX is presented in Section 3. In Section 4, the performance of DOX is tested on various benchmark functions, and the experimental results are compared with other state-of-the-art algorithms. In Section 5, the conclusion and future research directions are discussed.

2. Literature Review

In the past few years, real-life problems have grown in number and difficulty, motivating researchers to develop better metaheuristic techniques that combine randomization with local search. Such approaches have been used to determine feasible, high-quality solutions to real-life engineering problems, and they are widely accepted because of their relative simplicity and reliability compared with other existing methods.

Metaheuristic algorithms can be categorized into evolutionary-based [13], physical [54], bio-inspired, swarm-based [25], and other methods. Evolutionary algorithms give approximately optimal solutions for all types of optimization models, since these approaches do not depend on a specific fitness landscape or its assumptions [55]. Bio-inspired algorithms are also popular for solving various complicated problems; they are motivated by biological mechanisms such as selection, replication, mutation, and recombination. Physical algorithms are related to evolutionary algorithms through the principle of natural selection, in which a species attempts to survive in different environments based on a fitness test; physical quantities such as inertial force, electromagnetic force, and gravitational force guide the agents' coordinates as they search the space. Swarm-based algorithms are inspired by the self-organized behavior of social creatures, which exhibit decentralized collective behavior; collective intelligence emerges from the interactions of swarm members with each other and with their surroundings. Some popular swarm intelligence techniques are quite similar to other nature-inspired algorithms. In general, swarm-based algorithms are more popular because they have fewer operators (i.e., searching, encircling, and exploitation). Table 1 lists these algorithms, divided into single-objective and multiobjective variants depending on the number of objective functions.


Types of algorithms | Single objective | Multiobjective

Evolutionary algorithms (EA) [6] | Genetic algorithm (GA) [7] | NSGA-II [2, 8]
| Genetic programming (GP) [9] | Multiobjective GP [10]
| Differential evolution (DE) [11] | Multiobjective DE [12]
| Evolutionary strategy (ES) [13] | Multiobjective ES [14]

Bio-inspired algorithms (BIA) | Artificial immune system (AIS) algorithm [15] | Multiobjective AIS [16]
| Bacterial foraging algorithm (BFA) [17] | Multiobjective BFA [18]
| Dendritic cell algorithm (DCA) [19] |
| Krill herd algorithm (KHA) [20] |

Physical algorithms | Simulated annealing (SA) [21] | Multiobjective SA
| Memetic algorithm (MA) [22] | Multiobjective MA
| Shuffled frog-leaping algorithm (SFA) [23] | Multiobjective SFA

Swarm intelligence (SI) | Ant colony optimization (ACO) [24] | Multiobjective ACO
| Particle swarm optimization (PSO) [25] | Multiobjective PSO [1]
| Artificial bee colony (ABC) [26] | Multiobjective ABC
| Fish swarm algorithm (FSA) [27] | Multiobjective FSA
| Grey wolf optimizer (GWO) [28] |
| Dragonfly algorithm (DA) [29] |

Other nature-inspired algorithms | Firefly algorithm [30] | Multiobjective firefly [31]
| Whale optimization algorithm (WOA) [32] |
| Gravitational search algorithm (GSA) [33] | Multiobjective GSA
| Bat algorithm (BA) [34] | Multiobjective bat
| Cuckoo search algorithm (CSA) [35] | Multiobjective cuckoo
| Cat swarm optimization (CSO) [36] | Multiobjective CSO

Human behavior-inspired algorithms | Harmony search (HS) [37] | Multiobjective HS [38]
| Tabu search (TS) [39] | Multiobjective TS [40]
| Parameter adaptive harmony search (PAHS) [41] | Multiobjective PAHS [41]
| Group search optimizer (GSO) [42] | Multiobjective GSO
| Exchange market algorithm (EMA) [43] | Multiobjective EMA
| Imperialist competitive algorithm (ICA) [44] | Multiobjective ICA
| Soccer league competition algorithm (SLCA) [45] | Multiobjective SLCA
| League championship algorithm (LCA) [46] | Multiobjective LCA
| Social-based algorithm (SBA) [47] | Multiobjective SBA
| Firework algorithm (FA) [48] | Multiobjective FA
| Colliding bodies optimization (CBO) [49] | Multiobjective CBO
| Interior search algorithm (ISA) [50] | Multiobjective ISA
| Artificial ecosystem-based optimization (AEO) [51] |
| Spiral optimization algorithm (SOA) [52] |
| Adolescent identity search algorithm (AISA) [53] |

3. Dingo Optimizer (DOX)

3.1. Motivation

Nature has always been the most powerful teacher. Every species surviving on Earth has its own unique survival mechanisms. Social relationships, which are dynamic, are one of them. Based on general studies of animal social behavior, it can be segmented into a few categories. The first category depends on environmental factors, i.e., nearby resource availability and the challenges posed by other species. Another depends on individual behavior or quality.

With this in mind, the dingo, which strictly follows such social relationships, is the motivation for our work. The dingo is a type of dog; its scientific name was recently changed from Canis familiaris (dog) to Canis lupus (wolf) dingo. Dingoes are complicated, intelligent, and highly social animals, and they are skillful hunters living in packs of 12–15 members on average.

The social hierarchy is highly structured; the alpha, which may be male or female, is at the top. Alphas can be identified by their responsibilities, such as making decisions about sleeping places and hunting. The most dominant and strongest member of the pack is called the alpha and is considered the leader of the pack of dingoes; this reflects that discipline and organization matter more than raw strength. The decisions taken by the alpha are dictated to the pack, and in general, all members of the pack acknowledge the alpha by holding their tails down.

Beta dingoes are at the second level of the hierarchy and act as intermediaries between the alpha and the rest of the pack for related tasks. The beta plays an important role as an adviser to the alpha and maintains discipline for the whole pack. It confirms the orders of the alpha in the group and communicates back to the alpha. If the alpha does not survive for any reason, command is handed over to the beta, which controls the other lower-level dingoes.

Dingoes that are neither alpha nor beta are considered subordinates; they follow the alphas and betas. Scouts are responsible for observing the territory and alerting the group in case of any threat, whereas hunters support the alphas and betas in catching prey and providing food for the group.

Studies indicate that dingoes have an accurate sense of communication. They communicate with each other by sensing different sound intensities in the air. In DOX, a dingo generates sound feedback so that dingoes exchange their knowledge with others to build common community information. The amplitude of the signal is modified by the strength of the individual as the dingo moves from its previous location to a new one.

Group hunting is an interesting social behavior of dingoes and a further extension of their social behavior. The hunting strategy is categorized into three phases as follows: (i) chasing and approaching, (ii) encircling and harassing, and (iii) attacking.

The above steps are properly shown in Figure 1. Also, hunting behavior and the social arrangements of dingo are modeled mathematically, to develop DOX to perform nature-inspired optimization.

Exploration and exploitation are the two main components of DOX. In the exploration phase, the algorithm visits several promising solutions across the search space, whereas exploitation searches for optimal solutions within a given neighborhood. To find the best solution to any real-life problem, both components are required, with fine-tuning between them. However, balancing the two is challenging because of the algorithm's stochastic nature. This fact also motivates hybridized metaheuristic designs for real-life engineering problems.

3.2. Mathematical Models

The representation of the searching, encircling, and attacking prey is designed mathematically to perform the dingo optimization in this section.

3.2.1. Encircling

Dingoes are capable enough to locate prey. After tracing the location, the pack, led by the alpha, encircles the prey. To model the dingoes' social hierarchy, it is assumed that the current best agent is the target prey, which is close to the optimum, since the search space is not known a priori. Meanwhile, the other search agents keep updating their positions toward the best agent. This behavior of the dingoes is modeled by equations (1)–(5). A detailed description of the nomenclature used in the equations is provided in Table 2.


Elements | Description

Distance between the dingo and prey
Position vector (prey)
Position vector (dingo)
Coefficient vector
Coefficient vector
Random vector in [0,1]
Random vector in [0,1]
Linearly decremented from 3 to 0 over the iterations
Absolute value and multiplication with vectors
Maximum no. of iteration

The positions of neighboring dingoes are illustrated in Figure 2 using a two-dimensional position vector. According to the position of the prey, a dingo can update its position to (P, Q). All possible locations around the best agent, relative to the current location, are marked in the diagram and are reached by changing the values of the coefficient vectors. The same behavior can also be represented in three-dimensional space, as in Figure 3, which clearly illustrates how the random vectors a1 and a2 enable a dingo to reach any place between the points. Equations (1) and (2) let dingoes relocate anywhere in the search area around the prey at random. The same equations extend to a search space with N dimensions, in which case the dingoes move in hypercubes around the best result obtained so far.
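The encircling step can be sketched in code. The exact equations (1)–(5) are not reproduced above, so the forms below are assumptions consistent with Table 2: coefficient vectors built from random vectors a1 and a2 in [0, 1], a prey-weight vector in [0, 3], and a parameter b decreased linearly from 3 to 0.

```python
import numpy as np

def b_schedule(t, max_iter):
    """Control parameter b, decreased linearly from 3 to 0 (per Table 2)."""
    return 3.0 * (1.0 - t / max_iter)

def encircle(prey_pos, dingo_pos, b, rng=np.random.default_rng()):
    """One encircling step: move a dingo around the current best agent (prey).
    GWO-style forms are assumed here; A weights the prey in [0, 3] and
    B is drawn in [-b, b], so the encircling radius shrinks as b -> 0."""
    a1 = rng.random(dingo_pos.shape)        # random vector in [0, 1]
    a2 = rng.random(dingo_pos.shape)        # random vector in [0, 1]
    A = 3.0 * a1                            # coefficient vector in [0, 3]
    B = 2.0 * b * a2 - b                    # coefficient vector in [-b, b]
    D = np.abs(A * prey_pos - dingo_pos)    # distance between dingo and prey
    return prey_pos - B * D                 # updated dingo position
```

As b shrinks toward 0, the magnitude of B·D shrinks with it, so the new position lands ever closer to the prey, which is how the sketch mirrors the narrowing encirclement.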

3.2.2. Hunting

In an abstract search space, however, agents normally have no knowledge of the position of the prey (the optimum). To design the dingoes' hunting plan mathematically, we assume that all pack members, including the alpha, the beta, and the others, have good knowledge of the potential location of prey. The alpha dingo always commands the hunt, although the beta and other dingoes may also participate. Hence, we keep the first two best solutions achieved so far, and the other dingoes update their positions according to the location of the best search agents. Equations (6)–(14) are modeled accordingly. A detailed description of the nomenclature used in the equations is provided in Table 3.


Elements and description

Distance between the dingo and prey
Positioning of a prey vector
Positioning of a dingo vector
Coefficient vector
Coefficient vector
Random vector in [0, 1]
Random vector in [0, 1]
Linear decrease over the course of iterations
Absolute value
Maximum no. of iteration
Fitness value of alpha () dingo
Fitness value of beta () dingo
Fitness value of other dingoes

The following equations are used to calculate the intensity of each dingo:

The position update in the 2D search space is described in Figure 4, where the position updates of the alpha, beta, and other dingoes can easily be visualized. The dingoes (alpha, beta, and others) update their positions randomly around the estimated position of the prey in the search space.
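The hunting update can be sketched as follows. Since equations (6)–(14) are not reproduced above, the code assumes that the position estimates guided by the alpha and beta (the two best agents) are averaged, which is one plausible reading of "the first two best values achieved so far"; the coefficient forms mirror the encircling sketch and are likewise assumptions.

```python
import numpy as np

def hunt_update(pos, alpha_pos, beta_pos, b, rng=np.random.default_rng()):
    """Move one dingo toward the prey as estimated by the two best agents.
    The coefficient forms are assumed, not the paper's exact equations."""
    def guided(leader):
        A = 3.0 * rng.random(pos.shape)          # prey-weight vector in [0, 3]
        B = 2.0 * b * rng.random(pos.shape) - b  # step vector in [-b, b]
        D = np.abs(A * leader - pos)             # distance to the leader's estimate
        return leader - B * D
    # average the alpha- and beta-guided estimates of the prey position
    return (guided(alpha_pos) + guided(beta_pos)) / 2.0
```

With b = 0 the update collapses to the midpoint of the alpha and beta positions, matching the intuition that the pack closes in on the leaders' consensus at the end of the hunt.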

3.2.3. Attacking Prey

If there is no position update, it means the dingoes have finished the hunt by attacking the prey. To formulate this strategy mathematically, the value of b is decreased linearly. Note that the variation range of the coefficient also decreases with b: it is a random value in the [-3b, 3b] interval, where b is reduced from 3 to 0 over the iterations. When the random values of the coefficient lie in [-1, 1], a search agent's next position may be anywhere between its current position and the prey's position.

The proposed encircling method does reveal exploration to some extent; however, to accentuate exploration further, DOX requires additional operators. Figure 4 illustrates what drives a dingo to strike the prey. DOX assists its search agents in changing their locations based on the positions of the alpha, the beta, the other dingoes, and the targeted prey. Even so, with these operators alone, DOX can stagnate in local solutions.

3.2.4. Searching

Dingoes hunt for prey mostly according to the pack's location. They always move forward to hunt for and strike prey. Accordingly, the coefficient takes random values: a value less than -1 means the prey is moving away from the search agent, while a value greater than 1 means the pack is approaching the prey. This mechanism helps DOX to search the space globally. Figure 4 reflects that values beyond 1 let dingoes diverge from the current prey in search of a better one. Another component of DOX that promotes exploration is the weight vector in equation (3), which produces random numbers in [0, 3] as arbitrary weights for the prey. DOX thereby behaves stochastically, with weights below 1 or above 1 de-emphasizing or emphasizing the effect of the distance formulated in equation (1).

This is helpful for exploration and for the avoidance of nearby optima. Depending on a dingo's location, the prey's influence is weighted arbitrarily, making the prey harder or easier for the dingo to reach. We intentionally use this to provide stochastic exploration values from the initial to the final iterations, which is effective in protecting the solution from local optima. Eventually, DOX terminates when it meets the termination criterion.

3.3. Optimization Algorithm

The DOX pseudocode in Algorithm 1 demonstrates how it solves optimization problems, and several points are worth noting. Here, the stopping criterion is the maximum number of iterations. The dingo optimization process proceeds in the following steps.

Input: the population of dingoes (i = 1, 2, …, n)
Output: the best dingo (here, the best value is the minimum)
(1) Generate the initial search agents
(2) Initialize the control parameters and coefficient vectors
(3) while the termination condition is not reached do
(4)  Evaluate each dingo's fitness and intensity cost
(5)  alpha = the dingo with the best search result
(6)  beta = the dingo with the second-best search result
(7)  others = the remaining dingoes' search results
(8)  Iteration = 1
(9)  repeat
(10)   for i = 1 : n do
(11)    Update the current search agent's position
(12)   end for
(13)   Estimate the fitness and intensity cost of the dingoes
(14)   Record the positions of alpha, beta, and the others
(15)   Record the values of the control parameters and coefficient vectors
(16)   Iteration = Iteration + 1
(17)  until Iteration ≥ the stopping criterion
(18)  Output alpha as the best dingo
(19) end while
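Putting the pieces together, the loop of Algorithm 1 can be realized as a short, illustrative Python sketch. The authors' own implementation is in MATLAB and is not reproduced here; the update rules below are the same GWO-style assumptions discussed earlier, not the paper's exact equations.

```python
import numpy as np

def dingo_optimizer(obj, dim, lb, ub, n_agents=30, max_iter=500, seed=0):
    """Minimal DOX-style loop: rank the pack, let the alpha and beta guide
    the position updates, and shrink the step range via b (3 -> 0)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, size=(n_agents, dim))   # initial search agents
    fit = np.array([obj(p) for p in pos])             # fitness of each dingo
    for t in range(max_iter):
        order = np.argsort(fit)
        alpha, beta = pos[order[0]].copy(), pos[order[1]].copy()
        b = 3.0 * (1.0 - t / max_iter)                # linear decrease 3 -> 0
        for i in range(n_agents):
            estimates = []
            for leader in (alpha, beta):
                A = 3.0 * rng.random(dim)             # prey-weight vector
                B = 2.0 * b * rng.random(dim) - b     # step vector in [-b, b]
                D = np.abs(A * leader - pos[i])
                estimates.append(leader - B * D)
            pos[i] = np.clip(np.mean(estimates, axis=0), lb, ub)
            fit[i] = obj(pos[i])
    best = int(np.argmin(fit))
    return pos[best], fit[best]
```

On a simple sphere function, this sketch drives the best fitness toward zero as the iterations progress, which matches the qualitative behavior described for DOX.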

4. Results and Discussion

4.1. Experimental Setup

The overall simulation is done in MATLAB, taking into account the various parameters explained in this setup. The proposed DOX is implemented on Windows 10 with 8 GB of RAM and an Intel 2.50 GHz CPU. To generate the solutions for each predefined benchmark function, DOX uses 25 independent runs, each of 500 iterations.

4.2. Results

DOX has been tested on 23 well-known test functions [56], classical functions used by various research groups. The results of the proposed dingo algorithm are shown below. These test functions were chosen so that our experiments align with the existing metaheuristics literature rather than for convenience. The benchmark functions are defined in appendix Tables 4–6, where Dim indicates the function dimensionality and Range is the search-space boundary. Figure 5 compares the convergence curves obtained by DOX on some of the benchmark problems. The benchmark functions are typical minimization functions and can be segmented into several categories.
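For illustration, two of the classical test functions that commonly appear in such 23-function suites are shown below (the paper's exact F1–F23 definitions are in its appendix tables); the sphere function is unimodal, while the Rastrigin function is multimodal with many local minima.

```python
import numpy as np

def sphere(x):
    """Classical unimodal test function: f(x) = sum(x_i^2), minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Classical multimodal test function; global minimum 0 at x = 0,
    surrounded by a regular grid of local minima."""
    x = np.asarray(x, dtype=float)
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))
```

Unimodal functions of this kind probe exploitation (convergence speed toward a single optimum), while multimodal ones probe exploration (the ability to escape local optima), which is why suites mix both.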


[Tables 4–6: benchmark function definitions (Function, Dim, Range) for the unimodal, multimodal, and fixed-dimension multimodal test sets.]

5. DOX for Engineering Problems

Here, DOX was tested on a small engineering design problem, pressure vessel design. This kind of problem involves several design constraints that the optimization must handle.

5.1. Pressure Vessel Design

This problem, proposed by Kannan and Kramer [57], has been used by many researchers to validate their solutions. The goal is to minimize the total cost, including the cost of material, forming, and welding, of a cylindrical vessel capped at both ends by hemispherical heads. The design variables are as follows: (1) p1: thickness of the shell; (2) p2: thickness of the head; (3) p3: inner radius; (4) p4: length of the cylindrical section, not considering the head.

The mathematical formulation of this problem considers the design vector p = (p1, p2, p3, p4): the material, forming, and welding cost function is minimized subject to the design constraints, with each variable restricted to its allowed range.

This problem has been popular among researchers in various studies.
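Since the formulation's equations are not reproduced above, the sketch below uses the pressure-vessel design formulation as it is widely stated in the optimization literature following Kannan and Kramer's problem; treat the exact coefficients as an assumption drawn from that literature rather than a transcription from this paper.

```python
import math

def vessel_cost(p1, p2, p3, p4):
    """Total cost of material, forming, and welding (standard formulation)."""
    return (0.6224 * p1 * p3 * p4
            + 1.7781 * p2 * p3 ** 2
            + 3.1661 * p1 ** 2 * p4
            + 19.84 * p1 ** 2 * p3)

def vessel_constraints(p1, p2, p3, p4):
    """Design constraints; each value must be <= 0 for a feasible vessel."""
    return [
        -p1 + 0.0193 * p3,    # shell thickness vs. inner radius
        -p2 + 0.00954 * p3,   # head thickness vs. inner radius
        -math.pi * p3 ** 2 * p4 - (4.0 / 3.0) * math.pi * p3 ** 3 + 1296000.0,  # minimum volume
        p4 - 240.0,           # maximum length of the cylindrical section
    ]
```

A candidate such as (p1, p2, p3, p4) = (1.0, 0.5, 42.0, 200.0) satisfies all four constraints, and its cost can then be compared across optimizers in the same way as in Table 4.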

Table 4 compares the best optimal solution of DOX with other documented approaches, such as GWO and PSO. According to this table, DOX finds an optimal design at minimal cost; compared with the historical results on the pressure vessel design problem, DOX performs better than the other algorithms in terms of the best optimal solution.


Algorithm | Optimum

DOX
GWO
PSO

The DOX algorithm obtained a near-optimal solution in the initial iterations and achieved better results than the other optimization methods (GWO and PSO) on the pressure vessel problem, as the comparison of the best solutions obtained by these algorithms in Table 4 shows.

6. Statistical Testing

An ANOVA test was performed to determine whether the outcomes obtained from the proposed algorithm differ statistically significantly from the findings of the other algorithms. We used a sample size of 30 and a 95% confidence level for the ANOVA test. The results for the benchmark functions are shown in Table 8. The findings demonstrate that DOX is statistically significantly different from the rival algorithms.
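For reference, the one-way ANOVA F statistic behind such comparisons can be computed from scratch as below (standard library only); the sample values in the usage note are made up for illustration and are not the paper's recorded runs.

```python
import statistics

def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: ratio of between-group variance to
    within-group variance. A large F (hence a tiny p value) indicates
    that the group means differ significantly."""
    k = len(groups)                           # number of algorithms compared
    n = sum(len(g) for g in groups)           # total number of samples
    grand = sum(sum(g) for g in groups) / n   # grand mean over all samples
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.fmean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

For example, feeding it three lists of 30 best-fitness values (one per algorithm, as in Table 8's setup) yields the F statistic, from which the p value follows from the F distribution with (k - 1, n - k) degrees of freedom.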


F | P value | DOX | GWO | PSO

F1 | 1.91E–64 | GWO, PSO | DOX, PSO | DOX, GWO
F2 | 3.62E–65 | GWO, PSO | DOX, PSO | DOX, GWO
F3 | 3.57E–36 | GWO, PSO | DOX, PSO | DOX, GWO
F4 | 2.87E–22 | GWO, PSO | DOX, PSO | DOX, GWO
F5 | 2.15E–24 | GWO, PSO | DOX, PSO | DOX, GWO
F6 | 1.94E–54 | GWO, PSO | DOX, PSO | DOX, GWO
F7 | 1.56E–13 | GWO, PSO | DOX, PSO | DOX, GWO
F8 | 1.59E–43 | GWO, PSO | DOX, PSO | DOX, GWO
F9 | 1.88E–87 | GWO, PSO | DOX, PSO | DOX, GWO
F10 | 1.29E–34 | GWO, PSO | DOX, PSO | DOX, GWO
F11 | 1.87E–35 | GWO, PSO | DOX, PSO | DOX, GWO
F12 | 2.36E–23 | GWO, PSO | DOX, PSO | DOX, GWO
F13 | 1.91E–98 | GWO, PSO | DOX, PSO | DOX, GWO
F14 | 6.35E–36 | GWO, PSO | DOX, PSO | DOX, GWO
F15 | 1.14E–07 | GWO, PSO | DOX, PSO | DOX, GWO
F16 | 2.63E–35 | GWO, PSO | DOX, PSO | DOX, GWO
F17 | 1.83E–58 | GWO, PSO | DOX, PSO | DOX, GWO
F18 | 4.61E–36 | GWO, PSO | DOX, PSO | DOX, GWO
F19 | 9.54E–15 | GWO, PSO | DOX, PSO | DOX, GWO
F20 | 1.99E–73 | GWO, PSO | DOX, PSO | DOX, GWO
F21 | 2.54E–62 | GWO, PSO | DOX, PSO | DOX, GWO
F22 | 8.11E–06 | GWO, PSO | DOX, PSO | DOX, GWO
F23 | 2.70E–18 | GWO, PSO | DOX, PSO | DOX, GWO
F24 | 5.55E–36 | GWO, PSO | DOX, PSO | DOX, GWO
F25 | 1.22E–87 | GWO, PSO | DOX, PSO | DOX, GWO
F26 | 7.66E–45 | GWO, PSO | DOX, PSO | DOX, GWO
F27 | 2.48E–69 | GWO, PSO | DOX, PSO | DOX, GWO
F28 | 2.31E–43 | GWO, PSO | DOX, PSO | DOX, GWO
F29 | 1.14E–47 | GWO, PSO | DOX, PSO | DOX, GWO

7. Conclusion and Future Scope

Compared with other popular metaheuristic algorithms such as PSO and GWO, DOX provides competitive outcomes, as presented in the results. DOX was analyzed for the exploration and exploitation activity of its agents using twenty-three test functions. The results of the comparative analysis between the proposed DOX and the other optimization algorithms demonstrate that the suggested approach can cope with different kinds of constraints and provides stronger alternatives than the other optimizers. The suggested methodology is inspired by real-life problems and requires less computational and mathematical effort to find the best available optimum.

Several directions remain for future studies. DOX may be applied to various technological problems; multiobjective problems could be addressed by a future MODOX variant; and a binary DOX could be another extension of this algorithm.

Appendix

A. Benchmark Functions

A.1. Unimodal Functions

The list of the unimodal test functions (F1–F7) is given in Table 4.

A.2. Multimodal Functions

The list of the multimodal test functions (F8–F13) is given in Table 5.

A.3. Fixed-Dimension Multimodal Functions

The list of the fixed-dimension multimodal test functions (F14–F23) is given in Table 6.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. H. Ali, W. Shahzad, and F. A. Khan, “Energy-efficient clustering in mobile ad-hoc networks using multi-objective particle swarm optimization,” Applied Soft Computing, vol. 12, no. 7, pp. 1913–1928, 2012. View at: Publisher Site | Google Scholar
  2. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: nsga-ii,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002. View at: Publisher Site | Google Scholar
  3. G. Qi, H. Wang, M. Haner, C. Weng, S. Chen, and Z. Zhu, “Convolutional neural network based detection and judgement of environmental obstacle in vehicle operation,” CAAI Transactions on Intelligence Technology, vol. 4, no. 2, pp. 80–91, 2019. View at: Publisher Site | Google Scholar
  4. C. Zhu, W. Yan, X. Cai, S. Liu, T. H. Li, and G. Li, “Neural saliency algorithm guide bi‐directional visual perception style transfer,” CAAI Transactions on Intelligence Technology, vol. 5, no. 1, pp. 1–8, 2020. View at: Publisher Site | Google Scholar
  5. T. Wiens, “Engine speed reduction for hydraulic machinery using predictive algorithms,” International Journal of Hydromechatronics, vol. 2, no. 1, pp. 16–31, 2019. View at: Publisher Site | Google Scholar
  6. X. Xin Yao, Y. Yong Liu, and G. Guangming Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999. View at: Publisher Site | Google Scholar
  7. D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley Longman Publishing Co., Inc., Boston, Massachusetts, USA, 1989.
  8. X. Xue, J. Lu, and J. Chen, “Using NSGA‐III for optimising biomedical ontology alignment,” CAAI Transactions on Intelligence Technology, vol. 4, no. 3, pp. 135–141, 2019. View at: Publisher Site | Google Scholar
  9. W. Banzhaf, F. D. Francone, R. E. Keller, and P. Nordin, Genetic Programming: An Introduction: On the Automatic Evolution of Computer Programs and its Applications, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998.
  10. C. M. Fonseca and P. J. Fleming, “Genetic algorithms for multiobjective optimization: formulationdiscussion and generalization,” in Proceedings of the 5th International Conference on Genetic Algorithms, pp. 416–423, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, June 1993. View at: Google Scholar
  11. R. Storn and K. Price, “Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997. View at: Publisher Site | Google Scholar
  12. F. Xue, A. C. Sanderson, and R. J. Graves, “Multi-objective differential evolution - algorithm, convergence analysis, and applications,” IEEE Congress on Evolutionary Computation, vol. 1, pp. 743–750, 2005. View at: Google Scholar
  13. H.-G. Beyer and H.-P. Schwefel, “Evolution strategies –a comprehensive introduction,” Natural Computing, vol. 1, no. 1, pp. 3–52, 2002. View at: Publisher Site | Google Scholar
  14. X. Hu, C. A. C. Coello, and Z. Huang, “A new multi-objective evolutionary algorithm: neighbourhood exploring evolution strategy,” Engineering Optimization, vol. 37, no. 4, pp. 351–379, 2005. View at: Publisher Site | Google Scholar
  15. T. Jansen and C. Zarges, “Artificial immune systems for optimisation,” in Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computation Conference Companion - GECCO Companion '12, pp. 1059–1078, Association for Computing Machinery, New York, NY, USA, July 2012. View at: Publisher Site | Google Scholar
  16. F. Campelo, F. G. Guimarães, and H. Igarashi, “Overview of artificial immune systems for multi-objective optimization,” in Proceedings of the 4th International Conference on Evolutionary Multi-Criterion Optimization, EMO’07, pp. 937–951, Springer-Verlag, Berlin, Heidelberg, March 2007. View at: Google Scholar
  17. K. M. Passino, “Bacterial foraging optimization,” International Journal of Swarm Intelligence Research, vol. 1, no. 1, pp. 1–16, 2010. View at: Publisher Site | Google Scholar
  18. “Multi-swarm cooperative multi-objective bacterial foraging optimisation,” International Journal of Bio-Inspired Computation, vol. 13, no. 1, pp. 21–31, 2019. View at: Google Scholar
  19. Z. Chelly and Z. Elouedi, “A survey of the dendritic cell algorithm,” Knowledge and Information Systems, vol. 48, no. 3, pp. 505–535, 2016. View at: Publisher Site | Google Scholar
  20. A. Gandomi and A. Alavi, Krill Herd Algorithm.
  21. P. J. M. Laarhoven and E. H. L. Aarts, Simulated Annealing: Theory and Applications, Kluwer Academic Publishers, New York, NY, USA, 1987.
  22. N. Krasnogor, Memetic Algorithms, Springer Berlin Heidelberg, Berlin, Heidelberg, 2012. View at: Publisher Site
  23. M. Eusuff, K. Lansey, and F. Pasha, “Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization,” Engineering Optimization, vol. 38, no. 2, pp. 129–154, 2006. View at: Publisher Site | Google Scholar
  24. M. Dorigo, M. Birattari, and T. Stützle, “Ant colony optimization,” IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006. View at: Publisher Site | Google Scholar
  25. J. Kennedy and R. Eberhart, “Particle swarm optimization,” Proceedings of ICNN'95 - International Conference on Neural Networks, vol. 4, pp. 1942–1948, 1995. View at: Publisher Site | Google Scholar
  26. D. Karaboga and B. Basturk, “Artificial bee colony (abc) optimization algorithm for solving constrained optimization problems,” in Foundations of Fuzzy Logic and Soft Computing, P. Melin, O. Castillo, L. T. Aguilar, J. Kacprzyk, and W. Pedrycz, Eds., pp. 789–798, Springer Berlin Heidelberg, Berlin, Heidelberg, 2007. View at: Google Scholar
  27. M. Neshat, A. Adeli, G. Sepidnam, M. Sargolzaei, and A. Najaran Toosi, “A review of artificial fish swarm optimization methods and applications,” International Journal on Smart Sensing and Intelligent Systems, vol. 5, no. 1, pp. 108–148, 2012. View at: Publisher Site | Google Scholar
  28. G. wolf optimizer, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46–61, 2014, https://doi.org/10.1016/j.advengsoft.2013.12.007. View at: Google Scholar
  29. S. Mirjalili, “Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems,” Neural Computing and Applications, vol. 27, no. 4, pp. 1053–1073, 2016. View at: Publisher Site | Google Scholar
  30. J. Ma, H. Y. Chen, R. Su, Y. Wang, S. Zhang, and S. Shan, “Improved firefly algorithm and its application,” in Proceedings of the 4th International Conference on Crowd Science and Engineering, ICCSE’19, pp. 180–185, Association for Computing Machinery, New York, NY, USA, October 2019. View at: Publisher Site | Google Scholar
  31. X.-S. Yang, “Firefly algorithms for multimodal optimization,” 2010, https://arxiv.org/abs/1003.1466. View at: Google Scholar
  32. S. Mirjalili and A. Lewis, “The whale optimization algorithm,” Advances in Engineering Software, vol. 95, no. C, pp. 51–67, 2016. View at: Publisher Site | Google Scholar
  33. E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, “Gsa: a gravitational search algorithm,” Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009. View at: Publisher Site | Google Scholar
  34. S. Mirjalili, S. M. Mirjalili, and X.-S. Yang, “Binary bat algorithm,” Neural Computing and Applications, vol. 25, no. 3-4, pp. 663–681, 2014. View at: Publisher Site | Google Scholar
  35. Y. C. Shih, “A cuckoo search algorithm: effects of coevolution and application in the development of distributed layouts,” Journal of Algorithms & Computational Technology, vol. 13, Article ID 1748302619889523, 2019. View at: Publisher Site | Google Scholar
  36. S.-C. Chu, P.-W. Tsai, and J.-S. Pan, “Cat swarm optimization,” in Proceedings of the 9th Pacific Rim International Conference on Artificial Intelligence, PRICAI’06, pp. 854–858, August 2006. View at: Publisher Site | Google Scholar
  37. V. Kumar, J. K. Chhabra, and D. Kumar, “Parameter adaptive harmony search algorithm for unimodal and multimodal optimization problems,” Journal of Computational Science, vol. 5, no. 2, pp. 144–155, 2014. View at: Publisher Site | Google Scholar
  38. L. Wang, Y. Mao, Q. Niu, and M. Fei, “A multi-objective binary harmony search algorithm,” in Lecture Notes in Computer Science Advances in Swarm Intelligence, Y. Tan, Y. Shi, Y. Chai, and G. Wang, Eds., pp. 74–81, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011. View at: Publisher Site | Google Scholar
  39. Y. Fang, G. Liu, Y. He, and Y. Qiu, “Tabu search algorithm based on insertion method,” in Proceedings of the International Conference on Neural Networks and Signal Processing, pp. 420–423, Nanjing, China, December 2003. View at: Google Scholar
  40. S. Carcangiu, A. Fanni, and A. Montisci, “Multiobjective tabu search algorithms for optimal design of electromagnetic devices,” IEEE Transactions on Magnetics, vol. 44, no. 6, pp. 970–973, 2008. View at: Publisher Site | Google Scholar
  41. V. Kumar, J. K. Chhabra, and D. Kumar, “Parameter adaptive harmony search algorithm for unimodal and multimodal optimization problems,” Journal of Computational Science, vol. 5, no. 2, pp. 144–155, 2014. View at: Publisher Site | Google Scholar
  42. S. He, Q. H. Wu, and J. R. Saunders, “Group search optimizer: an optimization algorithm inspired by animal searching behavior,” IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 973–990, 2009. View at: Publisher Site | Google Scholar
  43. N. Ghorbani and E. Babaei, “Exchange market algorithm,” Applied Soft Computing, vol. 19, pp. 177–187, 2014. View at: Publisher Site | Google Scholar
  44. E. Atashpaz-Gargari and C. Lucas, “Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition,” in Proceedings of the 2007 IEEE Congress on Evolutionary Computation, pp. 4661–4667, Singapore, September 2007. View at: Google Scholar
  45. N. Moosavian, “Soccer league competition algorithm for solving knapsack problems,” Swarm and Evolutionary Computation, vol. 20, pp. 14–22, 2015. View at: Publisher Site | Google Scholar
  46. A. Husseinzadeh Kashan, “League championship algorithm (lca): an algorithm for global optimization inspired by sport championships,” Applied Soft Computing, vol. 16, pp. 171–200, 2014. View at: Publisher Site | Google Scholar
  47. F. Ramezani and S. Lotfi, “Social-based algorithm (sba),” Applied Soft Computing, vol. 13, no. 5, pp. 2837–2856, 2013. View at: Publisher Site | Google Scholar
  48. Y. Tan, C. Yu, S. Zheng, and K. Ding, “Introduction to fireworks algorithm,” International Journal of Swarm Intelligence Research, vol. 4, no. 4, pp. 39–70, 2013. View at: Publisher Site | Google Scholar
  49. A. Kaveh and V. Mahdavi Dahoei, Colliding Bodies Optimization, 2015.
  50. A. H. Gandomi, “Interior search algorithm (ISA): a novel approach for global optimization,” ISA Transactions, vol. 53, no. 4, pp. 1168–1183, 2014. View at: Google Scholar
  51. W. Zhao, L. Wang, and Z. Zhang, “Artificial ecosystem-based optimization: a novel nature-inspired meta-heuristic algorithm,” Neural Computing and Applications, vol. 32, pp. 1–43, 2020. View at: Publisher Site | Google Scholar
  52. L. Benasla, A. Belmadani, and R. Mostefa, “Spiral optimization algorithm for solving combined economic and emission dispatch,” International Journal of Electrical Power and Energy Systems, vol. 62, pp. 163–174, 2014. View at: Publisher Site | Google Scholar
  53. E. Bogar and S. Beyhan, “Adolescent identity search algorithm (aisa): a novel metaheuristic approach for solving optimization problems,” Applied Soft Computing, vol. 95, Article ID 106503, 2020. View at: Publisher Site | Google Scholar
  54. R. Wang, H. Yu, G. Wang, G. Zhang, and W. Wang, “Study on the dynamic and static characteristics of gas static thrust bearing with micro-hole restrictors,” International Journal of Hydromechatronics, vol. 2, p. 189, 2019. View at: Publisher Site | Google Scholar
  55. S. Osterland and J. Weber, “Analytical analysis of single-stage pressure relief valves,” International Journal of Hydromechatronics, vol. 2, p. 32, 2019. View at: Publisher Site | Google Scholar
  56. M. Molga and C. Smutnicki, Test Functions for Optimization Needs, 2005.
  57. B. Kannan and S. Kramer, “An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design,” Journal of Mechanical Design, vol. 116, pp. 405–411, 1994. View at: Google Scholar

Copyright © 2021 Amit Kumar Bairwa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
