MetaHeuristic Techniques for Solving Computational Engineering Problems
Research Article | Open Access
Amit Kumar Bairwa, Sandeep Joshi, Dilbag Singh, "Dingo Optimizer: A Nature-Inspired Metaheuristic Approach for Engineering Problems", Mathematical Problems in Engineering, vol. 2021, Article ID 2571863, 12 pages, 2021. https://doi.org/10.1155/2021/2571863
Dingo Optimizer: A Nature-Inspired Metaheuristic Approach for Engineering Problems
Abstract
Optimization is a buzzword whenever researchers think about engineering problems. This paper presents a new metaheuristic named the dingo optimizer (DOX), which is motivated by the behavior of the dingo (Canis familiaris dingo). The overall concept is to develop a method that models the collaborative and social behavior of dingoes. The developed algorithm is based on the hunting behavior of dingoes, which includes exploration, encircling, and exploitation. All of these prey-hunting steps are modeled mathematically and implemented in a simulator to test the performance of the proposed algorithm. Comparative analyses are drawn between the proposed approach and the grey wolf optimizer (GWO) and particle swarm optimizer (PSO). Some well-known test functions are used for the comparative study. The results reveal that the dingo optimizer performs significantly better than the other nature-inspired algorithms.
1. Introduction
The challenges of the modern world involve various goals that must be optimized at the same time. Optimization is a process that seeks one or more solutions to a problem that lead to the extreme values of one or more objectives [1]. Optimization can, therefore, be performed with respect to single or multiple objective functions [2–4].
Keeping this in mind, there is a need for new metaheuristic-based solutions that reduce the burden of model design. The objective of this paper is to develop a nature-inspired algorithm called the dingo optimizer, abbreviated as DOX. It is based on the dingo's social hierarchy and prey-hunting behavior.
Metaheuristic algorithms are remarkably common because of their flexibility, simplicity, low mathematical complexity, and ability to avoid local optima. Flexibility means that such algorithms can be applied to a wide variety of engineering problems, and they provide satisfactory results for many complex problems [5]. They are simple because they are inspired by nature, such as animal behavior in accomplishing a particular task, physical phenomena, and other evolutionary behavior.
One of the main reasons to use metaheuristics in real-life problems is that almost all such methods start from random candidate solutions and do not require derivative information about the objective. Metaheuristic algorithms are also very powerful at avoiding local optima compared with traditional optimization algorithms. Real-world search spaces are complicated and usually contain many local optima, which is why metaheuristic algorithms are well suited to such challenging problems.
Many metaheuristic algorithms are proposed every year, and they show promising results on engineering problems. However, the nature and complexity of new applications keep introducing new challenges, and no single algorithm is guaranteed to solve every problem. This motivated us to develop a new metaheuristic algorithm, the dingo optimizer (DOX), which is mathematically modeled, inspired by the social hierarchy of dingoes, and aimed at solving real-time engineering problems.
This paper is arranged as follows. Section 2 provides a literature survey. The proposed DOX, based on the pack-hunting behavior of dingoes, is presented in Section 3. In Section 4, the performance of DOX is tested with various benchmark functions, and the experimental results are compared with other state-of-the-art algorithms. In Section 5, the conclusion and future research directions are discussed.
2. Literature Review
In the past few years, real-life problems have grown in complexity, motivating researchers to develop better metaheuristic techniques built on the concepts of randomization and local search. Such approaches have been used to determine feasible, high-quality solutions to real-life engineering problems, and they are widely accepted because of their simplicity and reliability relative to other existing methods.
Metaheuristic algorithms can be categorized into evolutionary-based [13], physical [54], bio-inspired, swarm-based [25], and other methods. Evolutionary algorithms give approximately close solutions for all types of optimization models since these approaches do not depend on basic fitness assumptions [55]. Bio-inspired algorithms, motivated by biological mechanisms such as selection, replication, mutation, and recombination, are also popular for solving various complicated problems. Physical algorithms are related to evolutionary algorithms through the principle of natural selection, in which a species attempts to survive in different environments based on a fitness test; the most common parameters, such as inertia force, electromagnetic force, and gravitational force, help the search agents explore the space. Swarm-based algorithms are inspired by the self-organized nature of social creatures, which show collective, decentralized behavior; the collective wisdom emerges from the interaction of swarm members with each other and their surroundings. Many popular swarm intelligence techniques are quite similar to nature-inspired algorithms. Generally, swarm-based algorithms are more popular as they have fewer operators (i.e., discovery, encircling, and exploitation). Table 1 lists these algorithms, divided into single-objective and multiobjective depending on the number of objective functions.

3. Dingo Optimizer (DOX)
3.1. Motivation
Nature has always been the most powerful teacher. Every species on Earth has its own unique survival mechanisms, and social relationships, which are dynamic, are one of them. Based on general studies of animal social behavior, it can be segmented into categories: the first depends on environmental factors, i.e., nearby resource availability and challenges created by other species, and another depends on individual behavior or quality.
Keeping this in mind, the dingo, which strictly follows social relationships, is the motivation for our work. The dingo is a type of dog; its scientific name was recently changed from Canis familiaris (dog) to Canis lupus (wolf) dingo. Dingoes are complicated, intelligent, and highly social animals, and they are skillful hunters living in packs of average size 12–15.
The social hierarchy is highly structured. The alpha, which may be male or female, is at the top of the hierarchy and can be identified by its responsibilities, such as making decisions about sleeping places and hunting. The most dominant and strongest member of the pack is called the alpha and is considered the leader of the pack of dingoes; this reflects that discipline and organization are more important than power. The decisions taken by the alpha are dictated to the pack, and in general, all members of the pack acknowledge the alpha by holding their tails down.
Beta dingoes are at the second level of the hierarchy and act as intermediaries between the alpha and the rest of the pack for related tasks. The beta plays the important role of adviser to the alpha and maintains discipline in the whole pack, confirming the alpha's orders in the group and communicating back to the alpha. If the alpha does not survive for any reason, command is handed over to the beta, which then controls the lower-level dingoes.
If a dingo is neither an alpha nor a beta, it is considered a subordinate; subordinates follow the alphas and betas. Scouts are responsible for observing the territory and alerting the group to any threat, while hunters support the alphas and betas in catching prey and providing food for the group.
Studies show that dingoes have an accurate sense of communication: they communicate with each other by sensing different sound intensities in the air. In DOX, a dingo creates sound feedback so that dingoes exchange their knowledge with others to build common community information. The amplitude of the vibration is modified by the strength of the individual as the dingo moves from its previous location to a new one.
Group hunting is an interesting social behavior of dingoes. The hunting strategy is categorized into three phases as follows: (i) chasing and approaching, (ii) encircling and harassing, and (iii) attacking.
These steps are shown in Figure 1. The hunting behavior and social arrangements of dingoes are modeled mathematically to develop DOX and perform nature-inspired optimization.
Exploration and exploitation are the two main components of DOX. In the exploration phase, the algorithm visits several promising solutions in the search space, while exploitation searches for the optimal solution within a promising region. To find the best solution for any real-life problem, both components must be fine-tuned; however, balancing them is a challenge because of the algorithm's stochastic nature. This fact also motivates the design of hybrid metaheuristic algorithms for real-life engineering problems.
3.2. Mathematical Models
In this section, searching, encircling, and attacking prey are modeled mathematically to perform the dingo optimization.
3.2.1. Encircling
Dingoes are capable enough to locate prey. After tracing the location, the pack, led by the alpha, encircles the prey. To model the dingoes' social hierarchy, it is assumed that the current best agent is the target prey, which is close to the optimum, since the search space is not known a priori. Meanwhile, the other search agents keep updating their positions around the best agent. This behavior of the dingoes is modeled by equations (1)–(5), and a detailed description of the nomenclature used in the equations is provided in Table 2.

The positions of neighboring dingoes are illustrated in Figure 2 using a two-dimensional position vector. According to the position of the prey, a dingo at position (P, Q) can update its location. All possible locations around the best agent, relative to the current location, are obtained by changing the values of the random vectors a1 and a2; with suitable settings of a1 and a2, a dingo can reach any of the marked positions. The same idea extends to three-dimensional space, as in Figure 3, which clearly illustrates how the random vectors a1 and a2 enable dingoes to move to any place between the marked points. Equations (1) and (2) let dingoes change their locations to random positions inside the search area around the prey. The same equations apply to a search space with N dimensions, in which case the dingoes move in hypercubes around the best result obtained so far.
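Since equations (1)–(5) appear only as images in the source, the encircling step can be sketched in code. The Python fragment below is an illustrative assumption modeled on the surrounding description (a1 drawn from [−3b, 3b], a2 from [0, 3]); it is not the paper's exact formulation.

```python
import random

def encircle(position, prey, b=3.0):
    """One encircling move around the current best agent (the prey estimate).

    Illustrative sketch of equations (1)-(5): a1 and a2 are the random
    vectors that scatter the dingo to a random spot around the prey.
    """
    new_position = []
    for x, p in zip(position, prey):
        a1 = random.uniform(-3.0 * b, 3.0 * b)  # step coefficient (shrinks as b decays)
        a2 = random.uniform(0.0, 3.0)           # random weight applied to the prey
        distance = abs(a2 * p - x)              # perturbed distance to the prey
        new_position.append(p - a1 * distance)  # land at a random point around the prey
    return new_position
```

With b = 3 the dingo can land far from the prey (exploration); as b shrinks toward 0, the moves contract onto the prey (exploitation).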
3.2.2. Hunting
In the search space, however, agents normally have no knowledge of the position of the prey (the optimum). To design the dingoes' hunting plan mathematically, we assume that all pack members, including the alpha, beta, and others, have good knowledge about the potential location of the prey. The alpha dingo always leads the hunt, although the beta and other dingoes might sometimes also participate. Hence, we keep the first two best values achieved so far, and the other dingoes update their positions according to the locations of these best search agents. Equations (6)–(14) model this behavior, and a detailed description of the nomenclature used in the equations is provided in Table 3.

The following equations are used to calculate the intensity of each dingo:
The position update in the 2D search space is described in Figure 4, where the updated positions of the alpha, beta, and other dingoes can easily be visualized. Dingoes (alpha, beta, and others) update their positions randomly and estimate the position of the prey in the search space.
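As a sketch of this hunting rule (equations (6)–(14) are rendered as images in the source), each of the two best agents can propose an encircling-style candidate, with the dingo settling on their mean. The combination rule below is an assumption for illustration, not the paper's exact equations.

```python
import random

def hunt(position, alpha, beta, b=3.0):
    """Position update driven by the two best agents kept so far.

    Sketch only: each leader (alpha, beta) proposes a candidate position
    via an encircling-style move, and the dingo takes their average.
    """
    def candidate(leader):
        cand = []
        for x, p in zip(position, leader):
            a1 = random.uniform(-3.0 * b, 3.0 * b)  # step coefficient
            a2 = random.uniform(0.0, 3.0)           # random prey weight
            cand.append(p - a1 * abs(a2 * p - x))
        return cand

    c_alpha, c_beta = candidate(alpha), candidate(beta)
    return [(ca + cb) / 2.0 for ca, cb in zip(c_alpha, c_beta)]
```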
3.2.3. Attacking Prey
If there is no position update, the dingo has finished the hunt by attacking the prey. To formulate this strategy mathematically, the value of the attack coefficient is decreased linearly; note that its alteration range also decreases. It is a random value in the [−3b, 3b] interval, where b is reduced linearly from 3 to 0 over the iterations. When its random values are in [−1, 1], a search agent's next position may be anywhere between its current position and the prey's position.
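The decay schedule just described can be written down directly; the linear form below is an assumption consistent with "b is reduced from 3 to 0 during iterations".

```python
import random

def attack_coefficient(iteration, max_iter):
    """Return (b, a): b decays linearly from 3 to 0 over the run, and a is a
    random value in [-3b, 3b]. When |a| < 1 the agent is pulled between its
    current position and the prey (attack); larger |a| keeps it exploring."""
    b = 3.0 * (1.0 - iteration / max_iter)  # linear decay 3 -> 0
    a = random.uniform(-3.0 * b, 3.0 * b)
    return b, a
```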
The proposed encircling method does reveal exploration to some extent; however, to accentuate exploration, DOX requires more operators. Figure 4 illustrates how the attack coefficient drives the dingo to strike the prey. DOX assists its search agents in changing their locations based on the positions of the alpha, the beta, the other dingoes, and the targeted prey. Even with these operators, DOX may still stagnate in local solutions.
3.2.4. Searching
Dingoes mostly hunt for prey according to the pack's location, and they always travel forward to hunt and strike. Accordingly, random coefficient values are used: if the value is less than −1, the prey is moving away from the search agent, and if it is greater than 1, the pack is approaching the prey. This mechanism helps DOX scan for targets globally, and Figure 4 reflects that magnitudes greater than 1 let dingoes diverge from the prey. Another component of DOX that promotes exploration is the random weight vector in equation (3), which can produce any random number in [0, 3] as an arbitrary prey weight. This stochastic weighting, emphasizing values below 1 as well as above 1, models the effect of the distance formulated in equation (1).
This is good for searching and for avoiding nearby optima. Depending on a dingo's location, the prey's weight is chosen arbitrarily, making the dingo approach the prey either closely or from farther away. This weight intentionally provides stochastic exploration values from the first to the final iterations, which is effective in protecting the solution from local optima. Eventually, DOX terminates whenever it meets the termination criteria.
3.3. Optimization Algorithm
The DOX pseudocode in Algorithm 1 demonstrates how it solves optimization problems. Here, the stopping criterion is the maximum number of iterations. The dingo optimization process is discussed in the following steps.
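Putting the pieces together, a minimal Python sketch of the loop in Algorithm 1 looks as follows (the paper's experiments used MATLAB). The update rules are paraphrased GWO-style moves under the assumptions stated above, not the exact equations (1)–(14).

```python
import random

def dox(objective, dim, bounds, n_agents=30, max_iter=200, seed=0):
    """Minimal sketch of the DOX loop in Algorithm 1 (assumed update rules).

    The two best agents so far act as alpha and beta; every dingo moves
    toward the average of encircling-style candidates around them, and a
    greedy replacement keeps each agent's best position.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    scores = [objective(p) for p in pack]

    for it in range(max_iter):
        order = sorted(range(n_agents), key=lambda i: scores[i])
        alpha, beta = pack[order[0]], pack[order[1]]
        b = 3.0 * (1.0 - it / max_iter)  # attack parameter decays 3 -> 0

        for i in range(n_agents):
            new = []
            for d in range(dim):
                a1 = rng.uniform(-3.0 * b, 3.0 * b)  # search/attack switch
                a2 = rng.uniform(0.0, 3.0)           # random prey weight
                ca = alpha[d] - a1 * abs(a2 * alpha[d] - pack[i][d])
                cb = beta[d] - a1 * abs(a2 * beta[d] - pack[i][d])
                new.append(min(hi, max(lo, (ca + cb) / 2.0)))
            s = objective(new)
            if s < scores[i]:  # keep the move only if it improves the agent
                pack[i], scores[i] = new, s

    best = min(range(n_agents), key=lambda i: scores[i])
    return pack[best], scores[best]

def sphere(x):  # classical F1 benchmark: global minimum 0 at the origin
    return sum(v * v for v in x)

best_pos, best_val = dox(sphere, dim=5, bounds=(-100.0, 100.0))
```

On the sphere function this sketch converges toward the origin; the published experiments used 30 agents and 500 iterations per run.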

4. Results and Discussion
4.1. Experimental Setup
The overall simulation is done in MATLAB, taking into account the parameters explained in this setup. The proposed DOX is run on Windows 10 with 8 GB of RAM and an Intel 2.50 GHz CPU. To generate the solutions for each predefined benchmark function, DOX uses 25 individual runs, each of 500 iterations.
4.2. Results
DOX has been tested on 23 well-known test functions [56], classical functions used by various research groups; they were chosen to align our experiments with the current metaheuristics literature. These benchmark functions are defined in appendix Tables 4–6, where Dim indicates the function dimension, Range is the search-space boundary, and the optimum value is also listed. Figure 5 compares the convergence curves of DOX on some of the benchmark problems. The benchmark functions are typical minimization functions and can be segmented into four different categories. The results of the proposed dingo algorithm are shown as follows.
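The full suite is defined in Tables 4–6; as a quick illustration, here are two of the classical functions commonly included in it, the unimodal sphere function (F1) and the multimodal Rastrigin function (often numbered F9), written in Python (the paper's experiments were run in MATLAB).

```python
import math

def sphere(x):
    """F1 (sphere): unimodal, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Rastrigin: multimodal with many local minima, global minimum 0 at the origin."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) for v in x)
```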



5. DOX for Engineering Problems
Here, DOX is tested on a small engineering design problem, the pressure vessel, which has several design constraints to handle during optimization.
5.1. Pressure Vessel Design
This problem was proposed by Kannan and Kramer [57] and has been used by many researchers to validate their solutions. The goal is to minimize the total cost, including the cost of material, forming, and welding, of a cylindrical vessel capped at both ends by hemispherical heads. The four design variables are as follows:
(1) p1: thickness of the shell
(2) p2: thickness of the head
(3) p3: inner radius
(4) p4: length of the cylindrical section, not considering the head
The mathematical formulation of this problem is formulated as follows.
Consider the design vector p = (p1, p2, p3, p4).
Minimize the following function:
Subject to
Variable range is as follows:
This problem has been popular among researchers in various studies.
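Since the objective, constraints, and variable ranges above are rendered as images in the source, the standard Kannan-and-Kramer formulation [57] is restated here in Python as a reference sketch; the coefficient values follow the common benchmark statement and should be checked against the paper's own equations.

```python
import math

def pressure_vessel_cost(p):
    """Total cost (material, forming, welding) of the cylindrical vessel."""
    p1, p2, p3, p4 = p  # shell thickness, head thickness, inner radius, length
    return (0.6224 * p1 * p3 * p4
            + 1.7781 * p2 * p3 ** 2
            + 3.1661 * p1 ** 2 * p4
            + 19.84 * p1 ** 2 * p3)

def pressure_vessel_constraints(p):
    """Constraint values g_i(p); a design is feasible when every g_i <= 0."""
    p1, p2, p3, p4 = p
    return [
        -p1 + 0.0193 * p3,                    # minimum shell thickness
        -p2 + 0.00954 * p3,                   # minimum head thickness
        -math.pi * p3 ** 2 * p4
        - (4.0 / 3.0) * math.pi * p3 ** 3
        + 1296000.0,                          # minimum enclosed volume
        p4 - 240.0,                           # maximum length
    ]
```

For example, the frequently reported near-optimal design (0.8125, 0.4375, 42.0984, 176.6366) yields a cost of roughly 6060 under this formulation.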
Table 4 compares the best optimal solutions of DOX and other documented approaches, such as GWO and PSO, on the pressure vessel design problem. According to this table, DOX finds an optimum design at minimal cost and performs better than the other algorithms in terms of the best optimal solution.

The DOX algorithm obtained the near optimal solution in the initial steps of iterations and achieved better results than other optimization methods for pressure vessel problem.
This problem has also been tested with other optimization methods such as GWO and PSO, and the comparison of the best solutions obtained by these algorithms is presented in Table 4.
6. Statistical Testing
An ANOVA test was performed to check whether the outcomes of the proposed algorithm differ statistically significantly from those of the other algorithms. We used a sample size of 30 and a 95% confidence level. The results of the ANOVA test on the benchmark functions are shown in Table 8 and demonstrate that DOX is statistically significant relative to the rival algorithms.
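For reference, the one-way ANOVA F statistic used in this test can be computed directly; the pure-Python sketch below (with fabricated illustrative samples, not the paper's data) shows the computation.

```python
import random

def anova_f(groups):
    """One-way ANOVA F statistic for k groups (between-group variance over
    within-group variance); a statistics package is needed for the p-value."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative only: 30 fabricated best-score samples per algorithm.
rng = random.Random(42)
runs_a = [rng.gauss(0.1, 0.05) for _ in range(30)]
runs_b = [rng.gauss(0.5, 0.05) for _ in range(30)]
f_value = anova_f([runs_a, runs_b])  # large F => the group means differ
```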

7. Conclusion and Future Scope
As presented in the results, DOX provides competitive outcomes compared with other popular metaheuristic algorithms such as PSO and GWO. DOX was analyzed for the exploration and exploitation activity of its agents using twenty-three test functions. The comparative analysis between the proposed DOX and the other optimization algorithms demonstrates that the suggested approach can cope with different kinds of constraints and provides stronger alternatives than the other optimizers. The suggested methodology is inspired by real-life problems and requires little computational or mathematical effort to find the best available optima.
Several directions remain for future studies. DOX may be applied to various technological problems; a multiobjective version (MODOX) can be developed as another future contribution; and a binary DOX might be another extension of this algorithm.
Appendix
A. Benchmark Functions
A.1. Unimodal Functions
The list of the unimodal test functions (F1–F7) is given in Table 4.
A.2. Multimodal Functions
The list of the multimodal test functions (F8–F13) is given in Table 5.
A.3. FixedDimension Multimodal Functions
The list of the fixeddimension multimodal test functions (F14–F23) is given in Table 6.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
H. Ali, W. Shahzad, and F. A. Khan, "Energy-efficient clustering in mobile ad-hoc networks using multi-objective particle swarm optimization," Applied Soft Computing, vol. 12, no. 7, pp. 1913–1928, 2012.
K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
G. Qi, H. Wang, M. Haner, C. Weng, S. Chen, and Z. Zhu, "Convolutional neural network based detection and judgement of environmental obstacle in vehicle operation," CAAI Transactions on Intelligence Technology, vol. 4, no. 2, pp. 80–91, 2019.
C. Zhu, W. Yan, X. Cai, S. Liu, T. H. Li, and G. Li, "Neural saliency algorithm guide bi-directional visual perception style transfer," CAAI Transactions on Intelligence Technology, vol. 5, no. 1, pp. 1–8, 2020.
T. Wiens, "Engine speed reduction for hydraulic machinery using predictive algorithms," International Journal of Hydromechatronics, vol. 2, no. 1, pp. 16–31, 2019.
X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1989.
X. Xue, J. Lu, and J. Chen, "Using NSGA-III for optimising biomedical ontology alignment," CAAI Transactions on Intelligence Technology, vol. 4, no. 3, pp. 135–141, 2019.
W. Banzhaf, F. D. Francone, R. E. Keller, and P. Nordin, Genetic Programming: An Introduction: On the Automatic Evolution of Computer Programs and Its Applications, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998.
C. M. Fonseca and P. J. Fleming, "Genetic algorithms for multiobjective optimization: formulation, discussion and generalization," in Proceedings of the 5th International Conference on Genetic Algorithms, pp. 416–423, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, June 1993.
R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
F. Xue, A. C. Sanderson, and R. J. Graves, "Multi-objective differential evolution – algorithm, convergence analysis, and applications," IEEE Congress on Evolutionary Computation, vol. 1, pp. 743–750, 2005.
H.-G. Beyer and H.-P. Schwefel, "Evolution strategies – a comprehensive introduction," Natural Computing, vol. 1, no. 1, pp. 3–52, 2002.
X. Hu, C. A. C. Coello, and Z. Huang, "A new multi-objective evolutionary algorithm: neighbourhood exploring evolution strategy," Engineering Optimization, vol. 37, no. 4, pp. 351–379, 2005.
T. Jansen and C. Zarges, "Artificial immune systems for optimisation," in Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computation Conference Companion (GECCO Companion '12), pp. 1059–1078, Association for Computing Machinery, New York, NY, USA, July 2012.
F. Campelo, F. G. Guimarães, and H. Igarashi, "Overview of artificial immune systems for multi-objective optimization," in Proceedings of the 4th International Conference on Evolutionary Multi-Criterion Optimization (EMO '07), pp. 937–951, Springer-Verlag, Berlin, Heidelberg, March 2007.
K. M. Passino, "Bacterial foraging optimization," International Journal of Swarm Intelligence Research, vol. 1, no. 1, pp. 1–16, 2010.
"Multi-swarm cooperative multi-objective bacterial foraging optimisation," International Journal of Bio-Inspired Computation, vol. 13, no. 1, pp. 21–31, 2019.
Z. Chelly and Z. Elouedi, "A survey of the dendritic cell algorithm," Knowledge and Information Systems, vol. 48, no. 3, pp. 505–535, 2016.
A. H. Gandomi and A. H. Alavi, "Krill herd: a new bio-inspired optimization algorithm," Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 12, pp. 4831–4845, 2012.
P. J. M. van Laarhoven and E. H. L. Aarts, Simulated Annealing: Theory and Applications, Kluwer Academic Publishers, New York, NY, USA, 1987.
N. Krasnogor, Memetic Algorithms, Springer, Berlin, Heidelberg, 2012.
M. Eusuff, K. Lansey, and F. Pasha, "Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization," Engineering Optimization, vol. 38, no. 2, pp. 129–154, 2006.
M. Dorigo, M. Birattari, and T. Stützle, "Ant colony optimization," IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.
J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of ICNN'95 – International Conference on Neural Networks, vol. 4, pp. 1942–1948, 1995.
D. Karaboga and B. Basturk, "Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems," in Foundations of Fuzzy Logic and Soft Computing, P. Melin, O. Castillo, L. T. Aguilar, J. Kacprzyk, and W. Pedrycz, Eds., pp. 789–798, Springer, Berlin, Heidelberg, 2007.
M. Neshat, A. Adeli, G. Sepidnam, M. Sargolzaei, and A. Najaran Toosi, "A review of artificial fish swarm optimization methods and applications," International Journal on Smart Sensing and Intelligent Systems, vol. 5, no. 1, pp. 108–148, 2012.
S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Advances in Engineering Software, vol. 69, pp. 46–61, 2014. https://doi.org/10.1016/j.advengsoft.2013.12.007
S. Mirjalili, "Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems," Neural Computing and Applications, vol. 27, no. 4, pp. 1053–1073, 2016.
J. Ma, H. Y. Chen, R. Su, Y. Wang, S. Zhang, and S. Shan, "Improved firefly algorithm and its application," in Proceedings of the 4th International Conference on Crowd Science and Engineering (ICCSE '19), pp. 180–185, Association for Computing Machinery, New York, NY, USA, October 2019.
X.-S. Yang, "Firefly algorithms for multimodal optimization," 2010, https://arxiv.org/abs/1003.1466.
S. Mirjalili and A. Lewis, "The whale optimization algorithm," Advances in Engineering Software, vol. 95, pp. 51–67, 2016.
E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, "GSA: a gravitational search algorithm," Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009.
S. Mirjalili, S. M. Mirjalili, and X.-S. Yang, "Binary bat algorithm," Neural Computing and Applications, vol. 25, no. 3-4, pp. 663–681, 2014.
Y. C. Shih, "A cuckoo search algorithm: effects of coevolution and application in the development of distributed layouts," Journal of Algorithms & Computational Technology, vol. 13, Article ID 1748302619889523, 2019.
S.-C. Chu, P.-W. Tsai, and J.-S. Pan, "Cat swarm optimization," in Proceedings of the 9th Pacific Rim International Conference on Artificial Intelligence (PRICAI '06), pp. 854–858, August 2006.
V. Kumar, J. K. Chhabra, and D. Kumar, "Parameter adaptive harmony search algorithm for unimodal and multimodal optimization problems," Journal of Computational Science, vol. 5, no. 2, pp. 144–155, 2014.
L. Wang, Y. Mao, Q. Niu, and M. Fei, "A multi-objective binary harmony search algorithm," in Advances in Swarm Intelligence, Lecture Notes in Computer Science, Y. Tan, Y. Shi, Y. Chai, and G. Wang, Eds., pp. 74–81, Springer, Berlin, Heidelberg, 2011.
Y. Fang, G. Liu, Y. He, and Y. Qiu, "Tabu search algorithm based on insertion method," in Proceedings of the International Conference on Neural Networks and Signal Processing, pp. 420–423, Nanjing, China, December 2003.
S. Carcangiu, A. Fanni, and A. Montisci, "Multiobjective tabu search algorithms for optimal design of electromagnetic devices," IEEE Transactions on Magnetics, vol. 44, no. 6, pp. 970–973, 2008.
V. Kumar, J. K. Chhabra, and D. Kumar, "Parameter adaptive harmony search algorithm for unimodal and multimodal optimization problems," Journal of Computational Science, vol. 5, no. 2, pp. 144–155, 2014.
S. He, Q. H. Wu, and J. R. Saunders, "Group search optimizer: an optimization algorithm inspired by animal searching behavior," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 973–990, 2009.
N. Ghorbani and E. Babaei, "Exchange market algorithm," Applied Soft Computing, vol. 19, pp. 177–187, 2014.
E. Atashpaz-Gargari and C. Lucas, "Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition," in Proceedings of the 2007 IEEE Congress on Evolutionary Computation, pp. 4661–4667, Singapore, September 2007.
N. Moosavian, "Soccer league competition algorithm for solving knapsack problems," Swarm and Evolutionary Computation, vol. 20, pp. 14–22, 2015.
A. Husseinzadeh Kashan, "League championship algorithm (LCA): an algorithm for global optimization inspired by sport championships," Applied Soft Computing, vol. 16, pp. 171–200, 2014.
F. Ramezani and S. Lotfi, "Social-based algorithm (SBA)," Applied Soft Computing, vol. 13, no. 5, pp. 2837–2856, 2013.
Y. Tan, C. Yu, S. Zheng, and K. Ding, "Introduction to fireworks algorithm," International Journal of Swarm Intelligence Research, vol. 4, no. 4, pp. 39–70, 2013.
A. Kaveh and V. Mahdavi Dahoei, Colliding Bodies Optimization, 2015.
A. H. Gandomi, "Interior search algorithm (ISA): a novel approach for global optimization," ISA Transactions, 2014.
W. Zhao, L. Wang, and Z. Zhang, "Artificial ecosystem-based optimization: a novel nature-inspired meta-heuristic algorithm," Neural Computing and Applications, vol. 32, pp. 1–43, 2020.
L. Benasla, A. Belmadani, and R. Mostefa, "Spiral optimization algorithm for solving combined economic and emission dispatch," International Journal of Electrical Power and Energy Systems, vol. 62, pp. 163–174, 2014.
E. Bogar and S. Beyhan, "Adolescent identity search algorithm (AISA): a novel metaheuristic approach for solving optimization problems," Applied Soft Computing, vol. 95, Article ID 106503, 2020.
R. Wang, H. Yu, G. Wang, G. Zhang, and W. Wang, "Study on the dynamic and static characteristics of gas static thrust bearing with micro-hole restrictors," International Journal of Hydromechatronics, vol. 2, p. 189, 2019.
S. Osterland and J. Weber, "Analytical analysis of single-stage pressure relief valves," International Journal of Hydromechatronics, vol. 2, p. 32, 2019.
M. Molga and C. Smutnicki, Test Functions for Optimization Needs, 2005.
B. Kannan and S. Kramer, "An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design," Journal of Mechanical Design, vol. 116, pp. 405–411, 1994.
Copyright
Copyright © 2021 Amit Kumar Bairwa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.