Applied Computational Intelligence and Soft Computing
Volume 2016, Article ID 7950348, 16 pages
http://dx.doi.org/10.1155/2016/7950348
Research Article

Modified Grey Wolf Optimizer for Global Engineering Optimization

1Department of Electronics and Communication Engineering, Chandigarh University, Mohali, Punjab 140413, India
2Department of Electronics and Communication Engineering, Thapar University, Patiala, Punjab 147004, India

Received 30 November 2015; Accepted 3 April 2016

Academic Editor: Samuel Huang

Copyright © 2016 Nitin Mittal et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Nature-inspired algorithms are becoming popular among researchers due to their simplicity and flexibility. Nature-inspired metaheuristic algorithms are analysed in terms of key features such as diversity and adaptation, exploration and exploitation, and attraction and diffusion mechanisms. The success of and challenges concerning these algorithms depend on their parameter tuning and parameter control. A comparatively new algorithm motivated by the social hierarchy and hunting behavior of grey wolves is the Grey Wolf Optimizer (GWO), which has proved very successful at solving real mechanical and optical engineering problems. In the original GWO, half of the iterations are devoted to exploration and the other half to exploitation, overlooking the need for the right balance between the two to guarantee an accurate approximation of the global optimum. To overcome this shortcoming, a modified GWO (mGWO) is proposed, which focuses on a proper balance between exploration and exploitation, leading to optimal performance of the algorithm. Simulations based on benchmark problems and a WSN clustering problem demonstrate the effectiveness, efficiency, and stability of mGWO compared with the basic GWO and some well-known algorithms.

1. Introduction

Metaheuristic algorithms are powerful methods for solving many real-world engineering problems. The majority of these algorithms have been derived from the survival-of-the-fittest principle of evolutionary algorithms, the collective intelligence of swarms, the behavior of biological organisms, and/or physical processes in nature.

Evolutionary algorithms mimic evolutionary processes in nature. They are based on the survival of the fittest candidate for a given environment. These algorithms begin with a population (a set of solutions) that tries to survive in an environment (defined by a fitness evaluation). The parent population passes its adaptive traits to the children through mechanisms of evolution such as genetic crossover and mutation. The process continues over a number of generations (an iterative process) until the solutions are found to be the most suitable for the environment. Some of the evolutionary algorithms are Genetic Algorithm (GA) [1], Evolution Strategies (ES) [2], Genetic Programming (GP) [3], Differential Evolution (DE) [4], and Biogeography-Based Optimization (BBO) [5–9].

The physical algorithms are inspired by physical processes such as the heating and cooling of materials (Simulated Annealing [10]), cultural information treated as an intermediate between genetic and cultural evolution (Memetic Algorithm [11]), the harmony of music played by musicians (Harmony Search [12, 13]), and the memetic behavior of frogs (Shuffled Frog-Leaping Algorithm [14]); other examples include the Gravitational Search Algorithm [15], Multiverse Optimizer (MVO) [16], and Chemical Reaction Optimization (CRO) [17].

Swarm intelligence is the group of natural metaheuristics inspired by the “collective intelligence” of swarms. The collective intelligence is built up through a population of homogeneous agents interacting with each other and with their environment. Examples of such intelligence are found among colonies of ants, flocks of birds, schools of fish, and so forth. Particle Swarm Optimization [18] is developed based on the swarm behavior of birds. The firefly algorithm [19] is formulated based on the flashing behavior of fireflies. Bat Algorithm (BA) [20] is based on the echolocation behavior of bats. Ant Colony Optimization (ACO) [21, 22] is inspired by the pheromone trail laying behavior of real ant colonies. Another evolutionary optimization algorithm, the Cuckoo Search (CS) Algorithm [23], is inspired by the lifestyle of cuckoo birds. The major algorithms include Ant Colony Optimization (ACO) [21, 22], Particle Swarm Optimization (PSO) [18], Artificial Bee Colony (ABC) Algorithm [24], Fish Swarm Algorithm (FSA) [25], Glowworm Swarm Optimization (GSO) [26], Grey Wolf Optimizer (GWO) [27], Fruit Fly Optimization Algorithm (FFOA) [28], Bat Algorithm (BA) [20], Novel Bat Algorithm (NBA) [29], Dragonfly Algorithm (DA) [30], Cat Swarm Optimization (CSO) [31], Cuckoo Search (CS) Algorithm [23], Cuckoo Optimization Algorithm (COA) [32], and Spider Monkey Optimization (SMO) Algorithm [33].

The biologically inspired algorithms comprise natural metaheuristics derived from living phenomena and behavior of biological organisms. The intelligence derived with bioinspired algorithms is decentralized, distributed, self-organizing, and adaptive in nature under uncertain environments. The major algorithms in this field include Artificial Immune Systems (AIS) [34], Bacterial Foraging Optimization (BFO) [35], and Krill Herd Algorithm [36].

Because of their inherent advantages, such algorithms can be applied to various applications including power systems operations and control, job scheduling problems, clustering and routing problems, batch process scheduling, image processing, and pattern recognition problems.

GWO is a recently developed heuristic inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. It has been successfully applied to solving economic dispatch problems [37], feature subset selection [38], optimal design of double layer grids [39], time series forecasting [40], the flow shop scheduling problem [41], the optimal power flow problem [42], and optimizing key values in cryptography algorithms [43]. A number of variants have also been proposed to improve the performance of the basic GWO, including binary GWO [44], a hybrid version of GWO with PSO [45], an integration of DE with GWO [46], and parallelized GWO [47, 48].

Every optimization algorithm stated above needs to address the exploration and exploitation of a search space. In order to be successful, an optimization algorithm needs to establish a good ratio between exploration and exploitation. In this paper, a modified GWO (mGWO) is proposed to balance the exploration and exploitation trade-off in original GWO algorithm. Different functions with diverse slopes are employed to tune the parameters of GWO algorithm for varying exploration and exploitation combinations over the course of iterations. Increasing the exploration in comparison to exploitation increases the convergence speed and avoids the local minima trapping effect.

The rest of the paper is organized as follows. Section 2 gives an overview of the original GWO. The proposed mGWO algorithm is explained in Section 3. The experimental results are demonstrated in Section 4. Section 5 solves the clustering problem in WSN for cluster head selection to demonstrate the applicability of the proposed algorithm. Finally, Section 6 concludes the paper.

2. Overview of Grey Wolf Optimizer Algorithm

Grey Wolf Optimizer (GWO) is a typical swarm-intelligence algorithm inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. Grey wolves are considered apex predators and have an average group size of 5–12. In the hierarchy of GWO, the alpha (α) is considered the most dominant member of the group. The subordinates of α are the beta (β) and delta (δ), which help control the majority of the wolves in the hierarchy, considered omegas (ω). The ω wolves are the lowest ranking in the hierarchy.

The mathematical model of the hunting mechanism of grey wolves consists of the following:
(i) Tracking, chasing, and approaching the prey.
(ii) Pursuing, encircling, and harassing the prey until it stops moving.
(iii) Attacking the prey.

2.1. Encircling Prey

Grey wolves encircle the prey during the hunt, which can be mathematically written as [27]

$$\vec{D} = \left|\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)\right|,$$
$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D},$$

where $t$ indicates the current iteration, $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{X}_p$ is the position vector of the prey, and $\vec{X}$ indicates the position vector of a grey wolf.

The vectors $\vec{A}$ and $\vec{C}$ are calculated as follows:

$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a},$$
$$\vec{C} = 2\vec{r}_2,$$

where the components of $\vec{a}$ are linearly decreased from 2 to 0 over the course of iterations and $\vec{r}_1$ and $\vec{r}_2$ are random vectors in $[0, 1]$.
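The encircling and coefficient-vector equations can be sketched in plain Python (a minimal sketch; the function and variable names are ours, not from the paper):

```python
import random

def encircle(wolf, prey, a):
    """One encircling step of GWO, applied component-wise.

    wolf, prey: position vectors (lists of floats);
    a: control parameter, decreased from 2 to 0 over the iterations.
    """
    new_pos = []
    for x, xp in zip(wolf, prey):
        r1, r2 = random.random(), random.random()
        A = 2 * a * r1 - a          # A is random in [-a, a]
        C = 2 * r2                  # C is random in [0, 2]
        D = abs(C * xp - x)         # distance to the (scaled) prey
        new_pos.append(xp - A * D)  # X(t+1) = X_p(t) - A * D
    return new_pos
```

With a = 0 the step degenerates to jumping onto the prey; with a close to 2 the wolf may overshoot far past it, which is what drives exploration.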

2.2. Hunting

Hunting of prey is usually guided by α and β, and δ participates occasionally. The best candidate solutions, that is, α, β, and δ, have better knowledge about the potential location of prey. The other search agents (ω) update their positions according to the positions of the three best search agents. The following formulas are proposed in this regard:

$$\vec{D}_\alpha = \left|\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}\right|, \quad \vec{D}_\beta = \left|\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}\right|, \quad \vec{D}_\delta = \left|\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}\right|,$$
$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta,$$
$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}.$$
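The three-leader update can be sketched as follows (our names; each leader contributes a candidate position via the encircling equations, and the wolf moves to their average):

```python
import random

def hunt_step(wolf, alpha, beta, delta, a):
    """Update one wolf's position from the three best wolves."""
    new_pos = []
    for d in range(len(wolf)):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = random.random(), random.random()
            A = 2 * a * r1 - a
            C = 2 * r2
            D = abs(C * leader[d] - wolf[d])
            candidates.append(leader[d] - A * D)  # X1, X2, X3
        new_pos.append(sum(candidates) / 3.0)     # X(t+1) = (X1+X2+X3)/3
    return new_pos
```

Averaging the three candidates keeps the pack loosely centered on the current best estimates of the prey location rather than on any single leader.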

2.3. Attacking Prey

In order to mathematically model approaching the prey, we decrease the value of $\vec{a}$. The fluctuation range of $\vec{A}$ is also decreased by $\vec{a}$: $\vec{A}$ is a random value in the interval $[-a, a]$, where $a$ is decreased linearly from 2 to 0 over the course of iterations. When the random values of $\vec{A}$ are in $[-1, 1]$, the next position of a search agent can be anywhere between its current position and the position of the prey. The value $|A| < 1$ forces the wolves to attack the prey.

After the attack, the wolves search for prey again in the next iteration, wherein they again find the next best solutions among all wolves. This process repeats until the termination criterion is fulfilled.

3. Modified GWO Algorithm

Finding the global minimum is a common, challenging task among all minimization methods. In population-based optimization methods, generally, the desirable path towards convergence on the global minimum can be divided into two basic phases. In the early stages of the optimization, the individuals should be encouraged to scatter throughout the entire search space. In other words, they should try to explore the whole search space instead of clustering around local minima. In the latter stages, the individuals have to exploit the information gathered in order to converge on the global minimum. In GWO, by fine-tuning the parameters $\vec{A}$ and $\vec{C}$, we can balance these two phases in order to find the global minimum with fast convergence speed.

Although different improvements of individual-based algorithms promote local optima avoidance, the literature shows that population-based algorithms are better at handling this issue. Regardless of the differences between population-based algorithms, the common approach is the division of the optimization process into two conflicting milestones: exploration versus exploitation. Exploration encourages candidate solutions to change abruptly and stochastically. This mechanism improves the diversity of the solutions and causes high exploration of the search space. In contrast, exploitation aims to improve the quality of solutions by searching locally around the promising solutions obtained during exploration. In this milestone, candidate solutions are obliged to change less suddenly and to search locally.

Exploration and exploitation are two conflicting milestones where promoting one results in degrading the other. A right balance between these two milestones can guarantee a very accurate approximation of the global optimum using population-based algorithms. On the one hand, mere exploration of the search space prevents an algorithm from finding an accurate approximation of the global optimum. On the other hand, mere exploitation results in local optima stagnation and again low quality of the approximated optimum.

In GWO, the transition between exploration and exploitation is generated by the adaptive values of $a$ and $\vec{A}$. Half of the iterations are devoted to exploration ($|A| \ge 1$) and the other half are used for exploitation ($|A| < 1$), as shown in Figure 1(a). Generally, higher exploration of the search space results in a lower probability of local optima stagnation. There are various possibilities to enhance the exploration rate, as shown in Figure 1(b), in which exponential functions are used instead of a linear function to decrease the value of $a$ over the course of iterations. Too much exploration is similar to too much randomness and will probably not give good optimization results, whereas too much exploitation corresponds to too little randomness. Therefore, there must be a balance between exploration and exploitation.

Figure 1: (a) Updating the value of $a$ for GWO, (b) some samples of possible functions for updating $a$ over the course of iterations, and (c) updating the value of $a$ over the course of iterations for mGWO.

In GWO, the value of $a$ decreases linearly from 2 to 0 using the update equation

$$a = 2\left(1 - \frac{t}{T}\right),$$

where $T$ indicates the maximum number of iterations and $t$ is the current iteration. Our mGWO employs an exponential function for the decay of $a$ over the course of iterations:

$$a = 2\left(1 - \frac{t^2}{T^2}\right),$$

as shown in Figure 1(c). Using this exponential decay function, the numbers of iterations used for exploration and exploitation are 70% and 30%, respectively.
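The two decay schedules can be compared directly; the exponential form below is our reading of the mGWO schedule that yields the stated 70%/30% split (a stays at or above 1, the exploration regime, until roughly 70% of the iterations have elapsed):

```python
def a_linear(t, max_iter):
    """Original GWO: a decreases linearly from 2 to 0."""
    return 2.0 * (1.0 - t / max_iter)

def a_exponential(t, max_iter):
    """mGWO: a = 2 * (1 - t^2 / T^2), our reading of the decay function."""
    return 2.0 * (1.0 - (t / max_iter) ** 2)
```

Under this schedule, a drops below 1 at t/T = 1/sqrt(2), about 0.707, so roughly 70% of the iterations favor exploration and the remaining 30% favor exploitation.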

The pseudocode of mGWO is given in Algorithm 1.

Algorithm 1: Pseudocode of mGWO algorithm.
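The loop that Algorithm 1 describes can be rendered as a minimal, self-contained Python sketch (the only departure from standard GWO is the decay of a; all names and the boundary handling are our assumptions):

```python
import random

def mgwo(fitness, dim, lb, ub, n_wolves=30, max_iter=500):
    """Minimize `fitness` over the box [lb, ub]^dim with the mGWO scheme."""
    wolves = [[random.uniform(lb, ub) for _ in range(dim)]
              for _ in range(n_wolves)]
    ranked = sorted(wolves, key=fitness)
    alpha, beta, delta = ranked[0], ranked[1], ranked[2]

    for t in range(max_iter):
        a = 2.0 * (1.0 - (t / max_iter) ** 2)   # mGWO exponential decay
        for i, wolf in enumerate(wolves):
            new_pos = []
            for d in range(dim):
                candidates = []
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    D = abs(C * leader[d] - wolf[d])
                    candidates.append(leader[d] - A * D)
                x = sum(candidates) / 3.0
                new_pos.append(min(max(x, lb), ub))  # clamp to the box
            wolves[i] = new_pos
        # re-rank, keeping the previous leaders so the best never worsens
        ranked = sorted(wolves + [alpha, beta, delta], key=fitness)
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]
    return alpha, fitness(alpha)
```

For example, `mgwo(lambda x: sum(v * v for v in x), dim=2, lb=-5.0, ub=5.0)` drives the sphere function close to its minimum at the origin.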

4. Results and Discussion

This section investigates the effectiveness of mGWO in practice. It is common in this field to benchmark the performance of algorithms on a set of mathematical functions with known global optima. We follow the same process and employ 27 benchmark functions for comparison. The test functions are divided into four groups: unimodal, multimodal, fixed-dimension multimodal, and composite benchmark functions. The unimodal functions are suitable for benchmarking the exploitation of algorithms since they have one global optimum and no local optima. On the contrary, the multimodal functions have a large number of local optima and are helpful for examining the exploration and local optima avoidance of algorithms.

The mathematical formulation of the employed test functions is presented in Tables 1–4. We consider 30 variables for the unimodal and multimodal test functions to further increase their difficulty.

Table 1: Unimodal benchmark functions.
Table 2: Multimodal benchmark functions.
Table 3: Fixed-dimension multimodal benchmark functions.
Table 4: Composite benchmark functions.

Since heuristic algorithms are stochastic optimization techniques, they have to be run multiple times to generate meaningful statistical results. It is a common strategy to run an algorithm on a problem several times and to calculate the average/standard deviation/median of the best solutions obtained in the last iteration as the metrics of performance. We follow the same method and report the results over 30 independent runs. In order to verify the performance of the mGWO algorithm, the PSO, BA, CS, and GWO algorithms are chosen for comparison. Note that we utilized 30 search agents and 3000 iterations for each of the algorithms.

The convergence curves of the unimodal, multimodal, fixed-dimension multimodal, and composite benchmark functions for the competitive optimization algorithms are given in Figures 2, 3, 4, and 5, respectively. As Table 5 shows, the mGWO algorithm provides the best results on 5 out of 7 unimodal benchmark test functions. The mGWO algorithm also provides very competitive results compared to CS on the remaining two functions. As discussed above, unimodal functions are suitable for benchmarking exploitation of the algorithms. Therefore, these results evidence the high exploitation capability of the mGWO algorithm.

Table 5: Results of unimodal benchmark functions.
Figure 2: Convergence graph of unimodal benchmark functions.
Figure 3: Convergence graph of multimodal benchmark functions.
Figure 4: Convergence graph of fixed-dimension multimodal functions.
Figure 5: Convergence graph of composite functions.

The statistical results of the algorithms on the multimodal test functions are presented in Table 6. It may be seen that the mGWO algorithm substantially outperforms the other algorithms on the majority of these functions, the one exception being PSO on a single function. The results for the multimodal test functions strongly suggest that the high exploration of the mGWO algorithm is a suitable mechanism for avoiding local solutions. Since the multimodal functions have an exponential number of local solutions, the results show that the mGWO algorithm is able to explore the search space extensively and find promising regions of it. In addition, the high local optima avoidance of this algorithm is another finding that can be inferred from these results.

Table 6: Results of multimodal benchmark functions.

The rest of the results, which belong to the fixed-dimension multimodal and composite functions, are provided in Tables 7 and 8, respectively. The results are consistent with those of the other test functions, with mGWO showing very competitive results compared to the other algorithms.

Table 7: Results of fixed-dimension multimodal functions.
Table 8: Results of composite functions.

5. Cluster Head Selection in WSN Using mGWO

Cluster head (CH) selection is a well-known problem in the field of wireless sensor networks (WSNs), in which the energy consumption cost of the network should be minimized [49–53]. In this paper, this problem is solved using the mGWO algorithm and compared with GA, PSO, BA, CS, and GWO.

The main challenges in designing and planning the operations of WSNs are to optimize energy consumption and prolong network lifetime. Cluster-based routing techniques, such as the well-known low-energy adaptive clustering hierarchy (LEACH) [50], are used to achieve scalable solutions and extend the network lifetime until the last node dies (LND). In order to achieve prolonged network lifetime in cluster-based routing techniques, the lifetime of the CHs plays an important role. Improper cluster formation may cause some CHs to be overloaded. Such overload may cause high energy consumption of the CH and degrade the overall performance of the WSN. Therefore, proper CH selection is the most important issue for clustering sensor nodes. Designing an energy efficient clustering algorithm is not an easy task. Therefore, nature-inspired optimization algorithms may be applied to tackle cluster-based routing problem in WSN. Evolutionary algorithms (EAs) have been used in recent years as metaheuristics to address energy-aware routing challenges by designing intelligent models that collaborate together to optimize an appropriate energy-aware objective function [52]. GWO is one of the powerful heuristics that can be applied for efficient load balanced clustering. In this paper, mGWO based clustering algorithm is used to solve the abovementioned load balancing problem. The algorithm forms clusters in such a way that the overall energy consumption of the network is minimized. Total energy consumption in the network is the sum of the total energy dissipated from the non-CHs to send information to their respective CHs and the total energy consumed by CH nodes to aggregate the information and send it to the base station (BS).

Consider a WSN of $n$ sensor nodes randomly deployed in the sensing field and organized into $m$ clusters: $C_1, C_2, \ldots, C_m$. The fitness function for the energy consumption may be defined as

$$f = \sum_{j=1}^{m} \sum_{s_i \in C_j} E_T\left(s_i, \mathrm{CH}_j\right),$$

where $m$ is the total number of CHs, $s_i$ is a non-CH node associated with the $j$th CH, and $E_T(s_i, \mathrm{CH}_j)$ is the energy dissipated for transmitting data from $s_i$ to $\mathrm{CH}_j$.
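The fitness evaluation amounts to summing per-link transmission costs over all clusters; it can be sketched as follows (the helper names `clusters` and `e_tx` are ours):

```python
def cluster_energy_cost(clusters, e_tx):
    """Total energy dissipated by non-CH nodes sending to their CHs.

    clusters: dict mapping each cluster head to the list of its
              non-CH member nodes;
    e_tx(node, ch): energy to transmit one round of data from node to ch.
    """
    return sum(e_tx(node, ch)
               for ch, members in clusters.items()
               for node in members)
```

Minimizing this sum over candidate CH assignments is the objective handed to the optimizer.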

The radio energy costs of transmitting and receiving a $k$-bit message over a transmitter–receiver separation distance $d$ are given by

$$E_{TX}(k, d) = \begin{cases} k E_{elec} + k \varepsilon_{fs} d^2, & d < d_0, \\ k E_{elec} + k \varepsilon_{mp} d^4, & d \ge d_0. \end{cases}$$

The term $E_{elec}$ denotes the per-bit energy dissipation during transmission. The per-bit amplification energy is proportional to $d^4$ when the transmission distance exceeds the threshold $d_0$ (called the crossover distance) and otherwise is proportional to $d^2$. The parameters $\varepsilon_{fs}$ and $\varepsilon_{mp}$ denote the transmitter amplification parameters for the free-space and multipath fading models, respectively. The value of $d_0$ is given by

$$d_0 = \sqrt{\frac{\varepsilon_{fs}}{\varepsilon_{mp}}}.$$

The reception energy of the $k$-bit data message can be expressed by

$$E_{RX}(k) = k E_{elec},$$

where $E_{elec}$ denotes the per-bit energy dissipation during reception.

$E_{DA}$ is the data aggregation energy expenditure; its value and those of the other parameters ($E_{elec}$, $\varepsilon_{fs}$, and $\varepsilon_{mp}$) are set as in [51].
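The first-order radio model described above can be coded directly. The parameter values below are common LEACH-style defaults, used here only for illustration since the text does not reproduce the paper's exact settings:

```python
import math

# Illustrative values (common LEACH-style defaults, not quoted from the paper)
E_ELEC = 50e-9        # J/bit: per-bit electronics energy (transmit/receive)
EPS_FS = 10e-12       # J/bit/m^2: free-space amplifier parameter
EPS_MP = 0.0013e-12   # J/bit/m^4: multipath amplifier parameter
D0 = math.sqrt(EPS_FS / EPS_MP)  # crossover distance

def e_tx(k, d):
    """Energy to transmit a k-bit message over distance d."""
    if d < D0:
        return k * E_ELEC + k * EPS_FS * d ** 2   # free-space, d^2 loss
    return k * E_ELEC + k * EPS_MP * d ** 4       # multipath, d^4 loss

def e_rx(k):
    """Energy to receive a k-bit message."""
    return k * E_ELEC
```

This `e_tx` is exactly the per-link cost that a clustering fitness function would sum over all non-CH-to-CH links.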

For the simulation setup, 100 nodes are randomly deployed in a 100 m × 100 m sensing field. The BS is placed at the center of the field. All homogeneous nodes are assigned equal initial energy. During this analysis, three parameters, namely, first node dead (FND), half nodes dead (HND), and last node dead (LND), are employed to characterize the network lifetime.

Table 9 shows the best results obtained for the CH selection problem in WSN. The results of Table 9 show that the mGWO algorithm is able to find the best results compared to the other algorithms, followed closely by the CS and GWO algorithms.

Table 9: Comparison results of CH selection problem in WSN.

6. Conclusion

This paper proposed a modification to the Grey Wolf Optimizer named mGWO, inspired by the hunting behavior of grey wolves in nature. An exponential decay function is used to balance the exploration and exploitation in the search space over the course of iterations. The results proved that the proposed algorithm benefits from high exploration in comparison to the standard GWO.

The paper also considered the clustering problem in WSN, in which CH selection, a challenging and NP-hard problem, is performed using the proposed mGWO algorithm. The results show that the proposed method is effective for real-world applications owing to its fast convergence and lower likelihood of getting stuck at local minima. It can be concluded that the proposed algorithm is able to outperform current well-known and powerful algorithms in the literature. The results demonstrate the competence of mGWO relative to existing metaheuristic algorithms and its potential as an effective tool for solving real-world optimization problems.

Competing Interests

The authors declare that they have no competing interests.

References

  1. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, 1989.
  2. T. Back, F. Hoffmeister, and H. P. Schwefel, “A survey of evolution strategies,” in Proceedings of the 4th International Conference on Genetic Algorithms, San Diego, Calif, USA, July 1991.
  3. J. R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, Mass, USA, 1992.
  4. R. Storn and K. Price, “Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  5. D. Simon, “Biogeography-based optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, pp. 702–713, 2008.
  6. W. Gong, Z. Cai, C. X. Ling, and H. Li, “A real-coded biogeography-based optimization with mutation,” Applied Mathematics and Computation, vol. 216, no. 9, pp. 2749–2758, 2010.
  7. W. Gong, Z. Cai, and C. X. Ling, “DE/BBO: a hybrid differential evolution with biogeography-based optimization for global numerical optimization,” Soft Computing, vol. 15, no. 4, pp. 645–665, 2011.
  8. H. Ma and D. Simon, “Blended biogeography-based optimization for constrained optimization,” Engineering Applications of Artificial Intelligence, vol. 24, no. 3, pp. 517–525, 2011.
  9. U. Singh and T. S. Kamal, “Design of non-uniform circular antenna arrays using biogeography-based optimisation,” IET Microwaves, Antennas and Propagation, vol. 5, no. 11, pp. 1365–1370, 2011.
  10. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
  11. P. Moscato, “On evolution, search, optimization, genetic algorithms and martial arts: towards Memetic Algorithms,” Caltech Concurrent Computation Program Report 826, 1989.
  12. Z. W. Geem, J. H. Kim, and G. V. Loganathan, “A new heuristic optimization algorithm: harmony search,” Simulation, vol. 76, no. 2, pp. 60–68, 2001.
  13. K. S. Lee and Z. W. Geem, “A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice,” Computer Methods in Applied Mechanics and Engineering, vol. 194, no. 36–38, pp. 3902–3933, 2005.
  14. M. Eusuff, K. Lansey, and F. Pasha, “Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization,” Engineering Optimization, vol. 38, no. 2, pp. 129–154, 2006.
  15. E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, “GSA: a gravitational search algorithm,” Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009.
  16. S. Mirjalili, S. M. Mirjalili, and A. Hatamlou, “Multi-verse optimizer: a nature-inspired algorithm for global optimization,” Neural Computing & Applications, vol. 27, no. 2, pp. 495–513, 2016.
  17. A. Y. S. Lam and V. O. K. Li, “Chemical-reaction-inspired metaheuristic for optimization,” IEEE Transactions on Evolutionary Computation, vol. 14, no. 3, pp. 381–399, 2010.
  18. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, Perth, Australia, December 1995.
  19. X.-S. Yang, “Firefly algorithm, stochastic test functions and design optimization,” International Journal of Bio-Inspired Computation, vol. 2, no. 2, pp. 78–84, 2010.
  20. X.-S. Yang and A. H. Gandomi, “Bat algorithm: a novel approach for global engineering optimization,” Engineering Computations, vol. 29, no. 5, pp. 464–483, 2012.
  21. M. Dorigo, V. Maniezzo, and A. Colorni, “Ant system: optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 26, no. 1, pp. 29–41, 1996.
  22. M. Dorigo and L. M. Gambardella, “Ant colony system: a cooperative learning approach to the traveling salesman problem,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.
  23. X.-S. Yang and S. Deb, “Engineering optimisation by cuckoo search,” International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 4, pp. 330–343, 2010.
  24. D. Karaboga and B. Akay, “A comparative study of artificial Bee colony algorithm,” Applied Mathematics and Computation, vol. 214, no. 1, pp. 108–132, 2009.
  25. X. Li, Z. Shao, and J. Qian, “An optimizing method based on autonomous animates: fish swarm algorithm,” Systems Engineering—Theory & Practice, vol. 22, pp. 32–38, 2002.
  26. K. Krishnanand and D. Ghose, “Glowworm swarm optimisation: a new method for optimising multi-modal functions,” International Journal of Computational Intelligence Studies, vol. 1, no. 1, pp. 93–119, 2009.
  27. S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46–61, 2014.
  28. W.-T. Pan, “A new fruit fly optimization algorithm: taking the financial distress model as an example,” Knowledge-Based Systems, vol. 26, pp. 69–74, 2012.
  29. X.-B. Meng, X. Z. Gao, Y. Liu, and H. Zhang, “A novel bat algorithm with habitat selection and Doppler effect in echoes for optimization,” Expert Systems with Applications, vol. 42, no. 17-18, pp. 6350–6364, 2015.
  30. S. Mirjalili, “Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems,” Neural Computing & Applications, 2015.
  31. S. C. Chu and P. W. Tsai, “Computational intelligence based on the behaviour of cats,” International Journal of Innovative Computing Information and Control, vol. 3, pp. 163–173, 2007.
  32. R. Rajabioun, “Cuckoo optimization algorithm,” Applied Soft Computing Journal, vol. 11, no. 8, pp. 5508–5518, 2011.
  33. J. C. Bansal, H. Sharma, S. S. Jadon, and M. Clerc, “Spider Monkey Optimization algorithm for numerical optimization,” Memetic Computing, vol. 6, no. 1, pp. 31–47, 2014.
  34. D. Dasgupta, Artificial Immune Systems and Their Applications, Springer, 1999.
  35. S. Das, A. Biswas, S. Dasgupta, and A. Abraham, “Bacterial foraging optimization algorithm: theoretical foundations, analysis, and applications,” Studies in Computational Intelligence, vol. 203, pp. 23–55, 2009.
  36. A. H. Gandomi and A. H. Alavi, “Krill herd: a new bio-inspired optimization algorithm,” Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 12, pp. 4831–4845, 2012.
  37. V. K. Kamboj, S. K. Bath, and J. S. Dhillon, “Solution of non-convex economic load dispatch problem using Grey Wolf Optimizer,” Neural Computing and Applications, 2015.
  38. E. Emary, H. M. Zawbaa, C. Grosan, and A. E. Hassenian, “Feature subset selection approach by gray-wolf optimization,” in Afro-European Conference for Industrial Advancement, vol. 334 of Advances in Intelligent Systems and Computing, Springer, 2015.
  39. S. Gholizadeh, “Optimal design of double layer grids considering nonlinear behaviour by sequential grey wolf algorithm,” Journal of Optimization in Civil Engineering, vol. 5, no. 4, pp. 511–523, 2015.
  40. Y. Yusof and Z. Mustaffa, “Time series forecasting of energy commodity using grey wolf optimizer,” in Proceedings of the International Multi Conference of Engineers and Computer Scientists (IMECS '15), vol. 1, Hong Kong, March 2015.
  41. G. M. Komaki and V. Kayvanfar, “Grey wolf optimizer algorithm for the two-stage assembly flow shop scheduling problem with release time,” Journal of Computational Science, vol. 8, pp. 109–120, 2015.
  42. A. A. El-Fergany and H. M. Hasanien, “Single and multi-objective optimal power flow using grey wolf optimizer and differential evolution algorithms,” Electric Power Components and Systems, vol. 43, no. 13, pp. 1548–1559, 2015.
  43. K. Shankar and P. Eswaran, “A secure visual secret share (VSS) creation scheme in visual cryptography using elliptic curve cryptography with optimization technique,” Australian Journal of Basic & Applied Science, vol. 9, no. 36, pp. 150–163, 2015.
  44. E. Emary, H. M. Zawbaa, and A. E. Hassanien, “Binary grey wolf optimization approaches for feature selection,” Neurocomputing, vol. 172, pp. 371–381, 2016.
  45. V. K. Kamboj, “A novel hybrid PSO–GWO approach for unit commitment problem,” Neural Computing and Applications, 2015.
  46. A. Zhu, C. Xu, Z. Li, J. Wu, and Z. Liu, “Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC,” Journal of Systems Engineering and Electronics, vol. 26, no. 2, pp. 317–328, 2015.
  47. T.-S. Pan, T.-K. Dao, T.-T. Nguyen, and S.-C. Chu, “A communication strategy for paralleling grey wolf optimizer,” Advances in Intelligent Systems and Computing, vol. 388, pp. 253–262, 2015.
  48. J. Jayapriya and M. Arock, “A parallel GWO technique for aligning multiple molecular sequences,” in Proceedings of the International Conference on Advances in Computing, Communications and Informatics (ICACCI '15), pp. 210–215, IEEE, Kochi, India, August 2015.
  49. M. M. Afsar and M.-H. Tayarani-N, “Clustering in sensor networks: a literature survey,” Journal of Network and Computer Applications, vol. 46, pp. 198–226, 2014.
  50. W. B. Heinzelman, A. Chandrakasan, and H. Balakrishnan, “Energy-efficient communication protocol for wireless microsensor networks,” in Proceedings of the 33rd Annual Hawaii International Conference on System Sciences (HICSS '00), p. 223, IEEE, January 2000.
  51. N. Mittal and U. Singh, “Distance-based residual energy-efficient stable election protocol for WSNs,” Arabian Journal for Science and Engineering, vol. 40, no. 6, pp. 1637–1646, 2015.
  52. E. A. Khalil and B. A. Attea, “Energy-aware evolutionary routing protocol for dynamic clustering of wireless sensor networks,” Swarm and Evolutionary Computation, vol. 1, no. 4, pp. 195–203, 2011.
  53. B. A. Attea and E. A. Khalil, “A new evolutionary based routing protocol for clustered heterogeneous wireless sensor networks,” Applied Soft Computing, vol. 12, no. 7, pp. 1950–1957, 2012.