Journal of Applied Mathematics
Volume 2017, Article ID 2030489, 15 pages
https://doi.org/10.1155/2017/2030489
Research Article

Hybrid Algorithm of Particle Swarm Optimization and Grey Wolf Optimizer for Improving Convergence Performance

Department of Mathematics, Punjabi University, Patiala, Punjab 147002, India

Correspondence should be addressed to Narinder Singh; narindersinghgoria@ymail.com

Received 9 June 2017; Revised 29 August 2017; Accepted 30 August 2017; Published 16 November 2017

Academic Editor: N. Shahzad

Copyright © 2017 Narinder Singh and S. B. Singh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A new hybrid nature-inspired algorithm called HPSOGWO is presented, combining Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The main idea is to improve the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer, so as to draw on both variants' strengths. Unimodal, multimodal, and fixed-dimension multimodal test functions are used to check the solution quality and performance of the HPSOGWO variant. The numerical and statistical results show that the hybrid variant significantly outperforms the PSO and GWO variants in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.

1. Introduction

In recent years, a large number of nature-inspired optimization techniques have been developed. These include Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), the Genetic Algorithm (GA), Evolutionary Algorithms (EA), Differential Evolution (DE), Ant Colony Optimization (ACO), Biogeography-Based Optimization (BBO), the Firefly Algorithm (FA), and the Bat Algorithm (BA). The common goal of these algorithms is to find the best quality of solutions and better convergence performance. To do this, a nature-inspired variant should be equipped with both exploration and exploitation to ensure that the global optimum is found.

Exploitation is the ability to converge to the best solution in the neighborhood of a good solution, while exploration is the ability of a variant to visit all parts of the search space. The goal of every nature-inspired variant is to balance exploration and exploitation capably in order to locate the global optimum in the search space. As Eiben and Schippers [1] note, exploitation and exploration in nature-inspired computing are not well delineated, owing to the lack of a generally accepted definition; moreover, as one capability strengthens, the other weakens, and vice versa.

As the above suggests, the existing nature-inspired variants are capable of solving many test and real-life problems. However, it has been proved that no population-based variant can perform well enough in general to solve all types of optimization problems [2].

Particle Swarm Optimization is one of the most widely used evolutionary variants in hybrid techniques, owing to its global search capability, convergence speed, and simplicity.

Several studies in the literature have combined the Particle Swarm Optimization variant with other metaheuristics, such as hybrid Particle Swarm Optimization with the Genetic Algorithm (PSOGA) [3, 4], Particle Swarm Optimization with Differential Evolution (PSODE) [5], and Particle Swarm Optimization with Ant Colony Optimization (PSOACO) [6]. These hybrid algorithms aim to reduce the probability of becoming trapped in a local optimum. Recently a new nature-inspired optimization technique, GSA, has been introduced [7]. Various hybrid variants of Particle Swarm Optimization are discussed below.

Ahmed et al. [8] presented a hybrid variant of PSO called HPSOM. The main idea of HPSOM was to integrate Particle Swarm Optimization (PSO) with the Genetic Algorithm (GA) mutation technique. The performance of the hybrid variant was tested on a number of classical functions and, on the basis of the results obtained, the authors showed that the hybrid variant significantly outperforms the Particle Swarm Optimization variant in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.

Mirjalili and Hashim [9] proposed a new hybrid population-based algorithm (PSOGSA) combining PSO and the Gravitational Search Algorithm (GSA). The main idea is to integrate the exploitation capability of Particle Swarm Optimization with the exploration capability of the Gravitational Search Algorithm to synthesize both variants' strengths. The performance of the hybrid variant was tested on a number of benchmark functions. On the basis of the results obtained, the authors showed that the hybrid variant possesses a better capability to escape from local optima, with faster convergence, than PSO and GSA.

Zhang et al. [10] presented a hybrid variant combining PSO with the back-propagation (BP) algorithm, called the PSO-BP algorithm. This variant makes use not only of the strong global search ability of PSO but also of the strong local search ability of the BP algorithm. The convergence speed and accuracy of the new hybrid variant were tested on a number of classical functions. On the basis of the experimental results, the authors showed that the hybrid variant outperforms both the BP algorithm and the Adaptive Particle Swarm Optimization Algorithm (APSOA) in terms of solution quality and convergence speed.

Ouyang et al. [11] presented a hybrid PSO variant combining the advantages of PSO and the Nelder-Mead Simplex Method (SM). It was put forward to solve systems of nonlinear equations and can overcome both the difficulty of selecting a good initial guess for SM and the inaccuracy of PSO caused by its tendency to become trapped in local optima.

Experimental results show that this hybrid variant has high precision, a high convergence rate, and great robustness, and that it can produce suitable solutions of nonlinear equations.

Yu et al. [12] proposed a new hybrid Particle Swarm Optimization variant that combines a modified velocity model with space transformation search. Experimental studies on eight classical test problems reveal that the hybrid PSO performs well on both multimodal and unimodal problems.

Yu et al. [13] proposed a novel algorithm, HPSO-DE, by developing a balanced parameter between PSO and DE. The quality of this hybrid variant was tested on a number of benchmark functions. Compared with the Particle Swarm Optimization and Differential Evolution variants, the new hybrid variant finds better-quality solutions more frequently and works in a more effective way.

Abd-Elazim and Ali [14] presented a new hybrid variant combining the bacterial foraging optimization algorithm (BFOA) and PSO, called bacterial swarm optimization (BSO). In this hybrid variant, the search directions of the tumble behavior of each bacterium are oriented by the global best location and the individual best location from Particle Swarm Optimization. The performance of the new variant was compared with the PSO and BFOA variants. On the basis of the results obtained, the authors demonstrated the validity of the hybrid variant for tuning an SVC compared with other metaheuristics.

The Grey Wolf Optimizer is a recently developed metaheuristic inspired by the hunting mechanism and leadership hierarchy of grey wolves in nature. It has been successfully applied to optimizing key values in cryptography algorithms [15], feature subset selection [16], time series forecasting [17], the optimal power flow problem [18], economic dispatch problems [19], the flow shop scheduling problem [20], and the optimal design of double-layer grids [21]. Several algorithms have also been developed to improve the convergence performance of the Grey Wolf Optimizer, including parallelized GWO [22, 23], binary GWO [24], integration of DE with GWO [25], hybrid GWO with the Genetic Algorithm (GA) [26], hybrid DE with GWO [27], and a hybrid Grey Wolf Optimizer using an elite opposition-based learning strategy and the simplex method [28].

Mittal et al. [29] developed a modified variant of GWO called the modified Grey Wolf Optimizer (mGWO), in which an exponential decay function is used to balance exploitation and exploration in the search space over the course of generations. On the basis of the results obtained, the authors showed that the modified variant benefits from higher exploration than the standard Grey Wolf Optimizer; its performance was verified on a number of standard benchmark and real-life NP-hard problems.

S. Singh and S. B. Singh [30] presented a modified approach to GWO called the Mean Grey Wolf Optimizer (MGWO), obtained by modifying the position update (encircling behavior) equations of GWO. The MGWO approach was tested on various standard benchmark functions, and its accuracy was verified against PSO and GWO. In addition, the authors considered five classification datasets to check the accuracy of the modified variant. The results obtained were compared with those of several other metaheuristic approaches, that is, Grey Wolf Optimization, Particle Swarm Optimization, Population-Based Incremental Learning (PBIL), Ant Colony Optimization (ACO), and so forth. On the basis of the statistical results, the modified variant was observed to find the best solutions, with a high level of classification accuracy and improved local optima avoidance.

N. Singh and S. B. Singh [31] presented a new hybrid swarm intelligence heuristic called HGWOSCA, exercised on twenty-two benchmark test problems, five biomedical dataset problems, and one sine dataset problem. The hybrid GWOSCA combines the Grey Wolf Optimizer (GWO) for the exploitation phase with the Sine Cosine Algorithm (SCA) for the exploration phase in an uncertain environment. The movement directions and speed of the grey wolf (alpha) are improved using the position update equations of SCA. The numerical and statistical results obtained with the hybrid GWOSCA approach were compared with other metaheuristic approaches such as Particle Swarm Optimization (PSO), the Ant Lion Optimizer (ALO), the Whale Optimization Algorithm (WOA), the Hybrid Approach GWO (HAGWO), Mean GWO (MGWO), the Grey Wolf Optimizer (GWO), and the Sine Cosine Algorithm (SCA). The results demonstrate that the new hybrid approach can be highly effective in solving benchmark and real-life applications, with or without constraints, in unknown search areas.

In this study, we present a new hybrid variant combining the PSO and GWO variants, named HPSOGWO. We use twenty-three unimodal, multimodal, and fixed-dimension multimodal functions to compare the performance of the hybrid variant with both standard PSO and standard GWO.

The rest of the paper is structured as follows. The Particle Swarm Optimization (PSO) and Grey Wolf Optimizer (GWO) algorithms are discussed in Sections 2 and 3, respectively. The HPSOGWO mathematical model and pseudocode (shown in Pseudocode 1) are discussed in Section 4. The benchmark test functions are presented in Section 5, and the results and discussion in Section 6. Finally, the conclusion of the work is offered in Section 7.

Pseudocode 1: Pseudocode of the proposed variant (HPSOGWO).

2. Particle Swarm Optimization Variant

The PSO algorithm was first introduced by Kennedy and Eberhart [32]; its fundamental idea was primarily inspired by simulation of the social behavior of animals such as bird flocking and fish schooling. While searching for food, birds either scatter or move together before settling on a position where food can be found. As the birds search from one position to another, there is always a bird that can smell the food very well; that is, it is aware of the position where the food can be found, holding the correct food resource message. Because they transmit this message, particularly the useful message, at every stage of the search, the birds eventually flock to the position where food can be found.

This behavior is adapted to computing global optima of functions, and every member of the swarm is called a particle. In the PSO technique, the position of each member of the swarm in the search space is updated by two equations, a velocity update and a position update:

v_i^{k+1} = w v_i^k + c_1 r_1 (pbest_i - x_i^k) + c_2 r_2 (gbest - x_i^k),
x_i^{k+1} = x_i^k + v_i^{k+1},

where w is the inertia weight, c_1 and c_2 are acceleration coefficients, r_1 and r_2 are random numbers in [0, 1], pbest_i is the best position found so far by particle i, and gbest is the best position found so far by the whole swarm.
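The particle update just described can be sketched in a few lines. This is an illustrative implementation of the standard PSO update for one particle; the parameter values (w = 0.7, c1 = c2 = 2.0) are common textbook choices, not necessarily the settings used in this paper.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One standard PSO step for a single particle (illustrative sketch)."""
    r1, r2 = random.random(), random.random()
    # Velocity update: inertia + cognitive pull (pbest) + social pull (gbest).
    new_v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    # Position update: move by the new velocity.
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

Starting from the origin with zero velocity, the new position equals the new velocity, and both point toward the personal and global bests.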

3. Grey Wolf Optimizer (GWO)

The above literature shows that many swarm intelligence approaches have been developed, many of them inspired by search and hunting behaviors. However, no swarm intelligence approach in the literature mimicked the leadership hierarchy of grey wolves, which are well known for their pack hunting. Motivated by this gap, Mirjalili et al. [33] presented a new swarm intelligence approach known as the Grey Wolf Optimizer (GWO), inspired by grey wolves, and investigated its abilities in solving standard and real-life applications. The GWO variant mimics the hunting mechanism and leadership hierarchy of grey wolves in nature. In GWO, the population is split into four groups, alpha, beta, delta, and omega, which are employed to simulate the leadership hierarchy.

The grey wolf belongs to the Canidae family. Grey wolves are regarded as apex predators, meaning that they are at the top of the food chain. They mostly prefer to live in a pack. The leaders, a female and a male, are known as alphas. The alpha (α) is generally responsible for making decisions about sleeping, time to wake, hunting, and so on.

The second level in the hierarchy of grey wolves is the beta (β). The betas are subordinate wolves that help the first-level wolf (alpha) in decision making and other pack activities. The beta should respect the alpha but commands the other lower-level wolves, acting as a discipliner for the pack. The beta reinforces the alpha's orders throughout the pack and gives feedback to the alpha.

The lowest-ranking grey wolf is the omega (ω), which plays the role of scapegoat. Omega wolves always have to submit to all the other dominant wolves and are the last wolves permitted to eat. It may seem that the omega does not have a significant role in the pack, but the entire pack has been observed to face internal struggle and trouble when the omega is lost. This is because the omega (ω) vents the violence and frustration of all the wolves, which helps satisfy the whole pack and maintain the dominance structure.

If a wolf is not an alpha (α), beta (β), or omega (ω), it is known as a subordinate (or delta, δ). Delta (δ) wolves have to submit to the alpha (α) and beta (β), but they dominate the omega (ω).

In addition, three main steps of hunting, namely, searching for prey, encircling prey, and attacking prey, are implemented to perform optimization.

The encircling behavior of each agent of the population is modeled by the following equations:

D = |C · X_p(t) - X(t)|,
X(t + 1) = X_p(t) - A · D,

where t is the current iteration, X_p is the position vector of the prey, and X is the position vector of a grey wolf. The coefficient vectors A and C are formulated as

A = 2a · r_1 - a,
C = 2 · r_2,

where the components of a decrease linearly from 2 to 0 over the course of iterations and r_1, r_2 are random vectors in [0, 1].

3.1. Hunting

In order to simulate the hunting behavior mathematically, we suppose that the alpha (α), beta (β), and delta (δ) have better knowledge of the potential location of the prey. The following equations are developed in this regard:

D_α = |C_1 · X_α - X|,  D_β = |C_2 · X_β - X|,  D_δ = |C_3 · X_δ - X|,
X_1 = X_α - A_1 · D_α,  X_2 = X_β - A_2 · D_β,  X_3 = X_δ - A_3 · D_δ,
X(t + 1) = (X_1 + X_2 + X_3) / 3.
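The encircling and hunting rules above can be combined into a single position update. The sketch below follows the standard GWO formulation of Mirjalili et al.; the function and variable names are illustrative, not taken from the paper's code.

```python
import random

def gwo_step(x, x_alpha, x_beta, x_delta, a):
    """One GWO position update for a single wolf (illustrative sketch).

    a decreases linearly from 2 to 0 over the iterations in the full
    algorithm; here it is passed in directly.
    """
    new_x = []
    for j in range(len(x)):
        guides = []
        for leader in (x_alpha, x_beta, x_delta):
            A = 2 * a * random.random() - a   # coefficient A in [-a, a]
            C = 2 * random.random()           # coefficient C in [0, 2]
            D = abs(C * leader[j] - x[j])     # distance to this leader
            guides.append(leader[j] - A * D)  # candidate position X1/X2/X3
        new_x.append(sum(guides) / 3.0)       # average of the three guides
    return new_x
```

A useful sanity check: when a = 0 (end of the run), A vanishes and each candidate collapses onto its leader, so the wolf moves to the centroid of the alpha, beta, and delta positions.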

3.2. Searching for Prey and Attacking Prey

A is a random value in the interval [-2a, 2a]. When |A| < 1, the wolves are forced to attack the prey. Searching for prey corresponds to the exploration ability, and attacking the prey to the exploitation ability. Random values of A with |A| > 1 are utilized to force the search to move away from the prey.

When |A| > 1, the members of the population are forced to diverge from the prey.

4. A Newly Hybrid Algorithm

Many researchers have presented hybridization schemes for heuristic variants. According to Talbi [34], two variants can be hybridized at low level or high level, with relay or coevolutionary techniques, as heterogeneous or homogeneous.

In this work, we hybridize Particle Swarm Optimization with the Grey Wolf Optimizer using a low-level coevolutionary mixed hybrid. The hybrid is low-level because we merge the functionalities of both variants. It is coevolutionary because the two variants are not used one after the other; instead, they run in parallel. It is mixed because two distinct variants are involved in generating the final solutions of the problem. Through this modification, we improve the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer, drawing on both variants' strengths.

In HPSOGWO, the positions of the first three agents are updated in the search space by the proposed equations (5). Instead of using the usual equations, we control the exploration and exploitation of the grey wolves in the search space by an inertia constant w. The modified set of governing equations is

D_α = |C_1 · X_α - w · X|,  D_β = |C_2 · X_β - w · X|,  D_δ = |C_3 · X_δ - w · X|.  (5)

In order to combine the PSO and GWO variants, the velocity and position update equations are proposed as follows:

v_i^{k+1} = w · (v_i^k + c_1 r_1 (X_1 - x_i^k) + c_2 r_2 (X_2 - x_i^k) + c_3 r_3 (X_3 - x_i^k)),
x_i^{k+1} = x_i^k + v_i^{k+1}.
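Assuming the hybrid update works as described (GWO-style leader tracking damped by an inertia constant w, feeding a PSO-style velocity update), a minimal sketch of one HPSOGWO step might look as follows. The function name, parameter values, and structure are illustrative assumptions, not the paper's code.

```python
import random

def hpsogwo_step(x, v, leaders, a, w=0.5, c=0.5):
    """One hypothetical HPSOGWO step for a single particle.

    leaders is the tuple (x_alpha, x_beta, x_delta); a plays the same
    role as in GWO, and w is the inertia constant of the hybrid.
    """
    # GWO part: three guide points X1, X2, X3, with the current position
    # damped by the inertia constant w inside the distance term.
    guides = []
    for leader in leaders:
        g = []
        for j in range(len(x)):
            A = 2 * a * random.random() - a
            C = 2 * random.random()
            d = abs(C * leader[j] - w * x[j])
            g.append(leader[j] - A * d)
        guides.append(g)
    # PSO part: velocity pulled toward the three guides, then position update.
    new_v, new_x = [], []
    for j in range(len(x)):
        pull = sum(c * random.random() * (g[j] - x[j]) for g in guides)
        vj = w * (v[j] + pull)
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v
```

With a = 0 and all three leaders at the same point, the guides collapse onto that point and the particle drifts toward it, which matches the intended late-run exploitation behavior.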

5. Testing Functions

In this section, twenty-three benchmark problems are used to test the ability of HPSOGWO. These problems can be divided into three groups: unimodal, multimodal, and fixed-dimension multimodal functions. The exact details of these test problems are shown in Tables 1–3.

Table 1: Unimodal benchmark functions.
Table 2: Multimodal benchmark functions.
Table 3: Fixed-dimension multimodal benchmark functions.
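As a concrete illustration of the kind of function such suites contain, the sphere function f(x) = Σ x_i² is the simplest unimodal benchmark, with global minimum 0 at the origin; the exact functions used here are those listed in Tables 1–3, so this example is purely illustrative.

```python
def sphere(x):
    """Sphere function: sum of squares, minimized at the origin."""
    return sum(xi * xi for xi in x)
```

Any optimizer under test should drive this value toward 0 as the search converges.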

6. Analysis and Discussion on the Results

The PSO, GWO, and HPSOGWO pseudocodes were coded in MATLAB R2013a and run on a machine with an Intel Core i5-430M processor, 3 GB of memory, a 320 GB HDD, a 15.6′′ HD LCD, and Intel HD Graphics. The number of search agents is 30 and the maximum number of iterations is 500; these parameter settings are applied when testing the quality of the hybrid and the other metaheuristics.

In this paper, our objective is to present the best optimal solutions in comparison with other metaheuristics. The best optimal solutions and best statistical values achieved by the HPSOGWO variant on the unimodal functions are shown in Tables 4 and 5, respectively.

Table 4: PSO, GWO, and HPSOGWO numerical results of unimodal benchmark functions.
Table 5: PSO, GWO, and HPSOGWO statistical results of unimodal benchmark functions.

First, we tested the abilities of HPSOGWO, PSO, and GWO by running each variant 30 times on each unimodal function. The algorithms have to be run well over ten times to search for the best numerical and statistical solutions; it is common practice to run an algorithm on a test problem many times and to report the best optimal solution along with the mean and standard deviation of the best results obtained in the last generation as performance metrics. The performance of the proposed hybrid variant is compared to the PSO and GWO variants in terms of the best optimal and statistical results. Similarly, the convergence behaviors of HPSOGWO, PSO, and GWO are compared graphically; see Figures 1(a)–1(g). On the basis of the results obtained and the convergence behavior of the variants, we conclude that HPSOGWO is more reliable in giving superior-quality results within a reasonable number of iterations, avoids premature convergence of the search process to a local optimum, and provides superior exploration of the search space.

Figure 1: Convergence curve of PSO, GWO, and HPSOGWO variants on unimodal functions.
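The reporting protocol described above (30 independent runs, then the best value plus the mean and standard deviation of the final results) can be sketched as follows; `run_once` is a hypothetical stand-in for one complete run of PSO, GWO, or HPSOGWO on a given function.

```python
import statistics

def summarize(run_once, runs=30):
    """Run an optimizer `runs` times and report (best, mean, stdev)
    of the final objective values, as in Tables 4-9 (sketch)."""
    finals = [run_once() for _ in range(runs)]
    return min(finals), statistics.mean(finals), statistics.stdev(finals)
```

The resulting triple is exactly the kind of entry tabulated for each algorithm and each benchmark function.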

Further, we note that the unimodal problems are suitable for benchmarking exploitation. These results therefore demonstrate the superior performance of HPSOGWO in exploiting the optimum.

Second, the performance of the proposed hybrid variant was tested on six multimodal benchmark functions. In contrast to unimodal problems, multimodal benchmark problems have many local optima, with the number rising exponentially with dimension. This makes them appropriate for benchmarking the exploration capability of an approach. The numerical and statistical results obtained from the HPSOGWO, PSO, and GWO algorithms are shown in Tables 6 and 7.

Table 6: PSO, GWO, and HPSOGWO numerical results of multimodal benchmark functions.
Table 7: PSO, GWO, and HPSOGWO statistical results of multimodal benchmark functions.

Experimental results show that the proposed variant finds superior-quality solutions without becoming trapped in local optima and attains faster convergence; see Figures 2(a)–2(f). This approach outperforms the GWO and PSO variants on the majority of the multimodal benchmark functions. The solutions obtained also show that the HPSOGWO variant has merit in terms of exploration.

Figure 2: Convergence curve of PSO, GWO, and HPSOGWO variants on multimodal functions.

Third, the solutions of the fixed-dimension multimodal benchmark functions are given in Tables 8 and 9. These functions also have many local optima, which makes them fitting for benchmarking the exploration capacity of a variant. The experimental numerical and statistical results show that the proposed variant finds superior-quality results on most of the fixed-dimension multimodal benchmark functions compared with the PSO and GWO variants. Further, the convergence behavior of these variants is plotted in Figures 3(a)–3(j). All numerical and statistical results demonstrate that the hybrid variant has merit in terms of exploration.

Table 8: PSO, GWO, and HPSOGWO numerical results of fixed-dimension multimodal benchmark functions.
Table 9: PSO, GWO, and HPSOGWO statistical results of fixed-dimension multimodal benchmark functions.
Figure 3: Convergence curve of PSO, GWO, and HPSOGWO variants on fixed-dimension multimodal functions.

Finally, the running time of the new hybrid approach was measured using the starting and ending time of the CPU (TIC and TOC), CPU time, and the clock. These results are provided in Tables 10–12, respectively. It may be seen that the hybrid algorithm solved most of the standard benchmark problems in the least time compared with the other metaheuristics.

Table 10: Time-consuming results of unimodal benchmark functions.
Table 11: Time-consuming results of multimodal benchmark functions.
Table 12: Time-consuming results of fixed-dimension multimodal benchmark functions.

To sum up, all simulation results indicate that the HPSOGWO algorithm is very helpful in improving on the efficiency of PSO and GWO in terms of result quality as well as computational effort.

7. Conclusion

In this article, a new hybrid variant is proposed that utilizes the strengths of GWO and PSO. The main idea behind its development is to improve the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer, drawing on both variants' strengths. Twenty-three classical problems are used to test the quality of the hybrid variant against GWO and PSO. The experimental results show that the hybrid variant is more reliable in giving superior-quality solutions within a reasonable number of iterations compared with PSO and GWO.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. A. E. Eiben and C. A. Schippers, “On evolutionary exploration and exploitation,” Fundamenta Informaticae, vol. 35, no. 1–4, pp. 35–50, 1998.
  2. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.
  3. X. Lai and M. Zhang, “An efficient ensemble of GA and PSO for real function optimization,” in Proceedings of the 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT '09), pp. 651–655, Beijing, China, August 2009.
  4. A. A. A. Esmin, G. Lambert-Torres, and G. B. Alvarenga, “Hybrid evolutionary algorithm based on PSO and GA mutation,” in Proceedings of the 6th International Conference on Hybrid Intelligent Systems and 4th Conference on Neuro-Computing and Evolving Intelligence (HIS-NCEI '06), Rio de Janeiro, Brazil, December 2006.
  5. L. Li, B. Xue, B. Niu, L. Tan, and J. Wang, “A novel PSO-DE-based hybrid algorithm for global optimization,” in Advanced Intelligent Computing Theories and Applications: With Aspects of Artificial Intelligence, vol. 5227 of Lecture Notes in Computer Science, pp. 785–793, Springer, Berlin, Germany, 2008.
  6. N. Holden and A. A. Freitas, “A hybrid PSO/ACO algorithm for discovering classification rules in data mining,” Journal of Artificial Evolution and Applications, vol. 2008, Article ID 316145, 11 pages, 2008.
  7. E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, “GSA: a gravitational search algorithm,” Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009.
  8. A. Ahmed, A. Esmin, and S. Matwin, “HPSOM: a hybrid particle swarm optimization algorithm with genetic mutation,” International Journal of Innovative Computing, Information and Control, vol. 9, no. 5, pp. 1919–1934, 2013.
  9. S. Mirjalili and S. Z. M. Hashim, “A new hybrid PSOGSA algorithm for function optimization,” in Proceedings of the International Conference on Computer and Information Application (ICCIA '10), pp. 374–377, Tianjin, China, November 2010.
  10. J. R. Zhang, T. M. Lok, and M. R. Lyu, “A hybrid particle swarm optimization-back-propagation algorithm for feedforward neural network training,” Applied Mathematics and Computation, vol. 185, no. 2, pp. 1026–1037, 2007.
  11. A. Ouyang, Y. Zhou, and Q. Luo, “Hybrid particle swarm optimization algorithm for solving systems of nonlinear equations,” in Proceedings of the IEEE International Conference on Granular Computing (GRC '09), pp. 460–465, Nanchang, China, August 2009.
  12. S. Yu, Z. Wu, H. Wang, Z. Chen, and H. Zhong, “A hybrid particle swarm optimization algorithm based on space transformation search and a modified velocity model,” International Journal of Numerical Analysis & Modeling, vol. 9, no. 2, pp. 371–377, 2012.
  13. X. Yu, J. Cao, H. Shan, L. Zhu, and J. Guo, “An adaptive hybrid algorithm based on particle swarm optimization and differential evolution for global optimization,” The Scientific World Journal, vol. 2014, Article ID 215472, 16 pages, 2014.
  14. S. M. Abd-Elazim and E. S. Ali, “A hybrid particles swarm optimization and bacterial foraging for power system stability enhancement,” Complexity, vol. 21, no. 2, pp. 245–255, 2015.
  15. K. Shankar and P. Eswaran, “A secure visual secret share (VSS) creation scheme in visual cryptography using elliptic curve cryptography with optimization technique,” Australian Journal of Basic & Applied Science, vol. 9, no. 36, pp. 150–163, 2015.
  16. E. Emary, H. M. Zawbaa, C. Grosan, and A. E. Hassenian, “Feature subset selection approach by gray-wolf optimization,” in Afro-European Conference for Industrial Advancement, vol. 334 of Advances in Intelligent Systems and Computing, pp. 1–13, Springer, Berlin, Germany, 2015.
  17. Y. Yusof and Z. Mustaffa, “Time series forecasting of energy commodity using grey wolf optimizer,” in Proceedings of the International Multi Conference of Engineers and Computer Scientists (IMECS '15), Hong Kong, March 2015.
  18. A. A. El-Fergany and H. M. Hasanien, “Single and multi-objective optimal power flow using grey wolf optimizer and differential evolution algorithms,” Electric Power Components and Systems, vol. 43, no. 13, pp. 1548–1559, 2015.
  19. V. K. Kamboj, S. K. Bath, and J. S. Dhillon, “Solution of non-convex economic load dispatch problem using Grey Wolf Optimizer,” Neural Computing and Applications, vol. 27, no. 8, pp. 1301–1316, 2016.
  20. G. M. Komaki and V. Kayvanfar, “Grey Wolf Optimizer algorithm for the two-stage assembly flow shop scheduling problem with release time,” Journal of Computational Science, vol. 8, pp. 109–120, 2015.
  21. S. Gholizadeh, “Optimal design of double layer grids considering nonlinear behavior by sequential grey wolf algorithm,” Journal of Optimization in Civil Engineering, vol. 5, no. 4, pp. 511–523, 2015.
  22. T.-S. Pan, T.-K. Dao, T.-T. Nguyen, and S.-C. Chu, “A communication strategy for paralleling grey wolf optimizer,” Advances in Intelligent Systems and Computing, vol. 388, pp. 253–262, 2015.
  23. J. Jayapriya and M. Arock, “A parallel GWO technique for aligning multiple molecular sequences,” in Proceedings of the International Conference on Advances in Computing, Communications and Informatics (ICACCI '15), pp. 210–215, IEEE, Kochi, India, August 2015.
  24. E. Emary, H. M. Zawbaa, and A. E. Hassanien, “Binary grey wolf optimization approaches for feature selection,” Neurocomputing, vol. 172, pp. 371–381, 2016.
  25. A. Zhu, C. Xu, Z. Li, J. Wu, and Z. Liu, “Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC,” Journal of Systems Engineering and Electronics, vol. 26, no. 2, pp. 317–328, 2015.
  26. M. A. Tawhid and A. F. Ali, “A hybrid grey wolf optimizer and genetic algorithm for minimizing potential energy function,” Memetic Computing, pp. 1–13, 2017.
  27. D. Jitkongchuen, “A hybrid differential evolution with grey wolf optimizer for continuous global optimization,” in Proceedings of the 7th International Conference on Information Technology and Electrical Engineering (ICITEE '15), pp. 51–54, Chiang Mai, Thailand, October 2015.
  28. S. Zhang, Q. Luo, and Y. Zhou, “Hybrid grey wolf optimizer using elite opposition-based learning strategy and simplex method,” International Journal of Computational Intelligence and Applications, vol. 16, no. 2, Article ID 1750012, 2017.
  29. N. Mittal, U. Singh, and B. S. Sohi, “Modified grey wolf optimizer for global engineering optimization,” Applied Computational Intelligence and Soft Computing, vol. 2016, Article ID 7950348, 16 pages, 2016.
  30. S. Singh and S. B. Singh, “Mean grey wolf optimizer,” Evolutionary Bioinformatics, vol. 13, pp. 1–28, 2017.
  31. N. Singh and S. B. Singh, “A novel hybrid GWO-SCA approach for standard and real,” Engineering Science and Technology.
  32. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
  33. S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46–61, 2014.
  34. E.-G. Talbi, “A taxonomy of hybrid metaheuristics,” Journal of Heuristics, vol. 8, no. 5, pp. 541–564, 2002.