Abstract

A new hybrid nature-inspired algorithm called HPSOGWO is presented, combining Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO). The main idea is to improve the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer, so as to combine both variants' strengths. Several unimodal, multimodal, and fixed-dimension multimodal test functions are used to check the solution quality and performance of the HPSOGWO variant. The numerical and statistical results show that the hybrid variant significantly outperforms the PSO and GWO variants in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.

1. Introduction

In recent years, a number of nature-inspired optimization techniques have been developed. These include Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), the Genetic Algorithm (GA), Evolutionary Algorithms (EA), Differential Evolution (DE), Ant Colony Optimization (ACO), Biogeography-Based Optimization (BBO), the Firefly Algorithm (FA), and the Bat Algorithm (BA). The common goal of these algorithms is to find high-quality solutions with good convergence performance. To achieve this, a nature-inspired variant should be equipped with both exploration and exploitation so as to reliably find the global optimum.

Exploitation is the capability to converge to the best value of the function in the neighbourhood of a good solution, while exploration is the capability of a variant to visit all parts of the search area. The goal of every nature-inspired variant is therefore to balance exploration and exploitation capably in order to find the best global optimal solution in the search space. As Eiben and Schippers [1] note, exploitation and exploration in nature-inspired computing are not clearly delineated, owing to the lack of a generally accepted definition; moreover, when one capability is strengthened, the other tends to weaken, and vice versa.

Despite this, the existing nature-inspired variants are capable of solving a large number of test and real-life problems. However, it has been proved that there is no population-based variant that performs well enough in general to solve all types of optimization problems [2].

Particle Swarm Optimization is one of the most commonly used evolutionary variants in hybrid techniques, owing to its capability of searching for the global optimum, its convergence speed, and its simplicity.

There are several studies in the literature that combine the Particle Swarm Optimization variant with other metaheuristics, such as hybrid Particle Swarm Optimization with the Genetic Algorithm (PSOGA) [3, 4], Particle Swarm Optimization with Differential Evolution (PSODE) [5], and Particle Swarm Optimization with Ant Colony Optimization (PSOACO) [6]. These hybrid algorithms aim to reduce the probability of becoming trapped in a local optimum. Recently, a new nature-inspired optimization technique, the Gravitational Search Algorithm (GSA), was introduced [7]. Various hybrid variants of Particle Swarm Optimization are discussed below.

Ahmed et al. [8] presented a hybrid variant of PSO called HPSOM. The main idea of HPSOM was to integrate Particle Swarm Optimization (PSO) with the mutation technique of the Genetic Algorithm (GA). The performance of the hybrid variant was tested on a number of classical functions and, on the basis of the results obtained, the authors showed that the hybrid variant significantly outperforms the Particle Swarm Optimization variant in terms of solution quality, solution stability, convergence speed, and ability to find the global optimum.

Mirjalili and Hashim [9] proposed a new hybrid population-based algorithm (PSOGSA) combining PSO and the Gravitational Search Algorithm (GSA). The main idea is to integrate the exploitation capability of Particle Swarm Optimization with the exploration capability of the Gravitational Search Algorithm to synthesize both variants' strengths. The performance of the hybrid variant was tested on a number of benchmark functions. On the basis of the results obtained, the authors showed that the hybrid variant possesses a better capability to escape from local optima, with faster convergence, than PSO and GSA.

Zhang et al. [10] presented a hybrid variant combining PSO with the back-propagation (BP) algorithm, called the PSO-BP algorithm. This variant makes use not only of the strong global searching ability of PSO but also of the strong local searching ability of the BP algorithm. The convergence speed and accuracy of the new hybrid variant were tested on a number of classical functions. On the basis of the experimental results, the authors showed that the hybrid variant is better than the BP algorithm and the Adaptive Particle Swarm Optimization Algorithm (APSOA) in terms of solution quality and convergence speed.

Ouyang et al. [11] presented a hybrid PSO variant that combines the advantages of PSO and the Nelder-Mead Simplex Method (SM). It is put forward to solve systems of nonlinear equations and can be used to overcome both the difficulty of selecting a good initial guess for SM and the inaccuracy of PSO caused by its tendency to become trapped in local optima.

Experimental results show that the hybrid variant has high precision, a high convergence rate, and great robustness, and that it can produce suitable solutions of systems of nonlinear equations.

Yu et al. [12] proposed a new hybrid Particle Swarm Optimization variant that combines a modified velocity model with space transformation search. Experimental studies on eight classical test problems reveal that the hybrid PSO performs well on both multimodal and unimodal problems.

Yu et al. [13] proposed a novel algorithm, HPSO-DE, by developing a balanced parameter between PSO and DE. The quality of this hybrid variant was tested on a number of benchmark functions. In comparison with the Particle Swarm Optimization and Differential Evolution variants, the new hybrid variant finds better-quality solutions more frequently and obtains them in a more effective way.

Abd-Elazim and Ali [14] presented a new hybrid variant combining the Bacterial Foraging Optimization Algorithm (BFOA) and PSO, namely, Bacterial Swarm Optimization (BSO). In this hybrid variant, the search directions of the tumble behavior of each bacterium are oriented by the global best location and the individual best location of Particle Swarm Optimization. The performance of the new variant was compared with the PSO and BFOA variants. On the basis of the obtained results, the authors showed the validity of the hybrid variant in tuning an SVC compared with other metaheuristics.

The Grey Wolf Optimizer is a recently developed metaheuristic inspired by the hunting mechanism and leadership hierarchy of grey wolves in nature. It has been successfully applied to optimizing key values in cryptography algorithms [15], feature subset selection [16], time series forecasting [17], the optimal power flow problem [18], economic dispatch problems [19], the flow shop scheduling problem [20], and the optimal design of double-layer grids [21]. Several algorithms have also been developed to improve the convergence performance of the Grey Wolf Optimizer, including parallelized GWO [22, 23], binary GWO [24], integration of DE with GWO [25], hybrid GWO with the Genetic Algorithm (GA) [26], hybrid DE with GWO [27], and a hybrid Grey Wolf Optimizer using an elite opposition-based learning strategy and the simplex method [28].

Mittal et al. [29] developed a modified variant of GWO called the modified Grey Wolf Optimizer (mGWO). An exponential decay function is used to improve the balance between exploitation and exploration in the search space over the course of the generations. On the basis of the obtained results, the authors showed that the modified variant benefits from higher exploration in comparison to the standard Grey Wolf Optimizer; the performance of the variant was verified on a number of standard benchmark and real-life NP-hard problems.

S. Singh and S. B. Singh [30] presented a new modified approach of GWO called the Mean Grey Wolf Optimizer (MGWO). This approach was derived by modifying the position update (encircling behavior) equations of GWO. The MGWO approach was tested on various standard benchmark functions and its accuracy was verified against PSO and GWO. In addition, the authors also considered five classification datasets to check the accuracy of the modified variant. The obtained results were compared with results from several other metaheuristic approaches, that is, Grey Wolf Optimization, Particle Swarm Optimization, Population-Based Incremental Learning (PBIL), Ant Colony Optimization (ACO), and so forth. On the basis of the statistical results, it was observed that the modified variant is able to find the best solutions in terms of a high level of classification accuracy and improved local optima avoidance.

N. Singh and S. B. Singh [31] presented a new hybrid swarm intelligence heuristic called HGWOSCA, which was exercised on twenty-two benchmark test problems, five biomedical dataset problems, and one sine dataset problem. The hybrid GWOSCA is a combination of the Grey Wolf Optimizer (GWO), used for the exploitation phase, and the Sine Cosine Algorithm (SCA), used for the exploration phase in an uncertain environment. The movement directions and speed of the grey wolf (alpha) are improved using the position update equations of SCA. The numerical and statistical results obtained with the hybrid GWOSCA approach were compared with other metaheuristic approaches such as Particle Swarm Optimization (PSO), the Ant Lion Optimizer (ALO), the Whale Optimization Algorithm (WOA), the Hybrid Approach GWO (HAGWO), the Mean GWO (MGWO), the Grey Wolf Optimizer (GWO), and the Sine Cosine Algorithm (SCA). The results demonstrate that the new hybrid approach can be highly effective in solving benchmark and real-life applications, with or without constraints and in unknown search areas.

In this study, we present a new hybrid variant combining the PSO and GWO variants, named HPSOGWO. We use twenty-three unimodal, multimodal, and fixed-dimension multimodal functions to compare the performance of the hybrid variant with both standard PSO and standard GWO.

The rest of the paper is structured as follows. The Particle Swarm Optimization (PSO) and Grey Wolf Optimizer (GWO) algorithms are discussed in Sections 2 and 3, respectively. The HPSOGWO mathematical model and pseudocode (shown in Pseudocode 1) are discussed in Section 4. The benchmark test functions are presented in Section 5, and the results and discussion are presented in Section 6. Finally, the conclusion of the work is offered in Section 7.

Initialization
  Initialize a, A, and C
  Generate the initial population of search agents with random positions and velocities
  Evaluate the fitness of the agents by using (5)
while (t < maximum number of iterations)
  for each search agent
    Update the velocity and position by using (6)
  end for
  Update a, A, and C
  Evaluate the fitness of all search agents
  Update the positions of the first three best search agents (α, β, δ)
end while
return x_α   // position of the first best search agent

Pseudocode 1: The HPSOGWO algorithm.

2. Particle Swarm Optimization Variant

The PSO algorithm was first introduced by Kennedy and Eberhart [32]; its fundamental idea was primarily inspired by simulating the social behavior of animals such as bird flocking and fish schooling. While searching for food, the birds are either scattered or move together before they settle on the position where the food can be found. While the birds move from one position to another in search of food, there is always a bird that can smell the food very well; that is, the bird is aware of the place where the food can be found and carries the correct information about the food source. Because the birds transmit this message, particularly the useful information, at every moment of the search, the flock will eventually converge on the position where the food can be found.

This approach, learned from animal behavior, is used to solve global optimization problems, and every member of the swarm/crowd is called a particle. In the PSO technique, the position of each member of the crowd in the global search space is updated by two mathematical equations, a velocity update and a position update:
$$v_i^{k+1} = w\, v_i^k + c_1 r_1 \left( pbest_i - x_i^k \right) + c_2 r_2 \left( gbest - x_i^k \right), \qquad x_i^{k+1} = x_i^k + v_i^{k+1},$$
where $v_i^k$ and $x_i^k$ are the velocity and position of particle $i$ at iteration $k$, $w$ is the inertia weight, $c_1$ and $c_2$ are acceleration constants, $r_1$ and $r_2$ are random numbers in $[0, 1]$, $pbest_i$ is the best position found so far by particle $i$, and $gbest$ is the best position found so far by the whole swarm.
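As a minimal illustration of this update (our own sketch in Python/NumPy, not the authors' MATLAB code; the values of w, c1, and c2 below are common defaults rather than the settings used in the paper):

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=np.random.default_rng()):
    """One velocity/position update for every particle in the swarm.

    x, v   : (n_particles, dim) current positions and velocities
    pbest  : (n_particles, dim) best position found by each particle so far
    gbest  : (dim,)             best position found by the whole swarm
    """
    r1 = rng.random(x.shape)   # uniform random numbers in [0, 1)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new
    return x_new, v_new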

3. Grey Wolf Optimizer (GWO)

The above literature shows that many swarm intelligence approaches have been proposed so far, many of them inspired by search and hunting behaviors. However, there was no swarm intelligence approach in the literature mimicking the leadership hierarchy of grey wolves, which are well known for their pack hunting. Motivated by this, Mirjalili et al. [33] presented a new swarm intelligence approach known as the Grey Wolf Optimizer (GWO), inspired by grey wolves, and investigated its abilities in solving standard and real-life applications. The GWO variant mimics the hunting mechanism and leadership hierarchy of grey wolves in nature. In GWO, the crowd is split into four different groups, alpha, beta, delta, and omega, which are employed to simulate the leadership hierarchy.

The grey wolf belongs to the Canidae family. Grey wolves are considered apex predators, meaning that they are at the top of the food chain. Grey wolves mostly prefer to live in a pack. The leaders are a female and a male, known as alphas. The alpha (α) is generally responsible for making decisions about sleeping place, time to wake, hunting, and so on.

The second level in the hierarchy of grey wolves is the beta (β). The betas are subordinate wolves that help the first-level wolf (alpha) in decision making and other pack activities. The second-level wolf (beta) should respect the first-level wolf (alpha) but commands the other lower-level wolves and acts as a discipliner for the pack. The second-level wolf (beta) reinforces the first-level wolf's (alpha's) orders throughout the pack and gives feedback to the alpha.

The lowest-ranking grey wolf is the omega (ω). This wolf plays the role of scapegoat. Omega wolves always have to submit to all the other dominant wolves, and they are the last wolves that are permitted to eat. It may seem that the omega wolves do not have a significant role in the pack, but it has been observed that the entire pack faces internal struggle and troubles if the omega is lost. This is due to the venting of violence and frustration of all wolves onto the omega (ω), which helps satisfy the whole pack and maintain the dominance structure.

If a wolf is not an alpha (α), beta (β), or omega (ω), she/he is known as a subordinate, or delta (δ). Delta (δ) wolves have to submit to the alpha (α) and beta (β), but they dominate the omega (ω).

In addition, the three main steps of hunting (searching for prey, encircling prey, and attacking prey) are implemented to perform optimization.

The encircling behavior of each agent of the crowd is modelled by the following mathematical equations:
$$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right|, \qquad \vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D},$$
where $t$ is the current iteration, $\vec{X}_p$ is the position vector of the prey, and $\vec{X}$ is the position vector of a grey wolf. The coefficient vectors $\vec{A}$ and $\vec{C}$ are mathematically formulated as follows:
$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}, \qquad \vec{C} = 2\,\vec{r}_2,$$
where the components of $\vec{a}$ are linearly decreased from 2 to 0 over the course of iterations and $\vec{r}_1$, $\vec{r}_2$ are random vectors in $[0, 1]$.
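A minimal NumPy sketch of this encircling step follows; it assumes the standard linear decay of a from 2 to 0, and the function and variable names are ours, not the paper's:

import numpy as np

def encircle(x, x_prey, a, rng=np.random.default_rng()):
    """Move one search agent towards (or around) the prey position.

    x, x_prey : (dim,) current agent position and estimated prey position
    a         : scalar that is linearly decreased from 2 to 0 over the iterations
    """
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    A = 2 * a * r1 - a            # |A| < 1 -> attack, |A| > 1 -> diverge
    C = 2 * r2
    D = np.abs(C * x_prey - x)    # distance to the prey
    return x_prey - A * D         # new position of the agent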

3.1. Hunting

In order to mathematically simulate the hunting behavior, we suppose that the alpha (α), beta (β), and delta (δ) have better knowledge about the potential location of the prey. The following mathematical equations are used in this regard:
$$\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X} \right|, \quad \vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - \vec{X} \right|, \quad \vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - \vec{X} \right|,$$
$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta,$$
$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}.$$
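The hunting update can be sketched in the same style; again, this is an illustrative reimplementation under the same assumptions as the previous sketch, not the authors' code:

import numpy as np

def gwo_hunt(x, x_alpha, x_beta, x_delta, a, rng=np.random.default_rng()):
    """Update one agent from the three best wolves (alpha, beta, delta)."""
    candidates = []
    for leader in (x_alpha, x_beta, x_delta):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A = 2 * a * r1 - a
        C = 2 * r2
        D = np.abs(C * leader - x)
        candidates.append(leader - A * D)
    return sum(candidates) / 3.0   # X(t+1) = (X1 + X2 + X3) / 3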

3.2. Searching for Prey and Attacking Prey

$\vec{A}$ takes random values in the interval $[-2a, 2a]$, where $a$ is decreased from 2 to 0 over the course of iterations. When $|A| < 1$, the wolves are forced to attack the prey. Searching for prey corresponds to the exploration ability, and attacking the prey corresponds to the exploitation ability. The random values of $\vec{A}$ are utilized to force the search agents to move away from the prey.

When $|A| > 1$, the members of the population are forced to diverge from the prey.
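The small sketch below, under the same assumptions as the earlier snippets (linear schedule for a, A drawn as 2*a*r1 - a), illustrates how the magnitude of A shifts the search from exploration towards exploitation as the iterations progress:

import numpy as np

def a_schedule(t, max_iter):
    """Linear decay of a from 2 to 0, the standard GWO schedule."""
    return 2.0 - t * (2.0 / max_iter)

# Early iterations: a is close to 2, so |A| often exceeds 1 and agents are
# pushed away from the prey (exploration). Late iterations: a is close to 0,
# so |A| < 1 and agents converge on the prey (exploitation).
rng = np.random.default_rng(0)
for t in (0, 250, 499):
    a = a_schedule(t, 500)
    A = 2 * a * rng.random(5) - a
    print(t, round(a, 3), np.abs(A) < 1)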

4. A New Hybrid Algorithm

Many researchers have presented hybridization strategies for heuristic variants. According to Talbi [34], two variants can be hybridized at a low level or a high level, with relay or coevolutionary techniques, as heterogeneous or homogeneous.

In this work, we hybridize Particle Swarm Optimization with the Grey Wolf Optimizer algorithm using a low-level coevolutionary mixed hybrid. The hybrid is low level because we merge the functionalities of both variants. It is coevolutionary because we do not use the variants one after the other; in other words, they run in parallel. It is mixed because two distinct variants are involved in generating the final solutions of the problem. On the basis of this modification, we improve the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer to combine both variants' strengths.

In HPSOGWO, the positions of the first three agents are updated in the search space by the proposed mathematical equations (5). Instead of using the usual mathematical equations, we control the exploration and exploitation of the grey wolf in the search space by an inertia constant. The modified set of governing equations is
$$\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - w \cdot \vec{X} \right|, \quad \vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - w \cdot \vec{X} \right|, \quad \vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - w \cdot \vec{X} \right|. \quad (5)$$
In order to combine the PSO and GWO variants, the velocity and position update equations are proposed as follows:
$$v_i^{k+1} = w \left( v_i^k + c_1 r_1 \left( x_1 - x_i^k \right) + c_2 r_2 \left( x_2 - x_i^k \right) + c_3 r_3 \left( x_3 - x_i^k \right) \right), \qquad x_i^{k+1} = x_i^k + v_i^{k+1}, \quad (6)$$
where $x_1$, $x_2$, and $x_3$ are the positions computed from the alpha, beta, and delta wolves as in the hunting equations of Section 3.1, $w$ is the inertia constant, $c_1$, $c_2$, $c_3$ are acceleration constants, and $r_1$, $r_2$, $r_3$ are random numbers in $[0, 1]$.
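The following is a hedged sketch of one HPSOGWO iteration following equations (5) and (6). The values assigned to the inertia constant w and the acceleration constants c1, c2, c3 are assumptions made for illustration, not the settings reported by the authors:

import numpy as np

def hpsogwo_step(x, v, x_alpha, x_beta, x_delta, a, w=0.5,
                 c=(0.5, 0.5, 0.5), rng=np.random.default_rng()):
    """One HPSOGWO iteration for the whole population.

    x, v                     : (n, dim) positions and velocities
    x_alpha, x_beta, x_delta : (dim,) three best solutions found so far
    a                        : GWO control parameter, decayed from 2 to 0
    w                        : inertia constant damping the grey wolves' moves
    c                        : acceleration constants c1, c2, c3 (assumed values)
    """
    x123 = []
    for leader in (x_alpha, x_beta, x_delta):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A = 2 * a * r1 - a
        C = 2 * r2
        D = np.abs(C * leader - w * x)     # inertia-damped distance, eq. (5)
        x123.append(leader - A * D)
    x1, x2, x3 = x123

    # Merged PSO/GWO velocity and position update, eq. (6)
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
    v_new = w * (v + c[0] * r1 * (x1 - x) + c[1] * r2 * (x2 - x) + c[2] * r3 * (x3 - x))
    x_new = x + v_new
    return x_new, v_new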

5. Test Functions

In this section, twenty-three benchmark problems are used to test the ability of HPSOGWO. These problems can be divided into three different groups: unimodal, multimodal, and fixed-dimension multimodal functions. The exact details of these test problems are given in Tables 1–3.
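For concreteness, two standard test functions of the kind listed in Tables 1–3 can be coded as below (the Sphere function as a unimodal case and the Rastrigin function as a multimodal case; whether exactly these functions appear in the tables is an assumption on our part):

import numpy as np

def sphere(x):
    """Unimodal: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Multimodal: many local optima, global minimum 0 at the origin."""
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))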

6. Analysis and Discussion on the Results

The PSO, GWO, and HPSOGWO pseudocodes were coded in MATLAB R2013a and run on a machine with an Intel Core i5-430M processor, 3 GB of memory, a 320 GB HDD, Intel HD Graphics, and a 15.6′′ HD LCD. The number of search agents is 30 and the maximum number of iterations is 500; these parameter settings are applied to test the quality of the hybrid and the other metaheuristics.

In this paper, our objective is to find the best optimal solution as compared to other metaheuristics. The best optimal solutions and the best statistical values achieved by the HPSOGWO variant for the unimodal functions are shown in Tables 4 and 5, respectively.

Firstly, we tested the ability of the HPSOGWO, PSO, and GWO variants, which were run 30 times on each unimodal function. The HPSOGWO, GWO, and PSO algorithms have to be run more than ten times to obtain reliable numerical and statistical results; it is common practice to run an algorithm on a test problem many times and to report the best optimal solution together with the mean and standard deviation of the best results obtained in the last generation as performance metrics. The performance of the proposed hybrid variant is compared to the PSO and GWO variants in terms of the best optimal and statistical results. Similarly, the convergence performance of the HPSOGWO, PSO, and GWO variants has been compared graphically; see Figures 1(a)–1(g). On the basis of the obtained results and the convergence performance of the variants, we conclude that HPSOGWO is more reliable in giving superior-quality results within a reasonable number of iterations, avoids premature convergence of the search process to a local optimum, and provides superior exploration of the search space.
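A sketch of this evaluation protocol (30 independent runs, best value per run, then mean and standard deviation), reusing hpsogwo_step and sphere from the earlier sketches; the initialization range [-100, 100] and the dimension are assumptions:

import numpy as np

def run_hpsogwo(fn, dim=30, n_agents=30, max_iter=500, seed=0):
    """One independent run of the hybrid; returns the best value found."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-100, 100, (n_agents, dim))   # assumed search range
    v = np.zeros_like(x)
    best_val = np.inf
    for t in range(max_iter):
        fitness = np.apply_along_axis(fn, 1, x)
        order = np.argsort(fitness)
        x_alpha, x_beta, x_delta = x[order[0]], x[order[1]], x[order[2]]
        best_val = min(best_val, fitness[order[0]])
        a = 2.0 - t * (2.0 / max_iter)             # GWO decay schedule
        x, v = hpsogwo_step(x, v, x_alpha, x_beta, x_delta, a, rng=rng)
    return best_val

# 30 independent runs, then mean and standard deviation of the best values
bests = [run_hpsogwo(sphere, seed=s) for s in range(30)]
print("mean:", np.mean(bests), "std:", np.std(bests))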

Further, we note that the unimodal problems are suitable for benchmarking exploitation. Therefore, these results demonstrate the superior performance of HPSOGWO in terms of exploiting the optimum.

Secondly, the performance of the proposed hybrid variant was tested on six multimodal benchmark functions. In contrast to the unimodal problems, multimodal benchmark problems have many local optima, with the number rising exponentially with dimension. This makes them appropriate for benchmarking the exploration capability of an approach. The numerical and statistical results obtained from the HPSOGWO, PSO, and GWO algorithms are shown in Tables 6 and 7.

The experimental results show that the proposed variant finds superior-quality solutions without becoming trapped in local optima and attains faster convergence; see Figures 2(a)–2(f). This approach outperforms the GWO and PSO variants on the majority of the multimodal benchmark functions. The obtained solutions also prove that the HPSOGWO variant has merit in terms of exploration.

Thirdly, the results for the fixed-dimension multimodal benchmark functions are given in Tables 8 and 9. These benchmark functions have many local optima, which makes them suitable for benchmarking the exploration capacity of a variant. The experimental numerical and statistical results show that the proposed variant is able to find superior-quality results on the majority of the fixed-dimension multimodal benchmark functions as compared to the PSO and GWO variants. Further, the convergence performance of these variants is plotted in Figures 3(a)–3(j). All numerical and statistical results demonstrate that the hybrid variant has merit in terms of exploration.

Finally, the computational effort of the new hybrid approach has been assessed using the start and end times of the CPU (tic and toc), CPU time, and the clock. These results are provided in Tables 10–12, respectively. It can be seen that the hybrid algorithm solved most of the standard benchmark problems in the least time as compared to the other metaheuristics.
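The paper records wall-clock time with MATLAB's tic and toc; an equivalent measurement around the Python sketches above could use time.perf_counter (run_hpsogwo and sphere are the hypothetical helpers defined earlier):

import time

start = time.perf_counter()            # analogue of MATLAB's tic
best = run_hpsogwo(sphere)             # from the sketch above
elapsed = time.perf_counter() - start  # analogue of toc
print(f"best value {best:.3e} found in {elapsed:.2f} s")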

To sum up, all simulation results assert that the HPSOGWO algorithm is very helpful in improving the efficiency of PSO and GWO in terms of result quality as well as computational effort.

7. Conclusion

In this article, a new hybrid variant was proposed that utilizes the strengths of GWO and PSO. The main idea behind its development is to improve the exploitation ability of Particle Swarm Optimization with the exploration ability of the Grey Wolf Optimizer, thereby combining both variants' strengths. Twenty-three classical problems were used to test the quality of the hybrid variant compared to GWO and PSO. The experimental results show that the hybrid variant is more reliable in giving superior-quality solutions with a reasonable number of iterations as compared to PSO and GWO.

Conflicts of Interest

The authors declare no conflicts of interest.