Applied Computational Intelligence and Soft Computing

Volume 2016, Article ID 7950348, 16 pages

http://dx.doi.org/10.1155/2016/7950348

## Modified Grey Wolf Optimizer for Global Engineering Optimization

^{1}Department of Electronics and Communication Engineering, Chandigarh University, Mohali, Punjab 140413, India

^{2}Department of Electronics and Communication Engineering, Thapar University, Patiala, Punjab 147004, India

Received 30 November 2015; Accepted 3 April 2016

Academic Editor: Samuel Huang

Copyright © 2016 Nitin Mittal et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Nature-inspired algorithms are becoming popular among researchers due to their simplicity and flexibility. These metaheuristic algorithms are analysed in terms of their key features, such as their diversity and adaptation, exploration and exploitation, and attraction and diffusion mechanisms. The success and challenges of these algorithms rest on their parameter tuning and parameter control. A comparatively new algorithm, motivated by the social hierarchy and hunting behavior of grey wolves, is the Grey Wolf Optimizer (GWO), which has been very successful at solving real mechanical and optical engineering problems. In the original GWO, half of the iterations are devoted to exploration and the other half to exploitation, overlooking the impact of the right balance between the two on guaranteeing an accurate approximation of the global optimum. To overcome this shortcoming, a modified GWO (mGWO) is proposed, which focuses on a proper balance between exploration and exploitation, leading to optimal performance of the algorithm. Simulations based on benchmark problems and a WSN clustering problem demonstrate the effectiveness, efficiency, and stability of mGWO compared with the basic GWO and some well-known algorithms.

#### 1. Introduction

Metaheuristic algorithms are powerful methods for solving many real-world engineering problems. The majority of these algorithms have been derived from the survival-of-the-fittest principle of evolutionary algorithms, the collective intelligence of swarms, the behavior of biological organisms, and/or physical processes in nature.

Evolutionary algorithms are those that mimic the evolutionary processes in nature. They are based on the survival of the fittest candidate for a given environment. These algorithms begin with a population (a set of solutions) that tries to survive in an environment (defined by a fitness evaluation). The parent population passes its properties of adaptation to the environment on to the children through various mechanisms of evolution, such as genetic crossover and mutation. The process continues over a number of generations (an iterative process) until solutions are found that are most suitable for the environment. Some of the evolutionary algorithms are Genetic Algorithm (GA) [1], Evolution Strategies (ES) [2], Genetic Programming (GP) [3], Differential Evolution (DE) [4], and Biogeography-Based Optimization (BBO) [5–9].

The physical algorithms are inspired by physical processes such as the heating and cooling of materials (Simulated Annealing [10]), discrete cultural information treated as intermediate between genetic and cultural evolution (Memetic Algorithm [11]), the harmony of music played by musicians (Harmony Search [12, 13]), and the cultural behavior of frogs (Shuffled Frog-Leaping Algorithm [14]); further examples include the Gravitational Search Algorithm [15], Multiverse Optimizer (MVO) [16], and Chemical Reaction Optimization (CRO) [17].

Swarm intelligence is the group of natural metaheuristics inspired by the "collective intelligence" of swarms. The collective intelligence is built up through a population of homogeneous agents interacting with each other and with their environment. Examples of such intelligence are found among colonies of ants, flocks of birds, schools of fish, and so forth. Particle Swarm Optimization (PSO) [18] is developed based on the swarm behavior of birds. The firefly algorithm [19] is formulated based on the flashing behavior of fireflies. Bat Algorithm (BA) [20] is based on the echolocation behavior of bats. Ant Colony Optimization (ACO) [21, 22] is inspired by the pheromone trail laying behavior of real ant colonies. The Cuckoo Search (CS) Algorithm [23] is inspired by the lifestyle of cuckoo birds. Other major algorithms in this group include Artificial Bee Colony (ABC) Algorithm [24], Fish Swarm Algorithm (FSA) [25], Glowworm Swarm Optimization (GSO) [26], Grey Wolf Optimizer (GWO) [27], Fruit Fly Optimization Algorithm (FFOA) [28], Novel Bat Algorithm (NBA) [29], Dragonfly Algorithm (DA) [30], Cat Swarm Optimization (CSO) [31], Cuckoo Optimization Algorithm (COA) [32], and Spider Monkey Optimization (SMO) Algorithm [33].

The biologically inspired algorithms comprise natural metaheuristics derived from living phenomena and behavior of biological organisms. The intelligence derived with bioinspired algorithms is decentralized, distributed, self-organizing, and adaptive in nature under uncertain environments. The major algorithms in this field include Artificial Immune Systems (AIS) [34], Bacterial Foraging Optimization (BFO) [35], and Krill Herd Algorithm [36].

Because of their inherent advantages, such algorithms can be applied to various applications including power systems operations and control, job scheduling problems, clustering and routing problems, batch process scheduling, image processing, and pattern recognition problems.

GWO is a recently developed heuristic inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. It has been successfully applied to solving economic dispatch problems [37], feature subset selection [38], optimal design of double-layer grids [39], time series forecasting [40], the flow shop scheduling problem [41], the optimal power flow problem [42], and optimizing key values in cryptography algorithms [43]. A number of variants have also been proposed to improve the performance of the basic GWO, including a binary GWO [44], a hybrid version of GWO with PSO [45], an integration of DE with GWO [46], and parallelized GWO [47, 48].

Every optimization algorithm stated above needs to address the exploration and exploitation of a search space. In order to be successful, an optimization algorithm needs to establish a good ratio between exploration and exploitation. In this paper, a modified GWO (mGWO) is proposed to balance the exploration-exploitation trade-off in the original GWO algorithm. Decay functions with diverse slopes are employed to tune the parameters of the GWO algorithm, yielding different exploration-exploitation combinations over the course of iterations. Increasing exploration relative to exploitation increases the convergence speed and helps avoid trapping in local minima.

The rest of the paper is organized as follows. Section 2 gives the overview of original GWO. The proposed mGWO algorithm is explained in Section 3. The experimental results are demonstrated in Section 4. Section 5 solves the clustering problem in WSN for cluster head selection to demonstrate the applicability of the proposed algorithm. Finally, Section 6 concludes the paper.

#### 2. Overview of Grey Wolf Optimizer Algorithm

Grey Wolf Optimizer (GWO) is a typical swarm-intelligence algorithm inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. Grey wolves are considered apex predators; they have an average group size of 5–12. In the hierarchy of GWO, alpha ($\alpha$) is considered the most dominant member of the group. The remaining subordinates to $\alpha$ are beta ($\beta$) and delta ($\delta$), which help control the majority of the wolves in the hierarchy, considered as omega ($\omega$). The $\omega$ wolves are the lowest ranking in the hierarchy.

The mathematical model of the hunting mechanism of grey wolves consists of the following:

(i) Tracking, chasing, and approaching the prey.

(ii) Pursuing, encircling, and harassing the prey until it stops moving.

(iii) Attacking the prey.

##### 2.1. Encircling Prey

Grey wolves encircle the prey during the hunt, which can be mathematically written as [27]

$$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right|,$$

$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D},$$

where $t$ indicates the current iteration, $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{X}_p$ is the position vector of the prey, and $\vec{X}$ indicates the position vector of a grey wolf.

The vectors $\vec{A}$ and $\vec{C}$ are calculated as follows:

$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a},$$

$$\vec{C} = 2 \cdot \vec{r}_2,$$

where the components of $\vec{a}$ are linearly decreased from 2 to 0 over the course of iterations and $\vec{r}_1$ and $\vec{r}_2$ are random vectors in $[0, 1]$.
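As an illustration, the encircling update above can be sketched in Python. This is a minimal sketch, not the authors' implementation; the function names `coefficients` and `encircle` are our own:

```python
import numpy as np

def coefficients(a, dim, rng):
    """Coefficient vectors for one wolf:
    A = 2*a*r1 - a, with r1 uniform in [0, 1], so A lies in [-a, a];
    C = 2*r2,       with r2 uniform in [0, 1], so C lies in [0, 2]."""
    r1 = rng.random(dim)
    r2 = rng.random(dim)
    A = 2 * a * r1 - a
    C = 2 * r2
    return A, C

def encircle(X, X_prey, a, rng):
    """One encircling step: D = |C * X_prey - X|, X_next = X_prey - A * D."""
    A, C = coefficients(a, X.size, rng)
    D = np.abs(C * X_prey - X)
    return X_prey - A * D
```

Note that when `a` reaches 0, `A` is zero and the wolf lands exactly on the prey position, which matches the attacking behavior described later.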

##### 2.2. Hunting

Hunting of prey is usually guided by $\alpha$ and $\beta$, and $\delta$ participates occasionally. The best candidate solutions, that is, $\alpha$, $\beta$, and $\delta$, have better knowledge about the potential location of prey. The other search agents ($\omega$) update their positions according to the positions of the three best search agents. The following formulas are proposed in this regard:

$$\vec{D}_\alpha = \left| \vec{C}_1 \cdot \vec{X}_\alpha - \vec{X} \right|, \quad \vec{D}_\beta = \left| \vec{C}_2 \cdot \vec{X}_\beta - \vec{X} \right|, \quad \vec{D}_\delta = \left| \vec{C}_3 \cdot \vec{X}_\delta - \vec{X} \right|,$$

$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta,$$

$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3}.$$
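The leader-guided update can be sketched as follows (a minimal sketch with our own variable names, under the standard GWO formulas; `leaders` holds the alpha, beta, and delta positions):

```python
import numpy as np

def hunt_update(X, leaders, a, rng):
    """Update one wolf's position from the three leaders.

    For each leader L: D_L = |C_L * X_L - X|, candidate = X_L - A_L * D_L,
    and the new position is the mean of the three candidates.
    """
    candidates = []
    for X_leader in leaders:  # [X_alpha, X_beta, X_delta]
        r1 = rng.random(X.size)
        r2 = rng.random(X.size)
        A = 2 * a * r1 - a
        C = 2 * r2
        D = np.abs(C * X_leader - X)
        candidates.append(X_leader - A * D)
    return np.mean(candidates, axis=0)
```

With `a = 0` every candidate coincides with its leader, so the wolf moves to the centroid of the three leaders; larger `a` adds stochastic displacement around them.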

##### 2.3. Attacking Prey

In order to mathematically model approaching the prey, we decrease the value of $\vec{a}$. The fluctuation range of $\vec{A}$ is also decreased by $\vec{a}$: $\vec{A}$ is a random value in the interval $[-2a, 2a]$, where $a$ is decreased linearly from 2 to 0 over the course of iterations. When random values of $\vec{A}$ are in $[-1, 1]$, the next position of a search agent can be any position between its current position and the position of the prey. The condition $|\vec{A}| < 1$ forces the wolves to attack the prey.

After the attack, the wolves again search for prey in the next iteration, wherein they again find the next best solutions among all wolves. This process repeats until the termination criterion is fulfilled.
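Putting the pieces together, the full GWO loop might be sketched as below. This is a minimal illustration of the standard algorithm, not the authors' code; the bounds, population size, and seed are arbitrary choices:

```python
import numpy as np

def gwo(fitness, dim, n_wolves=20, max_iter=100, lb=-10.0, ub=10.0, seed=0):
    """Minimal Grey Wolf Optimizer sketch (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))

    for t in range(max_iter):
        # Rank wolves: alpha, beta, delta are the three best solutions.
        scores = np.array([fitness(x) for x in X])
        leaders = X[np.argsort(scores)[:3]]

        a = 2 * (1 - t / max_iter)  # linearly decreased from 2 to 0

        for i in range(n_wolves):
            candidates = []
            for X_leader in leaders:
                r1 = rng.random(dim)
                r2 = rng.random(dim)
                A = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * X_leader - X[i])
                candidates.append(X_leader - A * D)
            # New position is the mean of the three leader-guided candidates,
            # clipped back into the search bounds.
            X[i] = np.clip(np.mean(candidates, axis=0), lb, ub)

    scores = np.array([fitness(x) for x in X])
    best = X[np.argmin(scores)]
    return best, float(scores.min())
```

For example, `gwo(lambda x: float(np.sum(x * x)), dim=2)` minimizes the sphere function and converges close to the origin.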

#### 3. Modified GWO Algorithm

Finding the global minimum is a common, challenging task among all minimization methods. In population-based optimization methods, generally, the desirable way to converge towards the global minimum can be divided into two basic phases. In the early stages of the optimization, the individuals should be encouraged to scatter throughout the entire search space. In other words, they should try to explore the whole search space instead of clustering around local minima. In the latter stages, the individuals have to exploit the information gathered to converge on the global minimum. In GWO, by fine-adjusting the parameters $\vec{a}$ and $\vec{C}$, we can balance these two phases in order to find the global minimum with fast convergence speed.

Although different improvements of individual-based algorithms promote local optima avoidance, the literature shows that population-based algorithms are better at handling this issue. Regardless of the differences between population-based algorithms, the common approach is the division of the optimization process into two conflicting milestones: exploration versus exploitation. Exploration encourages candidate solutions to change abruptly and stochastically. This mechanism improves the diversity of the solutions and causes high exploration of the search space. In contrast, exploitation aims to improve the quality of solutions by searching locally around the promising solutions obtained during exploration. In this milestone, candidate solutions are obliged to change less suddenly and to search locally.

Exploration and exploitation are two conflicting milestones where promoting one results in degrading the other. A right balance between these two milestones can guarantee a very accurate approximation of the global optimum using population-based algorithms. On the one hand, mere exploration of the search space prevents an algorithm from finding an accurate approximation of the global optimum. On the other hand, mere exploitation results in local optima stagnation and again low quality of the approximated optimum.

In GWO, the transition between exploration and exploitation is generated by the adaptive values of $\vec{a}$ and $\vec{A}$. Here, half of the iterations are devoted to exploration ($|\vec{A}| \geq 1$) and the other half are used for exploitation ($|\vec{A}| < 1$), as shown in Figure 1(a). Generally, higher exploration of the search space results in a lower probability of local optima stagnation. There are various possibilities to enhance the exploration rate, as shown in Figure 1(b), in which exponential functions are used instead of a linear function to decrease the value of $\vec{a}$ over the course of iterations. Too much exploration is similar to too much randomness and will probably not give good optimization results. But too much exploitation corresponds to too little randomness. Therefore, there must be a balance between exploration and exploitation.
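To make the effect of the decay shape concrete, the sketch below compares the linear schedule of the original GWO with an exponential-type schedule of the form a = 2(1 − t²/T²), used here as one plausible instance of the exponential functions referred to above. Since exploration is possible only while a ≥ 1 (because |A| ≤ a), the linear schedule spends about half of the iterations in the exploratory regime, while the exponential one spends roughly 70% (up to t = T/√2):

```python
import numpy as np

def a_linear(t, T):
    """Original GWO schedule: a decreases linearly from 2 to 0."""
    return 2 * (1 - t / T)

def a_exponential(t, T):
    """Exponential-type schedule a = 2*(1 - (t/T)^2): a decays slowly at
    first, extending the exploratory phase (a >= 1) to t <= T/sqrt(2)."""
    return 2 * (1 - (t / T) ** 2)

T = 1000
t = np.arange(T)
# Fraction of iterations in which exploration is still possible (a >= 1).
frac_lin = float(np.mean(a_linear(t, T) >= 1))      # about 0.5
frac_exp = float(np.mean(a_exponential(t, T) >= 1))  # about 0.71
```

Swapping `a_exponential` for the linear schedule in a GWO loop changes nothing else in the algorithm; only the balance between the two milestones shifts toward exploration.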