Advances in Operations Research

Volume 2016 (2016), Article ID 7325263, 14 pages

http://dx.doi.org/10.1155/2016/7325263

## Extended Prey-Predator Algorithm with a Group Hunting Scenario

^{1}School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Private Bag X01, Scottsville, Pietermaritzburg 3209, South Africa

^{2}School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Pulau Pinang, Malaysia

Received 25 November 2015; Revised 29 March 2016; Accepted 24 April 2016

Academic Editor: Imed Kacem

Copyright © 2016 Surafel Luleseged Tilahun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Prey-predator algorithm (PPA) is a metaheuristic algorithm inspired by the interaction between a predator and its prey. In the algorithm, the worst performing solution, called the predator, works as an agent for exploration, whereas the best performing solution, called the best prey, works as an agent for exploitation. In this paper, PPA is extended to a new version, called *nm*-PPA, by modifying the number of predators and best preys. In *nm*-PPA, there are *n* best preys and *m* predators. Increasing the value of *n* increases the exploitation property of the algorithm and increasing the value of *m* increases its exploration property. Hence, it is possible to adjust the degree of exploration and exploitation as needed by adjusting the values of *n* and *m*. A guideline on setting parameter values is also discussed, along with a new way of measuring the performance of an algorithm on multimodal problems. A simulation study on eight well known benchmark problems of different properties, with dimensions ranging from two to twelve, shows that *nm*-PPA is effective in finding multiple solutions of multimodal problems and has a better ability to escape local optimal solutions.

#### 1. Introduction

Solving optimization problems using metaheuristic algorithms has become useful in many applications [1–5]. Metaheuristic algorithms are algorithms that try to improve the quality of a solution set over iterations in an “educated” trial-and-error way [6]. Since the development of the genetic algorithm in the mid-1970s, many metaheuristic algorithms have been developed and used. The development of new algorithms, the extension of existing ones, and the application of these algorithms to real problems are among the leading research issues, along with merging algorithms to improve their performance [7–14]. Most of these algorithms are based on the assumption that the optimal solution of an optimization problem is found near the current best solution in hand. Hence, they focus on exploitation of the solution space more than exploration, as solutions tend to perform an intensified search around the current best rather than exploring the solution space, especially in swarm based algorithms. However, these approaches become weak for deceiving problems, where the global optimum is far from local solutions and the landscape of the objective space gives very little information about the global solution. In solving such problems, a high degree of exploration needs to be used, together with an appropriate degree of exploitation so that the final result is a good approximation of the global solution. Hence, balancing the exploration and exploitation properties of an algorithm is one of the issues that researchers are trying to deal with [10, 15].

Prey-predator algorithm (PPA) is one of the newly introduced metaheuristic algorithms [6, 16]. Even though it was introduced recently, it has been successfully used in different applications [6, 17–19]. Comparative studies also show that it is a promising algorithm compared to existing ones [6, 20]. It is inspired by how a predator hunts its prey and how the prey tries to run away, hide, and survive in this situation. The algorithm still shares the assumption that the optimal solution is found around the current best solution in hand; however, it also focuses on exploring the solution space, controlled by a follow-up probability. Two of the randomly generated feasible solutions are assigned to be the predator and the best prey: the predator is the solution with the worst performance and the best prey is the solution with the best performance. The predator focuses totally on exploration, forcing the other prey to explore the solution space, whereas the best prey focuses totally on exploitation. The remaining solutions, called ordinary prey, are affected by either the predator or the best prey based on an algorithm parameter, the probability of follow-up. Since group hunting is common in some species of animals, it is natural to control the exploration and exploitation rates by adjusting the number of best preys and predators.
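The role assignment just described can be sketched in a few lines (an illustrative Python sketch, not from the paper; the function name and the use of the raw objective value as the survival value of a maximization problem are assumptions):

```python
import numpy as np

def assign_roles(population, objective):
    """Rank a population by survival value (here simply the objective value
    of a maximization problem) and split it into the predator (worst
    solution), the best prey (best solution), and the ordinary prey."""
    survival = np.array([objective(x) for x in population])
    order = np.argsort(survival)              # ascending: worst ... best
    predator = population[order[0]]           # worst performer: explorer
    best_prey = population[order[-1]]         # best performer: exploiter
    ordinary = [population[i] for i in order[1:-1]]
    return predator, ordinary, best_prey
```

For example, maximizing `-|x|` over the points 3, 0, and 1, the point 3 becomes the predator and the point 0 the best prey.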

Inspired by the group hunting behavior of some animals, in this paper PPA is enhanced and extended to another version called the *nm*-prey-predator algorithm (*nm*-PPA). *nm*-PPA is a PPA with multiple predators and multiple best preys. In this version of PPA, one can adjust the degree of exploration and exploitation by adjusting the number of best preys and predators. Furthermore, it will be shown that *nm*-PPA is indeed promising for multimodal optimization problems, finding multiple solutions within a single run of the algorithm. Since parameter assignment is another challenging issue in implementing metaheuristic algorithms, setting parameter values based on the given problem will also be discussed. The other contribution of this paper is a way of measuring the exploration and exploitation performance of algorithms based on their success in finding multiple solutions of multimodal problems.

In Section 2, a brief introduction to the prey-predator algorithm is presented, followed by the proposed version of the algorithm, *nm*-PPA. In Section 3, experimental results are discussed based on eight selected benchmark problems. A conclusion containing a summary and possible future work is given in Section 4.

#### 2. Extended Prey-Predator Algorithm

##### 2.1. Prey-Predator Algorithm

Metaheuristic algorithms are inspired by different aspects of nature. Learning from nature to solve problems is useful because nature finds solutions to problems without being told how. The interaction between predators and prey has evolved over a long time: the predators have managed to capture the weak or unfortunate prey to survive, and the prey have likewise managed to survive. There are different kinds of predator-prey interactions [21]. Among them, the interaction in which the predators run after their prey inspires the prey-predator algorithm. Such interactions have motivated different algorithms. For example, motivated by the interaction between a predator and its prey, Haynes and Sen [22] discussed evolving behavioral strategies of predators and prey based on the predator-prey pursuit; the paper considers how to surround a prey in a grid space in order to capture it. Another algorithm is the spiral predator-prey approach for multiobjective optimization proposed by Laumanns et al. [23]. This algorithm is designed for multiobjective optimization; it uses a graph-theoretic concept in which each prey is placed on a vertex of a graph, together with an evolutionary strategy. The prey-predator algorithm discussed here is different from these algorithms, even though they are motivated by a similar scenario and have similar names.

Prey-predator algorithm is a metaheuristic optimization algorithm introduced by Tilahun and Ong [6, 16]. In the algorithm, randomly generated solutions are assigned a numerical value, called the survival value, depending on their performance in the objective function. For a maximization problem, the survival value of a solution is directly proportional to its objective function value. Once the survival value is computed for each solution, the solution with the least survival value is assigned as the predator, the solution with the highest survival value as the best prey, and the rest as ordinary prey. The predator, $x_{\text{predator}}$, is updated by running after the weakest prey, where the weakest prey is the ordinary prey with the least survival value, with a local search step length, and by running randomly with a bigger step length, as given in the following:

$$x'_{\text{predator}} = x_{\text{predator}} + \lambda_{\max}\,\varepsilon\, y_r + \lambda_{\min}\, u, \tag{1}$$

where $\lambda_{\min}$ and $\lambda_{\max}$ are algorithm parameters determining the maximum exploitation and exploration step lengths, $y_r$ is a random direction, $\varepsilon$ is a random number between 0 and 1 from a uniform distribution, and $u = (x_w - x_{\text{predator}})/\|x_w - x_{\text{predator}}\|$ for the weakest prey $x_w$. An ordinary prey, $x_i$, follows prey with better survival values and does a local search by choosing a unit direction out of random directions if the probability of follow-up is met; otherwise, it runs randomly away from the predator, as given in the following:

$$x'_i = x_i + \lambda_{\min}\,\varepsilon_1\, y_f + \lambda_{\min}\,\varepsilon_2\, y_l \quad \text{if the follow-up condition is met}, \tag{2}$$

$$x'_i = x_i + \lambda_{\max}\,\varepsilon_3\, y'_r \quad \text{otherwise}, \tag{3}$$

where $\varepsilon_1, \varepsilon_2, \varepsilon_3$ are uniform random numbers between 0 and 1, $y_f$ is a unit direction toward the prey with better survival values, and $y_l$ is a local search direction given by

$$y_l = \arg\max_{y \in \{y_1, \dots, y_q\}} s(x_i + \lambda_{\min}\, y), \tag{4}$$

where $y_1, \dots, y_q$ are random directions for local search, $s(\cdot)$ denotes the survival value, and $y'_r$ is a random direction away from the predator.
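The predator and ordinary-prey moves described above can be sketched as follows (an illustrative Python sketch; the function names, the fixed random seed, and the default of five local-search directions are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """Normalize a vector to a unit direction."""
    norm = np.linalg.norm(v)
    return v / norm if norm > 1e-12 else v

def move_predator(predator, weakest_prey, lam_min, lam_max):
    """Predator update: a large random step (exploration) plus a small
    step toward the weakest ordinary prey."""
    y_r = unit(rng.standard_normal(predator.size))   # random direction
    u = unit(weakest_prey - predator)                # toward the weakest prey
    return predator + lam_max * rng.random() * y_r + lam_min * u

def move_ordinary_prey(prey, y_f, predator, survival, p, lam_min, lam_max, q=5):
    """Ordinary prey update: if the follow-up probability p is met, follow
    better prey (direction y_f) and take the best of q random local-search
    directions; otherwise flee the predator with a large step."""
    if rng.random() <= p:
        dirs = [unit(rng.standard_normal(prey.size)) for _ in range(q)]
        y_l = max(dirs, key=lambda y: survival(prey + lam_min * y))
        return prey + lam_min * rng.random() * y_f + lam_min * rng.random() * y_l
    y_away = unit(prey - predator)                   # away from the predator
    return prey + lam_max * rng.random() * y_away
```

Note that only the exploitation-oriented moves consult the survival function; the predator's move and the fleeing move are purely geometric.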

The best prey, $x_{\text{best prey}}$, totally focuses on local search. The local search is done by generating random directions and checking whether any of these directions improves the performance of the best prey. If such a direction exists, the best prey moves in that direction; otherwise, it remains in its current position, as given in the following:

$$x'_{\text{best prey}} = x_{\text{best prey}} + \lambda_{\min}\, y_b, \qquad y_b = \arg\max_{y \in \{y_0, y_1, \dots, y_q\}} s(x_{\text{best prey}} + \lambda_{\min}\, y), \tag{5}$$

where $y_1, \dots, y_q$ are the random directions and $y_0$ is a zero vector, so that the best prey stays in place when no direction improves its survival value.
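The greedy local search of the best prey can be sketched as follows (an illustrative Python sketch; the function name and the default of ten candidate directions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def move_best_prey(best, survival, lam_min, q=10):
    """Best-prey update: try q random unit directions plus the zero vector
    (staying put) and keep whichever candidate has the highest survival
    value, so a move is accepted only if it improves the current position."""
    candidates = [np.zeros(best.size)]        # zero vector: remain in place
    for _ in range(q):
        y = rng.standard_normal(best.size)
        candidates.append(y / np.linalg.norm(y))
    y_b = max(candidates, key=lambda y: survival(best + lam_min * y))
    return best + lam_min * y_b
```

Because the zero vector is always a candidate, this step can never worsen the best prey's survival value.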

This updating of solutions will continue until a termination criterion is met.

Hence, the predator works as an agent for exploration by scaring the other prey and forcing them to explore the feasible region. Unlike the predator, the best prey focuses totally on exploitation. An ordinary prey tends to focus on either exploration or exploitation based on the probability of follow-up: if the follow-up condition is met, that is, if a random number is at most the follow-up probability, it performs a local search in addition to following better prey, and thus tends to focus on exploitation; otherwise, it explores the solution space by randomly running away from the predator. Hence, in addition to the predator, the follow-up probability plays a role in the degree of exploration versus exploitation. As for the best prey, it does only exploitation. However, by increasing the number of best preys and predators, one can increase both the exploration and the exploitation.

##### 2.2. *nm*-Prey-Predator Algorithm

One of the challenges in solving multimodal optimization problems is being trapped in a local optimal solution while searching for the global solution. For that purpose, the degree of exploration should be sufficiently large in a heuristic search; but making the degree of exploration too large will affect the convergence behavior of the algorithm. The degree of exploration versus exploitation therefore needs proper tuning based on the problem at hand.

In the previous section, Section 2.1, it was discussed that the predator not only focuses on exploration but also forces other prey to explore the solution space, given an appropriate value of the follow-up probability. Having multiple predators increases the exploration property of the algorithm. On the other hand, the best prey focuses totally on exploitation; hence, increasing the number of best preys increases the degree of exploitation. In general, the user can tune the degree of exploration by increasing the number of predators from 1 to $m$ and the degree of exploitation by increasing the number of best preys from 1 to $n$. One advantage of controlling the degree of exploration and exploitation through the number of predators and best preys is that both behaviors can be increased at the same time by increasing $m$ and $n$ together. A good explorative behavior with a larger $m$ makes the algorithm suitable for deceiving problems and helps it jump out of local solutions, whereas a good exploitative behavior with a larger value of $n$ helps the algorithm find multiple local and global solutions of multimodal problems, and better quality solutions will be generated.
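This role split with $m$ predators and $n$ best preys can be sketched as follows (illustrative Python; the function name and the defaults $m = n = 2$ are assumptions):

```python
import numpy as np

def categorize_nm(population, objective, m=2, n=2):
    """nm-PPA role split for a maximization problem: the m worst solutions
    (by survival value) act as predators and the n best as best preys;
    the remaining solutions are ordinary prey.  Raising m strengthens
    exploration, raising n strengthens exploitation."""
    order = np.argsort([objective(x) for x in population])  # worst ... best
    predators = [population[i] for i in order[:m]]
    best_preys = [population[i] for i in order[len(population) - n:]]
    ordinary = [population[i] for i in order[m:len(population) - n]]
    return predators, ordinary, best_preys
```

With $m = n = 1$ this reduces to the original PPA role assignment.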

The updating of the predators and the best preys, as well as the termination criterion, remains the same as that given in (1) and (5), respectively. The only modification in the updating process of the ordinary prey is that if the probability of follow-up, $p$, is not met, it runs away from the predator with the worst performance. Hence, the basic steps of the proposed algorithm are as follows:

(1) Set up the parameters and generate random solutions.

(2) Rank the solutions according to their performance in the objective function from the worst to the best; for example, $s(x_1) \le s(x_2) \le \dots \le s(x_N)$ for a population of $N$ solutions.

(3) Categorize the solutions as predators ($x_1, \dots, x_m$), ordinary prey ($x_{m+1}, \dots, x_{N-n}$), and best preys ($x_{N-n+1}, \dots, x_N$).

(4) Move the predators randomly and towards the weakest ordinary prey.

(5) Move each ordinary prey towards better prey if $\varepsilon \le p$ for a random number $\varepsilon$; otherwise, move it randomly away from the worst predator.

(6) Move each best prey based on random directions, in an improving direction.

(7) If the termination criterion is met, end; otherwise, go back to step (2).

The algorithm is summarized in Figure 1.
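The steps above can be put together in a compact sketch (a minimal, illustrative Python implementation under several assumptions: maximization of a survival function, box constraints handled by clipping, each ordinary prey following its immediately better-ranked neighbor, and default parameter values that are not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def nm_ppa(survival, dim, bounds, pop_size=20, m=2, n=2, p=0.6,
           lam_min=0.05, lam_max=1.0, q=5, iters=200):
    """Minimal nm-PPA sketch: m predators, n best preys, survival maximized."""
    lo, hi = bounds

    def unit(v):
        norm = np.linalg.norm(v)
        return v / norm if norm > 1e-12 else unit(rng.standard_normal(v.size))

    pop = [rng.uniform(lo, hi, dim) for _ in range(pop_size)]   # step (1)
    for _ in range(iters):
        pop.sort(key=survival)            # step (2): worst ... best
        new_pop = []
        weakest = pop[m]                  # worst ordinary prey
        for x in pop[:m]:                 # step (4): predators explore + chase
            step = lam_max * rng.random() * unit(rng.standard_normal(dim)) \
                 + lam_min * unit(weakest - x)
            new_pop.append(np.clip(x + step, lo, hi))
        for i in range(m, pop_size - n):  # step (5): ordinary prey
            x = pop[i]
            if rng.random() <= p:         # follow-up met: exploit
                y_f = unit(pop[i + 1] - x)        # toward better-ranked prey
                y_l = max((unit(rng.standard_normal(dim)) for _ in range(q)),
                          key=lambda y: survival(x + lam_min * y))
                step = lam_min * rng.random() * y_f \
                     + lam_min * rng.random() * y_l
            else:                         # flee the worst predator
                step = lam_max * rng.random() * unit(x - pop[0])
            new_pop.append(np.clip(x + step, lo, hi))
        for x in pop[pop_size - n:]:      # step (6): best preys, local search
            cands = [np.zeros(dim)] + [unit(rng.standard_normal(dim))
                                       for _ in range(q)]
            y_b = max(cands, key=lambda y: survival(x + lam_min * y))
            new_pop.append(np.clip(x + lam_min * y_b, lo, hi))
        pop = new_pop                     # step (7): repeat until budget ends
    return max(pop, key=survival)
```

On a simple unimodal test such as maximizing $-\|x\|^2$ over $[-5, 5]^2$, this sketch drives the best prey close to the origin within the default iteration budget.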