Abstract

Prey-predator algorithm (PPA) is a metaheuristic algorithm inspired by the interaction between a predator and its prey. In the algorithm, the worst performing solution, called the predator, works as an agent for exploration, whereas the best performing solution, called the best prey, works as an agent for exploitation. In this paper, PPA is extended to a new version called nm-PPA by modifying the number of predators and of best preys. In nm-PPA, there are n best preys and m predators. Increasing the value of n increases the exploitation property and increasing the value of m increases the exploration property of the algorithm. Hence, it is possible to adjust the degree of exploration and exploitation as needed by adjusting the values of n and m. A guideline on setting parameter values is also discussed, along with a new way of measuring the performance of an algorithm on multimodal problems. A simulation is done to test the algorithm using eight well-known benchmark problems of different properties and of dimensions ranging from two to twelve. The results show that nm-PPA is effective in achieving multiple solutions in multimodal problems and has a better ability to escape local optimal solutions.

1. Introduction

Solving optimization problems using metaheuristic algorithms has become useful in different applications [1–5]. Metaheuristic algorithms are algorithms which try to improve the quality of a solution set over iterations using an "educated" trial and error approach [6]. Since the development of the genetic algorithm in the mid-1970s, many metaheuristic algorithms have been developed and used. The development of new algorithms, the extension of existing algorithms, and the use of these algorithms to solve real problems are among the leading research issues, along with merging algorithms to improve their performance [7–14]. Most of these algorithms are based on the assumption that the optimal solution of an optimization problem is found near the current best solution among the solutions at hand. Hence, they focus on the exploitation of the solution space more than on exploration, as solutions tend to perform an intensified search around the current best rather than exploring the solution space, especially in swarm-based algorithms. However, these approaches become weak for deceptive problems, where the global optimum is far from the local solutions and the landscape of the objective space gives very little information regarding the global solution. In solving such problems, a high degree of exploration needs to be used. In addition, an appropriate degree of exploitation needs to be used so that the final result will be a good approximation of the global solution. Hence, balancing the exploration and exploitation properties of an algorithm is one of the issues that researchers are trying to deal with [10, 15].

Prey-predator algorithm (PPA) is one of the recently introduced metaheuristic algorithms [6, 16]. Even though it was introduced recently, it has been successfully used in different applications [6, 17–19]. Comparative studies also show that it is a promising algorithm compared to existing algorithms [6, 20]. It is inspired by how a predator hunts its prey and how the prey tries to run away, hide, and survive. The algorithm still shares the assumption that the optimal solution is found around the current best solution at hand; however, it also focuses on exploring the solution space with a given follow-up probability. Two of the randomly generated feasible solutions are assigned to be the predator and the best prey. The predator is the solution with the worst performance and the best prey is the solution with the best performance. The predator focuses totally on exploration, forcing the other preys to explore the solution space, whereas the best prey focuses totally on exploitation. The rest of the solutions are called ordinary prey and are affected by either the predator or the best prey, based on an algorithm parameter called the probability of follow-up. Since group hunting is common in some species of animals, it is natural to control the exploration and exploitation rate by adjusting the number of best preys and predators.

Inspired by the group hunting behavior of some animals, in this paper, PPA is extended to another version called nm-prey-predator algorithm (nm-PPA). nm-PPA is a PPA with multiple predators and multiple best preys. In this version of PPA, one can adjust the degree of exploration and exploitation by adjusting the number of best preys and predators. Furthermore, it will be shown that nm-PPA is indeed promising for multimodal optimization problems, finding multiple solutions within a single run of the algorithm. Since parameter assignment is another challenging issue in implementing metaheuristic algorithms, a guideline on setting parameter values based on the given problem will also be presented. The other contribution of this paper is introducing a way of measuring the exploration as well as the exploitation performance of algorithms based on their success in finding multiple solutions of multimodal problems.

In Section 2, a brief introduction to prey-predator algorithm is presented, followed by the proposed version of the algorithm, nm-PPA. In Section 3, experimental results are discussed based on eight selected benchmark problems. A conclusion containing a summary and possible future works is given in Section 4.

2. Extended Prey-Predator Algorithm

2.1. Prey-Predator Algorithm

Metaheuristic algorithms are inspired by different natural phenomena. Learning from nature to solve problems is useful, as nature finds solutions to problems without being told how. The interaction between predators and preys has evolved over a long time: the predators have managed to capture the weak or unfortunate prey to survive, and the preys have also managed to survive. There are different kinds of predator and prey interactions [21]. Among these, the interaction in which the predators run after their prey inspires the prey-predator algorithm. Such interactions have motivated different algorithms. For example, motivated by the interaction between a predator and its prey, Haynes and Sen [22] discussed evolving behavioral strategies of predators and preys based on the predator-prey pursuit; the paper considers how to surround a prey in a grid space in order to capture it. Another algorithm is the spatial predator-prey approach for multiobjective optimization proposed by Laumanns et al. [23]. This algorithm is designed for multiobjective optimization; it uses concepts from graph theory, where each prey is placed on a vertex of a graph, together with an evolutionary strategy. The prey-predator algorithm we discuss here is different from these algorithms, even though they are motivated by a similar scenario and have similar names.

Prey-predator algorithm is a metaheuristic optimization algorithm introduced by Tilahun and Ong [6, 16]. In the algorithm, each randomly generated solution is assigned a numerical value, called the survival value, depending on its performance in the objective function. For a maximization problem, the survival value of a solution is directly proportional to its objective function value. Once the survival value is computed for each solution, the solution with the least survival value is assigned as the predator, the solution with the highest survival value is assigned as the best prey, and the rest are ordinary prey. The predator, x_pred, is updated by running after the weakest prey, the ordinary prey with the least survival value, with a local search step length, and also by moving randomly with a bigger step length, as given in the following:

$$x_{pred} \leftarrow x_{pred} + \lambda_{min}\,\alpha_1\,\frac{x_w - x_{pred}}{\lVert x_w - x_{pred} \rVert} + \lambda_{max}\,\alpha_2\,u_r, \qquad (1)$$

where λ_min and λ_max are algorithm parameters that determine the maximum exploitation and exploration step lengths, u_r is a random direction, each α is a random number between 0 and 1 from a uniform distribution, and $x_w = \arg\min_i \{SV(x_i)\}$ for the weak prey x_w, with SV(·) denoting the survival value. An ordinary prey, x_i, follows preys that are better in terms of survival value and does a local search by choosing a unit direction from among q random directions if the probability of follow-up is met; otherwise, it runs randomly away from the predator, as given in the following:

$$x_i \leftarrow \begin{cases} x_i + \lambda_{min}\,\alpha\,(y_i + u_i), & \text{if the follow-up probability is met},\\ x_i + \lambda_{max}\,\alpha\,u_p, & \text{otherwise}, \end{cases} \qquad (2)$$

where y_i is a unit direction towards the better preys and u_i is a local search direction given by $u_i = \arg\max_{u_{r_k}} \{f(x_i + \lambda_{min} u_{r_k})\}$, where $u_{r_1}, u_{r_2}, \dots, u_{r_q}$ are random directions for the local search and u_p is a random direction away from the predator.

The best prey, x_best, totally focuses on local search. The local search is done by generating random directions and checking whether any of these directions improves the performance of the best prey. If such a direction exists, the best prey moves in that direction; otherwise, it remains in its current position, as given in the following:

$$x_{best} \leftarrow x_{best} + \lambda_{min}\,\alpha\,u^{*}, \qquad u^{*} = \arg\max_{u \in \{u_{r_1},\dots,u_{r_q},\,\mathbf{0}\}} f(x_{best} + \lambda_{min}\,u), \qquad (3)$$

where $u_{r_1}, u_{r_2}, \dots, u_{r_q}$ are the random directions and $\mathbf{0}$ is a zero vector.

This updating of solutions will continue until a termination criterion is met.
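To make these update rules concrete, the following is a minimal Python sketch of the predator move in (1) and the best prey move in (3), under the reconstruction given above; the function names, the way random unit directions are drawn, and the minimization orientation are our own illustrative choices rather than code from [6, 16].

```python
import numpy as np

def random_unit_direction(dim, rng):
    # A uniformly random direction on the unit sphere.
    d = rng.normal(size=dim)
    return d / np.linalg.norm(d)

def move_predator(predator, weakest_prey, lam_min, lam_max, rng):
    # Small step toward the weakest prey plus a large random step, as in (1).
    toward = weakest_prey - predator
    norm = np.linalg.norm(toward)
    if norm > 0:
        toward = toward / norm
    dim = len(predator)
    return (predator
            + lam_min * rng.random() * toward
            + lam_max * rng.random() * random_unit_direction(dim, rng))

def move_best_prey(best_prey, f, lam_min, q, rng):
    # Try q random directions and keep the best move, as in (3); the zero
    # direction (staying put) is always among the candidates. Here f is an
    # objective to be minimized, standing in for the survival value.
    candidates = [best_prey]
    for _ in range(q):
        u = random_unit_direction(len(best_prey), rng)
        candidates.append(best_prey + lam_min * rng.random() * u)
    return min(candidates, key=f)
```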

Hence, the predator works as an agent for exploration by scaring the other preys and forcing them to explore the feasible region. Unlike the predator, the best prey focuses totally on exploitation. An ordinary prey tends to focus on either exploration or exploitation based on the probability of follow-up: if the probability of follow-up is met, that is, if a random number is at most the probability of follow-up, it does a local search in addition to following better preys; otherwise, it explores the solution space by randomly running away from the predator. Hence, in addition to the predator, the probability of follow-up plays a role in the degree of exploration versus exploitation. As for the best prey, it does only exploitation. However, by increasing the numbers of best preys and predators, one can increase both the exploration and the exploitation.

2.2. nm-Prey-Predator Algorithm

One of the challenges in solving multimodal optimization problems is being trapped in a local optimal solution while searching for the global solution. For that purpose, the degree of exploration should be sufficiently large in a heuristic search; but, then again, making the degree of exploration too large will affect the convergence behavior of the algorithm. The degree of exploration versus exploitation needs proper tuning based on the problem at hand.

As discussed in Section 2.1, the predator not only focuses on exploration but also, with an appropriate value of the follow-up probability, forces other preys to explore the solution space. Having multiple predators increases the exploration property of the algorithm. On the other hand, the best prey focuses totally on exploitation; hence, increasing the number of best preys increases the degree of exploitation. In general, the user can tune the degree of exploration by increasing the number of predators from 1 to m and the degree of exploitation by increasing the number of best preys from 1 to n. One of the advantages of tuning the number of predators and best preys to control the degree of exploration and exploitation is that it is possible to increase both the exploration and the exploitation behavior of the algorithm by increasing m and n at the same time. A good explorative behavior with larger m makes the algorithm suitable for deceptive problems and helps it to jump out of local solutions, and a good exploitative behavior with a larger value of n helps the algorithm to find multiple local as well as global solutions of multimodal problems, so that better quality solutions are generated.

The updating of the predators and the best preys, as well as the termination criterion, remains the same as given in (1) and (3), respectively. The only modification in the updating process of an ordinary prey is that if the probability of follow-up, p_f, is not met, it runs away from the predator with the worst performance. Hence, the basic steps of the proposed algorithm are as follows:

(1) Set up the parameters and generate N random solutions.
(2) Rank the solutions according to their performance in the objective function from the worst to the best, for example, as x_1, x_2, ..., x_N with SV(x_1) ≤ SV(x_2) ≤ ⋯ ≤ SV(x_N).
(3) Categorize the solutions as predators (x_1, ..., x_m), ordinary prey (x_{m+1}, ..., x_{N−n}), and best preys (x_{N−n+1}, ..., x_N).
(4) Move the predators randomly and towards the weakest ordinary prey.
(5) Move each ordinary prey towards better preys if α ≤ p_f for a random number α; otherwise, move it randomly away from the worst performing predator.
(6) Move each best prey based on random directions and in an improving direction.
(7) If the termination criterion is met, end; otherwise, go back to step (2).

The algorithm is summarized as in Figure 1.
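To make the flow of these steps concrete, here is a simplified, minimization-oriented Python sketch of nm-PPA under the reconstruction above; the follow-up move is reduced to stepping toward a randomly chosen best prey plus a random local direction, so it illustrates the control flow rather than reproducing [6, 16] exactly.

```python
import numpy as np

def nm_ppa(f, lo, hi, N=40, n=4, m=2, p_f=0.7,
           lam_min=0.1, lam_max=1.0, q=10, max_iter=200, seed=0):
    # Minimize f over the box [lo, hi] with m predators and n best preys.
    # Requires N > n + m so that some ordinary prey remain.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(N, dim))

    def unit(v):
        nv = np.linalg.norm(v)
        return v / nv if nv > 0 else v

    def rand_dir():
        return unit(rng.normal(size=dim))

    for _ in range(max_iter):
        # Step (2): rank from worst (largest f) to best (smallest f).
        X = X[np.argsort([f(x) for x in X])[::-1]]
        # Step (3): categorize into predators, ordinary prey, best preys.
        preds, ordinary, bests = X[:m], X[m:N - n], X[N - n:]

        # Step (4): predators chase the weakest ordinary prey and roam.
        weakest = ordinary[0].copy()
        for i in range(m):
            preds[i] = (preds[i] + lam_min * rng.random() * unit(weakest - preds[i])
                        + lam_max * rng.random() * rand_dir())

        # Step (5): ordinary prey follow a better prey or flee the worst predator.
        worst_pred = preds[0].copy()
        for i in range(len(ordinary)):
            if rng.random() <= p_f:
                target = bests[rng.integers(n)]
                step = unit(unit(target - ordinary[i]) + rand_dir())
                ordinary[i] = ordinary[i] + lam_min * rng.random() * step
            else:
                ordinary[i] = (ordinary[i]
                               + lam_max * rng.random() * unit(ordinary[i] - worst_pred))

        # Step (6): best preys do pure local search over q random directions.
        for i in range(n):
            cands = [bests[i]] + [bests[i] + lam_min * rng.random() * rand_dir()
                                  for _ in range(q)]
            bests[i] = min(cands, key=f)

        # Step (7): keep solutions feasible and iterate until max_iter.
        X = np.clip(np.vstack([preds, ordinary, bests]), lo, hi)

    return X[np.argsort([f(x) for x in X])]  # final population, best first
```

For example, nm_ppa(lambda x: float(np.sum(x**2)), [-5, -5], [5, 5], n=2, m=2) returns the final population sorted by objective value, with the best preys expected near the origin.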

The termination criterion can be a maximum number of iterations, no improvement being recorded in consecutive generations, or, if the optimal solution is known, a given level of tolerance being achieved.

3. Experimental Results

A simulation is done on eight selected benchmark problems.

3.1. Benchmark Problems

The benchmark problems are selected from different categories, such as being differentiable, nondifferentiable, continuous, discontinuous, separable, partially separable, nonseparable, scalable, and nonscalable, with dimensions ranging from two up to twelve. All of the selected problems are multimodal and hence nonconvex.

Shubert function [24] is as follows:

$$f(x) = \prod_{i=1}^{2}\left(\sum_{j=1}^{5} j\cos\bigl((j+1)x_i + j\bigr)\right), \qquad -10 \le x_i \le 10. \qquad (4)$$

It is a multimodal problem with 760 optimal solutions, among which the eighteen global solutions are located at (7.0835, 4.8580), (−7.0835, −7.7083), (−1.4251, −7.0835), (1.4251, −0.8003), (−7.7083, −7.0835), (−7.7083, −0.8003), (0.8003, −7.7083), (−0.8003, 4.8580), (5.4828, −7.7083), (5.4828, −1.4251), (4.8580, −0.8003), (4.8580, −7.0835), (1.4251, 5.4828), (−0.8003, −1.4251), (−7.7083, 5.4828), (7.0835, −1.4251), (4.8580, 5.4828), and (5.4828, 4.8580), with optimum functional value of −186.7309.

Tripod function [25] is as follows:

$$f(x) = p(x_2)\bigl(1 + p(x_1)\bigr) + \bigl|x_1 + 50\,p(x_2)\bigl(1 - 2p(x_1)\bigr)\bigr| + \bigl|x_2 + 50\bigl(1 - 2p(x_2)\bigr)\bigr|, \qquad (5)$$

where p(x) = 1 for x ≥ 0 and p(x) = 0 otherwise. The global optimal value, which is 0, is found at (0, −50).

For the third test problem, we have constructed a discontinuous, nondifferentiable, multimodal test problem. The problem has infinitely many global solutions, lying on a flat region of the objective landscape, in addition to infinitely many local solutions; with the parameter setting used in the simulation, the global optimal value is −11.47608.

Shekel 10 function [26] is as follows:

$$f(x) = -\sum_{i=1}^{10} \frac{1}{\sum_{j=1}^{4}\bigl(x_j - a_{ij}\bigr)^2 + c_i}, \qquad (6)$$

where the coefficients a_{ij} and c_i are as given in [26]. The global minimum is found at (4, 4, 4, 4) with approximate minimum functional value of −10.5319.

Generalized price function: based on price function 1 in [27], $f(x_1, x_2) = (|x_1| - 5)^2 + (|x_2| - 5)^2$, we construct a generalized price function. The generalized price function has optimal solutions evenly spaced in the solution space. For the simulation, its parameters are set to be 5, 5, and 100.

Hartman 6 function [28] is as follows:

$$f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}\bigl(x_j - p_{ij}\bigr)^2\right), \qquad (7)$$

where the coefficients c_i, a_{ij}, and p_{ij} are as given in [28]. The global solution is found at (0.201690, 0.150011, 0.476874, 0.275332, 0.311652, 0.657301) with functional value of −3.32236.

Pathological function [29] is as follows:

$$f(x) = \sum_{i=1}^{d-1}\left(0.5 + \frac{\sin^2\sqrt{100x_i^2 + x_{i+1}^2} - 0.5}{1 + 0.001\bigl(x_i^2 - 2x_i x_{i+1} + x_{i+1}^2\bigr)^2}\right). \qquad (8)$$

The global optimal solution is found at $x^* = (0, 0, \dots, 0)$ with optimum functional value of 0. The dimension d is set to be 7.

Deb 3 function [25]: the feasible region in the reference is given for each variable x_i to be between −1 and 1; however, having a negative value for x_i results in a complex number, since the function raises x_i to the power 3/4. Hence, a modification needs to be made, either on the feasible set, making each x_i nonnegative, or by replacing x_i with |x_i|. The second option increases the number of solutions; for our case, we choose to set all the variables as nonnegative. The problem is scalable and has multiple optimal solutions with functional value of zero. For the simulation, the dimension is set to be 12.

The properties of the functions are summarized in Table 1.
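As a concrete reference, the Shubert and pathological functions can be coded as follows; the forms used are the ones commonly given in benchmark collections (cf. [24, 29]), and the remaining problems can be coded analogously.

```python
import numpy as np

def shubert(x):
    # Two-dimensional Shubert function: product over the two variables of
    # sum_{j=1..5} j*cos((j+1)*x_i + j); 18 global minima, value ~ -186.7309.
    j = np.arange(1, 6)
    return float(np.prod([np.sum(j * np.cos((j + 1) * xi + j)) for xi in x]))

def pathological(x):
    # Pathological function in d dimensions; global minimum 0 at the origin.
    x = np.asarray(x, dtype=float)
    xi, xj = x[:-1], x[1:]
    num = np.sin(np.sqrt(100.0 * xi**2 + xj**2))**2 - 0.5
    den = 1.0 + 0.001 * (xi**2 - 2.0 * xi * xj + xj**2)**2
    return float(np.sum(0.5 + num / den))

print(shubert([-7.0835, -7.7083]))   # approximately -186.73
print(pathological(np.zeros(7)))     # 0.0
```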

The three-dimensional graphs, for the functions with two variables, are given in Figure 2. In drawing the graphs, in the cases of test problems having matrices and vectors, like the Shekel and Hartman functions, the appropriate components are taken. For the fifth test problem, the feasible region is set to be between negative ten and ten so that the behavior of the optimal solutions can be seen clearly.

In addition, the contour graph, which is the image of the objective function on the two-dimensional decision space, is presented in Figure 3. Since the contour numbers, which are the objective function values, are included in the contour graph, it properly demonstrates the landscape of the objective function based on the values in the decision space.

3.2. Simulation Results

To run nm-PPA on the selected problems, the algorithm parameters need to be set as appropriately as possible, since the performance of the algorithm depends on the parameters. Choosing appropriate values for the parameters is itself an optimization problem which needs to be dealt with. For instance, if a problem has k optimal solutions, both local and global, assigning n best preys with n < k will not help to find all the solutions, but if we assign n ≥ k, there is a possibility of getting the majority of the solutions, if not all. A higher dimension needs more exploration in general; hence, the probability of follow-up is set to be inversely related to the dimension. The number of initial solutions also needs to increase with the dimension and with the size of the feasible region. The local and exploration step lengths should be determined based on the size of the feasible region, as too big a step length causes the solutions to jump out of the feasible region and too short a step length affects the speed of convergence. Furthermore, the number of random directions, q, should be directly proportional to the dimension of the problem. The numbers of best preys, n, and predators, m, should satisfy n + m < N, where N is the total number of solutions; assigning these values depends on the number of solutions existing in the problem and the number of solutions we are interested in finding, as well as on the degree of exploration and exploitation that we want to apply. The termination criterion is set to be a maximum number of iterations, which is also set to increase with the dimension, giving the solutions more chance to explore the solution space.
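As an illustration of this guideline, the rule of thumb below maps the dimension and the width of the feasible region to a parameter suggestion; the specific constants, and the function itself, are our own assumptions rather than values prescribed by the paper (the values actually used in the simulation are those of Table 2).

```python
def suggest_parameters(dim, region_width, known_optima=None):
    # Qualitative guideline: the follow-up probability decreases with the
    # dimension; the population size, number of random directions, and
    # iteration budget grow with the dimension; and the step lengths scale
    # with the size of the feasible region.
    params = {
        "p_f": max(0.2, min(0.9, 1.0 / dim + 0.4)),  # inverse relation to dim
        "N": 20 * dim,                               # initial solutions
        "q": 5 * dim,                                # random directions
        "lam_max": 0.10 * region_width,              # exploration step length
        "lam_min": 0.01 * region_width,              # exploitation step length
        "max_iter": 100 * dim,                       # termination criterion
    }
    # If k optima are sought, use at least k best preys while keeping n + m < N.
    k = known_optima if known_optima is not None else 1
    params["n"] = min(k, params["N"] // 2)
    params["m"] = max(1, dim // 2)
    return params
```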

Considering all of the ideas discussed and the recommendations in [6, 16], the algorithm parameters for the simulation have been set as given in Table 2.

3.2.1. Based on Single Trial

It is clear that metaheuristic algorithms are probabilistic solution methods, so one can get different results in different runs of the algorithm, even when using the same parameter setting. However, in this section, we present the results of a single run of the algorithm to demonstrate how it works; a detailed analysis with multiple runs is given in the next section. After running the algorithm once for the specified number of iterations, the solutions for each of the problems have been recorded as follows:

(1) First problem: after running nm-PPA with the specified values of the parameters, the best solution found has a functional value of −186.7301. Furthermore, thirty-six best preys converge to sixty out of eighty global solutions. The absolute difference between the best functional value and the least among the best solutions is 0.3078. The locations of the solutions at the beginning and at the end of the algorithm run are given in Figure 4.

(2) Second problem: the best solution found has a functional value of 0.0005. nm-PPA has also managed to approximate three solutions: one global optimum and two local solutions. Nine of the initial solutions converge around the global solution with a maximum absolute functional difference of 0.0009. Eight of the solutions approximate a local solution, with the best approximation having a functional value of 1.1966. The other three best preys converge around another local solution, the best one having a functional value of 2.1208. The locations of the solutions at the beginning and the end of the run of the algorithm are given in Figure 5.

(3) Third problem: twenty of the solutions have converged to the flat surface which contains the global optimum solutions, with functional value of −11.47608.

(4) Fourth problem: the optimum solution after running nm-PPA has a functional value of −10.5356, which is even better than the solutions recorded in the literature. Furthermore, ten of the solutions converge around the global optimum. One of the solutions converges to a point with functional value of −5.1752, which is almost the local minimum value of −5.17518. Eight of the other solutions converge to another local solution, with the best and worst functional values of −5.1752 and −5.1262, respectively.

(5) Fifth problem: the best solution found has a functional value of 0.0008. In addition to this, twenty-four of the solutions converge to nineteen global solutions, with a maximum absolute functional difference from the best solution of 0.000085.

(6) Sixth problem: the optimum is found at (0.2028, 0.1496, 0.4742, 0.2759, 0.3114, 0.6540) with a functional value of −3.3066. Furthermore, forty of the solutions are aggregated around the global solution, with absolute functional value differences from the optimum ranging between 0.00001 and 0.002.

(7) Seventh problem: the optimum functional value found is 0.0000009 at (0.00004, −0.00001, −0.00001, 0.00000, 0.00002, −0.00001, −0.00004). Furthermore, 250 of the solutions give solutions whose functional values are at most 0.00114 and which lie around the optimal solutions.

(8) Eighth problem: the optimum solution after running nm-PPA is found at (0.0183, 0.0182, 0.3442, 0.0188, 0.0188, 0.0181, 0.0182, 0.0183, 0.3451, 0.0181, 0.1564, 0.3435).
Furthermore, the hundred solutions converge to eighty-seven global solutions.

The performance and progress of the algorithm per iteration are given in Figure 6.

3.2.2. Based on Multiple Trials

In addition to the single trial, nm-PPA has been run 100 times for each of the problems, and the average and standard deviation of the functional value, as well as the number of optimal solutions discovered, are recorded and studied. For problems with finitely many global and local solutions, the performance of the algorithm is measured by how many of the solutions it converges to. Finding multiple solutions is a result of a good exploration property of the algorithm, and how well the solutions are approximated depends on the exploitation property. In order to measure the performance of the algorithm, let us define measurement metrics called the global success rate and the total success rate. The global success rate (GS) is the number of global solutions approximated per the total number of global solutions, as given in the following:

$$GS = \frac{g_s}{\min\{n, N_G\}}, \qquad (9)$$

where g_s is the number of global solutions found in the run of the algorithm, n is the number of best preys, and N_G is the number of global solutions existing. Similarly, the total success rate (TS) can be measured using the following, where t_s is the number of solutions, both local and global, found and N_T is the total number of solutions existing, both global and local:

$$TS = \frac{t_s}{\min\{n, N_T\}}. \qquad (10)$$

Furthermore, under multiple trials, the success rate may vary from trial to trial; hence, the average success rate and also the standard deviation of these success rates will be discussed. However, the success rate can be subjective, as the question "how near should a solution be to the global solution so that it will be counted in g_s?" may have different answers. Hence, the success rate should be accompanied by a tolerance, so that if the absolute error of the approximate solution from the exact solution is at most the given tolerance (ε), then it will be counted as a success. For the simulation, ε is set based on the single trial results in the previous section. In addition, a criterion needs to be set to distinguish whether two solutions within the tolerance should be taken as different solutions or as the same approximate solution. This is done by setting a distance tolerance between solutions, δ. If the distance between two solutions is under δ, then they are considered as approximating the same solution; if the distance between them is larger than δ, then the two solutions are considered as approximating different solutions. For the simulation, δ is chosen based on the simulation in the previous section and also by considering the size of the feasible region.
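Under the reconstruction of GS and TS above, the bookkeeping with the two tolerances can be sketched as follows; matching found points to known optima with the functional tolerance eps and the distance tolerance delta, as well as all names, are our own illustrative choices.

```python
import numpy as np

def count_hits(found, optima, f, eps, delta):
    # An optimum counts as approximated if some found point has functional
    # error at most eps and lies within distance delta of it; points closer
    # than delta to the same optimum are counted as one approximate solution.
    hit = set()
    for x in found:
        x = np.asarray(x, dtype=float)
        for i, (x_star, f_star) in enumerate(optima):
            if (abs(f(x) - f_star) <= eps
                    and np.linalg.norm(x - np.asarray(x_star)) <= delta):
                hit.add(i)
    return len(hit)

def success_rates(found, global_opts, all_opts, f, eps, delta, n_best):
    # GS and TS for a single run, with n_best the number of best preys, as
    # in (9) and (10); global_opts and all_opts are lists of (x*, f*) pairs.
    gs = count_hits(found, global_opts, f, eps, delta) / min(n_best, len(global_opts))
    ts = count_hits(found, all_opts, f, eps, delta) / min(n_best, len(all_opts))
    return gs, ts
```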

Hence, based on this, Table 3 presents the simulation results of the 100 trials of the eight benchmark problems, where GS and TS are estimated based on the expected values.

From Table 3, the proposed approach manages to find multiple global and local solutions. The global success rate of 70% implies that the algorithm manages to converge to a number of global solutions. For instance, in the first test problem, on average, it has converged to 15 of the 18 global solutions along with about 18 local solutions; hence, 33 of the best preys have converged to different solutions. In the second, the fourth, the sixth, and the seventh problems, the algorithm converges to all local as well as global solutions. In the case of the third problem, the 20 best preys have converged to 20 of the global solutions, on average. Similarly, in the fifth and eighth problems, 18 and 68 of the global solutions are found, respectively.

Next, the termination criterion is changed from a maximum number of iterations to a level of achievement based on a given tolerance. This means that if a tolerance, ε, is given, then the algorithm terminates when the functional value becomes at most f* + ε, where f* is the functional value of the global solution. The simulation results under this termination criterion, with a hundred trials using an Intel Core i3-3110M laptop machine at 2.4 GHz with a 64-bit operating system, are given in Table 4.

The simulation results show that nm-PPA outperforms PPA in all the test problems, with a slightly higher processing time. Since the number of best preys increases, the time needed to determine the best direction increases, which in turn increases the overall running time of the algorithm. In addition, nm-PPA converges in fewer iterations than PPA in all the test problems. Hence, the proposed approach is promising in escaping local solutions and also in achieving multiple solutions within a single run.

4. Conclusion

In this paper, the recently introduced metaheuristic algorithm called prey-predator algorithm (PPA) has been extended to a new version called nm-PPA. PPA is inspired by how a predator hunts its prey and how the prey runs away and tries to survive. The algorithm has been extended by incorporating a group hunting behavior; hence, in nm-PPA, there are multiple predators as well as multiple best preys. By adjusting the numbers of best preys and predators, it is possible to control the focus of the search mechanism between exploration and exploitation of the search space. Eight benchmark problems with different properties, continuous, discontinuous, differentiable, nondifferentiable, separable, partially separable, nonseparable, scalable, and nonscalable, are selected. The simulation results show that nm-PPA is indeed a promising algorithm for multimodal optimization problems and has the advantage of not being trapped in local solutions; rather, it achieves multiple solutions, both global and local, within a single run. In addition, a way of measuring the global as well as the total success of algorithms is introduced and used: global success measures the success of an algorithm in finding global optimum solutions, and total success measures its success in finding global as well as local solutions of a problem. The degree of exploration and exploitation can be adjusted either by varying the number of predators and best preys or by adjusting the probability of follow-up; the advantages and disadvantages of these two methods need further study. Another possible future work is to diversify the best preys so that they will be forced to converge quickly to global as well as local solutions all over the solution space. Parallel nm-PPA also needs exploration, as there are higher dimensional and challenging problems which need a high degree of exploration and, at the same time, a good degree of exploitation. Modifying nm-PPA for multiobjective and multilevel optimization problems is another possible issue, along with testing it on real problems and hybridizing the algorithm.

Competing Interests

The authors declare that they have no competing interests.