Discrete Dynamics in Nature and Society
Volume 2012, Article ID 638275, 24 pages
http://dx.doi.org/10.1155/2012/638275
Research Article

An Algorithm for Global Optimization Inspired by Collective Animal Behavior

CUCEI Departamento de Electrónica, Universidad de Guadalajara, Avenida Revolución 1500, 44100 Guadalajara, JAL, Mexico

Received 21 September 2011; Revised 15 November 2011; Accepted 16 November 2011

Academic Editor: Carlo Piccardi

Copyright © 2012 Erik Cuevas et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A metaheuristic algorithm for global optimization called collective animal behavior (CAB) is introduced. Animal groups, such as schools of fish, flocks of birds, swarms of locusts, and herds of wildebeest, exhibit a variety of behaviors including swarming about a food source, milling around a central location, or migrating over large distances in aligned groups. These collective behaviors are often advantageous to groups, allowing them to increase their harvesting efficiency, to follow better migration routes, to improve their aerodynamics, and to avoid predation. In the proposed algorithm, the searcher agents emulate a group of animals which interact with each other based on the biological laws of collective motion. The proposed method has been compared to other well-known optimization algorithms. The results show good performance of the proposed method when searching for a global optimum of several benchmark functions.

1. Introduction

Global optimization (GO) is a field with applications in many areas of science, engineering, economics, and others, where mathematical modelling is used [1]. In general, the goal is to find a global optimum of an objective function defined in a given search space. Global optimization algorithms are usually broadly divided into deterministic and metaheuristic [2]. Since deterministic methods only provide a theoretical guarantee of locating a local minimum of the objective function, they often face great difficulties in solving global optimization problems [3]. On the other hand, metaheuristic methods are usually faster in locating a global optimum than deterministic ones [4]. Moreover, metaheuristic methods adapt better to black-box formulations and extremely ill-behaved functions whereas deterministic methods usually rest on at least some theoretical assumptions about the problem formulation and its analytical properties (such as Lipschitz continuity) [5].

Several metaheuristic algorithms have been developed by combining rules and randomness that mimic several phenomena. Such phenomena include evolutionary processes, for example, the evolutionary algorithms proposed by Fogel et al. [6], De Jong [7], and Koza [8], the genetic algorithm (GA) proposed by Holland [9] and Goldberg [10], and the artificial immune systems proposed by de Castro and Von Zuben [11]. Other methods mimic physical processes, such as the simulated annealing proposed by Kirkpatrick et al. [12], the electromagnetism-like algorithm proposed by İlker et al. [13], and the gravitational search algorithm proposed by Rashedi et al. [14], or the musical process of searching for a perfect state of harmony, proposed by Geem et al. [15], Lee and Geem [16], and Geem [17].

Many studies have been inspired by animal behavior phenomena for developing optimization techniques. For instance, the particle swarm optimization (PSO) algorithm models the social behavior of bird flocking or fish schooling [18]. PSO consists of a swarm of particles which move towards the best positions seen so far, within a searchable space of possible solutions. Another behavior-inspired approach is the ant colony optimization (ACO) algorithm proposed by Dorigo et al. [19], which simulates the behavior of real ant colonies. The main features of the ACO algorithm are distributed computation, positive feedback, and constructive greedy search. Recently, a new metaheuristic approach based on animal hunting behavior has been proposed in [20]. Such an algorithm considers hunters as search positions and preys as potential solutions.

Just recently, the concept of individual-organization [21, 22] has been widely referenced to understand collective behavior of animals. The central principle of individual-organization is that simple repeating interactions between individuals can produce complex behavioral patterns at group level [21, 23, 24]. Such inspiration comes from behavioral patterns previously seen in several animal groups. Examples include ant pheromone trail networks, aggregation of cockroaches, and the migration of fish schools, all of which can be accurately described in terms of individuals following simple sets of rules [25]. Some examples of these rules [24, 26] are keeping the current position (or location) for best individuals, local attraction or repulsion, random movements, and competition for the space within a determined distance.

On the other hand, new studies [27–29] have also shown the existence of collective memory in animal groups. The presence of such memory establishes that the previous history of the group structure influences the collective behavior exhibited in future stages. According to such a principle, it is possible to model complex collective behaviors by using simple individual rules and configuring a general memory.

In this paper, a new optimization algorithm inspired by the collective animal behavior is proposed. In this algorithm, the searcher agents emulate a group of animals that interact with each other based on simple behavioral rules which are modeled as mathematical operators. Such operations are applied to each agent considering that the complete group has a memory storing their own best positions seen so far, by using a competition principle. The proposed approach has been compared to other well-known optimization methods. The results confirm a high performance of the proposed method for solving various benchmark functions.

This paper is organized as follows. In Section 2, we introduce basic biological aspects of the algorithm. In Section 3, the novel CAB algorithm and its characteristics are both described. Section 4 presents the experimental results and the comparative study. Finally, in Section 5, conclusions are given.

2. Biological Fundamentals

The remarkable collective behavior of organisms such as swarming ants, schooling fish, and flocking birds has long captivated the attention of naturalists and scientists. Despite a long history of scientific research, the relationship between individuals and group-level properties has just recently begun to be deciphered [30].

Grouping individuals often have to make rapid decisions about where to move or what behavior to perform in uncertain and dangerous environments. However, each individual typically has only a relatively local sensing ability [31]. Groups are, therefore, often composed of individuals that differ with respect to their informational status and individuals are usually not aware of the informational state of others [32], such as whether they are knowledgeable about a pertinent resource or about a threat.

Animal groups are based on a hierarchic structure [33] which ranks individuals according to a fitness principle called dominance [34], that is, the domain some individuals hold within a group when competition for resources leads to confrontation. Several studies [35, 36] have found that such animal behavior leads to more stable groups with better cohesion properties among individuals.

Recent studies have begun to elucidate how repeated interactions among grouping animals scale to collective behavior. They have remarkably revealed that collective decision-making mechanisms across a wide range of animal group types, from insects to birds (and even among humans in certain circumstances) seem to share similar functional characteristics [21, 25, 37]. Furthermore, at a certain level of description, collective decision-making by organisms shares essential common features such as a general memory. Although some differences may arise, there are good reasons to increase communication between researchers working in collective animal behavior and those involved in cognitive science [24].

Despite the variety of behaviors and motions of animal groups, it is possible that many of the different collective behavioral patterns are generated by simple rules followed by individual group members. Several models have been developed; one of them, known as the self-propelled particle (SPP) model, attempts to capture the collective behavior of animal groups in terms of interactions between group members which follow a diffusion process [38–41].

On the other hand, following a biological approach, Couzin and Krause [24, 25] have proposed a model in which individual animals follow simple rules of thumb: (1) keep the current position (or location) for best individuals, (2) move from or to nearby neighbors (local attraction or repulsion), (3) move randomly, and (4) compete for the space within a determined distance. Each individual thus admits three different movements: attraction, repulsion, or random, and holds two kinds of states: preserve the position or compete for a determined position. In the model, the movement executed by each individual is decided randomly (according to an internal motivation). On the other hand, the states follow a fixed set of criteria.

The dynamical spatial structure of an animal group can be explained in terms of its history [36]. Despite such a fact, the majority of studies have failed to consider the existence of memory in behavioral models. However, recent research [27, 42] has also shown the existence of collective memory in animal groups. The presence of such memory establishes that the previous history of the group structure influences the collective behavior exhibited in future stages. Such memory can contain the location of special group members (the dominant individuals) or the averaged movements produced by the group.

According to these new developments, it is possible to model complex collective behaviors by using simple individual rules and setting a general memory. In this work, the behavioral model of animal groups inspires the definition of novel evolutionary operators which outline the CAB algorithm. A memory is incorporated to store best animal positions (best solutions) considering a competition-dominance mechanism.

3. Collective Animal Behavior Algorithm (CAB)

The CAB algorithm assumes the existence of a set of operations that resembles the interaction rules that model the collective animal behavior. In the approach, each solution within the search space represents an animal position. The “fitness value” refers to the animal dominance with respect to the group. The complete process mimics the collective animal behavior.

The approach in this paper implements a memory for storing best solutions (animal positions), mimicking the aforementioned biological process. Such memory is divided into two different elements, one for maintaining the best locations at each generation () and the other for storing the best historical positions during the complete evolutionary process ().

3.1. Description of the CAB Algorithm

Following other metaheuristic approaches, the CAB algorithm is an iterative process that starts by initializing the population randomly (generated random solutions or animal positions). Then, the following four operations are applied until a termination criterion is met (i.e., the iteration number):
(1) keep the position of the best individuals;
(2) move from or to nearby neighbors (local attraction and repulsion);
(3) move randomly;
(4) compete for the space within a determined distance (update the memory).

3.1.1. Initializing the Population

The algorithm begins by initializing a set of animal positions (). Each animal position is a -dimensional vector containing parameter values to be optimized. Such values are randomly and uniformly distributed between the prespecified lower initial parameter bound and the upper initial parameter bound, with and being the parameter and individual indexes, respectively. Hence, is the th parameter of the th individual.
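As an illustration, the initialization step can be sketched in Python (the function name, the use of NumPy, and the bound representation are assumptions for illustration; the paper does not prescribe an implementation):

```python
import numpy as np

def init_population(n, d, low, high, rng=None):
    """Draw n animal positions uniformly between the lower and upper
    parameter bounds (one bound pair per dimension)."""
    rng = rng if rng is not None else np.random.default_rng()
    low = np.asarray(low, dtype=float)
    high = np.asarray(high, dtype=float)
    # Row i is animal position i; column j is decision parameter j.
    return low + rng.random((n, d)) * (high - low)
```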

All the initial positions are sorted according to the fitness function (dominance) to form a new individual set , so that the best positions can be chosen and stored in the memories and . The fact that both memories share the same information is only allowed at this initial stage.

3.1.2. Keep the Position of the Best Individuals

Analogous to the biological metaphor, this behavioral rule, typical of animal groups, is implemented as an evolutionary operation in our approach. In this operation, the first elements (), of the new animal position set , are generated. Such positions are computed from the values contained in the historical memory, considering a slight random perturbation around them. This operation can be modeled as follows: where represents the -element of the historical memory and is a random vector with a small enough length.
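This operation can be sketched as follows (a minimal illustration; the perturbation magnitude `scale` and all names are assumed, since the paper only requires the random vector to be "small enough"):

```python
import numpy as np

def keep_best_positions(mem_h, scale=1e-3, rng=None):
    """Build the first B positions of the new set from the historical
    memory: each memory element plus a small uniform perturbation
    (the perturbation magnitude 'scale' is an assumed value)."""
    rng = rng if rng is not None else np.random.default_rng()
    mem_h = np.asarray(mem_h, dtype=float)
    v = rng.uniform(-scale, scale, size=mem_h.shape)  # small random vector
    return mem_h + v
```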

3.1.3. Move from or to Nearby Neighbors

From the biological inspiration, animals experience a random local attraction or repulsion according to an internal motivation. Therefore, we have implemented new evolutionary operators that mimic such a biological pattern. For this operation, a uniform random number is generated within the range . If is less than a threshold , a determined individual position is attracted/repelled considering the nearest best historical position within the group (i.e., the nearest position in ); otherwise, it is attracted/repelled to/from the nearest best location within the group for the current generation (i.e., the nearest position in ). Therefore, such an operation can be modeled as follows: where and represent the nearest elements of and to , while is a random number between . Therefore, if , the individual position is attracted to the position or ; otherwise, such movement is considered a repulsion.
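The rule above can be sketched as follows (an illustrative interpretation of the verbal description; the parameter name `h_prob`, the sign convention for attraction versus repulsion, and the Euclidean nearest-neighbor choice are assumptions):

```python
import numpy as np

def attract_repel(x, mem_h, mem_g, h_prob=0.8, rng=None):
    """Move x toward (attraction) or away from (repulsion) the nearest
    memory element.  With probability h_prob the historical memory is
    used, otherwise the current-generation memory.  The attraction or
    repulsion choice and the step length are both random, following the
    paper's verbal description (exact formula assumed)."""
    rng = rng if rng is not None else np.random.default_rng()
    mem = np.asarray(mem_h if rng.random() < h_prob else mem_g, dtype=float)
    nearest = mem[np.argmin(np.linalg.norm(mem - x, axis=1))]
    r = rng.random()                             # step length in [0, 1)
    sign = 1.0 if rng.random() < 0.5 else -1.0   # attraction vs repulsion
    return x + sign * r * (nearest - x)
```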

3.1.4. Move Randomly

Following the biological model, under some probability , an animal randomly changes its position. Such a behavioral rule is implemented considering the following expression: with and a random vector defined in the search space. This operator is similar to reinitializing the particle in a random position, as it is done by (3.1).
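A minimal sketch of this rule follows (the parameter name `p` stands for the paper's probability, whose value is an assumption here):

```python
import numpy as np

def random_move(x, low, high, p=0.1, rng=None):
    """With probability p, relocate the animal to a fresh random point of
    the search space, exactly as in the initialization step; otherwise
    keep its current position."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.asarray(x, dtype=float)
    low, high = np.asarray(low, float), np.asarray(high, float)
    if rng.random() < p:
        return low + rng.random(x.shape) * (high - low)
    return x
```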

3.1.5. Compete for the Space within a Determined Distance (Update the Memory)

Once the three preceding operations (keeping the position of the best individuals, moving from or to nearby neighbors, and moving randomly) have been applied to all animal positions, generating new positions, it is necessary to update the memory.

In order to update memory , the concept of dominance is used. Animals that interact within the group maintain a minimum distance among them. Such distance, which is defined as in the context of the CAB algorithm, depends on how aggressively the animal behaves [34, 42]. Hence, when two animals confront each other inside such a distance, the most dominant individual prevails while the other withdraws. Figure 1 depicts the process.

Figure 1: Dominance concept as presented when two animals confront each other within a distance.

In the proposed algorithm, the historical memory is updated considering the following procedure:
(1) the elements of and are merged into ();
(2) each element of the memory is compared pairwise to the remaining memory elements (); if the distance between both elements is less than , the element with the better fitness value prevails while the other is removed;
(3) from the elements resulting from Step 2, the best ones are selected to build the new .
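The update procedure can be sketched as follows (an illustrative greedy implementation; processing candidates best-first so that the dominant element of each close pair always survives is an implementation choice, and minimization is assumed):

```python
import numpy as np

def update_memory(mem_h, mem_g, fitness, rho, b):
    """Dominance-based update of the historical memory.

    Merge both memories, remove the worse of any pair of elements closer
    than rho (the dominant individual prevails), and keep the b best
    survivors.  'fitness' maps a position to a value to be minimized."""
    merged = np.vstack([np.asarray(mem_h, float), np.asarray(mem_g, float)])
    order = np.argsort([fitness(p) for p in merged])   # best (lowest) first
    survivors = []
    for idx in order:
        p = merged[idx]
        # A candidate survives only if no better survivor sits within rho.
        if all(np.linalg.norm(p - q) >= rho for q in survivors):
            survivors.append(p)
    return np.array(survivors[:b])
```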

The use of the dominance principle in CAB allows considering as memory elements those solutions that hold the best fitness value within the region which has been defined by the distance.

The procedure improves the exploration ability by incorporating information regarding previously found potential solutions during the algorithm’s evolution. In general, the value of depends on the size of the search space. A large value of improves the exploration ability of the algorithm, although it yields a lower convergence rate.

In order to calculate the value, an empirical model has been developed after several conducted experiments. Such a model is defined by the following equation: where and represent the prespecified lower and upper bounds of the -parameter, respectively, within an -dimensional space.

3.1.6. Computational Procedure

The computational procedure for the proposed algorithm can be summarized as follows:

Step 1. Set the parameters , , , , and .

Step 2. Generate randomly the position set using (3.1).

Step 3. Sort according to the objective function (dominance) to build.

Step 4. Choose the first positions of and store them into the memory.

Step 5. Update according to Section 3.1.5 (during the first iteration:).

Step 6. Generate the first positions of the new solution set . Such positions correspond to the elements of , making a slight random perturbation around them, with being a random vector of a small enough length.

Step 7. Generate the rest of the elements using the attraction, repulsion, and random movements.

for  :
    if  then
        attraction and repulsion movement
        { if  then
          else if  }
    else
        random movement
        {  }
end

where

Step 8. If is completed, the process is finished; otherwise, go back to Step 3.

The best value in represents the global solution for the optimization problem.
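The complete procedure (Steps 1–8) can be condensed into a minimal, self-contained sketch for minimization. All parameter names, the perturbation scale, the attraction/repulsion formula, and a fixed user-supplied dominance distance `rho` (in place of the paper's empirical model) are illustrative assumptions:

```python
import numpy as np

def cab(fitness, low, high, n=50, b=10, h=0.8, p=0.1, rho=0.1,
        iters=200, seed=0):
    """Minimal CAB sketch: n animals, b memory slots, attraction
    threshold h, random-move probability p, dominance distance rho."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    d = low.size
    pop = low + rng.random((n, d)) * (high - low)         # Step 2
    mem_h = None
    for _ in range(iters):
        pop = pop[np.argsort([fitness(x) for x in pop])]  # Step 3
        mem_g = pop[:b].copy()                            # Step 4
        # Step 5: dominance-based update (first iteration: M_h = M_g).
        merged = mem_g if mem_h is None else np.vstack([mem_h, mem_g])
        keep = []
        for i in np.argsort([fitness(x) for x in merged]):
            if all(np.linalg.norm(merged[i] - q) >= rho for q in keep):
                keep.append(merged[i])
        mem_h = np.array(keep[:b])
        # Step 6: perturb memory elements to create the first b positions.
        new = [m + rng.uniform(-1e-3, 1e-3, d) for m in mem_h]
        while len(new) < b:          # pad if dominance shrank the memory
            new.append(mem_h[0] + rng.uniform(-1e-3, 1e-3, d))
        # Step 7: attraction/repulsion or random movement for the rest.
        for x in pop[b:]:
            if rng.random() >= p:    # attraction / repulsion movement
                mem = mem_h if rng.random() < h else mem_g
                near = mem[np.argmin(np.linalg.norm(mem - x, axis=1))]
                sign = 1.0 if rng.random() < 0.5 else -1.0
                x = x + sign * rng.random() * (near - x)
            else:                    # random movement
                x = low + rng.random(d) * (high - low)
            new.append(np.clip(x, low, high))
        pop = np.array(new)          # Step 8: repeat until iters reached
    best = min(mem_h, key=fitness)
    return best, fitness(best)
```

For example, running this sketch on a two-dimensional sphere function in [-5, 5]^2 drives the best memory element toward the origin.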

4. Experimental Results

4.1. Test Suite and Experimental Setup

A comprehensive set of 31 functions, collected from [43–54], is used to test the performance of the proposed approach. Tables 12–17 in the appendix present the benchmark functions used in our experimental study. Such functions are classified into four different categories: unimodal test functions (Table 12), multimodal test functions (Table 13), multimodal test functions with fixed dimensions (Tables 14 and 15), and GKLS test functions (Tables 16 and 17). In such tables, is the dimension of the function, is the minimum value of the function, and is a subset of . The optimum locations () for functions in Tables 12 and 13 fall into , except for , and with falling into and in . A detailed description of all functions is given in the appendix.

To study the impact of parameters and (described in Sections 3.1.3 and 3.1.4) on the performance of CAB, different values have been tested on 5 typical functions. The maximum number of iterations is set to 1000. and are fixed to 50 and 10, respectively. The mean best function values () and the standard deviations () of CAB, averaged over 30 runs, for the different values of and are listed in Tables 1 and 2, respectively. The results suggest that a proper combination of parameter values can improve the performance of CAB and the quality of solutions. Table 1 shows the results of an experiment which consists of fixing and varying from 0.5 to 0.9. In a second test, the experimental setup is swapped, that is, and varies from 0.5 to 0.9. The best results in the experiments are highlighted in both tables. After the best value of parameters and has been experimentally determined (with a value of 0.8), it is kept for all tests throughout the paper.

Table 1: Results of CAB with variant values of parameter over 5 typical functions, with .
Table 2: Results of CAB with variant values of parameter over 5 typical functions, with .

In order to demonstrate that the CAB algorithm provides a better performance, it has been compared to other optimization approaches such as metaheuristic algorithms (Section 4.2) and continuous methods (Section 4.3). The results of such comparisons are explained in the following sections.

4.2. Performance Comparison with Other Metaheuristic Approaches

We have applied CAB to 31 test functions in order to compare its performance to other well-known metaheuristic algorithms such as the real genetic algorithm (RGA) [55], the PSO [18], the gravitational search algorithm (GSA) [56], and the differential evolution method (DE) [57]. In all cases, the population size is set to 50. The maximum iteration number is 1000 for functions in Tables 12 and 13, and 500 for functions in Tables 14 and 16. Such stop criteria have been chosen to maintain compatibility with similar works reported in [14, 58].

Parameter settings for each algorithm in the comparison are described as follows:
(1) RGA: according to [55], the approach uses arithmetic crossover, Gaussian mutation, and roulette wheel selection. The crossover and mutation probabilities have been set to 0.3 and 0.1, respectively.
(2) PSO: in the algorithm, , while the inertia factor () decreases linearly from 0.9 to 0.2.
(3) GSA: is set to 100 and is set to 20; is the total number of iterations (set to 1000 for functions and to 500 for functions ). Besides, is set to 50 (total number of agents) and is decreased linearly to 1. Such values have been found to be the best configuration according to [56].
(4) DE: the DE/Rand/1 scheme is employed. The parameter settings follow the instructions in [57]. The crossover probability is and the weighting factor is .

Several experimental tests have been developed to compare the performance of the CAB algorithm against other metaheuristic algorithms. The experiments consider the following function types:
(1) unimodal test functions (Table 12);
(2) multimodal test functions (Table 13);
(3) multimodal test functions with fixed dimensions (Tables 14 and 15);
(4) GKLS test functions (Tables 16 and 17).

4.2.1. Unimodal Test Functions

In this test, the performance of the CAB algorithm is compared to RGA, PSO, GSA, and DE, considering functions with only one minimum/maximum. Such a function type is represented by functions to in Table 12. The results, over 30 runs, are reported in Table 3 considering the following performance indexes: the average best-so-far solution, the average mean fitness function, and the median of the best solution in the last iteration. The best result for each function is boldfaced. According to this table, CAB provides better results than RGA, PSO, GSA, and DE for all functions. In particular, the results show considerable precision differences which are directly related to the different local operators of each metaheuristic algorithm. Moreover, the good convergence rate of CAB can be observed from Figure 2. According to this figure, CAB tends to find the global optimum faster than the other algorithms, offering the highest convergence rate.

Table 3: Minimization result of benchmark functions in Table 12 with . Maximum number of iterations = 1000.
Figure 2: Performance comparison of RGA, PSO, GSA, DE, and CAB for minimization of (a) and (b) considering .

In order to statistically analyze the results in Table 3, a nonparametric significance test known as Wilcoxon’s rank test has been conducted [59, 60], which allows assessing result differences between two related methods. The analysis is performed considering a 5% significance level over the “average best-so-far” data. Table 4 reports the values produced by Wilcoxon’s test for the pairwise comparison of the “average best-so-far” results of four groups: CAB versus RGA, CAB versus PSO, CAB versus GSA, and CAB versus DE. The null hypothesis assumes that there is no significant difference between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the “average best-so-far” values of both approaches. All values reported in the table are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis, indicating that the CAB results are statistically significant and have not occurred by coincidence (i.e., due to the normal noise contained in the process).
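A significance check of this kind can be reproduced with SciPy's implementation of the Wilcoxon signed-rank test (the paired values below are illustrative data, not the paper's measurements):

```python
from scipy.stats import wilcoxon

# Paired "average best-so-far" values over eight runs (illustrative data).
cab_scores = [0.012, 0.010, 0.015, 0.011, 0.013, 0.009, 0.014, 0.012]
rival_scores = [0.90, 0.85, 1.10, 0.95, 0.88, 1.02, 0.97, 0.91]

# Null hypothesis: no significant difference between the paired samples.
stat, p_value = wilcoxon(cab_scores, rival_scores)
significant = p_value < 0.05   # reject the null at the 5% level
```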

Table 4: values produced by Wilcoxon’s test comparing CAB versus RGA, PSO, GSA, and DE over the “average best-so-far” values from Table 3.
4.2.2. Multimodal Test Functions

Multimodal functions, in contrast to unimodal ones, have many local minima/maxima and are, in general, more difficult to optimize. In this section, the performance of the CAB algorithm is compared to other metaheuristic algorithms considering multimodal functions. Such a comparison reflects the algorithm’s ability to escape from poor local optima and to locate a near-global optimum. We have run experiments on the functions of Table 13, where the number of local minima increases exponentially as the dimension of the function increases. The dimension of these functions is set to 30. The results are averaged over 30 runs, reporting the following performance indexes in Table 5: the average best-so-far solution, the average mean fitness function, and the median of the best solution in the last iteration (the best result for each function is highlighted). Likewise, values of the Wilcoxon signed-rank test over 30 independent runs are listed in Table 6.

Table 5: Minimization of benchmark functions in Table 13 with . Maximum number of iterations = 1000.
Table 6: values produced by Wilcoxon’s test comparing CAB versus RGA, PSO, GSA, and DE over the “average best-so-far” values from Table 5.

For , , and , CAB yields a much better solution than the others. However, for functions and , CAB produces results similar to RGA and GSA, respectively. The Wilcoxon rank test results, presented in Table 6, show that CAB performed better than RGA, PSO, GSA, and DE on the four problems , whereas, from a statistical viewpoint, there is no difference in results between CAB and RGA for and between CAB and GSA for . Evolutions of the “average best-so-far” solutions over 30 runs for functions and are shown in Figure 3.

Figure 3: Performance comparison of RGA, PSO, GSA, DE, and CAB for minimization of (a) and (b) considering .
4.2.3. Multimodal Test Functions with Fixed Dimensions

In the following experiments, the performance of the CAB algorithm is compared to RGA, PSO, GSA, and DE considering functions which are extensively reported in the metaheuristic-based optimization literature [49–54]. Such functions, represented by to in Tables 14 and 15, are all multimodal with fixed dimensions. Table 7 shows the outcome of such a process. The results, presented in Table 7, show how metaheuristic algorithms maintain a similar average performance when they are applied to low-dimensional functions [58]. They show that RGA, PSO, and GSA have similar solutions, with performances that are nearly the same, as can be seen in Figure 4.

Table 7: Minimization result of benchmark functions in Table 14 with . Maximum number of iterations = 500.
Figure 4: Performance comparison of RGA, PSO, GSA, DE, and CAB for minimization of (a) and (b).
4.2.4. GKLS Test Functions

This section considers GKLS functions, which are built using the GKLS-generator described in [54]. In the construction, the generator uses a set of user-defined parameters for building a multimodal function with known local and global minima. For conducting the numerical experiments, eight GKLS functions have been employed, which are defined by to . Details of their characteristics and parameters for their construction are listed in Tables 16 and 17. Results, over 30 runs, are reported in Table 8 (the best result for each test function is boldfaced). According to this table, CAB provides better results than RGA, PSO, GSA, and DE for all GKLS functions, in particular for functions of higher dimensions (). Such performance is directly related to a better tradeoff between exploration and exploitation which is produced by the CAB operators. Likewise, as can be observed from Figure 5, the CAB algorithm possesses better convergence rates in comparison to the other metaheuristic algorithms.

Table 8: Minimization result of GKLS functions in Table 16. Maximum number of iterations = 500.
Figure 5: Performance comparison of RGA, PSO, GSA, DE, and CAB for minimization of the GKLS functions: (a) and (b).

In order to statistically validate the results of Table 8, Wilcoxon’s test has been conducted. Table 9 shows the values obtained after applying such analysis over 30 independent executions. Since all values presented in Table 9 are less than 0.05, the CAB results are statistically better.

Table 9: values produced by Wilcoxon’s test comparing CAB versus RGA, PSO, GSA, and DE over the “average best-so-far” values from Table 8.
4.3. Comparison to Continuous Optimization Methods

Finally, the CAB algorithm is also compared to continuous optimization methods by considering some functions of the appendix. Since the BFGS algorithm [61] is one of the most effective continuous methods for solving unconstrained optimization problems, it has been considered as a basis for the algorithms used in the comparison.

In order to compare the performance of CAB to continuous optimization approaches, two different tests have been conducted. The first one evaluates the ability of BFGS and CAB to face unimodal optimization tasks (Section 4.3.1). The second experiment analyzes the performance of CAB and one BFGS-based approach when both are applied to multimodal functions (Section 4.3.2).

4.3.1. Local Optimization

In the first experiment, the performance of the BFGS and CAB algorithms over unimodal functions is compared. In unimodal functions, the global minimum matches the local minimum. Quasi-Newton methods, such as BFGS, have a fast rate of local convergence, although it depends on the problem’s dimension [62, 63]. Considering that not all unimodal functions of Table 12 fulfill the requirements imposed by gradient-based approaches (i.e., and are not differentiable while is nonsmooth), we have chosen the Rosenbrock function () as a benchmark.

In the test, both algorithms (BFGS and CAB) are employed to minimize , considering different dimensions. For the BFGS implementation, is considered as the initial matrix. Likewise, parameters and are set to 0.1 and 0.9, respectively. Although several performance criteria may define a comparison index, most can be applied to only one of the methods (such as the number of gradient evaluations). Therefore, this paper considers the elapsed time and the iteration number (once the minimum has been reached) as performance indexes in the comparison. In the case of BFGS, the termination condition is assumed as , with being the gradient of . On the other hand, the stopping criterion of CAB is met when no more changes to the best element in memory are registered. Table 10 presents the results of both algorithms considering several dimensions () of . In order to assure consistency, such results represent the averaged elapsed time (AET) and the averaged iteration number (AIN) over 30 different executions. It is additionally considered that at each execution both methods are initialized at a random point (inside the search space).
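An experiment of this kind can be reproduced with SciPy's BFGS implementation on the Rosenbrock function (a sketch of the setup only, not the paper's code; the starting point and the gradient tolerance are illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize, rosen

d = 10                                  # problem dimension
x0 = np.full(d, 1.2)                    # illustrative starting point

# 'gtol' plays the role of the gradient-norm termination condition.
res = minimize(rosen, x0, method="BFGS", options={"gtol": 1e-6})
iterations = res.nit                    # iterations until termination
value = res.fun                         # function value reached
```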

Table 10: Performance comparison between the BFGS and the CAB algorithm, considering different dimensions over the Rosenbrock function. The averaged elapsed time (AET) is given in seconds.

From Table 10, we can observe that the BFGS algorithm produces shorter elapsed times and fewer iterations than the CAB method. However, from , the CAB algorithm contends with similar results. The fact that the BFGS algorithm outperforms the CAB approach cannot be deemed a negative feature, considering the restrictions that the BFGS method imposes on the functions.

4.3.2. Global Optimization

Since the BFGS algorithm exploits only local information, it may easily get trapped into local optima restricting its use for global optimization. Thus, several methods based on continuous optimization approaches have been proposed. One of the most widely used techniques is the so-called multistart [64] (MS). In MS a point is randomly chosen from a feasible region as initial solution and subsequently a continuous optimization algorithm (local search) starts from it. Then, the process is repeated until a near global optimum is reached. The weakness of MS is that the same local minima may be found over and over again, wasting computational resources [65].
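The plain MS scheme can be sketched as follows (a generic multistart with a BFGS local search, not the ADAPT algorithm of [66]; the test function and the number of starts are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def multistart(f, low, high, n_starts=20, seed=0):
    """Plain multistart: run a BFGS local search from random starting
    points and keep the best local minimum found."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    best = None
    for _ in range(n_starts):
        x0 = low + rng.random(low.size) * (high - low)
        res = minimize(f, x0, method="BFGS")
        if best is None or res.fun < best.fun:
            best = res
    return best

# Illustrative multimodal function: minima at every corner of {-1, 1}^2.
f = lambda x: float(np.sum((x * x - 1.0) ** 2))
result = multistart(f, [-5.0, -5.0], [5.0, 5.0])
```

As the cited weakness suggests, many of the 20 local searches rediscover minima already found; the sketch simply discards the duplicates.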

In order to compare the performance of the CAB approach to continuous optimization methods in the context of global optimization, the MS algorithm ADAPT [66] has been chosen. ADAPT uses the BFGS algorithm as its local search method, which is iteratively executed. Thus, ADAPT possesses two different stopping criteria: one for the local procedure (BFGS) and another for the complete MS approach. For the comparison, the ADAPT algorithm has been implemented as suggested in [66].

In the second experiment, the performance of the ADAPT and CAB algorithms is compared over several multimodal functions described in Tables 13 and 14. The study considers the following performance indexes: the elapsed time, the iteration number, and the average best-so-far solution. In the case of the ADAPT algorithm, the iteration number is computed as the total number of iterations produced by all the local search procedures as the MS method operates. The termination condition of the ADAPT local search algorithm (BFGS) is assumed as ‖∇f(x)‖ ≤ ε, with ∇f being the gradient of f. On the other hand, the stopping criterion for the CAB and ADAPT algorithms is met when no more changes in the best solution are registered. Table 11 presents results from both algorithms considering several multimodal functions. In order to assure consistency, results represent the averaged elapsed time (AET), the averaged iteration number (AIN), and the average best-so-far solution (ABS) over 30 different executions. Table 11 additionally reports the averaged number of local searches (ALS) executed by ADAPT during the optimization.
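The stall-based stopping rule used here for CAB and ADAPT can be sketched generically. The `patience` value and the improvement tolerance are illustrative assumptions; the paper only states that iteration stops once the best solution no longer changes.

```python
def optimize_until_stalled(iterate, f, x0, patience=30, max_iter=100000):
    """Iterate an optimizer until the best-so-far value stops improving
    for `patience` consecutive iterations (the stall criterion the text
    describes for CAB and ADAPT)."""
    x = x0
    best = f(x0)
    stalled = 0
    iters = 0
    while stalled < patience and iters < max_iter:
        x = iterate(x)           # one step of the underlying optimizer
        fx = f(x)
        if fx < best - 1e-12:    # improvement tolerance (illustrative)
            best, stalled = fx, 0
        else:
            stalled += 1         # no change registered in the best solution
        iters += 1
    return best, iters


# Toy check: halving x minimizes f(x) = x^2 until improvements stall.
best, iters = optimize_until_stalled(lambda x: x / 2.0,
                                     lambda x: x * x,
                                     1.0, patience=10)
```

This kind of criterion makes the reported iteration numbers comparable across methods that expose no gradient, which is why the text applies it to both CAB and the outer loop of ADAPT.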

Table 11: Performance comparison between the ADAPT and the CAB algorithm considering different multimodal functions. The averaged elapsed time (AET) is reported in the format m’s (minutes’seconds).

Table 12: Unimodal test functions.

Table 13: Multimodal test functions.

Table 14: Multimodal test functions with fixed dimensions.

Table 15: Optimum locations of the functions in Table 14.

Table 16: GKLS test functions used.

Table 17: Optimum locations of the used GKLS functions.

Table 11 provides a summarized performance comparison between the ADAPT and CAB algorithms. Although both algorithms are able to acceptably locate the global minimum in both cases, there exist significant differences in the time required to reach it. When comparing the averaged elapsed time (AET) and the averaged iteration number (AIN) in Table 11, CAB uses significantly less time and fewer iterations to reach the global minimum than the ADAPT algorithm.

5. Conclusions

This article proposes a novel metaheuristic optimization algorithm called the collective animal behavior (CAB) algorithm. In CAB, the searcher agents emulate a group of animals that interact with each other according to simple behavioral rules which are modeled as mathematical operators. Such operations are applied to each agent, considering that the complete group has a memory storing the best positions seen so far, maintained by a competition principle.

The CAB algorithm presents two important characteristics: (1) CAB operators allow a better tradeoff between exploration and exploitation of the search space; (2) the use of its embedded memory incorporates information regarding previously found local minima (potential solutions) during the evolution process.

CAB has been experimentally tested on a challenging suite of 31 benchmark functions. In order to analyze the performance of the CAB algorithm, it has been compared to other well-known metaheuristic approaches. The experiments, statistically validated, demonstrate that CAB generally outperforms the other metaheuristic algorithms on most of the benchmark functions in terms of solution quality. In this study, the CAB algorithm has also been compared to algorithms based on continuous optimization methods. The results show that although continuous-based approaches outperform CAB on local optimization tasks, they face great difficulties in solving global optimization problems.

Appendix

List of Benchmark Functions

For more details see Tables 12, 13, 14, 15, 16, and 17.

References

  1. P. M. Pardalos, H. E. Romeijn, and H. Tuy, “Recent developments and trends in global optimization,” Journal of Computational and Applied Mathematics, vol. 124, no. 1-2, pp. 209–228, 2000.
  2. C. A. Floudas, I. G. Akrotirianakis, S. Caratzoulas, C. A. Meyer, and J. Kallrath, “Global optimization in the 21st century: advances and challenges,” Computers and Chemical Engineering, vol. 29, no. 6, pp. 1185–1202, 2005.
  3. Y. Ji, K.-C. Zhang, and S.-J. Qu, “A deterministic global optimization algorithm,” Applied Mathematics and Computation, vol. 185, no. 1, pp. 382–387, 2007.
  4. A. Georgieva and I. Jordanov, “Global optimization based on novel heuristics, low-discrepancy sequences and genetic algorithms,” European Journal of Operational Research, vol. 196, no. 2, pp. 413–422, 2009.
  5. D. Lera and Ya. D. Sergeyev, “Lipschitz and Hölder global optimization using space-filling curves,” Applied Numerical Mathematics, vol. 60, no. 1-2, pp. 115–129, 2010.
  6. L. J. Fogel, A. J. Owens, and M. J. Walsh, Artificial Intelligence through Simulated Evolution, John Wiley & Sons, Chichester, UK, 1966.
  7. K. De Jong, Analysis of the behavior of a class of genetic adaptive systems, Ph.D. thesis, University of Michigan, Ann Arbor, Mich, USA, 1975.
  8. J. R. Koza, “Genetic programming: a paradigm for genetically breeding populations of computer programs to solve problems,” Tech. Rep. STAN-CS-90-1314, Stanford University, Calif, USA, 1990.
  9. J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
  10. D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Boston, Mass, USA, 1989.
  11. L. N. de Castro and F. J. Von Zuben, “Artificial immune systems: part I—basic theory and applications,” Tech. Rep. TR-DCA 01/99, 1999.
  12. S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, 1983.
  13. Ş. İ. Birbil and S.-C. Fang, “An electromagnetism-like mechanism for global optimization,” Journal of Global Optimization, vol. 25, no. 3, pp. 263–282, 2003.
  14. E. Rashedi, H. Nezamabadi-Pour, and S. Saryazdi, “Filter modeling using gravitational search algorithm,” Engineering Applications of Artificial Intelligence, vol. 24, no. 1, pp. 117–122, 2011.
  15. Z. W. Geem, J. H. Kim, and G. V. Loganathan, “A new heuristic optimization algorithm: harmony search,” Simulation, vol. 76, no. 2, pp. 60–68, 2001.
  16. K. S. Lee and Z. W. Geem, “A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice,” Computer Methods in Applied Mechanics and Engineering, vol. 194, no. 36–38, pp. 3902–3933, 2005.
  17. Z. W. Geem, “Novel derivative of harmony search algorithm for discrete design variables,” Applied Mathematics and Computation, vol. 199, no. 1, pp. 223–230, 2008.
  18. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the 1995 IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, December 1995.
  19. M. Dorigo, V. Maniezzo, and A. Colorni, “Positive feedback as a search strategy,” Tech. Rep. 91-016, Politecnico di Milano, 1991.
  20. R. Oftadeh, M. J. Mahjoob, and M. Shariatpanahi, “A novel meta-heuristic optimization algorithm inspired by group hunting of animals: hunting search,” Computers and Mathematics with Applications, vol. 60, no. 7, pp. 2087–2098, 2010.
  21. D. J. T. Sumpter, “The principles of collective animal behaviour,” Philosophical Transactions of the Royal Society B, vol. 361, no. 1465, pp. 5–22, 2006.
  22. O. Petit and R. Bon, “Decision-making processes: the case of collective movements,” Behavioural Processes, vol. 84, no. 3, pp. 635–647, 2010.
  23. A. Kolpas, J. Moehlis, T. A. Frewen, and I. G. Kevrekidis, “Coarse analysis of collective motion with different communication mechanisms,” Mathematical Biosciences, vol. 214, no. 1-2, pp. 49–57, 2008.
  24. I. D. Couzin, “Collective cognition in animal groups,” Trends in Cognitive Sciences, vol. 13, no. 1, pp. 36–43, 2009.
  25. I. D. Couzin and J. Krause, “Self-organization and collective behavior in vertebrates,” Advances in the Study of Behavior, vol. 32, pp. 1–75, 2003.
  26. N. W. F. Bode, D. W. Franks, and A. Jamie Wood, “Making noise: emergent stochasticity in collective motion,” Journal of Theoretical Biology, vol. 267, no. 3, pp. 292–299, 2010.
  27. I. D. Couzin, J. Krause, R. James, G. D. Ruxton, and N. R. Franks, “Collective memory and spatial sorting in animal groups,” Journal of Theoretical Biology, vol. 218, no. 1, pp. 1–11, 2002.
  28. I. Couzin, “Collective minds,” Nature, vol. 445, no. 7129, p. 715, 2007.
  29. S. Bazazi, J. Buhl, J. J. Hale et al., “Collective motion and cannibalism in locust migratory bands,” Current Biology, vol. 18, no. 10, pp. 735–739, 2008.
  30. N. W. F. Bode, A. J. Wood, and D. W. Franks, “The impact of social networks on animal collective motion,” Animal Behaviour, vol. 82, no. 1, pp. 29–38, 2011.
  31. B. H. Lemasson, J. J. Anderson, and R. A. Goodwin, “Collective motion in animal groups from a neurobiological perspective: the adaptive benefits of dynamic sensory loads and selective attention,” Journal of Theoretical Biology, vol. 261, no. 4, pp. 501–510, 2009.
  32. M. Bourjade, B. Thierry, M. Maumy, and O. Petit, “Decision-making processes in the collective movements of Przewalski horses families Equus ferus Przewalskii: influences of the environment,” Ethology, vol. 115, pp. 321–330, 2009.
  33. A. Bang, S. Deshpande, A. Sumana, and R. Gadagkar, “Choosing an appropriate index to construct dominance hierarchies in animal societies: a comparison of three indices,” Animal Behaviour, vol. 79, no. 3, pp. 631–636, 2010.
  34. Y. Hsu, R. L. Earley, and L. L. Wolf, “Modulation of aggressive behaviour by fighting experience: mechanisms and contest outcomes,” Biological Reviews, vol. 81, no. 1, pp. 33–74, 2006.
  35. M. Broom, A. Koenig, and C. Borries, “Variation in dominance hierarchies among group-living animals: modeling stability and the likelihood of coalitions,” Behavioral Ecology, vol. 20, no. 4, pp. 844–855, 2009.
  36. K. L. Bayly, C. S. Evans, and A. Taylor, “Measuring social structure: a comparison of eight dominance indices,” Behavioural Processes, vol. 73, no. 1, pp. 1–12, 2006.
  37. L. Conradt and T. J. Roper, “Consensus decision making in animals,” Trends in Ecology and Evolution, vol. 20, no. 8, pp. 449–456, 2005.
  38. A. Okubo, “Dynamical aspects of animal grouping: swarms, schools, flocks, and herds,” Advances in Biophysics, vol. 22, pp. 1–94, 1986.
  39. C. W. Reynolds, “Flocks, herds and schools: a distributed behavioural model,” Computer Graphics, vol. 21, no. 4, pp. 25–34, 1987.
  40. S. Gueron, S. A. Levin, and D. I. Rubenstein, “The dynamics of herds: from individuals to aggregations,” Journal of Theoretical Biology, vol. 182, no. 1, pp. 85–98, 1996.
  41. A. Czirók and T. Vicsek, “Collective behavior of interacting self-propelled particles,” Physica A, vol. 281, no. 1, pp. 17–29, 2000.
  42. M. Ballerini, N. Cabibbo, R. Candelier et al., “Interaction ruling animal collective behavior depends on topological rather than metric distance: evidence from a field study,” Proceedings of the National Academy of Sciences of the United States of America, vol. 105, no. 4, pp. 1232–1237, 2008.
  43. M. M. Ali, C. Khompatraporn, and Z. B. Zabinsky, “A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems,” Journal of Global Optimization, vol. 31, no. 4, pp. 635–672, 2005.
  44. R. Chelouah and P. Siarry, “Continuous genetic algorithm designed for the global optimization of multimodal functions,” Journal of Heuristics, vol. 6, no. 2, pp. 191–213, 2000.
  45. F. Herrera, M. Lozano, and A. M. Sánchez, “A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study,” International Journal of Intelligent Systems, vol. 18, no. 3, pp. 309–338, 2003.
  46. M. Laguna and R. Martí, “Experimental testing of advanced scatter search designs for global optimization of multimodal functions,” Journal of Global Optimization, vol. 33, no. 2, pp. 235–255, 2005.
  47. M. Lozano, F. Herrera, N. Krasnogor, and D. Molina, “Real-coded memetic algorithms with crossover hill-climbing,” Evolutionary Computation, vol. 12, no. 3, pp. 273–302, 2004.
  48. J. J. Moré, B. S. Garbow, and K. E. Hillstrom, “Testing unconstrained optimization software,” ACM Transactions on Mathematical Software, vol. 7, no. 1, pp. 17–41, 1981.
  49. D. Ortiz-Boyer, C. Hervás-Martínez, and N. García-Pedrajas, “CIXL2: a crossover operator for evolutionary algorithms based on population features,” Journal of Artificial Intelligence Research, vol. 24, pp. 1–48, 2005.
  50. K. V. Price, R. M. Storn, and J. A. Lampinen, Differential Evolution: A Practical Approach to Global Optimization, Springer, New York, NY, USA, 2005.
  51. R. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-based differential evolution,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 64–79, 2008.
  52. D. Whitley, S. Rana, J. Dzubera, and K. E. Mathias, “Evaluating evolutionary algorithms,” Artificial Intelligence, vol. 85, no. 1-2, pp. 245–276, 1996.
  53. X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
  54. M. Gaviano, D. E. Kvasov, D. Lera, and Y. D. Sergeyev, “Software for generation of classes of test functions with known local and global minima for global optimization,” ACM Transactions on Mathematical Software, vol. 29, no. 4, pp. 469–480, 2003.
  55. C. Hamzaçebi, “Improving genetic algorithms' performance by local search for continuous function optimization,” Applied Mathematics and Computation, vol. 196, no. 1, pp. 309–317, 2008.
  56. E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, “GSA: a gravitational search algorithm,” Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009.
  57. R. Storn and K. Price, “Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces,” Tech. Rep. TR-95-012, ICSI, Berkeley, Calif, USA, 1995.
  58. D. Shilane, J. Martikainen, S. Dudoit, and S. J. Ovaska, “A general framework for statistical performance comparison of evolutionary computation algorithms,” Information Sciences, vol. 178, no. 14, pp. 2870–2879, 2008.
  59. F. Wilcoxon, “Individual comparisons by ranking methods,” Biometrics, vol. 1, pp. 80–83, 1945.
  60. S. García, D. Molina, M. Lozano, and F. Herrera, “A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization,” Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
  61. M. Al-Baali, “On the behaviour of a combined extra-updating/self-scaling BFGS method,” Journal of Computational and Applied Mathematics, vol. 134, no. 1-2, pp. 269–281, 2001.
  62. M. J. D. Powell, “How bad are the BFGS and DFP methods when the objective function is quadratic?” Mathematical Programming, vol. 34, no. 1, pp. 34–47, 1986.
  63. E. Hansen and G. W. Walster, Global Optimization Using Interval Analysis, CRC Press, 2004.
  64. L. Lasdon and J. C. Plummer, “Multistart algorithms for seeking feasibility,” Computers & Operations Research, vol. 35, no. 5, pp. 1379–1393, 2008.
  65. F. V. Theos, I. E. Lagaris, and D. G. Papageorgiou, “PANMIN: sequential and parallel global optimization procedures with a variety of options for the local search strategy,” Computer Physics Communications, vol. 159, no. 1, pp. 63–69, 2004.
  66. C. Voglis and I. E. Lagaris, “Towards “ideal multistart”: a stochastic approach for locating the minima of a continuous function inside a bounded domain,” Applied Mathematics and Computation, vol. 213, no. 1, pp. 216–229, 2009.