Abstract

A novel dynamic multistage hybrid swarm intelligence optimization algorithm, abbreviated as DM-PSO-ABC, is introduced. DM-PSO-ABC combines the exploration capability of the dynamic multiswarm particle swarm optimizer (PSO) with the stochastic exploitation of the cooperative artificial bee colony algorithm (CABC) for function optimization. In the proposed hybrid algorithm, the whole process is divided into three stages. In the first stage, a dynamic multiswarm PSO is constructed to maintain population diversity. In the second stage, the parallel, positive-feedback search of the CABC is carried out within each small swarm. In the third stage, the global model of particle swarm optimization, which has a faster convergence speed, is used to enhance global convergence on the whole problem. To verify the effectiveness and efficiency of the proposed hybrid algorithm, benchmark problems of various scales are tested to demonstrate the potential of the proposed multistage hybrid swarm intelligence optimization algorithm. The results show that DM-PSO-ABC achieves better search precision and convergence properties and has a strong ability to escape from local suboptima when compared with several peer algorithms.

1. Introduction

Optimization can be viewed as one of the major quantitative tools in the network of decision making, in which decisions have to be taken to optimize one or more objectives in some prescribed set of circumstances. A typical single-objective bound-constrained optimization problem can be expressed as
\[
\min f(\mathbf{x}), \quad \mathbf{x} = [x_1, x_2, \ldots, x_D], \quad x_d \in [L_d, U_d], \; d = 1, \ldots, D,
\]
where $D$ is the number of variables (the dimension of the search space) and $U_d$ and $L_d$ are the upper and lower bounds of the search space. In view of the practical utility of optimization problems, there is a need for efficient and robust computational algorithms that can numerically solve, on computers, the mathematical models of medium- as well as large-size optimization problems arising in different fields. Evolutionary algorithms have emerged as a revolutionary approach for solving complex search and optimization problems. The success of most heuristic optimization algorithms depends to a large extent on the careful balance of two conflicting goals, exploration (diversification) and exploitation (intensification). While exploration is important to ensure that every part of the solution domain is searched enough to provide a reliable estimate of the global optimum, exploitation is important to concentrate the search effort around the best solutions found so far by searching their neighbourhoods to reach better solutions. Search algorithms achieve these two goals by using local search methods, global search approaches, or an integration of both global and local strategies: such algorithms are commonly known as hybrid methods. Hybrid algorithms are chosen as the topic of the present paper because they are a growing area of intelligent systems research, which aims to combine the desirable properties of different approaches to mitigate their individual weaknesses.

In recent years, hybridization techniques based on particle swarm optimization (PSO) and the artificial bee colony (ABC) algorithm have been actively researched. El-Abd [1] proposed a hybridization approach between ABC and SPSO, achieved by incorporating an ABC component into SPSO which updates the pbest information of the particles in every iteration using the ABC update equation. Penalty-guided support vector machines based on a hybrid of particle swarm optimization and the artificial bee colony algorithm were used to mine financial distress trend data [2]. Shi et al. [3] developed a hybrid swarm intelligence algorithm based on particle swarm optimization (PSO) and artificial bee colony (ABC). Turanoğlu and Özceylan [4] used particle swarm optimization and artificial bee colony to optimize single input-output fuzzy membership functions; the obtained results show that the PSO and ABC methods are capable and effective in finding optimal values of fuzzy membership functions in a reasonable time.

Honey bees are among the most closely studied social insects. Their foraging behaviour, learning, memorizing, and information-sharing characteristics have recently become one of the most interesting research areas in swarm intelligence [5]. According to many published results on local variants of PSO [6], PSO with small neighbourhoods performs better on complex problems. Ballerini et al. pointed out that the interaction ruling animal collective behaviour depends on topological rather than metric distance [7–9] and proposed a new model for self-organized dynamics and its flocking behaviour. In this paper, we propose a new optimization method, called the dynamic multistage hybrid swarm intelligence optimization algorithm (DM-PSO-ABC).

The rest of the paper is organized as follows. Section 2 briefly introduces the original PSO and ABC algorithms and reviews related hybrid techniques of particle swarm optimization (PSO) and the artificial bee colony algorithm (ABC) developed in recent years. Section 3 describes the proposed dynamic multistage DM-PSO-ABC algorithm. Section 4 tests the algorithms on the benchmarks, and the results obtained are presented and discussed. Finally, conclusions are given in Section 5.

2. The Original Algorithm

2.1. Particle Swarm Optimization (PSO)

Particle swarm optimization (PSO) is inspired by the simulation of social behaviour and was originally designed and developed by Kennedy and Eberhart [10]. It is a population-based search algorithm based on the simulation of the social behaviour of birds within a flock. In PSO, individuals are particles that are "flown" through a hyperdimensional search space. The algorithm simulates the flocking behaviour of birds and makes each particle in the swarm move according to its own experience and the best experience in the swarm. Each particle represents a potential solution to the problem and searches around in a multidimensional search space. All particles fly through the D-dimensional parameter space of the problem while learning from the historical information gathered during the search process. The particles have a tendency to fly towards better search regions over the course of the search process. The velocity and position updates of the $d$th dimension of the $i$th particle are presented below:
\[
V_i^d = \omega V_i^d + c_1 \, rand1_i^d \, (pbest_i^d - X_i^d) + c_2 \, rand2_i^d \, (gbest^d - X_i^d), \qquad (2.1)
\]
\[
X_i^d = X_i^d + V_i^d, \qquad (2.2)
\]
where $c_1$ and $c_2$ are the acceleration constants and $rand1_i^d$ and $rand2_i^d$ are two uniformly distributed random numbers in $[0, 1]$. $X_i$ is the position of the $i$th particle, $V_i$ represents the velocity of the $i$th particle, $pbest_i$ is the best previous position yielding the best fitness value for the $i$th particle, $gbest$ is the best position discovered by the whole population, and $\omega$ is the inertia weight used to balance the global and local search abilities.
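To make the update rules concrete, the following minimal Python sketch applies (2.1) and (2.2) to a small swarm. It is an illustration only, not the implementation used in this paper; the inertia weight, acceleration constants, and the sphere test function are assumptions chosen for the example.

import numpy as np

def pso_step(X, V, pbest, pbest_fit, gbest, f, w=0.729, c1=1.49445, c2=1.49445):
    """One global-model PSO iteration: velocity update (2.1) and position update (2.2)."""
    n, d = X.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # (2.1)
    X = X + V                                                    # (2.2)
    fit = np.apply_along_axis(f, 1, X)
    improved = fit < pbest_fit                                   # minimization is assumed
    pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
    gbest = pbest[np.argmin(pbest_fit)].copy()
    return X, V, pbest, pbest_fit, gbest

# usage: minimize the sphere function in 10 dimensions with 30 particles
sphere = lambda x: np.sum(x ** 2)
X = np.random.uniform(-100, 100, (30, 10))
V = np.zeros_like(X)
pbest, pbest_fit = X.copy(), np.apply_along_axis(sphere, 1, X)
gbest = pbest[np.argmin(pbest_fit)].copy()
for _ in range(100):
    X, V, pbest, pbest_fit, gbest = pso_step(X, V, pbest, pbest_fit, gbest, sphere)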

There are two main models of the PSO algorithm, called global model and local model. The two models differ in the way of defining the neighborhood for each particle. In the global model, the neighborhood of a particle consists of the particles in the whole swarm, which share information between each other. On the contrary, in the local model, the neighborhood of a particle is defined by several particles. The two models give different performances on different problems. van den Bergh and Engelbrecht [11] and Poli et al. [12] pointed out that the global model has a faster convergence speed but also has a higher probability of getting stuck in local optima than the local model. On the contrary, the local model is less vulnerable to the attraction of local optima but has a slower convergence speed than the global model. In order to give a standard form for PSO, Bratton and Kennedy proposed a standard version of PSO (SPSO) [13]. In SPSO a local ring population topology is used, and the experimental results have shown that the local model is more reliable than the global model on many test problems. The velocity update of the local PSO is
\[
V_i^d = \omega V_i^d + c_1 \, rand1_i^d \, (pbest_i^d - X_i^d) + c_2 \, rand2_i^d \, (lbest_i^d - X_i^d), \qquad (2.3)
\]
where $lbest_i$ is the best position achieved within the neighbourhood of the $i$th particle.

The population topology has a significant effect on the performance of PSO. It determines the way particles communicate or share information with each other. Population topologies can be divided into static and dynamic topologies. For static topologies, communication structures of circles, wheels, stars, and randomly assigned edges were tested [14], showing that the performance of algorithms is different in different problems depending on the topology used. Then, Kennedy and Mendes [15] have tested a large number of aspects of the social-network topology on five test functions. After that, a fully informed PSO (FIPS) algorithm was introduced by Mendes et al. [16]. In FIPS, a particle uses a stochastic average of pbests from all of its neighbors instead of using its own pbest position and the gbest position in the update equation. A recent study [17] showed that PSO algorithms with a ring topology are able to locate multiple global or local optima if a large enough population size is used.

For dynamic topologies, Suganthan [18] suggested a dynamically adjusted neighborhood model, where the search begins with an lbest model and the neighborhood size is gradually increased until the gbest model is reached. Janson and Middendorf [19] proposed a dynamic hierarchical PSO (HPSO) to define the neighborhood structure, where particles move up or down the hierarchy depending on the quality of their pbest solutions. Liang et al. [20] developed a comprehensive learning PSO (CLPSO) for multimodal problems. In CLPSO, a particle uses different particles' historical best information to update its velocity, and for each dimension, a particle can potentially learn from a different exemplar.

2.2. Artificial Bee Colony Algorithm (ABC)

Recently, by simulating the swarm intelligence of honey bees, an efficient artificial bee colony (ABC) algorithm was proposed [21, 22]. Due to its simplicity and ease of implementation, the ABC algorithm has gained more and more attention and has been used to solve many practical engineering problems. The basic ABC algorithm [21, 22] classifies foraging artificial bees into three groups, namely, employed bees, onlookers, and scouts. An employed bee is responsible for flying to and making collections from the food source which the bee swarm is exploiting. An onlooker waits in the hive and decides whether a food source is acceptable or not by watching the dances performed by the employed bees. A scout randomly searches for new food sources by means of some internal motivation or possible external clue. In the ABC algorithm, each solution to the problem under consideration is called a food source and is represented by a D-dimensional real-valued vector, where the fitness of the solution corresponds to the nectar amount of the associated food source. As with other swarm-based intelligent approaches, the ABC algorithm is an iterative process. The approach begins with a population of randomly generated solutions (food sources); then, the following steps are repeated until a termination criterion is met [23, 24].

In the employed bees phase, artificial employed bees search for new food sources having more nectar within the neighbourhoods of the food sources in their memory. A neighbour food source is generated as defined in (2.4); provided that its nectar is higher than that of the previous one, the bee memorizes the new position and forgets the old one. Its fitness is then evaluated as defined in (2.5). After producing the new food source, its fitness is calculated, and a greedy selection is applied between it and its parent. After that, employed bees share their food source information with onlooker bees waiting in the hive by dancing in the dancing area:
\[
v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}), \qquad (2.4)
\]
where $k \in \{1, 2, \ldots, \mathrm{BN}\}$ and $j \in \{1, 2, \ldots, D\}$ are randomly chosen indexes, and BN is the number of food sources, which is equal to the number of employed bees in each subgroup. Although $k$ is determined randomly, it has to be different from $i$. $\phi_{ij}$ is a random number in $[-1, 1]$. It controls the production of a neighbour food source position around $x_{ij}$, and the modification represents the comparison of the neighbour food positions visually by the bee. Equation (2.4) shows that as the difference between the parameters of $x_{ij}$ and $x_{kj}$ decreases, the perturbation on the position $x_{ij}$ decreases too:
\[
\mathrm{fit}_i = \begin{cases} \dfrac{1}{1 + f_i}, & f_i \ge 0, \\ 1 + |f_i|, & f_i < 0, \end{cases} \qquad (2.5)
\]
where $f_i$ is the objective value of solution $i$.
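As a minimal sketch of the employed bees' phase (assuming a minimization objective; the function and variable names are ours, and the greedy rule follows the standard ABC description rather than any code from this paper):

import numpy as np

def abc_fitness(f_val):
    """Nectar amount (2.5): higher is better, for a minimization objective f."""
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)

def employed_bees_phase(foods, f, trial):
    """One pass of the employed bees: neighbour search (2.4) plus greedy selection."""
    BN, D = foods.shape
    for i in range(BN):
        j = np.random.randint(D)                                 # random dimension
        k = np.random.choice([m for m in range(BN) if m != i])   # random partner, k != i
        phi = np.random.uniform(-1.0, 1.0)
        candidate = foods[i].copy()
        candidate[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])   # (2.4)
        if abc_fitness(f(candidate)) > abc_fitness(f(foods[i])):
            foods[i] = candidate                                 # greedy selection
            trial[i] = 0
        else:
            trial[i] += 1
    return foods, trial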

In the onlooker bees' phase, artificial onlooker bees probabilistically choose their food sources depending on the information provided by the employed bees, as defined in (2.6) and (2.7). For this purpose, a fitness-based selection technique can be used, such as the roulette wheel selection method. After a food source for an onlooker bee is probabilistically chosen, a neighbourhood source is determined, and its fitness value is computed. As in the employed bees' phase, a greedy selection is applied between the two sources:
\[
p_i = \frac{\mathrm{fit}_i}{\sum_{n=1}^{\mathrm{BN}} \mathrm{fit}_n}, \qquad (2.6)
\]
where $\mathrm{fit}_i$ is the fitness value of solution $i$ computed by (2.5) and $p_i$ is the probability of the $i$th food source being selected.
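The onlooker bees' roulette wheel selection can be sketched in the same style (illustrative only; it reuses the abc_fitness helper from the previous sketch and assumes the proportional selection probability in (2.6)):

import numpy as np

def onlooker_bees_phase(foods, f, trial):
    """Onlookers pick food sources with probability proportional to fitness (2.6),
    then perform the same neighbour search and greedy selection as the employed bees."""
    BN, D = foods.shape
    fit = np.array([abc_fitness(f(x)) for x in foods])
    p = fit / fit.sum()                                  # (2.6): selection probabilities
    for _ in range(BN):                                  # one onlooker per food source
        i = np.random.choice(BN, p=p)                    # roulette wheel selection
        j = np.random.randint(D)
        k = np.random.choice([m for m in range(BN) if m != i])
        phi = np.random.uniform(-1.0, 1.0)
        candidate = foods[i].copy()
        candidate[j] = foods[i, j] + phi * (foods[i, j] - foods[k, j])   # (2.4)
        if abc_fitness(f(candidate)) > abc_fitness(f(foods[i])):
            foods[i], trial[i] = candidate, 0
        else:
            trial[i] += 1
    return foods, trial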

In the scout bees' phase, employed bees whose solutions cannot be improved through a predetermined number of trials, called the "limit", become scouts, and their solutions are abandoned. Then, the scouts start to search for new solutions randomly, using (2.8). Hence, those sources which are initially poor or have been made poor by exploitation are abandoned, and this negative feedback behaviour arises to balance the positive feedback:
\[
x_{ij} = L_j + \mathrm{rand}(0, 1)\,(U_j - L_j), \qquad (2.8)
\]
where $L_j$ and $U_j$ are the lower and upper bounds of dimension $j$.
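A corresponding scout replacement sketch, under the same assumptions (minimization and a per-source trial counter compared against the limit), is:

import numpy as np

def scout_bees_phase(foods, trial, lower, upper, limit=100):
    """Abandon any food source whose trial counter exceeds `limit` and
    reinitialize it uniformly at random within the bounds, as in (2.8)."""
    BN, D = foods.shape
    for i in range(BN):
        if trial[i] > limit:
            foods[i] = lower + np.random.rand(D) * (upper - lower)   # (2.8)
            trial[i] = 0
    return foods, trial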

3. A Novel Multistage Hybrid Swarm Intelligence Optimization Algorithm

As mentioned in the previous sections, researchers confirm that the PSO algorithm should be regarded as a powerful technique for handling various kinds of optimization problems, but it is vulnerable to premature convergence and low stability during the evolution process. Many studies have shown that PSO with small neighborhoods performs better on complex problems. The particles enhance their diversity through a randomized regrouping schedule that dynamically changes the neighborhood structures, and we allow maximum information exchange among the particles to further enhance this diversity. The cooperative ABC algorithm has a mechanism of division of labor and cooperation, so that different search strategies can cooperate to achieve global optimization, and it has a strong global optimization ability; however, when it is close to the global optimal solution, the search speed slows down, the population diversity is reduced, and particles are apt to be trapped in local optima. In order to make full use of and balance the exploration and exploitation capabilities of the ABC and PSO solution search equations, we propose a novel dynamic multistage hybrid algorithm, DM-PSO-ABC, based on compensation, by combining the evolution ideas of the PSO and ABC algorithms.

In the proposed hybrid DM-PSO-ABC algorithm, the different strategies in the three phases collaborate to cope with different situations in the search space. Firstly, we use a local version of PSO with a new small-neighborhood topology to maintain the population diversity. Secondly, we adjust the initial allocation of pheromone in the cooperative ABC algorithm based on the series of suboptimal solutions obtained in the first stage, and make use of the parallelism, positive feedback, and high solution accuracy of the cooperative ABC to continue solving the whole problem. In the third stage, we make use of the PSO global model, which has a faster convergence speed, to enhance global convergence. In Pseudocode 1, the main steps of the DM-PSO-ABC algorithm are given.

Pseudocode 1: A dynamic multistage hybrid swarm intelligence optimization algorithm (DM-PSO-ABC)
n: each swarm's population size
m: number of swarms
R: regrouping period
Max_gen: maximum number of generations (stop criterion)
Step 1 Generate initial particles and set up parameters for each particle;
   Initialize the positions X_i of all particles and their fitnesses, the velocities V_i of all
   particles (i = 1, ..., m*n), and the best local position of all particles (pbest_i);
Step 2 Update all particles using the local version PSO with dynamic multigroup
   For gen = 1 to Max_gen
      Update each swarm using (2.2), (2.3) (local version PSO); update the pbests and lbests
   If mod(gen, R) == 0
   Regroup the swarms randomly
   End
Step 3 Local search carried out in each small swarm by the artificial bee colony
   The population of food sources (solutions) is initialized by the current lbests in each
   sub-swarm
   For each component j = 1, ..., D
Employed Bees' Phase
 For each employed bee i
  Replace the jth component of the lbest by the jth component of bee i
  Calculate f(newlbest) = f(lbest_1, ..., lbest_{j-1}, x_{ij}, lbest_{j+1}, ..., lbest_D)
  If f(newlbest) is better than f(lbest)
   Then newlbest replaces lbest
  For employed bee i, produce a new food source position by using (2.4)
  Calculate its fitness value by using (2.5)
  Apply the greedy selection mechanism
  End For
 End For
  Calculate the probability values p_i for the solutions by (2.6) and (2.7) using the
  roulette wheel selection rule;
Onlooker Bees' Phase
 For each onlooker bee
  Choose a food source i depending on p_i
  Replace the jth component of the lbest by the jth component of bee i
  Calculate f(newlbest) = f(lbest_1, ..., lbest_{j-1}, x_{ij}, lbest_{j+1}, ..., lbest_D)
  If f(newlbest) is better than f(lbest)
   Then newlbest replaces lbest
  For onlooker bee, produce a new food source position by using (2.4)
  Calculate its fitness value
  Apply the greedy selection mechanism
  End For
 End For
Scout Bees' Phase
 If an employed bee becomes a scout (its trial counter exceeds the limit)
  Then replace it with a new random food source position by using (2.8)
 Memorize the best solution achieved so far
 Compare the best solution with lbest and memorize the better one
Step 4 Update all particles using the global version PSO
 For gen = 1 to Max_gen
  Update all particles using the global version PSO; update the pbests and gbest
 End
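For readers who prefer code, the overall three-stage flow of Pseudocode 1 can be outlined roughly as follows. This is a simplified sketch under our own assumptions about parameter names and stage budgets; it reuses the pso_step, employed_bees_phase, onlooker_bees_phase, and scout_bees_phase helpers sketched in Section 2 and is not the authors' implementation.

import numpy as np

def dm_pso_abc(f, lower, upper, n_swarms=10, swarm_size=10, R=5,
               stage1_gens=1000, abc_cycles=50, stage3_gens=1000):
    """Three-stage skeleton: (1) dynamic multiswarm local PSO with random
    regrouping, (2) cooperative ABC local search seeded by the lbests,
    (3) global-model PSO for rapid final convergence."""
    D = len(lower)
    N = n_swarms * swarm_size
    X = np.random.uniform(lower, upper, (N, D))
    V = np.zeros_like(X)
    pbest, pbest_fit = X.copy(), np.apply_along_axis(f, 1, X)

    # Stage 1: local PSO in randomly regrouped subswarms
    groups = np.random.permutation(N).reshape(n_swarms, swarm_size)
    for gen in range(stage1_gens):
        for g in groups:
            lbest = pbest[g[np.argmin(pbest_fit[g])]]
            X[g], V[g], pbest[g], pbest_fit[g], _ = pso_step(
                X[g], V[g], pbest[g], pbest_fit[g], lbest, f)
        if (gen + 1) % R == 0:                       # regrouping period
            groups = np.random.permutation(N).reshape(n_swarms, swarm_size)

    # Stage 2: cooperative ABC local search seeded with the current lbests
    foods = np.array([pbest[g[np.argmin(pbest_fit[g])]] for g in groups])
    trial = np.zeros(len(foods), dtype=int)
    for _ in range(abc_cycles):
        foods, trial = employed_bees_phase(foods, f, trial)
        foods, trial = onlooker_bees_phase(foods, f, trial)
        foods, trial = scout_bees_phase(foods, trial, lower, upper)
    # feed refined solutions back into the pbests
    for s in foods:
        j = np.argmax(pbest_fit)   # replace the worst pbest (a simplification; the paper replaces the nearest pbest)
        if f(s) < pbest_fit[j]:
            pbest[j], pbest_fit[j] = s, f(s)

    # Stage 3: global-model PSO on the whole population
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for _ in range(stage3_gens):
        X, V, pbest, pbest_fit, gbest = pso_step(X, V, pbest, pbest_fit, gbest, f)
    return gbest, f(gbest)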

3.1. Rough Searching by the Multiswarm PSO with a Randomized Regrouping Schedule

In the first stage of the DM-PSO-ABC, small neighbourhoods are used. The population is divided into small-sized swarms, and each subswarm uses its own members to search for better regions in the search space. Because the small-sized swarms use only their own best historical information in the searching phase, they can easily converge to a local optimum due to PSO's speedy convergence behaviour. Hence, a randomized regrouping schedule is introduced so that the particles enhance their diversity through dynamically changing neighbourhood structures. Every $R$ generations, the population is regrouped randomly and starts searching with a new configuration of small subswarms; $R$ is called the regrouping period. In this way, the information obtained by each subswarm is exchanged among the whole population, and the diversity of the population is also increased. In DM-PSO-ABC, in order to constrain a particle within the search range, the fitness value of a particle is calculated, and the corresponding pbest is updated, only if the particle is within the search range. Since all pbests and lbests are within the search bounds, all particles will eventually return within the search bounds.
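The randomized regrouping and the bound-handling rule described above can be sketched as follows (illustrative only; the array shapes and function names are our own):

import numpy as np

def regroup(num_particles, n_swarms):
    """Randomly partition the particle indices into n_swarms equal-sized subswarms."""
    perm = np.random.permutation(num_particles)
    return perm.reshape(n_swarms, -1)

def update_pbest_in_bounds(X, fit_fn, pbest, pbest_fit, lower, upper):
    """Only evaluate a particle and update its pbest if it lies inside the search range,
    so that all pbests (and hence lbests) stay within the bounds."""
    for i, x in enumerate(X):
        if np.all(x >= lower) and np.all(x <= upper):
            f = fit_fn(x)
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = x.copy(), f
    return pbest, pbest_fit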

In Figure 1, we use three swarms with ten particles in each swarm to illustrate the regrouping schedule. First, the thirty particles are divided into three swarms randomly. Then the three swarms use their own particles to search for better solutions; in this period, they may converge to the vicinity of a local optimum. Then the whole population is regrouped into new swarms, and the new swarms begin their search. This process continues until a stop criterion is satisfied. With the randomized regrouping schedule, particles from different swarms are grouped in a new configuration so that each small swarm's search space is enlarged, and better solutions can potentially be found by the new small swarms.

In order to achieve better results on complex problems, the dynamic multigroup particle swarm optimizer is designed to give the particles a large diversity; consequently, the convergence speed is slow. Even after the globally optimal region is found, the particles will not converge to the global optimum very fast, in order to avoid premature convergence. How to maintain the diversity and obtain good results at the same time is a problem. Hence, in order to alleviate this weakness and give a better search in the promising local areas, an ABC local search is added to the dynamic multiswarm particle swarm optimizer. Every generation, the pbests of ten randomly chosen particles are used as the starting points of the cooperative ABC local search. We calculate the fitness values of all the pbests for each refined solution and replace the nearest pbest with the refined solution if the refined solution is better.
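The rule "replace the nearest pbest with the refined solution if it is better" can be sketched as below; note that Euclidean distance is assumed for "nearest", which is our assumption rather than something stated in the paper:

import numpy as np

def inject_refined_solutions(pbest, pbest_fit, refined, fit_fn):
    """For each ABC-refined solution, find the pbest nearest to it (Euclidean
    distance assumed) and replace that pbest if the refined solution is better."""
    for s in refined:
        dists = np.linalg.norm(pbest - s, axis=1)
        j = np.argmin(dists)                 # nearest pbest
        f = fit_fn(s)
        if f < pbest_fit[j]:                 # minimization: smaller is better
            pbest[j], pbest_fit[j] = s.copy(), f
    return pbest, pbest_fit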

3.2. Detailed Searching in Each Small Swarm by the Cooperative Artificial Bee Colony

In the second stage, we present an extended ABC algorithm, namely, the cooperative artificial bee colony, which significantly improves the original ABC in solving complex optimization problems. In the ABC algorithm, the goal of each individual bee is to produce the best solution. From expression (2.4), we can see that the new food source is produced from a random neighbour of the current food position and a random single dimension of the D-dimensional vector. This brings about a problem: an individual may have discovered a good value for one dimension, but since the fitness of the individual is computed over the full D-dimensional vector, it is very probable that the individual is not the best solution in the end, and the good dimension which the individual has found will be abandoned. To produce a good solution vector, all the populations must cooperate, and the information from all the populations needs to be used. Therefore, we apply cooperative search to this problem in the ABC algorithm and propose the cooperative ABC algorithm. We set up a super best solution vector, namely, lbest, each of whose D components is the best found in the corresponding subswarm. For lbest = (lb_1, lb_2, ..., lb_D), lb_j corresponds to the jth component of the lbest. In the initialization phase, we evaluate the fitness of the initial food source positions and set the position which has the best fitness as the initial lbest. In the employed bees' and onlooker bees' phases, we use each component of each individual to replace the corresponding component of the lbest in order to find the best value of that component. The lbest does not influence how the employed and onlooker bees find new food sources; it is a virtual bee that just saves the best value of each component. After all phases, the best solution achieved by all individuals and the lbest are compared.
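A minimal sketch of this component-wise cooperation is given below (our own illustration; the nested component loop mirrors Pseudocode 1, and a minimization objective is assumed):

import numpy as np

def cooperative_lbest_update(lbest, lbest_fit, foods, fit_fn):
    """For every dimension j and every food source, try substituting the food
    source's jth component into lbest; keep the substitution if it improves lbest."""
    D = len(lbest)
    for j in range(D):
        for x in foods:
            candidate = lbest.copy()
            candidate[j] = x[j]              # replace only the jth component
            f = fit_fn(candidate)
            if f < lbest_fit:                # minimization
                lbest, lbest_fit = candidate, f
    return lbest, lbest_fit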

In this stage, the population of food sources (solutions) is initialized by each small swarm's lbest generated in the first stage, which provides the nectar information of the food sources (solutions); the employed bee then performs a neighbourhood search around a given food source. Therefore, the employed bees carry out the exploitation search of the algorithm.

An illustration of the cooperative ABC local search phase for a swarm of 10 particles is given in Figure 2. Five pbests (pbest1, pbest3, pbest5, pbest7, and pbest9) are randomly chosen as the start points for the local search, and three local optima are obtained after the local search. The nearest pbests among pbest2, pbest4, pbest6, pbest8, and pbest10 are replaced by these three refined solutions, respectively, provided that the refined solutions are better.

3.3. Rapid Convergence by the Global Version of PSO

The success of PSO in solving a specific problem crucially depends on the choice of suitable strategies. The particles play different roles (exploration and exploitation) during the two preceding search stages; in the third stage, we make use of the PSO global model, with its high convergence speed and good generality, to deal with the global optimization of the whole problem.

3.4. The Framework of DM-PSO-ABC

In order to achieve better results on multimodal problems, Liang et al. [20] designed an improved algorithm in which the particles have a larger diversity, obtained by sacrificing the convergence speed of the global PSO. Even after the globally optimal region is found, the particles will not converge rapidly to the globally optimal solution. Hence, maintaining the diversity and obtaining good solutions rapidly at the same time is a challenge, which is tackled by integrating the neighbourhood search phase of the artificial bee colony into the DMS-PSO to obtain DM-PSO-ABC. The population of food sources (solutions) is initialized by the current lbest in each subswarm; the position of a food source represents a possible solution to the problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. Thereafter, the nectar of the food sources is exploited by the employed bees and onlooker bees, and this continual exploitation will ultimately cause them to become exhausted. Apart from the ABC phase in each subswarm, the process of [16] is also retained. In this way, the strong exploration ability of the basic PSO and the exploitation ability of the ABC can be fully exploited. The flowchart of the proposed DM-PSO-ABC is presented in Figure 3.

4. Experimental Results and Discussions

4.1. Test Function

To investigate how DM-PSO-ABC performs in different environments, we chose 16 diverse benchmark problems [25–27]: 2 unimodal problems, 6 unrotated multimodal problems, 6 rotated multimodal problems, and 2 composition problems. All problems are tested in 10 and 30 dimensions. The properties and formulas of these functions are briefly described in Table 1.

Note that the rotated functions are particularly challenging for many existing optimization algorithms. In the case of rotation, when one dimension in the original vector is changed, all dimensions of the rotated vector will be affected. To rotate a function, an orthogonal matrix $\mathbf{M}$ is first generated. The original variable $\mathbf{x}$ is left-multiplied by the orthogonal matrix to get the new rotated variable $\mathbf{y}$, which is then used to calculate the fitness value:
\[
\mathbf{y} = \mathbf{M}\,\mathbf{x}, \qquad f'(\mathbf{x}) = f(\mathbf{y}).
\]

When one dimension in vector $\mathbf{x}$ is changed, all dimensions in vector $\mathbf{y}$ will be affected. Hence, the rotated function cannot be solved by one-dimensional searches alone. In this paper, we used Salomon's method to generate the orthogonal matrix.
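As an illustration of the rotation procedure (this is a generic construction, not Salomon's specific method, which we do not reproduce here), an orthogonal matrix can, for example, be obtained from the QR decomposition of a random Gaussian matrix:

import numpy as np

def random_orthogonal_matrix(D, rng=np.random.default_rng()):
    """Return a D x D orthogonal matrix via QR decomposition of a Gaussian matrix
    (a generic construction used here for illustration only)."""
    Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
    return Q

def rotated(f, M):
    """Wrap a benchmark f so that it is evaluated on the rotated variable y = M x."""
    return lambda x: f(M @ x)

# example: a rotated Rastrigin function in 10 dimensions
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
M = random_orthogonal_matrix(10)
f_rot = rotated(rastrigin, M)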

Composition functions are constructed using some basic benchmark functions to obtain more challenging problems with a randomly located global optimum and several randomly located deep local optima. The Gaussian function is used to combine the simple benchmark functions and blur the function’s structures. The composition functions are asymmetrical multimodal problems, with different properties in different areas. The details of how to construct this class of functions and six composition functions are presented in [27]. The composition functions are characterized by nonseparable search variables, rotated coordinates, and strong multimodality due to a huge number of local optima. They blend together the characteristics of different standard benchmarks.

4.2. Parameter Settings for the Involved Algorithms

Experiments were conducted to compare five algorithms, including the proposed DM-PSO-ABC algorithm, on the 16 test problems with 10 dimensions and 30 dimensions. The algorithms and parameter settings are listed below: (i) DMS-PSO [25]; (ii) ABC [21]; (iii) fully informed particle swarm (FIPS) [16]; (iv) CLPSO [20]; (v) DM-PSO-ABC. For DM-PSO-ABC, the maximum velocity, which restricts the particles' velocities, is set to 20% of the search range. To solve these problems, the number of subswarms is set to 10, which is the same setting as in the DMS-PSO [25]. To tune the remaining parameters, nine selected 10-dimensional test functions are used to investigate their impact. Experiments were conducted on these nine 10-dimensional test functions, and the mean values of 30 runs are presented. The population size is set to 100, and the maximum number of iterations is set to 2000.

(1) Subswarm Size
For each subswarm, the results of investigation on the selected test problems are shown in Table 2. In this table, the mean values of nine problems with different parameter settings are given. Based on the comparison of the results, the best setting is 10 particles for each subswarm. This is also the setting for the ABC population size. Hence, in DM-PSO-ABC the population size is 100 as there are 10 subswarms.

(2) Regrouping Iterations
For the regrouping period R, the value should not be very small, because each subswarm needs enough iterations to search; nor should it be too large, because function evaluations will be wasted once a subswarm can no longer improve. Table 3 presents the results of tuning R. Based on the results, the best value for R is 5.

4.3. Experimental Results and Discussions
4.3.1. Comparison Regarding Mean and Variance Values

For each function, the DM-PSO-ABC, the DMS-PSO, the FIPS, the CLPSO, and the ABC are run 30 times. The maximum number of function evaluations, Max_FEs, is set to 100,000 for 10D and 200,000 for 30D. The computer system is Windows XP (SP1) with a Pentium 4 3.00 GHz CPU and 4 GB RAM, running MATLAB 7.1. For each function, we present the mean (and the standard deviation) of the 30 runs in Tables 4 and 5.

Table 4 presents the means and variances of the 30 runs of the five algorithms on the sixteen 10D test functions. The best results among the five algorithms are shown in bold. From the results, we observe that for the Group A unimodal problems, since DM-PSO-ABC has a local search by ABC (exploitation), it converged faster than the other algorithms. We also observe that DM-PSO-ABC performs well on the multimodal groups. One of these functions provides a good example, as it traps all the other algorithms in local optima, while DM-PSO-ABC successfully avoids falling into the deep local optimum which is far from the global optimum. DM-PSO-ABC achieved the same best result as the CLPSO and the DMS-PSO on functions 5, 6, 7, and 8. DM-PSO-ABC performs much better on rotated multimodal problems than the others. On the two composition functions with randomly distributed local and global optima, DM-PSO-ABC performs the best.

From the results in Table 5, all 30D functions become more difficult than their 10D counterparts, and the results are not as good as in the 10D cases, although we increased the maximum number of iterations from 2000 to 5000. The results on the composition functions are not affected much. DM-PSO-ABC surpasses all other algorithms on functions 1, 2, 3, 5, 6, 7, 10, 12, 13, 15, and 16 and especially significantly improves the results on functions 5, 6, and 7. This implies that DM-PSO-ABC is more effective at solving such problems because of its dynamic multiswarm structure and the local selection process of ABC, which allows learning from different exemplars. Due to this, DM-PSO-ABC explores a larger search space than the DMS-PSO and ABC algorithms. The larger search space is not achieved randomly; instead, it is based on the historical search experience. Because of this, DM-PSO-ABC performs comparably to or better than many algorithms on most of the problems examined in this paper.

Comparing the results and the convergence graphs in Figure 4, among these five algorithms, DMS-PSO converges fast, but DM-PSO-ABC's local search ability is better than that of DMS-PSO. The ABC's performance is seriously affected by rotation, especially at the end of the evolution, when the population's diversity is rapidly reduced. CLPSO does not perform the best on unimodal and simple multimodal problems. FIPS, with a U-ring topology (a local version), presents good performance on some unrotated multimodal problems and converges faster than DM-PSO-ABC. However, although DM-PSO-ABC's performance is also affected by the rotation, it still performs the best on four rotated problems. It can be observed that all PSO variants and ABC failed on the rotated Schwefel function, as it becomes much harder to solve after rotation.

4.3.2. Comparison Regarding the t-Test Results

By analyzing the results on the 10D and 30D problems, one can conclude that DM-PSO-ABC benefits from the DMS-PSO, the ABC, and the global PSO algorithms by integrating the faster convergence contributed by the ABC-based local search with the stronger exploration ability of the DMS-PSO to tackle the diverse problems in the 16 functions. Therefore, it performs significantly better than its constituent approaches. Furthermore, most results are markedly improved by the proposed DM-PSO-ABC. DM-PSO-ABC performs robustly with respect to scaling of the dimensions, rotation, and composition of difficult multimodal test problems.

Because a large amount of experimental data was produced in this comparative test, which makes it difficult to analyze the performance of these algorithms, we adopt the two-tailed t-test to analyze the experimental data; hence, we can objectively evaluate the difference between the DM-PSO-ABC algorithm and the other algorithms. Following [28], the test is conducted at the 5% significance level in order to judge whether the results obtained with the best-performing algorithm differ from the final results of the rest of the competitors in a statistically significant way. The significance level is set to 0.05 and, because each function is run 30 times, the number of degrees of freedom is determined accordingly and the critical value is read from the t-distribution table. The t statistic is the mean difference of the inspected data divided by its standard error; the resulting values of t are given in Table 6. The mark "+" indicates that DM-PSO-ABC performs statistically better than the corresponding algorithm; conversely, the mark "−" indicates that the corresponding algorithm is better than DM-PSO-ABC. The last line is the number of benchmark functions on which the optimization results show significant differences.
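For reference, such a comparison can be carried out with a standard two-sample t-test, for example as in the sketch below (the run data are hypothetical, and this is only an illustration of the test, not the exact procedure of [28]):

import numpy as np
from scipy import stats

# hypothetical final error values of 30 independent runs for two algorithms
dm_pso_abc_runs = np.random.default_rng(0).normal(1e-3, 2e-4, 30)
dms_pso_runs    = np.random.default_rng(1).normal(5e-3, 1e-3, 30)

# two-tailed two-sample t-test at the 5% significance level
t_stat, p_value = stats.ttest_ind(dm_pso_abc_runs, dms_pso_runs)
significant = p_value < 0.05
mark = "+" if significant and dm_pso_abc_runs.mean() < dms_pso_runs.mean() else \
       "-" if significant else "="
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, mark: {mark}")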

As revealed by Table 6, DM-PSO-ABC performs statistically better than the DMS-PSO, the ABC, the CLPSO, and the FIPS on a number of benchmark functions, which are marked in the table; in particular, there are 7 functions on which DM-PSO-ABC performs statistically better than the FIPS. We also note that, on most benchmark instances, the performance of DM-PSO-ABC is statistically superior to that of the four other state-of-the-art algorithms. This difference in performance must be attributed to the multistage search mechanisms, a fact that substantiates the usefulness of the modifications incorporated in DM-PSO-ABC.

5. Conclusion

This paper proposes a hybridization of the dynamic multigroup particle swarm optimizer with the artificial bee colony (DM-PSO-ABC). Using the dynamic multiswarm particle swarm optimizer with a randomized regrouping schedule and the local search of the ABC algorithm, we periodically generate the nectar information of the food sources (solutions) based on the current pbests in each subswarm after the particles' positions have been updated. The nearest pbest is replaced by the new nectar if the new nectar vector has better fitness according to the ABC algorithm. DM-PSO-ABC attempts to take the merits of the DMS-PSO and the ABC in order to avoid all particles getting trapped in inferior local optimal regions. DM-PSO-ABC enables the particles to have more diverse exemplars to learn from, as we frequently regroup the subswarms. From the analysis of the experimental results, we observe that the proposed DM-PSO-ABC makes use of the information in past solutions more effectively to generate better-quality solutions more frequently when compared to the other peer algorithms. The novel configuration of DM-PSO-ABC does not introduce additional complex operations beyond the original PSO and ABC. In fact, DM-PSO-ABC eliminates parameters of the original PSO which normally need to be adjusted according to the properties of the test problems. In addition, DM-PSO-ABC is simple and easy to implement.

Acknowledgments

The authors would like to thank all the reviewers for their constructive comments. The authors thank Mr. Suganthan for his thoughtful help and constructive suggestions. This research was supported by the National Natural Science Foundation of China (no. 70971020), the Open Project Program of the Artificial Intelligence Key Laboratory of Sichuan Province (Sichuan University of Science and Engineering), China (no. 2012RYJ03), and the Fund Project of Hunan Province, China (no. 11C1096).