Abstract

Particle Swarm Optimization (PSO) is an effective tool for solving optimization problems. However, PSO usually suffers from premature convergence due to the quick loss of swarm diversity. In this paper, we first analyze the motion behavior of the swarm based on the probability characteristics of the learning parameters. Then a PSO with double learning patterns (PSO-DLP) is developed, which employs a master swarm and a slave swarm with different learning patterns to achieve a trade-off between convergence speed and swarm diversity. The particles in the master swarm are encouraged to explore in order to maintain the swarm diversity, while the particles in the slave swarm learn from the global best particle in order to refine a promising solution. According to the evolutionary states of the two swarms, an interaction mechanism is enabled, which helps the slave swarm jump out of local optima and improves the convergence precision of the master swarm. The proposed PSO-DLP is evaluated on 20 benchmark functions, including rotated multimodal and complex shifted problems. The simulation results and statistical analysis show that PSO-DLP obtains a promising performance and outperforms eight PSO variants.

1. Introduction

Particle Swarm Optimization (PSO) [1, 2], first proposed by Kennedy and Eberhart in 1995, was inspired by the simulation of simplified social behaviors such as fish schooling and bird flocking. Similar to genetic algorithms, it is a population-based algorithm, but it has no evolutionary operators such as crossover, mutation, or selection. PSO finds the global best solution by adjusting the trajectory of each particle not only towards its personal best particle pbest but also towards the historical global best particle gbest [3]. Recently, PSO has been successfully applied to optimization problems in many fields [4–7].

In the basic PSO [1], each particle in the swarm learns from pbest and gbest. During the evolutionary process, gbest is the only information shared by the whole swarm, which finally leads all particles to converge to the same destination and causes the diversity to be lost quickly. If gbest is a local optimum far from the global one, the swarm is easily trapped in that local optimum. The learning mechanism of the basic PSO yields a fast convergence rate, but it easily leads to premature convergence when solving multimodal optimization problems. To overcome this problem, researchers have proposed many improvement strategies.

An adaptive strategy for the learning parameters [3, 8–18] is an effective way to improve the PSO performance. Shi and Eberhart [8] proposed a linearly decreasing inertia weight (LDIW) to balance the local search and global search. Ratnaweera et al. [3] proposed time-varying acceleration coefficients (TVAC), which are beneficial for enhancing the exploration ability of particles in the early evolutionary phase and improving their local search ability in the late phase. In [3], two variants of PSO-TVAC were developed, namely, PSO-TVAC with mutation (MPSO-TVAC) and the self-organizing hierarchical PSO-TVAC (HPSO). Zhan et al. [9] proposed an adaptive PSO in which the learning parameters are adaptively adjusted with the change of the evolutionary states of the swarm. Kundu et al. [10] proposed nonlinearly time-varying acceleration coefficients and an aging guideline to avoid premature convergence. They also suggested a mean learning strategy to enhance the exploitation search.

To increase the swarm diversity, auxiliary techniques have been introduced into PSO's framework, such as genetic operators [3, 12, 13, 19], differential evolution [20], and the artificial bee colony (ABC) [21, 22]. Mahmoodabadi et al. [22] combined a multicrossover operator and the bee colony mechanism to improve the exploration capability of PSO. In [9], an elitist learning strategy, similar to a mutation operation, was developed to help the gbest particle jump out of local optima.

The topological structure of the swarm has a significant effect on the performance of PSO [23–26]. Kennedy [23] pointed out that a small neighborhood is fit for complex problems, while a large neighborhood works well for simple problems. Parsopoulos and Vrahatis [24] integrated the benefits of the global PSO and the local PSO and proposed a unified PSO (UPSO). Mendes et al. [25] proposed a fully informed PSO (FIPSO) in which the velocity update depends on the neighborhood of each particle instead of gbest and pbest. Bratton and Kennedy [26] proposed a standard version of PSO (SPSO) in which a local ring topology is employed. Experimental results indicated that the local model is more effective than the global model on many test problems.

The design of learning strategies improves the performance of PSO on complex multimodal problems [27–31]. In the basic PSO, each particle learns from gbest; hence, the swarm diversity is easily lost in the initial evolutionary process. Zhou et al. [27] developed a random position PSO (RPPSO): if a randomly generated number is smaller than the acceptance probability, a random position is used to guide the particle. Liang et al. [28] proposed a comprehensive learning PSO (CLPSO) in which each particle can select its own pbest or another particle's pbest as the learning exemplar according to a given probability. Li et al. [29] developed a self-learning PSO which contains four learning strategies: exploitation, jumping out, exploration, and convergence. Huang et al. [30] proposed an example-based learning PSO (ELPSO) that uses multiple global best positions as elite examples to retain the swarm diversity. Chen et al. [31] proposed a PSO with an aging leader and challengers (ALC-PSO), in which an aging mechanism promotes a suitable leader to lead the evolution of the swarm.

Multiswarm PSO (MS-PSO) [32–36] was developed to maintain the balance between exploration and exploitation. In a homogeneous MS-PSO, each swarm adopts a similar learning strategy; in a heterogeneous MS-PSO, by contrast, each swarm uses a different learning strategy to carry out a different search task. Niu et al. [32] presented a multiswarm cooperative optimizer (MCPSO) whose population consists of a master swarm and several slave swarms. Each slave swarm searches for better solutions independently, while the master swarm collects the best particles from the slaves to refine the global optimum. Sun and Li [33] presented a two-swarm cooperative PSO (TCPSO) in which one swarm concentrates around the local optimum to accelerate the convergence, while the particles of the other are dispersed over the search interval to keep the diversity.

Local topological structures and dynamic exemplar strategies can preserve the swarm diversity and efficiently prevent premature convergence, but their convergence rate is slow. The heterogeneous multiswarm method is powerful in balancing the local search and the global search by taking advantage of the different learning strategies of the subswarms. In this method, it is important to design the learning strategies, which directly influence the performance of the algorithm. In order to develop efficient learning strategies, this paper analyzes the motion behavior of the swarm based on the probability characteristics of the learning parameters and points out that these probability characteristics influence the search space of the particles. We then propose a PSO with double learning patterns (PSO-DLP) to improve both the convergence rate and the accuracy of PSO. PSO-DLP adopts a master swarm and a slave swarm to obtain a good balance between exploitation and exploration. The master swarm is used for the exploration search, while the slave swarm carries out the exploitation search and accelerates the convergence. The two swarms fulfill their tasks by adjusting the probability characteristics of the learning parameters. An interaction mechanism between the two swarms is developed, which can help the slave swarm jump out of premature convergence and improve the convergence precision of the master swarm. Experimental studies on 20 well-known benchmark functions show that PSO-DLP obtains a promising performance in terms of both accuracy and convergence speed.

The rest of this paper is organized as follows. Section 2 describes the basic PSO. Section 3 presents the behavior analysis of the basic PSO. Section 4 presents the methodologies of PSO-DLP in detail. Section 5 provides the experimental settings and the results. Finally, Section 6 concludes this work.

2. Basic PSO

PSO is a population-based algorithm, which consists of a group of particles. Each particle $i$ is represented by two vectors, namely, a position vector $X_i = (x_{i1}, \ldots, x_{iD})$ and a velocity vector $V_i = (v_{i1}, \ldots, v_{iD})$. The position of each particle in the search space is treated as a potential solution. Each particle updates its velocity and position with the following equations:
$$v_{id}^{t+1} = w\, v_{id}^{t} + c_1 r_1 \left(p_{id}^{t} - x_{id}^{t}\right) + c_2 r_2 \left(g_{d}^{t} - x_{id}^{t}\right), \tag{1}$$
$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \tag{2}$$
where $x_{id}^{t}$ and $v_{id}^{t}$ represent the $d$th dimension of the position and the velocity of particle $i$ at the $t$th iteration, $p_{i}^{t}$ (pbest) is the personal best experience of the $i$th particle, and $g^{t}$ (gbest) is the group best experience found by the whole swarm. $w$ is an inertia weight; $c_1$ and $c_2$ are acceleration coefficients reflecting the weighting of the stochastic acceleration terms that pull each particle toward pbest and gbest, respectively. Random factors $r_1$ and $r_2$ are two independent random numbers in the range of $[0, 1]$.

The first term of (1) (i.e., $w v_{id}^{t}$) is the previous velocity, which provides the necessary momentum for particles to roam around the search space. The second term (i.e., $c_1 r_1 (p_{id}^{t} - x_{id}^{t})$), known as the "cognitive" component, represents the personal thinking of each particle and encourages the particles to move towards pbest. The third term (i.e., $c_2 r_2 (g_{d}^{t} - x_{id}^{t})$), regarded as the "social" component, expresses the collaborative effect of the particles in finding the global optimal solution and always pulls the particles toward gbest.
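To make the update concrete, the following Python sketch implements equations (1) and (2) for a whole swarm at once. It is a minimal illustration; the values w = 0.7298 and c1 = c2 = 1.49618 are common defaults from the PSO literature, not settings taken from this paper.

import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618):
    # x, v, pbest: arrays of shape (N, D); gbest: array of shape (D,).
    r1 = rng.random(x.shape)        # random factors in [0, 1]
    r2 = rng.random(x.shape)
    v = (w * v                      # inertia term: momentum to roam the space
         + c1 * r1 * (pbest - x)    # cognitive component: pull toward pbest
         + c2 * r2 * (gbest - x))   # social component: pull toward gbest
    return x + v, v                 # equation (2): position update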

3. Behavior Analysis of PSO

In the basic PSO, each particle is guided toward the optimum solution by the cognitive component and the social component. Therefore, proper control of these two components is very important for finding the optimum solution accurately and efficiently [3]. Researchers have proposed various strategies, such as linearly varying acceleration coefficients [3], nonlinearly varying acceleration coefficients [9–11], and acceleration coefficients adapted to the evolutionary state [16, 17]. In addition, Krohling [14] presented a Gaussian PSO in which the products of random factors and acceleration coefficients are replaced by positive random numbers generated according to the Gaussian distribution. Similarly, Richer and Blackwell [15] introduced the Lévy distribution to replace the uniform distribution of the random factors. These works show that the settings of the acceleration coefficients and the probability distribution of the random factors can affect the performance of PSO.

In order to facilitate the analysis, the velocity updating equation needs to be transformed. Substituting (1) into (2), we obtain
$$x_{id}^{t+1} = x_{id}^{t} + w v_{id}^{t} + c_1 r_1 \left(p_{id}^{t} - x_{id}^{t}\right) + c_2 r_2 \left(g_{d}^{t} - x_{id}^{t}\right). \tag{3}$$
According to (3), we can also write
$$x_{id}^{t+1} = x_{id}^{t} + \left(c_1 r_1 + c_2 r_2\right)\left(\frac{c_1 r_1 p_{id}^{t} + c_2 r_2 g_{d}^{t}}{c_1 r_1 + c_2 r_2} - x_{id}^{t}\right) + w v_{id}^{t}. \tag{4}$$
Let $q_{id}^{t} = \left(c_1 r_1 p_{id}^{t} + c_2 r_2 g_{d}^{t}\right)/\left(c_1 r_1 + c_2 r_2\right)$. Then (4) can be simplified as follows:
$$x_{id}^{t+1} = x_{id}^{t} + \left(c_1 r_1 + c_2 r_2\right)\left(q_{id}^{t} - x_{id}^{t}\right) + w v_{id}^{t}. \tag{5}$$
Let $\alpha = c_1 r_1/\left(c_1 r_1 + c_2 r_2\right)$ and $\beta = \left(c_1 r_1 + c_2 r_2\right)/\left(c_1 + c_2\right)$; then from (5), we get
$$x_{id}^{t+1} = x_{id}^{t} + \left(c_1 + c_2\right)\beta\left(q_{id}^{t} - x_{id}^{t}\right) + w v_{id}^{t}, \tag{6}$$
where $\alpha$ and $\beta$ are functions with respect to the random factors $r_1$, $r_2$ and the undetermined parameters $c_1$, $c_2$. Hence $\alpha$ and $\beta$ are correlative random variables in $(0, 1)$. From (6), the movement of the particles from the $t$th iteration to the $(t+1)$th iteration can be divided into two parts: the particles first enter a search space, defined by the second term of (6), and then make an inertia motion decided by the third term. Note that $q_{id}^{t} = \alpha p_{id}^{t} + (1-\alpha) g_{d}^{t}$, where $q_{id}^{t}$ is a point on the line segment connecting $p_{id}^{t}$ and $g_{d}^{t}$. Given a fixed $\alpha$, the term $(c_1 + c_2)\beta\left(q_{id}^{t} - x_{id}^{t}\right)$ sweeps a line segment starting at $x_{id}^{t}$ and directed toward $q_{id}^{t}$ as $\beta$ varies, namely, the one-step search space. The acceleration coefficients and the variable $\beta$ influence the size of the one-step search space, and $\beta$ also decides the distribution of the location of particles in this space. The variable $\alpha$ is the weighting coefficient, which reflects the relative exploitation of pbest and gbest: when $\alpha = 1$, the particle learns only from pbest; when $\alpha = 0$, the particle learns only from gbest. We call $\alpha$ the learning factor and $\beta$ the distribution factor.
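The equivalence of (3) and (6) is easy to verify numerically. The following sketch draws one pair of random factors and checks that the classical update and the $(\alpha, \beta)$ reformulation produce the same position; the numerical inputs are arbitrary toy values.

import numpy as np

rng = np.random.default_rng(1)
c1 = c2 = 2.0
w, v, x, pb, gb = 0.9, 0.3, 1.0, 2.5, -1.0       # arbitrary scalar toy values

r1, r2 = rng.random(), rng.random()
# Equation (3): equations (1) and (2) folded together.
x_next = x + w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)

# Equation (6): learning factor alpha, distribution factor beta.
alpha = c1 * r1 / (c1 * r1 + c2 * r2)
beta = (c1 * r1 + c2 * r2) / (c1 + c2)
q = alpha * pb + (1 - alpha) * gb                # point on the pbest-gbest segment
x_next_reform = x + (c1 + c2) * beta * (q - x) + w * v

assert np.isclose(x_next, x_next_reform)         # the two forms coincide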

In order to analyze the effect of the two factors on the one-step search space, we need to calculate the probabilistic characteristics of the factors. Considering the general situation $c_1 = c_2 = c$ (i.e., equal acceleration coefficients), we obtain
$$\alpha = \frac{r_1}{r_1 + r_2}, \qquad \beta = \frac{r_1 + r_2}{2}. \tag{7}$$

$r_1$ and $r_2$ are two independent uniform random numbers in the range of $(0, 1)$. Hence we calculate the density function $f_{\alpha}(\alpha)$ of the learning factor and the joint density $f(\alpha, \beta)$ of the learning factor and the distribution factor (see the Appendix). The density function and the joint density are given by
$$f_{\alpha}(\alpha) = \begin{cases} \dfrac{1}{2(1-\alpha)^{2}}, & 0 < \alpha \le \dfrac{1}{2}, \\[6pt] \dfrac{1}{2\alpha^{2}}, & \dfrac{1}{2} < \alpha < 1, \end{cases} \tag{8}$$
$$f(\alpha, \beta) = 4\beta, \qquad 0 < \alpha < 1,\ 0 < \beta < \frac{1}{2\max(\alpha, 1-\alpha)}. \tag{9}$$
Using (8) and (9), we can calculate the conditional probability density of $\beta$ given $\alpha$:
$$f(\beta \mid \alpha) = \frac{f(\alpha, \beta)}{f_{\alpha}(\alpha)} = 8\beta \max(\alpha, 1-\alpha)^{2}, \qquad 0 < \beta < \frac{1}{2\max(\alpha, 1-\alpha)}. \tag{10}$$
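Densities (8) and (9) can be checked by Monte Carlo simulation. The sketch below samples the two factors for $c_1 = c_2$, compares the empirical histogram of $\alpha$ against (8), and verifies the support constraint of (9); it is a verification aid, not part of the algorithm.

import numpy as np

rng = np.random.default_rng(2)
r1, r2 = rng.random(10**6), rng.random(10**6)
alpha = r1 / (r1 + r2)                   # learning factor, c1 = c2
beta = (r1 + r2) / 2.0                   # distribution factor

# Analytic density (8).
f = lambda a: np.where(a <= 0.5, 0.5 / (1 - a)**2, 0.5 / a**2)

hist, edges = np.histogram(alpha, bins=50, range=(0, 1), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - f(mid))))     # small deviation: histogram matches (8)

# Support of beta given alpha in (9): 2 * beta * max(alpha, 1 - alpha) < 1.
assert np.all(2 * beta * np.maximum(alpha, 1 - alpha) <= 1.0 + 1e-12)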

We can see from (8) that $f_{\alpha}(\alpha)$ is a unimodal function and is symmetrical about $\alpha = 0.5$ (Figure 1(a)). If the interval of the learning factor is divided into three smaller ones, that is, $(0, 0.25)$, $(0.25, 0.75)$, and $(0.75, 1)$, the probability of the learning factor lying in the interval $(0, 0.25)$ is the same as that in the interval $(0.75, 1)$. When the learning factor is located in the interval $(0, 0.25)$, we consider that the particle emphasizes learning from gbest. If, on the contrary, the learning factor is located in the interval $(0.75, 1)$, the particle pays attention to learning from pbest.

From (10), we can see that the range of the distribution factor depends on the value of the learning factor. The relationship between the learning factor and the distribution factor is shown in Figure 1(b). When the learning factor equals 0.5, the value of the distribution factor ranges from 0 to 1. When the learning factor equals 0 or 1, the value of the distribution factor ranges from 0 to 0.5. For a fixed $\alpha$, the conditional density $f(\beta \mid \alpha)$ increases with the rising of the distribution factor, which means that the particles tend toward large values of the distribution factor.

In the basic PSO, the probability characteristics of the learning factor and the distribution factor may bring forth the clustering of the swarm. During an iteration, each particle emphasizes learning from gbest with probability $P(0 < \alpha < 0.25) = 1/6$; at the same time, the value range of the distribution factor is then restricted because the learning factor lies in the interval $(0, 0.25)$, which means that the one-step space of the particle is lessened. This situation will eventually lead the clustering swarm to move toward gbest. The conditional probability density of the distribution factor is an increasing function of the distribution factor, which means that the particle tends to a "long-distance flight" in the one-step space. This flight may accelerate the clustering of the particles. When the learning factor equals 0.5, the value of the distribution factor falls into its maximum range from 0 to 1, but the probability of the learning factor lying around 0.5 is only $P(0.45 < \alpha < 0.55) = 0.182$. In other words, the one-step space of the particle is shrunk with probability 0.818. All of the above reasons may bring about the quick clustering of the swarm.
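The probabilities quoted above follow directly from (8) and are easy to reproduce by sampling; the numbers 0.182 and 1/6 are consequences of the density, not tuned constants.

import numpy as np

rng = np.random.default_rng(3)
r1, r2 = rng.random(10**7), rng.random(10**7)
alpha = r1 / (r1 + r2)

# Near-maximal one-step range (alpha around 0.5) is comparatively rare.
print(np.mean((alpha > 0.45) & (alpha < 0.55)))       # ~0.182
# Strongly one-sided pulls toward gbest or pbest.
print(np.mean(alpha < 0.25), np.mean(alpha > 0.75))   # ~1/6 each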

4. PSO with Double Learning Patterns (PSO-DLP)

In this section, we describe PSO-DLP in detail. According to the analysis of the basic PSO, the probability characteristics of the learning parameters influence the search behavior of particles. PSO-DLP takes advantage of the probability characteristics of the learning parameters to achieve the global search.

4.1. Learning Patterns

In PSO-DLP, we develop two learning patterns, called uniform exploration learning and enhanced exploitation learning, and employ two swarms, namely, a master swarm and a slave swarm. The master swarm adopts the uniform exploration learning to avoid premature convergence, and the slave swarm uses the enhanced exploitation learning to accelerate the convergence.

In the uniform exploration learning, we present three novel strategies. First, the learning factor and the distribution factor are independent. The value of the distribution factor varies from 0 to 1, which enlarges the search space of particles efficiently; in the basic PSO, the distribution factor reaches its maximum value of 1 only when the learning factor equals 0.5. Second, the distribution factor is subject to the uniform distribution in $(0, 1)$, which is beneficial for preserving the swarm diversity. Third, the learning factor decreases with the iterations, which helps the particles emphasize exploration in the earlier stage of the evolution and enhances convergence in the later stage. For particle $i$ in the master swarm, its velocity is updated as follows:
$$v_{id}^{t+1} = w v_{id}^{t} + \left(c_1 + c_2\right)\beta\left(\alpha p_{id}^{t} + (1-\alpha)\, Gb_{d}^{m} - x_{id}^{t}\right), \tag{11}$$
$$\beta \sim U(0, 1), \tag{12}$$
$$\alpha = \alpha_{\max} - \left(\alpha_{\max} - \alpha_{\min}\right)\frac{FEs}{FEs_{\max}}, \tag{13}$$
where $\alpha_{\max}$ and $\alpha_{\min}$ are the upper and lower bounds of the learning factor, $FEs$ is the number of fitness evaluations, and $FEs_{\max}$ is the maximum number of fitness evaluations defined by the user. $Gb^{m}$ is the gbest of the master swarm.

The purpose of the enhanced exploitation learning is to make the search focus on a region in order to refine a promising solution. In this pattern, the learning factor and the distribution factor are also independent. The learning factor is a uniform random number in $[0, 0.5]$, which makes the search concentrate around gbest. The distribution factor is a uniform random number in $[0, 0.5]$ as well, which shrinks the one-step search space to accelerate the convergence.

For particle $i$ in the slave swarm, its velocity is updated as follows:
$$v_{id}^{t+1} = w v_{id}^{t} + \left(c_1 + c_2\right)\beta\left(\alpha p_{id}^{t} + (1-\alpha)\, Gb_{d}^{s} - x_{id}^{t}\right), \qquad \alpha \sim U(0, 0.5),\ \beta \sim U(0, 0.5), \tag{14}$$
where $Gb^{s}$ is the gbest of the slave swarm.
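Since the article's equation layout was lost in extraction, the following Python sketch restates the two updates as reconstructed in (11)-(14). The decay bounds a_max and a_min and the coefficient sum c = c1 + c2 are illustrative placeholders, not values prescribed by the paper.

import numpy as np

rng = np.random.default_rng(4)

def master_velocity(x, v, pbest, gb_m, w=0.7298, c=3.0,
                    a_max=1.0, a_min=0.5, fes=0, fes_max=1):
    # Uniform exploration learning, equation (11): beta ~ U(0, 1) keeps the
    # full one-step range, and alpha decays linearly with fitness evaluations.
    beta = rng.random(x.shape)
    alpha = a_max - (a_max - a_min) * fes / fes_max
    return w * v + c * beta * (alpha * pbest + (1 - alpha) * gb_m - x)

def slave_velocity(x, v, pbest, gb_s, w=0.7298, c=3.0):
    # Enhanced exploitation learning, equation (14): alpha ~ U(0, 0.5) biases
    # learning toward gbest, and beta ~ U(0, 0.5) shrinks the one-step space.
    alpha = 0.5 * rng.random(x.shape)
    beta = 0.5 * rng.random(x.shape)
    return w * v + c * beta * (alpha * pbest + (1 - alpha) * gb_s - x)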

4.2. Interaction between Swarms

In PSO-DLP, the two learning patterns play different roles in the evolutionary process. The master swarm is used for the global search, while the slave swarm is employed for the local search. But particles in the slave swarm easily get trapped in local optima. To improve the convergence precision of the slave swarm, interaction between the two swarms is necessary. This interaction is unidirectional: the information flows from the master swarm to the slave swarm. Particles in the master swarm do not receive information from the slave swarm, in order to keep the ability to perform the global search. When the best particle $Gb^{s}$ of the slave swarm has not improved for $m$ successive generations, or when the best particle $Gb^{m}$ of the master swarm has a better fitness value than $Gb^{s}$, the slave swarm learns from $Gb^{m}$. Too small and too large values of $m$ are both undesirable: the former tends to weaken the exploitation capability of particles in the slave swarm, and the latter wastes computational resources (as the slave swarm may suffer from premature convergence). In this study, we set $m = 50$.
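In code, the interaction rule reduces to a small guard evaluated once per generation. The sketch below assumes minimization and numpy arrays; the counter-reset behavior follows the reading of Algorithm 1 given below.

import numpy as np

def interact(gb_m, f_gb_m, gb_s, f_gb_s, stall, m=50):
    # Replace the slave's best when the slave has stagnated for m successive
    # generations or the master's best is already better (smaller fitness).
    if stall >= m or f_gb_m < f_gb_s:
        return gb_m.copy(), f_gb_m, 0     # reset the stall counter
    return gb_s, f_gb_s, stall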

4.3. PSO-DLP Procedure

PSO-DLP is developed based on the two learning patterns and the interaction between the swarms discussed above. The pseudocode of PSO-DLP is given in Algorithm 1. As no additional operation is introduced into PSO-DLP, its computational complexity and storage memory are the same as those of the basic PSO. Therefore, PSO-DLP keeps the simplicity of the basic PSO.

Input:
 Master swarm size (M), slave swarm size (S), the dimensionality of the problem space
 (D), maximum number of fitness evaluations (iterMax), objective function f(·)
(1) Randomly initialize the position and velocity of all particles in the master swarm.
(2) Randomly initialize the position and velocity of all particles in the slave swarm.
(3) Calculate the fitness values of X_i^m and X_j^s, i = 1, …, M, j = 1, …, S.
(4) Set pbest_i^m and pbest_j^s to be X_i^m and X_j^s for each
particle of the master swarm and slave swarm, respectively.
(5) Set the particle with the best fitness of the master swarm to be Gb^m, and set the particle
with the best fitness of the slave swarm to be Gb^s.
(6) Set generation t = 0, the counter k = 0.
(7) while  FEs ≤ iterMax  do
(8)  for  i = 1 to M  do
(9)  for  d = 1 to D  do
(10)    Update the dth dimensional velocity according to (11), and update the dth dimensional
    position according to (2).
(11)  end for
(12)  Calculate the fitness value f(X_i^m);
(13)  if  f(X_i^m) < f(pbest_i^m)  then
(14)      pbest_i^m = X_i^m; f(pbest_i^m) = f(X_i^m);
(15)      if  f(pbest_i^m) < f(Gb^m)  then
(16)       Gb^m = pbest_i^m; f(Gb^m) = f(pbest_i^m);
(17)      end if
(18)  end if
(19)  end for
(20)  for  j = 1 to S  do
(21)  for  d = 1 to D  do
(22)    Update the dth dimensional velocity according to (14), and update the dth dimensional
    position according to (2);
(23)  end for
(24)  Calculate the fitness value f(X_j^s);
(25)  if  f(X_j^s) < f(pbest_j^s)  then
(26)      pbest_j^s = X_j^s; f(pbest_j^s) = f(X_j^s);
(27)      if  f(pbest_j^s) < f(Gb^s)  then
(28)       Gb^s = pbest_j^s; f(Gb^s) = f(pbest_j^s); k = 0;
(29)      else
(30)       k = k + 1;
(31)      end if
(32)  end if
(33)  end for
(34)  if  k ≥ m or f(Gb^m) < f(Gb^s)  then
(35)     Gb^s = Gb^m;
(36)  end if
(37)  t = t + 1;
(38) end while
(39) Set the global best position Gb = arg min{f(Gb^m), f(Gb^s)}.
(40) return  Gb
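For readers who prefer an executable form, the following is a compact Python rendering of Algorithm 1 on a toy problem. It follows the reconstruction in Sections 3 and 4; the inertia weight, coefficient sum, bounds, and the linear decay of the master swarm's learning factor are illustrative assumptions, not the paper's tuned settings.

import numpy as np

def pso_dlp(f, dim=10, m_size=20, s_size=20, fes_max=50_000,
            lo=-100.0, hi=100.0, w=0.7298, c=3.0, m=50, seed=0):
    # Minimization of f: R^dim -> R with a master and a slave swarm.
    rng = np.random.default_rng(seed)

    def init(n):
        x = rng.uniform(lo, hi, (n, dim))
        fx = np.apply_along_axis(f, 1, x)
        return x, np.zeros((n, dim)), x.copy(), fx.copy()

    xm, vm, pm, fpm = init(m_size)                     # master swarm
    xs, vs, ps, fps = init(s_size)                     # slave swarm
    gm, fgm = pm[np.argmin(fpm)].copy(), fpm.min()
    gs, fgs = ps[np.argmin(fps)].copy(), fps.min()
    fes, stall = m_size + s_size, 0

    while fes < fes_max:
        # Master swarm: uniform exploration, alpha decays, beta ~ U(0, 1).
        alpha = 1.0 - 0.5 * fes / fes_max
        beta = rng.random((m_size, dim))
        vm = w * vm + c * beta * (alpha * pm + (1 - alpha) * gm - xm)
        xm = np.clip(xm + vm, lo, hi)
        fxm = np.apply_along_axis(f, 1, xm)
        b = fxm < fpm
        pm[b], fpm[b] = xm[b], fxm[b]
        if fpm.min() < fgm:
            gm, fgm = pm[np.argmin(fpm)].copy(), fpm.min()

        # Slave swarm: enhanced exploitation, alpha, beta ~ U(0, 0.5).
        a_s = 0.5 * rng.random((s_size, dim))
        b_s = 0.5 * rng.random((s_size, dim))
        vs = w * vs + c * b_s * (a_s * ps + (1 - a_s) * gs - xs)
        xs = np.clip(xs + vs, lo, hi)
        fxs = np.apply_along_axis(f, 1, xs)
        b = fxs < fps
        ps[b], fps[b] = xs[b], fxs[b]
        if fps.min() < fgs:
            gs, fgs, stall = ps[np.argmin(fps)].copy(), fps.min(), 0
        else:
            stall += 1

        # One-way interaction: the master rescues a stagnating slave swarm.
        if stall >= m or fgm < fgs:
            gs, fgs, stall = gm.copy(), fgm, 0

        fes += m_size + s_size

    return (gm, fgm) if fgm < fgs else (gs, fgs)

sphere = lambda x: float(np.sum(x * x))
best, fbest = pso_dlp(sphere)
print(fbest)     # should approach 0 on the sphere function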

5. Experimental Setup and Simulation Results

5.1. Benchmark Functions

Twenty scalable benchmark functions are used to investigate the performance of the proposed algorithm, including unimodal functions, multimodal functions, rotated functions, and shifted functions. These functions are widely adopted in [3, 8–13, 19–40]. All of them are minimization problems, and their expressions are given in Table 1. The shifting and rotating methods used in the test functions are from [28, 39]. In Table 1, M denotes the orthogonal rotation matrix, o denotes the shifted global optimum, and f_bias denotes the shifted fitness value. All functions are evaluated with 30 variables.
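The shifted and rotated problems in Table 1 can be generated mechanically from any base function. The sketch below builds g(x) = f(M(x − o)) + f_bias with a random orthogonal M obtained from a QR decomposition; the shift range and bias are illustrative choices, not the exact settings of [28, 39].

import numpy as np

rng = np.random.default_rng(5)

def make_shifted_rotated(f, dim, shift_range=(-80.0, 80.0), f_bias=0.0):
    o = rng.uniform(*shift_range, dim)                    # shifted global optimum
    M, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # orthogonal rotation
    return lambda x: f(M @ (np.asarray(x) - o)) + f_bias, o

rastrigin = lambda z: float(np.sum(z * z - 10 * np.cos(2 * np.pi * z) + 10))
g, o = make_shifted_rotated(rastrigin, dim=30)
print(g(o))     # at the shifted optimum x = o, g equals f_bias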

5.2. Parameter Settings for the Involved PSO Variants

For a comprehensive comparison with PSO-DLP, eight PSO variants are employed in this paper: PSO-W [8], HPSO [3], FIPS [25], CLPSO [28], SPSO [26], HEPSO [22], COMPSO [32], and TS-CPSO [33]. The parameter settings of each peer algorithm are taken from the corresponding literature and are given in Table 2.

The swarm size is set to 40 [28] for the eight algorithms except for COMPSO. For COMPSO, the population size is set at 80, as suggested by its authors. COMPSO has four subswarms, and TS-CPSO and PSO-DLP have two subswarms; in these three algorithms, each subswarm has the same size, which is set to 20. To ensure a fair comparison, all algorithms use the same maximum number of fitness evaluations. Each algorithm was run 30 times independently to reduce random discrepancy.

5.3. Performance Metrics

In this study, we adopt the fitness mean (Fm), the success rate (SR), and the success performance (SP) to assess the accuracy, the reliability, and the efficiency of PSO, respectively [39]. Fm is the mean difference between the best fitness value found by an algorithm and the global optimum, that is, the mean of $f(x_{best}) - f(x^{*})$, where $x^{*}$ and $x_{best}$ represent the global solution and the best solution found by the algorithm, respectively. SR denotes the consistency of an algorithm in achieving a solution within a predefined accuracy level $\varepsilon$. SP denotes the number of FEs required by an algorithm to solve the problem at the predefined $\varepsilon$. The Wilcoxon test [41, 42] is applied to perform pairwise comparisons between PSO-DLP and its peers. The confidence level is fixed at 0.95. If the performance of PSO-DLP is better than that of its peer, the result is denoted by "+"; if the result is "=" or "−", the performance of PSO-DLP is almost the same as or significantly worse than that of its peer, respectively. The average ranking can be calculated to undertake multiple comparisons [41, 42].
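The three metrics and the pairwise test can be computed from per-run records as in the sketch below. The SP formula follows the common convention mean(FEs of successful runs)/SR; the arrays are hypothetical run data, not results from this paper.

import numpy as np
from scipy.stats import wilcoxon

def metrics(best_vals, fes_used, f_star, eps):
    # best_vals: best fitness per run; fes_used: FEs consumed per run.
    errors = np.asarray(best_vals) - f_star
    success = errors <= eps
    fm = errors.mean()                                # accuracy (Fm)
    sr = success.mean()                               # reliability (SR)
    sp = (np.asarray(fes_used)[success].mean() / sr   # efficiency (SP)
          if success.any() else np.inf)
    return fm, sr, sp

# Pairwise Wilcoxon signed-rank test at the 0.95 confidence level.
a = np.array([1e-9, 2e-9, 5e-10, 3e-9, 1e-9])    # hypothetical PSO-DLP errors
b = np.array([1e-3, 4e-4, 2e-3, 9e-4, 1e-3])     # hypothetical peer errors
stat, p = wilcoxon(a, b)
print("+" if p < 0.05 else "=")                   # '+': significant difference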

5.4. Parameter Sensitivity Analysis

The effect of the parameter $m$ on the performance of PSO-DLP was investigated. Eight benchmarks, comprising two unimodal and six multimodal functions, were selected. The value of $m$ was varied over integer values from 10 to 100. Table 3 presents the mean values and the standard deviations obtained by PSO-DLP with different $m$.

From Table 3, we can see that the errors on the two unimodal functions decrease as $m$ increases, because a higher value of $m$ is beneficial to the exploitation search. For the six multimodal functions, the performance of PSO-DLP tends to deteriorate when $m$ is set too low (i.e., $m$ = 10, 20, 30) or too high (i.e., $m$ = 90, 100). When $m$ is too low, the interaction between the subswarms is overemphasized: the slave swarm receives the best solution from the master swarm so frequently that its search is not sufficient to refine a solution. When $m$ is too high, the interaction between the swarms is insufficient, and the slave swarm may fall into premature convergence on multimodal functions; if the interaction interval is long, the master swarm cannot help the slave swarm escape from local optima. From the above experimental findings, $m = 50$ gave the most promising performance of PSO-DLP. Hence, this parameter setting was adopted in the following experiments.

5.5. Experimental Results and Discussions
5.5.1. Comparison among the Fm Results

Table 4 presents the fitness mean (Fm) and standard deviation (SD) of the nine algorithms on the conventional, rotated, and shifted problems. The best results among the nine algorithms are shown in bold.

From Table 4, we observe that PSO-DLP obtains the best search accuracy on the eight conventional problems except one. Specifically, PSO-DLP finds the global optima of four of these functions. In addition, PSO-DLP is the only algorithm to achieve an accuracy level of $10^{-19}$ on one function. On the remaining function, COMPSO obtains the best result and PSO-DLP ranks third.

The functions in group 2 are multimodal functions with coordinate rotations. As can be seen from Table 4, all the algorithms perform worse on the rotated functions than on their unrotated counterparts. PSO-DLP performs best on 3 out of the 5 rotated functions in terms of Fm. On one of the remaining functions, TS-CPSO obtains the best fitness value, followed by PSO-DLP.

From Table 4, we observe that none of the involved algorithms can find the global optima of the shifted and complex problems, except for function 15. Function 15 is an easy shifted problem, and only PSO-DLP achieves its global optimum. PSO-DLP surpasses all other algorithms on four of these functions, which implies that PSO-DLP is the least sensitive to the shifting operation. The complex problems combine rotating and shifting operations, and PSO-DLP obtains the best solutions on these functions as well.

5.5.2. Comparison of Nonparametric Statistical Test Results

The Wilcoxon test results between PSO-DLP and its peers are reported in Table 4. According to these results, PSO-DLP significantly outperforms all other algorithms on twelve functions. COMPSO performs best on one function; on another function, PSO-DLP and COMPSO show no significant difference according to the statistical test values. PSO-DLP obtains the global solution on two functions, and on two further functions the performance of HEPSO, TS-CPSO, and CLPSO is almost the same as that of PSO-DLP, as the Wilcoxon test results show.

To compare the performance of multiple algorithms, we used the average ranking of the Friedman test to evaluate the effectiveness of PSO-DLP comprehensively. An algorithm with better convergence accuracy is assigned a smaller rank value. Table 5 shows the average rankings of the nine PSO variants on the 20 functions. According to the average ranking, the nine algorithms are sorted in the following order: PSO-DLP, HEPSO, COMPSO, CLPSO, HPSO, TS-CPSO, PSO-W, SPSO, and FIPS. PSO-DLP obtains the best performance.

5.5.3. Comparisons of the SR Value and the SP Value

The SR values and the SP values of all algorithms are shown in Table 6. If an algorithm is unable to solve a problem at all, the value of SR is set to zero (i.e., SR = 0) and the value of SP is assigned infinity (i.e., SP = "inf"). As shown in Table 6, PSO-DLP has superior search reliability among the compared peers: it completely solves all conventional and rotated problems except three functions. For these three functions, none of the compared algorithms can solve the problem completely at the predefined $\varepsilon$, and PSO-DLP achieves the best Fm results on two of them (Table 4). For the shifted problems, PSO-DLP completely solves four out of six functions at the predefined $\varepsilon$. Meanwhile, none of the algorithms are able to solve the remaining two functions; nevertheless, PSO-DLP obtains the smallest Fm values for both.

As reported in Table 6, PSO-DLP obtains seven best SP values on the conventional and rotated problems, showing that PSO-DLP costs the least computation to solve these problems at the predefined $\varepsilon$. Figure 2 shows the convergence curves of the nine PSO algorithms on all functions. The curves in Figures 2(b), 2(d), 2(j), 2(k), and 2(t) show that PSO-DLP has a competitive convergence speed among its peers. On the functions in Figures 2(i), 2(q), and 2(r), PSO-DLP converges slowly at the beginning and achieves fast convergence in the later stage of the evolution. In particular, the convergence speed of PSO-DLP on the functions in Figures 2(f), 2(g), and 2(h) is significantly faster than that of its peers in the middle stage of the evolution. Overall, PSO-DLP has better control of the convergence speed and accuracy than the involved PSO variants.

5.5.4. Effect of Different Strategies

The proposed PSO-DLP employs two learning patterns: the uniform exploration learning (UEL) and the enhanced exploitation learning (EEL). To investigate their effects, we studied the performance of PSO-W, PSO with UEL only (PSO-UEL), PSO with EEL only (PSO-EEL), and the complete PSO-DLP. PSO-UEL and PSO-EEL each adopt a single swarm, so there is no interaction between subswarms. The Fm values of all involved algorithms are presented in Table 7.

From Table 7, PSO-UEL achieves better results than PSO-W on 15 functions and successfully solves two multimodal functions, which means that PSO-UEL has a strong exploration ability. Meanwhile, we observe that PSO-EEL performs well on the four unimodal functions and the three Rosenbrock functions but fails to solve the other multimodal functions; that is, PSO-EEL is more easily trapped in local optima on multimodal functions. By integrating the two learning patterns, PSO-DLP obtains excellent performance.

5.5.5. Performance Comparisons between PSO-DLP and Other Evolutionary Algorithms

In this section, we compare PSO-DLP with the covariance matrix adaptation evolution strategy (CMA-ES) [43], self-adaptive DE (SaDE) [44], and adaptive DE with an optional external archive (JADE) [45]. CMA-ES is a powerful stochastic optimization technique in which the mean and covariance matrix of a multivariate normal distribution are constantly updated during the evolutionary process. Differential evolution (DE) has been shown to be an efficient stochastic search algorithm for optimization problems. SaDE provides a mutation strategy pool from which a strategy is adaptively selected according to its previous performance. JADE is another adaptive DE variant, in which a novel mutation strategy is developed and an external archive is employed to store information about the progress direction. Simulation results have shown that JADE has a competitive performance.

The twenty benchmark functions presented in Table 1 are used to test the performance of the four algorithms. The parameter values of CMA-ES and the DE variants are selected according to the recommendations of their respective references. The maximum number of fitness evaluations is the same as in the previous experiments, and the population size is set at 40 (with the corresponding default setting for CMA-ES). All functions are tested on 30 dimensions (D = 30). All algorithms are run independently 30 times to ensure a fair assessment.

The mean (Fm) and standard deviation (SD) of the results obtained by each algorithm are summarized in Table 8. From Table 8, we observe that PSO-DLP performs better on eleven functions and performs the same as the DE variants or CMA-ES on six functions. From the final ranks, we obtain the following order of the involved algorithms: PSO-DLP, JADE, CMA-ES, and SaDE. PSO-DLP has a competitive performance among these algorithms on the tested functions.

6. Conclusion

In this paper, we studied the motion behavior of the swarm and pointed out that the probabilistic characteristics of the learning factor and the distribution factor in the updating equation affect the search behavior of particles. We developed a Particle Swarm Optimization with double learning patterns (PSO-DLP), which uses two swarms with different search abilities and an interaction mechanism between them to control the exploration and exploitation searches. In PSO-DLP, the master swarm encourages the exploration process, while the slave swarm concentrates on a small region to find its local optimum easily and accelerate the convergence. The two swarms fulfill their search tasks by adopting two different learning patterns in which the search behavior is controlled by adjusting the probabilistic characteristics of the learning factor and the distribution factor.

A significant feature of PSO-DLP is that it achieves a good balance between high precision and fast convergence. Another attractive property is that it employs the probabilistic characteristics of the learning parameters to control the swarm diversity, so it does not need any additional operation.

In our future work, we will study the effect of the topological structure of the uniform exploration learning pattern, and PSO-DLP will be extended to image segmentation and multiobjective problems.

Appendix

Considering the case where $c_1 = c_2 = c$, we calculate the joint probability distribution of $\alpha$ and $\beta$. From (7), the inverse transformation is $r_1 = 2\alpha\beta$ and $r_2 = 2\beta(1-\alpha)$, whose Jacobian is
$$|J| = \begin{vmatrix} \dfrac{\partial r_1}{\partial \alpha} & \dfrac{\partial r_1}{\partial \beta} \\[6pt] \dfrac{\partial r_2}{\partial \alpha} & \dfrac{\partial r_2}{\partial \beta} \end{vmatrix} = \begin{vmatrix} 2\beta & 2\alpha \\ -2\beta & 2(1-\alpha) \end{vmatrix} = 4\beta.$$
Since $r_1$ and $r_2$ are independent uniform random variables on $(0, 1)$, their joint density equals 1 on the unit square. By the change of variables, the joint density of $\alpha$ and $\beta$ is
$$f(\alpha, \beta) = 4\beta, \qquad 0 < \alpha < 1,\ 0 < \beta < \frac{1}{2\max(\alpha, 1-\alpha)}.$$
Using the joint density of $\alpha$ and $\beta$, we can calculate the marginal density of $\alpha$ and the marginal density of $\beta$.

The marginal density of $\alpha$ is
$$f_{\alpha}(\alpha) = \int_{0}^{1/(2\max(\alpha,1-\alpha))} 4\beta \, d\beta = \frac{1}{2\max(\alpha, 1-\alpha)^{2}} = \begin{cases} \dfrac{1}{2(1-\alpha)^{2}}, & 0 < \alpha \le \dfrac{1}{2}, \\[6pt] \dfrac{1}{2\alpha^{2}}, & \dfrac{1}{2} < \alpha < 1. \end{cases}$$
The marginal density of $\beta$ is
$$f_{\beta}(\beta) = \begin{cases} 4\beta, & 0 < \beta \le \dfrac{1}{2}, \\[4pt] 4(1-\beta), & \dfrac{1}{2} < \beta < 1, \end{cases}$$
where, for $\beta \le 1/2$, the integration over $\alpha$ runs over the whole interval $(0, 1)$, whereas for $\beta > 1/2$ it is restricted to $1 - 1/(2\beta) < \alpha < 1/(2\beta)$. We can get the conditional density of $\beta$ given $\alpha$ using the following equation:
$$f(\beta \mid \alpha) = \frac{f(\alpha, \beta)}{f_{\alpha}(\alpha)}.$$
So the conditional density of $\beta$ given $\alpha$ is
$$f(\beta \mid \alpha) = 8\beta \max(\alpha, 1-\alpha)^{2}, \qquad 0 < \beta < \frac{1}{2\max(\alpha, 1-\alpha)}.$$
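The densities derived above can be validated by simulation; the following sketch checks the triangular marginal of $\beta$ and the conditional mean $E[\beta \mid \alpha] = 1/(3\max(\alpha, 1-\alpha))$, which follows from the conditional density.

import numpy as np

rng = np.random.default_rng(6)
r1, r2 = rng.random(10**6), rng.random(10**6)
a, b = r1 / (r1 + r2), (r1 + r2) / 2.0

# Marginal of beta: 4b on (0, 1/2], 4(1 - b) on (1/2, 1).
hist, edges = np.histogram(b, bins=50, range=(0, 1), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
tri = np.where(mid <= 0.5, 4 * mid, 4 * (1 - mid))
print(np.max(np.abs(hist - tri)))       # small: triangular marginal confirmed

# Conditional mean of beta on a thin slice around alpha = 0.7.
s = (a > 0.69) & (a < 0.71)
print(b[s].mean(), 1.0 / (3 * 0.7))     # both close to 0.476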

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant no. 61300059 and by the Provincial Project of Natural Science Research for Anhui Colleges of China under Grant nos. KJ2012Z031 and KJ2012Z024.