Stochastic Systems: Modeling, Optimization, and Applications
Analysis of Population Diversity of Dynamic Probabilistic Particle Swarm Optimization Algorithms
In evolutionary algorithms, population diversity is an important factor in solving performance. In this paper, drawing on population diversity analysis methods from other evolutionary algorithms, three indicators are introduced as measures of population diversity in PSO algorithms: the standard deviation of population fitness values, the population entropy, and the Manhattan norm of the standard deviation of population positions. The three measures are used to analyze the population diversity of a relatively new PSO variant, Dynamic Probabilistic Particle Swarm Optimization (DPPSO). The results show that the three measures reflect the evolution of population diversity in DPPSO algorithms from different angles, and the impact of population diversity on the DPPSO variants is also discussed. The conclusions about population diversity in DPPSO can be used to analyze, design, and improve DPPSO algorithms, thus improving optimization performance, and can also be helpful for understanding the working mechanism of DPPSO theoretically.
Particle Swarm Optimization (PSO) is a bionic evolutionary algorithm proposed by Kennedy and Eberhart in 1995. Like other evolutionary algorithms such as Genetic Algorithms and Evolutionary Programming, PSO is a population-based method and finds the optimal solution by evolving the individuals in a population. PSO has already been widely used in the field of engineering optimization [2–5].
In population-based evolutionary algorithms, premature convergence of the population is a common and widely studied problem. In practical engineering applications of evolutionary algorithms, how to prevent individuals from falling into local optima and the population from converging prematurely is a very important research direction. One important cause of premature convergence is that population diversity declines relatively quickly during evolution, whereas maintaining a certain level of diversity helps individuals retain the ability to explore unvisited regions of the search space.
In an evolutionary algorithm, population diversity is commonly used to describe the differences among individuals, and maintaining it reduces the possibility of convergence to a local optimum. Therefore, maintaining population diversity during evolution is of great significance for finding a satisfactory final solution.
For well-researched evolutionary algorithms such as Genetic Algorithms, there are already results on analyzing population diversity. However, PSO differs considerably from these algorithms, and there are only preliminary results on the population diversity of PSO. Blackwell proposed a mechanism that increases population diversity in a dynamic environment, based on an analysis of the population diversity of PSO, and this mechanism improved performance in a simple dynamic environment. Shi and Eberhart proposed a method to measure population diversity based on the velocity of particles. Chong et al. investigated the relationship between generalization capability and diversity in evolutionary computation methods. Cheng et al. described the relationship between information dissemination and population diversity; their paper focuses on how different population topologies change the information dissemination and thereby the population diversity. Ismail and Engelbrecht studied the population diversity of a kind of cooperative PSO and explained why this kind of algorithm performs better.
The above studies indicate that adjusting and controlling population diversity is one of the important strategies for improving the classic PSO. Kennedy proposed a kind of PSO without the velocity attribute [12, 13]; Ni and Deng carried out further research [14, 15] and systematically integrated this family of PSO variants as the Dynamic Probabilistic Particle Swarm Optimization (DPPSO) algorithm, many variants of which show better solving performance. For this kind of PSO, however, there is little research on population diversity, population topology, and parameter settings, and there is still no comprehensive analysis of the population diversity of DPPSO. Such an analysis would contribute greatly to understanding both the improvements and the working mechanism of PSO. In this paper, based on three population diversity indicators, we carry out a systematic study of the population diversity of DPPSO. The conclusions are significant for the improvement of DPPSO, the settings of population topology and important parameters, and the theoretical study of the working mechanism of PSO.
This paper is organized as follows. Section 2 describes the fundamental principle of DPPSO. Section 3 introduces the population diversity measures used in this paper. Section 4 analyzes the evolution of population diversity for several DPPSO variants. Section 5 concludes the paper.
2. Variants of Dynamic Probabilistic Particle Swarm Optimization
In classic PSO algorithms, a particle is regarded as a point in the solution space that has both a velocity and a position and flies with its velocity to update its position. Typical PSO variants include PSO with inertia weight and PSO with a constriction factor.
Kennedy discussed the necessity of the velocity attribute of particles on the basis of the working mechanism of PSO and designed the first PSO algorithm in which particles have no velocity attribute. Ni and Deng studied this approach further and proposed more variants of this kind of PSO algorithm, which can collectively be called Dynamic Probabilistic Particle Swarm Optimization (DPPSO). Unlike typical PSO algorithms, particles in DPPSO have no velocity but only a position; they update their positions probabilistically based on individual and social experience. In DPPSO algorithms, the position of a particle is calculated according to the update equations (1), (2), and (3).
Table 1 shows the meaning of corresponding symbols in the position update equation.
In (1), (2), and (3) of DPPSO, the two intermediate terms are D-dimensional vectors determined by (3) and (2), respectively. The three control coefficients are important parameters that are usually set to positive constants. The random term is produced by a random number generator that usually follows a specific distribution and directly determines how the solution space is sampled.
The execution process of the DPPSO algorithm is shown in Algorithm 1. Different choices of the random number generator yield different DPPSO variants, which usually have different advantages. DPPSO-Gaussian converges quickly in the early stage of evolution. DPPSO-Cauchy performs well on certain benchmark problems. DPPSO-Logistic and DPPSO-Hyperbolic secant retain a good ability of exploration even in the later stage of evolution, which gives particles a strong ability to escape from local optima. These different advantages help investigators design appropriate methods for practical engineering problems. Although population diversity plays an important role in the evolutionary process, there has been no complete analysis of the evolution of population diversity for the DPPSO variants.
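To make the idea of a velocity-free, probability-driven position update concrete, the following is a minimal sketch in the spirit of Kennedy's bare-bones particle swarm, the ancestor of DPPSO: each dimension is resampled from a Gaussian centred between the personal best and the neighbourhood best. This is an illustration only; the actual DPPSO update (1)-(3) also involves the previous displacement and the control coefficients, and its variants swap the Gaussian for other distributions.

```python
import random

def bare_bones_step(pbest, gbest):
    """One velocity-free update in the spirit of Kennedy's bare-bones PSO.
    Each dimension is sampled from a Gaussian whose mean is the midpoint of
    the personal best and the neighbourhood best, and whose spread is their
    distance, so exploration is wide while the two experiences disagree and
    narrows as they agree."""
    new_position = []
    for p, g in zip(pbest, gbest):
        mean = (p + g) / 2.0   # blend of individual and social experience
        spread = abs(p - g)    # shrinks to 0 as the swarm converges
        new_position.append(random.gauss(mean, spread))
    return new_position
```

Note that when `pbest` and `gbest` coincide, the spread collapses to zero and the particle stops moving, which is exactly the diversity-loss behaviour the rest of this paper measures.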
3. Measure Methods of Population Diversity
Previous research demonstrates that maintaining population diversity during the run of an evolutionary algorithm is an important precondition for continuous evolution. Studying the population diversity of an algorithm therefore helps to gain a deeper understanding of its mechanism. To date, researchers have explored the population diversity of algorithms from several angles.
In this paper, according to the characteristics of DPPSO algorithms, we adopt measures of population diversity based on the population fitness standard deviation, the population position standard deviation, and the population entropy. The relevant definitions are as follows.
Definition 1 (population fitness standard deviation). Suppose the m particles of a population obtain fitness values f_1(t), f_2(t), ..., f_m(t) at generation t, and let f̄(t) = (1/m) Σ_{i=1}^{m} f_i(t). The population fitness standard deviation of generation t of the PSO algorithm is defined as

σ_f(t) = sqrt( (1/m) Σ_{i=1}^{m} ( f_i(t) − f̄(t) )² ).    (4)
Definition 2 (population position standard deviation). Suppose the m particles of a population have positions at generation t that can be expressed as vectors X_i(t) = (x_{i1}(t), x_{i2}(t), ..., x_{iD}(t)), i = 1, 2, ..., m, and let x̄_d(t) = (1/m) Σ_{i=1}^{m} x_{id}(t) for each dimension d. The population position standard deviation for generation t is the D-dimensional vector whose d-th component is computed by

σ_d(t) = sqrt( (1/m) Σ_{i=1}^{m} ( x_{id}(t) − x̄_d(t) )² ),  d = 1, 2, ..., D.    (5)
Definition 3 (population entropy). Population entropy is used as one of the population diversity measures; at generation t it is defined as

E(t) = − Σ_{k=1}^{Q} p_k ln p_k,    (6)

where Q is the total number of particle fitness categories and p_k is the proportion of particles whose fitness falls in category k.
In general, population entropy can be estimated as follows. (1) Compare the fitness values of the particles in the population, find their minimum and maximum, divide the interval between them equally into Q parts, and count the number of particles falling into each part; Q can usually be set to m (the population scale). (2) Count the number of particles n_k in each nonempty part and calculate p_k = n_k / m. (3) Substitute the p_k into (6) to obtain the population entropy of generation t.
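The binning procedure above can be sketched in a few lines of Python (the function and variable names are ours, chosen for illustration):

```python
import math

def population_entropy(fitness_values, Q=None):
    """Estimate population entropy (Definition 3): split the fitness range
    into Q equal parts (Q defaults to the population size m), count the
    particles per part, and sum -p*ln(p) over the nonempty parts."""
    m = len(fitness_values)
    Q = Q or m
    f_min, f_max = min(fitness_values), max(fitness_values)
    if f_max == f_min:
        return 0.0                     # all particles identical: minimum entropy
    width = (f_max - f_min) / Q
    counts = [0] * Q
    for f in fitness_values:
        k = min(int((f - f_min) / width), Q - 1)  # clamp f_max into the last bin
        counts[k] += 1
    return -sum((n / m) * math.log(n / m) for n in counts if n > 0)
```

For example, a population whose fitness values spread evenly over the bins attains the maximum entropy ln Q, while a fully converged population returns 0.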
As can be seen from the definition of population entropy, when all the particles in the population have the same fitness value, there is a single category with proportion 1, and the population entropy reaches its minimum value 0; when the fitness values are distributed more evenly, the population entropy is greater.
Based on the above definitions, we take the population fitness standard deviation (DiversityA), the Manhattan norm of the population position standard deviation vector (DiversityB), and the population entropy (DiversityC) as the measures of the population diversity of DPPSO. Greater indicator values mean that the population contains more particles of different types and that the diversity is more pronounced.
According to the definitions of the three population diversity indicators, DiversityB (the Manhattan norm of the population position standard deviation vector) detects population diversity from the viewpoint of the particle distribution in the solution space, whereas DiversityA (population fitness standard deviation) and DiversityC (population entropy) are tied to the optimization problem and detect population diversity through the particles' fitness values. DiversityC additionally shows the distribution characteristics of the different types of particles.
4. Analysis of Population Diversity of DPPSO Variants
4.1. Experiment Setting
Population diversity is an important guarantee of sustaining evolution of particles in PSO algorithm. This paper mainly focused on the performance of population diversity of those important variants of DPPSO. The measurement of population diversity is introduced based on the definitions in Section 3, which are DiversityA (population fitness standard deviation), DiversityB (Manhattan norm of population position standard deviation vector), and DiversityC (population entropy).
In DPPSO, four variants are relatively representative: DPPSO-Gaussian, DPPSO-Cauchy, DPPSO-Hyperbolic secant, and DPPSO-Logistic. This paper examines the evolution of the population diversity of these four variants when solving benchmark functions, namely, Sphere, Schaffer F6, Rosenbrock, and Griewank, which are defined in Table 2. The settings are as follows: the number of evolutionary generations is 3000, the population topology is the fully connected topology, and the population scale and the control coefficients are set to the usual positive constant values. Each experiment is repeated 100 times, and the three population diversity measures are recorded.
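For reference, the four benchmarks can be written in their standard textbook forms; the exact dimensionalities and search ranges used in the experiments are those listed in Table 2:

```python
import math

def sphere(x):
    """Sphere: sum of squares; unimodal, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def schaffer_f6(x):
    """Schaffer F6 (2-D): highly multimodal; global minimum 0 at (0, 0)."""
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2

def rosenbrock(x):
    """Rosenbrock: narrow curved valley; global minimum 0 at (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def griewank(x):
    """Griewank: many regularly spaced local optima; global minimum 0 at the origin."""
    s = sum(v * v for v in x) / 4000.0
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return 1.0 + s - p
```

Sphere is the easy unimodal case, while the other three stress exploration in different ways, which is what makes the diversity comparison across variants informative.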
4.2. Analysis of Results
In this paper, we applied the four DPPSO algorithms mentioned above to the benchmark functions and analyzed the evolution of the optimal fitness value and of the three population diversity indicators. To show the evolutionary trends more clearly, the diversity indicators in Figures 1, 2, 3, and 4 were sampled every 10 generations. DiversityC generally reflects the degree of concentration of the particles in the solution space; statistical information about DiversityC (Table 3) is also discussed in this section.
4.2.1. Sphere Function
For Sphere function, the evolutionary trends of the optimal fitness value are shown in Figure 1(a), and the evolutionary trends of three diversity indicators are shown in Figures 1(b), 1(c), and 1(d), respectively.
As can be seen from Figure 1(a), DPPSO-Logistic obtains the best results of the four algorithms, with DPPSO-Hyperbolic secant second. According to Figure 1(c), DiversityB declines more slowly in the later stage of evolution; according to Figure 1(d), the DiversityC indicator of DPPSO-Logistic and DPPSO-Hyperbolic secant persists at a high level even in the later stage. Considering the data in Table 3, the median and mean of DiversityC for DPPSO-Logistic and DPPSO-Hyperbolic secant are greater than those of the other two algorithms, and their standard deviations also perform better. Thus, DiversityB and DiversityC can well explain the performance differences among these algorithms.
4.2.2. Schaffer F6 Function
For Schaffer F6 function, the evolutionary trends of the optimal fitness value are shown in Figure 2(a), and the evolutionary trends of three diversity indicators are shown in Figures 2(b), 2(c), and 2(d), respectively.
We can see from Figure 2(a) that these algorithms have similar performance, with DPPSO-Cauchy performing slightly worse, while DPPSO-Logistic and DPPSO-Hyperbolic secant obtain better results in the later stage of evolution. According to Figures 2(c) and 2(d), DiversityA and DiversityC persist at relatively high levels in the early and middle stages of evolution, then drop slowly and remain at a lower level in the later stage. According to Table 3, the median and mean for DPPSO-Cauchy are relatively low, which to some degree explains its worse performance. We can conclude that for hard-to-optimize functions like Schaffer F6, it is necessary to keep the population diversity at a higher level in the early and middle stages of evolution, while in the later stage it should be kept lower for a more careful search. This behavior is well captured by the evolution of DiversityA and DiversityC.
4.2.3. Rosenbrock Function
For Rosenbrock function, the evolutionary trends of the optimal fitness value are shown in Figure 3(a), and the evolutionary trends of three diversity indicators are shown in Figures 3(b), 3(c), and 3(d), respectively.
As we can see in Figure 3(a), these algorithms have similar performance in the later stage of evolution, with DPPSO-Gaussian performing slightly worse and DPPSO-Hyperbolic secant and DPPSO-Cauchy performing better. This can be explained by the DiversityA and DiversityB indicators in Figures 3(b) and 3(c), in which DPPSO-Gaussian drops quickly in both indicators and stays lower than the other three algorithms. Considering DiversityC in Figure 3(d), DPPSO-Gaussian shows a sudden drop in the middle stage, which illustrates that a quick drop in population diversity can lead to worse algorithm performance.
4.2.4. Griewank Function
For Griewank function, the evolutionary trends of the optimal fitness value are shown in Figure 4(a), and the evolutionary trends of three diversity indicators are shown in Figures 4(b), 4(c), and 4(d), respectively.
We can see from Figures 4(b) and 4(c) that the four algorithms perform similarly on the DiversityA and DiversityB indicators, with DPPSO-Hyperbolic secant and DPPSO-Cauchy dropping more slowly in both. In the later stage of evolution all algorithms show similar behavior in these two indicators, although the indicator values fluctuate strongly for DPPSO-Cauchy. Considering Figure 4(a), the solving performance of the four algorithms on the Griewank function matches the evolution of the two indicators.
Considering Figure 4(d) and Table 3, the DiversityC of DPPSO-Logistic and DPPSO-Hyperbolic secant persists at a higher level throughout the evolutionary process, and its mean and median remain greater than those of DPPSO-Cauchy and DPPSO-Gaussian even in the later stage, indicating good population diversity late in the run. This gives the algorithm a good ability of exploration to keep searching the solution space. Considering Figure 4(a), DPPSO-Logistic and DPPSO-Hyperbolic secant tend to keep moving toward the theoretical optimal solution, which can be explained by DiversityC. We also note that the DiversityC of DPPSO-Cauchy fluctuates strongly during the evolutionary process.
As the above experimental data show, population diversity is an important factor influencing the global search capability of PSO algorithms. During evolution, a high level of population diversity reflects a strong ability of individuals to explore, but keeping it high for too long reduces the convergence speed of the algorithm; a low level of population diversity reflects a good ability of individuals to exploit, that is, to search known regions carefully.
In this paper, based on population fitness standard deviation, population position standard deviation, and population entropy, we designed corresponding indicators to measure the population diversity of DPPSO and analyzed the evolution of these indicators during the runtime of typical variants of DPPSO.
Considering the analysis in Section 4, the population diversity of the DPPSO algorithm can be fully measured by the three introduced indicators. A quick drop in population diversity often leads to trapping in a local optimum. Keeping population diversity at a high level in the early and middle stages of evolution improves the exploration ability of DPPSO; in the later stage, keeping DiversityC high helps to explore the solution space further, whereas keeping it low helps to search known regions carefully. Therefore, it is important to maintain a certain degree of population diversity during the evolutionary process of DPPSO. When applying DPPSO in engineering practice, reasonable control strategies based on the characteristics of the different DPPSO variants can be designed to avoid local optima and improve performance by controlling the population diversity.
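One possible shape such a control strategy could take is sketched below. This is a hypothetical illustration, not a method from this paper: when a monitored indicator such as DiversityB falls below a threshold, a fraction of the particles is re-randomized inside the search bounds to restore exploration.

```python
import random

def rerandomize_if_converged(positions, diversity, threshold, bounds, fraction=0.2):
    """Hypothetical diversity-control step: if the monitored diversity
    indicator has fallen below `threshold`, re-randomize `fraction` of the
    particles uniformly inside `bounds` to restore exploration; otherwise
    leave the population untouched."""
    if diversity >= threshold:
        return positions
    n_reset = max(1, int(len(positions) * fraction))
    lo, hi = bounds
    for i in random.sample(range(len(positions)), n_reset):
        positions[i] = [random.uniform(lo, hi) for _ in positions[i]]
    return positions
```

The trigger threshold and reset fraction would need tuning per DPPSO variant, since, as the experiments show, the variants lose diversity at very different rates.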
The work in the next stage includes further study of the relationship between population topology and population diversity, and further analysis of the working mechanisms of DPPSO.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This paper is supported by the Laboratory of Military Network Technology, PLA University of Science and Technology (LMNT2012-1), Nanjing, China, and NSFC (Grant nos. 60803061 and 61170164).
References
J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, IEEE, December 1995.
Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Computational Intelligence, pp. 69–73, IEEE, May 1998.
S. Cheng, Y. Shi, and Q. Qin, “Population diversity based study on search information propagation in particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '12), pp. 1–8, IEEE, 2012.
A. Ismail and A. P. Engelbrecht, “Measuring diversity in the cooperative particle swarm optimizer,” in Swarm Intelligence, pp. 97–108, Springer, 2012.
Q. Ni and J. Deng, “Two improvement strategies for logistic dynamic particle swarm optimization,” in Adaptive and Natural Computing Algorithms, pp. 320–329, Springer, 2011.