Journal of Optimization

Research Article | Open Access

Volume 2013 | Article ID 356420 | https://doi.org/10.1155/2013/356420

Wang Yong, Li Jing-yang, Li Chun-lei, "Double Flight-Modes Particle Swarm Optimization", Journal of Optimization, vol. 2013, Article ID 356420, 8 pages, 2013. https://doi.org/10.1155/2013/356420

Double Flight-Modes Particle Swarm Optimization

Academic Editor: Jein-Shan Chen
Received: 20 Feb 2013
Revised: 29 Sep 2013
Accepted: 30 Sep 2013
Published: 16 Dec 2013

Abstract

Taking inspiration from real birds in flight, we propose a new particle swarm optimization algorithm, the double flight-modes particle swarm optimization (DMPSO). In the DMPSO, each bird (particle) can use both a rotational flight mode and a non-rotational flight mode while it searches for food in its search space. There is a King in the swarm of birds, and the King controls each bird's flight behavior at all times in accordance with certain rules. Experiments were conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere. The results show that the DMPSO not only has a marked advantage in global convergence but also effectively avoids the premature convergence problem and performs well on complex, high-dimensional optimization problems.

1. Introduction

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995 [1], based on the swarm behavior of birds searching for food. Since then, PSO has attracted increasing attention from researchers in the information sciences and generated much wider interest, because it is simple to implement and requires little domain knowledge. However, the original PSO still suffers from premature convergence, a problem that exists in most stochastic optimization algorithms. To improve the performance of the PSO, many scholars have proposed various approaches, such as those listed in [2–22]. The methods presented by these authors can be summed up in two strategies. The first strategy is to add to the swarm's quantity of information by increasing the population size, in order to improve the performance of the algorithm. However, this strategy cannot fundamentally overcome the premature convergence problem and certainly increases the running time of the computation. The second strategy is, without increasing the population size, to excavate or increase every particle's latent capacity so as to improve the performance of the algorithm. Although the approaches in [2–22] improve the performance of the PSO to some extent, they cannot fundamentally solve the premature convergence problem of the original PSO.

In this paper, we intend to present a new particle swarm optimization, namely, the double flight modes particle swarm optimization (DMPSO for short), based on the flight characteristics of birds. The rest of this paper is organized as follows. In Section 2, we briefly introduce the original PSO, the PSO-W, and the CLPSO. Section 3 describes the double flight-modes particle swarm optimization. In Section 4, we conduct our simulation experiments on some test functions for each algorithm and compare the performance of the DMPSO with that of the original PSO, with that of the PSO-W, and with that of the CLPSO. We give our conclusions in Section 5.

2. Particle Swarm Optimizers

2.1. The Original PSO

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart [1]. PSO emulates swarm behavior, and the individuals are treated as points in the $D$-dimensional search space. Each individual is called a "particle" and represents a potential solution to a problem. The position and velocity of the $i$th particle are represented as $X_i = (x_i^1, x_i^2, \ldots, x_i^D)$ and $V_i = (v_i^1, v_i^2, \ldots, v_i^D)$, respectively. The best previous position (the position yielding the best fitness value) of the $i$th particle is represented as $P_i = (p_i^1, p_i^2, \ldots, p_i^D)$, and the best position discovered by the whole population as $P_g = (p_g^1, p_g^2, \ldots, p_g^D)$. Then the velocity and position of the $i$th particle are updated according to the following equations [1]:
$$v_i^d = v_i^d + c_1 r_1 (p_i^d - x_i^d) + c_2 r_2 (p_g^d - x_i^d), \qquad x_i^d = x_i^d + v_i^d, \tag{1}$$
where $c_1$ and $c_2$ are the acceleration coefficients reflecting the weighting of the stochastic acceleration terms that pull each particle toward the $P_i$ and $P_g$ positions, respectively, and $r_1$ and $r_2$ are two random numbers in the range $[0, 1]$.
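As a concrete illustration (not the authors' code), the original PSO update above can be sketched in Python for a single particle:

```python
import random

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0):
    """One velocity/position update of the original PSO for one particle.

    x, v, pbest are lists of length D (position, velocity, personal best);
    gbest is the swarm-best position.  Each dimension is pulled toward
    pbest and gbest by fresh uniform random weights r1, r2 in [0, 1].
    """
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)   # position moves by the new velocity
    return new_x, new_v
```

When the particle already sits at both its personal best and the swarm best, the attraction terms vanish and it simply coasts on its current velocity.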

2.2. Some Variants PSO

Since PSO was introduced by Kennedy and Eberhart [1], many researchers have worked on improving its performance in various ways, deriving many interesting variants. One of the variant PSOs [2] introduces a parameter called the inertia weight into the original PSO as follows:

$$v_i^d = w v_i^d + c_1 r_1 (p_i^d - x_i^d) + c_2 r_2 (p_g^d - x_i^d), \qquad x_i^d = x_i^d + v_i^d, \tag{2}$$

in which the inertia weight $w$ plays the role of balancing the global and local search: a large inertia weight facilitates a global search, while a small one facilitates a local search. In (2), if the inertia weight decreases linearly over the course of the search, then the variant PSO [2] is usually denoted PSO-W.
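The linearly decreasing schedule used by PSO-W (starting at 0.9 and ending at 0.4, the values used later in Section 4) can be written as a one-line helper:

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight of PSO-W: equals w_start at
    iteration t = 0 and falls to w_end at iteration t = t_max."""
    return w_start - (w_start - w_end) * t / t_max
```

Early iterations thus favor global exploration (large w) and late iterations favor local refinement (small w).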

Another variant PSO [16], called the comprehensive learning particle swarm optimizer (CLPSO), presents a new learning strategy. In the CLPSO, the velocity updating equation is changed to

$$v_i^d = w v_i^d + c\, r^d \left( p_{f_i(d)}^d - x_i^d \right), \tag{3}$$

in which $f_i = (f_i(1), f_i(2), \ldots, f_i(D))$ defines which particles' $pbest$s the $i$th particle should follow. Here $p_{f_i(d)}^d$ can be the corresponding dimension of any particle's $pbest$ (including the particle's own $P_i$), and the decision depends on a probability $Pc_i$, referred to as the learning probability, which can take different values for different particles. We first generate a random number for each dimension of the $i$th particle. If this random number is larger than $Pc_i$, the corresponding dimension will learn from the particle's own $pbest$; otherwise, it will learn from another particle's $pbest$.
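The exemplar choice just described can be sketched as follows. Note that the full CLPSO of [16] picks the "other particle" via a fitness tournament between two random particles; a uniform random pick is used here purely for brevity:

```python
import random

def choose_exemplars(i, n_particles, n_dims, pc):
    """For particle i, return the index f_i(d) of the particle whose
    pbest supplies each dimension d: the particle's own index when a
    uniform draw exceeds the learning probability pc, and some other
    particle's index otherwise (simplified from the CLPSO tournament)."""
    exemplar = []
    for d in range(n_dims):
        if random.random() > pc:
            exemplar.append(i)                      # learn from own pbest
        else:
            j = random.randrange(n_particles - 1)   # pick any other particle
            exemplar.append(j if j < i else j + 1)  # skip i itself
    return exemplar
```

With a small pc the particle mostly follows its own experience; with a large pc it learns broadly from the rest of the swarm.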

3. The Double Flight Modes Particle Swarm Optimization

3.1. The Flight Characteristics of Birds

Through careful observation, we have found the following. (1) Most birds have superb flight skills: they can use various flight modes, such as a rotational mode and a non-rotational mode, to fly in their search space; they can avoid attacks by their natural enemies and various obstacles; and they can avoid becoming trapped in a blind alley. (2) There is usually a King of birds (a flight commander) in most bird swarms; the King controls or directs every bird's flight mode and flying direction while the swarm searches for food in the search space. We therefore think that if a bird used only one flight mode all the time, it would be unable to avoid attacks and obstacles and would easily become trapped in a blind alley, and that without a King controlling the flying direction, the swarm would become scattered and disunited. In most cases, a bird with superb flight skills can usually find more food while foraging in its search space.

For the sake of simplicity, we use the following idealized rules.
(1) Each bird uses only the rotational flight mode and the non-rotational flight mode while it is searching for food in its search space.
(2) There is a King among the swarm of birds. The King controls or directs every bird's flight behavior in accordance with certain rules, directing each bird's flight mode and flying direction while the swarm is searching for food.
(3) The flight speed of a bird is related to the distance between the bird and its flying destination: to a certain degree, the farther the distance, the faster the bird flies toward the destination.

If we idealize the flight characteristics of a swarm of birds according to the previous description, then we can develop a new particle swarm optimization inspired by real birds in flight. In simulations, we naturally use virtual birds (particles).

3.2. The Flight Modes of Birds

Let $X_i$ and $V_i$ be the position and velocity of particle $i$, respectively, $P_i$ be the best previous position yielding the best fitness value for the $i$th particle, and $P_g$ be the best position discovered by the whole population.

We first give the concepts of the rotational flight mode and the non-rotational flight mode, respectively.

Definition 1. Let $X_i(t)$ be the position of particle $i$ at time instant $t$ and $P_g(t)$ be the best position discovered by the whole population.

(1) We say that particle $i$ uses the rotational flight mode to fly to the position $P_g(t)$ if it flies to that position according to the following equation: where the number is a random integer in the given set.

We can use a diagrammatic sketch to depict a group of birds using the rotational flight mode to fly to $P_g(t)$, as in Figure 1.

We can foresee that if a group of birds uses the rotational flight mode to fly to the position $P_g(t)$ at time instant $t$, then to some extent the group will gather around that position at the next time instant.

(2) We say that particle $i$ uses the non-rotational flight mode to fly to the position $P_g(t)$ if it flies to that position according to the following equation: where $g(\cdot)$ is an increasing function of its argument (the distance between the $d$th component of $P_g(t)$ and the $d$th component of $X_i(t)$), $c_1$ and $c_2$ are the acceleration coefficients, and $r_1$ and $r_2$ are two uniformly distributed random numbers in the range $[0, 1]$.

In our simulations of this paper, we select as the increasing function in (6), where and , .
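The exact expression selected for the increasing function in (6) is not essential to the idea; purely as an illustration (an assumption, not the function used in the paper), a bounded increasing map of the distance, consistent with rule (3) of Section 3.1, could look like this:

```python
import math

def distance_weight(d, lo=0.5, hi=2.5):
    """Hypothetical increasing function of a distance d >= 0, mapped into
    [lo, hi): the farther the bird is from its destination, the larger the
    acceleration coefficient.  Illustrative stand-in only; the paper's
    actual function and bounds may differ."""
    return lo + (hi - lo) * (1.0 - math.exp(-d))
```

Any monotonically increasing, bounded map of the distance would serve the same purpose: far-away particles accelerate harder toward the target, while nearby particles slow down.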

We can use a diagrammatic sketch to depict particle $i$ using the non-rotational flight mode to fly to the position $P_g(t)$, as in Figure 2.

3.3. The Flight-Control Approach of the King

Since the King of birds controls each bird's flight behavior in accordance with certain rules, we regard the King as a flight commander and assume that every bird's flight behavior is controlled by the King. In the following, we set up a flight-command rule that the King uses to control each bird's flight behavior. We first give the concept of the flight command.

Definition 2. Let $N$ be the population size of the swarm and $r_i(t)$ be the order of particle $i$ in the swarm according to the ascending sort of fitness values at time instant $t$. The flight command is then defined by the following equation: in which, if $r_i(t) = 1$, the fitness value of particle $i$ is the best in the swarm at time instant $t$; meanwhile, if $r_i(t) = N$, it is the worst.

The King controls each particle’s flight mode according to the following approach.

Step 1. The King first gives an instruction at random, where the instruction is a normally distributed random number in the given range.

Step 2. Each particle chooses its flight mode according to the following rule (8): if the condition in (8) holds, particle $i$ chooses the non-rotational flight mode for the next step; otherwise, particle $i$ chooses the rotational flight mode.
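The details of rule (8) beyond what Definition 2 and Steps 1–2 state are assumptions here. Under the assumptions that each particle's flight command is its normalized fitness rank, that the King's instruction is drawn uniformly for simplicity, and that a command exceeding the instruction selects the non-rotational mode, the assignment can be sketched as:

```python
import random

def assign_flight_modes(ranks, n):
    """ranks[i] in 1..n is particle i's position in the ascending sort of
    fitness values (1 = best, n = worst).  The King draws one random
    instruction R per iteration; each particle compares its normalized
    rank against R.  The normalization ranks[i]/n, the uniform draw, and
    the comparison direction are illustrative assumptions."""
    R = random.random()                       # the King's instruction
    modes = []
    for r in ranks:
        command = r / n                       # flight command, in (0, 1]
        modes.append("non-rotational" if command > R else "rotational")
    return modes
```

Under this scheme the better-ranked particles tend to keep the rotational (gathering) mode while the worse-ranked ones are sent off non-rotationally, and a single shared instruction makes the split point vary from iteration to iteration.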

The procedure of the DMPSO can be simply described as in Procedure 1.

  Objective function f(X), X = (x^1, ..., x^D)
  Initialize each particle's position X_i and velocity V_i randomly, and assign X_i
  to the pbest P_i at the same time (i = 1, 2, ..., N)
  while (the stop criterion is not satisfied) do
      for i = 1 to N do
          calculate the fitness value of particle i
      end for
      rank the swarm according to the ascending sort of the fitness values
      assign each particle's flight mode according to Rule (8)
      update each particle's velocity and position by its assigned flight mode,
      and update each pbest P_i and the gbest P_g
  end while
  output the gbest P_g and its fitness value f(P_g)

4. Validation and Comparison

In order to test the performance of the DMPSO, we have tested it against the original PSO [1], the PSO-W [2], and the CLPSO [16]. For the ease of visualization, we have implemented our simulations using Matlab for various test functions.

4.1. Benchmark Functions

For the sake of having a fair and reasonable comparison between the DMPSO and the PSO, the DMPSO and the PSO-W, or the DMPSO and the CLPSO, we have chosen six well-known high-dimensional functions as our optimization simulation tests. All functions are tested on 50 dimensions. The properties and the formula of these functions are presented as follows.

(a) Schwefel's function: It is a multimodal function with a global minimum at $X = (420.9687, \ldots, 420.9687)$. The complexity of Schwefel's function is due to its deep local optima being far from the global optimum, so it is hard to find the global optimum once many particles fall into one of the deep local optima.

(b) Rastrigin's function: It is a complex multimodal function; the number of its local minima increases exponentially with the problem dimension, and its landscape oscillates sharply up and down. Algorithms attempting to solve Rastrigin's function easily fall into a local optimum, so an algorithm capable of maintaining larger diversity is likely to yield better results; Rastrigin is therefore viewed as a typical function for testing the global search performance of an algorithm. It has a global minimum of 0 at $X = (0, \ldots, 0)$.

(c) Step function: in which $\lfloor \cdot \rfloor$ is the bracket (Gauss floor) function. The Step function is a discontinuous function and has a global minimum in the given domain.

(d) Ackley's function: It has one narrow global optimum basin and many minor local optima, with the global optimum at $X = (0, \ldots, 0)$.

(e) Sphere function: It is a unimodal function with its global minimum of 0 at $X = (0, \ldots, 0)$.

(f) Griewank's function: The search space of this optimization problem is relatively large, and Griewank's function has a component that causes linkages among variables, making it difficult to reach the global optimum. It is therefore generally regarded as a complex multimodal problem that is hard to optimize. The function has a global minimum of 0 at $X = (0, \ldots, 0)$.
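For reference, the standard textbook forms of these six well-known benchmarks (with their usual constants, which we assume match those used in the paper; the dimensions and domains are as given above) are:

```python
import math

def sphere(x):
    # unimodal; global minimum 0 at the origin
    return sum(xi * xi for xi in x)

def schwefel(x):
    # multimodal; global minimum ~0 at xi = 420.9687...
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)

def rastrigin(x):
    # multimodal; exponentially many local minima, global minimum 0 at the origin
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def step(x):
    # discontinuous; the floor ("bracket") makes the landscape piecewise flat
    return sum(math.floor(xi + 0.5) ** 2 for xi in x)

def ackley(x):
    # one narrow global basin at the origin, many minor local optima
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def griewank(x):
    # product term links the variables together; global minimum 0 at the origin
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(d + 1)) for d, xi in enumerate(x))
    return s - p + 1.0
```

Each function is written for an arbitrary dimension, so the 50-dimensional instances used in the experiments are obtained simply by passing 50-element vectors.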

4.2. Comparison of Experimental Results and Discussions

There are many ways to compare algorithm performance, such as comparing the number of function evaluations (FEs) required to reach a given accuracy, or comparing the accuracies reached after a fixed number of function evaluations. In our simulations, we use both of these, and we set a stopping condition for each algorithm: a run stops when it has found the global optimal solution of the optimization problem or has reached a fixed number of function evaluations. We run each algorithm 50 times so that we can perform a reasonable and meaningful analysis.

To ensure the comparability of the experimental results, the parameter settings are kept as consistent as possible across the DMPSO, the PSO, the PSO-W, and the CLPSO. They are as follows: the same population size and acceleration coefficients are used for all simulations. For three of the test functions, we set the maximum number of iterations to 500 (correspondingly, the maximum number of function evaluations to 15000); for the other three, we set the maximum number of iterations to 2000 (correspondingly, 60000 function evaluations). In addition, we set the inertia weight for the CLPSO and for the DMPSO; for the PSO-W, a linearly decreasing inertia weight is used that starts at 0.9 and ends at 0.4.

In our experiments, we choose the best fitness value (BFV), the worst fitness value (WFV), the mean value (Mean), and the number of function evaluations in the form of mean (MFEs) ± standard deviation (STDEV), together with the success rate in finding the global optimum, as the evaluation indicators of optimization ability for the four algorithms mentioned previously. These indicators not only reflect optimization ability but also indicate computing cost. The experimental results are listed in Table 1.


Table 1: Search result comparisons (BFV, WFV, Mean, and MFEs ± STDEV, with the success rate in finding the global optimum in parentheses) among the DMPSO, the PSO, the CLPSO, and the PSO-W on the six test functions. Across the six functions, the DMPSO reached the global optimum in 56%, 64%, 100%, 0%, 28%, and 82% of runs, respectively; the other three algorithms failed in every run, except the CLPSO, which succeeded in 48% of runs on the third function.

To contrast the convergence characteristics of the four algorithms more easily, Figure 3 presents the convergence curves in terms of the best fitness value of the mean run of each algorithm on each test function.

Discussion. From the results listed in Table 1 and the convergence curves in Figure 3, we can see that the DMPSO performs much better than the original PSO, the CLPSO, and the PSO-W, and that it is superior to them in terms of global convergence, accuracy, and efficiency. We therefore conclude that the performance of the DMPSO is much better than that of the other three algorithms on these tests.

5. Conclusions

This paper presents a novel particle swarm optimization with double flight modes, which we call the double flight-modes particle swarm optimization (DMPSO). In this algorithm, each bird (particle) can use both the rotational flight mode and the non-rotational flight mode while foraging for food in the search space; by using its superb flight skills, each bird greatly improves its search efficiency. From the experiments we conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere, we conclude that the DMPSO not only has a marked advantage in global convergence but can also, to some extent, effectively avoid the premature convergence problem, and it is a good choice for solving complex, high-dimensional optimization problems, although it is not necessarily the best choice for every real-world optimization problem.

Acknowledgments

This work was supported by the key programs of the Institution of Higher Learning, Guangxi, China, under Grant (no. 201202ZD032), the Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis, the Natural Science Foundation of Guangxi, China, under Grant (no. 0832084), and the Natural Science Foundation of China under Grant (no. 61074185).

References

  1. J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Piscataway, NJ, USA, December 1995.
  2. Y. Shi and R. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, vol. 3, pp. 1945–1950, 1999.
  3. K. M. Rasmussen and T. Krink, “Hybrid particle swarm optimization with breeding and subpopulations,” in Proceedings of the 3rd Genetic and Evolutionary Computation Conference, San Francisco, Calif, USA, 2001.
  4. Y. H. Shi and R. C. Eberhart, “Fuzzy adaptive particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 101–106, IEEE, Piscataway, NJ, USA, May 2001.
  5. A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
  6. J. Kennedy and R. Mendes, “Population structure and particle swarm performance,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676, Honolulu, Hawaii, USA, 2002.
  7. F. van den Bergh and A. P. Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
  8. K. E. Parsopoulos and M. N. Vrahatis, “On the computation of all global minimizers through particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 211–224, 2004.
  9. J. Sun and W. B. Xu, “A global search of quantum-behaved particle swarm optimization,” in Proceedings of the Congress on Evolutionary Computation, pp. 325–331, IEEE Press, Washington, DC, USA, 2004.
  10. J. Sun, W. Xu, and J. Liu, “Parameter selection of quantum-behaved particle swarm optimization,” Lecture Notes in Computer Science, Springer, Berlin, Germany.
  11. Z.-S. Lu and Z.-R. Hou, “Particle swarm optimization with adaptive mutation,” Acta Electronica Sinica, vol. 32, no. 3, pp. 416–420, 2004 (in Chinese).
  12. R. He, Y.-J. Wang, Q. Wang, J.-H. Zhou, and C.-Y. Hu, “An improved particle swarm optimization based on self-adaptive escape velocity,” Journal of Software, vol. 16, no. 12, pp. 2036–2044, 2005 (in Chinese).
  13. L. Cong, Y.-H. Sha, and L.-C. Jiao, “Organizational evolutionary particle swarm optimization for numerical optimization,” Pattern Recognition and Artificial Intelligence, vol. 20, no. 2, pp. 145–153, 2007 (in Chinese).
  14. B. Jiao, Z. Lian, and X. Gu, “A dynamic inertia weight particle swarm optimization algorithm,” Chaos, Solitons and Fractals, vol. 37, no. 3, pp. 698–705, 2008.
  15. J. J. Liang and P. N. Suganthan, “Dynamic multi-swarm particle swarm optimizer,” in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 124–129, Pasadena, Calif, USA, June 2005.
  16. J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
  17. X. F. Wang, F. Wang, and Y.-H. Qiu, “Research on a novel particle swarm algorithm with dynamic topology,” Computer Science, vol. 34, no. 3, pp. 205–207, 2007 (in Chinese).
  18. P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, “Particle swarm and ant colony algorithms hybridized for improved continuous optimization,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
  19. Q. Lu, S.-R. Liu, and X.-N. Qiu, “Design and realization of particle swarm optimization based on pheromone mechanism,” Acta Automatica Sinica, vol. 35, no. 11, pp. 1410–1419, 2009.
  20. Q. Lu, X.-N. Qiu, and S.-R. Liu, “A discrete particle swarm optimization algorithm with fully communicated information,” in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '09), pp. 393–400, ACM/SIGEVO, New York, NY, USA, June 2009.
  21. Q. Lu and S.-R. Liu, “A particle swarm optimization algorithm with fully communicated information,” Acta Electronica Sinica, vol. 38, no. 3, pp. 664–667, 2010 (in Chinese).
  22. Z.-Z. Shao, H.-G. Wang, and H. Liu, “Dimensionality reduction symmetrical PSO algorithm characterized by heuristic detection and self-learning,” Computer Science, vol. 37, no. 5, pp. 219–222, 2010 (in Chinese).

Copyright © 2013 Wang Yong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

