Abstract

Inspired by real birds in flight, this paper proposes a new particle swarm optimization algorithm, which we call the double flight modes particle swarm optimization (DMPSO). In the DMPSO, each bird (particle) can use both a rotational flight mode and a non-rotational flight mode while it searches for food in its search space. There is a King in the swarm of birds, and the King controls each bird's flight behavior in accordance with certain rules at all times. Experiments were conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere. The experimental results show that the DMPSO not only has a marked advantage in global convergence but also effectively avoids premature convergence and performs well on complex, high-dimensional optimization problems.

1. Introduction

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995 [1], based on the swarm behavior of birds searching for food. Since then, PSO has attracted increasing attention from researchers and has generated much wider interest because of its simplicity of implementation and the small amount of domain knowledge it requires. However, the original PSO still suffers from premature convergence, a problem that exists in most stochastic optimization algorithms. To improve the performance of PSO, many scholars have proposed various approaches, such as those listed in [2–22]. These methods can be summed up into two strategies. The first strategy is to add information to the swarm by increasing its population size, in order to improve the performance of the algorithm. However, this strategy cannot fundamentally overcome the premature convergence problem and inevitably increases the running time. The second strategy is, without increasing the population size of the swarm, to exploit or increase every particle's latent capacity so as to improve the performance of the algorithm. Although the approaches in [2–22] can improve the performance of PSO to some extent, they cannot fundamentally solve the premature convergence problem of the original PSO.

In this paper, we present a new particle swarm optimization, namely, the double flight modes particle swarm optimization (DMPSO for short), based on the flight characteristics of birds. The rest of this paper is organized as follows. In Section 2, we briefly introduce the original PSO, the PSO-W, and the CLPSO. Section 3 describes the double flight modes particle swarm optimization. In Section 4, we conduct simulation experiments on a set of test functions and compare the performance of the DMPSO with that of the original PSO, the PSO-W, and the CLPSO. We give our conclusions in Section 5.

2. Particle Swarm Optimizers

2.1. The Original PSO

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart [1]. PSO emulates swarm behavior, and the individuals are treated as points in the $D$-dimensional search space. Each individual is called a "particle" and represents a potential solution to a problem. The position and the velocity of the $i$th particle are represented as $X_i = (x_i^1, x_i^2, \ldots, x_i^D)$ and $V_i = (v_i^1, v_i^2, \ldots, v_i^D)$, respectively. The best previous position (the position yielding the best fitness value) of the $i$th particle is represented as $pbest_i = (pbest_i^1, pbest_i^2, \ldots, pbest_i^D)$, and the best position discovered by the whole population is represented as $gbest = (gbest^1, gbest^2, \ldots, gbest^D)$. Then the velocity and the position of the $i$th particle are updated according to the following equation [1]:

$$v_i^d = v_i^d + c_1 r_1^d \bigl(pbest_i^d - x_i^d\bigr) + c_2 r_2^d \bigl(gbest^d - x_i^d\bigr), \qquad x_i^d = x_i^d + v_i^d, \quad d = 1, 2, \ldots, D, \qquad (1)$$

where $c_1$ and $c_2$ are the acceleration coefficients reflecting the weighting of the stochastic acceleration terms that pull each particle toward the $pbest_i$ and $gbest$ positions, respectively, and $r_1^d$ and $r_2^d$ are two random numbers in the range $[0, 1]$.
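As a concrete illustration of update (1), the following minimal NumPy sketch performs one velocity-and-position step for a single particle; the function name and the default coefficient values are illustrative rather than taken from the paper.

  import numpy as np

  def pso_step(x, v, pbest, gbest, rng, c1=2.0, c2=2.0):
      # One update of the original PSO, equation (1), for a single particle.
      # x, v, pbest, gbest are length-D NumPy arrays; rng is a numpy Generator.
      r1 = rng.random(x.shape)   # uniform random numbers in [0, 1]
      r2 = rng.random(x.shape)
      v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
      x_new = x + v_new
      return x_new, v_new

Applying pso_step to every particle and refreshing pbest and gbest after each evaluation reproduces the basic PSO loop.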

2.2. Some PSO Variants

Since PSO was introduced by Kennedy and Eberhart [1], many researchers have worked on improving its performance in various ways, deriving many interesting variants. One of these variants [2] introduces a parameter $w$, called the inertia weight, into the original PSO as follows:

$$v_i^d = w\,v_i^d + c_1 r_1^d \bigl(pbest_i^d - x_i^d\bigr) + c_2 r_2^d \bigl(gbest^d - x_i^d\bigr), \qquad x_i^d = x_i^d + v_i^d, \qquad (2)$$

in which the inertia weight $w$ plays the role of balancing the global and local search: a large inertia weight facilitates a global search, while a small inertia weight facilitates a local search. In (2), if the inertia weight decreases linearly over the course of the search, then this variant [2] is usually denoted PSO-W.
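As a sketch of how such a schedule can be implemented, the helper below computes a linearly decreasing inertia weight and plugs it into an update of the form (2). The start and end values 0.9 and 0.4 follow the settings reported in Section 4.2; the function names and the other default values are only illustrative.

  def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
      # Inertia weight that decreases linearly from w_start to w_end
      # as the iteration counter t runs from 0 to t_max.
      return w_start - (w_start - w_end) * t / t_max

  def psow_step(x, v, pbest, gbest, rng, w, c1=2.0, c2=2.0):
      # PSO-W update (2): identical to (1) except that the previous
      # velocity is scaled by the inertia weight w.
      r1 = rng.random(x.shape)
      r2 = rng.random(x.shape)
      v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
      return x + v_new, v_new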

Another variant of PSO [16], called the comprehensive learning particle swarm optimizer (CLPSO), presents a new learning strategy. In the CLPSO, the velocity updating equation is changed to

$$v_i^d = w\,v_i^d + c\,r_i^d \bigl(pbest_{f_i(d)}^d - x_i^d\bigr), \qquad (3)$$

in which $f_i = (f_i(1), f_i(2), \ldots, f_i(D))$ defines which particles' $pbest$s the $i$th particle should follow. $pbest_{f_i(d)}^d$ can be the corresponding dimension of any particle's $pbest$ (including its own $pbest_i$), and the decision depends on a probability $Pc_i$, referred to as the learning probability, which can take different values for different particles. We first generate a random number for each dimension of the $i$th particle. If this random number is larger than $Pc_i$, the corresponding dimension will learn from its own $pbest_i$; otherwise, it will learn from another particle's $pbest$.
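The following sketch builds such an exemplar vector for one particle and applies update (3). It assumes the commonly used CLPSO choice of picking the "other" particle by a two-way tournament on pbest fitness, as in [16], and assumes minimization; the function names and the parameter defaults are illustrative.

  import numpy as np

  def clpso_exemplar(i, pbest, pbest_val, Pc_i, rng):
      # Build the exemplar pbest_{f_i(.)} for particle i: each dimension learns
      # either from particle i's own pbest or, with probability Pc_i, from the
      # pbest of a particle chosen by a two-way tournament (lower fitness wins).
      n, dim = pbest.shape
      exemplar = pbest[i].copy()
      for d in range(dim):
          if rng.random() < Pc_i:
              a, b = rng.choice(n, size=2, replace=False)
              winner = a if pbest_val[a] < pbest_val[b] else b
              exemplar[d] = pbest[winner, d]
      return exemplar

  def clpso_step(x, v, exemplar, rng, w=0.7, c=1.5):
      # CLPSO velocity and position update built on equation (3).
      v_new = w * v + c * rng.random(x.shape) * (exemplar - x)
      return x + v_new, v_new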

3. The Double Flight Modes Particle Swarm Optimization

3.1. The Flight Characteristics of Birds

Through careful observation, we have found that (1) most birds have superb flight skills: they can use various flight modes, such as a rotational flight mode and a non-rotational flight mode, to fly in their search space, they can avoid attacks by their natural enemies and various obstacles, and they can avoid becoming trapped in a blind alley; and (2) there is usually a King of birds (a flight commander) in most swarms of birds, and the King controls or directs every bird's flight mode and flying direction while the swarm is searching for food in the search space. Therefore, we believe that if a bird uses only one flight mode all the time, it will be unable to avoid the attacks of its natural enemies and various obstacles and will easily become trapped in a blind alley, and that if there is no King controlling the flying direction of the swarm, the swarm will become scattered and disunited. In most cases, a bird with superb flight skills can usually find more food when it forages in its search space.

For the sake of simplicity, we use the following idealized rules.
(1) Each bird uses only the rotational flight mode and the non-rotational flight mode to fly while it is searching for food in its search space.
(2) There is a King of birds in the swarm. The King controls or directs every bird's flight behavior in accordance with certain rules and directs each bird's flight mode and flying direction while the swarm is searching for food in its search space.
(3) The flight speed of a bird is related to the distance between the bird and its flying destination: to a certain degree, the farther the bird is from its destination, the faster it flies toward it.

If we idealize the flight characteristics of a swarm of birds according to the previous description, we can develop a new particle swarm optimization inspired by real birds in flight. In simulations, we naturally use virtual birds (particles).

3.2. The Flight Modes of Birds

Let $X_i$ and $V_i$ be the position and the velocity of particle $i$, respectively, let $pbest_i$ be the best previous position yielding the best fitness value for the $i$th particle, and let $gbest$ be the best position discovered by the whole population.

We first define the rotational flight mode and the non-rotational flight mode, respectively.

Definition 1. Let $X_i(t)$ be the position of particle $i$ at the time instant $t$ and $gbest(t)$ be the best position discovered by the whole population.

(1) We say that particle $i$ uses the rotational flight mode to fly to the position $gbest(t)$ if particle $i$ flies to $gbest(t)$ according to the rotational update equation, where the number appearing in that equation is a random integer drawn from a prescribed set.

We can use a diagrammatic sketch to depict a group of birds using the rotational flight mode to fly to $gbest(t)$, as shown in Figure 1.

We can foresee that, if a group of birds uses the rotational flight mode to fly to the position $gbest(t)$ at the time instant $t$, then, to some extent, the group will gather around the position $gbest(t)$ at the next time instant.

(2) We say that particle $i$ uses the non-rotational flight mode to fly to the position $gbest(t)$ if particle $i$ flies to $gbest(t)$ according to the non-rotational update equation (6), where the weighting factor in (6) is an increasing function of the distance between the $d$th component of $gbest(t)$ and the $d$th component of $X_i(t)$, $c_1$ and $c_2$ are the acceleration coefficients, and $r_1$ and $r_2$ are two uniformly distributed random numbers in the range $[0, 1]$.

In the simulations of this paper, we select a specific increasing function for (6), with its parameters fixed in advance.

We can use a diagrammatic sketch to depict particle $i$ using the non-rotational flight mode to fly to the position $gbest(t)$, as shown in Figure 2.

3.3. The Flight-Control Approach of the King

Since the King of birds controls each bird's flight behavior in accordance with certain rules, we regard the King as a flight commander and consider every bird's flight behavior to be controlled by the King. In what follows, we set up a flight-command rule for the King, and the King uses this rule to control each bird's flight behavior. We first define the concept of the flight command as follows.

Definition 2. Let $N$ be the population size of the swarm and $order_i(t)$ be the order of particle $i$ in the swarm according to the ascending sort of the fitness values at the time instant $t$. Then the flight command of particle $i$ is defined by equation (7), in which, if $order_i(t) = 1$, then the fitness value of particle $i$ is the best one in the swarm at the time instant $t$; meanwhile, if $order_i(t) = N$, then the fitness value of particle $i$ is the worst one in the swarm at the time instant $t$.

The King controls each particle’s flight mode according to the following approach.

Step 1. The King first gives an instruction at random, where the instruction is a random number drawn from a normal distribution restricted to a given range.

Step 2. Each particle chooses its flight mode according to Rule (8), which compares the particle's flight command with the King's instruction. That is to say, if the condition in (8) holds for particle $i$, then particle $i$ will choose the non-rotational flight mode for its next step; otherwise, particle $i$ will choose the rotational flight mode for its next step.
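To make the rank-based control concrete, the sketch below assigns a flight mode to every particle. It assumes, purely for illustration, that the flight command is the particle's normalized rank and that Rule (8) compares this command with a single uniformly drawn instruction from the King; the paper's exact formulas (7) and (8) may differ.

  import numpy as np

  def assign_flight_modes(fitness, rng):
      # Illustrative version of the King's control (Section 3.3): rank the swarm
      # by fitness (ascending, so rank 1 is best), turn each rank into a
      # normalized "flight command", and compare it with one random
      # "instruction" issued by the King for the whole swarm.
      n = len(fitness)
      order = np.empty(n, dtype=int)
      order[np.argsort(fitness)] = np.arange(1, n + 1)   # order_i(t): 1 = best, n = worst
      flight_command = (order - 1) / (n - 1)             # assumed normalization of (7)
      instruction = rng.random()                         # King's instruction (assumed uniform)
      # assumed form of Rule (8): commands above the instruction fly non-rotationally
      return np.where(flight_command > instruction, "non-rotational", "rotational")

For example, assign_flight_modes(np.array([3.2, 0.5, 1.7, 2.9]), np.random.default_rng(0)) returns one mode label per particle.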

The procedure of the DMPSO can be simply described as in Procedure 1.

  Objective function f(X), X = (x^1, x^2, ..., x^D)
  Initialize each particle's position X_i and velocity V_i randomly and assign X_i to
the pbest_i at the same time (i = 1, 2, ..., N)
while (the stop criterion is not satisfied) do {
   For i = 1 to N, do
   {
     if ( )
     calculate the fitness value of particle i
     end if
     }
    Rank the swarm according to the ascending sort of the fitness value and get
    gbest and each particle's order order_i(t)
    Assign each particle's flight mode according to the Rule (8)
   Update each particle's velocity and position according to its flight mode, and update pbest_i and gbest
    }
end while
  output gbest and its fitness value
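For readers who prefer code, the following Python skeleton mirrors Procedure 1 under stated assumptions: it reuses the assign_flight_modes sketch from Section 3.3, assumes minimization, and replaces the paper's flight-mode updates (4)-(6), which are not reproduced here, with clearly marked placeholder functions; all names, bounds, and default parameter values are illustrative only.

  import numpy as np

  def rotational_flight(x, gbest, rng):
      # Placeholder only: stands in for the paper's rotational update; here the
      # particle simply moves a random fraction of the way toward gbest.
      step = rng.random(x.shape) * (gbest - x)
      return x + step, step

  def non_rotational_flight(x, v, pbest, gbest, rng, w=0.7, c1=2.0, c2=2.0):
      # Placeholder only: stands in for the paper's non-rotational update (6);
      # a PSO-W-style velocity rule is used instead.
      r1, r2 = rng.random(x.shape), rng.random(x.shape)
      v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
      return x + v_new, v_new

  def dmpso_skeleton(f, dim, n_particles=30, max_evals=15000, bounds=(-100.0, 100.0), seed=0):
      # Skeleton of the DMPSO main loop in Procedure 1 (illustrative defaults).
      rng = np.random.default_rng(seed)
      lo, hi = bounds
      X = rng.uniform(lo, hi, (n_particles, dim))
      V = rng.uniform(lo - hi, hi - lo, (n_particles, dim))
      pbest = X.copy()
      pbest_val = np.full(n_particles, np.inf)
      evals = 0
      while evals < max_evals:                               # stop criterion
          fitness = np.array([f(x) for x in X])              # evaluate the swarm
          evals += n_particles
          improved = fitness < pbest_val
          pbest[improved] = X[improved]
          pbest_val[improved] = fitness[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()         # best position found so far
          modes = assign_flight_modes(fitness, rng)          # the King's control (Section 3.3 sketch)
          for i in range(n_particles):                       # apply the assigned flight mode
              if modes[i] == "non-rotational":
                  X[i], V[i] = non_rotational_flight(X[i], V[i], pbest[i], gbest, rng)
              else:
                  X[i], V[i] = rotational_flight(X[i], gbest, rng)
      return gbest, pbest_val.min()

For instance, dmpso_skeleton(lambda x: np.sum(x**2), dim=50) runs the skeleton on the Sphere function.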

4. Validation and Comparison

In order to test the performance of the DMPSO, we have compared it against the original PSO [1], the PSO-W [2], and the CLPSO [16]. For ease of visualization, we implemented our simulations in Matlab for the various test functions.

4.1. Benchmark Functions

For the sake of a fair and reasonable comparison between the DMPSO and the PSO, the PSO-W, and the CLPSO, we have chosen six well-known high-dimensional functions as our optimization test problems. All functions are tested on 50 dimensions. The properties and formulas of these functions are presented as follows.

(a) Schwefel's function: it is a multimodal function with a single global minimum. The complexity of Schwefel's function is due to its deep local optima being far from the global optimum, so it is hard to find the global optimum if many particles fall into one of the deep local optima.

(b) Rastrigin's function: this is a complex multimodal function; the number of its local minima increases exponentially with the problem dimension, and its landscape oscillates sharply up and down. When attempting to solve Rastrigin's function, algorithms easily fall into a local optimum, so an algorithm capable of maintaining larger diversity is likely to yield better results; Rastrigin is therefore viewed as a typical function for testing the global search performance of an algorithm. It has a global minimum at the origin.

(c) Step function: in this function, the bracket (floor) function is applied to each coordinate. The Step function is discontinuous and has a global minimum in its search domain.

(d) Ackley's function: the function has one narrow global optimum basin and many minor local optima, and its global optimum is at the origin.

(e) Sphere function: it is a unimodal function, and its global minimum is at the origin.

(f) Griewank's function: the search space of this optimization problem is relatively large, and the function has a product term that causes linkages among variables, making it difficult to reach the global optimum. It is therefore generally regarded as a complex multimodal problem that is hard to optimize. The function has a global minimum at the origin.
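For reference, commonly used forms of these six benchmarks are collected below (with $D$ the dimension); the labels $f_1$ through $f_6$ and the particular Schwefel variant shown are assumptions, and the exact formulations and search ranges used in the experiments may differ slightly.

$$f_1(x) = 418.9829\,D - \sum_{d=1}^{D} x_d \sin\bigl(\sqrt{|x_d|}\bigr) \quad \text{(Schwefel 2.26)}$$
$$f_2(x) = \sum_{d=1}^{D} \bigl(x_d^2 - 10\cos(2\pi x_d) + 10\bigr) \quad \text{(Rastrigin)}$$
$$f_3(x) = \sum_{d=1}^{D} \lfloor x_d + 0.5 \rfloor^{2} \quad \text{(Step)}$$
$$f_4(x) = -20\exp\Bigl(-0.2\sqrt{\tfrac{1}{D}\textstyle\sum_{d=1}^{D} x_d^2}\Bigr) - \exp\Bigl(\tfrac{1}{D}\textstyle\sum_{d=1}^{D}\cos(2\pi x_d)\Bigr) + 20 + e \quad \text{(Ackley)}$$
$$f_5(x) = \sum_{d=1}^{D} x_d^2 \quad \text{(Sphere)}$$
$$f_6(x) = \frac{1}{4000}\sum_{d=1}^{D} x_d^2 - \prod_{d=1}^{D}\cos\Bigl(\frac{x_d}{\sqrt{d}}\Bigr) + 1 \quad \text{(Griewank)}$$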

4.2. Comparison of Experimental Results and Discussions

There are many ways to compare algorithm performance; for example, one can compare the number of function evaluations (FEs) required to reach a given accuracy, or compare the accuracies reached within a fixed number of function evaluations. In our simulations, we use both of these ways and set a stopping condition for each algorithm: a run stops when it has found the global optimal solution of the optimization problem or has reached the fixed maximum number of function evaluations. We run each algorithm 50 times so that we can carry out a reasonable and meaningful analysis.
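As a small sketch of how such runs can be summarized, the helper below computes the mean and standard deviation of the function evaluations together with the success rate over a batch of independent runs; the function name and the toy inputs are illustrative.

  import numpy as np

  def summarize_runs(fevals_per_run, success_flags):
      # Mean number of function evaluations (MFEs) +/- standard deviation and the
      # success rate of reaching the global optimum, one entry per independent run.
      fe = np.asarray(fevals_per_run, dtype=float)
      ok = np.asarray(success_flags, dtype=bool)
      return fe.mean(), fe.std(ddof=1), ok.mean()

  # e.g., three hypothetical runs: two reached the optimum, one hit the FE budget
  print(summarize_runs([12040, 15000, 9830], [True, False, True]))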

In order to ensure the comparability of the experimental results, the parameter settings are kept as consistent as possible for the DMPSO, the PSO, the PSO-W, and the CLPSO. The settings are as follows: the population size and the acceleration coefficients are the same for all simulations. For three of the test functions, we set the maximum number of iterations to 500 (correspondingly, the maximum number of function evaluations to 15000); for the other three test functions, we set the maximum number of iterations to 2000 (correspondingly, the maximum number of function evaluations to 60000). In addition, the inertia weight of the CLPSO and the corresponding parameter of the DMPSO are set to fixed values. As for the PSO-W, a linearly decreasing inertia weight is used, which starts at 0.9 and ends at 0.4.

In our experiments, we choose the best fitness value (BFV), the worst fitness value (WFV), the mean value (Mean), and the number of function evaluations in the form of mean (MFEs) ± standard deviation (STDEV), together with the success rate of the algorithm in finding the global optimum, as the evaluation indicators of optimization ability for the four algorithms mentioned previously. These indicators reflect not only the optimization ability but also the computing cost. The experimental results are listed in Table 1.

In order to contrast the convergence characteristics of the four algorithms more easily, Figure 3 presents the convergence characteristics, in terms of the best fitness value of the mean run of each algorithm, for each test function.

Discussions. From the results listed in Table 1 and the convergence curves in Figure 3, we can see that (1) the DMPSO performs much better than the original PSO, the CLPSO, and the PSO-W and (2) the DMPSO is clearly superior to them in terms of global convergence, accuracy, and efficiency. We therefore conclude that the performance of the DMPSO is much better than that of the original PSO, the CLPSO, and the PSO-W.

5. Conclusions

This paper presents a novel particle swarm optimization with double flight modes, which we call the double flight modes particle swarm optimization (DMPSO). In this algorithm, each bird (particle) can use both the rotational flight mode and the non-rotational flight mode to fly while it forages for food in the search space. By using these flight skills, each bird (particle) greatly improves its search efficiency. From the experiments conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere, we conclude that the DMPSO not only has a marked advantage in global convergence but also can, to some extent, effectively avoid the premature convergence problem, and it is a good choice for solving complex and high-dimensional optimization problems, although it is not necessarily the best choice for every real-world optimization problem.

Acknowledgments

This work was supported by the key programs of the Institution of Higher Learning, Guangxi, China, under Grant (no. 201202ZD032), the Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis, the Natural Science Foundation of Guangxi, China, under Grant (no. 0832084), and the Natural Science Foundation of China under Grant (no. 61074185).