Abstract

Particle swarm optimization (PSO) is inspired by sociological behavior. In this paper, we interpret PSO as a finite difference scheme for solving a system of stochastic ordinary differential equations (SODE). In this framework, the position points of the swarm converge to an equilibrium point of the SODE and the local attractors, which are easily defined by the present position points, also converge to the global attractor. Inspired by this observation, we propose a class of modified PSO iteration methods (MPSO) based on local attractors of the SODE. The idea of MPSO is to choose the next update state near the present local attractor, rather than the present position point as in the original PSO, according to a given probability density function. In particular, the quantum-behaved particle swarm optimization method turns out to be a special case of MPSO by taking a special probability density function. The MPSO methods with six different probability density functions are tested on a few benchmark problems. These MPSO methods behave differently for different problems. Thus, our framework not only gives an interpretation for the ordinary PSO but also, more importantly, provides a warehouse of PSO-like methods to choose from for solving different practical problems.

1. Introduction

Inspired by sociological behavior associated with bird flocking, particle swarm optimization (PSO) was first introduced by Kennedy and Eberhart [1]. In a PSO, the individual particles of a swarm fly stochastically toward the positions of their own previous best performance and the previous best performance of the swarm. Researchers have been trying to devise new frameworks to interpret PSO in order to analyze its properties and to construct new PSO-like methods. For instance, in [2] PSO is interpreted as a difference scheme for a second-order ordinary differential equation. Fernández-Martínez et al. interpret the PSO algorithm as a stochastic damped mass-spring system, the so-called PSO continuous model, and present a theoretical analysis of PSO trajectories in [3]. Based on a continuous version of PSO, Fernández-Martínez and García-Gonzalo propose generalized PSO (GPSO) in [4] and introduce a delayed version of the PSO continuous model in [5]. Furthermore, Fernández-Martínez and García-Gonzalo give a stochastic stability analysis of PSO models in [6] and propose two novel algorithms, PP-GPSO and RR-GPSO, in [7]. PSO algorithms have been applied successfully to practical problems [8–10].

In [11], a so-called quantum-behaved particle swarm optimization (QPSO) is proposed based on an assumption that the individual particles in a PSO system have quantum behavior. A wide range of continuous optimization problems have been solved successfully by QPSO, and many efficient strategies have been proposed to improve the algorithm [11–20]. A global convergence analysis of QPSO is given by Sun et al. in [21].

In this paper, we interpret PSO as a finite difference scheme for solving a system of stochastic ordinary differential equations (SODE in short). In this framework, the convergent point of the position points in the PSO iteration process corresponds to an equilibrium point (a global attractor) of the SODE. We observe that the local attractors, which are easily computed by using the present position points, also converge to the global attractor in the PSO iteration process. Inspired by this observation, we propose a class of modified PSO iteration methods (MPSO in short) based on local attractors of the SODE. The idea of MPSO is to choose the next update state in a neighbourhood of the present local attractor, rather than the present position point as in the original PSO, according to a given probability density function. We will test the MPSO methods with six different probability density functions for solving a few benchmark problems. These MPSO methods behave differently for different problems. Thus, our framework not only gives an interpretation for the ordinary PSO but also, more importantly, provides a warehouse of PSO-like methods to choose from for solving different practical problems.

Our work is partly inspired by the second-order ordinary differential equation framework for PSO in [2, 3]. However, the solution of a second-order ordinary differential equation is more difficult to describe, and it seems harder to construct new PSO-like methods through that framework. Our SODE framework makes the job easier.

Our work is also inspired by the quantum-behaved particle swarm optimization (QPSO) method in [11]. It turns out that QPSO is a special case of our MPSO obtained by choosing a special probability density function. Fortunately, the convergence analysis for QPSO given in [21] remains valid for our MPSO methods. In fact, what matters in the convergence analysis is a suitable choice of the probability density function; the particular quantum behavior plays no role in it.

The rest of the paper is organized as follows. In Section 2 we interpret PSO as a finite difference scheme for an SODE and propose a class of MPSO methods. Then, in Section 3 we test, on some nonlinear benchmark functions, our MPSO methods with different probability density functions. Finally, some conclusions are gathered in Section 4.

2. Particle Swarm Optimization Algorithms

2.1. The Original PSO

Particle swarm optimization algorithm (PSO) was first introduced by Kennedy and Eberhart [1], inspired by sociological behavior associated with bird flocking. In a PSO with population size $M$, the velocity vector $V_i = (V_{i,1}, \ldots, V_{i,N})$ and the position vector $X_i = (X_{i,1}, \ldots, X_{i,N})$ of each particle $i$ are iteratively adjusted to minimize an objective function $f$ with $X_i$ as input value. At the $k$th iteration step, the $i$th particle updates its velocity vector and position vector according to
\[
V_{i,j}^{k+1} = V_{i,j}^{k} + c_1 r_1 \left( P_{i,j}^{k} - X_{i,j}^{k} \right) + c_2 r_2 \left( G_{j}^{k} - X_{i,j}^{k} \right), \tag{1}
\]
\[
X_{i,j}^{k+1} = X_{i,j}^{k} + V_{i,j}^{k+1} \tag{2}
\]
for $i = 1, \ldots, M$ and $j = 1, \ldots, N$, where $r_1$ and $r_2$ are random numbers uniformly distributed on $(0,1)$. The values of $r_1$ and $r_2$ are scaled by the constants $c_1$ and $c_2$, which are called acceleration coefficients. The vector $P_i^k$ denotes the personal best position (pbest in short), which is the position of particle $i$ giving the best objective function value so far, and the vector $G^k$ is the global best (gbest) position, which is the position of the best particle among all particles. They are updated according to
\[
P_{i}^{k+1} =
\begin{cases}
X_{i}^{k+1}, & f\left(X_{i}^{k+1}\right) < f\left(P_{i}^{k}\right),\\
P_{i}^{k}, & \text{otherwise},
\end{cases}
\qquad
G^{k+1} = \operatorname*{arg\,min}_{P_{i}^{k+1}} f\left(P_{i}^{k+1}\right). \tag{3}
\]
When the PSO system is convergent, all particles converge, as $k$ tends to infinity, to a global attractor $x^{*}$; that is, $\lim_{k \to \infty} X_{i}^{k} = x^{*}$ for all $i$, and their velocity vectors converge to zero.
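For concreteness, here is a minimal vectorized sketch of the update rules (1)-(3) in Python/NumPy; the function names pso_step and update_bests and the default coefficients $c_1 = c_2 = 2$ are our illustrative choices, not prescribed by [1].

```python
import numpy as np

rng = np.random.default_rng()

# One PSO iteration for an M-by-N swarm: X positions, V velocities,
# P pbest positions, g gbest position (rules (1)-(2)).
def pso_step(X, V, P, g, c1=2.0, c2=2.0):
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    V = V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # velocity update (1)
    X = X + V                                        # position update (2)
    return X, V

# pbest/gbest bookkeeping of rule (3), given the objective f.
def update_bests(f, X, P):
    improved = np.array([f(x) < f(p) for x, p in zip(X, P)])
    P[improved] = X[improved]
    g = P[np.argmin([f(p) for p in P])]
    return P, g
```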

2.2. Interpretation of PSO as a Finite Difference Scheme for an SODE

Let us rewrite the iteration formulas (1)-(2) for PSO as a system of difference equations:
\[
V_{i,j}^{k+1} = V_{i,j}^{k} + \varphi_{i,j}^{k} \left( p_{i,j}^{k} - X_{i,j}^{k} \right), \qquad X_{i,j}^{k+1} = X_{i,j}^{k} + V_{i,j}^{k+1} \tag{4}
\]
for $i = 1, \ldots, M$ and $j = 1, \ldots, N$, where
\[
\varphi_{i,j}^{k} = c_1 r_1 + c_2 r_2, \qquad p_{i,j}^{k} = \frac{c_1 r_1 P_{i,j}^{k} + c_2 r_2 G_{j}^{k}}{c_1 r_1 + c_2 r_2}. \tag{5}
\]
Generally but not necessarily, the constants $c_1$ and $c_2$ are set to be equal. For the sake of simple representation, we rewrite
\[
p_{i,j}^{k} = \phi_{i,j}^{k} P_{i,j}^{k} + \left( 1 - \phi_{i,j}^{k} \right) G_{j}^{k}, \tag{6}
\]
where $\phi_{i,j}^{k} = c_1 r_1 / (c_1 r_1 + c_2 r_2) \in (0,1)$. We regard (4) as a finite difference scheme with time step length $\Delta t = 1$ for solving the following system of stochastic ordinary differential equations (SODE):
\[
\frac{d V_{i,j}(t)}{d t} = \varphi_{i,j}(t) \left( p_{i,j}(t) - X_{i,j}(t) \right), \qquad \frac{d X_{i,j}(t)}{d t} = V_{i,j}(t), \tag{7}
\]
where $p_{i,j}(t) = \phi_{i,j}(t) P_{i,j}(t) + (1 - \phi_{i,j}(t)) G_{j}(t)$, $\varphi_{i,j}(t) = c_1 r_1(t) + c_2 r_2(t)$, $\phi_{i,j}(t) = c_1 r_1(t) / (c_1 r_1(t) + c_2 r_2(t))$, and $r_1(t)$ and $r_2(t)$ are random numbers uniformly distributed on $(0,1)$ for all real numbers $t$. Let $Y(t) = (X(t), V(t))^{T}$, where $X = (X_{1,1}, \ldots, X_{M,N})$, $V = (V_{1,1}, \ldots, V_{M,N})$, and so forth. Then, the SODE (7) is written as
\[
\frac{d Y(t)}{d t} = F(Y, t), \tag{8}
\]
where $F$ is a nonlinear function. The theoretical analysis of this SODE and its difference schemes is difficult, but it is not our concern here. For our purpose, we recall that usually, or at least very often, the solution of a difference scheme for an ODE converges, as the iteration time tends to infinity, to an equilibrium point $Y^{*}$ satisfying
\[
F(Y^{*}, t) = 0. \tag{9}
\]
Thus, the convergent point of the PSO iteration process corresponds to an equilibrium point of the SODE.

Usually, for an equilibrium point of the SODE (7) we have $V_{i}(t) = 0$ and
\[
X_{i}(t) = p_{i}(t) = x^{*} \tag{10}
\]
for some constant vector $x^{*}$. Let us call $p_{i}(t)$ the local attractor and $x^{*}$ the global attractor for the SODE at time $t$.
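Indeed, writing out the equilibrium condition (9) for system (7) coordinatewise gives
\[
\frac{d X_{i,j}}{d t} = V_{i,j} = 0, \qquad
\frac{d V_{i,j}}{d t} = \varphi_{i,j}(t) \left( p_{i,j}(t) - X_{i,j}(t) \right) = 0,
\]
and since $\varphi_{i,j}(t) = c_1 r_1(t) + c_2 r_2(t) > 0$, the second equation forces $X_{i,j}(t) = p_{i,j}(t)$; with $V = 0$ the position is constant in time, which yields the constant vector $x^{*}$ in (10).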

2.3. MPSO for Iteratively Decreasing $\| X_{i}^{k} - p_{i}^{k} \|$

Now, let us forget the original PSO and concentrate on the task of finding a global attractor (an equilibrium point) of the SODE (7). Based on (10), we see that finding an equilibrium point of the ODE system (7) is equivalent to making the position vector $X_{i}^{k}$ closer and closer to the local attractor $p_{i}^{k}$. Thus, we choose the next update state $X_{i}^{k+1}$ in a neighbourhood of the present local attractor, rather than the present position point as in the original PSO, according to a given probability density function (PDF).

Define the mbest position $\text{mbest}^{k}$ as the average of the pbest positions of all particles; that is,
\[
\text{mbest}^{k} = \frac{1}{M} \sum_{i=1}^{M} P_{i}^{k}, \tag{11}
\]
and define
\[
L_{i,j}^{k} = 2 \alpha \left| \text{mbest}_{j}^{k} - X_{i,j}^{k} \right|, \tag{12}
\]
where $\alpha$ is an adjustable parameter. We choose a PDF $f_{L}(x)$ using $L = L_{i,j}^{k}$ as a parameter indicating the “width” of the support of the function. Choose a random number $\xi \geq 0$ according to the PDF $f_{L}$. Then, the update formula of MPSO is as follows:
\[
X_{i,j}^{k+1} = \hat{p}_{i,j}^{k} \pm \xi, \tag{13}
\]
where the signs $+$ and $-$ are taken randomly with equal probability and $\hat{p}_{i,j}^{k} = \phi_{i,j}^{k} P_{i,j}^{k} + (1 - \phi_{i,j}^{k}) G_{j}^{k}$. The point $\hat{p}_{i}^{k}$, a discrete approximation of the local attractor $p_{i}(t)$, is called the discrete local attractor at the $k$th step. Therefore, $X_{i}^{k+1}$ is a point randomly chosen from a neighbourhood of the discrete local attractor $\hat{p}_{i}^{k}$.

Our MPSO algorithm is described in detail in Algorithm 1.

Choose a PDF $f_{L}$.
Initialize population position vectors $X_{i}^{0}$ and set $P_{i}^{0} = X_{i}^{0}$.
Do
For $i = 1$ to population size $M$
 If $f(X_{i}^{k}) < f(P_{i}^{k})$ Then $P_{i}^{k} = X_{i}^{k}$
End For
Set $G^{k} = \arg\min_{1 \leq i \leq M} f(P_{i}^{k})$
For $i = 1$ to population size $M$
For $j = 1$ to dimension $N$
 Choose $\phi_{i,j}^{k} \in (0,1)$ and compute $\hat{p}_{i,j}^{k}$ and $L_{i,j}^{k}$.
 Choose a random number $\xi \geq 0$,
 according to the PDF $f_{L}$ with $L = L_{i,j}^{k}$.
 Choose the sign randomly, $X_{i,j}^{k+1} = \hat{p}_{i,j}^{k} \pm \xi$.
 If $X_{i,j}^{k+1} > X_{\max}$ Then $X_{i,j}^{k+1} = X_{\max}$
 If $X_{i,j}^{k+1} < X_{\min}$ Then $X_{i,j}^{k+1} = X_{\min}$
End For
End For
Repeat the iteration until all the increments are small enough
or some other termination criterion is satisfied.
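As a concrete illustration, the following Python sketch implements one possible instantiation of Algorithm 1, assuming the double-exponential PDF $f_{1}(x) = (1/L) e^{-2|x|/L}$ of the QPSO case in Section 2.4, a uniformly distributed $\phi$, and clamping to the search range $[X_{\min}, X_{\max}]$; the function name mpso and the default parameter values are illustrative choices, not prescribed by the analysis above.

```python
import numpy as np

rng = np.random.default_rng()

def mpso(f, dim, pop=20, iters=1000, alpha=0.75, bounds=(-100.0, 100.0)):
    """One possible instantiation of Algorithm 1 with an f_1 (QPSO-type) PDF."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))        # initial positions X_i^0
    P = X.copy()                               # pbest positions P_i^0 = X_i^0
    Pval = np.array([f(x) for x in X])         # pbest objective values
    for _ in range(iters):
        G = P[np.argmin(Pval)]                 # gbest position G^k
        mbest = P.mean(axis=0)                 # mbest position, (11)
        L = 2.0 * alpha * np.abs(mbest - X)    # width parameter, (12)
        phi = rng.random((pop, dim))           # convex-combination weights
        p_hat = phi * P + (1.0 - phi) * G      # discrete local attractors
        u = 1.0 - rng.random((pop, dim))       # u in (0, 1]
        xi = 0.5 * L * np.log(1.0 / u)         # xi sampled from the f_1 density
        sign = np.where(rng.random((pop, dim)) < 0.5, 1.0, -1.0)
        X = np.clip(p_hat + sign * xi, lo, hi) # MPSO update (13), clamped to bounds
        fx = np.array([f(x) for x in X])
        better = fx < Pval                     # pbest update
        P[better], Pval[better] = X[better], fx[better]
    return P[np.argmin(Pval)], Pval.min()

# usage: minimize the 10-dimensional Sphere function
best_x, best_val = mpso(lambda x: np.sum(x**2), dim=10)
```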

2.3.1. MPSO with Different PDFs

As examples, we give six different PDFs $f_{1}, f_{2}, \ldots, f_{6}$ and depict them, for a fixed value of $L$, in Figures 1, 2, 3, 4, 5, and 6.

Now, let us demonstrate how to choose a random number $\xi$ according to the probability density function $f_{L}$. First, we obtain the corresponding probability distribution function by
\[
F(x) = \int_{-\infty}^{x} f_{L}(s)\, ds. \tag{14}
\]
Then, for a randomly given number $u \in (0,1)$, we solve the equation
\[
F(\xi) = u \tag{15}
\]
to get
\[
\xi = F^{-1}(u). \tag{16}
\]
Therefore, the update formulas corresponding to $f_{1}, \ldots, f_{6}$ can be written as follows:
\[
X = \hat{p} \pm F^{-1}(u), \tag{17}
\]
where $u$ is a random number uniformly distributed on $(0,1)$, the signs $+$ and $-$ are taken randomly with equal probability, and $F^{-1}$ is the inverse distribution function of the respective PDF. Here the superscripts and subscripts of $X$ and $\hat{p}$ have been removed for the sake of clear representation. For instance, in the case of $f_{1}$, the precise formula is
\[
X = \hat{p} \pm \frac{L}{2} \ln \frac{1}{u}. \tag{18}
\]
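To make the inverse-transform step concrete, here is a minimal Python sketch for the one-sided exponential density associated with $f_{1}$ (an assumption consistent with Section 2.4); the names sample_xi and coordinate_update are ours.

```python
import numpy as np

rng = np.random.default_rng()

# One-sided density g(xi) = (2/L) exp(-2 xi / L) for xi >= 0, with CDF
# G(xi) = 1 - exp(-2 xi / L).  Solving G(xi) = u gives
# xi = -(L/2) ln(1 - u); since 1 - u is also uniform on (0,1),
# xi = (L/2) ln(1/u) is an equivalent sample, matching (18).
def sample_xi(L):
    u = 1.0 - rng.random()          # u in (0, 1]
    return 0.5 * L * np.log(1.0 / u)

# Update (13) for a single coordinate: X = p_hat +/- xi with a random sign.
def coordinate_update(p_hat, L):
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return p_hat + sign * sample_xi(L)
```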

2.4. Quantum-Behaved Particle Swarm Optimization

In QPSO proposed by Sun et al. [11], the quantum state of a particle is depicted by a wave function $\psi(x, t)$, which is the solution of the Schrödinger equation
\[
i \hbar \frac{\partial}{\partial t} \psi(x, t) = \hat{H} \psi(x, t), \tag{19}
\]
where the Hamiltonian operator $\hat{H}$ is
\[
\hat{H} = -\frac{\hbar^{2}}{2m} \nabla^{2} + V(x). \tag{20}
\]
Note that the Schrödinger equation is a second-order partial differential equation. The squared modulus $|\psi|^{2}$ is chosen to be the PDF for the present position of the particle. In particular, $f_{1}$ and $f_{2}$ defined above are two such PDFs mentioned in [11]. Thus, QPSO can be regarded as a special case of MPSO. But the other four PDFs $f_{3}$ to $f_{6}$ are not related to the Schrödinger equation.

Let us elaborate a little bit on QPSO. In [11], Sun et al. assume that, at the $k$th iteration, on the $j$th dimension ($1 \leq j \leq N$) of the search space, particle $i$ moves in a potential well centered at $\hat{p}_{i,j}^{k}$, which is the $j$th dimension coordinate of its local attractor.

Let $Y_{i,j}^{k} = X_{i,j}^{k} - \hat{p}_{i,j}^{k}$. Solving the Schrödinger equation for this potential well, they obtained the PDF
\[
Q(Y) = \frac{1}{L} e^{-2|Y|/L}, \tag{21}
\]
where the characteristic length $L$ in Sun et al. [17] is determined by $L = 2\alpha |\hat{p}_{i,j}^{k} - X_{i,j}^{k}|$ or, as in (12), by $L = 2\alpha |\text{mbest}_{j}^{k} - X_{i,j}^{k}|$, where $\text{mbest}^{k}$ is the mbest position. Finally, Sun et al. gave a global convergence analysis of QPSO in [21], which employed certain properties of the PDF but had nothing to do with the particular Schrödinger equation. Actually, all six PDFs given in Section 2.3.1 possess these properties, and hence the global convergence analysis applies to our MPSO methods.
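Written out, the computation behind (21) and the sampling formula (18), as carried out in the QPSO literature [11, 17] and stated here in our notation, is
\[
\psi(Y) = \frac{1}{\sqrt{L}}\, e^{-|Y|/L}, \qquad
Q(Y) = |\psi(Y)|^{2} = \frac{1}{L}\, e^{-2|Y|/L},
\]
so that $|Y|$ is exponentially distributed with tail probability
\[
P(|Y| > y) = \int_{y}^{\infty} \frac{2}{L}\, e^{-2s/L}\, ds = e^{-2y/L}.
\]
Equating this tail probability to a uniform random number $u$ and inverting gives $|Y| = (L/2) \ln(1/u)$, that is, the QPSO update $X = \hat{p} \pm (L/2) \ln(1/u)$ of (18).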

3. Numerical Simulation

In this section, we use MPSO with the six different PDFs mentioned in Section 2 to test five nonlinear benchmark functions used in [11]. The first function is the Sphere function described by
\[
F_{1}(x) = \sum_{i=1}^{n} x_{i}^{2}. \tag{22}
\]
The second function is the Rosenbrock function described by
\[
F_{2}(x) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_{i}^{2} \right)^{2} + \left( x_{i} - 1 \right)^{2} \right]. \tag{23}
\]
The third function is the generalized Rastrigin function described by
\[
F_{3}(x) = \sum_{i=1}^{n} \left[ x_{i}^{2} - 10 \cos\left( 2 \pi x_{i} \right) + 10 \right]. \tag{24}
\]
The fourth function is the generalized Griewank function described by
\[
F_{4}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_{i}^{2} - \prod_{i=1}^{n} \cos\left( \frac{x_{i}}{\sqrt{i}} \right) + 1. \tag{25}
\]
The last function is the Schaffer function described by
\[
F_{5}(x_{1}, x_{2}) = 0.5 + \frac{\sin^{2}\left( \sqrt{x_{1}^{2} + x_{2}^{2}} \right) - 0.5}{\left( 1 + 0.001 \left( x_{1}^{2} + x_{2}^{2} \right) \right)^{2}}, \tag{26}
\]
where $x = (x_{1}, \ldots, x_{n})$ is an $n$-dimensional real-valued vector. The initialization and search ranges of the five functions used in [11] are listed in Table 1.
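For reference, here are direct NumPy implementations of $F_{1}$ through $F_{5}$ as written above (the function names are ours):

```python
import numpy as np

def sphere(x):        # F1, (22)
    return np.sum(x**2)

def rosenbrock(x):    # F2, (23)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def rastrigin(x):     # F3, (24)
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griewank(x):      # F4, (25)
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def schaffer(x):      # F5, (26); two-dimensional
    r2 = x[0]**2 + x[1]**2
    return 0.5 + (np.sin(np.sqrt(r2))**2 - 0.5) / (1.0 + 0.001 * r2)**2
```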

As in [11], different population sizes $M$ (20, 40, and 80) are used for each function to investigate the scalability. The maximum number of generations is set to 1000, 1500, and 2000, corresponding to the dimensions 10, 20, and 30 of the first four functions, respectively. The mean best fitness values and standard deviations, out of a total of 50 runs of MPSO with the different PDFs on $F_{1}$ to $F_{5}$, are shown in Tables 2, 3, 4, 5, and 6. In the tables, $\alpha$ is the parameter in (12), and $\alpha: 1 \to 0.5$ means that $\alpha$ decreases linearly from 1 to 0.5. The boldface results are obtained by performing the QPSO algorithm in [11], which is equivalent to MPSO with the PDF $f_{1}$. An efficiency comparison of MPSOs with different PDFs on $F_{1}$ to $F_{5}$ is presented in Table 7.

4. Conclusion

Particle swarm optimization (PSO) algorithm is interpreted as a finite difference scheme for solving a system of stochastic ordinary differential equations (SODE in short). It is illustrated that the position points of the swarm and the local attractors, which are easily defined by the present position points, all converge to a global attractor of the SODE. A class of modified PSO iteration methods (MPSO in short) based on local attractors of the SODE is proposed such that the next update state is chosen near the present local attractor, rather than the present position point as in the original PSO, according to a given probability density function. In particular, the quantum-behaved particle swarm optimization method turns out to be a special case of MPSO by taking a special probability density function. The MPSO methods with six different probability density functions are tested on a few benchmark problems. These MPSO methods behave differently for different problems. Thus, our framework not only gives an interpretation for the ordinary PSO but also, more importantly, provides a warehouse of PSO-like methods to choose from for solving different practical problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (11171367) and the Fundamental Research Funds for the Central Universities of China (2662013BQ049, 2662014QC011).