Discrete Dynamics in Nature and Society


Research Article | Open Access

Volume 2014 | Article ID 628357 | 10 pages | https://doi.org/10.1155/2014/628357

Particle Swarm Optimization Based on Local Attractors of Ordinary Differential Equation System

Academic Editor: Manuel De la Sen
Received: 24 Apr 2014
Revised: 08 Aug 2014
Accepted: 15 Aug 2014
Published: 26 Aug 2014

Abstract

Particle swarm optimization (PSO) is inspired by sociological behavior. In this paper, we interpret PSO as a finite difference scheme for solving a system of stochastic ordinary differential equations (SODE). In this framework, the position points of the swarm converge to an equilibrium point of the SODE and the local attractors, which are easily defined by the present position points, also converge to the global attractor. Inspired by this observation, we propose a class of modified PSO iteration methods (MPSO) based on local attractors of the SODE. The idea of MPSO is to choose the next update state near the present local attractor, rather than the present position point as in the original PSO, according to a given probability density function. In particular, the quantum-behaved particle swarm optimization method turns out to be a special case of MPSO by taking a special probability density function. The MPSO methods with six different probability density functions are tested on a few benchmark problems. These MPSO methods behave differently for different problems. Thus, our framework not only gives an interpretation for the ordinary PSO but also, more importantly, provides a warehouse of PSO-like methods to choose from for solving different practical problems.

1. Introduction

Inspired by the sociological behavior associated with bird flocking, particle swarm optimization (PSO) was first introduced by Kennedy and Eberhart [1]. In a PSO, the individual particles of a swarm fly stochastically toward the positions of their own previous best performance and the best previous performance of the swarm. Researchers have been trying to devise new frameworks to interpret PSO, in order to analyze its properties and to construct new PSO-like methods. For instance, in [2] PSO is interpreted as a difference scheme for a second-order ordinary differential equation. Fernández-Martínez et al. interpret the PSO algorithm as a stochastic damped mass-spring system (the so-called PSO continuous model) and present a theoretical analysis of PSO trajectories in [3]. Based on a continuous version of PSO, Fernández-Martínez and García-Gonzalo propose the generalized PSO (GPSO) in [4] and introduce a delayed version of the PSO continuous model in [5]. Furthermore, Fernández-Martínez and García-Gonzalo give a stochastic stability analysis of PSO models in [6] and propose two novel algorithms, PP-GPSO and RR-GPSO, in [7]. PSO algorithms have been applied successfully to practical problems [8–10].

In [11], a so-called quantum-behaved particle swarm optimization (QPSO) is proposed based on the assumption that the individual particles in a PSO system have quantum behavior. A wide range of continuous optimization problems have been solved successfully by QPSO, and many efficient strategies have been proposed to improve the algorithm [11–20]. A global convergence analysis of QPSO is given by Sun et al. in [21].

In this paper, we interpret PSO as a finite difference scheme for solving a system of stochastic ordinary differential equations (SODE for short). In this framework, the convergent point of the position points in the PSO iteration process corresponds to an equilibrium point (a global attractor) of the SODE. We observe that the local attractors, which are easily computed from the present position points, also converge to the global attractor in the PSO iteration process. Inspired by this observation, we propose a class of modified PSO iteration methods (MPSO for short) based on local attractors of the SODE. The idea of MPSO is to choose the next update state in a neighbourhood of the present local attractor, rather than of the present position point as in the original PSO, according to a given probability density function. We will test the MPSO methods with six different probability density functions on a few benchmark problems. These MPSO methods behave differently on different problems. Thus, our framework not only gives an interpretation of the ordinary PSO but also, more importantly, provides a warehouse of PSO-like methods to choose from when solving different practical problems.

Our work is partly inspired by the second-order ordinary differential equation framework for PSO in [2, 3]. However, the solution of a second-order ordinary differential equation is harder to describe, and it seems more difficult to construct new PSO-like methods through that framework. Our ordinary differential equation framework makes the job easier.

Our work is also inspired by the quantum-behaved particle swarm optimization (QPSO) method in [11]. It turns out that QPSO is a special case of our MPSO, obtained by choosing a special probability density function. Fortunately, the convergence analysis for QPSO given in [21] remains valid for our MPSO methods. In fact, what matters in the convergence analysis is a suitable choice of the probability density function; the particular quantum behavior plays no role in it.

The rest of the paper is organized as follows. In Section 2 we interpret PSO as a system of ODEs and propose a class of MPSO methods. Then, in Section 3 we test our MPSO methods with different probability density functions on some nonlinear benchmark functions. Finally, some conclusions are gathered in Section 4.

2. Particle Swarm Optimization Algorithms

2.1. The Original PSO

Particle swarm optimization (PSO) was first introduced by Kennedy and Eberhart [1], inspired by the sociological behavior associated with bird flocking. In a PSO with population size $M$, the velocity vector $V_{i}^{n}=(V_{i1}^{n},\ldots,V_{iN}^{n})$ and the position vector $X_{i}^{n}=(X_{i1}^{n},\ldots,X_{iN}^{n})$ of each particle $i$ are iteratively adjusted to minimize an objective function $f$ with $X_{i}^{n}$ as input value. At the $n$th iteration step, particle $i$ updates its velocity vector and position vector according to
$$V_{ij}^{n+1}=V_{ij}^{n}+c_{1}r_{ij}^{n}\bigl(P_{ij}^{n}-X_{ij}^{n}\bigr)+c_{2}R_{ij}^{n}\bigl(G_{j}^{n}-X_{ij}^{n}\bigr),\qquad X_{ij}^{n+1}=X_{ij}^{n}+V_{ij}^{n+1}\tag{1}$$
for $i=1,\ldots,M$ and $j=1,\ldots,N$, where $r_{ij}^{n}$ and $R_{ij}^{n}$ are random numbers uniformly distributed on $(0,1)$. The values of $r_{ij}^{n}$ and $R_{ij}^{n}$ are scaled by the constants $c_{1}$ and $c_{2}$, which are called acceleration coefficients. The vector $P_{i}^{n}$ denotes the personal best position (pbest for short), which is the position of particle $i$ giving the best objective function value so far, and the vector $G^{n}$ is the global best (gbest) position, which is the position of the best particle among all particles. They are updated according to
$$P_{i}^{n+1}=\begin{cases}X_{i}^{n+1}, & f(X_{i}^{n+1})<f(P_{i}^{n}),\\ P_{i}^{n}, & \text{otherwise},\end{cases}\tag{2}$$
$$G^{n+1}=\operatorname*{arg\,min}_{P_{i}^{n+1}}f\bigl(P_{i}^{n+1}\bigr).\tag{3}$$
When the PSO system is convergent, all particles converge, as $n$ tends to infinity, to a global attractor $p^{*}$; that is, $\lim_{n\to\infty}X_{i}^{n}=p^{*}$ for all $i$, and their velocity vectors converge to zero.
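For concreteness, the following is a minimal Python sketch of the update rules (1)–(3); the function name pso, the parameter values (e.g., $c_1=c_2=2.0$), and the search bounds are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def pso(f, dim, pop=20, c1=2.0, c2=2.0, bounds=(-100.0, 100.0), max_iter=1000):
    """Plain PSO following (1)-(3); parameter values are illustrative."""
    rng = np.random.default_rng()
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))   # positions X_i
    V = np.zeros((pop, dim))              # velocities V_i
    P = X.copy()                          # pbest positions P_i
    fP = np.array([f(x) for x in X])      # pbest fitness values
    G = P[fP.argmin()].copy()             # gbest position G
    for _ in range(max_iter):
        r, R = rng.random((pop, dim)), rng.random((pop, dim))
        V = V + c1 * r * (P - X) + c2 * R * (G - X)   # velocity update (1)
        X = X + V                                      # position update (1)
        fX = np.array([f(x) for x in X])
        better = fX < fP                               # pbest update (2)
        P[better], fP[better] = X[better], fX[better]
        G = P[fP.argmin()].copy()                      # gbest update (3)
    return G, float(fP.min())

# Example: minimize the 10-dimensional Sphere function.
best_x, best_f = pso(lambda x: float(np.sum(x * x)), dim=10)
```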

2.2. Interpretation of PSO as a Finite Difference Scheme for an SODE

Let us rewrite the iteration formula (1) for PSO as a system of difference equations:
$$\frac{V_{ij}^{n+1}-V_{ij}^{n}}{\Delta t}=c_{1}r_{ij}^{n}\bigl(P_{ij}^{n}-X_{ij}^{n}\bigr)+c_{2}R_{ij}^{n}\bigl(G_{j}^{n}-X_{ij}^{n}\bigr),\qquad \frac{X_{ij}^{n+1}-X_{ij}^{n}}{\Delta t}=V_{ij}^{n+1}\tag{4}$$
for $i=1,\ldots,M$ and $j=1,\ldots,N$, where $\Delta t=1$. Generally but not necessarily, the constants $c_{1}$ and $c_{2}$ are set to be equal. For the sake of simple representation, we rewrite
$$c_{1}r_{ij}^{n}\bigl(P_{ij}^{n}-X_{ij}^{n}\bigr)+c_{2}R_{ij}^{n}\bigl(G_{j}^{n}-X_{ij}^{n}\bigr)=\varphi_{ij}^{n}\bigl(L_{ij}^{n}-X_{ij}^{n}\bigr),\tag{5}$$
where $\varphi_{ij}^{n}=c_{1}r_{ij}^{n}+c_{2}R_{ij}^{n}$ and
$$L_{ij}^{n}=\frac{c_{1}r_{ij}^{n}P_{ij}^{n}+c_{2}R_{ij}^{n}G_{j}^{n}}{c_{1}r_{ij}^{n}+c_{2}R_{ij}^{n}}.\tag{6}$$
We regard (4) as a finite difference scheme with time step length $\Delta t$ for solving the following system of stochastic ordinary differential equations (SODE):
$$\frac{dV_{ij}(t)}{dt}=\varphi_{ij}(t)\bigl(L_{ij}(t)-X_{ij}(t)\bigr),\qquad \frac{dX_{ij}(t)}{dt}=V_{ij}(t),\tag{7}$$
where $\varphi_{ij}(t)=c_{1}r_{ij}(t)+c_{2}R_{ij}(t)$, and $r_{ij}(t)$ and $R_{ij}(t)$ are random numbers uniformly distributed on $(0,1)$ for all real numbers $t$. Let $Y(t)=\bigl(V_{11}(t),\ldots,V_{MN}(t),X_{11}(t),\ldots,X_{MN}(t)\bigr)^{T}$. Then, the SODE (7) is written as
$$\frac{dY(t)}{dt}=F\bigl(Y(t)\bigr),\tag{8}$$
where $F$ is a nonlinear function. The theoretical analysis of this SODE and its difference schemes is difficult, but it is not our concern here. For our purpose, we recall that usually, or at least very often, the solution of a difference scheme for an ODE converges, as the iteration time tends to infinity, to an equilibrium point $Y^{*}$ satisfying
$$F(Y^{*})=0.\tag{9}$$
Thus, the convergent point of the PSO iteration process corresponds to an equilibrium point of the SODE.

Usually, for an equilibrium point of the SODE (7) we have
$$V_{ij}(t)=0,\qquad X_{ij}(t)=L_{ij}(t)=p_{j}^{*}\tag{10}$$
for some constant vector $p^{*}=(p_{1}^{*},\ldots,p_{N}^{*})$. Let us call $L_{i}(t)$ the local attractor for the SODE at time $t$ and $p^{*}$ the global attractor.

2.3. MPSO for Iteratively Decreasing the Distance to the Local Attractor

Now, let us forget the original PSO and concentrate on the task of finding a global attractor (an equilibrium point) of the SODE (7). Based on (10), we see that finding an equilibrium point of the ODE system (7) is equivalent to making the position vector $X_{i}$ closer and closer to the local attractor $L_{i}$. Thus, we choose the next update state in a neighbourhood of the present local attractor, rather than of the present position point as in the original PSO, according to a given probability density function (PDF).

Define
$$b_{ij}^{n}=\alpha\,\bigl\lvert m_{j}^{n}-X_{ij}^{n}\bigr\rvert,\tag{11}$$
where $\alpha$ is an adjustable parameter and $m^{n}$ is the mbest position defined by the average of the pbest positions of all particles; that is, $m^{n}=\frac{1}{M}\sum_{i=1}^{M}P_{i}^{n}$. We choose a PDF $g_{b}(\xi)$ using $b$ as a parameter indicating the “width” of the support of the function. Choose a random number $\xi$ according to the PDF $g_{b}(\xi)$. Then, the update formula of MPSO is as follows:
$$X_{ij}^{n+1}=l_{ij}^{n}\pm\xi,\tag{12}$$
where the signs + and − are taken randomly with equal probability and
$$l_{ij}^{n}=\frac{c_{1}r_{ij}^{n}P_{ij}^{n}+c_{2}R_{ij}^{n}G_{j}^{n}}{c_{1}r_{ij}^{n}+c_{2}R_{ij}^{n}}.$$
The quantity $l_{ij}^{n}$, a discrete approximation of the local attractor $L_{ij}(t)$, is called the discrete local attractor at the $n$th step. Therefore, $X_{ij}^{n+1}$ is a point randomly chosen from a neighborhood of the discrete local attractor $l_{ij}^{n}$.

Our MPSO algorithm is described in detail in Algorithm 1.

Choose a PDF g_b(ξ).
Initialize the population position vectors X_i and set P_i = X_i.
Do
For i = 1 to population size M
 If f(X_i) < f(P_i) Then P_i = X_i
End For
Set G = argmin_{P_i} f(P_i) and m = (1/M) Σ_{i=1}^{M} P_i.
For i = 1 to population size M
 For j = 1 to dimension N
  Choose r, R uniformly on (0, 1) and compute l_ij = (c_1 r P_ij + c_2 R G_j)/(c_1 r + c_2 R).
  Choose a random number ξ ≥ 0,
  according to the PDF g_b(ξ) with b = α|m_j − X_ij|.
  Choose u uniformly on (0, 1).
  If u < 0.5 Then X_ij = l_ij + ξ
  If u ≥ 0.5 Then X_ij = l_ij − ξ
 End For
End For
Repeat the iteration until all the increments are small enough
or some other termination criterion is satisfied.
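The following is a minimal Python sketch of Algorithm 1. The Laplace-type sampler stands in for the PDF $g_b$ (the QPSO choice discussed below), and $\alpha$ is held constant; both are assumptions made for illustration rather than requirements of the method.

```python
import numpy as np

def mpso(f, dim, pop=20, c1=2.0, c2=2.0, alpha=0.75,
         bounds=(-100.0, 100.0), max_iter=1000):
    """Sketch of MPSO (Algorithm 1); g_b is taken as a Laplace-type PDF."""
    rng = np.random.default_rng()
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))       # positions
    P = X.copy()                              # pbest positions
    fP = np.array([f(x) for x in X])          # pbest fitness values
    for _ in range(max_iter):
        G = P[fP.argmin()]                    # gbest position
        m = P.mean(axis=0)                    # mbest: average of all pbests
        r, R = rng.random((pop, dim)), rng.random((pop, dim))
        l = (c1 * r * P + c2 * R * G) / (c1 * r + c2 * R)  # discrete local attractor (6)
        b = alpha * np.abs(m - X)             # width parameter b (11)
        u = rng.random((pop, dim))
        xi = 0.5 * b * np.log(1.0 / u)        # magnitude sampled from g_b (QPSO-type)
        sign = rng.choice([-1.0, 1.0], size=(pop, dim))
        X = l + sign * xi                     # MPSO update (12)
        fX = np.array([f(x) for x in X])
        better = fX < fP                      # pbest update
        P[better], fP[better] = X[better], fX[better]
    return P[fP.argmin()], float(fP.min())
```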

2.3.1. MPSO with Different PDFs

As examples, we give six different PDFs $g_{b}^{1}(\xi),\ldots,g_{b}^{6}(\xi)$ and depict them for $b=1$ in Figures 1, 2, 3, 4, 5, and 6.

Now, let us demonstrate how to choose a random number $\xi$ according to the probability density function $g_{b}(\xi)$. First, we obtain the corresponding probability distribution function by
$$\Phi_{b}(\xi)=\int_{-\infty}^{\xi}g_{b}(s)\,ds.$$
Then, for a randomly given number $u\in(0,1)$, we solve the equation $\Phi_{b}(\xi)=u$ to get $\xi=\Phi_{b}^{-1}(u)$. Therefore, the update formulas corresponding to $g_{b}^{1},\ldots,g_{b}^{6}$ can be obtained in closed form, where $u$ is a random number uniformly distributed on $(0,1)$ and $b=\alpha\lvert m_{j}^{n}-X_{ij}^{n}\rvert$. Here the superscripts and subscripts of $X$ and $l$ have been removed for the sake of clear representation. For instance, in the case of the Laplace-type PDF $g_{b}(\xi)=\frac{1}{b}e^{-2\lvert\xi\rvert/b}$ used in QPSO, the precise formula is
$$X=l\pm\frac{b}{2}\ln\frac{1}{u}.$$
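This recipe is the classical inverse transform sampling. As a worked instance, assume the Laplace-type density $g_{b}(\xi)=\frac{1}{b}e^{-2\lvert\xi\rvert/b}$ from the example above; then the distribution function and its inverse are
$$\Phi_{b}(\xi)=\begin{cases}\dfrac{1}{2}e^{2\xi/b}, & \xi\le 0,\\[4pt] 1-\dfrac{1}{2}e^{-2\xi/b}, & \xi\ge 0,\end{cases}\qquad \Phi_{b}^{-1}(u)=\begin{cases}\dfrac{b}{2}\ln(2u), & 0<u\le\dfrac{1}{2},\\[4pt] -\dfrac{b}{2}\ln\bigl(2(1-u)\bigr), & \dfrac{1}{2}<u<1.\end{cases}$$
Since the sign in (12) is taken randomly anyway, it suffices to sample the magnitude $\xi=\frac{b}{2}\ln(1/u)$ with $u$ uniform on $(0,1)$, which recovers the QPSO-type update above.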

2.4. Quantum-Behaved Particle Swarm Optimization

In QPSO, proposed by Sun et al. [11], the quantum state of a particle is depicted by a wave function $\psi(x,t)$, which is the solution of the Schrödinger equation
$$i\hbar\frac{\partial\psi}{\partial t}=\hat{H}\psi,$$
where the Hamiltonian operator is
$$\hat{H}=-\frac{\hbar^{2}}{2m}\nabla^{2}+V(x).$$
Note that the Schrödinger equation is a second-order partial differential equation. The squared modulus $\lvert\psi\rvert^{2}$ is chosen to be the PDF for the present position of the particle. In particular, two of the six PDFs defined above are PDFs of this kind mentioned in [11]. Thus, QPSO can be regarded as a special case of MPSO. But the other four PDFs are not related to the Schrödinger equation.

Let us elaborate a little bit on QPSO. In [11], Sun et al. assume that, at the $n$th iteration, on the $j$th dimension ($j=1,\ldots,N$) of the search space, particle $i$ moves in a potential well centered at $l_{ij}^{n}$, which is the $j$th coordinate of its local attractor.

Let $\xi=X_{ij}-l_{ij}^{n}$; then they obtained the PDF
$$Q(\xi)=\frac{1}{L}e^{-2\lvert\xi\rvert/L},$$
where the characteristic length $L$ in Sun et al. [17] is determined by $L=2\beta\,\lvert l_{ij}^{n}-X_{ij}^{n}\rvert$ or $L=2\beta\,\lvert m_{j}^{n}-X_{ij}^{n}\rvert$, where $m^{n}$ is the mbest position. Finally, Sun et al. gave a global convergence analysis of QPSO in [21], which employed certain properties of the PDF but had nothing to do with the particular Schrödinger equation. Actually, all six PDFs given in Section 2.3.1 possess these properties, and hence the global convergence analysis applies.

3. Numerical Simulation

In this section, we use MPSO with the six different PDFs mentioned in Section 2 to test five nonlinear benchmark functions used in [11]. The first function is the Sphere function described by
$$f_{1}(x)=\sum_{i=1}^{n}x_{i}^{2}.$$
The second function is the Rosenbrock function described by
$$f_{2}(x)=\sum_{i=1}^{n-1}\Bigl(100\bigl(x_{i+1}-x_{i}^{2}\bigr)^{2}+\bigl(x_{i}-1\bigr)^{2}\Bigr).$$
The third function is the generalized Rastrigin function described by
$$f_{3}(x)=\sum_{i=1}^{n}\Bigl(x_{i}^{2}-10\cos\bigl(2\pi x_{i}\bigr)+10\Bigr).$$
The fourth function is the generalized Griewank function described by
$$f_{4}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_{i}^{2}-\prod_{i=1}^{n}\cos\Bigl(\frac{x_{i}}{\sqrt{i}}\Bigr)+1.$$
The last function is the Schaffer function described by
$$f_{5}(x)=0.5+\frac{\sin^{2}\!\sqrt{x_{1}^{2}+x_{2}^{2}}-0.5}{\bigl(1+0.001\bigl(x_{1}^{2}+x_{2}^{2}\bigr)\bigr)^{2}},$$
where $x$ is an $n$-dimensional real-valued vector. The initialization and search ranges of the five functions used in [11] are listed in Table 1.


Table 1: Initialization and search ranges of the benchmark functions (as used in [11]).

Function                 Asymmetric initialization range    Search range

Sphere                   (50, 100)^n                        (−100, 100)^n
Rosenbrock               (15, 30)^n                         (−100, 100)^n
Generalized Rastrigin    (2.56, 5.12)^n                     (−10, 10)^n
Generalized Griewank     (300, 600)^n                       (−600, 600)^n
Schaffer                 (30, 100)^2                        (−100, 100)^2
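For reference, a Python sketch of the five benchmark functions follows; these are the standard textbook forms, which we assume coincide with those used in [11].

```python
import numpy as np

def sphere(x):      # f1
    return float(np.sum(x**2))

def rosenbrock(x):  # f2
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2))

def rastrigin(x):   # f3
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):    # f4
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)

def schaffer(x):    # f5 (two-dimensional)
    s = x[0]**2 + x[1]**2
    return float(0.5 + (np.sin(np.sqrt(s))**2 - 0.5) / (1.0 + 0.001 * s)**2)
```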

As in [11], different population sizes (20, 40, and 80) are used for each function to investigate the scalability. The maximum number of generations is set to 1000, 1500, and 2000, corresponding to the dimensions 10, 20, and 30 for the first four functions, respectively. The mean best fitness values and standard deviations, out of a total of 50 runs of MPSO with different PDFs on $f_{1}$ to $f_{5}$, are shown in Tables 2, 3, 4, 5, and 6. In the tables, $\alpha$ is the parameter in (11), and $\alpha\colon 1\to 0.5$ means that $\alpha$ decreases linearly from 1 to 0.5. The boldface results are obtained by performing the QPSO algorithm in [11], which is equivalent to MPSO with the corresponding PDF. An efficiency comparison of MPSOs with different PDFs on $f_{1}$ to $f_{5}$ is presented in Table 7.
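A minimal driver reproducing this protocol might look as follows, reusing the hypothetical mpso and sphere sketches given earlier; the population sizes, dimensions, and generation counts follow the text.

```python
import numpy as np

# (dimension, maximum number of generations), as paired in the text
settings = [(10, 1000), (20, 1500), (30, 2000)]
for pop in (20, 40, 80):                  # population sizes
    for dim, gmax in settings:
        fits = [mpso(sphere, dim, pop=pop, max_iter=gmax)[1] for _ in range(50)]
        print(pop, dim, gmax, np.mean(fits), np.std(fits))
```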


Table 2: Mean best fitness values and standard deviations of MPSO with different PDFs on f_1 (Sphere function); M is the population size.

PDF      Dim.  Gmax    M = 20                     M = 40                     M = 80
                       Mean best     St. dev.     Mean best     St. dev.     Mean best     St. dev.

g_b^1    10    1000
         20    1500
         30    2000

g_b^2    10    1000
         20    1500
         30    2000

g_b^3    10    1000
         20    1500
         30    2000

g_b^4    10    1000
         20    1500
         30    2000

g_b^5    10    1000    1.18E−27      6.19E−27     8.47E−40      5.82E−39     8.09E−53      5.23E−52
         20    1500    1.16E−14      4.48E−14     1.47E−23      7.46E−23     1.04E−31      3.46E−31
         30    2000    7.16E−09      4.01E−08     2.73E−16      6.02E−16     9.68E−22      3.07E−21

g_b^6    10    1000
         20    1500
         30    2000


Table 3: Mean best fitness values and standard deviations of MPSO with different PDFs on f_2 (Rosenbrock function); M is the population size.

PDF      Dim.  Gmax    M = 20                     M = 40                     M = 80
                       Mean best     St. dev.     Mean best     St. dev.     Mean best     St. dev.

g_b^1    10    1000    17.9685       29.8452      12.2590       21.7814      11.7597       26.2940
         20    1500    111.6378      188.7111     60.2500       50.6469      46.2081       41.0317
         30    2000    416.5972      1436.0       112.4362      145.4795     72.8064       99.0574

g_b^2    10    1000    33.6155       49.8280      26.2646       43.4507      8.2692        9.3772
         20    1500    82.0246       67.5004      60.3972       51.3739      44.3745       42.1392
         30    2000    138.9158      138.7761     83.9015       80.3806      68.5438       42.1898

g_b^3    10    1000    17.2888       36.1287      16.5371       31.0729      11.9782       26.6798
         20    1500    89.6570       162.4040     54.0191       56.6993      49.7047       50.1423
         30    2000    221.5118      539.5228     86.1085       152.8715     67.2254       103.1385

g_b^4    10    1000    71.5892       150.1641     29.8696       46.9684      17.7129       18.4078
         20    1500    86.1848       107.0572     78.7056       95.2644      52.9101       47.3649
         30    2000    154.0989      203.9596     104.0223      155.8188     80.6434       88.6188

g_b^5    10    1000    45.6945       93.5950      14.9285       18.8043      9.2944        12.6001
         20    1500    120.3651      166.2435     77.0110       96.0561      57.0820       45.8037
         30    2000    242.7943      408.3627     135.5536      204.6100     56.0855       45.8168

g_b^6    10    1000    20.5409       30.0800      14.8070       21.2420      10.4618       20.9702
         20    1500    93.9863       127.0970     86.2331       129.4297     49.4717       51.3767
         30    2000    504.8399      1090.4       145.6577      198.6213     80.3452       53.4474


Table 4: Mean best fitness values and standard deviations of MPSO with different PDFs on f_3 (generalized Rastrigin function); M is the population size.

PDF      Dim.  Gmax    M = 20                     M = 40                     M = 80
                       Mean best     St. dev.     Mean best     St. dev.     Mean best     St. dev.

g_b^1    10    1000    5.8080        4.0212       4.1037        3.3882       3.1426        3.0072
         20    1500    14.9072       4.9084       10.1125       3.1486       7.5353        2.4052
         30    2000    31.5566       7.7407       20.4983       4.8202       14.4129       3.1085

g_b^2    10    1000    9.9946        5.2287       5.9174        3.0855       4.5410        2.3456
         20    1500    25.7827       10.8980      19.7675       8.6651       15.2177       5.7206
         30    2000    44.8329       14.6816      28.6026       8.0842       23.8493       10.6089

g_b^3    10    1000    4.9402        3.6175       3.0414        3.2103       1.8484        1.1974
         20    1500    15.7036       4.8358       9.6034        2.6184       7.8317        2.4439
         30    2000    31.3558       7.7029       18.3678       4.8567       14.8082       3.2635

g_b^4    10    1000    4.5903        2.4440       2.6680        1.5771       2.1146        1.3227
         20    1500    15.2589       5.9192       11.4040       4.5605       9.1622        4.9490
         30    2000    31.5361       12.5203      22.5729       10.0456      17.8487       8.0797

g_b^5    10    1000    6.4976        3.9073       3.0799        1.6581       2.6277        1.5756
         20    1500    16.3392       6.5859       10.7055       3.4636       9.8460        3.6358
         30    2000    32.8285       11.4557      23.2707       7.5592       16.1005       3.5702

g_b^6    10    1000    5.4773        4.4511       3.4689        1.8336       2.4370        1.3997
         20    1500    16.2287       5.2804       10.9497       3.1922       8.7061        2.8914
         30    2000    35.3445       9.7132       22.2326       5.6518       18.6051       5.6434


Table 5: Mean best fitness values and standard deviations of MPSO with different PDFs on f_4 (generalized Griewank function); M is the population size.

PDF      Dim.  Gmax    M = 20                     M = 40                     M = 80
                       Mean best     St. dev.     Mean best     St. dev.     Mean best     St. dev.

g_b^1    10    1000    0.0846        0.1467
         20    1500    0.0049        0.0240
         30    2000    0.0024        0.0094

g_b^2    10    1000    0.0531        0.0951