Abstract

Particle Swarm Optimization (PSO) is a recently developed optimization method that has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, which exploits the information of the best neighbor of each particle and of the best particle of the entire population in the current iteration. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is performed around the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is more robust and efficient than some existing Particle Swarm Optimization algorithms.

1. Introduction

This paper considers the following global optimization problem:

$$\min_{x} f(x), \quad x = (x_1, x_2, \ldots, x_D), \qquad (1)$$

where $x$ is a continuous variable vector with domain defined by the bound constraints $l_d \le x_d \le u_d$, $d = 1, 2, \ldots, D$. The function $f(x)$ is a continuous real-valued function.

Many real-world problems, for example in engineering and related areas, can be reduced to formulation (1). Such problems usually have many local optima, so it is difficult to find the global optimum. For solving such problems, researchers have presented many methods over the past years, which can be divided into two groups: deterministic and stochastic algorithms. Most deterministic algorithms are effective mainly for unimodal functions, which have a single global optimum, and they need gradient information. Stochastic algorithms, however, do not require any such properties of the objective function. Therefore, more attention has been paid to stochastic algorithms recently, and many effective algorithms have been presented, including Simulated Annealing (SA) [1], Genetic Algorithm (GA) [2, 3], Differential Evolution (DE) [4], Particle Swarm Optimization (PSO) [5], Ant Colony Optimization (ACO) [6], Artificial Bee Colony (ABC) [7], and Harmony Search (HS) [8].

Among these stochastic algorithms, PSO is a population-based swarm intelligence method inspired by the emergent motion of a flock of birds searching for food [5]. In PSO, a population of potential solutions is evolved through successive iterations. Since the PSO algorithm has a number of desirable properties, including simplicity of implementation, scalability in dimension, and good empirical performance, it has been applied to solve many real-world problems, such as the capacitor placement problem [9], short-term load forecasting [10], soft sensors [11], the voltage stability of electric power distribution systems [12, 13], directing the orbits of discrete chaotic dynamical systems towards a desired target region [14], and the permutation flow-shop sequencing problem [15].

Although the PSO algorithm has been applied successfully to many difficult optimization problems, it has difficulty in keeping the balance between exploration and exploitation when solving complex multimodal problems. To obtain a better performance of the PSO algorithm, many variants of PSO have been developed. For example, by using a random value for the inertia weight, Eberhart and Shi proposed a modified PSO, which can track the optima in a dynamic environment [16]. By utilizing the success rate of the particles, a new adaptive inertia weight strategy was presented [17]. Through the use of Cauchy mutation, a hybrid PSO (HPSO) was developed [18]. In [19], a hybrid PSO with wavelet mutation (HWPSO) was given, in which the mutation incorporates a wavelet function. To avoid premature convergence, a novel parameter automation strategy was proposed [20]. Valdez et al. introduced an improved FPSO + FGA hybrid method by combining the advantages of PSO and GA [21]. Based on a mutation operator and different local search techniques, a superior solution guided PSO (SSG-PSO) was developed [22]. By using the second personal best and the second global best particle, two modified PSO algorithms were presented [23]. In order to avoid being trapped in local optima during the convergence process, other improved PSO variants have been proposed, based on, for example, crossover [24], an orthogonal learning strategy [25], chaos [26], and an elitist learning strategy [27].

In this paper, by utilizing the information of the best neighbor of each particle and the best particle of the entire population in the current iteration, a new Particle Swarm Optimization algorithm is proposed, which is named NPSO. To avoid premature convergence, an abandonment mechanism is introduced in our algorithm. Furthermore, to improve the global convergence speed, a chaotic search is implemented around the best solution of each iteration.

The remainder of this paper is organized as follows. Section 2 describes the original PSO. Then our proposed modifications to PSO are described in Section 3. Numerical results and discussions are presented in Section 4. Finally, some concluding remarks are provided in Section 5.

2. Particle Swarm Optimization (PSO)

Assume that the search space is $D$-dimensional and $N$ denotes the size of the swarm population. In PSO, each particle $i$ has a position $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ in the search space and a velocity $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$ to indicate its current state. A position denotes a feasible solution. The position and the velocity are updated by the best position $p_i$ encountered by the particle so far and the best position $p_g$ found by the entire population of particles according to the following equations:

$$v_{id}^{t+1} = w v_{id}^{t} + c_1 r_1 (p_{id}^{t} - x_{id}^{t}) + c_2 r_2 (p_{gd}^{t} - x_{id}^{t}),$$
$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \qquad (2)$$

where $c_1$ and $c_2$ are two learning factors which control the influence of the social and cognitive components, $r_1$ and $r_2$ are random numbers in the range $[0, 1]$, and $w$ is the inertia weight, which ensures the convergence of the PSO algorithm and is decreased linearly.
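For concreteness, a minimal Python (NumPy) sketch of one update step per (2) is given below; the function name, array layout, and default values of $c_1$ and $c_2$ are our own illustrative choices, not part of the original presentation.

```python
import numpy as np

def pso_step(x, v, p, p_g, w, c1=2.0, c2=2.0):
    """One iteration of the canonical update (2).
    x, v, p: (N, D) arrays; p_g: (D,) global best position."""
    n, d = x.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    v_new = w * v + c1 * r1 * (p - x) + c2 * r2 * (p_g - x)  # velocity, eq. (2)
    return x + v_new, v_new                                  # position, eq. (2)
```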

3. The Novel PSO Algorithm (NPSO)

In the original PSO, since each particle moves in the search space guided only by its historical best solution $p_i$ and the global best solution $p_g$, it may get trapped in a local optimum when the current global best solution lies in a local optimum, from which it is not easy for the particles to escape. To solve this problem, three improvement strategies are proposed in this section.

3.1. The First Improvement Strategy

From (2), we observe that only the information of the historical best position $p_i$ of each particle and the global best position $p_g$ of all particles is utilized. As a matter of fact, the information of the best neighbor of a particle may provide better guidance than $p_i$. The details are given as follows.

Firstly, we explain how to define the neighbors and determine the best neighbor of a particle $x_i$. In order to define appropriate neighbors, different approaches could be used. In this paper, the neighbors of $x_i$ are defined by using the mean Euclidean distance between $x_i$ and the rest of the solutions. Let $d(i, j)$ be the Euclidean distance between $x_i$ and $x_j$ and let $md_i$ be the mean Euclidean distance of $x_i$. Then $md_i$ can be computed as follows:

$$md_i = \frac{1}{N - 1} \sum_{j=1, j \ne i}^{N} d(i, j). \qquad (3)$$

By (3), if $d(i, j) \le md_i$, then $x_j$ could be accepted as a neighbor of $x_i$.

In addition, we can also use a more general and flexible definition to determine a neighbor of $x_i$:

$$d(i, j) \le r \cdot md_i. \qquad (4)$$

If (4) is used, then a new parameter $r$, which refers to the “neighbourhood radius,” will be added to the parameters of PSO. If $r = 0$, the method turns into the standard PSO. As the value of $r$ increases, the neighborhood of $x_i$ enlarges, and as the value of $r$ decreases, its neighborhood shrinks. Once the neighbors are determined, we select the best position among the neighbors of $x_i$ as the best neighbor $n_i$.
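A sketch of the neighbor determination per (3)-(4) might look as follows in Python; best_neighbor is a hypothetical helper, minimization is assumed, and the fallback to the particle itself when no other solution qualifies is our choice, not specified above.

```python
import numpy as np

def best_neighbor(X, f_vals, i, r=1.0):
    """Return the best neighbor of particle i under (3)-(4).
    X: (N, D) positions; f_vals: (N,) objective values; r: neighborhood radius."""
    d = np.linalg.norm(X - X[i], axis=1)           # Euclidean distances d(i, j)
    md_i = d[np.arange(len(X)) != i].mean()        # mean distance, eq. (3)
    mask = (d <= r * md_i)                         # neighbor test, eq. (4)
    mask[i] = True                                 # keep i itself as a fallback
    idx = np.where(mask)[0]
    return X[idx[np.argmin(f_vals[idx])]]          # best (lowest f) neighbor
```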

After determining the best neighbor $n_i$, we give a new way for each particle to move. If $f(n_i) \le f(x_i)$, then set

$$v_{id}^{t+1} = w v_{id}^{t} + c_1 r_1 (n_{id}^{t} - x_{id}^{t}) + c_2 r_2 (x_{best,d}^{t} - x_{id}^{t}). \qquad (5)$$

If $f(n_i) > f(x_i)$, then set

$$v_{id}^{t+1} = w v_{id}^{t} + c_1 r_1 (p_{id}^{t} - x_{id}^{t}) + c_2 r_2 (x_{best,d}^{t} - x_{id}^{t}), \qquad (6)$$

where $x_{best}^{t}$ denotes the best particle of the entire population in the current iteration. After that, $x_i$ moves to a new position by the following equation:

$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}. \qquad (7)$$

In our algorithm, it is obvious that, before each particle moves, it first watches the region centered at itself, selects the best neighbor, and then uses (5)–(7) to generate its next position.
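Based on our reconstruction of (5)–(7), the per-iteration move could be sketched as below; it reuses the hypothetical best_neighbor helper above, so it illustrates the strategy rather than reproducing the authors' exact code.

```python
import numpy as np

def npso_move(X, V, P, x_best, f, w, r=1.0, c1=2.0, c2=2.0):
    """Move all particles by (5)-(7): guide by the best neighbor when it
    is better than the particle itself, otherwise by the personal best."""
    f_vals = np.array([f(x) for x in X])
    for i in range(len(X)):
        n_i = best_neighbor(X, f_vals, i, r)
        guide = n_i if f(n_i) <= f_vals[i] else P[i]   # choose (5) or (6)
        r1, r2 = np.random.rand(X.shape[1]), np.random.rand(X.shape[1])
        V[i] = w * V[i] + c1 * r1 * (guide - X[i]) + c2 * r2 * (x_best - X[i])
        X[i] = X[i] + V[i]                             # position update, eq. (7)
    return X, V
```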

3.2. The Second Improvement Strategy

To avoid premature convergence, an abandonment mechanism is proposed in our algorithm.

Assume that “limit” is a predetermined number, which will be used to determine whether the position of a particle should be abandoned. At the beginning, set $trial_i = 0$. During the run of the algorithm, if the position $x_i$ is not improved, then set $trial_i = trial_i + 1$; else set $trial_i = 0$. If the position cannot be improved anymore once limit is reached, that is, $trial_i \ge limit$, then the position will be abandoned and replaced by a new position, which is generated by using the following equation:

$$x_{id} = w' x_{best,d} + (1 - w')\left(l_d + \mathrm{rand} \cdot (u_d - l_d)\right), \qquad (8)$$

where $\mathrm{rand}$ is a random number in the range $[0, 1]$ and $w'$ is the inertia weight that controls the impact of the optimal solution at the current iteration, which is increased linearly.

From (8), it can be seen that, in the early stage of the algorithm, the new position will be generated with a large randomness, while in the late stage it will be generated near the global best position $x_{best}$.
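A minimal sketch of the replacement step (8), assuming $w'$ grows linearly from 0 to 1 over the run (the exact schedule is not fixed above):

```python
import numpy as np

def abandon(x_best, lower, upper, t, t_max):
    """Replace an exhausted position per our reconstruction of (8):
    mostly random early in the run, close to x_best late in the run."""
    w2 = t / t_max                                  # w' increases linearly (assumed 0 -> 1)
    rand = np.random.rand(len(lower))
    return w2 * x_best + (1.0 - w2) * (lower + rand * (upper - lower))
```

The counter $trial_i$ would be tracked alongside each particle and reset whenever its position improves, mirroring lines (10)–(17) of Algorithm 1.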

3.3. The Third Improvement Strategy

To improve the global convergence of NPSO, a chaotic search operator is adopted. Next, we give the details.

Let $x_{best}$ be the best solution of the current iteration. Firstly, utilize the logistic map (9) to generate the chaotic variables $z_k = (z_{k,1}, \ldots, z_{k,D})$:

$$z_{k+1,d} = 4 z_{k,d} (1 - z_{k,d}), \quad k = 1, 2, \ldots, K - 1, \qquad (9)$$

where $K$ is the length of the chaotic sequence and $z_{1,d} \in (0, 1)$ is a random number. Then map $z_k$ to a chaotic vector $cx_k$ in the interval $[l_d, u_d]$:

$$cx_{k,d} = l_d + z_{k,d} (u_d - l_d), \qquad (10)$$

where $l_d$ and $u_d$ are the lower bound and upper bound of variable $x_d$, respectively. Finally, a new candidate solution $x_{new}$ is obtained by the following equation:

$$x_{new,d} = (1 - \lambda) x_{best,d} + \lambda \, cx_{k,d}, \qquad (11)$$

where $\lambda$ is a shrinking factor, which is defined as follows:

$$\lambda = \frac{T - t + 1}{T}, \qquad (12)$$

where $T$ is the maximum number of iterations and $t$ is the current number of iterations.

By (11) and (12), it can be seen that $\lambda$ becomes smaller as the evolutionary generation increases; that is, the local search range shrinks as the evolution proceeds.
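The operator could be sketched as follows, under the assumptions stated above (a logistic map for (9) and the linear shrinking factor (12)); the greedy acceptance of the candidate is our reading of "update $x_{best}$ (if necessary)" in Algorithm 1.

```python
import numpy as np

def chaotic_search(x_best, f_best, f, lower, upper, t, t_max, K=5):
    """Chaotic local search around x_best per our reading of (9)-(12)."""
    lam = (t_max - t + 1) / t_max                        # shrinking factor, eq. (12)
    z = np.random.uniform(0.01, 0.99, size=len(lower))   # z_1, away from fixed points
    for _ in range(K):
        z = 4.0 * z * (1.0 - z)                          # logistic map, eq. (9)
        cx = lower + z * (upper - lower)                 # map into [l_d, u_d], eq. (10)
        x_new = (1.0 - lam) * x_best + lam * cx          # candidate, eq. (11)
        f_new = f(x_new)
        if f_new < f_best:                               # greedy acceptance
            x_best, f_best = x_new, f_new
    return x_best, f_best
```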

Based on the abovementioned explanation, the pseudocode of the NPSO algorithm is given in Algorithm 1.

(1) Initialize a population of $N$ particles with random positions $x_i$ in a given search space, and random
  velocities $v_i$; the maximum iteration $T$; the limit; the neighborhood radius $r$; the length of the chaotic sequence $K$.
(2) Set $t = 1$, $trial_i = 0$ ($i = 1, 2, \ldots, N$), and find $x_{best}$.
(3) while $t \le T$ do
(4)  Compute $md_i$ by (3) and determine the best neighbor $n_i$ of each particle by (4).
(5)  for $i = 1$ to $N$ do
(6)   for $d = 1$ to $D$ do
(7)    By (5) and (6), update the velocity of each particle.
(8)    By (7), update the position of each particle.
(9)   end for
(10)  if $f(x_i^{t+1}) < f(x_i^{t})$ then
(11)   update $p_i$, set $trial_i = 0$;
(12)  else
(13)   set $trial_i = trial_i + 1$.
(14)  end if
(15)  if $f(x_i^{t+1}) < f(x_{best})$ then
(16)   set $x_{best} = x_i^{t+1}$,
(17)  end if
(18) end for
(19) for $i = 1$ to $N$ do
(20)  if $trial_i \ge limit$ then
(21)   By (8), generate a new position, and replace $x_i$.
(22)  end if
(23) end for
(24) By (9)–(12), perform a chaotic search around $x_{best}$, and update $x_{best}$ (if necessary).
(25) $t = t + 1$.
(26) end while
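Putting the pieces together, a compact end-to-end sketch of Algorithm 1 is shown below; it reuses the hypothetical helpers best_neighbor, abandon, and chaotic_search from the sketches above, and the bound clipping and linearly decreasing inertia schedule are our own additions, not fixed by the paper.

```python
import numpy as np

def npso(f, lower, upper, N=20, T=1000, limit=5, r=1.0, K=5,
         c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """End-to-end NPSO sketch following Algorithm 1 (minimization).
    lower, upper: (D,) arrays of bound constraints."""
    D = len(lower)
    X = lower + np.random.rand(N, D) * (upper - lower)        # step (1)
    V = np.random.uniform(-0.1, 0.1, (N, D)) * (upper - lower)
    P, f_p = X.copy(), np.array([f(x) for x in X])            # personal bests
    trial = np.zeros(N, dtype=int)
    g = np.argmin(f_p)
    x_best, f_best = P[g].copy(), f_p[g]                      # step (2)
    for t in range(1, T + 1):                                 # step (3)
        w = w_max - (w_max - w_min) * t / T                   # linearly decreasing inertia
        X_snap, f_vals = X.copy(), np.array([f(x) for x in X])
        for i in range(N):                                    # steps (4)-(18)
            n_i = best_neighbor(X_snap, f_vals, i, r)         # eqs. (3)-(4)
            guide = n_i if f(n_i) <= f_vals[i] else P[i]      # eq. (5) or (6)
            r1, r2 = np.random.rand(D), np.random.rand(D)
            V[i] = (w * V[i] + c1 * r1 * (guide - X[i])
                    + c2 * r2 * (x_best - X[i]))
            X[i] = np.clip(X[i] + V[i], lower, upper)         # eq. (7), clipped
            f_new = f(X[i])
            if f_new < f_vals[i]:                             # steps (10)-(14)
                trial[i] = 0
                if f_new < f_p[i]:
                    P[i], f_p[i] = X[i].copy(), f_new
            else:
                trial[i] += 1
            if f_new < f_best:                                # steps (15)-(17)
                x_best, f_best = X[i].copy(), f_new
        for i in range(N):                                    # steps (19)-(23)
            if trial[i] >= limit:
                X[i] = abandon(x_best, lower, upper, t, T)    # eq. (8)
                trial[i] = 0
        x_best, f_best = chaotic_search(x_best, f_best, f,    # step (24)
                                        lower, upper, t, T, K)
    return x_best, f_best
```

For instance, npso(lambda x: np.sum(x**2), np.full(30, -100.0), np.full(30, 100.0)) would minimize the 30-dimensional sphere function under this sketch.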

4. Experimental Results and Discussion

4.1. Experiments 1

In this subsection, the performance of the NPSO algorithm is compared with the PSO algorithm by evaluating the convergence and the best solutions found for 14 benchmark functions, among which some are shifted functions and some are rotated functions. The characteristics, dimensions, initial ranges, and formulations of these functions are listed in Table 1, where $o$ denotes the shifted global optimum and $M$ denotes the orthogonal matrix.

The proposed algorithm NPSO and PSO are coded in Matlab 7.0, and the experimental platform is a personal computer with a Pentium 4, 3.06 GHz CPU, 512 MB memory, and Windows XP.

The parameters of the algorithms are given as follows. The common parameters are the dimension $D$, the population size $N$, the maximum number of iterations $T$, the learning factors $c_1$ and $c_2$, and the inertia weight $w$. NPSO-specific settings are the length of the chaotic sequence $K$ and the abandonment limit, both set to 5, together with the neighborhood radius $r$. Each experiment is repeated 30 times independently. The global minimums (Min), the maximum number of iterations (Max iteration), the mean best values (Mean), and the standard deviations (SD) are given in Table 2. To show the convergence speed of PSO and NPSO more clearly, the convergence graphs of PSO and NPSO are shown in Figure 1.

From Table 2, it can be seen that NPSO performs better than PSO on all test functions. From Figure 1, it can be seen that the convergence speed of NPSO is faster than that of PSO.

4.2. Experiments 2

In this subsection, to further test the efficiency of NPSO, it is compared with five other algorithms, namely, CPSO [28], CLPSO [29], FIPS [30], Frankenstein [31], and AIWPSO [17].

Twelve benchmark functions are used for the comparison. The characteristics, dimensions, initial range, and formulations of these functions are listed in Table 3.

In order to make a fair comparison, the maximum number of function evaluations (maxFEs) is set to $2 \times 10^5$ for all algorithms, and the population size is 20. The other parameters of NPSO are set as in Experiments 1. The comparison results are presented in Table 4. For the sake of convenience and reliability, except for the NPSO algorithm, the results reported here are taken directly from the literature [17].

From Table 4, it can be seen that NPSO is significantly better than the other five algorithms on almost all the test functions, with a single exception.

5. Conclusion

In this paper, by utilizing the information of the best neighbor of each particle and the best solution in the current iteration, we presented a new move equation. Based on this and two further improvement strategies, a novel Particle Swarm Optimization algorithm, NPSO, was proposed. The performance of NPSO was compared with the standard PSO and five other variants of PSO. The results showed that NPSO yields promising results for the considered problems.

In the future, the adaptation of the parameters in NPSO can be studied to further improve its performance.

Conflict of Interests

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

The research was supported by NSFC (U1404105, 11171094); the Key Scientific and Technological Project of Henan Province (142102210058); the Doctoral Scientific Research Foundation of Henan Normal University (qd12103); the Youth Science Foundation of Henan Normal University (2013qk02); Henan Normal University National Research Project to Cultivate the Funded Projects (01016400105); the Henan Normal University Youth Backbone Teacher Training.