Abstract

Particle swarm optimization (PSO) is a recently invented evolutionary computation technique which is gaining popularity owing to its simplicity of implementation and rapid convergence. In the case of single-peak functions, PSO rapidly converges to the peak; in the case of multimodal functions, however, the PSO particles are known to get trapped in local optima. In this paper, we propose a variation of the algorithm called parallel swarms oriented particle swarm optimization (PSO-PSO), which consists of a multi-swarm stage and a single-swarm stage of evolution. In the multi-swarm stage of evolution, individual sub-swarms evolve independently in parallel, and in the single-swarm stage of evolution, the sub-swarms exchange information to search for the global-best. The two interweaved stages of evolution demonstrate better performance on test functions, especially those of higher dimensions. An attractive feature of the PSO-PSO version of the algorithm is that it does not introduce any new parameters to improve its convergence performance. The strategy maintains the simple and intuitive structure as well as the implementation and computational advantages of the basic PSO.

1. Introduction

Evolutionary algorithms (EAs) are increasingly being applied to solve problems in diverse domains. These metaheuristic algorithms are successful in many domains chiefly because of their domain-independent evolutionary mechanisms. Evolutionary computation is inspired by the biological processes at work in nature. The genetic algorithm (GA) [1], modeled on the Darwinian evolutionary paradigm, is the oldest and best-known evolutionary algorithm. It mimics the natural processes of selection, crossover, and mutation to search for optimum solutions in massive search spaces.

Particle swarm optimization (PSO) is a recently developed algorithm belonging to the class of biologically inspired methods [29]. PSO imitates the social behavior of insects, birds, or fish swarming together to hunt for food. PSO is a population-based approach that maintains a set of candidate solutions, called particles, which move within the search space. During the exploration of the search space, each particle maintains a memory of two pieces of information: the best solution (pbest) that it has encountered so far and the best solution (gbest) encountered by the swarm as a whole. This information is used to direct the search.

Researchers have found that PSO has the following advantages over other biologically inspired evolutionary algorithms: (1) its operational principle is very simple and intuitive; (2) it relies on very few external control parameters; (3) it can be easily implemented; and (4) it converges rapidly. PSO has developed very quickly, has found successful applications in a wide variety of areas [10–12], and has become one of the hotspots of intelligent computing research in recent years [12–14].

In the case of single-peak functions, PSO rapidly converges to the peak; however, in the case of multimodal functions, the PSO particles are known to get trapped in local optima. A significant number of variations have been made to the standard PSO to prevent the particles from getting trapped in local optima [15–25]. Other methods, which restart the particles trapped at local optima, have also been proposed [18, 19, 22]. In these methods, when the velocity of a particle falls below a given threshold, it is reinitialized to a randomly selected large value to avoid stagnation; the risk of stagnation is thus reduced by randomly accelerating the flying particles. Some of the more recent methods utilize multiple swarms to search for the global optimum while avoiding getting trapped in local optima [26–30].

However, in all the studies mentioned above, a number of new parameters are introduced into the original PSO. This destroys the simplicity of the algorithm and leads to an undesirable computational overhead. In this study, we introduce another variation of the evolution of the standard PSO, but without resorting to additional new parameters. The strategy maintains the simple and intuitive structure as well as the implementation and computational advantages of the basic PSO. Thus, the contribution of this study is the improvement of the performance of the basic PSO without increasing the complexity of the algorithm.

This paper is organized as follows. In Section 2, we present the original PSO as proposed by Kennedy and Eberhart in 1995 [6]. In Section 3, we explain the working of the standard PSO and some of the variations found in the literature on PSO. In Section 4, we present our new parallel swarms oriented PSO (PSO-PSO) and demonstrate its performance results in Section 5. We conclude the paper in Section 6 and propose some ideas for further research.

2. Original Particle Swarm Optimization

The population-based PSO conducts a search using a population of individuals. The individual in the population is called the particle, and the population is called the swarm. The performance of each particle is measured according to a predefined fitness function. Particles are assumed to “fly” over the search space in order to find promising regions of the landscape. In the minimization case, such regions possess lower functional values than other regions visited previously. Each particle is treated as a point in a $D$-dimensional space which adjusts its own “flying” according to its flying experience as well as the flying experience of the other companion particles. By making adjustments to the flying based on the local best (pbest) and the global best (gbest) found so far, the swarm as a whole converges to the optimum point, or at least to a near-optimal point, in the search space.

The notations used in PSO are as follows. The $i$th particle of the swarm in iteration $t$ is represented by the $D$-dimensional vector $X_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{iD}^t)$. Each particle also has a position change known as velocity, which for the $i$th particle in iteration $t$ is $V_i^t = (v_{i1}^t, v_{i2}^t, \ldots, v_{iD}^t)$. The best previous position (the position with the best fitness value) of the $i$th particle is $P_i^t = (p_{i1}^t, p_{i2}^t, \ldots, p_{iD}^t)$. The best particle in the swarm, that is, the particle with the smallest function value found in all the previous iterations, is denoted by the index $g$. In a given iteration $t$, the velocity and position of each particle are updated using the following equations:

$$v_{id}^{t+1} = w\,v_{id}^{t} + c_1 r_1 \left(p_{id}^{t} - x_{id}^{t}\right) + c_2 r_2 \left(p_{gd}^{t} - x_{id}^{t}\right), \qquad (1)$$

$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}, \qquad (2)$$

where $i = 1, 2, \ldots, N$ and $t = 1, 2, \ldots, T$. $N$ is the size of the swarm, and $T$ is the iteration limit; $c_1$ and $c_2$ are positive constants (called “social factors”); $r_1$ and $r_2$ are random numbers between 0 and 1; and $w$ is the inertia weight that controls the impact of the previous history of the velocities on the current velocity, influencing the tradeoff between the global and local experiences. A large inertia weight facilitates global exploration (searching new areas), while a small one tends to facilitate local exploration (fine-tuning the current search area). Equation (1) is used to compute a particle’s new velocity, based on its previous velocity and the distances from its current position to its local best and to the global best positions. The new velocity is then used to compute the particle’s new position (2).
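To make the update rules concrete, the following Python sketch applies (1) and (2) to a whole swarm at once. The function name pso_step, the vectorized array shapes, and the default parameter values (w = 0.7, c1 = c2 = 2.0) are illustrative assumptions, not prescriptions from the paper.

import numpy as np

def pso_step(X, V, P, g, w=0.7, c1=2.0, c2=2.0, rng=None):
    # One iteration of the basic PSO update rules (1) and (2).
    # X: (N, D) positions, V: (N, D) velocities,
    # P: (N, D) personal-best positions (pbest), g: (D,) global-best (gbest).
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(X.shape)          # random numbers in [0, 1]
    r2 = rng.random(X.shape)
    V_new = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # equation (1)
    X_new = X + V_new                                        # equation (2)
    return X_new, V_new

A complete optimizer would evaluate the fitness of the new positions after each call and refresh P and g whenever an improvement is found.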

3. Standard PSO and Its Variations

Any successful metaheuristic algorithm maintains a delicate balance between exploration (diversifying the search to wider areas of the search space) and exploitation (intensifying the search in narrow promising areas). Shi and Eberhart later introduced the inertia weight into the original PSO to improve the PSO search [31]. A high value of the inertia weight favors exploration, while a low value favors exploitation. The inertia weight is defined as

$$w = w_{\max} - \frac{w_{\max} - w_{\min}}{\text{Iter}_{\max}}\,\text{Iter},$$

where $w_{\max}$ and $w_{\min}$ are, respectively, the initial and the final values of the inertia weight, $\text{Iter}$ is the current iteration number, and $\text{Iter}_{\max}$ is the maximum number of iterations. Many studies have shown that the PSO performance is improved by decreasing $w$ linearly from 0.9 to 0.4 using the above equation.
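As a small illustration of the linear decrement, the helper below computes the inertia weight for the current iteration; the function name is our own, and the bounds 0.9 and 0.4 are simply the commonly quoted values mentioned above.

def linear_inertia(curr_iter, max_iter, w_max=0.9, w_min=0.4):
    # Inertia weight decreasing linearly from w_max to w_min over the run.
    return w_max - (w_max - w_min) * curr_iter / max_iter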

Eberhart and Shi have further proposed a random modification of the inertia weight to make the standard PSO applicable to dynamic problems [4]. The inertia weight is randomly modified according to the following equation:

$$w = 0.5 + \frac{\operatorname{rand}()}{2},$$

where $\operatorname{rand}()$ is a uniform random number in $[0, 1]$, so that $w$ varies randomly between 0.5 and 1.0.

As opposed to the above linear decrement, Jie et al. [32] have proposed a nonlinear modification of the inertia weight over time. This inertia weight varies slowly in the initial stages but more rapidly in the final stages, which implies that the algorithm performs a wider global search in the early stages and a narrower local search in the final stages.

Generally, the very same velocity and position update formulae are applied to each and every flying particle; moreover, the very same inertia weight is applied to each particle in a given iteration. However, Yang et al. [33] have proposed a modified PSO algorithm with dynamic adaptation, in which a modified velocity updating formula is used, the randomness in the course of updating the particle velocity is relatively decreased, and a different inertia weight is applied to each particle in a given iteration. Further, this algorithm introduces two new parameters describing the evolving state of the algorithm: the evolution speed factor and the aggregation degree factor. In the new strategy, the inertia weight is dynamically adjusted according to the evolution speed and the aggregation degree. The evolution speed factor of the $i$th particle in iteration $t$ is given by

$$h_i^t = \left| \frac{\min\left(F(pbest_i^{t-1}),\, F(pbest_i^t)\right)}{\max\left(F(pbest_i^{t-1}),\, F(pbest_i^t)\right)} \right|,$$

where $F(pbest_i^t)$ is the fitness value of $pbest_i^t$. The parameter $h_i^t$ ($0 < h_i^t \le 1$) reflects the evolutionary speed of each particle: the smaller the value of $h_i^t$, the faster the speed.

The aggregation degree is given by

$$s = \left| \frac{\min\left(F_{tbest},\, \bar{F}_t\right)}{\max\left(F_{tbest},\, \bar{F}_t\right)} \right|,$$

where $\bar{F}_t$ is the mean fitness of all the particles in the swarm and $F_{tbest}$ is the optimal value found in the $t$th iteration.

The inertia weight is updated as

$$w_i^t = w_{\text{ini}} - \alpha\left(1 - h_i^t\right) + \beta\, s,$$

where $w_{\text{ini}}$ is the initial inertia weight. The values of $\alpha$ and $\beta$ are typically chosen in the range $[0, 1]$.
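A compact sketch of the dynamic-adaptation bookkeeping described above follows. The function name adaptive_inertia and its argument layout are our own illustrative choices, and the formulas follow the reconstruction of the scheme given in the text.

import numpy as np

def adaptive_inertia(f_pbest_prev, f_pbest_curr, f_swarm, f_best,
                     w_ini=0.9, alpha=0.5, beta=0.5):
    # Evolution speed factor h: close to 1 when a particle's pbest barely
    # improves between iterations, small when it is still improving quickly.
    # (Assumes strictly positive fitness values to avoid division by zero.)
    h = np.abs(np.minimum(f_pbest_prev, f_pbest_curr) /
               np.maximum(f_pbest_prev, f_pbest_curr))
    # Aggregation degree s: compares the swarm's mean fitness with the best
    # fitness found in the current iteration.
    f_mean = np.mean(f_swarm)
    s = abs(min(f_best, f_mean) / max(f_best, f_mean))
    # Per-particle inertia weight adjusted by both factors (alpha, beta in [0, 1]).
    return w_ini - alpha * (1.0 - h) + beta * s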

Another variation is the introduction of a constriction coefficient $\chi$ to replace the inertia weight $w$ in order to ensure the quick convergence of PSO [34]. The velocity update is given by

$$v_{id}^{t+1} = \chi\left[v_{id}^{t} + c_1 r_1\left(p_{id}^{t} - x_{id}^{t}\right) + c_2 r_2\left(p_{gd}^{t} - x_{id}^{t}\right)\right],$$

where $\chi$ is the constriction coefficient,

$$\chi = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|},$$

$\varphi = c_1 + c_2$, and $\varphi > 4$.
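Since the constriction coefficient is a fixed function of the acceleration constants, it can be computed once before the run. The snippet below is an illustrative helper, not code from the paper, evaluated here for the common choice c1 = c2 = 2.05.

import math

def constriction_coefficient(c1=2.05, c2=2.05):
    # Clerc-Kennedy constriction coefficient chi, valid for phi = c1 + c2 > 4.
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# constriction_coefficient() is approximately 0.7298 for c1 = c2 = 2.05.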

4. Parallel Swarm Oriented PSO

In all the PSO variations mentioned in the preceding sections, a number of new parameters are introduced into the original PSO. This destroys the simplicity of the algorithm and leads to an undesirable computational overhead. In this section, we describe our novel approach, called the parallel swarms oriented PSO (PSO-PSO). This version of the PSO does not introduce any new algorithm parameters to improve its convergence performance. The strategy maintains the simple and intuitive structure as well as the implementation and computational advantages of the basic PSO.

The algorithm consists of a multi-evolutionary phase and a single-evolutionary phase. In the multi-evolutionary phase, a number of sub-swarms are initially generated at random so as to uniformly cover the decision space. The multiple sub-swarms are then allowed to evolve independently, each one maintaining its own particle-best (pbest) and swarm-best (sbest). The latter is a new term that we have introduced to represent the best particle in a given swarm in its history of evolution. After a predetermined number of evolutionary cycles, the multi-evolutionary phase ends and the single-evolutionary phase begins. The sub-swarms exchange information and record the global-best (gbest) of the entire collection of sub-swarms. In the single-evolutionary phase, all the sub-swarms merge and continue to evolve using the individual sub-swarm particle-best and swarm-best and the overall global-best. The sub-swarms then return to the multi-evolutionary phase and continue the search. The flow chart of the PSO-PSO algorithm is shown in Figure 1.

Multi-Evolutionary Phase. $M$ independent swarms evolve in parallel.

Step 1. Randomly generate $M$ independent swarm populations so as to be uniformly distributed over the entire decision space.

Step 2. Evaluate the fitness of the particles in each individual swarm. In minimization problems, the fitness of a particle is inversely related to the value of the objective function.

Step 3. Determine the particle-best (pbest) and the swarm-best (sbest) of each individual swarm.

Step 4. Update the velocity and position of each particle in each swarm according to (1) and (2).

Step 5. Allow the individual swarms to evolve independently through a predetermined number of iterations (i.e., repeat Steps 2 through 4).

Single-Evolutionary Phase. The individual swarms exchange information.

Step 6. Determine the global-best (gbest) by comparing the swarm-best (sbest) of all the swarms. For minimization problems, the gbest in a given iteration $t$ is given by the following equation:

$$F\left(gbest^t\right) = \min_{k = 1, 2, \ldots, M} F\left(sbest_k^t\right),$$

where $F(\cdot)$ denotes the objective function value and $sbest_k^t$ is the swarm-best of the $k$th swarm.

Step 7. The individual swarms start interacting by using the gbest as reference. Update the velocities of all the particles according to the following equation:

$$v_{ik}^{t+1} = w\, v_{ik}^{t} + c_1 r_1\left(pbest_{ik}^{t} - x_{ik}^{t}\right) + c_2 r_2\left(sbest_{k}^{t} - x_{ik}^{t}\right) + c_3 r_3\left(gbest^{t} - x_{ik}^{t}\right),$$

where $x_{ik}^t$ is the position of the $i$th particle in the $k$th swarm, $v_{ik}^t$ is the velocity of the $i$th particle in the $k$th swarm, $pbest_{ik}^t$ is the pbest of the $i$th particle in the $k$th swarm, $sbest_k^t$ is the sbest of the $k$th swarm, and $gbest^t$ is the global-best of the entire information-exchanging collection of sub-swarms. $c_1$, $c_2$, and $c_3$ are the acceleration parameters, and $r_1$, $r_2$, and $r_3$ are uniform random numbers.

Step 8. Update the positions of all the particles according to the following equation:

$$x_{ik}^{t+1} = x_{ik}^{t} + v_{ik}^{t+1}.$$

Step 9. Repeat Steps 2 through 8 until the iteration limit is reached and output the global-best.
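To make Steps 1 through 9 concrete, the following Python sketch outlines one possible implementation of the two interweaved phases. Everything here, including the function name pso_pso, the parameter defaults, the boundary clipping, and the three-term velocity update used in the single-evolutionary phase, is an illustrative assumption based on the description above, not the authors' MATLAB implementation.

import numpy as np

def pso_pso(f, bounds, n_swarms=4, n_particles=20, dim=30,
            inner_iters=50, outer_cycles=20,
            w=0.7, c1=2.0, c2=2.0, c3=2.0, seed=0):
    # Sketch of the PSO-PSO loop: independent sub-swarm evolution
    # (multi-evolutionary phase) interleaved with a gbest-guided update
    # (single-evolutionary phase). Minimizes f over [lo, hi]^dim.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_swarms, n_particles, dim))    # Step 1
    V = np.zeros_like(X)
    P = X.copy()                                             # pbest positions
    Pf = np.apply_along_axis(f, 2, X)                        # pbest fitness (Step 2)
    for _ in range(outer_cycles):
        # Multi-evolutionary phase: sub-swarms evolve independently (Steps 2-5).
        for _ in range(inner_iters):
            for k in range(n_swarms):
                sbest = P[k, Pf[k].argmin()]                 # swarm-best of swarm k (Step 3)
                r1, r2 = rng.random((2, n_particles, dim))
                V[k] = (w * V[k] + c1 * r1 * (P[k] - X[k])
                        + c2 * r2 * (sbest - X[k]))          # Step 4, eq. (1)
                X[k] = np.clip(X[k] + V[k], lo, hi)          # Step 4, eq. (2)
                fk = np.apply_along_axis(f, 1, X[k])
                better = fk < Pf[k]
                P[k][better], Pf[k][better] = X[k][better], fk[better]
        # Single-evolutionary phase: exchange information (Steps 6-8).
        gbest = P.reshape(-1, dim)[Pf.argmin()]              # Step 6
        for k in range(n_swarms):
            sbest = P[k, Pf[k].argmin()]
            r1, r2, r3 = rng.random((3, n_particles, dim))
            V[k] = (w * V[k] + c1 * r1 * (P[k] - X[k])
                    + c2 * r2 * (sbest - X[k])
                    + c3 * r3 * (gbest - X[k]))              # Step 7
            X[k] = np.clip(X[k] + V[k], lo, hi)              # Step 8
            fk = np.apply_along_axis(f, 1, X[k])
            better = fk < Pf[k]
            P[k][better], Pf[k][better] = X[k][better], fk[better]
    best = Pf.argmin()                                       # Step 9
    return P.reshape(-1, dim)[best], Pf.ravel()[best]

# Example: 30-dimensional Rastrigin function.
def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

best_x, best_f = pso_pso(rastrigin, (-5.12, 5.12))

In an actual parallel implementation, the inner loop over sub-swarms would be distributed across CPU cores, as described in Section 5.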

5. Performance Comparison of Single-Swarm and Multi-Swarm PSO

The PSO-PSO algorithm is implemented in MATLAB on a 24-core server with 256 GB RAM. Each CPU core is dedicated to the evolution of a single sub-swarm in the multi-evolutionary phase of the algorithm. The results are presented in Figures 2, 3, and 4. For smaller dimensions, there is no appreciable difference between the performance of the ordinary (single-swarm) PSO and the multi-swarm PSO-PSO. This can be seen in the optimization results for the 10-dimensional Rosenbrock and Rastrigin functions (Figures 2(a) and 2(b)). However, the superior performance of the multi-swarm PSO-PSO approach is evident in higher dimensions, as seen in the optimization of the 20-dimensional (Figures 3(a) and 3(b)) and 30-dimensional (Figures 4(a) and 4(b)) Rosenbrock and Rastrigin functions.

6. Conclusion

Particle swarm optimization (PSO) is becoming increasingly widespread in a variety of applications as a reliable and robust optimization algorithm. The attractive features of this evolutionary algorithm are that it has very few control parameters, is simple to program, and converges rapidly. The only drawback reported so far is that it at times gets trapped in local optima. Many researchers have addressed this issue, but by introducing a number of new parameters into the original PSO, which destroys the simplicity of the algorithm and leads to an undesirable computational overhead. In this study, we have proposed a variation of the algorithm called parallel swarms oriented PSO (PSO-PSO), which consists of a multi-swarm stage and a single-swarm stage of evolution. The two interweaved stages of evolution demonstrate better performance on test functions, especially those of higher dimensions. An attractive feature of the PSO-PSO version of the algorithm is that it does not introduce any new parameters to improve its convergence performance. The PSO-PSO strategy maintains the simple and intuitive structure as well as the implementation and computational advantages of the basic PSO. Thus, the contribution of this study is the improvement of the performance of the basic PSO without increasing the complexity of the algorithm.