Abstract

Particle swarm optimization (PSO) has attracted many researchers working on various optimization problems, owing to its easy implementation, few tuning parameters, and acceptable performance. However, the algorithm is prone to becoming trapped in local optima because the population diversity is lost quickly. Therefore, improving the performance of PSO and decreasing its dependence on parameters are two important research topics. In this paper, we present a human behavior-based PSO, called HPSO. There are two remarkable differences between PSO and HPSO. First, the global worst particle is introduced into the velocity equation of PSO and is endowed with a random weight that obeys the standard normal distribution; this strategy helps to trade off the exploration and exploitation abilities of PSO. Second, we eliminate the two acceleration coefficients $c_1$ and $c_2$ of the standard PSO (SPSO) to reduce the sensitivity of the algorithm to the parameters of the solved problems. Experimental results on 28 benchmark functions, which consist of unimodal, multimodal, rotated, and shifted high-dimensional functions, demonstrate the high performance of the proposed algorithm in terms of convergence accuracy and speed with lower computation cost.

1. Introduction

Particle swarm optimization (PSO) [1] is a population-based intelligent algorithm, and it has been widely employed to solve various kinds of numerical and combinatorial optimization problems because of its simplicity, fast convergence, and high performance.

Researchers have proposed various modified versions of PSO to improve its performance; however, problems of premature convergence and a low convergence rate remain. In PSO research, how to increase population diversity to enhance the precision of solutions and how to speed up the convergence rate with the least computation cost are two vital issues. Generally speaking, there are four strategies to achieve these targets, as follows.

(1) Tuning control parameters. As for the inertia weight, the linearly decreasing inertia weight [2], fuzzy adaptive inertia weight [3], random inertia weight [4], and adaptive inertia weight based on velocity information [5] can all enhance the performance of PSO. Concerning acceleration coefficients, the time-varying acceleration coefficients [6] are widely used. Clerc and Kennedy analyzed the convergence behavior by introducing a constriction factor [7], which has been proved to be equivalent to the inertia weight [8].

(2) Hybrid PSO, which hybridizes PSO with other heuristic operators to increase population diversity. Genetic operators have been hybridized with PSO, such as the selection operator [9], crossover operator [10], and mutation operator [11]. Similarly, the differential evolution algorithm [12], ant colony optimization [13], and local search strategies [14] have been introduced into PSO.

(3) Changing the topological structure. The global and local versions of PSO are the main types of swarm topologies. The global version converges fast but has the disadvantage of trapping in local optima, while the local version can obtain a better solution with slower convergence [15]. The Von Neumann topology is helpful for solving multimodal problems and may perform better than other topologies, including the global version [16].

(4) Eliminating the velocity formula. Kennedy proposed the bare-bones PSO (BPSO) [17] and variants of BPSO [18, 19]. Sun et al. proposed quantum-behaved PSO (QPSO) and relative convergence analysis [20, 21].

In recent years, some modified PSO algorithms have greatly enhanced the performance of PSO. For example, Zhan et al. proposed the adaptive PSO (APSO) [22] and Wang et al. proposed the diversity enhanced particle swarm optimization with neighborhood search (DNSPSO) [23]. The former introduces an evolutionary state estimation (ESE) technique to adaptively adjust the inertia weight and acceleration coefficients. The latter employs a diversity-enhancing mechanism and neighborhood-based search strategies to carry out a tradeoff between exploration and exploitation.

Although the various variants of PSO have enhanced its performance, there are still problems such as difficult implementation, new parameters to adjust, or high computation cost. It is therefore necessary to investigate how to trade off the exploration and exploitation abilities of PSO, reduce the sensitivity to the parameters of the solved problems, and improve the convergence accuracy and speed with the least computation cost and easy implementation. To achieve these targets, in this paper the global worst position (solution) is introduced into the velocity equation of the standard PSO (SPSO), in what we call impelled/penalized learning according to the sign of the corresponding weight coefficient. Meanwhile, we eliminate the two acceleration coefficients $c_1$ and $c_2$ from the SPSO to reduce the sensitivity to the parameters of the solved problems. The resulting HPSO has been applied to a set of nonlinear benchmark functions, comprising unimodal, multimodal, rotated, and shifted high-dimensional functions, to confirm its high performance by comparison with other well-known modified PSO algorithms.

The remainder of the paper is structured as follows. In Section 2, the standard particle swarm optimization (SPSO) is introduced. The proposed HPSO is given in Section 3. Experimental studies and discussion are provided in Section 4. Some conclusions are given in Section 5.

2. Standard PSO (SPSO)

PSO is inspired by the behavior of bird flocking and fish schooling; it was first introduced by Kennedy and Eberhart in 1995 [1] as a new heuristic algorithm. In the standard PSO (SPSO) [2], a swarm consists of a set of particles, and each particle represents a potential solution of an optimization problem. Consider the $i$th particle of a swarm with $N$ particles in a $D$-dimensional space; its position and velocity at iteration $t$ are denoted by $X_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{iD}^t)$ and $V_i^t = (v_{i1}^t, v_{i2}^t, \ldots, v_{iD}^t)$. The new velocity on the $j$th dimension of this particle at iteration $t+1$ is calculated by

$$v_{ij}^{t+1} = w\,v_{ij}^{t} + c_1 r_1 \left(pbest_{ij}^{t} - x_{ij}^{t}\right) + c_2 r_2 \left(gbest_{j}^{t} - x_{ij}^{t}\right), \quad (1)$$

where $i = 1, 2, \ldots, N$ and $N$ is the population size; $j = 1, 2, \ldots, D$ and $D$ is the dimension of the search space; $r_1$ and $r_2$ are two uniformly distributed random numbers in the interval $[0, 1]$; and the acceleration coefficients $c_1$ and $c_2$ are nonnegative constants which control the influence of the cognitive and social components during the search process. $pbest_i^t$, called the personal best solution, represents the best solution found by the $i$th particle itself until iteration $t$; $gbest^t$, called the global best solution, represents the best solution found by all particles until iteration $t$. $w$ is the inertia weight that balances the global and local search abilities of the particles in the search space, and it is given by

$$w = w_{\max} - \left(w_{\max} - w_{\min}\right)\frac{t}{t_{\max}}, \quad (2)$$

where $w_{\max}$ is the initial weight, $w_{\min}$ is the final weight, $t$ is the current iteration number, and $t_{\max}$ is the maximum iteration number. The particle's position is then updated using

$$x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}, \quad (3)$$

and checked against $x_{j,\min} \le x_{ij}^{t+1} \le x_{j,\max}$, where $x_{j,\min}$ and $x_{j,\max}$ represent the lower and upper bounds of the $j$th variable, respectively.
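To make the update rules concrete, the following NumPy sketch performs one SPSO iteration of (1)-(3); the function name, argument list, and vectorized form are our own illustration and are not part of the original formulation.

```python
import numpy as np

def spso_step(X, V, pbest, gbest, w, c1=2.0, c2=2.0,
              v_min=None, v_max=None, x_min=None, x_max=None):
    """One SPSO iteration of (1)-(3) for a swarm X (N x D) with velocities V (N x D)."""
    N, D = X.shape
    r1 = np.random.rand(N, D)          # uniform random numbers in [0, 1]
    r2 = np.random.rand(N, D)
    # (1): inertia + cognitive (pbest) + social (gbest) components
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    if v_min is not None:
        V = np.clip(V, v_min, v_max)   # velocity boundary check
    X = X + V                          # (3): position update
    if x_min is not None:
        X = np.clip(X, x_min, x_max)   # position boundary check
    return X, V
```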

3. Human Behavior-Based PSO (HPSO)

In this section, a modified version of SPSO based on human behavior, called HPSO, is proposed to improve the performance of SPSO. In SPSO, all particles learn only from the best solutions $pbest$ and $gbest$. Obviously, this is an idealized social situation. Considering human behavior, however, there are always some people around us with bad habits or behaviors, and, as we all know, these bad habits or behaviors affect the people around them. If we take warning from these bad habits or behaviors, they are beneficial to us; conversely, if we learn from them, they are harmful to us. Therefore, we must take an objective and rational view of these bad habits or behaviors.

In HPSO, we introduce the global worst particle, which has the worst fitness in the entire population at each iteration. It is denoted as $gworst^t$ and, for a minimization problem, is defined as

$$gworst^{t} = \arg\max_{X_i^{t},\; i = 1, 2, \ldots, N} f\left(X_i^{t}\right), \quad (4)$$

where $f(\cdot)$ represents the fitness value of the corresponding particle.
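The following small helper, written for illustration only (the function name is ours), extracts both extremes of the current population under this definition.

```python
import numpy as np

def global_best_worst(X, f):
    """Return (gbest, gworst) of the swarm X (N x D) when minimizing f, as in (4).

    For a minimization problem, the global worst particle is simply the one
    with the largest objective value in the current population.
    """
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)].copy(), X[np.argmax(fit)].copy()
```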

To simulate human behavior and make full use of $gworst^t$, we introduce a learning coefficient $r_3$, which is a random number obeying the standard normal distribution; that is, $r_3 \sim N(0, 1)$. If $r_3 > 0$, we consider it an impelled learning coefficient, which increases the "flying" velocity of the particle and therefore enhances its exploration ability. Conversely, if $r_3 < 0$, we consider it a penalized learning coefficient, which decreases the "flying" velocity of the particle and is therefore beneficial for enhancing exploitation. If $r_3 = 0$, the bad habits or behaviors have no effect on the particle. Meanwhile, in order to reduce the sensitivity to the parameters of the solved problems, we replace the two acceleration coefficients $c_1$ and $c_2$ with two random learning coefficients $r_1$ and $r_2$, respectively. Therefore, the velocity equation is changed as follows:

$$v_{ij}^{t+1} = w\,v_{ij}^{t} + r_1 \left(pbest_{ij}^{t} - x_{ij}^{t}\right) + r_2 \left(gbest_{j}^{t} - x_{ij}^{t}\right) + r_3 \left(gworst_{j}^{t} - x_{ij}^{t}\right), \quad (5)$$

where $r_1$ and $r_2$ are two random numbers in the range $[0, 1]$ with $r_1 + r_2 = 1$. The random numbers $r_1$, $r_2$, and $r_3$ are the same for all dimensions but different for each particle, and they are generated anew in each iteration. If the velocity overflows its boundary, it is set to the corresponding boundary value:

$$v_{ij}^{t+1} = \begin{cases} v_{j,\min}, & v_{ij}^{t+1} < v_{j,\min}, \\ v_{j,\max}, & v_{ij}^{t+1} > v_{j,\max}, \end{cases} \quad (6)$$

where $v_{j,\min}$ and $v_{j,\max}$ are the minimum and maximum velocity on the $j$th dimension of the search space, respectively. Similarly, if a particle flies out of the search space, its position is limited to the corresponding bound value.
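As a sketch of (5) and (6), the snippet below drafts the HPSO velocity update in NumPy; following the text, it assumes that $r_1$, $r_2$, and $r_3$ are drawn once per particle and shared across dimensions.

```python
import numpy as np

def hpso_velocity(X, V, pbest, gbest, gworst, w, v_min, v_max):
    """HPSO velocity update of (5) with the boundary handling of (6)."""
    N, _ = X.shape
    # coefficients are drawn per particle and shared across all dimensions
    r1 = np.random.rand(N, 1)
    r2 = 1.0 - r1                  # r1 + r2 = 1 replaces the constants c1 and c2
    r3 = np.random.randn(N, 1)     # impelled (r3 > 0) or penalized (r3 < 0) learning
    V = (w * V
         + r1 * (pbest - X)
         + r2 * (gbest - X)
         + r3 * (gworst - X))
    return np.clip(V, v_min, v_max)
```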

In SPSO, the cognitive and social learning terms move a particle towards good solutions based on $pbest$ and $gbest$ in the search space, as shown in Figure 1. This strategy makes a particle fly quickly towards good solutions, so it is easy to become trapped in local optima. From Figure 2, we can clearly observe that both the impelled learning term and the penalized learning term give a particle the chance to change its flying direction. Therefore, the impelled/penalized learning term plays a key role in increasing the population diversity, which helps particles escape from local optima and enhances the convergence speed. In HPSO, the impelled/penalized learning term thus performs a proper tradeoff between exploration and exploitation.

To sum up, Figure 3 illustrates the flowchart of HPSO, and the pseudocode of HPSO is listed in Algorithm 1.

Initialize Parameters:
 N: the population size;
 D: the dimensionality of the search space;
 t_max: the number of maximum iterations;
 w: the inertia weight;
 the allowable position boundaries, [X_min, X_max];
 the allowable velocity boundaries, [V_min, V_max];
Initialize Population: for i = 1, 2, ..., N and j = 1, 2, ..., D:
 x_ij = x_j,min + rand(0, 1) * (x_j,max - x_j,min);
 v_ij = v_j,min + rand(0, 1) * (v_j,max - v_j,min);
Initialize pbest, gbest, and gworst:
 Evaluate the fitness of all particles in the swarm;
 pbest_i = X_i, i = 1, 2, ..., N;
 gbest = arg min_i f(X_i);
 gworst = arg max_i f(X_i);
For t = 1 to t_max
 For each particle i
  Update the velocity according to (5) and check the boundaries;
  Update the position according to (3) and check the boundaries;
 Endfor
 Evaluate the fitness of all particles in the swarm;
 Update pbest, gbest, and gworst;
Endfor
Return the best solution gbest.
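
To complement Algorithm 1, the following compact NumPy sketch assembles the whole loop. It is illustrative only: the velocity bounds are assumed to be half the variable range (the paper does not state this rule in text), scalar position bounds are assumed, and the worst particle is taken from the current positions at each iteration.

```python
import numpy as np

def hpso(f, D, x_min, x_max, N=30, t_max=1000, w_max=0.9, w_min=0.4):
    """Minimize f over [x_min, x_max]^D following Algorithm 1 (illustrative sketch)."""
    v_max = 0.5 * (x_max - x_min)          # assumed velocity bound, for illustration only
    v_min = -v_max
    X = x_min + np.random.rand(N, D) * (x_max - x_min)
    V = v_min + np.random.rand(N, D) * (v_max - v_min)
    fit = np.apply_along_axis(f, 1, X)
    pbest, pfit = X.copy(), fit.copy()
    gbest = X[np.argmin(fit)].copy()
    gworst = X[np.argmax(fit)].copy()
    for t in range(t_max):
        w = w_max - (w_max - w_min) * t / t_max      # (2): linearly decreasing weight
        r1 = np.random.rand(N, 1)                    # per-particle coefficients
        r2 = 1.0 - r1
        r3 = np.random.randn(N, 1)
        V = w * V + r1 * (pbest - X) + r2 * (gbest - X) + r3 * (gworst - X)   # (5)
        V = np.clip(V, v_min, v_max)                 # (6)
        X = np.clip(X + V, x_min, x_max)             # (3) with position bounds
        fit = np.apply_along_axis(f, 1, X)
        improved = fit < pfit
        pbest[improved], pfit[improved] = X[improved], fit[improved]
        gbest = pbest[np.argmin(pfit)].copy()
        gworst = X[np.argmax(fit)].copy()            # worst particle of this iteration
    return gbest, pfit.min()

# usage: the 30-dimensional sphere function
best_x, best_val = hpso(lambda x: float(np.sum(x**2)), D=30, x_min=-100.0, x_max=100.0)
```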

4. Experimental Studies and Discussion

To evaluate the performance of HPSO, 28 minimization benchmark functions are selected [22, 24, 25] as detailed in Section 4.1. HPSO is compared with SPSO in different search spaces and the results are given in Section 4.2. In addition, HPSO is compared with some well-known variants of PSO in Section 4.3.

4.1. Benchmark Functions

In the experimental study, we choose 28 minimization benchmark functions, which consist of unimodal, multimodal, rotated, shifted, and shifted rotated functions. Table 1 lists the main information; please refer to [22, 24, 25] for further details about these functions. As grouped in Table 1, the set contains unimodal functions; unrotated multimodal functions, whose number of local minima increases exponentially with the problem dimension; rotated functions; shifted functions, each using a randomly generated shift vector $o$ located in the search space; and shifted rotated multimodal functions. Among the unimodal functions, the Rosenbrock function is unimodal for $D = 2$ and $D = 3$ but may have multiple minima in higher-dimensional cases. To obtain a rotated function, an orthogonal matrix $M$ [26] is considered and the rotated variable $y = Mx$ is computed. Then, the vector $y$ is used to evaluate the objective function value.
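To illustrate how such test functions are constructed, the sketch below wraps a base function with a rotation and a shift. It is a hypothetical example: a random QR-based orthogonal matrix stands in for the matrix construction of [26], and the Rastrigin function and its search range are only used for demonstration.

```python
import numpy as np

def rotated(f, M):
    """Evaluate the base function f on the rotated variable y = M x."""
    return lambda x: f(M @ x)

def shifted(f, o):
    """Move the optimum of f to the randomly generated shift vector o."""
    return lambda x: f(x - o)

# hypothetical example: a shifted rotated Rastrigin function in D dimensions
D = 30
M, _ = np.linalg.qr(np.random.randn(D, D))     # random orthogonal matrix (illustrative)
o = np.random.uniform(-5.12, 5.12, D)          # random shift vector in the search range
rastrigin = lambda x: float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
f = shifted(rotated(rastrigin, M), o)          # f(x) = rastrigin(M (x - o))
```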

4.2. Comparison of HPSO with SPSO

The convergence accuracy of HPSO is compared with that of SPSO on the test functions listed in Table 1. For a fair comparison, the same parameter values are used. The population size is set to 30 ($N = 30$), and the upper bounds of the velocity, $V_{\max}$, and the corresponding lower bounds, $V_{\min}$, are derived from the lower and upper bounds of the variables, $X_{\min}$ and $X_{\max}$, respectively. The inertia weight is linearly decreased from 0.9 to 0.4 in both SPSO and HPSO. The acceleration coefficients $c_1$ and $c_2$ in SPSO are set to 2. The two algorithms are independently run 30 times on the benchmark functions. The results in terms of the best, worst, median, mean, and standard deviation (SD) of the solutions obtained in the 30 independent runs by each algorithm in different search spaces are shown in Tables 2, 3, and 4. The maximum number of iterations is 1000 for $D = 30$, 2000 for $D = 50$, and 3000 for $D = 100$, respectively.
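As a small usage example of the experimental protocol (not the paper's actual harness), the snippet below runs the illustrative hpso() sketch given after Algorithm 1 thirty times on one benchmark and reports the same statistics as Tables 2-4; the sphere function and its range are assumptions for demonstration.

```python
import numpy as np

# 30 independent runs of the illustrative HPSO sketch on the sphere function (D = 30)
runs = np.array([hpso(lambda x: float(np.sum(x**2)), D=30,
                      x_min=-100.0, x_max=100.0, t_max=1000)[1]
                 for _ in range(30)])
print("best   :", runs.min())
print("worst  :", runs.max())
print("median :", np.median(runs))
print("mean   :", runs.mean())
print("SD     :", runs.std(ddof=1))
```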

From Tables 2-4, we can clearly observe that the convergence accuracy of HPSO is better than that of SPSO on most of the benchmark functions. An interesting result is that HPSO finds the global optimal solutions of nine of the functions in all search spaces; that is to say, HPSO obtains a 100% success rate on those functions. For two further functions, although HPSO can find the global optimal solutions in all search ranges, it only obtains mean values of 333.3333 and 0.8333, respectively, in the 100-dimensional space. At the same time, HPSO offers higher convergence accuracy on ten more functions. However, SPSO performs better on one function; on another, SPSO performs better in the 30-dimensional search space while HPSO performs better in the 50- and 100-dimensional search spaces. On the shifted rotated functions, both SPSO and HPSO show the worst convergence accuracy. As can be seen, the dimension of the selected functions has a great effect on SPSO. For example, on one representative function SPSO has mean values of 666.6686, $3.6667 \times 10^{3}$, and $4.0698 \times 10^{4}$ in the 30-, 50-, and 100-dimensional search spaces, respectively, while HPSO has mean values of 0, 0, and 333.333 in the corresponding search spaces. Therefore, we also conclude from the data in the different search spaces that HPSO has better stability than SPSO.

In the 9th columns of Tables 2-4, we report the statistical significance of the difference of the means of the two algorithms. Note that "+" indicates that the difference is significant at the 0.05 level by a two-tailed test, and "-" indicates that the difference of the means is not statistically significant.
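A minimal sketch of how such a mark can be computed is given below. The paper only states a two-tailed test at the 0.05 level, so Welch's t-test (via SciPy) is assumed here as one plausible choice; the function name and inputs (the 30-run result arrays of the two algorithms) are ours.

```python
from scipy import stats

def significance_mark(hpso_runs, spso_runs, alpha=0.05):
    """Return '+' if the difference of the means is significant, else '-'.

    Welch's two-tailed t-test on the two samples of 30 independent runs
    is assumed here; the paper does not name the exact test.
    """
    _, p_value = stats.ttest_ind(hpso_runs, spso_runs, equal_var=False)
    return "+" if p_value < alpha else "-"
```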

Figure 4 graphically presents the comparison in terms of the convergence characteristics of the evolutionary processes in solving the selected benchmark functions in the 30-dimensional search space, with $N = 30$ and a maximum of 1000 iterations.

4.3. Comparison of HPSO with Other PSO Algorithms

In this section, a comparison of HPSO with some well-known PSO algorithms which are listed in Table 5 is performed to evaluate the efficiency of the proposed algorithm.

At first, we choose 10 unimodal and multimodal test functions for this evaluation. According to [22], the algorithms GPSO [2], LPSO [16], VPSO [27], FIPS [28], HPSO-TVAC [6], DMS-PSO [29], CLPSO [24], and APSO [22] are considered, as detailed in Table 5. The experimental results of these algorithms are taken directly from [22] and are shown in Table 6. In this trial, the population size, the dimension, and the maximum number of fitness evaluations (FEs) were also set as in [22]. The parameter configurations of the selected algorithms follow their corresponding references. The inertia weight is linearly decreased from 0.9 to 0.4 in HPSO. HPSO is independently run 30 times, and the mean and SD are shown in Table 6. As can be seen, HPSO ranks first among the algorithms; it obtains the global minimum on six of the functions and gives good near-global optima on two others. Meanwhile, HPSO has the worst performance on two of the functions: on one of them APSO has the best convergence accuracy and HPSO only outperforms CLPSO, and on the other DMS-PSO has the best performance.

Then, in the next step, we choose six functions from [25] and the algorithms GPSO, QIPSO [30], UPSO [31], FIPS, AFSO [25], and AFSO-Q1 [25], as detailed in Table 5. For a fair comparison, the population size, the dimension, and the maximum number of iterations in HPSO are also set as in [25], and the inertia weight is linearly decreased from 0.9 to 0.4. HPSO is independently run 30 times, and the mean and SD are shown in Table 7. As can be seen, HPSO shows better performance and ranks first. HPSO finds the global optimal solution on four of the six functions; FIPS and UPSO have better convergence accuracy on the remaining two functions, respectively.

Therefore, it is worth noting that the proposed algorithm performs considerably better than the other well-known PSO algorithms on unimodal and multimodal high-dimensional functions.

5. Conclusion

In this paper, a modified version of PSO called HPSO has been introduced to enhance the performance of SPSO. To simulate human behavior, the global worst particle is introduced into the velocity equation of SPSO, and the associated learning coefficient, which obeys the standard normal distribution, balances the exploration and exploitation abilities by changing the flying direction of particles. When the coefficient is positive, it is called the impelled learning coefficient, which helps to enhance the exploration ability; when the coefficient is negative, it is called the penalized learning coefficient, which is beneficial for improving the exploitation ability. At the same time, the acceleration coefficients $c_1$ and $c_2$ have been replaced with two random numbers $r_1$ and $r_2$ in $[0, 1]$ whose sum is equal to 1; this strategy decreases the dependence on the parameters of the solved problems. The proposed algorithm has been evaluated on 28 benchmark functions including unimodal, unrotated multimodal, rotated, shifted, and shifted rotated functions, and the experimental results confirm the high performance of HPSO on most of these functions. However, HPSO shows the worst performance on the shifted rotated functions, so how to enhance the performance of HPSO on such functions is worth researching in the future. Applying HPSO to solve real-world problems is also a promising research direction.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The project is supported by the National Natural Science Foundation of China (Grant no. 61175127) and the Science and Technology Project of Department of Education of Jiangxi Province China (Grant no. GJJ12093).