Research Article  Open Access
A Quantum Particle Swarm Optimization Algorithm with Teamwork Evolutionary Strategy
Abstract
The quantum-behaved particle swarm optimization (QPSO) algorithm is a global-convergence-guaranteed algorithm. Its search performance is better than that of the original particle swarm optimization (PSO) algorithm and it has fewer control parameters, but it easily falls into local optima. This paper proposes a teamwork evolutionary strategy to balance global search and local search. The algorithm is based on a novel learning strategy consisting of a cross-sequential quadratic programming operator and a Gaussian chaotic mutation operator. The former performs a local search around a sample and an interlaced operation on the parent individual, while the descendants generated by the latter's Gaussian chaotic mutation may reach new regions of the search space. Experiments performed on multimodal test and composite functions, with and without coordinate rotation, demonstrate that the TEQPSO algorithm utilizes population information more effectively than eight QPSO and PSO variants, which significantly improves its performance.
1. Introduction
With the development of real-life engineering technology, the related optimization problems have become more and more complicated. Experience shows that using traditional methods to solve these complex problems is inefficient. To cope with this limitation, a variety of artificial intelligence algorithms have been proposed and applied to optimization problems, such as the particle swarm optimization (PSO) based memetic algorithm (MA) [1], the simulated annealing algorithm (SABA) [2], ant colony optimization (MSCRSP–ACO) [3], the bat algorithm (BA) [4], the firefly algorithm [5], biogeography-based optimization (BBO) [6], the cuckoo search algorithm [7, 8], charged system search (CSS) [9], the gravitational search algorithm [10], the grey wolf optimizer (GWO) [11], the ant colony optimization (ACO) algorithm [12], and a new adaptive ant algorithm [13]. Particle swarm optimization belongs to a broader class of swarm intelligence methods for solving global optimization problems. The method was originally proposed by Kennedy to simulate the social behaviour of bird flocks and was first introduced as an optimization method in 1995. Unlike other evolutionary algorithms that use evolutionary operators to manipulate individuals, PSO relies on the exchange of information between individuals. Each particle in the particle swarm algorithm flies through the search space at a certain velocity, which is dynamically adjusted according to the flight experience of the particle itself and its companions.
Since 1995, many attempts have been made to improve the performance of PSO by experts and scholars [14, 15]. However, Van den Bergh [16] proved that PSO is not a global optimization algorithm. Sun et al. [17] combined quantum theory with the particle swarm optimization algorithm to establish the quantum-behaved particle swarm optimization algorithm (QPSO), which guarantees finding the global optimal solution in the search space. Experimental results on various benchmark functions demonstrate that QPSO improves on the standard PSO algorithm. The global convergence of QPSO guarantees that the global optimal solution is found in the case of infinitely many search iterations. In practical problems, however, this condition is unrealistic because any optimization algorithm is allowed only a limited number of iterations to search for the optimal solution. When solving complex problems, QPSO is also prone to falling into local optima or slow convergence. Experts and scholars have therefore developed various strategies to improve the convergence speed and global search performance of QPSO. Liu et al. [18] introduced simulated annealing into the QPSO algorithm; Chen et al. [19] presented an improved GSO optimizer with a quantum-behaved operator for scroungers; Li et al. [20] presented cooperative quantum-behaved particle swarm optimization (CQPSO); Li et al. [21] proposed a dynamic-context cooperative quantum-behaved particle swarm optimization algorithm; Long et al. [22, 23] proposed the parallel diversity-controlled PDC-QPSO algorithm and introduced diversity maintenance into QPSO; Coelho [24, 25] presented a novel quantum-behaved PSO (QPSO) using a chaotic mutation operator and a mutation operator with a Gaussian probability distribution; Qian et al. [26] presented a hybrid improved QPSO algorithm (LTQPSO); Tian et al. [27] proposed quantum-behaved particle swarm optimization to solve the inverse advection–dispersion problem.
However, it is quite difficult to improve global search capability and speed up convergence at the same time. If the algorithm tries to avoid falling into local optima, the convergence speed may decrease; the QPSO algorithm therefore needs to be specifically modified to suit the practical optimization problem at hand.
In swarm intelligence algorithms, balancing global and local search ability, that is, the balance between exploration and gain, is a key issue. Particle swarms have shortcomings in this respect. In terms of exploration, fast convergence easily leads to premature convergence; in terms of gain, the single search mode of the particle swarm yields insufficient convergence precision; and in multi-objective particle swarms, frequent replacement of the global optimal solution makes the exploration-exploitation problem even more prominent. Accordingly, this paper proposes a new cooperative evolutionary strategy teamwork particle swarm optimization algorithm (TEQPSO) for solving nonlinear numerical problems, including unimodal and multimodal test functions. The main difference from the comprehensive learning particle swarm optimizer (CLPSO) [28] and traditional QPSO is that TEQPSO uses a cooperative learning strategy composed of two operators, the cross-sequential quadratic programming operator and the Gaussian chaotic mutation operator, to generate local attractors for each particle, and a probability factor is employed to control which of the two operators is applied, realizing a balance between exploration and gain in the teamwork evolutionary strategy.
2. Teamwork Evolutionary Strategy Quantum Particle Swarm Optimization
2.1. Cooperative Evolutionary Strategy Quantum Particle Swarm Optimization
Consider a particle moving in a one-dimensional δ potential well centred at point p. Its position X(t + 1) can be calculated from the following equation:

X(t + 1) = p ± (L/2) ln(1/u)

where p is the particle motion centre, called the attractor of the particle in the QPSO algorithm; L is the characteristic length of the potential well, whose value directly affects the convergence speed and searching ability of the algorithm; and u is a random number with a uniform distribution on the range (0, 1).
Parameter L should be appropriately determined through the QPSO algorithm. It can be calculated from the following equation:

L_i(t) = 2α |P_i(t) − X_i(t)|

where P_i(t) is the optimal position in the search history of particle i and α is the Contraction-Expansion (CE) factor, which should be decreased while the algorithm runs.
In the QPSO algorithm, each particle takes the weighted average of its individual historical optimal position and the group's historical optimal position as its own attraction point. This calculation follows from the analysis of the particle motion trajectory [29, 30]. Although simple, it has two obvious defects: (i) apart from learning from its own experience, each particle's position depends on the historical optimal position of the group, which leads to a rapid decline in population diversity and reduces the algorithm's capability for solving complex multimodal optimization problems; (ii) the possible distribution space of each particle's attraction point gradually shrinks during evolution. The particles are confined to a hyper-rectangle with vertices at the personal best P_i and the global best G, and the attractor p_i gradually approaches G, so the algorithm cannot jump out of a local optimum in its final stage. To alleviate this, the attractor is constructed as

p_i(t) = φ P_i(t) + (1 − φ) P_kbest(t) + D_i(t)

where φ is a random number with a uniform distribution on (0, 1); the subscript kbest denotes a randomly selected particle among the best-fitness fraction of the population; and D_i(t) is a perturbation vector defined as

D_i(t) = P_b(t) − P_c(t)

where b and c index two randomly selected, distinct particles in the group (b ≠ c).
The following observation (update) equation for the particle position in the QPSO algorithm can then be obtained:

X_i(t + 1) = p_i(t) ± (L_i(t)/2) ln(1/u)
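The position update described above can be sketched in a few lines. This is a minimal illustration of the standard QPSO update; the variable names and the use of a precomputed mean-best position are our assumptions, not the paper's exact notation.

```python
import math
import random

def qpso_update(x, pbest, gbest, mbest, alpha=0.75):
    """One QPSO position update for a single particle.

    x      : current position (list of floats)
    pbest  : personal best position of this particle
    gbest  : global best position of the swarm
    mbest  : mean of all personal bests (characteristic-length reference)
    alpha  : Contraction-Expansion (CE) factor
    """
    new_x = []
    for j in range(len(x)):
        phi = random.random()                      # phi ~ U(0,1)
        p = phi * pbest[j] + (1 - phi) * gbest[j]  # local attractor
        L = 2 * alpha * abs(mbest[j] - x[j])       # characteristic length
        u = random.random() or 1e-16               # u ~ U(0,1), avoid log(inf)
        sign = 1 if random.random() < 0.5 else -1  # +/- with equal probability
        # X(t+1) = p +/- (L/2) * ln(1/u)
        new_x.append(p + sign * (L / 2) * math.log(1 / u))
    return new_x
```

In a full run, this update is applied to every particle at every iteration, with alpha typically decreased linearly over the run.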
2.2. TE Strategy
In the evolution procedure, useful information about individual particles and the global optimal position may be lost by the algorithm. In addition, movement of some attractors in a worse direction leads to poorer fitness in the next evolutionary step. Therefore, to improve the algorithm performance, the effective information in the individual and global optimal positions of the particles should be exploited through an appropriate method. To this end, a crossover operation and a local search are combined into the cross-sequential quadratic programming (CROSS-SQP) operator. In its final stage, the algorithm tends to fall into a local optimum, which means that the individual and global optimal positions of the particles in the population are very close to each other or even identical. Considering this problem, a Gaussian chaotic mutation operator is proposed to improve population diversity and help the algorithm jump out of local optima.
2.3. CROSS-SQP Operator
(1) Local Search Based on the SQP Algorithm. The SQP method [31] is a nonlinear programming method for constrained optimization. The SQP algorithm uses a quasi-Newton update, in which the Hessian of the Lagrangian function is approximated at each iteration. A quadratic programming subproblem is then generated to determine the search direction of the line search. The subproblem can be written in the standard form

min_d (1/2) d^T H_k d + ∇f(x_k)^T d
s.t. ∇g_i(x_k)^T d + g_i(x_k) ≤ 0, i = 1, …, m
where the subscript k is the current iteration number and H_k is a matrix that can be approximated by the quasi-Newton BFGS method. The solution d of the quadratic programming subproblem is employed as the line search direction for the next iteration.
In summary, the SQP method mainly includes three stages: (i) updating the Hessian matrix of the Lagrangian function; (ii) calculating the line search and performance function; and (iii) solving the quadratic programming problem.
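The three stages above are implemented by standard SQP solvers, so the local-search step can be sketched with an off-the-shelf routine. Using SciPy's SLSQP implementation is our choice for illustration; the paper does not specify a particular implementation, and the objective below is a hypothetical example.

```python
# A minimal SQP local search using SciPy's SLSQP routine, which performs the
# quasi-Newton Hessian update, QP subproblem, and line search internally.
from scipy.optimize import minimize

def sqp_local_search(f, x0, bounds=None):
    """Refine a candidate solution x0 by an SQP-style local search."""
    res = minimize(f, x0, method="SLSQP", bounds=bounds)
    return list(res.x), float(res.fun)

# Example: refine a rough candidate on a shifted sphere function.
x, fx = sqp_local_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2,
                         [0.0, 0.0], bounds=[(-5, 5), (-5, 5)])
```

In TEQPSO, such a local search would be applied to a particle's personal best to sharpen it before the crossover step.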
(2) Cross-Algorithm. The crossover operator is derived from the genetic algorithm [32]. Through the crossover operation, information is exchanged between individuals in the group; as a result, excellent genes are gradually retained during the evolutionary process and the group evolves in a good direction. To solve complex multimodal optimization problems with the QPSO algorithm, group diversity and algorithm performance are improved by incorporating the crossover operator into the algorithm. First, the measurement position X_i corresponding to particle i is generated. Then, X_i and the individual historical optimal position P_i are discretely crossed to generate a trial position T_i. The crossover equation is

T_{i,j} = X_{i,j}, if rand_j ≤ CR_i or j = j_rand; otherwise T_{i,j} = P_{i,j}
where rand_j is a random number with a uniform distribution on the interval (0, 1); j_rand is an integer generated uniformly in [1, D], with D the problem dimension; and CR_i is the crossover probability. This operation is similar to the binomial crossover in the differential evolution algorithm [33].
Finally, the updated historical optimal position of the particle is

P_i(t + 1) = T_i(t), if f(T_i(t)) < f(P_i(t)); otherwise P_i(t + 1) = P_i(t)

where f is the fitness function. Without loss of generality, this paper considers only minimization problems.
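The crossover and greedy-selection steps above can be sketched as follows. This is a minimal illustration of the binomial (DE-style) scheme described in the text; the function names are ours.

```python
import random

def binomial_crossover(x, pbest, cr):
    """Discretely cross the measured position x with the personal best pbest.

    Each component is taken from x with probability cr; the j_rand index
    guarantees at least one component comes from x (binomial scheme, as in
    differential evolution).
    """
    d = len(x)
    j_rand = random.randrange(d)
    return [x[j] if (random.random() <= cr or j == j_rand) else pbest[j]
            for j in range(d)]

def select(trial, pbest, f):
    """Greedy selection for minimization: keep the trial position only if
    it strictly improves the fitness."""
    return trial if f(trial) < f(pbest) else pbest
```

A trial position that does not improve the personal best is simply discarded, so the operator can never worsen P_i.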
Different optimization problems require different optimal crossover probabilities at different stages of evolution, so finding one appropriate value for all problems is difficult. In this paper, the parameter is encoded directly into each particle to obtain an adaptive control method. A particle in the extended coded group can be described as the pair (X_i, CR_i). For each particle in the group, the crossover probability is updated according to the adaptation rule

CR_i = rand, with probability τ; otherwise CR_i remains unchanged

where rand is uniform on (0, 1) and τ is the probability of updating parameter CR_i.
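The self-adaptive rule above can be written in one short function. This is a sketch of the jDE-style rule the text describes; the default value of tau is our assumption.

```python
import random

def update_cr(cr, tau=0.1):
    """Self-adaptive crossover probability: with probability tau, resample
    CR uniformly in (0, 1); otherwise keep the inherited value."""
    return random.random() if random.random() < tau else cr
```

Because each particle carries its own CR_i, good crossover settings survive with their particles and spread through the population without manual tuning.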
The individual historical optimal position P_i updated by the CROSS-SQP operator is then employed as the attractor of particle i.
2.4. Gaussian Chaotic Mutation Operator
A necessary condition for global convergence of the QPSO algorithm can be obtained through convergence analysis: each particle i converges on its attractor, whose expression is

p_i = φ P_i + (1 − φ) G, φ ∈ (0, 1)

where P_i is the historical optimal position of particle i and G is the global optimal particle position.
In the TEQPSO algorithm, the mutation is applied to each particle dimension separately. The chaotic mapping relationship is established from the distance between the particle's position and the global best position G, and the chaotic search range of each generated particle is adjusted dynamically at every iteration according to this distance.
The Gaussian chaotic variation in [34] can be employed to extend the single-objective PSO algorithm to the multi-objective case. Accordingly, the mutated position is built from a random number on the interval (0, 1), the distance of the particle from the global optimal position, and an introduced logistic chaotic map value z, calculated so that the variation has an omnidirectional ergodicity property:

z_{k+1} = 4 z_k (1 − z_k)

The initial value z_0 of the chaotic variable lies in (0, 1); X_max and X_min are the upper and lower bounds of the particle search space; and the Gaussian term is drawn from a zero-mean distribution whose spread decays over the iterations,

σ(t) = σ_0 (1 − t/T_max)

where t is the current iteration number, T_max is the maximum number of iterations, and σ_0 is the initial variance.
The detailed operation flow of the Gaussian chaotic mutation operator is shown in Table 2. Before applying the operator, the number of Gaussian mutations should be determined: if it is too large, the computation time of the algorithm increases greatly, while if it is too small, the probability that the algorithm jumps out of a local optimum decreases.
2.5. “Exploration” and “Gain” Analysis in Cooperative Evolution Strategy
“Exploration” in the algorithm refers to visiting regions of the search space not covered by the historical search, to enhance the global search capability, while the “gain” refers to refining solutions in already-visited areas to enhance the local search capability of the optimization algorithm [32]. The crossover operator and the SQP strategy are employed in the cross-sequential quadratic programming operator to strengthen the local search capability, that is, the “gain”, in the teamwork evolutionary strategy. The descendants generated by the Gaussian chaotic mutation operator may appear in new regions of the search space; therefore, the Gaussian chaotic operator corresponds to “exploration” in the teamwork evolutionary strategy.
The TEQPSO algorithm can be summarized in the following steps.

Step 1. Randomly generate the particle group in the decision space and initialize the global and individual optimal positions. Set the fitness function, the maximum number of iterations T_max, and the upper and lower bounds (X_max and X_min) of the search space for the particles.

Step 2. Calculate the current optimal position P_i of each particle i and the optimal position G of all particles. Set the counter S_i to zero for each particle i, where S_i counts the iterations for which the individual optimal position of particle i remains unchanged.

Step 3. Acquire the attractor p_i of particle i: (i) if S_i < T, construct p_i by the standard weighted-average rule; (ii) if S_i ≥ T and rand < p, obtain p_i by the CROSS-SQP operator, applying the corresponding steps given in Table 1; (iii) otherwise, obtain p_i by the Gaussian chaotic mutation operator, whose steps are illustrated in Table 2.

Step 4. Update the particle swarm and fitness values. If f(X_i(t + 1)) < f(P_i(t)), set P_i(t + 1) = X_i(t + 1); if f(P_i(t + 1)) < f(G(t)), set G(t + 1) = P_i(t + 1).

Step 5. Determine whether the algorithm satisfies the termination condition; if so, output G. Otherwise, return to Step 3.
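The overall flow can be sketched as a compact loop. This is our reconstruction of the control flow only: the two operators are replaced by simplified stand-ins (marked in the comments), so the sketch shows where the teamwork strategy plugs in rather than reproducing the paper's operators.

```python
import math
import random

def teqpso_sketch(f, dim, n, iters, bounds, p=0.15, T=9, alpha=0.75):
    """Skeleton of the TEQPSO flow: QPSO updates plus a stagnation-triggered
    teamwork step. CROSS-SQP and the Gaussian chaotic mutation are replaced
    by simplified stand-ins here."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    P = [x[:] for x in X]                       # personal bests
    G = min(P, key=f)[:]                        # global best
    stall = [0] * n                             # stagnation counters S_i
    for t in range(iters):
        mbest = [sum(P[i][j] for i in range(n)) / n for j in range(dim)]
        for i in range(n):
            if stall[i] >= T:
                # teamwork strategy: probability factor p chooses the operator
                if random.random() < p:
                    attractor = P[i][:]         # stand-in for CROSS-SQP
                else:                           # stand-in for Gaussian chaotic mutation
                    attractor = [g + random.gauss(0.0, 0.1) for g in G]
                stall[i] = 0
            else:
                phi = random.random()           # standard weighted-average attractor
                attractor = [phi * P[i][j] + (1 - phi) * G[j] for j in range(dim)]
            for j in range(dim):                # QPSO position update
                L = 2 * alpha * abs(mbest[j] - X[i][j])
                u = random.random() or 1e-16
                sign = 1 if random.random() < 0.5 else -1
                X[i][j] = min(max(attractor[j] + sign * (L / 2) * math.log(1 / u), lo), hi)
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                stall[i] = 0
            else:
                stall[i] += 1
            if f(P[i]) < f(G):
                G = P[i][:]
    return G
```

Running the sketch on a sphere function with the parameter values reported later in the paper (p = 0.15, T = 9) illustrates the intended behaviour: particles that stagnate for T iterations get a fresh attractor instead of drifting toward the shrinking rectangle around G.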
3. Test Functions and Settings
3.1. Test Function
To evaluate the performance of the algorithm, optimization functions defined in IEEE CEC 2014 are selected, as shown in Table 3. The MATLAB code for these functions can be downloaded from http://www.ntu.edu.sg/home/epnsugan/. Table 3 gives the main information about the relevant test functions. As can be seen from Table 3, the twelve test functions can be divided into four categories: unimodal functions, multimodal functions, rotated multimodal functions, and composite test functions. To construct a rotated multimodal function, an orthogonal matrix N is first built, and the variable x of the multimodal function is multiplied by N to obtain a new variable y = N x. This variable is then used to calculate the fitness value of the function [38].

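The coordinate rotation described above can be illustrated in two dimensions, where any rotation matrix is orthogonal. This is a small illustration of the y = N x construction; CEC 2014 itself uses precomputed higher-dimensional orthogonal matrices.

```python
import math

def rotate(x, theta):
    """Apply a 2-D orthogonal (rotation) matrix N to the input vector,
    yielding y = N x."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * x[0] - s * x[1], s * x[0] + c * x[1]]

def rotated(f, theta):
    """Wrap a benchmark f so it is evaluated on the rotated variable."""
    return lambda x: f(rotate(x, theta))
```

Because the rotation couples the coordinates, a separable function such as the sphere stays easy, but functions whose structure is axis-aligned become much harder for algorithms that search dimension by dimension.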
3.2. Algorithm Parameter Analysis
The TEQPSO algorithm mainly uses three new parameters: the probability factor p, the number of mutations c, and the stagnation factor T. These three parameters are analysed by selecting one function from each of three categories of test functions. The TEQPSO algorithm works effectively if appropriate values are chosen for p, c, and T. To find the optimal values of these parameters, two of them are held constant while the third is varied within its setting range; the parameter values giving the best fitness on the selected functions are chosen as optimal. To ensure the accuracy of the analysis and reduce statistical error, the experimental results given below are obtained from 50 independent runs of the algorithm, and for each test function every algorithm performs the same number of fitness function evaluations.
(1) Probability Factor p. In the TEQPSO algorithm, the probability factor p is introduced to decide whether to use the crossover operator or the Gaussian chaotic mutation operator, which makes an effective balance between the local and global search effects. To investigate the effect of different probability factors on the algorithm performance and find the best one, the number of mutations c = 8 and the stagnation factor T = 9 are selected. Figure 1(a) shows the average result of 50 independent runs of the TEQPSO algorithm. As can be seen from Figure 1(a), good results are obtained for most of the test functions with p = 0.15. Thus, p = 0.15 is a suitable value for the probability factor of the TEQPSO algorithm.
Figure 1: Parameter analysis of the TEQPSO algorithm: (a) probability factor p; (b) number of mutations c; (c) stagnation factor T.
(2) Number of Mutations c. During execution of the chaotic operator, the number of mutations determines how many chaotic variations are applied to the particles. If a large value is chosen for this parameter, the algorithm wastes too much computation time, while a small value reduces its capability to jump out of local optima. Therefore, an appropriate number of mutations should be chosen. Figure 1(b) gives the statistical results of the algorithm for different values of c, with the probability factor p = 0.15 and the stagnation factor T = 9. It can be seen from Figure 1(b) that superior results for most test functions are obtained by choosing c = 8.
(3) Stagnation Factor T. In the TEQPSO algorithm, if a particle does not change its individual optimal position within T iterations, the cooperative learning strategy is adopted to construct the attractor so that the particle jumps out of its current position. The stagnation factor T therefore controls how frequently attractors are constructed by the teamwork evolutionary strategy. To derive an appropriate stagnation factor, the probability factor p = 0.15 and the number of mutations c = 8 are selected. The results of the algorithm for different values of the stagnation factor are presented in Figure 1(c). It is obvious that the algorithm gives better results for most of the test functions with T = 9.
3.3. Related Algorithm Parameter Settings
We conduct two groups of experiments to test the standard QPSO [34], CSPSO [35], CLPSO [28], WQPSO [36], CQPSO [20], COQPSO [37], and the TEQPSO proposed in this paper. The parameter settings of the TEQPSO algorithm and the other algorithms are shown in Table 4. When solving the 10-dimensional (10D) problems, the population size is set to 10 and the maximum number of fitness evaluations (FEs) is set to 400000; when solving the 30-dimensional (30D) problems, the population size is set to 40 and the maximum FE count is also 400000. All experiments were performed 30 times, and the mean and standard deviation of the results are reported for each algorithm. Since fitness evaluation usually takes most of the running time and the compared PSO variants have similar computational efficiency, this paper does not compare the running times of these algorithms.
4. Experimental Results and Analysis
4.1. Results of 10Dimensional (10D) Problems
Table 5 shows the mean and variance of the seven algorithms over 30 runs on the eight test functions. As can be seen from Table 5, the TEQPSO algorithm obtains the highest-precision solution on the first unimodal function; the mean and standard deviation values of TEQPSO are 0.00e+00 and 0.00e+00, followed by WQPSO, COQPSO, CQPSO, QPSO, CLPSO, and CSPSO. The CLPSO algorithm performs best on the other unimodal function; the mean and standard deviation values of CLPSO are 6.18e+00 and 2.89e+00, followed by CSPSO, COQPSO, QPSO, WQPSO, TEQPSO, and CQPSO. The TEQPSO algorithm then achieves the best results on the multimodal functions, with mean and variance values of 0.00e+00 and 0.00e+00. On three of the multimodal functions, the other six algorithms fall into local optima and only TEQPSO obtains the best solution. These results show that the TEQPSO algorithm maintains better population diversity, enhances the ability to jump out of local optima, and achieves higher search accuracy on complex multimodal functions. The rotated multimodal functions are difficult for the other five algorithms to solve; only CLPSO and TEQPSO obtain good calculation results. CLPSO obtains the highest-precision solution on the first rotated function; the mean and standard deviation values of CLPSO are 6.41e-05 and 3.58e-05, followed by CSPSO, CQPSO, WQPSO, COQPSO, QPSO, and TEQPSO. TEQPSO obtains the most accurate solution on the second rotated function; the mean and standard deviation of TEQPSO are 1.88e+00 and 1.05e+00, followed by WQPSO, CQPSO, COQPSO, CLPSO, QPSO, and CSPSO. In summary, the quality of the TEQPSO solutions on six of the eight functions is significantly better than that of the other six algorithms.

To compare the convergence of the algorithms, Figures 2–9 show the convergence curves of the seven algorithms on the eight test functions. It can be seen from the figures that, compared with TEQPSO, the other four quantum particle swarm algorithms converge obviously faster but are also very prone to falling into local optima, while the other two particle swarm optimization algorithms converge slowly and perform better on unimodal functions. Therefore, after adopting the teamwork evolutionary strategy, the convergence speed of TEQPSO is slower than that of the other four quantum particle swarm optimization algorithms, but its ability to jump out of local optimal regions is significantly enhanced.
4.2. Results of the 30Dimensional (30D) Problem
Table 6 shows the mean and variance of seven algorithms running 30 times on the eight test functions. The best results of seven algorithms are shown in bold.

It can be seen from Table 6 that the TEQPSO algorithm obtains the most accurate solution on the first unimodal function and jumps out of the local optimum; the mean and standard deviation values of TEQPSO are 0.00e+00 and 0.00e+00, followed by WQPSO, COQPSO, CQPSO, QPSO, CLPSO, and CSPSO. The CSPSO algorithm performs best on the other unimodal function; the mean and standard deviation values of CSPSO are 6.78e-01 and 1.39e-01, followed by CLPSO, TEQPSO, CQPSO, WQPSO, QPSO, and COQPSO.
The TEQPSO algorithm then obtains the best results on the multimodal functions, with mean and standard deviation values of 7.40e-17 and 4.94e-17 on one and 0.00e+00 and 0.00e+00 on another. On three of the multimodal functions, the other six algorithms fall into local optima and only TEQPSO obtains the best solution. These results show that the TEQPSO algorithm maintains better population diversity, enhances the ability to jump out of local optima, and achieves higher search accuracy on complex multimodal functions. The rotated multimodal functions are difficult for the other five algorithms to solve; only WQPSO and TEQPSO can obtain good calculation results. WQPSO obtains the highest-precision solution on the first rotated function; the mean and standard deviation values of WQPSO are 8.27e-03 and 1.55e-03, followed by TEQPSO, CQPSO, CSPSO, QPSO, CLPSO, and COQPSO. TEQPSO obtains the highest-precision solution on the second rotated function; the mean and standard deviation values of TEQPSO are 3.77e+00 and 1.82e+00, followed by QPSO, COQPSO, WQPSO, CQPSO, CSPSO, and CLPSO. In summary, the precision of the TEQPSO solutions on six of the eight functions is significantly better than that of the other six algorithms. Since the convergence characteristics of the seven algorithms on the 30-dimensional test functions are similar to those in the 10-dimensional case, they are not shown in this paper.
4.3. Calculation Results Discussion
Analysing the experimental results of the TEQPSO algorithm in the 10D and 30D cases shows that the algorithm does not work particularly well on unimodal functions, but it gives superior results on multimodal functions compared with the other QPSO algorithms. It also jumps out of local optima and achieves better search accuracy on rotated multimodal functions, which is very helpful for solving complex problems. Consistent with the “No Free Lunch” theorem, the TEQPSO algorithm attains its higher search accuracy on multimodal and rotated multimodal functions at the cost of a convergence speed that is significantly slower than that of the other QPSO algorithms. Therefore, the TEQPSO algorithm offers clear advantages in solving multimodal and complex problems.
5. Conclusion
This paper proposes a QPSO algorithm with a teamwork evolutionary strategy. The algorithm adopts a novel learning strategy, namely, the teamwork evolutionary strategy, which consists of cross-sequential quadratic programming and Gaussian chaotic mutation operators. The new strategy allows particles to learn from more samples and to fly in a larger potential space. Analysis and experiments show that the teamwork evolutionary strategy increases the ability of the TEQPSO algorithm to utilize the information in the group. Comparison with the eight QPSO and PSO variants shows that the TEQPSO algorithm significantly improves performance on multimodal cost functions, while it is less competitive on unimodal cost functions.
Data Availability
The relevant raw data of this article can all be downloaded from http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC2014. All the test functions in this paper can also be found in J. J. Liang, B. Y. Qu, and P. N. Suganthan, “Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization.”
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
 B. Liu, L. Wang, and Y. Jin, “An effective PSObased memetic algorithm for flow shop scheduling,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 37, no. 1, pp. 18–27, 2007. View at: Publisher Site  Google Scholar
 H. Zhang, G. Bai, and C. Liu, “A broadcast path choice algorithm based on simulated annealing for Wireless Sensor Network,” in Proceedings of the IEEE Int Conf on Automation and Logistics, pp. 310–314, IEEE, Zhengzhou, China, 2012. View at: Publisher Site  Google Scholar
 Z. Zhang, N. Zhang, and Z. Feng, “Multisatellite control resource scheduling based on ant colony optimization,” Expert Systems with Applications, vol. 41, no. 6, pp. 2816–2823, 2014. View at: Publisher Site  Google Scholar
 A. H. Gandomi, X. S. Yang, A. H. Alavi, and S. Talatahari, “Bat algorithm for constrained optimization tasks,” Neural Computing and Applications, vol. 22, no. 6, pp. 1239–1255, 2013. View at: Publisher Site  Google Scholar
 A. H. Gandomi, X.S. Yang, and A. H. Alavi, “Mixed variable structural optimization using firefly algorithm,” Computers & Structures, vol. 89, no. 2324, pp. 2325–2336, 2011. View at: Publisher Site  Google Scholar
 D. Simon, “Biogeographybased optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 6, pp. 702–713, 2008. View at: Publisher Site  Google Scholar
 X. Li, J. N. Wang, and M. Yin, “Enhancing the performance of cuckoo search algorithm using orthogonal learning method,” Neural Computing and Applications, vol. 24, no. 6, pp. 1233–1247, 2014. View at: Publisher Site  Google Scholar
 X. T. Li and M. H. Yin, “A particle swarm inspired cuckoo search algorithm for real parameter optimization,” Soft Computing, vol. 20, no. 4, pp. 1389–1413, 2016. View at: Publisher Site  Google Scholar
 A. Kaveh and S. Talatahari, “A novel heuristic optimization method: charged system search,” Acta Mechanica, vol. 213, no. 34, pp. 267–289, 2010. View at: Publisher Site  Google Scholar
 E. Rashedi, H. Nezamabadipour, and S. Saryazdi, “GSA: a gravitational search algorithm,” Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009. View at: Publisher Site  Google Scholar
 S. Mirjalili, S. M. Mirjalili, and A. Lewis, “Grey wolf optimizer,” Advances in Engineering Software, vol. 69, pp. 46–61, 2014. View at: Publisher Site  Google Scholar
 Q. Li, C. Zhang, P. Chen et al., “Improved ant colony optimization algorithm based on particle swarm optimization,” Control and Decision, vol. 28, no. 6, pp. 873–878, 2013. View at: Publisher Site  Google Scholar
 C. Zhang, L. Tu, and J. Wang, “Application of Self adaptive ant colony optimization in TSP,” Journal of Central South University: Science and Technology, vol. 46, no. 8, pp. 2944–2949, 2015. View at: Google Scholar
 A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Selforganizing hierarchical particle swarm optimizer with timevarying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004. View at: Publisher Site  Google Scholar
 L. Messerschmidt and A. P. Engelbrecht, “Learning to play games using a PSObased competitive learning approach,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 280–288, 2004. View at: Publisher Site  Google Scholar
 F. Van den Bergh, An Analysis of Particle Swarm Optimizers [Ph.D. thesis], University of Pretoria, November 2001.
 J. Sun, C.-H. Lai, W.-B. Xu, Y. Ding, and Z. Chai, “A modified quantum-behaved particle swarm optimization,” in Proceedings of the 7th International Conference on Computational Science (ICCS '07), pp. 294–301, Beijing, China, May 2007.
 J. Liu, J. Sun, and W. B. Xu, “Improving quantum-behaved particle swarm optimization by simulated annealing,” Computational Intelligence and Bioinformatics, pp. 130–136, 2006.
 D. Chen, J. Wang, F. Zou, W. Hou, and C. Zhao, “An improved group search optimizer with operation of quantum-behaved swarm and its application,” Applied Soft Computing, vol. 12, no. 2, pp. 712–725, 2012.
 Y. Li, R. Xiang, L. Jiao, and R. Liu, “An improved cooperative quantum-behaved particle swarm optimization,” Soft Computing, vol. 16, no. 6, pp. 1061–1069, 2012.
 Y. Li, L. Jiao, R. Shang, and R. Stolkin, “Dynamic-context cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation,” Information Sciences, vol. 294, pp. 408–422, 2015.
 H. Long, S. Wu, and H. Fu, “Parallel diversity-controlled quantum-behaved particle swarm optimization algorithm,” in Proceedings of the 2014 Tenth International Conference on Computational Intelligence and Security (CIS), pp. 74–79, Kunming, China, November 2014.
 L. D. S. Coelho, “A quantum particle swarm optimizer with chaotic mutation operator,” Chaos, Solitons &amp; Fractals, vol. 37, no. 5, pp. 1409–1418, 2008.
 Q. Qian, M. Tokgo, C. Kim, C. Han, J. Ri, and K. Song, “A hybrid improved quantum-behaved particle swarm optimization algorithm using adaptive coefficients and natural selection method,” in Proceedings of the 7th International Conference on Advanced Computational Intelligence (ICACI 2015), pp. 312–317, Xiamen, China, March 2015.
 L. D. S. Coelho, “Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems,” Expert Systems with Applications, vol. 37, no. 2, pp. 1676–1683, 2010.
 H. Long and S. Wu, “Quantum-behaved particle swarm optimization with diversity-maintained,” in Ecosystem Assessment and Fuzzy Systems Management, Berlin, Germany, 2014.
 N. Tian, J. Sun, W. Xu, and C.-H. Lai, “An improved quantum-behaved particle swarm optimization with perturbation operator and its application in estimating groundwater contaminant source,” Inverse Problems in Science and Engineering, vol. 19, no. 2, pp. 181–202, 2011.
 J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
 M. Clerc and J. Kennedy, “The particle swarm - explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
 R. B. Wilson, A Simplicial Algorithm for Concave Programming, Graduate School of Business Administration, Harvard University, Boston, Mass, USA, 1963.
 J. H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, Mass, USA, 1992.
 W. Gong, Z. Cai, and Y. Wang, “Repairing the crossover rate in adaptive differential evolution,” Applied Soft Computing, vol. 15, pp. 149–168, 2014.
 M. Han, J. Fan, and J. Wang, “A dynamic feedforward neural network based on Gaussian particle swarm optimization and its application for predictive control,” IEEE Transactions on Neural Networks, vol. 22, no. 9, pp. 1457–1468, 2011.
 J. Sun, B. Feng, and W. Xu, “Particle swarm optimization with particles having quantum behavior,” in Proceedings of the 2004 Congress on Evolutionary Computation, vol. 1, pp. 325–331, 2004.
 A. Meng, Z. Li, H. Yin, S. Chen, and Z. Guo, “Accelerating particle swarm optimization using crisscross search,” Information Sciences, vol. 329, pp. 52–72, 2016.
 M. Xi, J. Sun, and W. Xu, “An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position,” Applied Mathematics and Computation, vol. 205, no. 2, pp. 751–759, 2008.
 S. Lu and C. Sun, “Quantum-behaved particle swarm optimization with cooperative-competitive coevolutionary,” in Proceedings of the 2008 International Symposium on Knowledge Acquisition and Modeling (KAM), pp. 593–597, Wuhan, China, December 2008.
 R. Salomon, “Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions,” BioSystems, vol. 39, no. 3, pp. 263–278, 1996.
Copyright
Copyright © 2019 Guoqiang Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.