Abstract

An improved quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO) for unconstrained optimization is presented and empirically studied in this paper. In EB-QPSO, the novel elitist breeding strategy acts on the elitists of the swarm to escape from likely local optima and guide the swarm to perform a more efficient search. During the iterative optimization process of EB-QPSO, when the breeding criterion is met, the personal best of each particle and the global best of the swarm are used to generate new diverse individuals through the transposon operators. The newly generated individuals with better fitness are selected to be the new personal best particles and global best particle, guiding the swarm in further solution exploration. A comprehensive simulation study is conducted on a set of twelve benchmark functions. Compared with five state-of-the-art quantum-behaved particle swarm optimization algorithms, the proposed EB-QPSO performs more competitively on all of the benchmark functions in terms of better global search capability and faster convergence rate.

1. Introduction

Particle swarm optimization (PSO), inspired by the social behavior of bird flocks [1], is an important and widely used population-based stochastic algorithm. Unlike evolutionary algorithms, PSO is computationally inexpensive and its implementation is straightforward. Each potential solution in PSO, represented by a particle, flies in a multidimensional search space with a velocity dynamically adjusted according to the particle's own former information and the experience of the other particles. Owing to these advantages, PSO has developed rapidly in recent years, with applications to many real-world optimization problems [2–5].

However, as demonstrated by van den Bergh [6], PSO is not a guaranteed global convergence algorithm according to the convergence criteria in [7]. Based on quantum mechanics and the trajectory analysis of PSO [8], Sun et al. [9] proposed a variant of PSO, the quantum-behaved PSO (QPSO) algorithm, which has been theoretically proved to be globally convergent using the theory of Markov processes [10, 11]. The global convergence of QPSO guarantees that the global optimal solution will be found given an unlimited number of search iterations. Nevertheless, such a condition is unrealistic for real-world problems, as only a finite number of iterations is available to any optimization algorithm searching for the optimal solution. Thus, QPSO is also likely to be trapped in local optima, or to converge slowly, when used to solve complex problems. So far, many researchers have developed various strategies to improve the performance of QPSO in terms of convergence speed and global optimality [12–21]. However, it is rather difficult to improve the global search capability and accelerate the convergence rate simultaneously: any attempt that focuses on avoiding being stuck at local optima is likely to suffer a slower convergence rate.

In PSO, the personal best (pbest) of each particle and the global best (gbest) of the swarm found so far in the search process can be considered the elitists of the whole swarm at any search iteration. In most current QPSOs, the information of the elitists is used either directly or with some simple extra processing to guide the flying behavior of each particle in the search space; to the best of our knowledge, deep exploration of the elitists to assist the search for solutions has not been considered in any QPSO work reported in the literature. Exploration of the elitists can produce extra information that may be beneficial to the search for the optimal solution.

In this study, a novel variant of the QPSO algorithm, called quantum-behaved particle swarm optimization with elitist breeding (EB-QPSO), is proposed with the aim of achieving better global search capability and convergence rate by employing a breeding scheme based on transposon operators over the elitists of the swarm. Elitist breeding is an advanced elitist exploration method that treats the elitist members as parents to create new diverse individuals with transposon operators. On one hand, elitist breeding helps to diversify the particle swarm during the search and thus enhances the global search capability of the algorithm. On the other hand, the newly bred individuals with better fitness are selected as new members of the elitists and used to guide the swarm to perform exploration and exploitation more efficiently. Experimental results on twelve benchmark functions show that EB-QPSO outperforms the original QPSO and four other state-of-the-art QPSO variants.

The rest of this paper is organized as follows. A brief introduction of QPSO is presented in Section 2. In Section 3, an overview of related work is given. The proposed EB-QPSO algorithm is elaborated and compared with various existing QPSO algorithms over twelve benchmark functions in Sections 4 and 5, respectively. Finally, the general conclusions of the paper are given in Section 6.

2. Quantum-Behaved Particle Swarm Optimization

In the original PSO, each particle is defined by a position vector which signifies a solution in the search space and is associated with a velocity vector responsible for the exploration of the search space. Let $N$ denote the swarm size and $D$ the dimensionality of the search space. During the evolutionary process, the velocity and the position of each particle are updated with the following rules:

$$v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_1 \big(P_{ij}(t) - x_{ij}(t)\big) + c_2 r_2 \big(G_j(t) - x_{ij}(t)\big),$$
$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \qquad (1)$$

where $i = 1, 2, \ldots, N$ and $j = 1, 2, \ldots, D$; $v_{ij}(t)$ and $x_{ij}(t)$ are the $j$th dimension components of the velocity and position of particle $i$ in search iteration $t$, respectively; $P_{ij}(t)$ and $G_j(t)$ are the $j$th dimensions of the personal best of particle $i$ and the global best of the swarm in search iteration $t$, respectively; $w$ is the inertia weight; $c_1$ and $c_2$ are two positive constant acceleration coefficients; and $r_1$ and $r_2$ are two random numbers uniformly distributed in the interval (0, 1).

According to the trajectory analysis given by Clerc and Kennedy [8], the convergence of the PSO algorithm may be achieved if each particle converges to its local attractor $p_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, of which the coordinates are defined as

$$p_{ij}(t) = \varphi_j\,P_{ij}(t) + (1 - \varphi_j)\,G_j(t), \qquad (2)$$

where $\varphi_j = c_1 r_1 / (c_1 r_1 + c_2 r_2)$.

The concept of QPSO was developed based on the analysis above. Each single particle in QPSO is treated as a spin-less one moving in a quantum space, and the probability of the particle appearing at position $X_{ij}(t)$ in search iteration $t$ is determined from a probability density function [22]. Employing the Monte Carlo method, each particle flies with the following rule:

$$X_{ij}(t+1) = p_{ij}(t) \pm \alpha\,\big|C_j(t) - X_{ij}(t)\big|\,\ln\big(1/u_{ij}(t)\big), \qquad (3)$$

where $\alpha$ is a parameter called the contraction-expansion (CE) coefficient; $u_{ij}(t)$ is a random number uniformly distributed on (0, 1), and the $\pm$ sign is taken with equal probability, determined by another uniform random number on (0, 1); $C(t)$ is a global virtual point called mainstream or mean best, defined as

$$C_j(t) = \frac{1}{N}\sum_{i=1}^{N} P_{ij}(t). \qquad (4)$$

A time-varying decreasing method [23] is usually adopted to control the contraction-expansion coefficient, defined as follows:

$$\alpha(t) = (\alpha_0 - \alpha_1)\,\frac{T - t}{T} + \alpha_1, \qquad (5)$$

where $\alpha_0$ and $\alpha_1$ are the initial and final values of $\alpha$, respectively; $T$ is the maximum number of iterations; and $t$ is the current search iteration number.

The QPSO algorithm has simpler evolution equations and fewer parameters than classical PSO, substantially facilitating parameter control and convergence in the search space. Without loss of generality, let $f(X)$ be the objective function to be minimized; the procedure for implementing QPSO is given in Algorithm 1.

(1)  Procedure of QPSO
(2)   For i = 1 to swarm size N
(3)    randomize the position X_i of each particle;
(4)    evaluate f(X_i); P_i = X_i;
(5)   Endfor
(6)   Do
(7)    G = argmin_{P_i} f(P_i);
(8)    compute C by (4);
(9)    For i = 1 to swarm size N
(10)    calculate p_i with (2);
(11)    update X_i with (3);
(12)    If f(X_i) < f(P_i)
(13)     P_i = X_i;
(14)    Endif
(15)   Endfor
(16)  Until maximum number of iterations is reached
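To make the procedure concrete, the following is a minimal NumPy sketch of Algorithm 1 and (1)–(5); the function and parameter names are our own, the per-dimension attractor weight is drawn uniformly from (0, 1) as a common simplification of (2), and the defaults are illustrative assumptions rather than values prescribed by the original paper.

```python
import numpy as np

def qpso(f, bounds, n_particles=20, max_iters=2000, alpha0=1.0, alpha1=0.5):
    """Minimal QPSO sketch following Algorithm 1 and (2)-(5).

    f      : objective function mapping a D-dim vector to a scalar
    bounds : (lb, ub) arrays of length D (illustrative interface)
    """
    lb, ub = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    D = lb.size
    X = lb + np.random.rand(n_particles, D) * (ub - lb)   # random init
    P = X.copy()                                          # personal bests
    fP = np.array([f(x) for x in P])

    for t in range(max_iters):
        G = P[np.argmin(fP)].copy()                       # global best
        C = P.mean(axis=0)                                # mean best, eq. (4)
        # linearly decreasing contraction-expansion coefficient, eq. (5)
        alpha = (alpha0 - alpha1) * (max_iters - t) / max_iters + alpha1
        for i in range(n_particles):
            phi = np.random.rand(D)                       # U(0,1) simplification of eq. (2)
            p = phi * P[i] + (1 - phi) * G                # local attractor
            u = np.random.rand(D)
            sign = np.where(np.random.rand(D) < 0.5, -1.0, 1.0)
            X[i] = p + sign * alpha * np.abs(C - X[i]) * np.log(1.0 / u)  # eq. (3)
            fx = f(X[i])
            if fx < fP[i]:                                # update personal best
                P[i], fP[i] = X[i].copy(), fx
    return P[np.argmin(fP)], fP.min()
```

For instance, `qpso(lambda x: np.sum(x**2), (np.full(30, -100.0), np.full(30, 100.0)))` would minimize the 30-dimensional sphere function under this sketch.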

3. Related Work

Since QPSO was proposed, it has attracted much attention, and different variants of QPSO have been proposed to enhance its performance from different aspects and applied to solve various real-world optimization problems. In this section, a brief review of these QPSO variants is presented. In general, most current QPSO variants can be classified into three categories, namely, improvements based on operators from other evolutionary algorithms, hybrid search methods, and cooperative methods.

To improve search efficiency, different operators derived from other evolutionary algorithms have been introduced into QPSO. The mutation operator provides diversity in the search for solutions and consequently enhances global search capability; therefore, various mutation operators based on the Cauchy probability distribution [24], Gaussian probability distribution [14, 25], Lévy probability distribution [20], and chaotic sequences [26] were proposed to improve the performance of QPSO in preventing premature convergence to local optima. Selection operators are beneficial in making good use of the information of the elitists and of the particles' positions in the last iteration. An elitist selection operator and a ranking strategy were introduced into QPSO to balance the convergence rate and global search ability in [27, 28]. In [15], the mbest in each particle's position update procedure is determined by a random selection operator over all the pbests of the whole swarm. Another approach, also proposed in [15], incorporates a ranking operator that selects a random particle whose personal best position replaces the global best position, so as to guide the particle to escape from local optima. Besides the mutation and selection operators mentioned above, DE operators [29] and a crossover operator [30] were also integrated into QPSO to enhance the particles' search capability.

Various search methods, both drawn from other optimization algorithms and newly proposed, have been incorporated into QPSO for better search efficiency. In [31], local search was incorporated into the main global search mechanism of QPSO to enhance its searching qualities. The chaotic search method was integrated into QPSO to diversify the swarm in the latter period of the search process, so as to reduce the likelihood of being stuck in local optima [32]. In [33, 34], an immune system was introduced into QPSO to effectively restrain degeneration in the process of optimization. Simulated annealing was incorporated into QPSO to improve its ability to jump out of local optima in [35].

The third category, cooperative methods, refers to searches performed with multiple swarms or approaches that optimize different components of each solution vector separately. In [36], two subswarms were used to search in different layers, and the cooperation of the subswarms leads to better performance in approximating the global optima. In [21, 37], a particle is split into several subparts and contributes to the population not only as a whole but also through subsets of its position vector.

4. The Proposed Algorithm

Numerous variants of QPSO have been proposed in the recent literature. In most QPSOs, elitists are simply stored in memory for solution comparison or passed through a simple processing step, such as mutation, for exploratory search. Nevertheless, to the best of our knowledge, existing QPSOs seldom explore the elitist memory deeply to extract further potential information from it, which might be a significant direction for improving the performance of the algorithm.

The memory of QPSO mainly consists of the elitist individuals, which are the personal best of each particle and the global best of the whole swarm. According to the iterative equations of QPSO, this memory is very important in affecting the behavior of the swarm and thus influences the performance of the algorithm. If the personal best particles and the global best particle get trapped in local optima, the algorithm will likely converge to local optima. On the other hand, if the individuals in memory are less likely to be stuck at local optima, the algorithm can achieve better solutions.

In fact, we believe that the information from the elitists can be put to better use in aiding the exploration and exploitation of the global optima. New subswarms can be generated from the elitists with a proper mechanism, aiming at better search efficiency in terms of solution quality and rate of convergence. An elitist exploration strategy, namely, elitist breeding, is proposed in this study. It makes use of the elitists generated during the evolutionary process of the algorithm to create new subswarms through the proposed breeding scheme. In the elitist breeding scheme, an elitist pool consisting of the personal best particles and the global best particle found so far is constructed, and then the transposon operators, which have the ability to enhance the diversity of solutions, are selected as the breeding operators to explore the elitist memory and extract more potential essence from the elitist individuals, thus improving the search efficiency of QPSO. Moreover, the mechanism of updating the elitists with newly bred individuals of better fitness also provides more efficient and precise search guidance for the swarm. The pseudocode of the proposed EB-QPSO is given in Algorithm 2.

(1)   Procedure of EB-QPSO
(2)   For i = 1 to swarm size N
(3)    randomize the position X_i of each particle;
(4)    evaluate f(X_i); P_i = X_i;
(5)   Endfor
(6)   Do
(7)    G = argmin_{P_i} f(P_i);
(8)    compute C by (4);
(9)    If elitist breeding criterion is met
(10)   For i = 1 to swarm size N
(11)    epool(i) = P_i;
(12)   Endfor
(13)   epool(N + 1) = G;
(14)   epool_eb = transposon_op(epool);
(15)   For i = 1 to swarm size N
(16)    If f(epool_eb(i)) < f(P_i)
(17)     P_i = epool_eb(i);
(18)    Endif
(19)   Endfor
(20)   replace G by epool_eb(N + 1) if f(epool_eb(N + 1)) < f(G);
(21)  Endif
(22)  For i = 1 to swarm size N
(23)   calculate p_i with (2);
(24)   update X_i with (3);
(25)   If f(X_i) < f(P_i)
(26)    P_i = X_i;
(27)   Endif
(28)  Endfor
(29) Until maximum number of iterations is reached

We can see from the pseudocode of EB-QPSO that, when the breeding criterion is met, the personal best of each particle and the global best are put into an elitist pool named epool, and a new subswarm called epool_eb is generated by applying the transposon operators to the elitist pool. The size of epool_eb is the same as that of epool, and each individual in both pools is identified by its sequential order. To maintain the diversity of the swarm, each individual in epool_eb is only compared with the member of the same sequential order in epool. The pbests are updated when the newly generated individual is better than the corresponding one in epool. Here, a predefined interval parameter is used to control the frequency of elitist breeding: the breeding operation is performed once every such interval of iterations.
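As a concrete illustration, the breeding step inside the main loop (lines (9)–(21) of Algorithm 2) might be sketched as follows, reusing the variable names from the QPSO sketch above; the interval parameter `eb_interval` is our assumption for the breeding criterion, and `transposon_op` is sketched after Algorithm 3.

```python
# Hypothetical breeding criterion: fire once every `eb_interval` iterations.
if t % eb_interval == 0:
    # build the elitist pool from all pbests plus the gbest
    epool = [P[i].copy() for i in range(n_particles)] + [G.copy()]
    epool_eb = transposon_op(epool, lb, ub)      # breed new individuals
    # compare each bred individual only with the member of the same index
    for i in range(n_particles):
        fe = f(epool_eb[i])
        if fe < fP[i]:
            P[i], fP[i] = epool_eb[i].copy(), fe
    if f(epool_eb[-1]) < f(G):                   # the gbest slot
        G = epool_eb[-1].copy()
```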

Transposon operators were first proposed by Man et al. [38] and are mainly used in multiobjective evolutionary algorithms (MOEAs). A transposon consists of consecutive genes located at a randomly assigned position in each chromosome, while the transposon operators are lateral movement operations that occur within one chromosome or between different chromosomes. Generally speaking, there are two types of transposon operators, namely, cut-and-paste and copy-and-paste, which are demonstrated in Figures 1 and 2. Whether the transposon operations are conducted within an individual chromosome or between different chromosomes is chosen randomly. Moreover, the size of each transposon can be greater than one and is decided by a parameter called the jumping percentage, while the number of transposons is also a predefined parameter. Another parameter, the jumping rate, determines the probability of activating the transposon operations.
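A minimal sketch of the two operators on real-coded chromosomes might look as follows; for brevity, the cut-and-paste variant is shown within a single chromosome only (a cross-chromosome cut would require a segment exchange to keep lengths fixed), and all names are our own rather than taken from [38].

```python
import numpy as np

def cut_and_paste(chrom, size):
    """Cut-and-paste within one chromosome: the transposon is removed
    from its original position and re-inserted at a random new one."""
    start = np.random.randint(0, chrom.size - size + 1)
    transposon = chrom[start:start + size].copy()
    rest = np.delete(chrom, np.arange(start, start + size))
    insert_at = np.random.randint(0, rest.size + 1)
    return np.insert(rest, insert_at, transposon)

def copy_and_paste(src, dst, size):
    """Copy-and-paste: the transposon from `src` overwrites a randomly
    chosen segment of `dst` (src and dst may be the same chromosome)."""
    start = np.random.randint(0, src.size - size + 1)
    transposon = src[start:start + size].copy()
    paste_at = np.random.randint(0, dst.size - size + 1)
    out = dst.copy()
    out[paste_at:paste_at + size] = transposon
    return out
```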

As demonstrated in Figure 3, each particle, which can be regarded as a chromosome, consists of the same number of genes as the size of its position vector, and each gene holds a real number for the corresponding decision variable. It is clear that the number of chromosomes is the same as the number of particles in the swarm. With such a representation, the transposon operators can be integrated into PSOs. However, since different decision variables might have different boundary constraints, the transposon operators might generate invalid individuals. As illustrated in Figure 3, a boundary violation occurs if a gene value is copied to the position of a variable with a different boundary range. To overcome the boundary violation problem caused by the transposon operators, following the description in [39], the position vector of each particle is normalized, before performing the transposon operations, to a chromosome consisting of the ratio value that each variable occupies within its own boundary range. The conversion equation is defined as follows:

$$x'_{ij} = \frac{x_{ij} - l_j}{u_j - l_j}, \qquad (6)$$

where $l_j$ and $u_j$ represent the lower and upper bounds of the $j$th decision variable, respectively. After the transposon operations, the value of each gene is restored to its corresponding positional value in the search space according to the following equation:

$$x_{ij} = l_j + x'_{ij}\,(u_j - l_j). \qquad (7)$$
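In code, the two conversions are one-liners; `lb` and `ub` are vectors of per-variable bounds, as assumed in the earlier sketches.

```python
def normalize(x, lb, ub):
    """Eq. (6): map each variable to its ratio within [lb, ub]."""
    return (x - lb) / (ub - lb)

def restore(r, lb, ub):
    """Eq. (7): map ratio values back to positions in the search space."""
    return lb + r * (ub - lb)
```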

In the transposon operations, a temporary pool called epool_ratio is used to store all the normalized chromosomes converted from the individuals in epool according to (6). Once the transposon operations are completed, each individual is restored from epool_ratio and saved to epool_eb. The pseudocode of the function transposon_op is given in Algorithm 3.

(1)   Function epool_eb = transposon_op(epool)
(2)   generate the epool_ratio from the epool based on (6)
(3)   For i = 1 to size of epool
(4)    For j = 1 to number of transposons
(5)     If rand(0, 1) < jumping rate
(6)      select a transposon of the size decided by the jumping percentage;
(7)      k = index of a randomly selected chromosome;
(8)      If k == i
(9)       If rand(0, 1) < 0.5
(10)       perform cut-and-paste operation in epool_ratio(i);
(11)      Else
(12)       perform copy-and-paste operation in epool_ratio(i);
(13)      Endif
(14)     Else
(15)      If rand(0, 1) < 0.5
(16)       perform cut-and-paste operation in epool_ratio(i) and epool_ratio(k);
(17)      Else
(18)       perform copy-and-paste operation in epool_ratio(i) and epool_ratio(k);
(19)      Endif
(20)     Endif
(21)    Endif
(22)   Endfor
(23)  Endfor
(24) restore the individuals from epool_ratio based on (7) and save to epool_eb
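Combining the helpers above, a hedged Python rendering of Algorithm 3 might read as follows; `jumping_rate`, `n_transposons`, and `jumping_percentage` are our parameter names, and the cross-chromosome cut-and-paste case is simplified as noted earlier.

```python
import numpy as np

def transposon_op(epool, lb, ub, jumping_rate=0.1,
                  n_transposons=1, jumping_percentage=0.2):
    """Sketch of Algorithm 3: breed a new pool from the elitist pool."""
    ratio = [normalize(x, lb, ub) for x in epool]   # eq. (6)
    D = ratio[0].size
    size = max(1, int(jumping_percentage * D))      # transposon length
    for i in range(len(ratio)):
        for _ in range(n_transposons):
            if np.random.rand() < jumping_rate:
                k = np.random.randint(len(ratio))   # partner chromosome
                if np.random.rand() < 0.5:
                    # cut-and-paste, applied within chromosome i here
                    ratio[i] = cut_and_paste(ratio[i], size)
                else:
                    # copy-and-paste from chromosome i into chromosome k
                    ratio[k] = copy_and_paste(ratio[i], ratio[k], size)
    return [restore(r, lb, ub) for r in ratio]      # eq. (7)
```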

5. Experimental Results and Discussion

To validate the performance of the proposed EB-QPSO, it was tested on a set of twelve widely adopted benchmark functions and compared, in terms of solution accuracy, convergence speed, and algorithm reliability, with the original QPSO [9] and four state-of-the-art QPSO variants, namely WQPSO [28], CAQPSO [14], QPSO-RM [15], and QPSO-RO [15], which have been studied thoroughly in the corresponding literature. These QPSO variants are very competitive compared with a number of other PSO algorithms, including PSO with inertia weight (PSO-In) [40], PSO with constriction factor (PSO-Co) [41], the Standard PSO [42], Gaussian PSO [43], Gaussian Bare Bones PSO [44], Exponential PSO (PSO-E) [45], Lévy PSO [46], comprehensive learning PSO (CLPSO) [47], dynamic multiple swarm PSO (DMS-PSO) [48], and the fully informed particle swarm (FIPS) [49].

5.1. Test Instances and Algorithmic Configuration

The test set consisted of twelve unconstrained test instances widely used in earlier research reported in the literature [50–53], as listed in Table 1, with seven unimodal functions and five multimodal functions. These benchmark instances pose different difficulties to optimization algorithms. A more detailed description of each function can be found in [51]. Following [51], an "acceptance" value is defined to gauge the acceptability of a solution found by the QPSOs.

For the simulation experiment, the contraction-expansion (CE) coefficients of the proposed EB-QPSO algorithm and the five compared QPSOs decreased linearly from $\alpha_0$ to $\alpha_1$ during the search process according to (5). $\alpha_0$ and $\alpha_1$ were set at 0.6 and 0.5 for EB-QPSO, QPSO-RM, and QPSO-RO, while 1.0 and 0.5 were used for QPSO, WQPSO, and CAQPSO. The other parameters of the proposed EB-QPSO algorithm for the experiment were chosen at the values given in Table 2. For the compared QPSO algorithms, parameter settings other than the CE coefficients were set according to their corresponding references.

For each test problem, 50 simulation trials were conducted for each of the compared algorithms with random initial populations. For a fair comparison among all the QPSO algorithms, they were tested with the same population size of 20 in each trial run. Furthermore, the maximum number of objective function evaluations (FEs) was set at $4.0 \times 10^4$ for each test instance for all the algorithms. Since all the algorithms in the simulation experiment are stochastic, it is essential to conduct statistical analysis, rather than comparing solely on the success rate, so as to provide confident comparisons. The median and interquartile range (IQR) of the results on each test instance were recorded as the measures of location (or central tendency) and statistical dispersion. To provide confident comparisons, the statistical analysis as in [54, 55] was used. The general structure of the statistical analysis is given in Figure 4.

The Kolmogorov-Smirnov test is first conducted to check whether the results follow the normal (Gaussian) distribution or not. If the results do not follow the normal distribution, the nonparametric Kruskal-Wallis test is performed to compare the median results of the algorithms; otherwise, the Levene test is used to check the homogeneity of the variances. If the samples have different variances, a Welch test is performed to verify the confidence of the comparisons; otherwise, an ANOVA test accomplishes this task. A confidence level of 95% is used in the statistical analysis. The results are shown in the last column of Tables 3 and 4, with the symbol "+" indicating that the performance difference between the proposed EB-QPSO and the best algorithm among the other QPSO algorithms is statistically significant, and the symbol "−" indicating that the difference is insignificant.
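For reference, this decision chain can be sketched with scipy.stats as follows; the function name and the two-sample framing are our assumptions, and standardizing before the Kolmogorov-Smirnov test is one common way to apply it.

```python
from scipy import stats

def compare_samples(a, b, alpha=0.05):
    """Sketch of the statistical pipeline of Figure 4 for two result
    samples `a` and `b` (e.g., 50 trial results per algorithm).
    Returns True if the difference is significant at level alpha."""
    # 1. Kolmogorov-Smirnov test for normality of each sample
    normal = all(stats.kstest(stats.zscore(s), 'norm').pvalue > alpha
                 for s in (a, b))
    if not normal:
        # 2a. nonparametric Kruskal-Wallis test on the medians
        return stats.kruskal(a, b).pvalue < alpha
    # 2b. Levene test for homogeneity of variances
    if stats.levene(a, b).pvalue > alpha:
        # equal variances: one-way ANOVA
        return stats.f_oneway(a, b).pvalue < alpha
    # unequal variances: Welch's test
    return stats.ttest_ind(a, b, equal_var=False).pvalue < alpha
```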

5.2. Comparisons on the Solution Accuracy

The performance results on the solution accuracy of each of the algorithms in the simulation experiment are shown in Table 3 in terms of the median and interquartile range of the solutions obtained in the 50 independent runs by each algorithm. The best result among those obtained by all six contenders for each problem is highlighted with boldface.

From the results in Table 3, we can see that the proposed EB-QPSO algorithm obtains the best values on all 12 test instances. It is worth mentioning that EB-QPSO achieves solutions that reach or closely approximate the global optima on five of the test problems, whereas the other compared algorithms are unable to do so. This illustrates that the proposed EB-QPSO has the ability to avoid being stuck at local optima, benefiting from elitist breeding through the transposon operators. The difference in performance between the proposed algorithm and the other five QPSOs is statistically significant, as indicated by the statistical analysis results shown in the last column of the table.

5.3. Comparisons on the Convergence Speed and Reliability

Another salient yardstick for evaluating algorithm performance is the speed of approximating the global optimum. As shown in Table 4, EB-QPSO consistently offers a much higher convergence speed, measured by the median and interquartile range of the number of FEs needed to reach an acceptable solution. Note that "/" indicates that the corresponding algorithm could not reach an acceptable solution in at least one-half of the trials. To compare the convergence characteristics, Figure 5 graphically presents the convergence processes, in terms of the median results obtained over the 50 runs, of all six contenders in solving the 12 test instances. It is worth mentioning that the global optima of all the test instances are at 0, and the logarithm of 0 has no mathematical meaning and cannot be displayed graphically. For any algorithm, if the optimum is found, the search stops even though the maximum number of function evaluations has not been reached; under such a condition, the corresponding convergence curve for that test case stops at that point. This is the case for six of the test problems, as the search converges to the global optima. The figure shows that the proposed EB-QPSO has the best convergence efficiency among the compared algorithms on all 12 test instances.

Reliability here refers to the success rate, that is, the percentage of trial runs reaching acceptable solutions. Table 5 reveals that EB-QPSO is able to reach acceptable solutions in all the trials over all the test instances, whereas the compared algorithms cannot.

6. Conclusion

QPSO is a promising optimization technique which has shown its superiority in solving a wide range of optimization problems. However, it remains difficult to improve the global search capability and accelerate the convergence speed of QPSO simultaneously. In this paper, we presented an improved quantum-behaved particle swarm optimization algorithm with elitist breeding (EB-QPSO) for unconstrained optimization. The novel elitist breeding scheme acts on the elitists found during the evolutionary process to jump out of likely local optima and guide the swarm to perform exploration and exploitation more efficiently, thus improving the performance of QPSO in terms of better global search capability and faster convergence speed. The performance of EB-QPSO has been compared against existing QPSO algorithms, namely the original QPSO, WQPSO, CAQPSO, QPSO-RM, and QPSO-RO, on a test suite consisting of twelve benchmark functions. All simulation results have demonstrated that EB-QPSO is significantly superior to the other QPSOs in solution accuracy, convergence speed, and reliability. Besides, EB-QPSO can locate the global optima of most of the test functions while the other algorithms cannot. Our further work will concentrate on applying the EB-QPSO algorithm to real-world optimization problems and on integrating the elitist breeding approach into other swarm intelligence algorithms. In this paper, the proposed EB-QPSO was studied empirically; a theoretical analysis of the global convergence of EB-QPSO will be developed in the future.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.