Bio-Inspired Learning and Adaptation for Optimization and Control of Complex Systems
Fei Han, Yu-Wen-Tian Sun, Qing-Hua Ling, "An Improved Multiobjective Quantum-Behaved Particle Swarm Optimization Based on Double Search Strategy and Circular Transposon Mechanism", Complexity, vol. 2018, Article ID 8702820, 22 pages, 2018. https://doi.org/10.1155/2018/8702820
An Improved Multiobjective Quantum-Behaved Particle Swarm Optimization Based on Double Search Strategy and Circular Transposon Mechanism
Abstract
Although multiobjective particle swarm optimization (MOPSO) performs well on multiobjective optimization problems, obtaining more accurate solutions and improving the distribution of the solution set remain challenging. In this paper, to improve the convergence performance of MOPSO, an improved multiobjective quantum-behaved particle swarm optimization based on a double search strategy and a circular transposon mechanism (MOQPSO-DSCT) is proposed. On the one hand, to address the dramatic loss of diversity in the solution set in later iterations caused by the single search pattern of quantum-behaved particle swarm optimization (QPSO), a double search strategy is proposed in MOQPSO-DSCT: the particles mainly learn from their personal best position in earlier iterations and from the global best position in later iterations, balancing the exploration and exploitation abilities of the swarm. Moreover, to keep the swarm from converging to local minima during the local search, an improved attractor construction mechanism based on opposition-based learning is introduced to search locally for a better position as a new attractor for each particle. On the other hand, to improve the accuracy of the solution set, the circular transposon mechanism is introduced into the external archive to improve the communication ability of the particles, which guides the population toward the true Pareto front (PF). The proposed algorithm generates a set of more accurate and well-distributed solutions than traditional MOPSO. Finally, experiments on a set of benchmark test functions verify that the proposed algorithm has better convergence performance than some state-of-the-art multiobjective optimization algorithms.
1. Introduction
Optimization problems widely exist in fields including economics [1], natural science [2], and industrial production [3]. Compared to single-objective optimization problems (SOPs), the theories and methods for multiobjective optimization problems (MOPs) are more complicated and usually incomplete. Moreover, the equality, inequality, and multivariable constraints in a MOP make it difficult for most optimization algorithms to find the feasible Pareto front (PF) [1]. Since the objectives of a MOP often conflict, there exists a set of solutions that are superior to the rest of the solutions in the search space when all objectives are considered, yet inferior to other solutions in one or more individual objectives [4]. These solutions are termed nondominated solutions. Obtaining a set of well-distributed nondominated solutions is a crucial task of multiobjective optimization.
As an efficient evolutionary algorithm, the particle swarm optimization (PSO) algorithm has been widely used for various optimization problems [5]. Due to its fast convergence speed and good convergence accuracy, multiobjective optimization based on PSO has attracted much attention. In [6], the multiobjective particle swarm optimization (MOPSO) algorithm was first proposed to handle various kinds of MOPs, using Pareto dominance to generate nondominated solutions. In MOPSO, a secondary archive was first used to store the nondominated solutions found by the algorithm and guide the particles toward the true PF. Although this mechanism greatly improves convergence, the distribution of the resulting PF could not be guaranteed. In [7], crowding distance computation was introduced into MOPSO to maintain the external archive, enabling the algorithm to obtain a set of well-distributed nondominated solutions. Building on these ideas, the external archive has been widely used in different kinds of MOPSO, with many mechanisms applied to maintain it. In [8], a new archive called the nondominated local set was proposed to store the personal best solutions obtained by one particle during the search process, which showed superiority in capturing the shape of the true PF and obtaining nondominated solutions with satisfactory diversity characteristics. In [9], a new multiobjective particle swarm optimizer was proposed that utilizes ε-dominance to generate a new archive and the crowding distance to maintain it. In [10], a novel hybrid multiobjective particle swarm optimization was proposed, which uses a teaching-learning-based optimization algorithm to promote the diversity of the algorithm and circular crowded sorting to select the gbest and teacher solutions from the archive.
In [11], a niching multiobjective particle swarm optimization was proposed to solve the MOPs of contactors in circuits, in which the entropy weight ideal point theory was proposed to ensure the diversity of the nondominated solutions. Different from the algorithms mentioned above, a multiobjective particle swarm optimization algorithm based on a decomposition approach (dMOPSO) was proposed in [12], which decomposes a continuous and constrained MOP into many SOPs. Although this algorithm does not need an external archive to store the nondominated solutions, the solutions it generates cannot cover the entire true PF for some complicated MOPs. It can be seen that an external archive is crucial for most algorithms to select the leading particle.
The traditional search strategy of the particles in PSO has the deficiency that the particles cannot traverse the entire search space, which may lead to an incomplete search and miss some solutions. To overcome this deficiency, a new probability-based PSO called quantum-behaved particle swarm optimization (QPSO) was proposed in [13], in which particles can appear at any position of the feasible region. Since QPSO converges faster than classical PSO, multiobjective optimization algorithms based on QPSO have been studied by many researchers. In [14], QPSO and an adaptive grid were introduced to improve the performance of MOPSO (MOQPSO-AG), which showed superior convergence performance when compared with other algorithms, especially in terms of convergence speed. In [15], a novel quantum Delta potential well with two local attractors, named the double-potential well, was established for QPSO and applied to deal with MOPs, which showed better convergence and distribution performance when handling high-dimensional MOPs. In [16], QPSO was used to handle constrained multiobjective optimization problems, in which two strategies were investigated to combine a constraint-processing mechanism with a multiobjective QPSO. It can be seen that QPSO has great potential to deal with MOPs.
Despite the fact that QPSO shows superior performance over PSO in terms of convergence speed when handling MOPs, the leadership of the average best position may be reduced dramatically due to the single search pattern in QPSO, which is not beneficial for obtaining more potential solutions for MOPs. Moreover, the mechanism of constructing an attractor for each particle may reduce the ability of a particle to jump out of local optima when its personal best position is equal to the global best position. Finally, most MOPSO algorithms only use the external archive to select the global and local best individuals without fully utilizing the potentially significant information hidden in the archive, which may reduce the convergence accuracy of the algorithm.
To alleviate the situation mentioned above, an improved multiobjective quantum-behaved particle swarm optimization based on a double search strategy and a circular transposon mechanism (MOQPSO-DSCT) is proposed in this paper. On the one hand, the double search strategy is proposed to adjust the relationship between the exploration and exploitation abilities, which benefits both the diversity of the solution set and the convergence speed. To implement the double search strategy, a search probability parameter is proposed to make the two search patterns execute alternately and maintain the diversity of the swarm. Moreover, an improved attractor construction mechanism based on opposition-based learning is used to construct a better attractor for a particle when its personal best position is equal to the global best position, which improves the ability of the particles to escape from local optima. On the other hand, the circular transposon mechanism is introduced into the external archive to help the particles find more useful information from each other, which can make the population move toward the true PF with a higher probability while maintaining the distribution of the PF. Therefore, the MOQPSO-DSCT algorithm improves the convergence accuracy while keeping a reasonable distribution of the solutions.
The remainder of the paper is organized as follows. In Section 2, the concepts of multiobjective optimization, the basic model of QPSO, and some related strategies are briefly described. The details of the proposed algorithm (MOQPSO-DSCT) are depicted in Section 3. The experimental results on several benchmark test functions are given in Section 4. Finally, the conclusions and our future work are included in Section 5.
2. Preliminaries
2.1. Multiobjective Optimization
In general, the model of a minimization MOP can be defined as follows:

	minimize F(x) = (f_1(x), f_2(x), ..., f_m(x))
	subject to g_i(x) ≤ 0, i = 1, 2, ..., p
	           h_j(x) = 0, j = 1, 2, ..., q

where x = (x_1, x_2, ..., x_D) is a vector with D decision variables, m is the number of objectives, f_k(x) is the function value of the k-th objective, and p and q are the number of inequality and equality constraints, respectively.
To better understand how a given MOP produces a large number of nondominated solutions in the feasible region Ω, some definitions related to Pareto optimality are described as follows [17].
Definition 1 (Pareto dominance). A vector u = (u_1, ..., u_m) is said to dominate another vector v = (v_1, ..., v_m), denoted by u ≺ v, if and only if u_k ≤ v_k for all k ∈ {1, ..., m} and u_k < v_k for at least one k.
Definition 2 (Pareto optimal). A solution x* ∈ Ω is said to be Pareto optimal, or a nondominated solution, if and only if there is no x ∈ Ω such that F(x) ≺ F(x*).
Definition 3 (Pareto optimal set). The set containing all Pareto optimal solutions is the Pareto optimal set, which can be defined as P* = {x ∈ Ω | there is no x' ∈ Ω such that F(x') ≺ F(x)}.
Definition 4 (Pareto front). The region generated by the objective function values of all Pareto optimal solutions is the Pareto front, which can be defined as PF* = {F(x) | x ∈ P*}.
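The dominance relation of Definitions 1–4 can be sketched in Python (an illustration only; `dominates` assumes minimization and plain tuples of objective values):

```python
def dominates(u, v):
    """Return True if objective vector u Pareto-dominates v (minimization):
    u is no worse than v in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(front):
    """Filter a list of objective vectors down to its nondominated subset."""
    return [u for u in front if not any(dominates(v, u) for v in front if v is not u)]
```

For example, `(1, 2)` dominates `(2, 3)`, while `(1, 3)` and `(2, 2)` are mutually nondominated.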
2.2. Quantum-Behaved Particle Swarm Optimization
In 2004, the quantum-behaved particle swarm optimization algorithm was proposed to overcome the inability of PSO to traverse the entire search space. Particles in QPSO only have positions, without velocities, which is quite different from the particles in PSO. In the quantum world, the positions of particles cannot be determined exactly because of the uncertainty principle, so particles can appear anywhere in the feasible search region with a certain probability, which underpins the ability of QPSO to converge to the global optimum.
Each particle in PSO is considered an individual with two attributes in a D-dimensional search space. The position and velocity of the i-th particle can be represented as X_i = (x_{i1}, x_{i2}, ..., x_{iD}) and V_i = (v_{i1}, v_{i2}, ..., v_{iD}), respectively, where i = 1, 2, ..., M and M is the size of the population. The position and velocity of each particle are updated by the following equations [18]:

	V_i(t+1) = w·V_i(t) + c_1·r_1·(P_i(t) − X_i(t)) + c_2·r_2·(G(t) − X_i(t))
	X_i(t+1) = X_i(t) + V_i(t+1)

where P_i is the personal best position of the i-th particle; G is the global best position in the current iteration; w is the inertia weight of PSO; t denotes the iteration number; c_1 and c_2 are two acceleration factors that balance the impact of P_i and G; and r_1 and r_2 are two numbers randomly generated in [0, 1].
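The update equations above can be sketched for a single particle as follows (a minimal illustration; the parameter values w = 0.7 and c1 = c2 = 2.0 are common defaults, not the paper's settings):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One canonical PSO update for a single particle with position x and velocity v."""
    # velocity: inertia term + cognitive pull toward pbest + social pull toward gbest
    new_v = [w * vd
             + c1 * random.random() * (pb - xd)
             + c2 * random.random() * (gb - xd)
             for vd, xd, pb, gb in zip(v, x, pbest, gbest)]
    # position: move by the new velocity
    new_x = [xd + vd for xd, vd in zip(x, new_v)]
    return new_x, new_v
```

When x already equals both pbest and gbest, the random terms vanish and only the inertia w·v remains, which is the damping that lets the swarm settle.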
Different from the traditional PSO, the main idea of QPSO is to establish a potential field to bind the particles. In [19], the convergence behavior of particles was analyzed by Clerc and Kennedy, who demonstrated that the i-th particle in PSO is attracted by an attractor p_i. Obviously, the attractor directly influences the convergence behavior of the particles in QPSO. The position of p_i is described as

	p_i(t) = φ·P_i(t) + (1 − φ)·G(t)    (8)

where P_i is the personal best position of the i-th particle; G is the global best position in the current iteration; and φ is a uniformly distributed random number in [0, 1]. Then, the position of the i-th particle in QPSO can be updated as

	X_i(t+1) = p_i(t) ± (L/2)·ln(1/u),  L = 2α·|mbest(t) − X_i(t)|,  mbest(t) = (1/M)·Σ_{j=1}^{M} P_j(t)

where mbest is the average best position of the population; L is the length of the Delta potential well, which denotes the creativity of the individual in the population (a larger value usually means the particle has a relatively large search range and is more likely to find more solutions); u is a uniformly distributed random number in [0, 1]; and α is a contraction-expansion coefficient, usually less than 1.782 [20]. In our experiments, α is set to be positive and linearly decreasing from 1.0 to 0.5, as proposed in [21]:

	α(t) = (α_1 − α_0)·(T_max − t)/T_max + α_0

where α_1 and α_0 are set to 1.0 and 0.5, respectively; t is the current iteration; and T_max is the maximum iteration number.
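A minimal sketch of the QPSO move, assuming the standard attractor and mbest formulas above (the ± sign is chosen at random in each dimension):

```python
import math
import random

def mean_best(pbests):
    """Average of all personal best positions (mbest)."""
    dim = len(pbests[0])
    return [sum(p[d] for p in pbests) / len(pbests) for d in range(dim)]

def qpso_step(x, pbest_i, gbest, mbest, alpha):
    """One QPSO position update: attractor p = phi*pbest + (1-phi)*gbest,
    then x' = p +/- alpha * |mbest - x| * ln(1/u)."""
    new_x = []
    for xd, pb, gb, mb in zip(x, pbest_i, gbest, mbest):
        phi = random.random()
        p = phi * pb + (1.0 - phi) * gb      # local attractor (Eq. (8))
        u = random.random() or 1e-12         # guard against u == 0 in ln(1/u)
        step = alpha * abs(mb - xd) * math.log(1.0 / u)
        new_x.append(p + step if random.random() < 0.5 else p - step)
    return new_x
```

Note that when a particle sits exactly at mbest the well length collapses and the particle jumps straight to its attractor, which is why the single search pattern can lose diversity late in a run.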
2.3. Opposition-Based Learning Strategy
In [22], the concept of opposition-based learning was proposed, which demonstrated that the opposite solution has a higher probability of being better than the current solution. In other words, we can compute the opposite position of a particle according to opposition-based learning; if the opposite position is better than the original position, the original position is replaced with the opposite position. Currently, this mechanism is often combined with intelligent algorithms to deal with optimization problems. In [23], opposition-based learning was introduced into a new intelligent algorithm named chicken swarm optimization to improve the diversity of the population. In [24], opposition-based learning was used to generate the transformed population during population initialization, which enhanced the performance of constrained differential evolution algorithms. In [25], opposition-based learning was executed by the current best individual to generate opposition search populations within its dynamic search boundaries, which improved the balance and exploring ability of the algorithm. In [26], the opposite process of the opposition-based learning strategy was combined with the refraction principle of light to improve the global search ability of PSO. Therefore, in this paper, opposition-based learning is used to construct a new attractor for a particle when its personal best position is equal to the global best position, helping the particle escape from local optima. Some definitions of opposition-based learning are as follows.
Definition 5 (opposite number). Let x be a real number defined on the interval [a, b]. The opposite number x̃ is defined as x̃ = a + b − x.
Definition 6 (opposite point). Let P = (x_1, x_2, ..., x_D) be a point in a D-dimensional coordinate system with x_k ∈ [a_k, b_k], k = 1, 2, ..., D. The opposite point P̃ = (x̃_1, x̃_2, ..., x̃_D) is completely defined by its coordinates, where

	x̃_k = a_k + b_k − x_k, k = 1, 2, ..., D    (14)
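Definitions 5 and 6 amount to reflecting each coordinate about the midpoint of its interval, which is a one-line computation:

```python
def opposite_point(x, lower, upper):
    """Opposition-based learning: map x_k to a_k + b_k - x_k in each dimension."""
    return [a + b - xk for xk, a, b in zip(x, lower, upper)]
```

A point at an interval midpoint is its own opposite, while points near one bound are reflected to the other bound, which is what gives the opposite candidate its exploratory value.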
2.4. Transposon Mechanism
Transposons, also known as jumping genes, are a special chromosomal genetic phenomenon first discovered by Barbara McClintock in 1950 [27, 28]. In her work on maize chromosomes, she found that genes can jump from one position to another on the same chromosome when dissociation elements exist, and can jump between different chromosomes when activator elements exist. She also found two methods by which a gene moves on the chromosome. One is the cut-and-paste transposon, in which one part of the DNA is cut and then pasted to another part. The other is the copy-and-paste transposon, in which the information at one position in the DNA is copied to RNA and then copied from the RNA to another position in the DNA. In [29], the transposon mechanism was first adopted to replace the operations of crossover, mutation, and selection in traditional evolutionary algorithms, which enhanced the diversity of the population. In PSO, a particle in the search space can be considered a chromosome, so this operation can also be executed on a particle. In this paper, the copy-and-paste transposon operation is performed on the individuals in the external archive, which can greatly improve the information exchange ability of the particles in the external archive and guide the population toward the true PF. This operation is easily implemented with only a probability parameter. For example, a transposon and an insertion position are generated for two particles C_1 and C_2, respectively. Then, the transposon of C_1 is copied and pasted to the insertion position of C_2, and the transposon of C_2 is copied and pasted to the insertion position of C_1. Two new particles S_1 and S_2 are generated after this operation. The copy-and-paste transposon operation on two particles is shown in Figure 1.
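The copy-and-paste operation on two particles can be sketched as follows. The segment length and the transposon/insertion positions are drawn at random here; the exact encoding in the paper's Figure 1 may differ in detail:

```python
import random

def copy_paste_transposon(c1, c2, length=2):
    """Copy a random segment (transposon) of each parent into a random
    insertion position of the other parent, overwriting the genes there."""
    s1, s2 = list(c1), list(c2)
    d = len(s1)
    start1 = random.randrange(d - length + 1)   # transposon start in c1
    start2 = random.randrange(d - length + 1)   # transposon start in c2
    pos1 = random.randrange(d - length + 1)     # insertion position in s1
    pos2 = random.randrange(d - length + 1)     # insertion position in s2
    # copy a segment of c1 into s2, and a segment of c2 into s1
    s2[pos2:pos2 + length] = c1[start1:start1 + length]
    s1[pos1:pos1 + length] = c2[start2:start2 + length]
    return s1, s2
```

Because the source segments are copied rather than cut, both offspring keep the parent's length and every gene traces back to one of the two parents.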
3. The Proposed MOQPSO-DSCT Algorithm
The proposed MOQPSO-DSCT algorithm aims at two problems: how to find more potential solutions for different MOPs, and how to improve the convergence accuracy and the distribution of the nondominated solutions. In the MOQPSO-DSCT algorithm, the double search strategy is proposed to balance the exploration and exploitation abilities of the swarm and obtain a large number of solutions, in which an improved attractor construction mechanism based on opposition-based learning helps each particle escape from local optima. The double search strategy combined with the opposition-based learning strategy addresses the former problem. To address the latter, the circular transposon mechanism is proposed to improve the communication ability of the particles in the external archive, which gives the solutions higher accuracy and a better distribution. Therefore, the MOQPSO-DSCT algorithm can generate a set of well-distributed and accurate nondominated solutions. The double search strategy and the circular transposon mechanism are described in detail in the following subsections.
3.1. The Double Search Strategy Combined with Opposition-Based Learning
To improve the diversity of the solution set and keep a balance between the exploration and exploitation abilities of the swarm, the double search strategy is proposed in the MOQPSO-DSCT algorithm to replace the single search pattern in QPSO. In earlier iterations, the particles dominantly learn from their personal best position to improve their ability to discover new regions. In later iterations, the particles dominantly learn from the global best position to improve the convergence performance. A search probability parameter is used to make the two search patterns execute alternately.
Inspired by the social and individual cognition in PSO, two search patterns are proposed in the MOQPSO-DSCT algorithm to replace the single search pattern in QPSO. One, called the global search pattern, is described in (15):

	X_i(t+1) = p_i(t) ± α·|P_i(t) − X_i(t)|·ln(1/u)    (15)

where P_i is the personal best position of the i-th particle. Obviously, this search pattern focuses on individual cognition, which lets all the particles diverge freely into the whole search space and improves the probability of finding more globally good solutions. The global search pattern is therefore suitable for earlier iterations.
The other search pattern, called the local search pattern, is described in (16):

	X_i(t+1) = p_i(t) ± α·|X_a(t) − X_i(t)|·ln(1/u)    (16)

where X_a is a particle selected from the external archive with a large crowding distance. Obviously, this search pattern focuses on social cognition, which makes the particles search around the elite individuals and improves their ability to converge to an optimum. Since X_a has a large crowding distance, it directly guides the particle toward the sparse regions of the search space, which preserves the distribution of the nondominated solution set. The local search pattern is therefore suitable for later iterations.
To keep a balance between the two patterns, a search probability parameter Pr is introduced. In [30], an exponential decrease rule and randomness were integrated into the dynamic control of the inertia weight instead of simply decreasing it with iterations. Inspired by this idea, the same method is introduced to dynamically adjust the value of Pr in (17), where t is the number of the current iteration and T_max is the maximum number of iterations. The value of Pr decreases with the iterations on the whole but maintains an oscillatory trend because the curve is affected by a random number in [0, 1]. If Pr is greater than 0.5, the swarm performs the global search of (15); otherwise, the swarm turns to the local search of (16). Thus, the particles in MOQPSO-DSCT are updated by (15) when Pr > 0.5 and by (16) otherwise. The two search patterns, executed alternately, not only avoid the situation where all the particles in QPSO converge to the same position mbest, but also move the particles toward sparse regions, which benefits the diversity of the nondominated solution set.
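The double search step can be sketched as follows. The exact formula for Pr (Eq. (17)) is not reproduced here, so `search_probability` below uses an assumed decreasing-with-noise schedule purely for illustration, and the update forms for (15)/(16) follow the reconstruction above:

```python
import math
import random

def search_probability(t, t_max):
    """Assumed Pr schedule: decays with iterations overall but oscillates
    because of the uniform random factor (a stand-in for Eq. (17))."""
    return random.random() * math.exp(-2.0 * t / t_max)

def double_search_step(x, pbest_i, gbest, leader, alpha, t, t_max):
    """Double search strategy sketch: when Pr > 0.5 (mostly early iterations)
    the well length is measured against the personal best (global search
    pattern); otherwise against an archive leader with a large crowding
    distance (local search pattern)."""
    guide = pbest_i if search_probability(t, t_max) > 0.5 else leader
    new_x = []
    for xd, pb, gb, gd in zip(x, pbest_i, gbest, guide):
        phi = random.random()
        p = phi * pb + (1.0 - phi) * gb      # attractor (Eq. (8))
        u = random.random() or 1e-12
        step = alpha * abs(gd - xd) * math.log(1.0 / u)
        new_x.append(p + step if random.random() < 0.5 else p - step)
    return new_x
```

Early on, Pr frequently exceeds 0.5 and particles scatter around their own experience; later, Pr rarely does, and the archive leader pulls them into the sparse parts of the front.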
However, a particle will get trapped in a local optimum if its personal best position is equal to the global best position in (8), because its attractor cannot be updated for a long time. In MOQPSO-DSCT, opposition-based learning is adopted to search locally for a better attractor for an individual when its personal best position is equal to the global best position. According to (14), the opposite attractor p̃_i of the attractor p_i can be described as

	p̃_{i,k} = a_k + b_k − p_{i,k}, k = 1, 2, ..., D

where b_k and a_k are the upper bound and the lower bound of the k-th dimension of the population, respectively. If p̃_i is dominated by p_i, the current attractor is kept; if p_i is dominated by p̃_i, the current attractor is replaced with p̃_i; otherwise, one of the two attractors is selected randomly as the current attractor.
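The opposite-attractor rule can be sketched as follows; `evaluate` and `dominates` are hypothetical caller-supplied helpers (an objective-vector evaluation and a Pareto-dominance test), not functions defined by the paper:

```python
import random

def opposite_attractor(p, lower, upper, evaluate, dominates):
    """When pbest equals gbest, build the opposite attractor via OBL and keep
    whichever of the two attractors dominates; ties are broken at random."""
    p_opp = [a + b - pk for pk, a, b in zip(p, lower, upper)]   # Eq. (14)
    f_p, f_opp = evaluate(p), evaluate(p_opp)
    if dominates(f_p, f_opp):
        return p                      # current attractor dominates: keep it
    if dominates(f_opp, f_p):
        return p_opp                  # opposite dominates: replace
    return random.choice([p, p_opp])  # mutually nondominated: pick randomly
```

This gives a stalled particle a second, maximally distant candidate attractor at no extra bookkeeping cost.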
3.2. The Circular Transposon Mechanism
It is evident that the external archive is crucial for most MOPSO algorithms to store the nondominated solutions and select the best position of the population. However, individuals in the external archive cannot exchange useful information with each other, which is not beneficial for the population to move toward the true PF. To improve the communication ability of the particles in the external archive, the circular transposon mechanism is introduced into the external archive in the proposed algorithm.
The flowchart of the circular transposon mechanism is given in Figure 2. To adopt this mechanism in the external archive, the 50% of nondominated solutions with the largest crowding distance in the external archive A are stored in a new archive A_s. For each solution in A_s, another solution is randomly selected from A_s, and a probability parameter controls whether the operation described in Figure 1 is applied to the pair: for each dimension of the two solutions, if the value of the parameter is less than a uniformly distributed random number in [0, 1], the operation is performed once on the pair. After this operation, a new set containing the children of the pairs is generated, so its solutions are the results of information exchange among the particles in A_s. Finally, the external archive A is updated by the archive-update procedure of Algorithm 3 in [31].
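The selection of A_s can be sketched with a standard NSGA-II style crowding distance (the paper's exact archive maintenance is Algorithm 3 in [31]; this is only an illustration of picking the half of the archive with the largest crowding distances):

```python
def crowding_distance(front):
    """NSGA-II style crowding distance for a list of objective vectors."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    m = len(front[0])
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        f_min, f_max = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")   # boundary points
        span = f_max - f_min
        if span == 0:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist

def most_crowded_half(front):
    """Indices of the 50% of archive members with the largest crowding
    distance (the set A_s fed to the circular transposon mechanism)."""
    dist = crowding_distance(front)
    ranked = sorted(range(len(front)), key=lambda i: dist[i], reverse=True)
    return ranked[: max(1, len(front) // 2)]
```

Members with a large crowding distance sit in sparse regions of the front, so exchanging their genes spreads information toward exactly the areas where coverage is thin.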
3.3. The Steps of the Proposed MOQPSO-DSCT Algorithm
The steps of the MOQPSO-DSCT algorithm are given in Figure 3 and summarized in detail as follows.
Step 1. Initialize the position of each particle, the corresponding parameters, and the external archive A.
Step 2. Evaluate all the particles and store the nondominated solutions to the external archive according to Pareto dominance.
Step 3. For each particle in population, select one particle from the external archive as a leader according to the crowding distance.
Step 4. If the personal best position of the current particle is equal to the global best position of the population, the opposition-based learning described in Section 3.1 is used to construct an attractor for the current particle; otherwise, an attractor is constructed according to (8).
Step 5. Calculate the value of Pr according to (17) and update the position of each particle according to the double search strategy described in Section 3.1.
Step 6. Evaluate all the particles and update A; the 50% of solutions in A with the largest crowding distance are stored in another archive A_s.
Step 7. For each particle in A_s, randomly select one solution from A_s, perform the operation of Figure 2 on the two individuals, then randomly select one of the offspring generated by the circular transposon mechanism and add it to a new set.
Step 8. Use Algorithm 3 in [31] to update A.
Step 9. If the termination condition is satisfied, the optimal results are output; otherwise, repeat Steps 3–8.
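The steps above can be condensed into a runnable toy sketch. This is a deliberately simplified stand-in: the double search strategy, OBL attractor, and circular transposon mechanism are omitted, the archive is unbounded, and a toy bi-objective function (Schaffer's F1) replaces the ZDT/DTLZ benchmarks:

```python
import math
import random

def dominates(u, v):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def evaluate(x):
    # Toy bi-objective problem (Schaffer's F1); stands in for a benchmark function.
    return (x[0] ** 2, (x[0] - 2.0) ** 2)

def moqpso_sketch(pop_size=20, dim=1, t_max=50, lo=-4.0, hi=4.0):
    """Condensed loop of Steps 1-9: initialize, evaluate, maintain a
    nondominated archive, and move particles with a QPSO-style update
    guided by a randomly chosen archive leader."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    pbest = [x[:] for x in xs]
    archive = []
    for t in range(t_max):
        alpha = 1.0 - 0.5 * t / t_max                 # contraction-expansion coefficient
        mbest = [sum(p[d] for p in pbest) / pop_size for d in range(dim)]
        for i, x in enumerate(xs):
            fx = evaluate(x)
            if dominates(fx, evaluate(pbest[i])):
                pbest[i] = x[:]
            # Steps 2/6: keep the archive mutually nondominated
            if not any(dominates(evaluate(a), fx) for a in archive):
                archive = [a for a in archive if not dominates(fx, evaluate(a))]
                archive.append(x[:])
        for i, x in enumerate(xs):
            leader = random.choice(archive)           # Step 3: leader selection
            new_x = []
            for d in range(dim):
                phi = random.random()
                p = phi * pbest[i][d] + (1.0 - phi) * leader[d]   # attractor
                u = random.random() or 1e-12
                step = alpha * abs(mbest[d] - x[d]) * math.log(1.0 / u)
                cand = p + step if random.random() < 0.5 else p - step
                new_x.append(min(hi, max(lo, cand)))  # clamp to bounds
            xs[i] = new_x
    return archive
```

Running the sketch returns a set of mutually nondominated positions, which for this toy problem should cluster inside the Pareto set x ∈ [0, 2].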
3.4. Time Complexity of the MOQPSO-DSCT Algorithm
According to Section 3.3, the time complexity of the MOQPSO-DSCT algorithm is determined by the main loop from Steps 3 to 8, which can be divided into three parts. The time complexity of the proposed algorithm is mainly influenced by the number of objectives m, the size of the population M, the size of the archive |A|, and the number of decision variables D.
The first part is the double search strategy, Steps 3–6. The time complexity of this part is determined by the size of the population, because the double search strategy is performed on each particle over every dimension. Therefore, the time complexity of this part is O(M·D).
The second part is Step 7, which contains the circular transposon mechanism. It can be observed from Section 3.2 and Figure 2 that the time complexity of this mechanism is determined by the sizes of the archives A and A_s and the number of decision variables. Since not all the decision variables are affected by this mechanism and the number of decision variables is much smaller than the size of A, the influence of the number of decision variables can be ignored. Therefore, the time complexity of the second part is O(|A|·|A_s|).
The last part is Step 8, in which Algorithm 3 in [31] is used to update the external archive. The time complexity of this part is also O(|A|·|A_s|), which can be obtained from [31].
Therefore, the time complexity of the MOQPSO-DSCT algorithm is O(M·D + |A|·|A_s|) in each iteration. Since the values of m and D are much smaller than M and |A|, the time complexity of the MOQPSO-DSCT algorithm reduces to O(|A|²) in each iteration.
4. Experimental Results and Discussion
4.1. Test Functions and Parameters Setting
In this section, to verify the performance of the proposed algorithm on the ZDT (ZDT1–ZDT4, ZDT6) [32] and DTLZ (DTLZ2, DTLZ5, and DTLZ6) [33] function sets, the MOQPSO-DSCT algorithm is compared with other multiobjective optimization algorithms. These algorithms fall into two categories: multiobjective PSO, including MPSO/D [34], MOQPSO-AG [14], dMOPSO [12], and SMPSO [35], and other multiobjective optimization algorithms, including NSGA-II [36] and MOEA/D [37]. MOQPSO-AG and dMOPSO were both mentioned above. MPSO/D is a multiobjective particle swarm optimization that uses objective space decomposition to maintain the diversity of the solutions. SMPSO is a multiobjective swarm optimization algorithm with a mechanism that limits the velocity of the particles so that they can search the space sufficiently. NSGA-II is a multiobjective genetic algorithm without an external archive, which uses fast nondominated sorting to find the true PF quickly and crowding distance computation to obtain well-distributed solutions. MOEA/D is a multiobjective evolutionary algorithm based on decomposition, which optimizes a number of subproblems simultaneously instead of treating a MOP as a whole. It is necessary to compare the performance of MOQPSO-DSCT and MOQPSO-AG because both are based on QPSO. Since the particles in QPSO do not have velocities, the comparison between SMPSO and MOQPSO-DSCT can sufficiently test the performance of the quantum model. The comparison of SMPSO, MOQPSO-AG, and MOQPSO-DSCT tests the performance of the circular transposon mechanism because all three use an external archive. The performance difference between algorithms with and without an external archive can be seen from the comparison of MOQPSO-DSCT, MPSO/D, and dMOPSO. The comparison of MOQPSO-DSCT, NSGA-II, and MOEA/D tests whether PSO is more competitive than other evolutionary algorithms.
The details of test functions are listed in Table 1.

In the experiments, the population size is 100, the maximum number of evaluations is 30000, and the maximum number of iterations is 300 for all the algorithms. The size of the external archive is 100 for the ZDT and DTLZ function sets. The parameters of all the algorithms are shown in Table 2. 30 test runs are performed for each test function with each algorithm. The algorithms are executed in MATLAB R2015b on an Intel(R) Core(TM) i5-6500 at 3.20 GHz with 4 GB RAM.

4.2. Performance Indicator
In order to better evaluate the performance of MOQPSO-DSCT, IGD (Inverted Generational Distance) [38] is used as an indicator in this paper. IGD measures the average distance from the points of the true PF to the PF obtained by the algorithm. Let P* be the (sampled) true Pareto optimal set and S be the solution set obtained by the algorithm. The IGD of S with respect to P* can be calculated by the following equation:

	IGD(P*, S) = (1/n)·Σ_{i=1}^{n} d_i,  d_i = min_{s ∈ S} sqrt( Σ_{k=1}^{m} ((v_{i,k} − s_k)/(f_k^max − f_k^min))² )

where d_i is the minimum normalized Euclidean distance from the i-th point v_i of P* to S; f_k^max and f_k^min are the maximum and minimum values of the k-th objective in P*, respectively; m is the number of objectives; n = |P*|; and |S| is the size of the solution set obtained by our proposed algorithm. The value of n is set to roughly 1000 for the ZDT test functions and 5000 for the DTLZ test functions. An IGD value of zero means that the PF obtained by the algorithm coincides completely with the true PF.
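The IGD computation can be sketched directly from the definition; normalizing each objective by the range of the true front is an assumption consistent with the text above:

```python
import math

def igd(true_front, approx_front):
    """Inverted Generational Distance: mean distance from each reference
    point on the true PF to its nearest point in the obtained set,
    normalized per objective by the ranges of the true front."""
    m = len(true_front[0])
    f_min = [min(p[k] for p in true_front) for k in range(m)]
    f_max = [max(p[k] for p in true_front) for k in range(m)]
    span = [max(f_max[k] - f_min[k], 1e-12) for k in range(m)]  # avoid /0
    total = 0.0
    for v in true_front:
        total += min(
            math.sqrt(sum(((v[k] - s[k]) / span[k]) ** 2 for k in range(m)))
            for s in approx_front
        )
    return total / len(true_front)
```

A perfect approximation gives IGD = 0; missing any region of the true front inflates the average, which is why IGD captures both convergence and distribution.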
4.3. Experimental Results and Discussion
The mean values and standard deviations (std) of IGD for all the algorithms on each test function are summarized in Table 3, where the best values are highlighted in bold italic font. The PFs obtained by all algorithms on each test function are shown in Figures 4–11, and the convergence curves are shown in Figure 12. It can be seen from Table 3 that MOQPSO-DSCT obtains the best results on six test functions in terms of both mean and std values, including ZDT1, ZDT2, ZDT3, ZDT4, DTLZ5, and DTLZ6, which demonstrates that MOQPSO-DSCT has better convergence and distribution performance than the other compared algorithms.

ZDT1 is a two-objective function with a convex PF and ZDT2 is a two-objective function with a concave PF. Almost all the algorithms can converge to the PF. It can be seen from Table 3 that MOQPSO-DSCT, MPSO/D, MOQPSO-AG, and SMPSO all perform better than dMOPSO, NSGA-II, and MOEA/D. As shown in Figures 4, 5, and 12, MOQPSO-AG converges relatively faster than the other algorithms, but it cannot generate a set of well-distributed solutions, which leads to a slightly worse IGD value. NSGA-II has the slowest convergence speed of all the algorithms and cannot find the true PF on ZDT1. Although the IGD values of MOQPSO-DSCT, SMPSO, and MPSO/D are very close, the std value of MOQPSO-DSCT is better than those of SMPSO and MPSO/D by an order of magnitude. Therefore, MOQPSO-DSCT has the best performance on ZDT1 and ZDT2.
ZDT3 is a two-objective function with a disconnected PF. It can be observed from Figure 6 that NSGA-II shows the worst performance on ZDT3 and is unable to find the true PF; MOEA/D, dMOPSO, and MPSO/D have similar performance: they can converge to the true PF but fail to obtain a set of satisfactory solutions. It can be seen from Table 3 that the IGD values of MOQPSO-DSCT, MOQPSO-AG, and SMPSO are very close, but the std of MOQPSO-DSCT is better than those of MOQPSO-AG and SMPSO by an order of magnitude. Therefore, MOQPSO-DSCT has the best performance on ZDT3.
ZDT4 is a two-objective function with 21^9 local PFs and one global PF. Many algorithms cannot converge to the true PF because of their limited ability to escape from local optima. As shown in Figure 7, both MPSO/D and MOQPSO-AG get trapped at a local PF far away from the true PF; the PFs produced by NSGA-II and MOEA/D are closer to the true PF, so it can be deduced that NSGA-II and MOEA/D could converge to the true PF if the maximum number of evaluations were set larger. Only MOQPSO-DSCT, dMOPSO, and SMPSO find the true PF. It can be seen from Figure 12 that MOQPSO-DSCT converges significantly faster than the other algorithms. The values in Table 3 indicate that, with the lowest IGD value, MOQPSO-DSCT achieves the best convergence and better-spread solutions along the entire true PF among all the compared algorithms.
ZDT6 is a two-objective function with a nonconvex and nonuniformly spaced PF. All the algorithms converge easily to the true PF. It can be observed from Figure 8 that SMPSO and MOQPSO-AG cannot push all the solutions to the true PF. In Table 3, MOQPSO-DSCT, MPSO/D, dMOPSO, and MOEA/D have similar IGD values. dMOPSO performs best on ZDT6, with the lowest std value of IGD, and MOQPSO-DSCT performs second best.
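For reference, the ZDT problems discussed above share a two-objective template f1(x1) and f2 = g·h(f1, g), where g measures distance from the true PF (g = 1 on the PF). A sketch of ZDT1 and ZDT4 following the standard definitions of Zitzler et al. (not code from this paper) shows where ZDT4's many local PFs come from:

```python
import math

def zdt1(x):
    # Convex PF: on the true PF all x[1:] are 0, so g == 1 and f2 == 1 - sqrt(f1).
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    return f1, g * (1.0 - math.sqrt(f1 / g))

def zdt4(x):
    # The Rastrigin-like g term (x[1:] in [-5, 5]) has many local minima,
    # which is what produces the 21^9 local Pareto fronts.
    f1 = x[0]
    g = 1.0 + 10.0 * (len(x) - 1) + sum(
        xi ** 2 - 10.0 * math.cos(4.0 * math.pi * xi) for xi in x[1:]
    )
    return f1, g * (1.0 - math.sqrt(f1 / g))
```

An optimizer trapped in a local minimum of g produces a front parallel to, but above, the true PF, which matches the behavior of MPSO/D and MOQPSO-AG in Figure 7.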
DTLZ2 is a three-objective function with a spherical PF. As shown in Figure 9, the PFs obtained by dMOPSO are far away from the true PF; MOQPSO-DSCT and NSGA-II perform similarly in that they converge to the true PF but cannot spread the solutions well along the entire PF; MOQPSO-AG and SMPSO can find the true PF but fail to obtain a set of satisfactory solutions. It can be seen from Table 3 and Figure 9 that both MOEA/D and MPSO/D not only converge easily to the true PF but also generate a set of well-distributed solutions. MOEA/D performs best on DTLZ2 with the lowest IGD value, and MOQPSO-DSCT performs third best.
DTLZ5 and DTLZ6 are both three-objective functions; they test the ability to converge to a degenerated curve and the search ability in a disconnected area, respectively. As shown in Figures 10 and 11, NSGA-II and SMPSO converge to the true PF but cannot maintain the distribution of the solutions; MPSO/D, MOEA/D, and dMOPSO all perform poorly on DTLZ5 and DTLZ6. MOQPSO-AG performs well on DTLZ5 but cannot push all the solutions to the true PF on DTLZ6. The values in Table 3 indicate that MOQPSO-DSCT performs significantly better than the other algorithms.
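DTLZ2's spherical PF arises from mapping the first two decision variables to angles on the unit sphere, with the remaining variables forming a distance term g that is zero on the true PF. A sketch of the standard three-objective DTLZ2 of Deb et al. (for context; not the authors' code):

```python
import math

def dtlz2_3obj(x):
    # g == 0 exactly on the true PF (all distance variables x[2:] equal 0.5),
    # in which case f1^2 + f2^2 + f3^2 == 1, i.e. a point on the unit sphere.
    g = sum((xi - 0.5) ** 2 for xi in x[2:])
    t1 = x[0] * math.pi / 2.0  # position variables mapped to angles
    t2 = x[1] * math.pi / 2.0
    f1 = (1.0 + g) * math.cos(t1) * math.cos(t2)
    f2 = (1.0 + g) * math.cos(t1) * math.sin(t2)
    f3 = (1.0 + g) * math.sin(t1)
    return f1, f2, f3
```

DTLZ5 modifies this mapping so the PF degenerates to a curve, which is why maintaining a distribution along it stresses the archive differently than the full sphere of DTLZ2.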
4.4. The Comparison of the Diversity of the Solutions
The diversity of the nondominated solutions set reflects both the number of nondominated solutions obtained by the algorithm and their distribution. To measure this diversity, SP (Spacing metric) [38] is introduced as an indicator. Let $S$ be the nondominated solutions set obtained by the algorithm; the indicator SP is then defined as

$$\mathrm{SP} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(\bar{d}-d_i\right)^2}, \qquad d_i = \min_{j\neq i}\sum_{m=1}^{M}\left|f_m^i - f_m^j\right|,$$

where $n$ is the size of $S$, $\bar{d}$ is the mean value of all $d_i$, and $M$ is the number of objectives. An SP value of zero means that the nondominated solutions are evenly spaced, i.e., the set has good diversity. To demonstrate that the double search strategy in MOQPSO-DSCT helps the particles find more nondominated solutions and improves the diversity of the nondominated solutions set, we compare the SP values of MOQPSO-DSCT and MOQPSO-AG on each test function. To isolate the effect of the double search strategy, the probability parameter of the circular transposon mechanism is set to 0 so that this mechanism does not influence the diversity of the nondominated solutions set. 30 test runs are performed for each algorithm. The mean and std values of SP are shown in Table 4, where the best values are highlighted in bold italic font. It can be seen from Table 4 that MOQPSO-DSCT obtains the best results on all test functions. Therefore, the double search strategy improves the diversity of the nondominated solutions set.
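The SP definition above translates directly into code: compute each solution's Manhattan distance to its nearest neighbour in objective space, then take the standard deviation of those distances. A minimal sketch (an illustration of Schott's metric, not the authors' implementation):

```python
import math

def spacing(front):
    """Schott's Spacing metric over a nondominated front (list of objective
    tuples): the standard deviation of nearest-neighbour Manhattan distances.
    Returns 0 when all solutions are evenly spaced."""
    n = len(front)
    d = [
        min(
            sum(abs(a - b) for a, b in zip(front[i], front[j]))
            for j in range(n) if j != i
        )
        for i in range(n)
    ]
    d_mean = sum(d) / n
    return math.sqrt(sum((d_mean - di) ** 2 for di in d) / (n - 1))
```

Note that SP measures only uniformity of spacing, not closeness to the true PF, which is why it is paired with IGD rather than used on its own.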

4.5. Discussion on Probability Parameter
In our proposed algorithm, the probability parameter is used to control how frequently the circular transposon mechanism is executed, which directly influences the information-exchange ability of the particles in the archive. The mean IGD values obtained by MOQPSO-DSCT as the parameter varies from 0 to 1.0 on ZDT1, ZDT2, ZDT3, ZDT4, ZDT6, DTLZ2, DTLZ5, and DTLZ6 are shown in Figure 13; 30 test runs are performed for each test function. Some test functions, such as ZDT1, ZDT2, ZDT3, DTLZ2, and DTLZ5, are insensitive to changes in this parameter. It can be observed from Figures 14 and 15 that the parameter has a great influence on ZDT4, ZDT6, and DTLZ6: if it is set to 0, MOQPSO-DSCT cannot find the true PF of ZDT4, some solutions cannot be pushed to the true PF of ZDT6, and the convergence speed on DTLZ6 is greatly reduced. It can be seen from Figure 14 that the logarithm of the IGD values of MOQPSO-DSCT on all test functions shows almost no difference as the parameter increases from 0.2 to 1.0, whereas computing resources are heavily consumed when the parameter is set to a large value. Therefore, the parameter is set to 0.2 in this paper.
5. Conclusions
In this paper, an improved multiobjective quantum-behaved particle swarm optimization based on a double search strategy and a circular transposon mechanism (MOQPSO-DSCT) was proposed. In MOQPSO-DSCT, the double search strategy with a search probability parameter was used to update the positions of all the particles. To help the particles escape from local optima, opposition-based learning was introduced to construct a better attractor for a particle whenever its personal best position coincides with the global best position. The double search strategy enables the algorithm to generate more solutions for MOPs. The circular transposon mechanism was then added to the external archive to exchange useful information between particles, which greatly improved the convergence accuracy of the algorithm and made the nondominated solutions cover the true PF evenly. The experimental results demonstrated that MOQPSO-DSCT performs best on most MOPs compared with the other multiobjective optimization algorithms. However, the results on the three-objective problems (the DTLZ functions) are worse than those on the two-objective problems (the ZDT functions), and only a few three-objective problems are considered in this paper. Therefore, our next improvements will target three-objective problems. Future work will include two aspects. Firstly, information hidden in the swarm will be used to guide each particle in selecting the appropriate search pattern, instead of introducing a new search probability parameter, because a new parameter increases the complexity of designing the algorithm and tuning its parameters. Secondly, the algorithm will be applied to real-world problems in gene selection.
For example, the proposed algorithm will be combined with classical machine learning algorithms to handle a multiobjective optimization model that addresses two goals: obtaining predictive genes with lower redundancy and achieving higher prediction accuracy.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China [Nos. 61572241 and 61271385], the National Key R&D Program of China [No. 2017YFC0806600], the Foundation of the Peak of Six Talents of Jiangsu Province [No. 2015DZXX024], the Fifth “333 High Level Talented Person Cultivating Project” of Jiangsu Province [No. (2016) III0845], and the Research Innovation Program for College Graduates of Jiangsu Province.
References
 X. Li, J. Lai, and R. Tang, “A Hybrid Constraints Handling Strategy for Multiconstrained Multiobjective Optimization Problem of Microgrid Economical/Environmental Dispatch,” Complexity, vol. 2017, Article ID 6249432, 12 pages, 2017.
 J. Lee, W. Seo, and D. W. Kim, “Effective Evolutionary Multilabel Feature Selection under a Budget Constraint,” Complexity, vol. 2018, Article ID 3241489, 2018.
 M.-S. Casas-Ramírez, J.-F. Camacho-Vallejo, R. G. González-Ramírez, J.-A. Marmolejo-Saucedo, and J. Velarde-Cantú, “Optimizing a Biobjective Production-Distribution Planning Problem Using a GRASP,” Complexity, vol. 2018, Article ID 3418580, 13 pages, 2018.
 N. Srinivas and K. Deb, “Muiltiobjective optimization using nondominated sorting in genetic algorithms,” Evolutionary Computation, vol. 2, no. 3, pp. 221–248, 1994.
 R. C. Eberhart and J. Kennedy, “A new optimizer using particle swarm theory,” in Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.
 C. A. Coello Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
 C. R. Raquel and P. C. Naval Jr., “An effective use of crowding distance in multiobjective particle swarm optimization,” in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, pp. 257–264, ACM, June 2005.
 M. A. Abido, “Multiobjective particle swarm optimization with nondominated local and global sets,” Natural Computing, vol. 9, no. 3, pp. 747–766, 2010.
 M. Reyes Sierra and C. A. Coello Coello, “Improving PSO-based Multi-Objective Optimization Using Crowding, Mutation and ε-Dominance,” Lecture Notes in Computer Science, vol. 3410, pp. 505–519, 2005.
 T. Cheng, M. Chen, P. J. Fleming, Z. Yang, and S. Gan, “A novel hybrid teaching learning based multi-objective particle swarm optimization,” Neurocomputing, vol. 222, pp. 11–25, 2017.
 W. Yang, J. Guo, Y. Liu, and G. Zhai, “The design of contactors based on the niching multiobjective particle swarm optimization,” Complexity, vol. 2018, Article ID 9054623, 10 pages, 2018.
 S. Z. Martínez and C. A. Coello Coello, “A multi-objective particle swarm optimizer based on decomposition,” in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69–76, Dublin, Ireland, July 2011.
 J. Sun, B. Feng, and W. Xu, “Particle swarm optimization with particles having quantum behavior,” in Proceedings of the Congress on Evolutionary Computation (CEC '04), pp. 325–331, Portland, Ore, USA, June 2004.
 Z. Shi and Q. W. Chen, “Multiobjective optimization algorithm based on quantum-behaved particle swarm and adaptive grid,” Information and Control, vol. 40, no. 2, pp. 214–220, 2011.
 S.-H. Xu, X.-D. Mu, D. Chai, and P. Zhao, “Multi-objective quantum-behaved particle swarm optimization algorithm with double-potential well and share-learning,” Optik - International Journal for Light and Electron Optics, vol. 127, no. 12, pp. 4921–4927, 2016.
 H. Al-Baity, S. Meshoul, and A. Kaban, “Constrained multi-objective optimization using a quantum behaved particle swarm,” Lecture Notes in Computer Science, vol. 7665, no. 3, pp. 456–464, 2012.
 C. A. C. Coello, G. B. Lamont, and D. A. van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, Springer, New York, NY, USA, 2007.
 Y. H. Shi and R. C. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, Anchorage, Alaska, USA, May 1998.
 M. Clerc and J. Kennedy, “The particle swarm - explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
 J. Sun, W. Xu, and B. Feng, “Adaptive parameter control for quantum-behaved particle swarm optimization on individual level,” in Proceedings of the 2005 IEEE International Conference on Systems, Man and Cybernetics, vol. 4, pp. 3049–3054, Waikoloa, HI, USA, 2005.
 J. Sun, W. Fang, X. Wu, V. Palade, and W. Xu, “Quantum-behaved particle swarm optimization: analysis of individual particle behavior and parameter selection,” Evolutionary Computation, vol. 20, no. 3, pp. 349–393, 2012.
 H. R. Tizhoosh, “Opposition-based learning: a new scheme for machine intelligence,” in Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA '05), pp. 695–701, Vienna, Austria, November 2005.
 C. Qu, S. Zhao, Y. Fu, and W. He, “Chicken swarm optimization based on elite opposition-based learning,” Mathematical Problems in Engineering, vol. 2017, Article ID 2734362, 20 pages, 2017.
 W. Wei, J. Zhou, F. Chen, and H. Yuan, “Constrained differential evolution using generalized opposition-based learning,” Soft Computing, vol. 20, no. 11, pp. 4413–4437, 2016.
 P. Wang, W. Gao, X. Qian et al., “Hybrid fireworks explosion optimization algorithm using elite opposition-based learning,” Journal of Computer Applications, vol. 34, no. 10, pp. 2886–2890, 2014.
 P. Shao, Z.-J. Wu, X.-Y. Zhou, and C.-S. Deng, “Improved particle swarm optimization algorithm based on opposite learning of refraction,” Tien Tzu Hsueh Pao/Acta Electronica Sinica, vol. 43, no. 11, pp. 2137–2144, 2015.
 B. McClintock, “The origin and behavior of mutable loci in maize,” Proceedings of the National Academy of Sciences, vol. 36, no. 6, pp. 344–355, 1950.
 B. McClintock, “Chromosome organization and genic expression,” Cold Spring Harbor Symposia on Quantitative Biology, vol. 16, pp. 13–47, 1951.
 T. M. Chan, K. F. Man, K.-S. Tang, and S. Kwong, “A jumping gene paradigm for evolutionary multiobjective optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 143–159, 2008.
 H.-B. Ouyang, L.-Q. Gao, S. Li, and X.-Y. Kong, “Improved global-best-guided particle swarm optimization with learning operation for global optimization problems,” Applied Soft Computing, vol. 52, pp. 987–1008, 2017.
 Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, “A novel multi-objective particle swarm optimization with multiple search strategies,” European Journal of Operational Research, vol. 247, no. 3, pp. 732–744, 2015.
 E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: empirical results,” Evolutionary Computation, vol. 8, no. 2, pp. 173–195, 2000.
 K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable multi-objective optimization test problems,” in Proceedings of the Congress on Evolutionary Computation (CEC '02), pp. 825–830, May 2002.
 C. Dai, Y. Wang, and M. Ye, “A new multi-objective particle swarm optimization algorithm based on decomposition,” Information Sciences, vol. 325, pp. 541–557, 2015.
 A. J. Nebro, J. J. Durillo, J. García-Nieto, C. A. C. Coello, F. Luna, and E. Alba, “SMPSO: a new PSO-based metaheuristic for multi-objective optimization,” in Proceedings of the IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM '09), pp. 66–73, Nashville, Tenn, USA, April 2009.
 K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
 Q. Zhang and H. Li, “MOEA/D: a multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
 J. Xiao, J.-J. Li, X.-X. Hong et al., “An Improved MOEA/D Based on Reference Distance for Software Project Portfolio Optimization,” Complexity, vol. 2018, Article ID 3051854, 16 pages, 2018.
Copyright
Copyright © 2018 Fei Han et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.