Abstract

Quantum-behaved particle swarm optimization (QPSO) is a variant of the traditional particle swarm optimization (PSO) algorithm. QPSO, which was originally developed for continuous search spaces, outperforms traditional PSO in search ability. This paper analyzes the main factors that affect the search ability of QPSO and converts the particle movement formula into a mutation condition by introducing a rejection region, thereby proposing a new binary algorithm named the swarm optimization genetic algorithm (SOGA), because in form it resembles the genetic algorithm (GA) more than PSO. SOGA has crossover and mutation operators like GA but does not require the crossover and mutation probabilities to be set, so it has fewer parameters to control. The proposed algorithm was tested on several high-dimensional nonlinear functions in the binary search space, and the results were compared with those of BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.

1. Introduction

The particle swarm optimization (PSO) algorithm is a population-based optimization method originally introduced by Eberhart and Kennedy in 1995 [1]. In PSO, the position of a particle is represented by a vector in the search space, and the movement of the particle is determined by an assigned vector called the velocity vector. Each particle updates its velocity based on its current velocity, the best previous position of the particle, and the global best position of the population. PSO is extensively used for optimization problems because it has a simple structure and is easy to implement. However, it has some disadvantages; for example, it easily falls into local optima when solving complex, high-dimensional problems [2, 3]. Hence a number of variant algorithms have been proposed to overcome these disadvantages [4, 5].

The particle swarm algorithm based on probabilistic convergence is one such variant. This kind of particle swarm algorithm lets the particles move according to a probability distribution instead of the velocity-displacement movement mechanism. The Bare Bones PSO (BBPSO) family is a typical class of probabilistic PSO algorithms [6–8]. The Gaussian distribution was used in the original version of BBPSO, which was proposed by Kennedy [6]; several later BBPSO variants used other distributions, which seem to generate better results [7–9].
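To make the probabilistic movement concrete, here is a minimal Python sketch (ours) of the original Gaussian bare-bones update from [6]: each coordinate of the new position is sampled from a Gaussian centered midway between the personal and global best, with standard deviation equal to their coordinate-wise distance. No velocity term is involved.

```python
import numpy as np

def bbpso_update(pbest, gbest, rng=np.random.default_rng()):
    """One Bare Bones PSO position update for a single particle.

    Each coordinate is drawn from a Gaussian whose mean is the midpoint
    of the personal best and global best and whose standard deviation is
    the coordinate-wise distance between them.
    """
    pbest = np.asarray(pbest, dtype=float)
    gbest = np.asarray(gbest, dtype=float)
    mean = (pbest + gbest) / 2.0
    std = np.abs(pbest - gbest)
    return rng.normal(mean, std)  # new position; no velocity is kept
```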

Inspired by quantum theory and the trajectory analysis of PSO [10], Sun et al. proposed a new probabilistic algorithm, the quantum-behaved particle swarm optimization (QPSO) algorithm [11]. In QPSO, each particle has a target point, defined as a linear combination of the best previous position of the particle and the global best position. The particle appears around the target point following a double exponential distribution. The QPSO algorithm essentially belongs to the BBPSO family; its update equation uses an adaptive strategy and has fewer parameters to be adjusted [12–14]. QPSO has been shown to perform well in finding optimal solutions for continuous optimization problems and has been successfully applied to a wide range of areas such as multiobjective optimization [15, 16], clustering [17–19], neural network training [20–22], image processing [23, 24], engineering design [25], and dynamic optimization [26].

PSO and QPSO are effective tools for solving global optimization problems, but they were originally developed for continuous search spaces. Kennedy and Eberhart introduced a binary version of PSO for discrete problems, named binary PSO (BPSO) [27], in which the velocity trajectories are redefined as changes in the probability that each bit of a particle takes the value 1. BPSO has a simple structure and is easy to implement; hence, it is extensively employed in optimization problems [28–30]. But it also suffers from some disadvantages when solving complex, high-dimensional problems [28]. Sun et al. proposed binary QPSO (BQPSO), in which the target point is obtained by applying the crossover operator to the best previous position of the particle and the global best position. Experimental results show that BQPSO generally finds better solutions than BPSO [31].

In recent years, BQPSO has been used successfully in many fields [32–34]. However, although BQPSO broadens the application fields of QPSO, it does not show the same advantage that QPSO has in continuous spaces, even though the QPSO approach should perform well on problems defined over discrete spaces. This paper analyzes the main factors that affect the search ability of QPSO and converts the particle movement formula into a mutation condition by introducing a rejection region. It then designs a new binary-coded QPSO, which has crossover and mutation operators and resembles the genetic algorithm (GA) in form; that is, the proposed algorithm is a new genetic algorithm that incorporates the core idea of QPSO. It is therefore named the swarm optimization genetic algorithm (SOGA).

Compared with GA, SOGA has no selection operator, and each individual participates in evolution based on the information of the population and its own information. At the same time, the mutation probability of SOGA is not fixed: in the early stage of the algorithm, the probability of mutation is large and the population keeps its diversity; as the algorithm iterates, the mutation probability tends to zero, so the algorithm can finally converge.

The rest of this paper is organized as follows. Section 2 is a brief introduction to PSO and binary PSO; Section 3 summarizes QPSO and binary QPSO; Section 4 introduces the binary-coding mutation condition converted from the particle movement formula of QPSO; Section 5 proposes the new binary QPSO algorithm, SOGA, and then discusses the differences between this algorithm and QPSO and GA; Section 6 presents the experimental results on the benchmark functions; finally, the paper is concluded in Section 7.

2. Particle Swarm Optimization

The particle swarm optimization (PSO) algorithm is a population-based optimization technique for continuous spaces. It can be described mathematically as follows.

Assume that the size of the population is $M$ and the dimension of the search space is $D$; then the $i$th particle of the swarm can be represented by a position vector $X_i = (X_{i1}, X_{i2}, \ldots, X_{iD})$; the velocity of the particle is denoted by the vector $V_i = (V_{i1}, V_{i2}, \ldots, V_{iD})$; the vector $P_i = (P_{i1}, P_{i2}, \ldots, P_{iD})$ is the best previous position of particle $i$, called the personal best position, and $G = (G_1, G_2, \ldots, G_D)$ is the best position of the population, called the global best position.

The velocity of particle $i$ is calculated as
$$V_{ij}(t+1) = w\,V_{ij}(t) + c_1 r_1 \big(P_{ij}(t) - X_{ij}(t)\big) + c_2 r_2 \big(G_j(t) - X_{ij}(t)\big), \quad (1)$$
where $i = 1, 2, \ldots, M$, $j = 1, 2, \ldots, D$, $M$ is the population size, $t$ is the number of iterations, $w$ is the inertia weight, $c_1$ and $c_2$ are acceleration coefficients, and $r_1$ and $r_2$ are random numbers in the interval $(0, 1)$.

Then the next position is updated as follows:
$$X_{ij}(t+1) = X_{ij}(t) + V_{ij}(t+1). \quad (2)$$
The PSO algorithm is applied to optimization problems in real-valued search spaces, but many optimization problems are set in discrete spaces. Kennedy and Eberhart proposed a discrete binary version of PSO, named binary PSO (BPSO), where each component of the particle position has two possible values, "0" or "1." The velocity formula in BPSO remains unchanged, and the particle position is updated as follows:
$$X_{ij}(t+1) = \begin{cases} 1, & \text{rand} < S\big(V_{ij}(t+1)\big), \\ 0, & \text{otherwise}, \end{cases} \quad (3)$$
where rand is a random number in the interval $(0, 1)$ and the function $S(\cdot)$ is the sigmoid function
$$S(v) = \frac{1}{1 + e^{-v}}. \quad (4)$$
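A minimal Python sketch (ours) of one BPSO step for a single particle, assuming the standard velocity update (1) and the sigmoid rule (3)-(4); the names and default parameter values are illustrative:

```python
import numpy as np

def bpso_step(x, v, pbest, gbest, w=0.8, c1=2.0, c2=2.0,
              rng=np.random.default_rng()):
    """One BPSO iteration for a single particle (x, pbest, gbest are 0/1 arrays)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Standard PSO velocity update; positions are bit vectors.
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    s = 1.0 / (1.0 + np.exp(-v))               # sigmoid maps velocity to probability
    x = (rng.random(x.shape) < s).astype(int)  # each bit becomes 1 with prob. s
    return x, v
```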

3. Quantum-Behaved Particle Swarm Optimization

Inspired by trajectory analyses of PSO in [10], Sun et al. proposed a novel variant of PSO, named quantum-behaved particle swarm optimization (QPSO), which outperforms the traditional PSO in search ability.

QPSO sets a target point for each particle. Denote by $p_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$ the target point for particle $i$, whose coordinates are
$$p_{ij}(t) = \varphi_j\, P_{ij}(t) + \big(1 - \varphi_j\big)\, G_j(t), \quad (5)$$
where $\varphi_j$ is a random number in the interval $(0, 1)$. The trajectory analysis in [10] shows that $p_i$ is the local attractor of particle $i$; that is, in PSO, particle $i$ converges to it.

The position of particle $i$ is updated as follows:
$$X_{ij}(t+1) = p_{ij}(t) \pm \alpha\,\big|m_j(t) - X_{ij}(t)\big|\,\ln\frac{1}{u}, \quad (6)$$
where $u$ is a random number in the interval $(0, 1)$ and $m(t) = (m_1(t), m_2(t), \ldots, m_D(t))$ is known as the mean best position, defined as the average of the personal best positions of all particles:
$$m(t) = \frac{1}{M}\sum_{i=1}^{M} P_i(t). \quad (7)$$
The parameter $\alpha$ is called the Contraction-Expansion Coefficient, which can be tuned to control the convergence speed of the algorithm.
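The update can be sketched in a few lines of Python; this is an illustrative implementation (ours) of (5)-(7), with array shapes and the default $\alpha$ of our choosing:

```python
import numpy as np

def qpso_step(X, P, G, alpha=0.75, rng=np.random.default_rng()):
    """One QPSO iteration for the whole swarm.

    X: (M, D) current positions; P: (M, D) personal bests; G: (D,) global best.
    Returns new positions sampled around each particle's local attractor.
    """
    M, D = X.shape
    m = P.mean(axis=0)                       # mean best position, eq. (7)
    phi = rng.random((M, D))
    p = phi * P + (1.0 - phi) * G            # local attractors, eq. (5)
    u = rng.random((M, D))
    L = alpha * np.abs(m - X) * np.log(1.0 / u)
    sign = np.where(rng.random((M, D)) < 0.5, 1.0, -1.0)  # random +/- branch
    return p + sign * L                      # eq. (6)
```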

Because the iterations of QPSO are different from those of PSO, the methodology of BPSO cannot be applied directly to QPSO. Sun et al. introduced the crossover operator of GA into QPSO and proposed binary QPSO (BQPSO). In BQPSO, $X_i$ still represents the position of particle $i$, but it must be emphasized that $X_i$ is a binary string rather than a vector, and $X_{ij}$ is the $j$th substring of $X_i$, not the $j$th bit of the binary string. Assume the length of each substring is $l$; then the length of $X_i$ is $l \times D$.

The target point $p_i$ for particle $i$ is generated through the crossover operator; that is, BQPSO exerts a crossover operation on the personal best position $P_i$ and the global best position $G$ to generate two offspring binary strings, and $p_i$ is randomly selected from them.

Define
$$b = \alpha \cdot d_H\big(X_{ij}(t), m_j(t)\big) \cdot \ln\frac{1}{u}, \quad (8)$$
where $t$ is the number of iterations and $d_H(\cdot,\cdot)$ is the Hamming distance between $X_{ij}$ and $m_j$. For two bit strings, the Hamming distance is the number of bit positions in which the two strings differ. Here $m_j$ is the $j$th substring of the mean best position $m$, and each bit of $m$ is determined by the states of the corresponding bits of all particles' personal best positions: if more particles take on 1 at that bit, the bit of $m$ is 1; otherwise the bit is 0.
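As a small illustration, the following Python snippet (ours) computes the Hamming distance and the bitwise-majority mean best position described above; ties fall to 0, matching the stated rule:

```python
import numpy as np

def hamming(a, b):
    """Number of bit positions in which binary arrays a and b differ."""
    return int(np.sum(a != b))

def mean_best(Pbest):
    """Bitwise-majority mean best position used by BQPSO.

    Pbest: (M, L) array of personal best bit strings. A bit of m is 1
    when more than half of the particles have 1 at that position.
    """
    return (Pbest.sum(axis=0) * 2 > Pbest.shape[0]).astype(int)

# Example: three 4-bit personal bests -> m = [1, 0, 1, 0]
Pbest = np.array([[1, 0, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 1, 1]])
print(mean_best(Pbest), hamming(Pbest[0], Pbest[2]))
```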

For each bit of $p_{ij}$, when rand $< b/l$, where rand is a uniform random number in the interval $(0, 1)$, execute operations as follows: if the state of the bit is 1, then set its state to 0; else set its state to 1.

4. A Mutation Condition for Use in Binary Space

The reason why the QPSO algorithm has better global search capability than the traditional PSO algorithm is that it abandons the velocity-displacement model of traditional PSO: in QPSO, the movement of a particle toward its target point follows no determined trajectory, and the particle can appear at any position in the whole feasible search space with a certain probability, following the double exponential distribution [13, 14]. Such a position can be far from the target point and may be superior to the current global best position of the population. This property should also be reflected in the construction of a binary QPSO algorithm.

The probability density function of the position of particle $i$ in QPSO is
$$f\big(X_{ij}\big) = \frac{1}{L_{ij}}\, e^{-2\left|X_{ij} - p_{ij}\right|/L_{ij}}, \qquad L_{ij} = 2\alpha\left|m_j - X_{ij}\right|. \quad (9)$$
Set $\mu = p_{ij}$ and $\beta = L_{ij}/2$; then (9) can be rewritten as
$$f(x) = \frac{1}{2\beta}\, e^{-\left|x - \mu\right|/\beta}. \quad (10)$$
That is, the position obeys the double exponential (Laplace) distribution, whose mean and variance are $\mu$ and $2\beta^2$, respectively. The graph of probability density function (10) is shown in Figure 1. Since the domain of (10) is $(-\infty, +\infty)$, a particle can appear at any position of the search space, but the probability that a particle appears at a position far away from its target point is small. When $t \to \infty$, $L_{ij} \to 0$, so the variance $2\beta^2 \to 0$, which means that $X_{ij}$ converges to $p_{ij}$ with probability 1.
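For completeness, the stated variance of (10) can be checked directly: substituting $z = \left|x - \mu\right|/\beta$ and using $\int_0^{\infty} z^2 e^{-z}\, dz = \Gamma(3) = 2$,
$$\operatorname{Var}[x] = \int_{-\infty}^{\infty} \frac{(x-\mu)^{2}}{2\beta}\, e^{-\left|x-\mu\right|/\beta}\, dx = \beta^{2}\int_{0}^{\infty} z^{2} e^{-z}\, dz = 2\beta^{2},$$
and the mean is $\mu$ by the symmetry of the density about $\mu$.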

When the position of a particle is binary-encoded, it is hard to describe the relative position of two points by a continuous measure between two binary strings. Similar to setting a rejection region in hypothesis testing, we set a threshold value $\delta$. When the jump length $\alpha\left|m_j - X_{ij}\right|\ln(1/u)$ falls into the rejection region, as shown in Figure 2, we set $X_{ij}(t+1) = M(p_{ij})$; that is, $p_{ij}$ is mutated; otherwise $X_{ij}(t+1) = p_{ij}$. Here $M(\cdot)$ denotes the mutation operation on $p_{ij}$.

For any $u$, which is a random number in the interval $(0, 1)$, the condition that the new position does not fall into the rejection region is
$$\alpha\left|m_j - X_{ij}\right|\ln\frac{1}{u} \le \delta. \quad (11)$$
The left side of Condition (11) can be written as
$$\alpha\left|m_j - X_{ij}\right|\ln\frac{1}{u} = -\alpha\left|m_j - X_{ij}\right|\ln u, \quad (12)$$
so Condition (11) means $\ln u \ge -\delta/\big(\alpha\left|m_j - X_{ij}\right|\big)$; accordingly,
$$u \ge e^{-\delta/\left(\alpha\left|m_j - X_{ij}\right|\right)}. \quad (13)$$
In order to ensure that the algorithm can converge, the distance term is taken with respect to the mean best position:
$$\left|m_j - X_{ij}\right| \longrightarrow d\big(m, X_i\big), \quad (14)$$
where $m$ is the mean best position of the population.

Then Condition (13) becomes
$$u \ge e^{-\delta/\left(\alpha\, d\left(m, X_i\right)\right)}, \quad (15)$$
where $d(\cdot,\cdot)$ is used to measure the difference between two binary strings. The Hamming distance can be used here.

Assume $\lambda = \delta/\alpha$; then
$$u \ge e^{-\lambda/d\left(m, X_i\right)}. \quad (16)$$

For (16), when the value of $d(m, X_i)$ is small, the function $e^{-\lambda/d(m, X_i)}$ has fast rates of change, as shown in Figure 3, so Condition (15) suffers from the effect of the initial value of $d(m, X_i)$. Mutation is applied when the position falls into the rejection region, that is, when (16) fails; this mutation condition can therefore be written in the equivalent form
$$b\, d\big(m, X_i\big)\,\ln\frac{1}{u} > 1, \quad (17)$$
where the parameter $b = 1/\lambda$ is a constant that is greater than zero.

5. Swarm Optimization Genetic Algorithm

Based on the mutation condition (17), a mutation operator is introduced into BQPSO. $X_i$ still represents the position of particle $i$, $P_i$ is the personal best position of particle $i$, $G$ is the global best position, and $m$ is the mean best position, defined the same as in BQPSO.

Different from BQPSO, the crossover or mutation operation is applied to the whole binary string instead of bit by bit. Because the procedure of the algorithm is similar to GA, it is named the swarm optimization genetic algorithm (SOGA). The process can be described as follows.

(1) Initialize a population of particles with random positions $X_i$ in the binary space;
(2) Set the personal best positions $P_i = X_i$, and compute the mean best position $m$;
(3) Evaluate the fitness of the particles and determine the global best position $G$;
(4) while the termination condition is not reached do
(5)  for each particle $i$ do
(6)   Exert the crossover operation on $P_i$ and $G$ to generate two offspring binary strings;
(7)   $p_i$ is randomly selected from them;
(8)   if Condition (17) is true then
(9)    Exert the mutation operation on $p_i$;
(10)   end if
(11)   Set $X_i = p_i$;
(12)   Compute the fitness of particle $X_i$, and update $P_i$;
(13)  end for
(14)  Update $G$ and the mean best position $m$;
(15) end while

Compared to a GA with the same crossover and mutation operators, SOGA has the following characteristics:

(1) SOGA has no selection operator and no crossover probability, and its crossover operator is exerted directly on $P_i$ and $G$. Therefore, the form of the fitness function has no effect on the algorithm, and the target function of a maximization problem can be set directly as the fitness function.

(2) Condition (17) can be turned into
$$u < e^{-1/\left(b\, d\left(m, X_i\right)\right)}. \quad (18)$$
Since $0 < d(m, X_i) \le N$, where $N$ is the length of the binary string, the range of $e^{-1/(b\, d(m, X_i))}$ is $\big(0, e^{-1/(bN)}\big]$, and $u$ is a random number in the interval $(0, 1)$; thus Condition (18) is equivalent to an adaptive mutation probability
$$P_m = e^{-1/\left(b\, d\left(m, X_i\right)\right)}, \quad (19)$$
where $b$ is a constant that is greater than zero and $d(m, X_i)$ decreases as the iterations proceed. Therefore, $P_m$ shrinks, which causes the algorithm to converge.

(3) $b$ is the only parameter of SOGA; it can be tuned to control the convergence speed of the algorithm, like the Contraction-Expansion Coefficient in BQPSO.
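The following Python sketch (ours) assembles the whole loop under the reconstruction above. The OneMax fitness is a stand-in for a decoded objective, single-point crossover and single-point mutation are assumed, and the mutation test implements Condition (17) in the form $b\, d(m, X_i)\ln(1/u) > 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(bits):
    """Toy maximization target (OneMax); stands in for a decoded objective."""
    return bits.sum()

def soga(n_particles=50, n_bits=120, b=1.0, iters=500):
    X = rng.integers(0, 2, size=(n_particles, n_bits))   # (1) random swarm
    P = X.copy()                                          # (2) personal bests
    fit_P = np.array([fitness(p) for p in P])
    G = P[fit_P.argmax()].copy()                          # (3) global best
    for _ in range(iters):                                # (4)
        m = (P.sum(axis=0) * 2 > n_particles).astype(int) # mean best position
        for i in range(n_particles):                      # (5)
            # (6)-(7) single-point crossover of P_i and G; pick one child
            c = rng.integers(1, n_bits)
            child = np.concatenate([P[i][:c], G[c:]]) if rng.random() < 0.5 \
                    else np.concatenate([G[:c], P[i][c:]])
            # (8)-(10) mutation condition (17): b * d(m, X_i) * ln(1/u) > 1
            d = int(np.sum(m != X[i]))
            if d > 0 and b * d * np.log(1.0 / rng.random()) > 1.0:
                child[rng.integers(n_bits)] ^= 1          # single-point mutation
            X[i] = child                                  # (11)
            f = fitness(X[i])                             # (12) update P_i
            if f > fit_P[i]:
                P[i], fit_P[i] = X[i].copy(), f
        G = P[fit_P.argmax()].copy()                      # (14) update G and m
    return G, fit_P.max()

best, best_fit = soga()
print(best_fit)
```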

When the value of $b$ is 0.5, 1, and 2, the curves of the mutation probability changing with $d(m, X_i)$ are shown in Figure 4. The figure demonstrates that the smaller the value of $b$, the faster the convergence speed of the algorithm. It can also be seen that the global searching ability of the algorithm is reduced when $b$ is too small. So a moderate value of $b$ is set in SOGA.
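To make the trade-off concrete, this snippet (ours) tabulates the adaptive mutation probability (19), as reconstructed above, for the three values of $b$ at several Hamming distances:

```python
import numpy as np

# P_m = exp(-1 / (b * d)): probability of mutating when the swarm's
# Hamming spread is d. Larger spread (early search) -> more mutation.
for b in (0.5, 1.0, 2.0):
    row = [np.exp(-1.0 / (b * d)) for d in (1, 5, 20, 60)]
    print(f"b={b}:", " ".join(f"{p:.3f}" for p in row))
# Smaller b suppresses mutation sooner as d shrinks, hence faster
# convergence but weaker global search.
```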

6. Experimental Results

The proposed SOGA is compared with BPSO, BQPSO, and GA. They are tested on the following 10 benchmark problems to be minimized [28, 35]:

($f_1$) Sphere Function:
$$f_1(x) = \sum_{i=1}^{D} x_i^{2}. \quad (20)$$

($f_2$) Schwefel's Problem 2.22:
$$f_2(x) = \sum_{i=1}^{D} \left|x_i\right| + \prod_{i=1}^{D} \left|x_i\right|. \quad (21)$$

($f_3$) Schwefel's Problem 1.2:
$$f_3(x) = \sum_{i=1}^{D} \Bigg(\sum_{j=1}^{i} x_j\Bigg)^{2}. \quad (22)$$

($f_4$) Step Function:
$$f_4(x) = \sum_{i=1}^{D} \big(\lfloor x_i + 0.5 \rfloor\big)^{2}. \quad (23)$$

($f_5$) Schwefel's Problem 2.21:
$$f_5(x) = \max_{1 \le i \le D} \left|x_i\right|. \quad (24)$$

($f_6$) Minima Function:
$$f_6(x) = \frac{1}{D}\sum_{i=1}^{D} \big(x_i^{4} - 16x_i^{2} + 5x_i\big). \quad (25)$$

($f_7$) Schwefel's Problem 2.26:
$$f_7(x) = -\sum_{i=1}^{D} x_i \sin\!\Big(\sqrt{\left|x_i\right|}\Big). \quad (26)$$

($f_8$) Ackley Function:
$$f_8(x) = -20\exp\!\Bigg(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^{2}}\Bigg) - \exp\!\Bigg(\frac{1}{D}\sum_{i=1}^{D}\cos\big(2\pi x_i\big)\Bigg) + 20 + e. \quad (27)$$

($f_9$) Generalized Penalized Function:
$$f_9(x) = \frac{\pi}{D}\Bigg\{10\sin^{2}\big(\pi y_1\big) + \sum_{i=1}^{D-1}\big(y_i - 1\big)^{2}\Big[1 + 10\sin^{2}\big(\pi y_{i+1}\big)\Big] + \big(y_D - 1\big)^{2}\Bigg\} + \sum_{i=1}^{D} u\big(x_i, 10, 100, 4\big), \quad (28)$$
where $y_i = 1 + (x_i + 1)/4$ and $u(\cdot)$ is the standard penalty function.

($f_{10}$) Griewank Function:
$$f_{10}(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^{2} - \prod_{i=1}^{D}\cos\!\bigg(\frac{x_i}{\sqrt{i}}\bigg) + 1. \quad (29)$$

In these functions, $f_1$–$f_5$ are unimodal and $f_6$–$f_{10}$ are multimodal. Their optimum values are all zero except those of $f_6$ and $f_7$. The minimum values of $f_6$ and $f_7$ are $-78.3323$ and $-418.9829 \times D$, respectively, where $D$ is the dimension of the function.

In the experiments, the dimension of each function is 8, and the binary code length of each continuous variable is 15, so the length of a particle is 120 bits for each function. The size of the population is 50, and the total number of iterations is set to 500. The parameters of the algorithms are listed in Table 1, where $p_c$ is the crossover probability and $p_m$ is the mutation probability in GA.
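For reference, a minimal sketch (ours) of the binary encoding assumed in these experiments: 15 bits per variable, linearly mapped onto the variable's range. The bounds and names are illustrative.

```python
import numpy as np

BITS = 15  # bits per continuous variable, as in the experiments

def decode(bits, lo, hi):
    """Map a BITS-long 0/1 array to a real value in [lo, hi]."""
    k = int("".join(map(str, bits)), 2)          # integer in [0, 2^BITS - 1]
    return lo + (hi - lo) * k / (2**BITS - 1)

def decode_particle(particle, lo, hi):
    """Split a D*BITS bit string into D variables and decode each."""
    return np.array([decode(particle[i:i + BITS], lo, hi)
                     for i in range(0, len(particle), BITS)])

# Example: evaluate the Sphere function on a random 120-bit particle
rng = np.random.default_rng(1)
particle = rng.integers(0, 2, size=8 * BITS)
x = decode_particle(particle, -100.0, 100.0)     # illustrative variable range
print((x ** 2).sum())
```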

The four algorithms were run independently 30 times on the benchmark functions, and the best target function value was recorded for each run. To compare the four algorithms, the 30 data sets were analyzed using the following statistics: the mean, the standard deviation (STD), the best, the worst, and the median; these results are reported in Tables 2 and 3.

Moreover, a statistical test is conducted in order to determine whether the average best results differ with statistical significance. The confidence level is fixed at 0.95, and the tests return $p$ values, which are shown in Tables 4 and 5. We use SAS for the statistical testing; in the SAS system, if the $p$ value is less than 0.0001, the system displays $<.0001$. The value of $h$ in Tables 4 and 5 shows the result of each pairwise comparison: $h = 1$ indicates that the former algorithm in the comparison is significantly better than the latter; $h = 0$ represents no significant difference between the two compared algorithms; $h = -1$ indicates that the former algorithm is significantly worse than the latter.

The results of SOGA compared with BPSO and BQPSO are listed in Tables 2 and 4. The results show that SOGA surpasses BPSO and BQPSO in minimizing the ten benchmark functions, with a single exception. Figure 5 illustrates the convergence of the best target function value of the population in one run. As shown in Figure 5, SOGA converges faster than BPSO and BQPSO.

Since SOGA has almost the same form as GA, the same crossover and mutation operators, single-point crossover and single-point mutation, are used in both algorithms. In GA, the elitist strategy is applied to improve convergence and optimization results. It should be noted that GA cannot converge within 500 iterations for most of the functions; for a fair comparison, the number of iterations of GA is set to 2000 in Table 3 to ensure that the algorithm fully converges.

For high-dimensional functions, assume $X_i = (X_{i1}, X_{i2}, \ldots, X_{iD})$ is the binary string of particle (or individual) $i$, where $D$ is the number of dimensions and $X_{ij}$ is the $j$th substring of $X_i$. It is easy for GA or SOGA to exert the crossover and mutation operations on each substring in turn, instead of on the whole of $X_i$. For instance, in SOGA, Condition (17) can be written as
$$b\, d\big(m_j, X_{ij}\big)\,\ln\frac{1}{u} > 1 \quad (30)$$
for each substring $X_{ij}$ of $X_i$, where $m_j$ is the $j$th substring of $m$. The process of SOGA when the crossover and mutation operations act on substrings can be described as follows; the same scheme can also be used in GA.

(1) Initialize a population of particles with random positions $X_i$ in the binary space;
(2) Set the personal best positions $P_i = X_i$, and compute the mean best position $m$;
(3) Evaluate the fitness of the particles and determine the global best position $G$;
(4) while the termination condition is not reached do
(5)  for each particle $i$ do
(6)   for each substring $X_{ij}$ of particle $i$ do
(7)    Exert the crossover operation on $P_{ij}$ and $G_j$ to generate two offspring binary strings;
(8)    $p_{ij}$ is randomly selected from them;
(9)    if Condition (30) is true then
(10)    Exert the mutation operation on $p_{ij}$;
(11)    end if
(12)    Set $X_{ij} = p_{ij}$;
(13)   end for
(14)   Compute the fitness of particle $X_i$, and update $P_i$;
(15)  end for
(16)  Update $G$ and the mean best position $m$;
(17) end while
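A sketch (ours) of the per-substring step for one particle, again under the reconstructed Condition (30); BITS and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
BITS = 15  # substring length per dimension, as in the experiments

def substring_step(P_i, G, X_i, m, b=1.0):
    """Per-substring crossover and conditional mutation for one particle.

    All arguments are 0/1 arrays of length D * BITS; the operators act on
    each BITS-long substring independently, testing Condition (30):
    b * d(m_j, X_ij) * ln(1/u) > 1.
    """
    child = np.empty_like(X_i)
    for s in range(0, len(X_i), BITS):
        Pj, Gj, Xj, mj = (a[s:s + BITS] for a in (P_i, G, X_i, m))
        c = rng.integers(1, BITS)                      # single-point crossover
        pj = np.concatenate([Pj[:c], Gj[c:]]) if rng.random() < 0.5 \
             else np.concatenate([Gj[:c], Pj[c:]])
        d = int(np.sum(mj != Xj))                      # substring Hamming distance
        if d > 0 and b * d * np.log(1.0 / rng.random()) > 1.0:
            pj[rng.integers(BITS)] ^= 1                # single-point mutation
        child[s:s + BITS] = pj
    return child
```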

The convergence processes of SOGA and GA when the crossover and mutation operations act on substrings of particles (or individuals) are shown in Figure 6; the results of SOGA and GA are listed in Tables 3, 4, and 5. The experimental results show that SOGA is clearly superior to GA in terms of solution accuracy and convergence. For high-dimensional functions, exerting the crossover and mutation operations on substrings is effective in improving the convergence speed and optimization ability; as shown in Tables 3 and 4, it significantly improves the convergence rate of GA. Its influence on SOGA is less pronounced; according to Table 5, the substring scheme performs better in minimizing four of the functions, one of them in particular.

7. Conclusions

In this study, SOGA, a binary swarm intelligence algorithm based on QPSO and binary QPSO, is introduced. It converts the movement formula of QPSO into a mutation condition, thus introducing the mutation operator of GA. SOGA has a similar form to GA but does not need preset crossover and mutation probabilities, so it has fewer parameters to control. SOGA integrates the strong points of GA and PSO. The experimental results show that SOGA is distinctly superior to BPSO, BQPSO, and GA in terms of solution accuracy and convergence. Furthermore, since SOGA has the same crossover and mutation operators as GA, many improvements to GA can be applied to it; therefore, the algorithm has good application and research prospects.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work was supported by the National Natural Science Foundation of China (551276199).