Computational Intelligence and Neuroscience

Volume 2017, Article ID 2782679, 15 pages

https://doi.org/10.1155/2017/2782679

## A Swarm Optimization Genetic Algorithm Based on Quantum-Behaved Particle Swarm Optimization

^{1}College of Pipeline and Civil Engineering, China University of Petroleum, Qingdao 266580, China

^{2}Shengli College, China University of Petroleum, Dongying, Shandong 257000, China

Correspondence should be addressed to Ming-hai Xu; minghai@upc.edu.cn

Received 30 January 2017; Revised 10 April 2017; Accepted 20 April 2017; Published 25 May 2017

Academic Editor: Ezequiel López-Rubio

Copyright © 2017 Tao Sun and Ming-hai Xu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The quantum-behaved particle swarm optimization (QPSO) algorithm is a variant of traditional particle swarm optimization (PSO). QPSO, originally developed for continuous search spaces, outperforms traditional PSO in search ability. This paper analyzes the main factors that affect the search ability of QPSO and converts the particle movement formula into a mutation condition by introducing a rejection region, thereby proposing a new binary algorithm, named the swarm optimization genetic algorithm (SOGA) because in form it resembles a genetic algorithm (GA) more than PSO. Like GA, SOGA has crossover and mutation operators, but it does not need preset crossover and mutation probabilities, so it has fewer parameters to control. The proposed algorithm was tested on several nonlinear high-dimensional functions in the binary search space, and the results were compared with those of BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.

#### 1. Introduction

The particle swarm optimization (PSO) algorithm is a population-based optimization method, originally introduced by Eberhart and Kennedy in 1995 [1]. In PSO, the position of a particle is represented by a vector in the search space, and the movement of the particle is determined by an assigned vector called the velocity vector. Each particle updates its velocity based on its current velocity, the best previous position of the particle, and the global best position of the population. PSO is extensively used for optimization problems because it has a simple structure and is easy to implement. However, it has some disadvantages, such as easily falling into local optima when solving complex and high-dimensional problems [2, 3]. Hence a number of variant algorithms have been proposed to overcome the disadvantages of PSO [4, 5].

Particle swarm algorithms based on probabilistic convergence form one family of such variants. These algorithms let the particles move according to a probability distribution instead of the velocity-displacement movement rule. The Bare Bones PSO (BBPSO) family is a typical class of probabilistic PSO algorithms [6–8]. The Gaussian distribution was used in the original version of BBPSO, proposed by Kennedy [6]; several later BBPSO variants used other distributions, which appear to generate better results [7–9].

Inspired by quantum theory and the trajectory analysis of PSO [10], Sun et al. proposed a new probabilistic algorithm, the quantum-behaved particle swarm optimization (QPSO) algorithm [11]. In QPSO, each particle has a target point, defined as a linear combination of the best previous position of the particle and the global best position. The particle appears around the target point following a double exponential distribution. The QPSO algorithm essentially belongs to the BBPSO family, and its update equation uses an adaptive strategy and has fewer parameters to be adjusted [12–14]. QPSO has been shown to perform well in finding optimal solutions for continuous optimization problems and has been successfully applied to a wide range of areas such as multiobjective optimization [15, 16], clustering [17–19], neural network training [20–22], image processing [23, 24], engineering design [25], and dynamic optimization [26].

PSO and QPSO have been effective tools for solving global optimization problems, but they were originally developed for continuous search spaces. Kennedy and Eberhart introduced a binary version of PSO for discrete problems, named binary PSO (BPSO) [27], where the trajectories are defined as changes in the probability that a particle sets a given bit to 1. BPSO has a simple structure and is easy to implement; hence, it is extensively employed in optimization problems [28–30]. But it also suffers from some disadvantages when solving complex and high-dimensional problems [28]. Sun et al. proposed binary QPSO (BQPSO), in which the target point is obtained by applying the crossover operator to the best previous position of the particle and the global best position. Experimental results show that BQPSO generally finds better solutions than BPSO [31].

In recent years, BQPSO has been used successfully in many fields [32–34]. However, although BQPSO broadens the application fields of QPSO, it does not show the same advantage there as in the continuous space, even though the QPSO approach should perform well on problems defined over discrete spaces. This paper analyzes the main factors that affect the search ability of QPSO and converts the particle movement formula into a mutation condition by introducing a rejection region. It then designs a new binary-coded QPSO, which has crossover and mutation operators and resembles a genetic algorithm (GA) in form; that is, the proposed algorithm is a new genetic algorithm that incorporates the core idea of QPSO. It is therefore named the swarm optimization genetic algorithm (SOGA).

Compared with GA, SOGA has no selection operator, and each individual participates in evolution based on both the information of the population and its own information. At the same time, the mutation probability of SOGA is not fixed: in the early stage of the algorithm, the mutation probability is large and the population can keep its diversity; as the algorithm iterates, the mutation probability tends to zero, and the algorithm can finally converge.

The rest of this paper is organized as follows. Section 2 is a brief introduction of PSO and binary PSO; Section 3 summarizes QPSO and binary QPSO; Section 4 introduces the mutation condition for binary coding converted from the particle movement formula of QPSO; Section 5 proposes the new binary QPSO algorithm, SOGA, and then discusses the differences between this algorithm and QPSO and GA; Section 6 presents the experimental results on the benchmark functions; finally, the paper is concluded in Section 7.

#### 2. Particle Swarm Optimization

Particle swarm optimization (PSO) algorithm is a population-based optimization technique used in continuous spaces. It can be mathematically described as follows.

Assume the size of the population is $M$ and the dimension of the search space is $N$; then the $i$th particle of the swarm can be represented by a position vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{iN})$; the velocity of particle $i$ is denoted by the vector $V_i = (v_{i1}, v_{i2}, \ldots, v_{iN})$; the vector $P_i = (p_{i1}, p_{i2}, \ldots, p_{iN})$ is the best previous position of particle $i$, called the personal best position, and $G = (g_1, g_2, \ldots, g_N)$ is the best position of the population, called the global best position.

The velocity of particle $i$ is calculated as follows:

$$v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_1 \left(p_{ij}(t) - x_{ij}(t)\right) + c_2 r_2 \left(g_j(t) - x_{ij}(t)\right),$$

where $i = 1, 2, \ldots, M$, $j = 1, 2, \ldots, N$, $M$ is the population size, $t$ is the number of iterations, $w$ is the inertia weight, $c_1$ and $c_2$ are acceleration coefficients, and $r_1$ and $r_2$ are random numbers in the interval $(0,1)$.

Then the next position is updated as follows:

$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1).$$

The PSO algorithm is applied to solve optimization problems in a real-valued search space, but many optimization problems are set in discrete space. Kennedy and Eberhart proposed a discrete binary version of PSO, named binary PSO (BPSO), where each component of the particle position has two possible values, “0” or “1.” The velocity formula in BPSO remains unchanged, and the particle position is updated as follows:

$$x_{ij}(t+1) = \begin{cases} 1, & \text{rand} < S\left(v_{ij}(t+1)\right), \\ 0, & \text{otherwise}, \end{cases}$$

where rand is a random number in the interval $(0,1)$ and $S(\cdot)$ is the sigmoid function

$$S(v) = \frac{1}{1 + e^{-v}}.$$
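The BPSO update above can be sketched in Python as follows; the parameter values (inertia weight, acceleration coefficients, velocity clamp) are illustrative assumptions, not values prescribed by this paper:

```python
import math
import random

def sigmoid(v):
    """S(v) = 1 / (1 + e^{-v}), mapping a velocity to a probability."""
    return 1.0 / (1.0 + math.exp(-v))

def bpso_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49, vmax=4.0, rng=random):
    """One BPSO update for a single particle (x, pbest, gbest are lists of 0/1 bits).

    The velocity rule is the standard PSO one; only the position rule changes:
    each bit becomes 1 with probability sigmoid(velocity)."""
    new_v, new_x = [], []
    for j in range(len(x)):
        vj = (w * v[j]
              + c1 * rng.random() * (pbest[j] - x[j])
              + c2 * rng.random() * (gbest[j] - x[j]))
        vj = max(-vmax, min(vmax, vj))  # clamp velocity, a common BPSO practice
        new_v.append(vj)
        new_x.append(1 if rng.random() < sigmoid(vj) else 0)
    return new_x, new_v
```

The fitness evaluation and the bookkeeping of `pbest`/`gbest` are omitted; they are identical to continuous PSO.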

#### 3. Quantum-Behaved Particle Swarm Optimization

Inspired by trajectory analyses of PSO in [10], Sun et al. proposed a novel variant of PSO, named quantum-behaved particle swarm optimization (QPSO), which outperforms the traditional PSO in search ability.

QPSO sets a target point for each particle; denote $p_i = (p_{i1}, p_{i2}, \ldots, p_{iN})$ as the target point for particle $i$, of which the coordinates are

$$p_{ij}(t) = \varphi_j\,P_{ij}(t) + \left(1 - \varphi_j\right) g_j(t),$$

where $\varphi_j$ is a random number in the interval $(0,1)$. The trajectory analysis in [10] shows that $p_i$ is the local attractor of particle $i$; that is, in PSO, particle $i$ converges to it.

The position of particle $i$ is updated as follows:

$$x_{ij}(t+1) = p_{ij}(t) \pm \alpha \left| mbest_j(t) - x_{ij}(t) \right| \ln\left(\frac{1}{u}\right),$$

where $u$ is a random number in the interval $(0,1)$ and $mbest = (mbest_1, mbest_2, \ldots, mbest_N)$ is known as the mean best position, defined as the average of the personal best positions of all particles:

$$mbest_j(t) = \frac{1}{M} \sum_{i=1}^{M} P_{ij}(t).$$

The parameter $\alpha$ is called the Contraction-Expansion Coefficient, which can be tuned to control the convergence speed of the algorithm.
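As a rough illustration, one QPSO iteration over the swarm, including the mean best position and the local attractor, might look as follows in Python; the function name and the default `alpha` are assumptions for the sketch:

```python
import math
import random

def qpso_step(swarm_x, pbest, gbest, alpha=0.75, rng=random):
    """One QPSO iteration over the whole swarm (lists of real-valued vectors).

    alpha is the Contraction-Expansion Coefficient. Each coordinate lands
    at the local attractor p_ij plus/minus alpha*|mbest_j - x_ij|*ln(1/u)."""
    m, n = len(swarm_x), len(swarm_x[0])
    # mean best position: dimension-wise average of all personal bests
    mbest = [sum(pbest[i][j] for i in range(m)) / m for j in range(n)]
    new_swarm = []
    for i in range(m):
        xi = []
        for j in range(n):
            phi = rng.random()
            p_ij = phi * pbest[i][j] + (1.0 - phi) * gbest[j]  # local attractor
            u = rng.random()
            step = alpha * abs(mbest[j] - swarm_x[i][j]) * math.log(1.0 / u)
            xi.append(p_ij + step if rng.random() < 0.5 else p_ij - step)
        new_swarm.append(xi)
    return new_swarm
```

Note that no velocity vector appears at all, which is the defining feature of the probabilistic (BBPSO-style) update.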

Because the update equations of QPSO are different from those of PSO, the methodology of BPSO cannot be applied to QPSO directly. Sun et al. introduced the crossover operator of GA into QPSO and proposed binary QPSO (BQPSO). In BQPSO, $X_i$ still represents the position of particle $i$, but it is necessary to emphasize that $X_i$ is a binary string rather than a vector and that $x_{ij}$ is the $j$th substring of $X_i$, not the $j$th bit of the binary string. Assume the length of each substring is $l$; then the length of $X_i$ is $N \cdot l$.

The target point $p_i$ for particle $i$ is generated through the crossover operator; that is, BQPSO performs crossover on the personal best position $P_i$ and the global best position $G$ to generate two offspring binary strings, and $p_i$ is randomly selected from them.

Define

$$b = \alpha \cdot d_H\left(X_i(t),\, mbest(t)\right) \cdot \ln\left(\frac{1}{u}\right),$$

where $t$ is the number of iterations and $d_H(X_i, mbest)$ is the Hamming distance between $X_i$ and $mbest$. Comparing two bit strings, the Hamming distance is the number of bit positions at which the two strings differ. $mbest_j$ is the $j$th substring of the mean best position, and each bit of $mbest$ is determined by the states of the corresponding bit of all particles’ personal best positions: if more particles take on 1 at that bit, the corresponding bit of $mbest$ is 1; otherwise the bit will be 0.

For each bit of $p_i$, when $\mathrm{rand}() < b/L$ (where $L$ is the length of the binary string), execute operations as follows: if the state of the bit is 1, then set its state to 0; else set its state to 1.
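The BQPSO move can be sketched as below; this is a minimal sketch under the assumption that the mutation condition is realized as a per-bit flip probability $b/L$ (so the expected Hamming step from the target point is $b$), and the helper names are hypothetical:

```python
import math
import random

def one_point_crossover(a, b, rng):
    """Cross two equal-length bit lists at a random cut; return both offspring."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def majority_mbest(pbests):
    """Bitwise majority vote over all personal best strings (ties -> 0)."""
    m = len(pbests)
    return [1 if sum(p[k] for p in pbests) * 2 > m else 0
            for k in range(len(pbests[0]))]

def bqpso_move(x, pbest_i, gbest, mbest, alpha=0.75, rng=random):
    """One BQPSO move for particle i (all arguments are bit lists)."""
    L = len(x)
    # target point: one of the two crossover offspring of pbest_i and gbest
    child1, child2 = one_point_crossover(pbest_i, gbest, rng)
    p = child1 if rng.random() < 0.5 else child2
    # b = alpha * Hamming(x, mbest) * ln(1/u)
    d_h = sum(xi != mi for xi, mi in zip(x, mbest))
    u = rng.random()
    b = alpha * d_h * math.log(1.0 / u)
    prob = min(1.0, b / L)  # per-bit flip probability
    return [(1 - bit) if rng.random() < prob else bit for bit in p]
```

The key structural point survives any choice of flip rule: the new position is a mutated copy of the crossover-generated target point, with mutation strength driven by the particle's Hamming distance from $mbest$.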

#### 4. A Mutation Condition for Use in Binary Space

The reason why the QPSO algorithm has better global search capability than the traditional PSO algorithm is that it abandons the velocity-displacement model of traditional PSO. In QPSO, the movement of a particle toward its target point follows no determined trajectory; the particle can appear at any position in the whole feasible search space with a certain probability, given by the double exponential distribution [13, 14]. Such a position can be far from the target point and may be superior to the current global best position of the population. This property should also be reflected in the construction of a binary QPSO algorithm.

The probability density function of particle in QPSO isSet , and ; then (9) can be rewritten asThat is, obeys the double exponential distribution, of which the mean and variance are and . The graph of probability density function (10) is Figure 1. Since the domain of is , particle can appear in any position of the search space, but the probability that a particle appears in a position far away from its target point is small. When , the variance which means that converge to with probability 1.