Journal of Optimization

Volume 2017 (2017), Article ID 4685923, 9 pages

https://doi.org/10.1155/2017/4685923

## A Novel Distributed Quantum-Behaved Particle Swarm Optimization

^{1}Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, International Research Center for Intelligent Perception and Computation, Joint International Research Laboratory of Intelligent Perception and Computation, Xidian University, Xi’an, Shaanxi Province 710071, China

^{2}School of Computer and Software, Nanjing University of Information Science and Technology (NUIST), Nanjing 210044, China

Correspondence should be addressed to Yangyang Li

Received 29 December 2016; Accepted 4 April 2017; Published 3 May 2017

Academic Editor: Gexiang Zhang

Copyright © 2017 Yangyang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Quantum-behaved particle swarm optimization (QPSO) is an improved version of particle swarm optimization (PSO) and has shown superior performance on many optimization problems. However, it cannot always cope with today's situations: problems are becoming larger and more complex, and most serial optimization algorithms either cannot solve them or require a great deal of computing cost. Fortunately, MapReduce, an effective model for processing big data problems that need huge computation, has been widely used in many areas. In this paper, we implement QPSO on the MapReduce model and propose MapReduce quantum-behaved particle swarm optimization (MRQPSO), a parallel and distributed QPSO. Comparisons are made between MRQPSO and QPSO on several test problems and nonlinear equation systems. The results show that MRQPSO completes the computing task in less time. Meanwhile, from the view of optimization performance, MRQPSO outperforms QPSO in many cases.

#### 1. Introduction

With the development of information science, more and more data is stored, such as web content and bioinformatics data. For this reason, many basic problems have become increasingly complex, which poses great challenges to current intelligent algorithms. Optimization, one of the most important issues in artificial intelligence, has also become harder and harder to solve in real-world applications.

In the past 30 years, evolutionary algorithms (EAs) have become one of the most effective intelligent optimization methods. To face the new challenge, distributed evolutionary algorithms (dEAs) have blossomed rapidly. The paper [1] provides a comprehensive survey of distributed EAs and summarizes several models: master-slave, island, cellular, hierarchical, pool, coevolution, and multiagent. These models are analyzed with respect to parallelism level, communication cost, scalability, and fault tolerance, and hotspots of dEAs, such as cloud- and MapReduce-based implementations and GPU- and CUDA-based implementations, are listed; however, no results of dEAs on distributed computing devices are reported. Cloud computing can be applied in many areas, and [2–8] realize various specific cloud applications. The paper [9] reviews parallel and distributed genetic algorithms on graphics processing units (GPUs); some works along this line are reported in [10–12]. MapReduce, proposed by Google in 2004 [13], is a new and effective technology for dealing with big data. This model makes it very convenient to parallelize an algorithm: programmers only need to write the map function and the reduce function, and the other details are handled by the model itself. Many practical problems have been solved with the MapReduce model on clusters of servers, such as path problems in large-scale networks [14], seismic signal analysis [15], image segmentation [16], and location recommendation [17]. However, the study of MapReduce-based EAs is still at an initial stage. Although some genetic algorithms [18–23] and a particle swarm optimization realized with MapReduce [24] have been proposed, many kinds of EAs have not yet been implemented on a distributed model, and the parallel potential of these algorithms has not been released. Based on these considerations, in our previous work [25] MapReduce was combined with coevolutionary particle swarm optimization, showing that MapReduce-based CPSO obtains much better performance than CPSO. In another work [26], quantum-behaved particle swarm optimization was successfully transplanted onto MapReduce. This paper is based on and extends that work: the background is introduced and a practical application is added.

Quantum mechanics and trajectory analysis have gained extensive attention from scholars recently and have flourished in many areas, such as image segmentation [27], neural networks [28], and population-based algorithms [29, 30]. In [31], Zhang presents a systematic review of quantum-inspired evolutionary algorithms. Quantum-behaved particle swarm optimization is a variant of PSO proposed by Sun et al. in 2004 [32]. Inspired by the movement of a particle in quantum space, a new reproduction operator for solutions is proposed in this algorithm. Because a particle can arrive at any location in quantum space with a certain probability, a new solution at any location in the feasible space can also be generated with a certain probability in QPSO. This mechanism helps particles avoid falling into local optima. More detailed analysis has been reported in [33]. Unfortunately, when the algorithm faces large-scale and complex problems, the increasing computational cost becomes its bottleneck, and without enough computing resources premature convergence cannot be avoided, which urges the original algorithm to be parallelized.

To follow this trend and enhance the capabilities of a standard QPSO, the MapReduce quantum-behaved particle swarm optimization is developed. MRQPSO transplants QPSO onto the MapReduce model and makes QPSO parallel and distributed by partitioning the search space. Comparisons between MRQPSO and standard QPSO show that the proposed MRQPSO decreases the time needed for the same number of function evaluations. Moreover, on some test problems MRQPSO improves the solution quality and is more robust than QPSO.

The rest of this paper is organized as follows. Section 2 introduces the PSO and QPSO. Section 3 gives a brief presentation of MapReduce model. Section 4 describes the details of QPSO implementing on MapReduce. In Section 5, we show and analyze results of experiments, including the comparison with QPSO. Finally Section 6 concludes the work in this paper.

#### 2. PSO and QPSO

##### 2.1. The Particle Swarm Optimization Algorithm

Inspired by bird and fish flocks, Kennedy and Eberhart proposed the PSO algorithm in 1995 [34]. This algorithm is a population-based intelligent search algorithm. To find food as quickly as possible, the birds in a flock first trace the companions that are nearest to the food; then they determine the accurate area of the food. An individual of PSO searches for the optimum like a bird in a flock. Each particle has a velocity and a position, and these two quantities are updated according to the particle's best value and the global best value. The velocity and position of particle $i$ at the $d$th dimension are presented by $v_{id}$ and $x_{id}$, respectively. The updating equations can be described as

$$\begin{aligned} v_{id}^{t+1} &= w v_{id}^{t} + c_{1} r_{1} \left( p_{id}^{t} - x_{id}^{t} \right) + c_{2} r_{2} \left( p_{gd}^{t} - x_{id}^{t} \right), \\ x_{id}^{t+1} &= x_{id}^{t} + v_{id}^{t+1}, \end{aligned} \tag{1}$$

where $v_{id}$ and $x_{id}$ are the velocity and position, and $t$ represents the $t$th iteration. $p_{id}$ and $p_{gd}$ are the personal best value and global best value of the particles, respectively. $r_{1}$ and $r_{2}$ are random numbers uniformly distributed in $[0, 1]$. $w$, $c_{1}$, and $c_{2}$ are the three parameters of the algorithm: $w$ is the inertia weight, proposed by Shi and Eberhart in 1998 [35] to control the balance of local and global search, and $c_{1}$ and $c_{2}$ are the acceleration coefficients or learning factors. Usually, $c_{1} = c_{2} = 2$.
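As a concrete illustration, the update in (1) can be sketched as below. This is a minimal sketch, not the paper's implementation; the parameter defaults ($w = 0.7$, $c_1 = c_2 = 2$) are chosen only for illustration.

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One PSO velocity/position update for a single particle, per Eq. (1)."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)  # position follows the new velocity
    return new_x, new_v
```

Note that when a particle sits exactly at both its personal best and the global best with zero velocity, the update leaves it in place, which is one way to see the stagnation risk discussed below.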

From the above equations, it can be seen that few parameters are used in PSO, which makes PSO easy to control and use. Meanwhile, it has good convergence performance and quick convergence speed. These advantages have earned the PSO algorithm a lot of research attention. However, PSO is not a global optimization algorithm [36]: the limited velocity constrains the search to a limited area, so PSO cannot always find the global optimum. In other words, premature convergence is the most serious drawback of PSO.

##### 2.2. The Quantum-Behaved Particle Swarm Optimization Algorithm

To overcome the shortcoming of the original PSO algorithm, Sun et al. proposed quantum-behaved particle swarm optimization (QPSO) in 2004 [32]. This algorithm shows superior performance compared with PSO. QPSO transfers the search from classical space to quantum space, where particles can appear at any position, which implements a full search of the solution space.

According to the uncertainty principle, the velocity and position of a particle cannot be determined simultaneously. In quantum space, a probability function of the position where a particle appears can be obtained from the Schrödinger equation, and the true position of a particle can be measured by the Monte Carlo method. Based on these ideas, in QPSO a local attractor is constructed for each particle from the particle best solution and the global best solution as in (2):

$$p_{id} = \varphi P_{id} + \left( 1 - \varphi \right) P_{gd}, \tag{2}$$

where $p_{id}$ is the local attractor of particle $i$ at the $d$th dimension, $\varphi$ is a random number uniformly distributed in $[0, 1]$, $P_{id}$ is the particle best solution, and $P_{gd}$ is the current global best solution.

The position of the particle is updated by

$$X_{id}^{t+1} = p_{id}^{t} \pm \alpha \left| mbest_{d}^{t} - X_{id}^{t} \right| \ln \frac{1}{u}, \tag{3}$$

where $\alpha$ is the only parameter in the algorithm, called the creativity coefficient, which is a positive real number used to adjust the balance of local and global search. The definition of $\alpha$ refers to (4):

$$\alpha = 0.5 + 0.5 \times \frac{T - t}{T}, \tag{4}$$

where $T$ is the maximum number of iterations. $u$ is a random number uniformly distributed in $(0, 1)$, and $mbest$ is the mean position, defined as follows:

$$mbest_{d}^{t} = \frac{1}{M} \sum_{i=1}^{M} P_{id}^{t}, \tag{5}$$

where $M$ is the size of the population and $P_{i}$ is the personal best position (global extremum) of particle $i$.

In QPSO, the first step is to initialize the population randomly, which includes the position of each particle, the particle best values, and the global best value. Next, the mean position of the $d$th dimension is calculated according to (5). Then each particle is evaluated again, and the particle best and global best solutions are updated according to the fitness values. After that, the particles are updated as in (2) and (3). When the number of iterations or the accuracy requirement is satisfied, the algorithm stops and outputs the optimum.
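The per-generation update described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the symmetric $\pm$ branch of (3) is chosen with probability 0.5, and $\alpha$ is passed in directly rather than scheduled by (4).

```python
import math
import random

def qpso_step(positions, pbests, gbest, alpha):
    """One QPSO generation: mean best (5), local attractor (2), position update (3)."""
    n, dim = len(positions), len(positions[0])
    # mbest: mean of all personal best positions, per dimension (Eq. (5))
    mbest = [sum(p[d] for p in pbests) / n for d in range(dim)]
    new_positions = []
    for i in range(n):
        new_x = []
        for d in range(dim):
            phi = random.random()
            # local attractor between personal best and global best (Eq. (2))
            attractor = phi * pbests[i][d] + (1 - phi) * gbest[d]
            u = 1.0 - random.random()  # uniform in (0, 1], avoids log(1/0)
            step = alpha * abs(mbest[d] - positions[i][d]) * math.log(1 / u)
            # Eq. (3): take + or - with equal probability
            new_x.append(attractor + step if random.random() < 0.5 else attractor - step)
        new_positions.append(new_x)
    return new_positions
```

Because the step length involves $\ln(1/u)$, a particle can occasionally land far from its attractor, which is the mechanism that lets QPSO escape local optima.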

Although the QPSO algorithm is superior to PSO, it still has some disadvantages. Because the particles in QPSO fly discretely, the narrow area where the optimum lies may be missed. And when the required computation is large, QPSO may take too much time.

#### 3. MapReduce

MapReduce [13] is a programming model proposed by Dean and Ghemawat. Inspired by the *map* and *reduce* primitives present in Lisp and many other functional languages, this model was created for processing large-scale data in parallel. The MapReduce infrastructure provides detailed implementations of communication, load balancing, fault tolerance, resource allocation, file distribution, and so forth [1]. Programmers do not need much knowledge of or experience with parallel and distributed programming; they only need to pay attention to the *map* and *reduce* functions of which the model consists, and can then parallelize an algorithm easily.

In this model, the computation takes a set of key/value pairs. The *map* function processes the *input* key/value pairs and emits new lists of key/value pairs, called *intermediate* key/value pairs. These two lists may come from different domains. The *map* function is called independently on each input pair, and the parallelization is implemented in this way. After all *map* calls are completed, the *reduce* function is called: the *intermediate* key/value pairs are grouped by key and passed to it. The reduce phase merges and integrates these intermediate key/value pairs and finally outputs the *output* key/value pairs. The types of the *intermediate* and *output* values must be the same. The types of the *map* and *reduce* functions can be written as follows:

map: (k1, v1) → list(k2, v2)

reduce: (k2, list(v2)) → list(v2)
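An in-process sketch of this contract is shown below, using the classic word-count example. This is a toy single-machine illustration of the map/group/reduce flow, not Hadoop code; all function names are ours.

```python
from collections import defaultdict

def map_fn(_key, line):
    """map: (k1, v1) -> list(k2, v2); emit (word, 1) for every word in a line."""
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    """reduce: (k2, list(v2)) -> list(v2); sum the counts for one word."""
    return [sum(counts)]

def run_mapreduce(inputs, map_fn, reduce_fn):
    groups = defaultdict(list)
    for k, v in inputs:
        for k2, v2 in map_fn(k, v):      # map phase (parallel on a real cluster)
            groups[k2].append(v2)        # shuffle: group intermediate pairs by key
    return {k: reduce_fn(k, vs) for k, vs in groups.items()}  # reduce phase
```

The framework, not the programmer, performs the grouping between the two phases; this is the detail MapReduce hides, along with distribution and fault tolerance.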

Because Google has not released its system to the public, Hadoop, developed within the Apache Lucene project, is generally used. This Java-based open-source platform is a clone of the MapReduce infrastructure, and we can use it to design and implement our distributed computation.

#### 4. The MRQPSO Algorithm

The particle swarm optimization algorithm [34] is one of the popular evolutionary algorithms. It has attracted much attention because of its merits of simple concept, rapid convergence, and good solution quality. However, the algorithm suffers from some weaknesses, such as premature convergence. Focusing on this shortcoming of the original PSO, Sun et al. proposed an uncertain and globally random algorithm named quantum-behaved particle swarm optimization (QPSO) in 2004 [32]. The new algorithm places the search in quantum space to let a particle move to any location with a certain probability. Through this strategy, premature convergence can be alleviated to a certain degree.

Although QPSO makes satisfying progress on premature convergence, it is not prepared for the challenge of problems with complex landscapes or problems needing huge computation. Because the particles of QPSO fly discretely, they may miss the narrow area where the global optimum lies. And as the problem gets more complex, the computational cost increases. So we make QPSO parallel and distributed by transplanting the algorithm onto the MapReduce model, and we name this algorithm MRQPSO. The framework of MRQPSO is described in Algorithm 1, and the flowchart is shown in Figure 1.
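The partitioning idea can be sketched as follows. This is a hypothetical single-machine illustration of the general structure (sub-swarms evolved by map tasks, local bests merged in the reduce step), not the contents of Algorithm 1; the sphere function, the trivial "evolution" (a plain local-best selection), and all names are ours.

```python
import random

def sphere(x):
    """Sphere test function: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def map_task(sub_swarm, fitness):
    """Map phase: evolve one sub-swarm (QPSO iterations in the real algorithm)
    and emit its local best as an intermediate key/value pair."""
    local_best = min(sub_swarm, key=fitness)
    return ("best", (fitness(local_best), local_best))

def reduce_task(key, values):
    """Reduce phase: merge the local bests into a global best."""
    return min(values)[1]

swarm = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(8)]
partitions = [swarm[i::4] for i in range(4)]              # 4 map tasks
intermediate = [map_task(p, sphere) for p in partitions]  # run in parallel on a cluster
gbest = reduce_task("best", [value for _, value in intermediate])
```

In the distributed setting each map task runs on its own node, so the function evaluations of the sub-swarms proceed concurrently, which is the source of the wall-clock savings reported in Section 5.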