Computational Intelligence and Neuroscience

Volume 2018, Article ID 8430175, 12 pages

https://doi.org/10.1155/2018/8430175

## Multiscale Quantum Harmonic Oscillator Algorithm for Multimodal Optimization

^{1}School of Computer Science and Technology, Southwest University for Nationality, Chengdu, China

^{2}Chengdu Institution of Computer Application, China Academy of Science, Chengdu, China

^{3}University of Chinese Academy of Sciences, Beijing, China

^{4}School of Computer Science and Technology, Huaiyin Normal University, Huaian, China

^{5}School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China

^{6}Jiangsu Key Laboratory of Media Design and Software Technology, Jiangnan University, Wuxi, China

Correspondence should be addressed to Yan Huang; hep128@qq.com

Received 2 February 2018; Accepted 3 April 2018; Published 13 May 2018

Academic Editor: Massimo Panella

Copyright © 2018 Peng Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper presents a variant of the multiscale quantum harmonic oscillator algorithm for multimodal optimization, named MQHOA-MMO. MQHOA-MMO has only two main iterative processes: the quantum harmonic oscillator process and the multiscale process. In both iterations, MQHOA-MMO does only one thing: sampling according to the wave function at different scales. A set of benchmark test functions, including some challenging ones, is used to test the performance of MQHOA-MMO. Experimental results demonstrate the good performance of MQHOA-MMO in solving multimodal function optimization problems. For the 12 test functions, all of the global peaks can be found without the search being trapped in a local optimum, and MQHOA-MMO converges within 10 iterations.

#### 1. Introduction

Many real-world optimization problems are multimodal, such as classification problems in machine learning [1] and inversion of teleseismic waves [2]. Multimodal optimization problems often contain several high-quality global or local solutions, all of which have to be identified so that the most appropriate one can be chosen. Global optimization of a continuous multimodal function aims at finding its several global minima, or the most appropriate solution, without being trapped in a local optimum. When facing complex multimodal optimization problems, traditional optimization methods such as gradient descent, the quasi-Newton method, and the Nelder–Mead simplex method, which exploit local information effectively, are easily trapped in a local optimum. If a point-by-point classical optimization approach is used for this task, it must be applied several times, each time in the hope of finding a different optimal solution. There are two main reasons to find as many such optima as possible. First, an optimal solution that is currently favorable may not remain so in the future. With the knowledge of another optimal solution to the problem, users can simply switch to it when such a predicament occurs. Second, the sheer knowledge of multiple optimal solutions in the search space may provide useful insight into the properties of optimal solutions of the problem. Evolutionary algorithms (EAs) and particle swarm optimization (PSO) are commonly used to tackle multimodal optimization problems.

Due to their population-based approach, EAs have a natural advantage over classical optimization techniques. EAs maintain a population of candidate solutions, which are processed in every generation. If several distinct solutions can be preserved over all these generations, we obtain multiple good solutions rather than only the single best one. In recent years, there have been several attempts to improve EAs so that they can deal with multimodal fitness landscapes. Niching methods are widely used in genetic algorithms (GA), differential evolution (DE), and other evolutionary algorithms for multimodal optimization [1, 3–16].

Similar to EAs, PSO is an iterative, population-based optimization technique. The principle of PSO is that each particle has learning ability: it can learn from itself (pbest) and from its best neighbor (gbest). According to the learning approach of the particles, PSO can be divided into two models, the global model and the local model. In the local PSO model, each particle learns from the best particle in its neighborhood, while in the global model every particle learns from the best particle in the whole population. To ensure that different particles in the population converge to different optima in the solution space, the choice of neighborhood topology is crucial. This property has led to the application of PSO to multimodal optimization problems in recent years [17, 18]. Owing to its ease of implementation and robust adaptability, PSO converges quickly, but once it gets stuck in a local optimum it is very difficult for it to escape. To overcome this problem, quantum theory has been introduced into the PSO framework. Quantum-behaved Particle Swarm Optimization (QPSO) is the quantum model of PSO, in which individual particles have quantum behavior [19, 20]. Instead of position and velocity, a wavefunction [21, 22] is used to depict the state of a particle in QPSO [23]. Though QPSO performs better in global optimization than standard PSO, it also suffers from premature convergence.
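The difference between the two PSO models can be made concrete with a short sketch. The following is a minimal 1-D illustration of one update step under a ring neighborhood (local model); the names `ring_best`, `W`, `C1`, and `C2` are illustrative and not taken from the paper:

```python
# Sketch of one update step in local-model ("lbest") PSO on a ring topology.
# The global model would replace ring_best with a scan of the whole population.
import random

W, C1, C2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed values)

def ring_best(pbest, fitness, i):
    """Index of the best personal-best among particle i and its two ring
    neighbors (local model)."""
    n = len(pbest)
    neighbors = [(i - 1) % n, i, (i + 1) % n]
    return min(neighbors, key=lambda j: fitness(pbest[j]))

def update(x, v, pbest, fitness):
    """One synchronous velocity/position update of all particles (1-D)."""
    new_x, new_v = [], []
    for i in range(len(x)):
        lbest = pbest[ring_best(pbest, fitness, i)]
        r1, r2 = random.random(), random.random()
        vi = W * v[i] + C1 * r1 * (pbest[i] - x[i]) + C2 * r2 * (lbest - x[i])
        new_v.append(vi)
        new_x.append(x[i] + vi)
    return new_x, new_v
```

Because each particle is only attracted toward the best solution in its own neighborhood, distinct parts of the ring can settle on distinct optima, which is exactly the property exploited for multimodal optimization.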

A novel optimization algorithm named the multiscale quantum harmonic oscillator algorithm (MQHOA) was proposed in 2013 [24]. The population parameter and sampling parameter are studied in [24], and the uncertainty principle, zero-point energy, and quantum tunneling effect of MQHOA are studied in [25]. MQHOA was inspired by the wavefunction of the quantum harmonic oscillator. It transforms an optimization problem into finding the low-energy state of a potential well: the second-order Taylor approximation of a complex objective function is a harmonic oscillator potential. According to quantum theory, the wavefunction of the quantum harmonic oscillator represents the distribution of optimal solutions. Different spring coefficients of the quantum harmonic oscillator correspond to different search scales, with the spring coefficient varying inversely with the search scale.

MQHOA’s structure is elegant and pithy. It includes only two iteration processes: the quantum harmonic oscillator process (QHO process) and the multiscale process (M process). The goal of the optimization problem is to search for the lowest-energy position, i.e., the global minimum position. The QHO process simulates the quantum harmonic oscillator annealing from a high energy level down to the ground state. In the M process, MQHOA decreases the scale by successive factors of 1/2, which yields an increasing series of spring coefficients. At a fixed scale, in the QHO process, MQHOA defines a new wavefunction to obtain sufficient sampling points in the globally optimal area. The new wavefunction is defined as the summation of Gaussian probability-density functions: at scale $\sigma_s$, MQHOA’s wavefunction is the sum of $k$ Gaussian probability-density functions centered at the $k$ current optimal positions $x_i$, and it depicts the probability distribution of optimal solutions over the domain. The equation can be written as

$$|\psi(x)|^2 = \frac{1}{k}\sum_{i=1}^{k}\frac{1}{\sqrt{2\pi}\,\sigma_s}\exp\left(-\frac{(x-x_i)^2}{2\sigma_s^2}\right).$$
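One sampling pass of such a mixture-of-Gaussians wavefunction at a fixed scale can be sketched as follows. This is a simplified 1-D illustration; the names `sample_round` and `sigma_s` are assumptions for the sketch, not identifiers from the paper:

```python
# One sampling round at a fixed scale: each current optimal position is the
# center of a Gaussian with standard deviation sigma_s, n candidate points
# are drawn around it, and each swarm keeps its best point.
import random

def sample_round(centers, sigma_s, n, f):
    """Draw n points from N(c, sigma_s^2) for every center c and keep,
    per center, the best sampled point (including c itself)."""
    new_centers = []
    for c in centers:
        points = [random.gauss(c, sigma_s) for _ in range(n)] + [c]
        new_centers.append(min(points, key=f))  # best of this swarm's samples
    return new_centers
```

Repeating such rounds until the centers stabilize corresponds to the annealing of the QHO process at one scale; halving `sigma_s` between stages corresponds to the M process.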

The experimental results of 15 typical two-dimensional test functions show that MQHOA performs well in finding global optima [24].

In this paper, we present a variant of MQHOA for multimodal optimization, named MQHOA-MMO. Similar to the local version of PSO, in the proposed MQHOA-MMO, at each scale every sampling point only needs to be compared with the sampling points drawn from the same Gaussian distribution.

This paper is organized as follows. Section 2 describes the framework of MQHOA-MMO. Test functions and comparison algorithms are presented in Section 3. The results of the experiments are discussed in Section 4. Finally, Section 5 concludes the paper.

#### 2. The Framework of MQHOA-MMO

This section presents the framework of MQHOA-MMO. We define the symbols as follows:

(i) $k$ is the number of swarms and of Gaussian distributions.

(ii) $n$ is the number of sampling points of each Gaussian distribution.

(iii) $\sigma_{\min}$ is the accuracy of optimization.

(iv) $\sigma_k$ is the standard deviation of all $x_i$.

(v) $\sigma_k'$ is the standard deviation of all new $x_i'$.

(vi) $\Delta\sigma$ is the absolute value of the difference between $\sigma_k$ and $\sigma_k'$.

(vii) $\sigma_s$ is the current scale of the iteration; its initial value is defined as the domain length.

(viii) $X=\{x_1,\dots,x_k\}$ is the swarm of $k$ particles, where $x_i$ indicates particle $i$. $X$ is randomly generated in the domain. For every $x_i$, $n$ sampling points are generated following the probability distribution $N(x_i,\sigma_s^2)$, so $k \times n$ sampling positions are needed in every iteration. The $k$ optimal positions are stored in $X$.

(ix) $x_i'$ is the optimal position selected from the $n$ sampling positions of swarm $i$.

(x) $x^*$ is the optimal position the algorithm has found.

MQHOA-MMO includes just two nested iteration processes: the QHO process and the M process, with the QHO process nested inside the M process. The convergence conditions of the QHO process and the M process are $\Delta\sigma < \sigma_s$ and $\sigma_s < \sigma_{\min}$, respectively. The framework of MQHOA-MMO is described in Algorithm 1.
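The two nested processes can be sketched in Python as a simplified 1-D illustration. This is a sketch under assumed parameter names (`k`, `n`, `sigma_min`) and an added bound on the inner rounds as a termination safeguard; Algorithm 1 in the paper remains the authoritative description:

```python
# Simplified 1-D sketch of the MQHOA-MMO framework: the M process (outer
# loop) halves the scale until it drops below the target accuracy, and the
# QHO process (inner loop) resamples each Gaussian swarm at a fixed scale
# until the swarm statistics stabilize.
import random
import statistics

def mqhoa_mmo(f, lo, hi, k=10, n=20, sigma_min=1e-3):
    sigma_s = hi - lo                                 # initial scale = domain length
    xs = [random.uniform(lo, hi) for _ in range(k)]   # k random swarm centers
    while sigma_s > sigma_min:                        # M process
        sd = statistics.pstdev(xs)
        for _ in range(50):                           # bounded QHO rounds (safeguard)
            new_xs = []
            for c in xs:                              # sample each swarm's Gaussian
                pts = [random.gauss(c, sigma_s) for _ in range(n)] + [c]
                new_xs.append(min(pts, key=f))        # keep each swarm's best point
            xs = new_xs
            new_sd = statistics.pstdev(xs)
            if abs(new_sd - sd) < sigma_s:            # QHO convergence: delta_sigma < sigma_s
                break
            sd = new_sd
        sigma_s /= 2.0                                # scale decreases by a factor of 1/2
    return xs                                         # k candidate optima
```

Because every swarm only compares points drawn from its own Gaussian, different swarms can settle on different peaks, which is what makes the variant suitable for multimodal problems.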