Research Article | Open Access

Feng Qian, Mohammad Reza Mahmoudi, Hamïd Parvïn, Kim-Hung Pho, Bui Anh Tuan, "An Adaptive Particle Swarm Optimization Algorithm for Unconstrained Optimization", Complexity, vol. 2020, Article ID 2010545, 18 pages, 2020. https://doi.org/10.1155/2020/2010545

An Adaptive Particle Swarm Optimization Algorithm for Unconstrained Optimization

Academic Editor: Carlos Aguilar-Ibanez
Received: 08 Dec 2019
Revised: 15 Mar 2020
Accepted: 28 Apr 2020
Published: 09 Sep 2020

Abstract

Conventional optimization methods are not efficient enough to solve many naturally complicated optimization problems. Thus, metaheuristic algorithms inspired by nature can be utilized as a new class of problem solvers for these types of optimization problems. In this paper, an optimization algorithm is proposed that is capable of estimating the expected quality of different locations and of tuning its exploration-exploitation trade-off according to the location of each individual. A novel particle swarm optimization algorithm is presented that implements conditioning learning behavior, so that the particles are led to perform a natural conditioning response to an unconditioned motive. In the problem space, particles are classified into several categories: a particle lying in a low-diversity category tends to move towards its best personal experience, whereas a particle in a high-diversity category tends to move towards the global optimum of that category. The idea of birds' sensitivity to their flying space is also utilized: particles speed up in undesirable spaces in order to leave them as soon as possible, while in desirable spaces their velocity is reduced so that they have more time to explore the environment. In the proposed algorithm, the birds' instinctive behavior is implemented by constructing the initial population randomly or chaotically. Experiments comparing the proposed algorithm with state-of-the-art methods show that it is among the most efficient and appropriate approaches for solving static optimization problems.

1. Introduction

Optimization is a subfield of artificial intelligence [1–10] in which a given problem is transformed into an optimization function with a random initial solution that is improved through subsequent designed steps [1, 2]. Since there is more than one solution to a problem, the best response to an arbitrary problem can be obtained using mathematical global optimization tools [11]. In local optimization, determining the best answer requires taking several factors into account, including the solution itself, the amount of permissible error, and the nature of the problem [12–14], for example, finding the best work of art, the most beautiful landscape, or the most pleasant piece of music. Some popular stochastic algorithms, such as naturally motivated and population-based ones, emulate problem-solving strategies found in nature, including the race of creatures whose goal of survival drives them to evolve into several shapes. Swarm intelligence (SI) is an artificial intelligence technique based on collective behavior in decentralized, self-organized systems, which usually consist of a number of simple agents interacting locally with each other and with their environment. Although there is no centralized control over the agents' behavior in these algorithms, a collective behavior often emerges from the agents' local interactions.

Pavlov, a Russian physiologist, observed that dogs salivated when they saw someone who had previously fed them, even if that person carried no food the next time. Based on this observation, he proposed conditioning learning, in which an animal learns to associate a reward or punishment with a natural motive. In this paper, this theory is used to move swarms with high spatial distribution diversity towards the global optimum and swarms with a low level of diversity towards local optima in the problem space. In addition, some other ideas related to the instinctive behavior of birds and their speed are incorporated to turn the proposed algorithm into an improved version of the PSO algorithm.

There are some defects in the original PSO algorithm, including the following: first, the algorithm is weak in creating the random initial population; second, the quality of regions of the problem space is not considered, so particle speed is not adjusted to that quality; third, PSO wastes time reaching an optimal solution by choosing a point between the local and global optima. These drawbacks are all resolved in this work using the instinctive conditioning behavior of birds. The rest of this paper is organized as follows. Section 2 outlines a summary of related previous work. The proposed algorithm is presented in Section 3. Simulation results and their consequences are provided in Section 4, and Section 5 deals with conclusions and future work.

2. Literature

Naturally motivated metaheuristic optimization algorithms fall into two groups. The first comprises single-solution-based algorithms, which gradually improve a single random solution. The second group, which is more popular in the literature and is the focus of this work, comprises multisolution-based algorithms, which offer multiple solutions that are gradually enhanced as a whole.

One of the best-known evolutionary algorithms is the genetic algorithm (GA), proposed by Holland [15] in 1975: a random evolutionary optimization algorithm inspired by the evolution of creatures in nature. Its extraordinary optimization performance makes it a general-purpose optimizer. The algorithm starts from an initial population of chromosomes, or potential solutions, each usually encoded as a binary vector. A relevant fitness function then measures the merit of each chromosome, and a new solution set [16] is produced by choosing the best chromosomes and entering them into a mating stage, where they are crossed over and mutated.
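The select-crossover-mutate cycle described above can be sketched as follows. This is a minimal binary GA on the OneMax toy problem (maximize the number of ones); the function name and all parameter values are illustrative, not taken from the paper.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal binary GA: tournament selection -> one-point crossover ->
    bit-flip mutation, maximizing `fitness` over bit vectors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Tournament of size 2: keep the fitter of two random chromosomes.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select()[:], select()[:]
            if rng.random() < crossover_rate:        # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):                   # bit-flip mutation
                children.append([b ^ 1 if rng.random() < mutation_rate else b
                                 for b in child])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)        # simple elitism
    return best

# OneMax: the fitness of a chromosome is simply the number of ones.
solution = genetic_algorithm(sum)
```

With these (illustrative) settings, the elitist loop reliably drives the population close to the all-ones optimum.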

As in other related articles, the optimization problems of this work can be expressed as cost-function minimizations. Using the instinctive and collective behavior of flying birds, a naturally motivated optimization algorithm is provided whose mechanism is similar to that of the particle swarm optimization algorithm. The conditioning learning behavior of animals, which is the basis of this article and is a type of consistent learning, is expected to yield a mechanism able to solve complex problems. The proposed algorithm offers some useful advantages. One advantage of the proposed PSO version is that it utilizes a beneficial collective behavior inspired by nature; another is that, in comparison with other optimization algorithms, many of its specific parameters can be set automatically without significant loss of performance. A drawback, however, is that a problem ill-suited to the original PSO version is very likely also ill-suited to the proposed optimization algorithm.

The social behavior of birds searching for food motivated the proposal of a population-based general-purpose optimization algorithm in 1995 [17], namely, PSO, a computation-oriented random intelligence optimization algorithm. Many advantages, such as computational efficiency, its search mechanism, its simple concept, and ease of implementation, have established SI-based algorithms as helpful tools in optimization applications. Each particle in PSO represents a population member with a small, low-volume random mass, which induces better behavior in terms of speed and acceleration. A particle is thus a solution in a multidimensional space that adjusts its position in the search space based on the best position it has reached by itself (pbest), the best position reached by the swarm (gbest) during the search process, and its own speed. SI-based methods have been used in applications of the PSO algorithm [18–20] to calculate the trajectory in a binary search space to solve the knapsack problem (KP). Some problems have also been solved by combining PSO with guided local search and concepts derived from the GA [21]. A binary PSO algorithm using the mutation operator has been proposed to solve the KP.
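The canonical pbest/gbest mechanism described above can be sketched as follows. This is the standard PSO velocity and position update, not the paper's adaptive variant; the function name and parameter values (inertia w, coefficients c1 and c2) are conventional illustrations.

```python
import random

def pso(f, dim=2, low=-5.0, high=5.0, swarm=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal PSO minimizing f: each particle tracks its personal best
    (pbest) and the swarm tracks a global best (gbest)."""
    rng = random.Random(seed)
    x = [[rng.uniform(low, high) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    pbest = [xi[:] for xi in x]
    pcost = [f(xi) for xi in x]
    g = min(range(swarm), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Canonical update: inertia + cognitive pull + social pull.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], low), high)
            cost = f(x[i])
            if cost < pcost[i]:                  # update personal best
                pbest[i], pcost[i] = x[i][:], cost
                if cost < gcost:                 # update global best
                    gbest, gcost = x[i][:], cost
    return gbest, gcost

# Minimize the 2-D sphere function; the swarm converges near the origin.
best, cost = pso(lambda p: sum(t * t for t in p))
```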

Optimization algorithms motivated by the behavior of bees have produced SI algorithms different from PSO; the best known is the artificial bee colony (ABC) optimization algorithm, a prevalent optimizer developed from the SI foraging behavior of bees [22]. The artificial colony in the ABC algorithm contains three types of bees: workers, onlookers, and scouts. Onlooker bees, waiting in a dance area, create and choose a food source; worker bees visit food sources that have already been recognized; and scout bees make a random search for new food sources. The location of a food source represents a possible solution to the optimization problem, and the quality of the solution depends on the quantity of nectar at the food source. A virtual swarm of bees moves randomly in the two-dimensional search space, and when some target solution is found, the bees interact with each other. Several other methods derive from the original ABC method: a variant for the job-shop scheduling problem (JSSP) [23], in which the behavior of the onlooker bees is modified and SI is used to convert continuous values into binary ones, and the combinatorial ABC (CABC) algorithm [24], with discrete coding for the traveling salesman problem (TSP).

Another algorithm, namely, the bees algorithm (BA), introduced on a mathematical function [25], operates on a D-dimensional bee vector corresponding to the problem variables; this vector, named a candidate solution here, represents a visit to a site (food source) possessing a certain fitness value. In BA, scout bees randomly searching for new sites and worker bees repeatedly searching the neighborhoods of sites for higher fitness values balance the exploration and exploitation of the algorithm. The best bees are those with the best fitness values, and elite sites are the sites they visit. The algorithm assigns the majority of the bees to search the neighborhoods of the best selected sites. Adjoined optimization functions, task scheduling [26], and binary data clustering [27] are examples of BA applications.

The harmony search algorithm (HSA) [28] is a metaheuristic that mimics the search for suitable harmonies during the improvisation of jazz music in the natural process of musical performance. Improvisation searches for harmony in a piece of jazz music (a proper condition) according to an aesthetic standard, which is equivalent to an optimization process in which a global solution (a proper condition) is sought with respect to a specified objective function.

Motivated by the behavior of a set of imperialists competing with each other to take over colonies, an evolutionary optimization technique called the imperialist competitive algorithm (ICA) has been introduced. This algorithm also begins with an initial population whose members are classified into two groups: colonies and imperialists. The power of the colonies assigns them to their related imperialists, and the power of any state is inversely related to its cost, so that imperialists with more power are more dominant [29]. The interaction between the imperialist powers and their colonies has a characteristic effect: the culture of the colonies changes over time, gradually becoming similar to that of their ruling imperialist. This is named the attraction policy, meaning that the colonies move towards their imperialists; it was executed by imperialist countries after the nineteenth century. In recent years, many other algorithms aimed at improving well-known optimizers have been presented, such as the particle bee optimization algorithm (PBOA) [30], novel particle swarm optimization algorithm (NPSOA) [31], cuckoo search optimization algorithm (CSOA or CS) [32], differential search optimization algorithm (DSOA or DSA) [33], and bird mating optimization algorithm (BMOA or BMO) [34]. Many algorithms improving PSO [35] use a dynamic multipopulation method, among them the sinusoidal differential evolution optimization algorithm (SDEOA) [36], joint operation optimization algorithm (JOOA) [37], and dynamic multiswarm particle swarm optimizer with cooperative algorithm (DMSPSOCA) [35].

In order to make different algorithms more efficient, parameters and ideas from other methods can be incorporated. For instance, the CSOA has been enhanced by employing chaos parameters [38]. In 2015, the PSO algorithm was enhanced with cognitive learning mechanisms for solving optimization problems [39]. In 2016, the ant colony optimization algorithm (ACOA) was integrated with GA to form a new optimization algorithm [40]. One study used GA to discover the solution closest to the best one for nonlinear multimodal optimization problems [41]. There are also other methods among the latest comparable optimizers [31, 42–45].

3. Adaptive PSO

The proposed method is presented in this section. The population is initialized first and then randomly partitioned into a set of subpopulations. The problem space is also divided into several virtual subspaces, each a hypercube: each of the D dimensions is partitioned into θ equal-size slices, so there are θ^D subspaces in total. Particles are then moved with lower speed within the more valuable subspaces. In addition, each subpopulation uses a special set of the movement coefficients (C1 and C2), and this set adaptively changes during optimization. Finally, the best solution produced during optimization is taken as the optimal solution to the problem found by the proposed method, which is called the adaptive particle swarm optimization algorithm (APSOA).
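The subspace partitioning described above can be sketched as follows, using the symbols from Algorithm 1 (D: problem size, θ: fragmentation size, MinV/MaxV: bound vectors). The function name and the mixed-radix indexing scheme are our illustration, not the paper's.

```python
def subspace_index(x, min_v, max_v, theta):
    """Map a position x to the index of its hypercube subspace.

    Each dimension d is split into `theta` equal slices over
    [min_v[d], max_v[d]], so the D-dimensional space has theta**D
    subspaces, indexed 0 .. theta**D - 1 in base-theta order.
    """
    index = 0
    for xd, lo, hi in zip(x, min_v, max_v):
        # Which of the theta slices does coordinate xd fall into?
        slice_d = int((xd - lo) / (hi - lo) * theta)
        slice_d = min(max(slice_d, 0), theta - 1)   # clamp boundary values
        index = index * theta + slice_d
    return index

# A 2-D space split 4 x 4 gives 16 subspaces; the lower-left corner is 0.
subspace_index([-4.9, -4.9], [-5, -5], [5, 5], 4)  # -> 0
```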

The pseudocode of the APSOA is depicted in Algorithm 1. The variables PS, N, α, D, θ, MG, MaxV, MinV, C1, and C2 and the objective function F are the inputs of this algorithm. Here, MaxV and MinV are the upper-bound and lower-bound vectors of the problem space, D is the number of dimensions, PS is the population size, and N is the number of subpopulations. First, the algorithm calls the initialization function, whose pseudocode is depicted in Algorithm 2. The initialization function takes the variables PS, D, θ, MaxV, and MinV and the function F, and it returns a population, the best particle, and a sparse subspace rating array. The rating array is keyed by strings: called with a numeric string as its key, it returns the rating of the corresponding subspace, where the key is an integer between 1 and the number of subspaces. As the array is sparse, a subspace that has not yet been assigned a value is assumed to have the default value 1, so the array's sum equals the number of subspaces at first; called with "sum" as its key, the array returns the summation of the values associated with all subspaces. Each time the best particle of a subpopulation is located in a given subspace, the rating of that subspace is increased by one unit; likewise, each time the best particle of the whole population is located in a given subspace, its rating is increased by one unit. The population consists of PS individuals, and each individual is an object containing the following fields:
(i) position: the position of the individual
(ii) velocity: the velocity of the individual
(iii) best position: the position of the best place met by the individual
(iv) subspace: the index of the subspace in which the individual is located
(v) fitness: the fitness of the individual
(vi) best fitness: the fitness of the best memory of the individual

: adaptive particle swarm optimization function
 Input:
  PS: population size
  N: number of subpopulations
  α: coefficient update rate
  D: problem size
  θ: fragmentation size
  MG: maximum generations of the algorithm
  MaxV: an array of D values; MaxVd is the maximum value in the domain of the dth dimension of the problem space
  MinV: an array of D values; MinVd is the minimum value in the domain of the dth dimension of the problem space
  C1: an array of N values; C1(i) is the first movement coefficient of the ith subpopulation
  C2: an array of N values; C2(i) is the second movement coefficient of the ith subpopulation
  F: a given objective function
  Output:
  ĝ: the best found particle
(1)
(2)For i = 1 : MG
(2.1) For p = 1 : PS
(2.1.1)     
(2.1.2)     
(2.1.3)     
(2.1.4)     
(2.2)
(2.3)
(2.4)
(2.5)
(2.6)
(2.7)
(2.8)
(2.8.1)     
(2.8.2)     
(2.8.3)     ;
(2.8.4)     
(2.8.5)     
(2.8.6)     
(2.8.7)     
(2.9)
(2.9.1)     
: initialization function
 Input:
PS: population size
D: problem size
MaxV: an array of D values; MaxVd is the maximum value in the domain of the dth dimension of the problem space
MinV: an array of D values; MinVd is the minimum value in the domain of the dth dimension of the problem space
θ: fragmentation size
: a given objective function
 Output:
: a population
: global best particle
: sparse subspace rate array
(1)
(2)For p = 1 : PS
(2.1)
(2.2)For d = 1 : D
(2.2.1)     r = A random or chaotic value from uniform distribution in interval [0, 1]
(2.2.2)     
(2.2.3)     
(2.2.4)     
(2.2.5)     
(2.2.6)     r = A random or chaotic value from uniform distribution in interval [0, 1]
(2.2.7)     
(2.3)
(2.4)
(2.5)
(3)
(4)
(5)
(6)
(7)

After calling the initialization function, the proposed method (depicted in Algorithm 1) iterates a loop MG times (statement 2 in Algorithm 1). In each iteration, all population individuals are updated using the individual update function (depicted in Algorithm 3 and explained below) (statement 2.1 in Algorithm 1). After updating all population individuals, the global best is updated, and then the rating of the subspace in which the global best is located is updated (statements 2.2 to 2.6 in Algorithm 1). In the following step, the ratings of the subspaces in which the local bests of the subpopulations are located are updated (statement 2.8 in Algorithm 1). Finally, the coefficients of each subpopulation are updated using the coefficient update function (depicted in Algorithm 4 and explained below) (statement 2.9 in Algorithm 1).

: velocity and location update function
 Input:
: an individual or particle
: global best particle
D: problem size
C1: first coefficient
C2: second coefficient
θ: fragmentation size
: sparse subspace rate array
MaxV: an array of D values; MaxVd is the maximum value in the domain of the dth dimension of the problem space
MinV: an array of D values; MinVd is the minimum value in the domain of the dth dimension of the problem space
: a given objective function
 Output:
: an individual or particle
(1)
(2)For d = 1 : D
(2.1) [r1, r2] = two random or chaotic values of uniform distribution in interval [0, 1]
(2.2)
(2.3)
(2.4)
(2.5)
(2.6)
(3)
(4)
(5)
(5.1)
(5.2)
: coefficient update function
 Input:
C1: first coefficient
C2: second coefficient
α: coefficient change rate
π: an exploration flag
 Output:
C1: first coefficient
C2: second coefficient
(1)if π
(1.1)
(1.2)
  else
(1.3)
(1.4)

The only part of the individual update function, depicted in Algorithm 3, that differs from the original PSO is statement 2.2, where a speed coefficient is computed and multiplied into the movement equation of the following statement. This coefficient speeds up the individuals in the useless subspaces and slows them down in the useful subspaces. The coefficient update function, depicted in Algorithm 4, adaptively changes the values of the coefficients C1 and C2 of each subpopulation.
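The two mechanisms just described can be illustrated as follows. The exact formulas of statement 2.2 (Algorithm 3) and of Algorithm 4 did not survive extraction, so both the 1/(0.5 + share) speed factor and the ±α coefficient shifts below are our assumptions, kept only to show the intended direction of each adjustment.

```python
def speed_coefficient(rate, rate_sum):
    """Hypothetical speed factor for statement 2.2 of Algorithm 3: particles
    rush through subspaces with a low rating share (factor > 1) and linger
    in subspaces with a high share (factor < 1). The 1/(0.5 + share) form
    is our assumption, not the paper's formula."""
    share = rate / rate_sum          # this subspace's portion of all ratings
    return 1.0 / (0.5 + share)

def update_coefficients(c1, c2, alpha, exploring):
    """Sketch of Algorithm 4: shift weight between the cognitive pull (C1)
    and the social pull (C2) by the rate alpha, driven by an exploration flag."""
    if exploring:
        return c1 + alpha, c2 - alpha    # favor personal experience
    return c1 - alpha, c2 + alpha        # favor the subpopulation's best

# A rarely rated subspace speeds particles up; a highly rated one slows them.
fast = speed_coefficient(1, 100)     # low share  -> factor ~1.96
slow = speed_coefficient(100, 100)   # high share -> factor ~0.67
```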

4. Implementation of Experimentations

In this section, simulation results of the proposed method are provided and compared with other similar methods. The results are presented in four parts; in each part, different modern methods are compared on different problems, which include a real-world industrial application.

In the first three parts, the average value and standard deviation of the cost-function error are calculated to evaluate the performance of each algorithm, where the error is the difference between the minimum value of the ith objective function found by an algorithm in a run and the actual optimal value of that objective function.
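The evaluation criterion just described can be computed as in the following sketch; the function and variable names are ours, not the paper's.

```python
def error_stats(found_values, f_opt):
    """Mean and standard deviation of the cost-function error |F_found - F_opt|
    over a set of independent runs (the `mean` and `std. dev` criteria
    reported in the result tables)."""
    errors = [abs(v - f_opt) for v in found_values]
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / len(errors)
    return mean, var ** 0.5

# Three runs on a function whose actual optimum is -450:
error_stats([-449.0, -448.0, -450.0], -450.0)  # -> (1.0, 0.816...)
```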

4.1. Experimental Results: CEC 2009

Here, the problems defined by the CEC 2009 benchmark [46] are provided in Table 1. Comparing the performance of the proposed method when a chaotic number generator (CNG) is used against its performance when a random number generator (RNG) is used (on the objective functions of the CEC 2009 benchmark), we found that slightly better results are obtained with the CNG. We used only the logistic map [47] as the CNG and the uniform distribution as the RNG. Therefore, although either generator could be used, all experiments throughout this paper employ the logistic map [47] as the CNG.
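A logistic-map CNG can be sketched as follows. The paper's parameter value for the map did not survive extraction, so the fully chaotic setting r = 4, which keeps the orbit inside [0, 1], is assumed here.

```python
def logistic_cng(seed=0.123, r=4.0):
    """Chaotic number generator based on the logistic map x <- r*x*(1-x).
    With r = 4 (an assumption; the paper's parameter was lost in extraction)
    the orbit is chaotic and stays in [0, 1], so it can stand in for a
    uniform random number generator in the algorithm's statements."""
    x = seed
    while True:
        x = r * x * (1.0 - x)
        yield x

# Draw 1000 chaotic samples in place of uniform random numbers.
gen = logistic_cng()
samples = [next(gen) for _ in range(1000)]
```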


Function

F1 (Beale)
F2 (Easom)
F3 (Matyas)
F4 (Colville)
F5 (Zakharov)
F6 (Schwefel 2.22)
F7 (Schwefel 1.2)
F8 (Dixon-Price)
F9 (Step)
F10 (Sphere)
F11 (SumSquares)
F12 (Quartic)
F13 (Schaffer)
F14 (Six-Hump Camel)
F15 (Bohachevsky2)
F16 (Bohachevsky3)
F17 (Shubert)
F18 (Rosenbrock)
F19 (Griewank)
F20 (Ackley)
F21 (Bohachevsky1)
F22 (Booth)
F23 (Michalewicz2)
F24 (Michalewicz5)
F25 (Michalewicz10)
F26 (Rastrigin)

In this part, the proposed method is compared with the following algorithms: GA [15], differential evolution (DE) [48], PSO [17], BA [25], PBOA [30], NPSO [31], moderate-random-search strategy PSO (MRPSO) [49], ensemble of mutation strategies and control parameters with DE (EPSDE) [50], cooperative coevolution inspired ABC (CCABC) [51], and firefly algorithm (FA) [52]. The results of each method are derived with the same parameters used in its original publication, but some shared parameters, including the population size and the number of fitness evaluations, are fixed across all methods, with the problem size always set to 50. The target objective cost function of each benchmark function is set relative to the optimal target value of that objective function. We use mean and std. dev as the two reporting criteria, defined, respectively, as the average and the standard deviation of the best costs of the particles over different runs on the 26 functions shown in Table 1. Table 2 shows the detailed results of all methods on the 26 functions of Table 1 and validates them using the Friedman test with a p value of 3.12E − 03. The results presented in Table 2 are summarized in Table 3.


FunctionF1F2F3F4F5F6F7F8F9F10F11F12F13F14F15F16F17F18F19F20F21F22F23F24F25F26

GA1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)7 × (1.49E − 2 ± 7.36E − 3)11 × (1.34E − 2 ± 4.53E − 3)8 × (1.1E + 1 ± 1.39)11 × (7.40E + 3 ± 1.14E + 3)11 × (1.22E + 3 ± 2.66E + 2)11 × (1.17E + 3 ± 7.66E + 1)11 × (1.11E + 3 ± 7.42E + 1)11 × (1.48E + 2 ± 1.24E + 1)11 × (1.81E − 1 ± 2.71E − 2)11 × (4.24E − 3 ± 4.76E − 3)1 + (0 ± 0)11 × (6.83E − 2 ± 7.82E − 2)1 + (0 ± 0)1 + (0 ± 0)11 × (1.96E + 5 ± 3.85E + 4)11 × (1.06E + 1 ± 1.16)11 × (1.47E + 1 ± 1.78E − 1)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)9 × (4.29E − 2 ± 9.79E − 2)8 × (1.63E − 1 ± 1.41E − 1)11 × (5.29E+1 ± 4.56)
MRPSO1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)10 × (1.14 ± 4.58E − 1)10 × (1.13E − 2 ± 3.65E − 2)10 × (7.27E − 1 ± 7.3E − 10)7 × (5.54 ± 5.71)4 × (4.58E − 1 ± 1.03E − 8)9 × (3.55 ± 3.12E − 1)10 × (1.12 ± 4.47E − 1)1 + (0 ± 0)2 − (1.52E − 5 ± 1.75E − 5)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)6 × (1.63E + 1 ± 2.24E + 1)10 × (4.57E − 1 ± 7.21E − 1)1 + (0 ± 0)1 + (0 ± 0)1 +  (0 ± 0)1 + (0 ± 0)11 × (2.91 ± 7.86E − 1)7 × (1.15E − 1 ± 1.99E − 2)6 × (3.36E + 1 ± 2.18E + 1)
EPSDE10 × (1.48E − 5 ± 1.64E − 5)11 × (1.04E − 1 ± 3.56E − 5)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)6 × (5.26 ± 5.04)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)10 × (1.02E − 1 ± 1.56E − 1)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)7 × (1.67E + 1 ± 2.19E + 1)8 × (8.25E − 2 ± 8.69E − 2)10 × (2.14E − 1 ± 5.49E − 1)1 + (0 ± 0)11 × (1.32E − 3± 1.49E − 3)1 + (0 ± 0)6 × (1.67E − 3 ± 3.6E − 4)4 × (6.68E − 3 ± 1.49E − 2)7 × (3.88E + 1 ± 1.11E + 1)
PSO1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)10 × (6.67E − 1 ± 1.03E − 8)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)6 × (1.16E − 3 ± 2.81E − 4)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)5 × (1.51E + 1 ± 2.42E + 1)7 × (1.74E − 2 ± 2.08E − 2)9 × (1.65E − 1 ± 4.94E − 1)1 + (0 ± 0)1 + (0 ± 0)11 × (2.28E − 1 ± 1.2E − 1)10 × (2.2 ± 2.57E − 1)11 × (5.65 ± 5.03E − 1)8 × (4.4E + 1 ± 1.17E + 1)
FA1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)9 × (2.73E − 1 ± 1.15E − 11)10 × (1.47E + 2 ± 4.49E + 2)5 × (6.67E − 1 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)9 × (3.66E − 3 ± 1.4E − 3)1 + (0 ± 0)1+ (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)9 × (2.02E+1 ± 1.15)1 + (0 ± 0)6 × (6.56E − 10 ± 1.24E − 9)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)7 × (3.18E − 3 ± 9.27E − 2)9 × (3.65E − 1 ± 2.82E − 1)10 × (4.79E + 1 ± 1.61E + 1)
BA11 × (1.88E − 5 ± 1.94E − 5)10 × (6.13E − 5 ± 4.5E − 5)1 + (0 ± 0)9 × (1.12 ± 4.66E − 1)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)9 × (6.67E − 1 ± 1.16E − 9)10 × (5.12 ± 3.92E − 1)1 + (0 ± 0)1 + (0 ± 0)1 − (1.72E − 6 ± 1.85E − 6)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)10 × (2.88E + 1 ± 1.06E − 1)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)10 × (5.33E − 4 ± 7.47E − 4)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)
CCABC1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)11 × (1.32 ± 7.66E − 1)1 + (0 ± 0)1 + (0 ± 0)8 × (6.57 ± 6.13)3 × (2.35E − 1 ± 2.66E − 10)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)4 × (4.25E − 4 ± 7.65E − 3)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)4 × (1.44E + 1 ± 2.96E + 1)9 × (9.74E − 2 ± 9.77E − 2)8 × (1.14E − 1 ± 3.26E − 1)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)5 × (5.52E − 4 ± 7.85E − 3)10 × (5.45E − 1 ± 2.89E − 2)9 × (4.5E + 1 ± 1.19E + 1)
DE1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)8 × (4.09E − 2 ± 8.2E − 2)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)8 × (6.67E − 1 ± 1E − 9)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)7 × (1.36E − 3 ± 4.2E − 4)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)8 × (1.82E + 1 ± 5.04)5 × (1.48E − 3 ± 2.96E − 3)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)8 × (4.22E − 3±1.25E − 2)6 × (6.91E − 2 ± 6.42E − 2)5 × (1.17E + 1 ± 2.54)
PBOA1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1+(0 ± 0)11 × (7.59E − 1 ± 7.1E − 10)1 + (0 ± 0)7 × (6.67E − 1 ± 5.65E − 10)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)8 × (6.78E − 3 ± 1.33E − 3)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)3 × (4.28 ± 5.79)6 × (4.68E − 3 ± 6.72E − 3)7 × (3.12E − 8 ± 3.98E − 8)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)
NPSO1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)9 × (8.51 ± 8.77)5 × (6.67E − 1 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)5 × (9.7E − 4 ± 1.25E − 3)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)2 × (1.04E − 7 ± 2.95E − 7)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)1 + (0 ± 0)4 × (6.68E − 3 ± 1.49E − 2)1 + (0 ± 0)
APSOA1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)3(3.47E − 5 ± 2.33E − 6)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)1(0 ± 0)

A cell of the form r s (m ± d) in the row of a method A and the column of a function F in Table 2 means that the average of the best values found by method A over 30 independent runs on F is m and the standard deviation is d. It also means that the rank of method A among all methods on F is r. The sign s indicates that the proposed method's performance is superior/inferior/equal to that of method A if s is +/−/×.

Method | Average rank (rank sum over the 26 functions)

GA | 7.08 (184)
MRPSO | 4.42 (115)
EPSDE | 4.04 (105)
PSO | 3.62 (94)
FA | 3.50 (91)
BA | 3.38 (88)
CCABC | 3.35 (87)
DE | 2.81 (73)
PBOA | 2.38 (62)
NPSO | 1.77 (46)
APSOA | 1.08 (28)
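The summary in Table 3 is obtained by averaging each method's per-function ranks from Table 2; for instance, GA's rank sum of 184 over 26 functions gives 184/26 ≈ 7.08. A small sketch of this aggregation (function and variable names are ours):

```python
def average_ranks(ranks_per_function):
    """Average rank and rank sum per method over all benchmark functions,
    as summarized in Tables 3 and 6. `ranks_per_function` holds one list
    of per-method ranks for each benchmark function."""
    n_funcs = len(ranks_per_function)
    n_methods = len(ranks_per_function[0])
    sums = [sum(ranks[m] for ranks in ranks_per_function)
            for m in range(n_methods)]
    return [(s / n_funcs, s) for s in sums]

# Two methods over three functions: method 0 always ranks 1, method 1 ranks 2.
average_ranks([[1, 2], [1, 2], [1, 2]])  # -> [(1.0, 3), (2.0, 6)]
```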

The results show that, on almost all of the functions in the CEC 2009 benchmark, the proposed method converges to the optimum in terms of the particles' average cost. Since the proposed algorithm fully converges to the optimal point, the particles have a zero standard deviation at the end of the runs; where other methods also reach the optimum, they share this best possible performance. The results also show that on some functions, such as F8, F12, and F18, only the proposed method achieves the optimal point, which demonstrates its superiority. The Friedman test confirms that these results are statistically significant and not obtained by chance.

4.2. Experimental Results: CEC 2005

In this part, the proposed method is compared with the following algorithms: the modified bat algorithm hybridized with differential evolution (MBADE) [10], rain optimization algorithm (ROA) [42], CS [32], teaching-learning-based optimization (TLBO) algorithm [53], DSA [33], and BMO algorithm [34]. The comparison is based on the 25 problems defined by the CEC 2005 benchmark [54], summarized in Table 4. The results of each method are derived with the same parameters used in its original publication, but some shared parameters, including the population size and the number of fitness evaluations, are fixed across all methods, with the problem size always set to 50. The target objective cost function of each benchmark function is set relative to the optimal target value of that objective function. We use mean and std. dev as the two reporting criteria, defined, respectively, as the average and the standard deviation of the best costs of the particles over different runs on the 25 functions shown in Table 4. Table 5 shows the detailed results of all methods on the 25 functions of Table 4 and validates them using the Friedman test with a p value of 7.92E − 05. The results presented in Table 5 are summarized in Table 6.


Function | Type, optimal value

F1 (Shifted Sphere Function) | U, −450
F2 (Shifted Schwefel's Problem 1.2) | U, −450
F3 (Shifted Rotated High-Conditioned Elliptic Function) | U, −450
F4 (Shifted Schwefel's Problem 1.2 with Noise in Fitness) | U, −450
F5 (Schwefel's Problem 2.6 with Global Optimum on Bounds) | U, −310
F6 (Shifted Rosenbrock's Function) | M, 390
F7 (Shifted Rotated Griewank's Function without Bounds) | M, −180
F8 (Shifted Rotated Ackley's Function with Global Optimum on Bounds) | M, −140
F9 (Shifted Rastrigin's Function) | M, −330
F10 (Shifted Rotated Rastrigin's Function) | M, −330
F11 (Shifted Rotated Weierstrass Function) | M, 90
F12 (Schwefel's Problem 2.13) | M, −460
F13 (Expanded Extended Griewank's plus Rosenbrock's Function (F8F2)) | E, −130
F14 (Expanded Rotated Extended Scaffer's F6) | E, −300
F15 (Hybrid Composition Function 1) | HC, 120
F16 (Rotated Hybrid Composition Function 1) | HC, 120
F17 (Rotated Hybrid Composition Function 1 with Noise in Fitness) | HC, 120
F18 (Rotated Hybrid Composition Function 2) | HC, 10
F19 (Rotated Hybrid Composition Function 2 with a Narrow Basin for the Global Optimum) | HC, 10
F20 (Rotated Hybrid Composition Function 2 with the Global Optimum on the Bounds) | HC, 10
F21 (Rotated Hybrid Composition Function 3) | HC, 360
F22 (Rotated Hybrid Composition Function 3 with High Condition Number Matrix) | HC, 360
F23 (Noncontinuous Rotated Hybrid Composition Function 3) | HC, 360
F24 (Rotated Hybrid Composition Function 4) | HC, 260
F25 (Rotated Hybrid Composition Function 4 without Bounds) | HC, 260

U means unimodal function, M means multimodal function, E means extended function, and HC means hybrid composition function.
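As an illustration of how the benchmark functions above are built, F1 (Shifted Sphere) evaluates the sphere function around a shift vector o and adds the bias f(x*) = −450 listed in the table. The shift vector in this sketch is made up for illustration; the official one is distributed with the CEC 2005 data files.

```python
# CEC 2005 F1 (Shifted Sphere): f(x) = sum_i (x_i - o_i)^2 + f_bias.
def shifted_sphere(x, shift, bias=-450.0):
    return sum((xi - oi) ** 2 for xi, oi in zip(x, shift)) + bias

o = [1.5, -2.0, 0.25]           # hypothetical shift vector (not the official one)
print(shifted_sphere(o, o))     # at the optimum x = o, the value equals the bias: -450.0
```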

Function | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12 | F13 | F14 | F15 | F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23 | F24 | F25

DSA3 + (2.18E − 28 ± 0)7 + (3.07E + 3 ± 9.49E + 2)7 + (5.71E + 7 ± 4.82E + 6)7 + (8.32E + 3 ± 2.61E + 3)6 + (9.58E + 3 ± 5.83E + 2)7 + (5.72E + 1 ± 8.91E + 1)5 + (7.9E − 2 ± 7.26E − 3)5 + (9.17 ± 8.97E − 1)1 + (0 ± 0)3 + (1.44E + 2 ± 6.28E + 1)7 + (7.16E + 1 ± 4.1)7 + (6.46E + 4 ± 8.48E + 3)4 + (3.19 ± 4.78E − 1)7 + (6.71E + 1 ± 8.7E − 1)2 + (7.32E + 1 ± 3.02E + 1)7 + (7E + 2 ± 4.26E + 1)6 + (6.48E+2 ± 9.64E + 1)4 + (9.29E + 2 ± 1.21E + 1)3 + (3.98E + 2 ± 8.29)4 + (5.59E + 2 ± 1.75)4 + (7.23E + 2 ± 1.95E − 1)7 + (3.11E + 3 ± 7.84E + 1)5 + (2.02E + 3 ± 9.71E − 5)4 + (6.28E + 2 ± 3.58E − 6)7 + (9.85E + 2 ± 6.69)
BMO4 + (8.55E − 28 ± 1.3E − 29)6 + (9.66E − 1 ± 6.46E − 2)6 + (8.19E + 6 ± 8.24E + 6)6 + (7.6E + 3 ± 6.22E + 3)5 + (4.52E + 3 ± 6.96E + 3)5 + (5.34E + 1 ± 7.31E + 1)6 + (9.14E − 2 ± 4.87E − 2)6 + (9.82 ± 7.24E − 1)4 + (3.28 ± 4.11)4 + (1.5E + 2 ± 8.16E + 1)4 + (3.36E + 1 ± 8.57)5 + (8.48E + 3 ± 2.71E + 3)5 + (3.4 ± 8.41E − 1)6 + (4.51E + 1 ± 7.38E − 1)5 + (4.12E + 2 ± 8.93E + 2)6 + (4.78E + 2 ± 8.13E + 1)7 + (7.62E + 2 ± 9.26E + 1)1 − (4.49E + 2 ± 3.67)5 + (9.03E + 2 ± 2.3)3 + (4.3E + 2 ± 2.7)7 + (4.31E + 3 ± 7.33)3 + (8.77E + 2 ± 9.82E + 1)7 + (8.38E + 03 ± 7.31)6 + (1.01E + 3 ± 5.31)4 + (3.69E + 2 ± 3.21)
TLBO6 + (4.63E − 27 ± 6.52E − 27)4 + (1.89E − 9 ± 4.84E − 9)3 + (1.31E + 6 ± 6.83E + 5)4 + (3.35E + 2 ± 8.4E + 2)3 + (3.68E + 3 ± 1E + 3)4 + (3.32E + 1 ± 5.38E + 1)3 + (3.64E − 2 ± 4.71E − 2)3 + (7.29 ± 6.29E − 1)7 + (1.01E + 2 ± 3.36E + 1)5 + (1.7E + 2 ± 3.48E + 1)5 + (3.41E + 1 ± 5.42)4 + (5.97E + 3 ± 1.52E + 3)6 + (4.85 ± 1.46)5 + (1.93E + 1 ± 4.2E − 1)7 + (5.37E + 2 ± 5.97E + 1)5 + (3.33E + 2 ± 2.05E + 2)3 + (2.91E + 2 ± 1.69E + 2)6 + (1.05E + 3 ± 4.41E + 1)6 + (9.51E + 2 ± 3.96E + 1)5 + (9.85E + 2 ± 2.27E + 1)5 + (1.2E + 3 ± 3.84E + 2)4 + (9.47E + 2 ± 4.27E + 1)6 + (2.05E + 3 ± 2.75E + 1)2 + (2.9E + 2 ± 2.5E + 2)6 + (4.31E + 2 ± 4.77E + 2)
CS5 + (3.13E − 27 ± 1.26E − 26)5 + (2.73E − 3 ± 3.91E − 3)5 + (2.8E + 6 ± 8.04E + 5)5 + (3.25E + 3 ± 1.34E + 3)4 + (4.1E + 3 ± 7.32E + 2)3 + (3.05E + 1 ± 3.17E + 1)2 + (4.3E − 3 ± 7.28E − 3)2 + (7.06 ± 6.28E − 1)6 + (3.54E + 1 ± 5.6)6 + (2.08E + 2 ± 4.31E + 1)3 + (3.31E + 1 ± 2.86)6 + (1.39E + 4 ± 3.13E + 3)7 + (6.73 ± 2.11)4 + (1.81E + 1 ± 2.81E − 1)6 + (4.18E + 2 ± 8.28E + 1)4 + (3.2E + 2 ± 7.12E + 1)4 + (3.28E + 2 ± 6.14E + 1)7 + (1E + 10 ± 6.89E + 7)7 + (9.85E + 2 ± 3.31)6 + (1.04E + 3 ± 3.51E + 1)3 + (6.81E + 2 ± 6.55E + 1)5 + (1.02E + 3 ± 2.73E + 1)2 − (6.56E + 2 ± 2.68E + 1)5 + (6.93E + 2 ± 3.72E + 2)2 − (2.7E + 2 ± 2.67)
ROA7 + (2.98E − 25 ± 5.33E − 26)2 − (9.18E − 17 ± 1.83E − 17)4 + (2.06E + 6 ± 6.51E + 5)1 − (5.65E − 3 ± 3.08E − 4)7 + (1E + 4 ± 2.28E + 3)2 + (1.44E + 1 ± 3.91)4 + (6.7E − 2 ± 1.93E − 2)4 + (7.93 ± 9.73E − 1)5 + (8.6 ± 2.47)7 + (3.63E + 2 ± 6.9E + 1)6 + (3.6E + 1 ± 3.84)3 + (1.64E + 3 ± 3.96E + 2)3 + (2.36 ± 4.76E − 1)3 + (1.43E + 1 ± 4.03E − 1)3 + (3.5E + 2 ± 2.64E + 2)2 + (2.46E + 2 ± 1.98E + 2)2 + (2.31E + 2 ± 1.37E + 1)5 + (9.64E + 2 ± 3.79)2 + (1.78E + 2 ± 1.31E + 1)7 + (1.37E + 3 ± 8.87E + 1)6 + (1.41E + 3 ± 2.98E + 2)6 + (1.63E + 3 ± 6.99E + 1)4 + (1.64E + 3 ± 2.44E + 2)7 + (1.87E+3 ± 4.27E − 2)1 − (1.64E + 2 ± 4.31E + 2)
MBADE2 + (1.48E − 28 ± 7.33E − 29)1 − (8.75E − 17 ± 3.22E − 17)2 + (2.77E + 4 ± 9.69E + 3)3 + (5.69E − 2 ± 3.18E − 4)2 + (7.23E − 1 ± 8.2E − 2)6 + (5.56E + 1 ± 1.3E + 1)7 + (4.44E + 2 ± 7.7E + 1)7 + (3.7E + 1 ± 2.08)1 × (0 ± 0)1 + (1.05E + 1 ± 5.07)2 + (8.61 ± 3.99)1 − (5.98E + 2 ± 7.97E + 1)2 + (1.1 ± 7.98E − 1)1 − (5.61 ± 9.78)4 + (3.56E+2 ± 8.42E + 1)3 + (2.82E + 2 ± 3.04E + 1)5 + (5.8E + 2 ± 6.6E + 1)3 + (7.9E + 2 ± 2.69E + 1)4 + (8.99E + 2 ± 5.51E + 1)1 − (8.39E + 2 ± 7.01E + 1)2 + (3.45E + 2 ± 9.3E + 1)2 + (6.25E + 2 ± 2.03E + 1)1 − (1.5E + 2 ± 9.41E + 1)3 + (4.35E + 2 ± 5.65E + 1)5 + (3.97E + 2 ± 7.24E + 1)
APSOA1(0 ± 0)3(2.83E − 16 ± 8.84E − 17)1(2.27E + 4 ± 9.19E + 3)2(1.12E − 2 ± 5.63E − 4)1(3.44E − 1 ± 4.39E − 2)1(1.27E + 1 ± 1.46)1(5.94E − 4 ± 5.22E − 4)1(6.65 ± 2.95E − 01)1(0 ± 0)2(6.39E + 1 ± 2.64E + 1)1(5.73 ± 2.99)2(6.74E + 2 ± 6.5E + 1)1(1.02 ± 1.7E − 1)2(7.94 ± 7.56E − 1)1(7.25E + 1 ± 6.14E + 1)1(1.43E + 2 ± 1.19E + 1)1(1.53E + 2 ± 6.78E + 1)2(7.36E + 2 ± 2.27)1(4E + 1 ± 3.32)2(8.43E + 2 ± 1.64)1(3.04E + 2 ± 1.14E − 5)1(3.73E + 2 ± 3.39E + 1)3(7.83E + 2 ± 4.63E − 1)1(1.47E + 2 ± 7.3E − 5)3(2.73E + 2 ± 4.73)

Each entry r s (m ± d) below column Fi means that the average of the best values found by method A over 30 independent runs on function Fi is m and the standard deviation is d; r is the rank of method A among all methods on Fi. The sign s indicates that the proposed method’s performance is superior (+), inferior (−), or equal (×) to that of method A.

| DSA | BMO | TLBO | CS | ROA | MBADE | APSOA |
| 5.16 (129) | 5.04 (126) | 4.68 (117) | 4.56 (114) | 4.12 (103) | 2.84 (71) | 1.48 (37) |

Entries give the mean rank (with the sum of ranks in parentheses) of each method over the 25 functions of Table 5.
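The summary in Table 6 follows directly from the per-function ranks of Table 5: for each method, the ranks over the 25 functions are summed (the value in parentheses) and averaged. For example, the ranks listed for APSOA in Table 5 reproduce its summary entry:

```python
# Mean rank and rank sum, as reported in the Table 6 summary.
def summarize(ranks):
    total = sum(ranks)
    return total / len(ranks), total

# APSOA's per-function ranks on F1-F25, read off Table 5
apsoa_ranks = [1, 3, 1, 2, 1, 1, 1, 1, 1, 2, 1, 2, 1,
               2, 1, 1, 1, 2, 1, 2, 1, 1, 3, 1, 3]
mean_rank, rank_sum = summarize(apsoa_ranks)
print(mean_rank, rank_sum)      # 1.48 37
```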

According to Table 5, the proposed algorithm attains better quality than the other methods on almost all functions, always ranking among the top three. Table 5 also shows that the proposed method achieves the best performance on 16 functions.

On F1, the proposed approach reaches the zero-error global optimum alone, and on F9, it reaches it jointly with DSA and MBADE. There are only 9 functions on which the proposed method does not achieve the best performance. We can also conclude that the proposed method maintains a satisfactory diversity in the problem space, and the statistical test confirms that its results are the best among the compared methods.

Finally, the complexity of the proposed algorithm, computed according to the guidelines specified by CEC 2005 [54], is shown in Table 7.



| D | T0 | T1 | T̂2 | (T̂2 − T1)/T0 |
| 10 | 0.118 | 38.36 | 38.56 | 1.69 |
| 30 | 0.118 | 47.71 | 47.99 | 2.37 |
| 50 | 0.118 | 54.36 | 54.82 | 3.90 |

Here, T0, T1, and T̂2 are the timing quantities defined by the CEC 2005 guidelines [54], and the last column, (T̂2 − T1)/T0, is the resulting algorithm complexity.
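The last column of Table 7 is the complexity measure defined in the CEC 2005 guidelines [54]: (T̂2 − T1)/T0, where T0 times a fixed loop of elementary operations, T1 times 200,000 evaluations of a benchmark function, and T̂2 is the mean run time of the complete algorithm over the same evaluation budget. A minimal sketch:

```python
# CEC 2005 algorithm-complexity measure: (T2_hat - T1) / T0.
def cec2005_complexity(t0, t1, t2_hat):
    return (t2_hat - t1) / t0

print(round(cec2005_complexity(0.118, 38.36, 38.56), 2))  # D = 10 row of Table 7 -> 1.69
```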

All the mentioned methods are tested with different numbers of fitness evaluations, and the results are shown in Figure 1, with the problem dimension set to 30. We can see from Figure 1 that, regardless of the available number of fitness evaluations, the proposed algorithm converges to a better solution, and it also needs fewer fitness evaluations than the other methods to reach a solution of the same quality on most of the fitness functions. This test also shows that the proposed algorithm is among the best methods for every fitness-evaluation budget and attains the best cost among them. In addition, increasing the number of fitness evaluations yields better results.

In Figure 2, the results of the proposed method in terms of several criteria are presented for different dimension sizes on the first six objective functions of the CEC 2005 benchmark. The population size in this experiment is 20. Figure 2 includes the best and mean costs of the population members, their standard deviation, and the execution time when applying the APSOA method to benchmark functions F1–F6 for different dimension sizes. From the results in Figure 2, we can see that increasing the dimension size makes the problem more complicated, which in turn raises the best cost value.

4.3. Experimental Results: CEC 2010 and Real-World Problems

In this part, we compare our proposed method with other recently proposed metaheuristic methods. Here, we use the benchmark functions of the CEC 2010 test series [9] and set the population size to 40. Twenty well-known objective functions of the CEC 2010 benchmark [9] are used as F1–F20 for the assessment in this section (F1–F3 are separable, F4–F8 are single-group m-nonseparable, F9–F13 are D/(2m)-group m-nonseparable, F14–F18 are D/m-group m-nonseparable, and finally F19–F20 are fully nonseparable). The dimension D is 1,000 throughout this section, and m, i.e., the number of variables in each nonseparable subcomponent, is 50 here. Also, a set of 4 real-world problems is used as F21–F24 in this section. The first two real-world problems, F21 and F22, are, respectively, problem number 1 and problem number 7 of CEC 2011 [6]. F23 and F24 are the linear-equation-system problem [7] and the polynomial-fitting problem [8]. It is worth mentioning that 51 independent runs are performed and the results are averaged over them. The comparison is carried out against the following methods: SDEOA [36], JOOA [37], diversity-enhanced neighborhood-search particle swarm optimization (DNSPSO), and D-PSO-C [35].
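To make the separability classes above concrete, the sketch below builds a simplified single-group m-nonseparable function: the first m variables are coupled through a Schwefel 1.2-style term, while the remaining variables contribute separably. The real CEC 2010 functions additionally use shift vectors, rotations, and random permutations of the variable indices, all omitted here for brevity.

```python
# Schwefel's Problem 1.2 couples every variable with all earlier ones,
# so a group evaluated by it is nonseparable.
def schwefel_1_2(z):
    total, prefix = 0.0, 0.0
    for v in z:
        prefix += v
        total += prefix ** 2
    return total

# Single-group m-nonseparable: one coupled group of m variables plus a
# separable (sphere) remainder, mimicking the structure of CEC 2010 F4-F8.
def single_group_m_nonseparable(x, m=50):
    return schwefel_1_2(x[:m]) + sum(v * v for v in x[m:])
```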

According to the results presented in Table 8, APSOA always ranks among the top three, except on function F13. It achieves the best result at the end of the specified number of fitness function evaluations on half of the problems. The summary of the results of Table 8 is shown in Table 9, according to which the proposed APSOA exhibits the best performance among all methods. The results in Table 8 are validated by the Friedman test with a p value of 1.92E − 02.


Function | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12 | F13 | F14 | F15 | F16 | F17 | F18 | F19 | F20 | F21 | F22 | F23 | F24

DNSPSO5 + (3.12E + 6 ± 1.09E + 6)4 + (3.21E + 3 ± 8.29E + 1)5 + (1.34E + 1 ± 1.34E − 1)4 + (1.15E + 12 ± 1.74E + 11)4 + (1.38E + 8 ± 1.42E + 7)4 + (1.9E + 6 ± 4.09E + 5)5 + (4.83E+5 ± 6.76E+4)5 + (3.31E + 7 ± 2.82E+7)4 + (2.84E + 8 ± 5.29E + 7)5 + (3.2E + 3 ± 5.54E + 2)5 + (1.72E + 2 ± 8.16)5 + (8.34E+5 ± 8.23E+4)5 + (1.35E + 7 ± 6.86E + 7)4 + (7.18E + 8 ± 8.44E + 7)3 + (3.2E + 3 ± 3.33E + 1)5 + (1.17E + 2 ± 1.87E − 1)4 + (2.25E + 5 ± 1.84E + 4)5 + (4.35E + 9 ± 2.79E + 9)4 + (1.35E + 6 ± 5.8E + 4)5 + (5.1E + 9 ± 2.62E + 9)5 + (9.59 ± 6.68)4 + (1.76 ± 2.91E − 1)1 + (2.79 ± 1.69)2 − (1.4E + 1 ± 1.04E + 1)
D-PSO-C4 + (3.62E + 5 ± 9.13E + 5)5 + (8.66E + 3 ± 1.46E + 2)4 + (6.36 ± 6.05E − 1)5 + (4.02E + 12 ± 2.41E + 12)5 + (1.83E + 8 ± 4.62E + 7)5+(4.95E+6 ± 9.4E+5)4 + (2.81E + 5 ± 5.37E + 4)3 + (2.23E + 7 ± 1.15E + 7)5 + (8.01E+8 ± 1.38E+8)4 + (3.12E+3 ± 9.1E + 1)4 + (1.61E + 2 ± 7.16E − 1)4 + (3.7E+5 ± 6.72E + 4)3 − (2.27E + 4 ± 1.18E + 4)5 + (2.62E + 9 ± 3.47E + 8)5 + (4.52E + 3 ± 1.55E + 2)3 + (1.21E + 2 ± 4.88E − 1)5 + (1.38E + 6 ± 8.36E + 4)4 + (1.03E + 6 ± 7.01E + 5)5 + (1.03E + 7 ± 1.28E + 6)4 + (1.81E + 6 ± 8.0E + 5)4 + (8.71 ± 7.04)5 + (1.83 ± 6.44E − 2)5 + (1.58E + 2 ± 9.4E + 1)4 + (7.42E+1 ± 7.4E + 1)
SDEOA3 + (1.18E − 2±4.02E − 2)3 + (6.3E+2 ± 3.98E + 1)2 − (1.5 ± 1.93E − 1)3 + (8.39E+11 ± 3.36E + 11)1 − (2E + 7 ± 3.25E + 6)2 + (7.31E − 1 ± 8.63E − 2)3 + (7.63E + 4 ± 6.26E+4)4 + (2.88E + 7 ± 2.47E + 5)3 + (3.07E + 7 ± 3.98E+6)2+(8.61E+2 ± 3.14E+1)1 − (1.48 ± 5E − 1)3 + (6.04E + 4 ± 1.38E + 4)2 − (8.37E + 2 ± 2.78E + 2)3 + (1.75E + 8 ± 5.23E + 6)2 + (1.83E + 3 ± 1.24E + 2)1 − (1.16E + 1 ± 3.17)3 + (1.63E + 5 ± 9.96E + 3)3 + (5.89E+3 ± 3.13E + 3)2 + (1.02E + 6 ± 5.9E + 4)3 + (2.41E + 3 ± 1.73E + 2)2 + (1.12 ± 1.31)2 − (1.52 ± 9.87E − 2)3 + (1.94E + 1 ± 8.24)1 − (1.1E + 1 ±1.86E + 1)
JOOA2 + (1.18E − 19 ± 6.17E − 19)2 + (4.12E+2 ± 3.33E + 1)1 − (1.42 ± 1.46E − 1)2 + (1.22E + 11 ± 3.74E + 10)2 − (2.43E + 7 ± 5.46E + 6)3 + (1.63E + 6 ± 1.52E + 6)2 + (1.25E − 5 ± 2.48E − 4)1 − (2.42E − 3 ± 3.64E − 2)2 + (2.51E + 7 ± 4.03E + 6)3 + (1.67E + 3 ± 3.71E + 2)3 + (6.26E+1 ± 6.26)2 + (1.13E + 3 ± 1.3E+2)1 − (2.78E + 2 ± 8.58E + 1)1 − (7.16E + 7 ± 4.11E + 6)4 + (3.86E + 3 ± 4.37E + 2)4 + (1.27E+2 ± 1.02E + 1)1 − (1.81E + 4 ± 1.51E + 3)2 + (8.5E+2 ± 1.07E + 2)3 + (1.13E + 6 ± 4.31E + 4)1 − (8.37E + 2 ± 6.71E + 1)3 + (1.48 ± 1.71)1 − (1.37 ± 1.01E − 1)4 + (4.92E + 1 ± 2.67E + 1)5 + (1.49E + 2 ± 2.22E + 2)
APSOA1(7.49E − 23 ± 5.37E − 23)1(2.71E + 1 ± 8.71E + 1)3(2.13 ± 9.04E − 1)1(9.95E + 10 ± 5.45E + 10)3(3.24E + 7 ± 4.18E + 6)1(1.11E − 1 ± 1.28E − 2)1(5.73E − 6 ± 5.21E − 6)2(6.65 ± 2.95E − 01)1 + (7.38E + 6 ± 1.96E + 6)1(6.69E + 2 ± 2.63E + 1)2(3.75 ± 2.98E − 1)1(6.85E + 2 ± 7.47E + 1)4(5.42E + 4 ± 5.07E + 4)2(8.94E + 7 ± 5.56E + 6)1(6.27E + 2 ± 3.19E + 2)2(9.43E + 1 ± 8.74)2(1.88E + 4 ± 6.34E + 3)1(7.37E + 2 ± 3.27E + 2)1(6.44E + 5 ± 6.32E + 5)2(8.41E + 2 ± 1.46)1(1.04 ± 1.13)3(1.73 ± 3.38E − 1)2(1.08E + 1 ± 4.62E − 1)3(1.57E + 1 ± 1.73E + 1)

Each entry r s (m ± d) below column Fi means that the average of the best values found by method A over 30 independent runs on function Fi is m and the standard deviation is d; r is the rank of method A among all methods on Fi. The sign s indicates that the proposed method’s performance is superior (+), inferior (−), or equal (×) to that of method A.

| DNSPSO | D-PSO-C | SDEOA | JOOA | APSOA |
| 4.25 (102) | 4.33 (104) | 2.38 (57) | 2.29 (55) | 1.75 (42) |

Entries give the mean rank (with the sum of ranks in parentheses) of each method over the 24 functions of Table 8.

4.4. Experimental Results: A Real-World Problem

Artificial intelligence is an appropriate candidate for solving many real-world electrical problems [55–71]. In this part, we solve a combined heat and power economic dispatch (CHPED) problem [6, 71] using our proposed method and several other optimization algorithms. A candidate solution is a vector of size 9, P = [pow1, pow2, pow3, pow4, p̃1, p̃2, h̃1, h̃2, heat1], where pow1–pow4 are the outputs of the four power-only units, p̃1, p̃2 and h̃1, h̃2 are the power and heat outputs of the two cogeneration units, and heat1 is the output of the heat-only unit. The problem then minimizes the total production cost subject to the power-balance and heat-balance constraints, with the cost functions of the individual units defined as in [6, 71].

In addition, the operating points of the two cogeneration power-heat units must lie within their valid work regions, which are depicted in Figure 3 for cogeneration units 1 and 2.
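A feasibility check for a candidate dispatch vector can be sketched as below. The power demand, heat demand, and tolerance are hypothetical values chosen for illustration, and the per-unit box limits and the cogeneration feasible regions of Figure 3 are omitted; a full implementation would also test each cogeneration unit's operating point against its region.

```python
# Power- and heat-balance check for a 9-variable dispatch
# [pow1..pow4, cogen powers (2), cogen heats (2), heat-only unit].
def is_feasible(sol, p_demand, h_demand, tol=1e-6):
    pow_units, cogen_pow = sol[0:4], sol[4:6]
    cogen_heat, heat_only = sol[6:8], sol[8]
    power_ok = abs(sum(pow_units) + sum(cogen_pow) - p_demand) <= tol
    heat_ok = abs(sum(cogen_heat) + heat_only - h_demand) <= tol
    return power_ok and heat_ok

sol = [100.0, 100.0, 100.0, 100.0, 50.0, 50.0, 30.0, 30.0, 40.0]  # made-up dispatch
print(is_feasible(sol, 500.0, 100.0))   # True: both balances hold exactly
```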

The population size of each algorithm has been set according to the parameters reported in its original paper. To keep the comparison fair, we report, for each method, the best solution found in its population after the same number of fitness evaluations, together with its cost value. The solutions provided by the different optimization algorithms for this problem are shown in Table 10, where we can see that the proposed method achieves the best performance.


| Variable | ABC | EP | PSO | RCGA | Proposed |
| pow1 | 43.95 | 61.36 | 18.46 | 74.68 | 44.56 |
| pow2 | 98.59 | 95.12 | 124.26 | 97.96 | 98.54 |
| pow3 | 112.93 | 99.94 | 112.78 | 167.23 | 112.67 |
| pow4 | 209.77 | 208.73 | 209.82 | 124.91 | 209.82 |
| Cogen. unit 1 power | 98.80 | 98.80 | 98.81 | 98.80 | 95.15 |
| Cogen. unit 2 power | 44.00 | 44.00 | 44.01 | 44.00 | 40.00 |
| Cogen. unit 1 heat | 12.10 | 18.07 | 57.92 | 58.10 | 21.50 |
| Cogen. unit 2 heat | 78.02 | 77.55 | 32.76 | 32.41 | 75.00 |
| heat1 (heat-only unit) | 59.88 | 54.37 | 59.32 | 59.49 | 53.50 |
| Cost | 10317 | 10390 | 10613 | 10667 | 10097 |
| Consumed time | 5.16 | 5.28 | 5.38 | 6.47 | 1.84 |

5. Conclusions and Future Works

Nature-inspired social and solitary behaviors have motivated numerous successful and efficient algorithms across scientific fields. In this paper, instinctive behaviors of birds are used to provide a more accurate, more target-oriented, and more controlled algorithm than the basic swarm algorithms. Based on classical conditioning learning behavior, a model is presented in which each particle performs a conditioned response to a natural stimulus in the search space. This model implies that when a particle lies in a low-diversity category, it moves towards its local optimal point, while if it lies in a high-diversity category, it moves towards the global optimum of its category.

An initial population based on the elite particles is also generated, following the assumption that birds with insufficient energy encounter flight problems. Another goal of the proposed algorithm is to give particles more exploitation time in valuable spaces, which motivated us to reduce their velocity there through changes in the velocity equation and, conversely, to increase it in undesirable spaces. Simulation results of the proposed method, presented in four parts, show that it is an efficient and reliable algorithm on static functions compared with other algorithms. We conclude that our method finds more accurate solutions in a simpler and faster way, and it also performs better in industrial applications than prior methods.

Building on what this paper accomplished, we propose the following ideas for future work: applying chaos theory to the generation of the initial population, studying quantum particles in the abovementioned algorithm, and applying the algorithm of this paper to dynamic optimization problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. M. Song and D. Chen, “An improved knowledge-informed NSGA-II for multi-objective land allocation (MOLA),” Geo-spatial Information Science, vol. 21, no. 4, pp. 273–287, 2018.
  2. T. Pukkala, “Optimized cellular automaton for stand delineation,” Journal of Forestry Research, vol. 30, no. 1, pp. 107–119, 2019.
  3. A. Kalantari, A. Kamsin, S. Shamshirband, A. Gani, H. Alinejad-Rokny, and A. T. Chronopoulos, “Computational intelligence approaches for classification of medical data: state-of-the-art, future challenges and research directions,” Neurocomputing, vol. 276, pp. 2–22, 2018.
  4. R. Andonie, A. T. Chronopoulos, D. Grosu, and H. Gâlmeanu, “An efficient concurrent implementation of a neural network algorithm,” Concurrency and Computation: Practice and Experience, vol. 18, no. 12, pp. 1559–1573, 2006.
  5. G. E. Phillips-Wren, L. S. Iyer, U. R. Kulkarni, and T. Ariyachandra, “Business analytics in the context of big data: a roadmap for research,” Communications of the Association for Information Systems, vol. 37, p. 23, 2015.
  6. S. Das and P. N. Suganthan, Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems, Jadavpur University, Kolkata, India, 2010.
  7. C. García-Martínez, M. Lozano, F. Herrera, D. Molina, and A. M. Sánchez, “Global and local real-coded genetic algorithms based on parent-centric crossover operators,” European Journal of Operational Research, vol. 185, no. 3, pp. 1088–1113, 2008.
  8. F. Herrera and M. Lozano, “Gradual distributed real-coded genetic algorithms,” IEEE Transactions on Evolutionary Computation, vol. 4, no. 1, pp. 43–63, 2000.
  9. K. Tang, X. Li, P. N. Suganthan, Z. Yang, and T. Weise, “Benchmark functions for the CEC’2010 special session and competition on large-scale global optimization,” Nature Inspired Computer Application Lab, University of Science and Technology of China, Hefei, China, 2009, http://goanna.cs.rmit.edu.au/xiaodong/publications/lsgo-cec10.pdf.
  10. G. Yildizdan and Ö. K. Baykan, “A novel modified bat algorithm hybridizing by differential evolution algorithm,” Expert Systems with Applications, vol. 141, p. 112949, 2020.
  11. R. L. Haupt and S. E. Haupt, Practical Genetic Algorithms, John Wiley & Sons, Hoboken, NJ, USA, 2nd edition, 2004.
  12. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, UK, 2004.
  13. W. Sun and Y. Yuan, Optimization Theory and Methods: Nonlinear Programming, Springer, Berlin, Germany, 2006.
  14. J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, NY, USA, 2nd edition, 2006.
  15. J. Holland, “Genetic algorithms and the optimal allocation of trials,” SIAM Journal on Computing, vol. 2, pp. 88–105, 1973.
  16. S. Binitha and S. S. Sathya, “A survey of bio inspired optimization algorithms,” International Journal of Soft Computing and Engineering, vol. 2, 2012.
  17. J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the 4th IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, December 1995.
  18. N. F. Wan and L. Nolle, “Solving a multi-dimensional knapsack problem using hybrid particle,” in Proceedings of the 23rd European Conference on Modelling and Simulation, Lancaster, UK, October 2008.
  19. K. B. Deep, “A socio-cognitive particle swarm optimization for multi-dimensional,” in Proceedings of the First International Conference on Emerging Trends in Engineering, pp. 355–360, Nagpur, India, July 2008.
  20. X. Shen, Y. Li, C. Chen, J. Yang, and D. Zhang, “Greedy continuous particle swarm optimisation algorithm for the knapsack problems,” International Journal of Computer Applications in Technology, vol. 44, no. 2, pp. 37–144, 2012.
  21. H. S. Lopes and L. S. Coelho, “Particle swarm optimization with fast local search for the blind traveling salesman problem,” in Proceedings of the Fifth International Conference on Hybrid Intelligent Systems, pp. 245–250, Rio de Janeiro, Brazil, November 2005.
  22. D. Karaboga and B. Basturk, “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm,” Journal of Global Optimization, vol. 39, no. 3, pp. 459–471, 2007.
  23. A. Banharnsakun, B. Sirinaovakul, and T. Achalakul, “Job shop scheduling with the best-so-far ABC,” Engineering Applications of Artificial Intelligence, vol. 25, no. 3, pp. 583–593, 2012.
  24. D. Karaboga and B. Gorkemli, “A combinatorial artificial bee colony algorithm for traveling salesman problem,” in Proceedings of the International Symposium on Intelligent Systems and Applications, pp. 50–53, Istanbul, Turkey, June 2011.
  25. D. T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, and M. Zaidi, The Bees Algorithm. Technical Note, Cardiff University, Cardiff, UK, 2005.
  26. D. Pham, E. Koc, J. Lee, and J. Phrueksanant, “Using the bees algorithm to schedule jobs for a machine,” in Proceedings of Eighth International Conference on Laser Metrology CMM and Machine, Cardiff, UK, 2007.
  27. D. T. Pham, S. Otri, A. Afify, M. Mahmuddin, and H. Al-Jabbouli, “Data clustering using the bees algorithm,” in Proceedings of the 40th CIRP International Seminar on Manufacturing Systems, Liverpool, UK, May 2007.
  28. Z. Geem, J. Kim, and G. Loganathan, “A new heuristic optimization algorithm: harmony search,” Simulation, vol. 76, 2001.
  29. X. Miao, J. Chu, L. Zhang, and J. Qiao, “An evolutionary neural network approach to simple prediction of dam deformation,” Journal of Information & Computational Science, vol. 10, pp. 1315–1324, 2013.
  30. M.-Y. Cheng and L.-C. Lien, “Hybrid artificial intelligence-based PBA for benchmark functions and Facility Layout Design optimization,” Journal of Computing in Civil Engineering, vol. 26, no. 5, pp. 612–624, 2012.
  31. W. Feng and C. Liu, “A novel particle swarm optimization algorithm for global optimization,” Computational Intelligence and Neuroscience, vol. 2016, Article ID 9482073, 9 pages, 2016.
  32. X. S. Yang and S. Deb, “Cuckoo search via Levy flights,” in Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), pp. 210–214, Coimbatore, India, December 2009.
  33. P. Civicioglu, “Transforming geocentric Cartesian coordinates to geodetic coordinates by using differential search algorithm,” Computers & Geosciences, vol. 46, no. 9, pp. 229–247, 2012.
  34. A. Askarzadeh, “Bird mating optimizer: an optimization algorithm inspired by bird-mating strategies,” Communications in Nonlinear Science and Numerical Simulation, vol. 19, no. 4, pp. 1213–1228, 2014.
  35. X. Xu, Y. Tang, J. Li, C. Hua, and X. Guan, “Dynamic multi-swarm particle swarm optimizer with cooperative learning strategy,” Applied Soft Computing, vol. 29, pp. 169–183, 2015.
  36. A. Draa, S. Bouzoubia, and I. Boukhalfa, “A sinusoidal differential evolution algorithm for numerical optimisation,” Applied Soft Computing, vol. 27, pp. 99–126, 2015.
  37. G. Sun, R. Zhao, and Y. Lan, “Joint operations algorithm for large-scale global optimization,” Applied Soft Computing, vol. 38, pp. 1025–1039, 2016.
  38. J. Wang, B. Zhou, and S. Zhou, “An improved Cuckoo search optimization algorithm for the problem of chaotic systems parameter estimation,” Computational Intelligence and Neuroscience, vol. 2016, Article ID 2959370, 8 pages, 2016.
  39. E. R. Tanweer, S. Suresh, and N. Sundararajan, “Self-regulating particle swarm optimization algorithm,” Innovative Applications of Artificial Neural Networks in Engineering, vol. 294, pp. 182–202, 2015.
  40. F. T. Zhao, Z. Yao, J. Luan, and X. Son, “A novel fused optimization algorithm of genetic algorithm and Ant colony optimization,” Mathematical Problems in Engineering, vol. 2016, Article ID 2167413, 10 pages, 2016.
  41. M. Thakur, “A new genetic algorithm for global optimization of multimodal continuous functions,” Journal of Computational Science, vol. 5, no. 2, pp. 298–311, 2014.
  42. N. Zare, H. Shameli, and H. Parvin, “An innovative natural-derived meta-heuristic optimization method,” Applied Intelligence, 2016.
  43. C. Cubukcuoglu, I. Chatzikonstantinou, M. Tasgetiren, I. Sariyildiz, and Q.-K. Pan, “A multi-objective harmony search algorithm for sustainable design of floating settlements,” Algorithms, vol. 9, no. 3, p. 51, 2016.
  44. I. Obagbuwa and A. Abidoye, “Binary Cockroach swarm optimization for combinatorial optimization problem,” Algorithms, vol. 9, no. 3, p. 59, 2016.
  45. R. M. Rizk-Allah, “Hybridization of Fruit fly optimization algorithm and Firefly algorithm for solving Nonlinear Programming problems,” International Journal of Swarm Intelligence and Evolutionary Computation, vol. 5, no. 2, 2016.
  46. Q. Zhang, A. Zhou, S. Zhao, P. Suganthan, W. Liu, and S. Tiwari, “Multiobjective optimization test instances for the CEC 2009 special session and competition,” Technical Report CES-487, University of Essex, Colchester, UK, and Nanyang Technological University, Singapore, 2009.
  47. R. M. May, “Simple mathematical models with very complicated dynamics,” Nature, vol. 261, no. 5560, pp. 459–467, 1976.
  48. R. Storn and K. Price, “Differential evolution a simple and efficient heuristic for global optimization over continuous spaces,” Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
  49. H. Gao and W. Xu, “A new particle swarm algorithm and its globally convergent modifications,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 41, no. 5, pp. 1334–1351, 2011.
  50. R. Mallipeddi, P. N. Suganthan, Q. K. Pan, and M. F. Tasgetiren, “Differential evolution algorithm with ensemble of parameters and mutation strategies,” Applied Soft Computing, vol. 11, no. 2, pp. 1679–1696, 2011.
  51. Y. Liang, Y. Liu, and L. Zhang, “An improved Artificial Bee Colony (ABC) algorithm for large scale optimization,” in Proceedings of the 2nd International Symposium on Instrumentation and Measurement, Sensor Network and Automation (IMSNA), Toronto, Canada, December 2013.
  52. X. S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, Washington, DC, USA, 2nd edition, 2011.
  53. S. C. Satapathy and A. Naik, “Modified Teaching-Learning-Based Optimization algorithm for global numerical optimization-a comparative study,” Swarm and Evolutionary Computation, vol. 16, pp. 28–37, 2014.
  54. P. N. Suganthan, N. Hansen, and J. J. Liang, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real Parameter Optimization, Nanyang Technological University, Nanyang, Singapore, 2005.
  55. M. Gitizadeh, S. Goodarzi, and A. R. Abbasi, “An efficient linear network model for transmission expansion planning based on piecewise McCormick Relaxation,” IET Generation, Transmission & Distribution, vol. 13, no. 23, pp. 5404–5412, 2019.
  56. A. R. Abbasi, M. R. Mahmoudi, and Z. Avazzadeh, “Diagnosis and clustering of power transformer winding fault types by cross-correlation and clustering analysis of FRA results,” IET Generation, Transmission & Distribution, vol. 12, no. 19, pp. 4301–4309, 2018.
  57. A. Abbasi and A. Seifi, “A novel method mixed power flow in transmission and distribution systems by using master-slave splitting method,” Electric Power Components and Systems, vol. 36, no. 11, pp. 1141–1149, 2008.
  58. A. R. Abbasi and A. R. Seifi, “Considering cost and reliability in electrical and thermal distribution networks reinforcement planning,” Energy, vol. 84, pp. 25–35, 2015.
  59. A. R. Abbasi and A. R. Seifi, “A new coordinated approach to state estimation in integrated power systems,” International Journal of Electrical Power & Energy Systems, vol. 45, no. 1, pp. 152–158, 2013.
  60. A. R. Abbasi and A. R. Seifi, “Energy expansion planning by considering electrical and thermal expansion simultaneously,” Energy Conversion and Management, vol. 83, pp. 9–18, 2014.
  61. A. R. Abbasi and A. R. Seifi, “Unified electrical and thermal energy expansion planning with considering network reconfiguration,” IET Generation, Transmission & Distribution, vol. 9, no. 6, pp. 592–601, 2015.
  62. A. Zare, A. Kavousi-Fard, A. Abbasi, and F. Kavousi-Fard, “A sufficient stochastic framework to capture the uncertainty of load models in the management of distributed generations in power systems,” Journal of Intelligent & Fuzzy Systems, vol. 28, no. 1, pp. 447–456, 2015.
  63. A. R. Abbasi and A. R. Seifi, “Simultaneous Integrated stochastic electrical and thermal energy expansion planning,” IET Generation, Transmission & Distribution, vol. 8, no. 6, pp. 1017–1027, 2014.
  64. A. R. Abbasi and A. R. Seifi, “Fast and perfect damping circuit for ferroresonance phenomena in coupling capacitor voltage transformers,” Electric Power Components and Systems, vol. 37, no. 4, pp. 393–402, 2009.
  65. A. Kavousi-Fard, S. Abbasi, A. Abbasi, and S. Tabatabaie, “Optimal probabilistic reconfiguration of smart distribution grids considering penetration of plug-in hybrid electric vehicles,” Journal of Intelligent & Fuzzy Systems, vol. 29, no. 5, pp. 1847–1855, 2015.
  66. M. Javidsharifi, T. Niknam, J. Aghaei, G. Mokryani, and P. Papadopoulos, “Multi-objective day-ahead scheduling of microgrids using modified grey wolf optimizer algorithm,” Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 2857–2870, 2019.
  67. M. Mohammadi, S. Soleymani, T. Niknam, and T. Amraee, “Stochastic multi-objective distribution automation strategies from reliability enhancement point of view in the presence of plug in electric vehicles,” Journal of Intelligent & Fuzzy Systems, vol. 36, no. 3, pp. 2933–2945, 2019.
  68. M. Saeidi, T. Niknam, J. Aghaei, and M. Zare, “Multi-objective coordination of local and centralized volt/var control with optimal switch and distributed generations placement,” Journal of Intelligent & Fuzzy Systems, vol. 36, no. 6, pp. 6605–6617, 2019.
  69. M. Ahmadi, K. Kazemi, A. Aarabi, T. Niknam, and M. S. Helfroush, “Image segmentation using multilevel thresholding based on modified bird mating optimization,” Multimedia Tools and Applications, vol. 78, no. 16, pp. 23003–23027, 2019.
  70. S. Pirouzi, J. Aghaei, T. Niknam et al., “Power conditioning of distribution networks via single-phase electric vehicles equipped,” IEEE Systems Journal, vol. 13, no. 3, pp. 3433–3442, 2019.
  71. M. J. Mokarram, T. Niknam, J. Aghaei, M. Shafie-khah, and J. P. S. Catalão, “Hybrid optimization algorithm to solve the nonconvex multiarea economic dispatch problem,” IEEE Systems Journal, vol. 13, no. 3, pp. 3400–3409, 2019.

Copyright © 2020 Feng Qian et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

