Abstract

The shuffled frog leaping algorithm (SFLA) is a heuristic method inspired by the foraging behavior of frog populations, designed around a shuffling process and the PSO framework. To increase convergence speed and effectiveness, existing improved versions focus on the local search ability within the PSO framework, which has limited the development of SFLA. We therefore propose a new scheme based on an evolutionary strategy, accomplished by quantum evolution and eigenvector evolution. In this scheme, a frog leaping rule based on quantum evolution performs the local search using two potential wells that carry historical information, while eigenvector evolution performs the global search through an eigenvector evolutionary operator. To test the performance of the proposed approach, the basic benchmark suites, CEC2013 and CEC2014, and a parameter optimization problem of SVM are used to compare against 15 well-known algorithms. Experimental results demonstrate that the proposed algorithm outperforms the other heuristic algorithms.

1. Introduction

In recent years, a large number of complex nonlinear optimization problems have been tackled with mathematical models and tools. In many such cases, traditional approaches cannot obtain good solutions, which has promoted the development of heuristic optimization techniques. Metaheuristic approaches are inspired by the characteristics of different species in nature and have progressed rapidly with theory from biology, physics, sociology, and other fields. The heuristic process maintains not just one but several solutions while searching for the exact or an approximate global optimum. To accelerate convergence and find the global solution, researchers continue to propose new and improved algorithms. These approaches can be divided into three categories: swarm intelligence (SI), evolutionary algorithms (EAs), and physical phenomena (PP) algorithms.

Swarm intelligence algorithms are inspired by the collective behaviors of natural species such as insects, animals, microorganisms, and humans. In SI algorithms, the population is a set of individuals (solutions) distributed in the search space, which cooperate through survival or competition mechanisms to solve problems. In recent years, many SI algorithms have been proposed, such as the spotted hyena optimizer (SHO) [1], forest optimization algorithm (FOA) [2], particle swarm optimization (PSO) [3], whale optimization algorithm (WOA) [4], artificial bee colony (ABC) algorithm [5], grey wolf optimizer (GWO) [6], grasshopper optimization algorithm (GOA) [7], teaching-learning-based optimization (TLBO) [8], invasive tumor growth optimization (ITGO) algorithm [9], artificial algae algorithm (AAA) [10], and Salp swarm algorithm (SSA) [11].

Evolutionary algorithms (EAs) are inspired by the Darwinian principles of nature's capability to evolve living beings well adapted to their environment. The key is to imitate an individual's evolution through mutation, selection, and crossover operations, thus producing better solutions. Typical evolutionary algorithms are genetic algorithms [12, 13], evolutionary strategies [14, 15], evolutionary programming [16], and genetic programming [17]. Building on a differential mutation mechanism, differential evolution (DE) [18] and its adaptive variants (SHADE, LSHADE, and iL-SHADE) [19–21] have achieved great success in many fields on continuous optimization problems.

Physical phenomena (PP) algorithms mimic the physical rules of certain physical phenomena. In contrast to SI or EA methods, each individual in the population moves and communicates in the search space according to physical rules. Typical PP algorithms are thermal exchange optimization (TEO) [22], the gravitational search algorithm (GSA) [23], lightning attachment procedure optimization (LAPO) [24], black hole (BH) [25], the ray optimization (RO) algorithm [26], and so on.

In recent decades, hundreds of population-based optimization algorithms have been proposed. One may ask: is it still necessary to propose a new or improved optimization algorithm? The No Free Lunch (NFL) theorem [27] answers this question: it logically proves that no metaheuristic algorithm is best suited for solving all optimization problems. That is, a particular metaheuristic may show very good results on one set of problems but poor performance on a different set. Therefore, we propose a new approach for continuous optimization problems that better balances exploitation and exploration. In addition, we want the proposed method to solve a wide range of optimization problems, whether they are posed in the original coordinate system or in a rotated coordinate system. In this paper, we study the advantages and disadvantages of the shuffled frog leaping algorithm (SFLA) [28] and related techniques in order to propose a new approach for continuous optimization problems.

The shuffled frog leaping algorithm is an SI algorithm inspired by the foraging behavior of frogs, which combines the mechanism of meme diffusion for global exploration with a PSO-style search for local exploitation. Owing to its simplicity and efficiency, it has been widely applied to many real-world optimization problems such as the traveling salesman problem (TSP) [28], the vehicle routing problem [29], the economic dispatch problem [30, 31], the 0/1 knapsack problem [32], the resource-constrained project scheduling problem [33, 34], the flow shop scheduling problem [35, 36], and the grid task scheduling problem [37]. The discrete versions of SFLA have achieved great success, but the continuous version does not perform as well, owing to its low convergence speed and premature convergence. Consequently, several improved versions of SFLA have been proposed. For instance, Xia Li et al. [38] considered historical information for the local exploration and proposed an extremal optimization process, completed by fine-grained Gaussian mutation and coarse-grained Cauchy mutation. To enhance population diversity and local exploration ability, the opposition-based learning (OBL) strategy was used in literature [39]: the initial population was produced by a uniform distribution together with opposition-based learning, which reinforces population diversity, and the frog leaping rule was improved by normal and opposition-based search. Morteza Alinia Ahandani et al. [40] diversified the search rule of SFLA by substituting a differential evolution operator for the basic frog leaping rule. Sharma S et al. [41] found that guiding each worst frog only by the best frog of its subpopulation was not enough, so the centroid of three new individuals was considered in the frog leaping rule. Hong-bo Wang et al. [42] combined historical information with information from the local and global best frogs as a substitute for the basic frog leaping search, and applied mutation based on the normal and Cauchy distributions to the globally best frog and the worst frog. Liu C et al. [43] used chaotic opposition-based learning for population initialization, and then an adaptive nonlinear inertia weight and a Gaussian-mutation-based perturbation strategy to balance exploration and exploitation. Paper [44] presents a grouped SFLA for continuous optimization problems, combined with the cloud model's transformation between qualitative and quantitative representations. Deyu Tang et al. [45] proposed a lévy flight mutation operator for the frog leaping rule and an interaction learning rule for the global search. Wenjuan Li et al. [46] used quantum movement equations to search for the optimal location through the co-evolution of a quantum frog colony.

As mentioned above, the shuffled frog leaping algorithm has been successfully applied to many combinatorial optimization problems, but it is not efficient for continuous optimization problems because of a weak balance between exploration and exploitation: the exploitation ability (local search) is weak, and a dedicated exploration operator (global search) is missing. Using only the shuffled strategy and guidance by the best local frog is not enough for complex problems such as multimodal optimization. Researchers have therefore used the opposition-based learning strategy [39], chaotic strategies [43], and so on to enhance population diversity, while differential operators [40] and strategies that retain historical information [38] have been adopted for exploration. In addition, the first quantum-inspired SFLA based on Q-bits and Q-gates [46] was proposed, which can be seen as a quantum computing approach; however, the quantum simulation approach has not been utilized for SFLA. So far, the improved versions of SFLA are still based on the PSO framework. Moreover, some improved SFLA algorithms succeed on optimization problems posed in the original coordinate system but fail on the same problems in a rotated coordinate system, or vice versa. Therefore, we attempt to establish a new framework of SFLA based on quantum evolution and eigenvector evolution, achieving the balance between exploitation (quantum evolution) and exploration (eigenvector evolution) so as to solve more optimization problems, whether posed in the original or a rotated coordinate system.

The major contributions are as follows. First, we propose a two-stage search framework of SFLA based on the quantum evolution and eigenvector evolution for the balance between the exploitation and the exploration. Second, the exploitation is achieved by a quantum evolutionary operator with historical information for the frog leaping rule. Third, the exploration is achieved by the adaptive eigenvector evolutionary operator.

The rest of the paper is organized as follows. In Section 2, the shuffled frog leaping algorithm is introduced. Related work is reviewed and discussed in Section 3. In Section 4, we propose the evolutionary frog leaping algorithm. Experimental results and analysis are shown in Section 5. Finally, conclusions and further discussions are given in Section 6.

2. Shuffled Frog Leaping Algorithm (SFLA)

The shuffled frog leaping algorithm is a heuristic approach inspired by frog foraging behavior, designed according to the memetic evolution principle and the PSO framework. Suppose each frog has thought and lives by meme (culture) information. The frog population is divided into different memeplexes (communities or groups) according to common thought (or meme). In each global iteration step, the memeplexes are re-formed by the shuffled process according to the memetic evolution principle, which can be seen as the global exploration process. In each submemeplex, the frog leaping process is achieved by a simplified PSO, which can be seen as the local exploitation process. In each local step, the worst frog and the best frog are identified, and the worst frog is guided by the best frog to find the best food. The memetic evolution process and the frog leaping process are completed alternately, corresponding to the balance between exploration and exploitation. The SFLA can be described as follows. First, the initial population is generated randomly and divided into submemeplexes after a descending sort. The shuffled process can be represented as equation (1): ps = m × n, where ps is an integer indicating the population size, m indicates the number of memeplexes, and n indicates the number of frogs in each submemeplex. The fitness f(i) of the ith frog is evaluated, and the frogs are sorted in descending order to form m memeplexes H1, H2, …, Hb, …, Hm, constructed as Hb = {X(b + m(k − 1)) | k = 1, …, n}, where Hb is a set of solutions in a memeplex and each X is a solution vector.
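As a minimal sketch of the shuffled process of equation (1), the dealing of sorted frogs into memeplexes can be written as follows (the function name is illustrative, not from the paper):

```python
import numpy as np

def partition_memeplexes(fitness, m):
    """Deal ranked frogs into m memeplexes as in equation (1).

    Frogs are sorted by fitness (best first); frog 1 goes to memeplex 1,
    frog 2 to memeplex 2, ..., frog m + 1 back to memeplex 1, so every
    memeplex receives a comparable spread of solution quality, matching
    Hb = {X(b + m(k - 1)) | k = 1, ..., n}.
    """
    order = np.argsort(-np.asarray(fitness))  # descending sort, best first
    return [order[b::m] for b in range(m)]

# 12 frogs, 3 memeplexes of 4 frogs each (returned as 0-based indices)
groups = partition_memeplexes([12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1], 3)
```

With 12 frogs and m = 3, the third memeplex receives frogs 3, 6, 9, and 12 (0-based indices 2, 5, 8, 11), which matches the purple submemeplex example in Figure 1.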

Second, the frogs are updated in each submemeplex according to equation (2): Xnew = Xworst + rand · (Xbest − Xworst), where Xworst represents the position of the worst frog, Xbest represents the position of the best frog, rand denotes a random number of the uniform distribution in (0, 1), and Xworst and Xbest belong to the same submemeplex. If the fitness of Xnew is better than that of Xworst, then Xworst is updated. Otherwise, Xbest is replaced by the position of the globally best frog Xg; if the fitness of the new candidate is better than that of Xworst, then Xworst is updated. If there is still no improvement, a random feasible solution is generated to replace Xworst. This updating step can be seen as a local search step, which continues until the number of iterations within each memeplex reaches a predefined threshold. The local search step and the shuffling process alternate until a predefined convergence criterion is satisfied. The time complexity of finding the best and worst frogs in a submemeplex is O(n), where n is the number of frogs in each submemeplex. Thus, the total time complexity of SFLA is O(G · m · k · (n + D)), where ps = m × n is the population size, m is the number of submemeplexes, k is the local iteration number in each submemeplex, n is the number of frogs in each submemeplex, G is the number of total iterations, and D is the dimension. Pseudo code of SFLA is shown in Algorithm 1.

(1)The pseudo code of SFLA
(2)FES = 0; //fitness evaluation counter
(3)Randomly initialize the population and evaluate the fitness values f(Xk), for k = 1, 2, …, ps.
(4)FES = FES + ps;
(5)While FES <= MAX_FES //the max fitness evaluation number
(6) Sort the population (ps = m × n) in descending order of fitness, where m is the number of memeplexes, n is the number of frogs in each memeplex, and k is the local iteration number.
(7) Get the globally best frog Xg;
(8) For i = 1 to m do //m memeplexes
(9)  For j = 1 to k do //k is the local iteration number in the ith memeplex
(10)  Get the worst and best frogs Xworst, Xbest in the ith memeplex; //n frogs in the ith memeplex
(11)  Xnew = Xworst + rand · (Xbest − Xworst);
(12)  FES = FES + 1;
(13)  If f(Xnew) < f(Xworst)
(14)   Xworst = Xnew;
(15)  Else //f(Xnew) >= f(Xworst)
(16)   Xnew = Xworst + rand · (Xg − Xworst);
(17)   FES = FES + 1;
(18)   If f(Xnew) < f(Xworst)
(19)    Xworst = Xnew;
(20)   Else //still no improvement
(21)    Xworst = lb + rand(1, D) · (ub − lb); //lb, ub denote the search boundary
(22)   End If
(23)  End If
(24) End for //k local iterations
(25)End for //m memeplexes
(26)End while
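The local update inside Algorithm 1 can be sketched in Python as follows (a minimal sketch; the function names and the sphere objective are only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def leap(x_worst, x_best, x_g, f, lb, ub):
    """One local update of Algorithm 1: try a move toward the memeplex
    best, then toward the global best, then fall back to a random
    feasible position inside [lb, ub]."""
    cand = x_worst + rng.random(x_worst.size) * (x_best - x_worst)
    if f(cand) < f(x_worst):
        return cand
    cand = x_worst + rng.random(x_worst.size) * (x_g - x_worst)
    if f(cand) < f(x_worst):
        return cand
    return lb + rng.random(x_worst.size) * (ub - lb)  # random restart

sphere = lambda x: float(np.sum(x * x))
new = leap(np.array([3.0, 3.0]), np.array([1.0, 1.0]),
           np.array([0.0, 0.0]), sphere, -5.0 * np.ones(2), 5.0 * np.ones(2))
```

Note that the first two moves sample inside the axis-aligned rectangle spanned by the worst frog and its guide, which is exactly the geometric picture used later for the first potential well.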

The balance between exploitation and exploration is a core task for any metaheuristic approach. Some approaches use one operator with several components to achieve this balance, such as PSO: its search operator combines a global search component, which handles exploitation, and a historical search component, which handles exploration. Other approaches use two or more operators, such as ABC, CS, and TLBO. In ABC, the employed-bee and scout-bee operators handle exploration while the onlooker-bee operator handles exploitation. In CS, the lévy flight operator performs exploitation through a mutation method and the nest selection operator performs exploration. In TLBO, the teaching operator performs exploitation and the learning operator performs exploration. SFLA, however, has only one operator, which performs exploitation; an explicit exploration operator is missing. Therefore, a two-stage framework for SFLA can be considered.

3. Related Work

3.1. Quantum Simulation (QS) Methods

In 2004, J. Sun et al. [47] proposed the first quantum-behaved particle swarm optimization (QPSO), which simulates a quantum evolutionary process within the particle swarm optimization framework. Since then, many improved QS algorithms have been developed, such as the weighted mean best position [48], a Gaussian probability distribution for the local attractor [49], a group search optimizer [50], diversity control strategies [51, 52], a cooperative mechanism [53], a chaotic mutation operator [54], a decentralized strategy with a cellular structured population [55], a memetic algorithm [56], a two-stage search method [57], and a collaborative attractor [58]. More and more improved versions of QPSO are based on the basic quantum simulation model, whose quantum search operator is derived from the Schrödinger equation and the Monte Carlo method.

The state function ψ(r, t) evolves in time according to the Schrödinger equation, iħ ∂ψ(r, t)/∂t = Ĥψ(r, t), in which Ĥ is the Hamiltonian operator and t denotes the time.

For one particle of mass m in a potential field V(r), the Hamiltonian can be represented as Ĥ = −(ħ²/2m)∇² + V(r), where ħ denotes the reduced Planck constant.

Using the Monte Carlo method, we can obtain equation (6) from the Delta potential well model: X_i(t + 1) = P ± α · |P − X_i(t)| · ln(1/u), where α denotes the search parameter, P is the potential well, X_i denotes particle i in D-dimensional space, and u denotes a random number of the uniform distribution in (0, 1).

Considering convergence, X_i(t) → P when t → ∞, where t denotes the time. Quantum simulation has not been used for SFLA, so we attempt to use quantum evolution as the exploitation component of SFLA.
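For concreteness, the delta-potential-well sampling of equation (6) can be sketched as follows (a sketch of the standard QPSO update; the helper name is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def qpso_step(x, p, alpha):
    """Sample a new position around the potential well p (equation (6)).

    The characteristic length L = 2 * alpha * |p - x| shrinks as x
    approaches p, so repeated sampling contracts toward the well,
    matching the convergence x(t) -> p as t -> infinity noted above.
    """
    u = rng.random(x.size)                        # u ~ U(0, 1)
    L = 2.0 * alpha * np.abs(p - x)               # characteristic length
    sign = np.where(rng.random(x.size) < 0.5, 1.0, -1.0)
    return p + sign * (L / 2.0) * np.log(1.0 / u)

x_new = qpso_step(np.array([2.0, -1.0]), np.array([0.5, 0.5]), 0.75)
```

The ln(1/u) factor gives occasional long jumps away from the well, which is what distinguishes this sampling from the bounded rectangle of the basic frog leaping rule.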

3.2. Eigenvector Approach

Eigenvector information of the covariance matrix of a data set can rotate the coordinate system; it underlies the multivariate statistical method of principal component analysis (PCA) [59], which can reduce the dimension of multivariate data to some extent. The inspiration of PCPSO derives from a methodology known as the Lagrange point of view [60] for creating a dynamic coordinate system and flying the particles within it. Chu et al. [61] introduced principal component analysis into PSO to remedy the problem caused by the absorbing bound-handling approach. Inspired by the Hamiltonian Monte Carlo (HMC) method, Kuznetsova et al. [62] proposed the PCA-based stochastic optimization (PCA-SO) algorithm. Xinchao Zhao et al. [63] improved PSO with PCA and a line search method. All these approaches are based on the principle of PCA, and none uses the eigenvectors directly within the optimization operator. In 2015, literature [64] first proposed an eigenvector-based crossover operator for differential evolution (DE), improving the performance of non-rotationally-invariant crossovers by rotating the coordinate system to make the function landscape pseudo-separable. In 2016, Noor H. Awad et al. [65] used the eigenvector-based crossover operator and other methods to improve LSHADE, proposing the LSHADE-EpSin algorithm. However, the eigenvector search method is sensitive to the parameter setting. In this paper, we attempt to build a simple eigenvector evolutionary operator without parameter setting as the exploration component of SFLA.

To sum up, SFLA embodies the idea of balancing exploitation and exploration. However, the exploitation by the frog leaping rule is guided only by the best frogs, which speeds up convergence but easily falls into local optima. More importantly, a real exploration operator in SFLA is missing. In the quantum simulation approach, the individual is guided by the potential well rather than by the best individual, which enhances population diversity, so it can be considered as the exploitation component of SFLA. The eigenvector crossover operator has made differential evolution rotationally invariant on complex optimization problems; however, a true eigenvector search approach without parameter setting has not been proposed. If such an eigenvector evolutionary operator can be constructed, it can serve as the exploration operator of SFLA.

4. The Proposed Approach

In SFLA, frogs have only the jumping behavior with which to spread information, which is not enough to simulate their social behaviors. Indeed, interactive learning within a population is common in societies, so an interactive learning characteristic can be modeled to simulate the social behaviors of frogs. In this paper, we propose a two-stage search framework of SFLA for the balance between exploitation and exploration. In the first search stage, the local search is performed by the quantum evolutionary operator instead of the PSO operator, which simulates the jumping behavior of frogs in the quantum space. In the second search stage, the global search is performed by the adaptive eigenvector evolutionary operator instead of only the shuffled operator, which simulates the interactive learning characteristic of frogs.

4.1. Quantum Evolutionary Operator

The quantum evolutionary operator is applied within the submemeplexes formed by the shuffled process of equation (1); it can therefore be considered a local search process. According to the quantum simulation model introduced above, the quantum evolutionary operator is achieved by the Monte Carlo method according to the potential well model. In equation (6), considering convergence, X(t) → P when t → ∞, where P is considered as the potential well. The basic quantum evolutionary operator uses only one potential well, which accelerates the search but easily falls into a local optimum. We therefore propose a second potential well with memory to enhance the search ability of the quantum evolution. The new search operator is given by equation (7), where P1 denotes the first potential well, P2 denotes the second potential well, α is the search parameter, and ps is the population size. rand denotes a random number of the uniform distribution in [0, 1]. X is a vector in D-dimensional space, and P1 and P2 are also vectors in D-dimensional space.

In SFLA, the frog leaping rule of equation (2) can be considered as the first potential well P1, because the sampled location lies in a rectangular region whose diagonal connects the worst frog and the best frog. It can be represented as equation (8): P1 = Xworst + rand · (Xbest − Xworst).

The second potential well can be represented by equations (9) and (10), where C indicates the mean of the positions of the frogs in a submemeplex (a group); it is a local center.

Equation (10) is a recurrence formula in which j is an integer increased from 1 to n, with initial value 1; n is the number of frogs in a submemeplex, which equals the number of iterations of the local search process. The initial P2 is a zero vector.
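The exact coefficients of recurrence (10) are not reproduced in this excerpt, so the following is only a hypothetical reading: a running mean of the local centroids over the local iterations j = 1, …, n, which retains the history in the way the text describes (the function name and the mixing rule are assumptions, not the paper's formula):

```python
import numpy as np

def memory_centroid(p2_prev, submemeplex, j):
    """Hypothetical sketch of the memory-carrying second potential well.

    submemeplex: array of shape (n, D); its mean is the local centroid C
    of equation (9). p2_prev starts as a zero vector and is mixed with
    the current centroid over local iterations j = 1, 2, ..., n, so the
    well can drift outside the current quadrilateral region.
    """
    c = np.mean(submemeplex, axis=0)      # local centroid, equation (9)
    return p2_prev + (c - p2_prev) / j    # incremental running mean

sub = np.array([[1.0, 1.0], [3.0, 3.0]])
p2 = memory_centroid(np.zeros(2), sub, 1)
```

Any recurrence of this shape shares the property the paper relies on: the well blends current and historical centroids rather than tracking the current group alone.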

Equation (11) is a linear function that controls the search scope. FES is the current number of fitness evaluations, and MAX_FES is the maximum number of fitness evaluations.

Figure 1 shows an instance of the quantum evolutionary process. First, the population is divided into different submemeplexes according to equation (1), as shown in the left section of Figure 1. Twelve frogs are divided into three submemeplexes (marked in green, yellow, and purple), with four frogs in each submemeplex. The 12 frogs X1, X2, X3, …, X12 are arranged in descending order of fitness value. For example, the ordinary frog X6 is assigned to the purple submemeplex according to equation (1), since its index is b + m(k − 1) = 3 + 3 × (2 − 1) = 6 (b = 3, k = 2). X3 denotes the best frog, and X12 the worst frog, in the purple submemeplex. The quantum evolutionary operator is applied to the worst frog in each submemeplex. The search process can be observed in the search space (the right section of Figure 1). The 12 balls in the search space correspond to the 12 rectangles in the fitness space for the 12 frogs. The red ball with vertical lines denotes the first potential well P1, the blue ball with horizontal lines denotes the second potential well, and the grey balls with horizontal lines denote its history. The improved second potential well P2 is obtained from the current local centroid (blue ball) and the historical ones (grey balls) by the recurrence formula of equation (10). The first potential well P1 is sampled in the rectangular region whose diagonal connects Xworst and Xbest according to equation (8); in fact, this is the basic search operator of the shuffled frog leaping algorithm. Under it alone, the worst frog runs toward the position of the best frog and easily converges to a local optimum. In the quantum evolutionary operator, however, the worst frog is guided not only by the first potential well but also by the second one. The basic second potential well is the centroid of the local submemeplex according to equation (9).
It can be observed that this centroid guides the worst frog within the quadrilateral region formed by the frogs X3, X6, X9, and X12 in Figure 1, which enhances the diversity of the population but still cannot ensure that the worst frog escapes the local optimum. Therefore, we propose an improved potential well that retains the historical information of each search step. This potential well is produced from the many historical centroids (grey balls) and the current one (blue ball), which allows it to move outside the quadrilateral region (see equation (10) and Figure 1). The search of the worst frog Xworst is thus guided by both the first and the second potential wells. In contrast to the old search rule of equation (2), Xworst does not fall into the local optimum (blue region) as easily; in other words, it has more opportunities to move toward the global optimum (red region), guided by the different potential wells rather than by the best frog Xbest alone.

4.2. Adaptive Eigenvector Evolutionary Operator

Interactive learning behavior takes place in the whole population rather than in a local submemeplex; therefore, it is a global search process. It can be represented as equation (12), where Xr1, Xr2, and Xr3 are three vectors in D-dimensional space corresponding to three frogs, and r1, r2, and r3 are three different integers drawn from the set {1, 2, …, ps}, where ps is the population size. Yi is a solution vector, which is updated by the difference between Xr2 and Xr3. rand denotes a random number of the uniform distribution in [0, 1]. Xmean is the mean of all the Xi in the population, and Ymean is the mean of all the Yi in the population.

To compute the eigenvector basis, we factorize the covariance matrix C into a canonical form, C = B Λ Bᵀ (equation (13)), where B is a square matrix (D rows and D columns) whose bth column is the bth eigenvector of C, and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues. The eigenvector evolutionary operator can then be represented as follows: Xi (or Yi) denotes an individual, a vector with one column and D rows; B is a square matrix with D rows and D columns, where D is the dimension of a solution vector. Thus Bᵀ Xi (or Bᵀ Yi) maps an individual into the eigenvector basis, again yielding a vector with one column and D rows. From the index set {1, 2, 3, …, D}, r rows are selected at random for the individual, where r is itself a randomly selected integer from {1, 2, 3, …, D}.
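A minimal sketch of this factorization and row exchange follows (the function name and the choice of donor are illustrative assumptions; the paper's equations (13)–(16) define only the basis change and the r-row selection):

```python
import numpy as np

rng = np.random.default_rng(2)

def eigen_crossover(x, donor, population):
    """Exchange coordinates in the eigenvector basis (cf. eqs. (13)-(16)).

    The population covariance C is factorised as C = B diag(lambda) B^T;
    x and the donor are rotated into the eigenvector basis by B^T, r
    randomly chosen rows are copied over from the donor there, and the
    result is rotated back by B, making the exchange rotationally
    invariant in the natural basis.
    """
    C = np.cov(population, rowvar=False)     # D x D covariance matrix
    _, B = np.linalg.eigh(C)                 # columns of B = eigenvectors
    y, d = B.T @ x, B.T @ donor              # into the eigenvector basis
    r = rng.integers(1, x.size + 1)          # r drawn from {1, ..., D}
    rows = rng.choice(x.size, size=r, replace=False)
    y[rows] = d[rows]                        # exchange r rows
    return B @ y                             # back to the natural basis

pop = rng.normal(size=(20, 4))
child = eigen_crossover(pop[0].copy(), pop[1], pop)
```

Since C is symmetric, `numpy.linalg.eigh` is the appropriate factorization, and B is orthonormal, so the back-rotation B @ y exactly inverts Bᵀ.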

When the solution is updated in the eigenvector basis, the updating behavior becomes rotationally invariant in the natural basis. To reduce the risk of ineffective behavior of the rotationally invariant operator, we introduce an adaptive selection strategy over the original operator and the eigenvector evolutionary operator (equation (17)), where rand is a random number of the uniform distribution in [0, 1] and p is a self-adapting selection parameter, computed as follows:

The initial values are p0 = 0.5 and ps0 = 2. p1 denotes the number of successes of the original operator of equation (12), and p2 denotes the number of successes of the eigenvector evolutionary operator of equation (16).
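The recurrence for p (equation (18), line 25 of Algorithm 2) is simple enough to state directly; a minimal sketch:

```python
def update_selection_prob(p, p1, p2, ps0=2.0):
    """Self-adapting selection probability (equation (18)).

    p is pulled toward the recent success share p1 / (p1 + p2): when the
    original operator of equation (12) succeeds more often, p grows and
    that operator is chosen more frequently, and vice versa. With
    ps0 = 2, the update is an equal blend of the old p and the share.
    """
    return p * (1.0 - 1.0 / ps0) + (1.0 / ps0) * (p1 / (p1 + p2))

p = update_selection_prob(0.5, 3, 1)  # original operator succeeding more
```

Starting from p0 = 0.5 with successes p1 = 3 and p2 = 1, one step moves p to 0.625, i.e., halfway toward the success share 0.75.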

Here, we illustrate our approach on a shifted rotated expanded Scaffer F6 function in two-dimensional space (Figures 2 and 3). Figure 3 shows the major characteristic of the eigenvector basis: the eigenvector evolution searches in a rotationally invariant manner. Considering the complexity of real-world optimization problems, the original operator and the eigenvector evolutionary operator run alternately according to their success rates, via the adaptive selection mechanism of equations (17) and (18). Pseudo code of the proposed approach is shown in Algorithm 2.

(1)The pseudo code of EFLA
(2) Parameter setting: population size ps, m, n, max fitness evaluation number (MAX_FES), etc.
(3) Initialize a population of ps frogs with random solutions and compute the fitness.
(4) While FES <= MAX_FES
(5) //Local search process (exploitation)
(6) Sort the population in descending order according to the fitness values;
(7) For (i = 1; i <= m; i++) //m is the number of memeplexes
(8)  For (j = 1; j <= n; j++) //n is the number of frogs in one submemeplex
(9)   Get the best frog Xbest and the worst frog Xworst in one memeplex;
(10)  Compute the two potential wells and the search length by equations (8), (9), (10), and (11);
(11)  Update the position of the worst frog by equation (7);
(12)  End for
(13) End for
(14) //Global search process (exploration)
(15) For (i = 1; i <= ps; i++)
(16)  Obtain the new position Yi of each frog by equation (12); //Yi is a vector
(17) End for
(18) Use equations (13), (14), (15), (16), and (17); //the eigenvector-basis-based search operator
(19) For (i = 1; i <= ps; i++)
(20)  If rand < p
(21)   Xnew1 = Yi; //standard search, equation (12)
(22)  Else //B is a D × D matrix
(23)   Xnew2 = B(BᵀYi); //eigenvector search (equations (15), (16), and (17))
(24)  End If
(25)  p = p(1 − 1/ps0) + (1/ps0)(p1/(p1 + p2)); //equation (18), initial p0 = 0.5, ps0 = 2
(26)  //p1 is the number of successes by Xnew1 according to the fitness value
(27)  //p2 is the number of successes by Xnew2 according to the fitness value
(28) End for
(29)End While

5. Experimental Results and Analysis

To test the performance of the proposed approach, we use a real-world parameter optimization problem of SVM and the 30 benchmark functions recommended by literature [45], CEC2013 [66], and CEC2014 [67]. Fifteen functions (F1–F15) are basic optimization problems, nine functions (F16–F24) come from CEC2013, and six functions (F25–F30) come from CEC2014. Different problem types (unimodal, multimodal, rotated, and composition) are considered. To verify the effectiveness of the proposed algorithm, 12 well-known algorithms, including NNA [68], LAPO [69], GbABC [70], SFLA [71], SCA [72], SSA [11], GWO [6], CMAES [73], WQPSO [48], TSQPSO [57], SaDE [74], and AAA [10], are compared on the benchmark suites. Each algorithm is run independently 30 times on the same machine in MATLAB 2009R. For fair comparison, the fitness evaluation number (FES) is used instead of the number of iterations. The maximum evaluation number (MAX_FES) is set as a multiple of the dimension D. We record all the fitness evaluations and the error values f(x) − f(x∗), where f(x∗) denotes the global optimum value. Basic statistics such as the mean and standard deviation are used. In addition, we adopt the T-test [75], Wilcoxon signed-rank test [76], and Friedman test [76].

Wilcoxon's test is used in our experimental study. The first step is to compute R+ and R− for the comparisons between EFLA and the rest of the algorithms: R+ is the sum of ranks for the problems on which the first algorithm outperformed the second, and R− is the sum of ranks for the opposite. Once these have been obtained, the associated p values can be computed. The null hypothesis H0 is used for the Wilcoxon signed-rank tests in this paper, with significance level α = 0.05. If any test produces a p value smaller than or equal to the significance level, the H0 hypothesis for that test is rejected and the alternative hypothesis is accepted.

The Friedman test is a nonparametric analog of the parametric two-way analysis of variance, which can be used to detect whether significant differences exist among the results of the algorithms. Within statistics, hypothesis testing draws inferences about one or more populations from the given results. The null hypothesis for Friedman's test states equality of results between the algorithms; the alternative hypothesis is its negation, so it is nondirectional. The significance level used to test the H0 hypothesis is α = 0.05. If any test produces a p value smaller than or equal to the significance level, the H0 hypothesis for that test is rejected and the alternative hypothesis is accepted. The test ranks the algorithms for each problem separately: the best performing algorithm ranks 1, the second best ranks 2, and so on.
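The per-problem ranking that the Friedman test starts from can be sketched with numpy alone (no tie handling; the function name is illustrative, and a full test such as scipy.stats.friedmanchisquare would additionally compute the chi-square statistic and p value):

```python
import numpy as np

def friedman_ranks(errors):
    """Average rank of each algorithm across problems.

    errors: array of shape (problems, algorithms), lower is better. On
    each problem the best algorithm gets rank 1, the second best rank 2,
    and so on; ties are not handled in this sketch.
    """
    ranks = np.argsort(np.argsort(errors, axis=1), axis=1) + 1.0
    return ranks.mean(axis=0)

# 2 problems x 3 algorithms
mean_ranks = friedman_ranks(np.array([[0.1, 0.3, 0.2],
                                      [0.5, 0.4, 0.6]]))
```

Here algorithm 1 ranks 1 then 2 and algorithm 3 ranks 2 then 3, giving mean ranks 1.5 and 2.5 respectively; these mean ranks are the quantities reported in Friedman-test tables.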

The details of the 30 benchmark functions are shown in Table 1, and the parameter settings of the 13 algorithms are shown in Table 2.

5.1. Experimental Results and Analysis on the First Test Suite for Low Dimension (D = 30)

In this test, we compare the proposed algorithm with 12 well-known algorithms on the 30 benchmark functions in low dimension (D = 30). Tables 3 and 4 show the comparison between EFLA and the 12 algorithms, including two quantum-behaved PSO variants (TSQPSO and WQPSO), seven SI algorithms (NNA, LAPO, SFLA, SSA, SCA, GWO, and AAA), an improved ABC algorithm (GbABC), and two evolutionary algorithms (SaDE and CMAES). The mean and standard deviation of the error values obtained by the 13 algorithms are listed in Tables 3 and 4, together with the performance rank of each algorithm. The character ‘R’ in the first line of Tables 3 and 4 abbreviates ‘rank’; each column headed ‘R’ after an algorithm gives that algorithm’s rank among the 13. It can be observed that the proposed algorithm is a competitive SFLA variant on the first test suite. According to the ‘No Free Lunch’ theorem [27], no algorithm can outperform all others on every aspect or every kind of problem, and this is also reflected in our results. Over the 30 functions, EFLA ranks first 10 times, second 7 times, third 5 times, and fourth and fifth 2 times and once, respectively. Regarding solution accuracy on the seven unimodal functions (F1, F2, F3, F4, F16, F17, and F18), EFLA, CMAES, and SaDE each rank first 2 times, LAPO ranks first 4 times, and GWO ranks first once (ties allow the counts to exceed seven). In particular, EFLA reaches the global optimum on the sphere function (F1) and Schwefel’s problem 1.2 (F3). On many multimodal functions, EFLA shows better exploration ability than the others: the Rastrigin function (F5) and Griewank function (F7) have many local optima, and EFLA still obtains the global optimum.
Notably, EFLA achieves better solution accuracy than the other 12 algorithms on hybrid function 1 (N = 3) and hybrid function 2 (N = 3), which are complex multimodal and nonseparable problems. EFLA also achieves better solution accuracy than LAPO, TSQPSO, WQPSO, GbABC, NNA, SaDE, SFLA, SSA, CMAES, SCA, AAA, and GWO on F1, F3, F5, F7, F10, F11, F23, F26, F27, and F29.

For a more thorough analysis, ‘robustness’ in this paper refers to the search stability of an algorithm under different conditions (such as rotated versus unrotated test functions): a robust algorithm performs well whether the optimization problem is posed in a rotated or an unrotated coordinate system. The 30 optimization problems in this suite comprise two types: Type 1 (F1–F15) consists of the basic unrotated benchmark functions, and Type 2 (F16–F30) of the rotated, composition, and hybrid problems. Tables 3 and 4 show that WQPSO, CMAES, SaDE, and SSA achieve better solution accuracy on Type 2 but poor accuracy on Type 1. Conversely, LAPO, SCA, NNA, TSQPSO, GbABC, and GWO achieve better accuracy on Type 1 but poor accuracy on Type 2. In comparison, EFLA achieves better solution accuracy on more optimization problems of both types, which indicates better robustness.

The T-test is used to compare EFLA with each of the other 12 algorithms on the 30 optimization problems. In these experiments, a two-tailed test with a significance level of 0.05 is adopted, and the p values and t-values are recorded in Tables 5 and 6. Results where EFLA is better than the other algorithm are shown in bold; ‘−’ means that EFLA and the other algorithm obtain almost the same global optimum (so the t-test cannot be computed). The last lines of Tables 5 and 6 give the numbers of ‘B,’ ‘N,’ and ‘W’: ‘B’ means that EFLA is significantly better than the other algorithm, ‘N’ means that there is no significant difference, and ‘W’ means that the other algorithm is significantly better than EFLA. The average excellent rate of EFLA against the other 12 algorithms over the 30 functions is 71.11%, computed as the following equation:

It means that the average performance of EFLA is good, although it is not always the best on every function with respect to the 12 algorithms.
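For reference, the two-sample t statistic underlying these pairwise comparisons can be sketched as below. This is an illustrative version assuming equal sample sizes (30 runs per algorithm) and pooled variances, not the authors' code:

```python
import math

def t_statistic(a, b):
    """Two-sample t statistic for equal sample sizes:
    t = (mean(a) - mean(b)) / sqrt((var(a) + var(b)) / n),
    using unbiased sample variances."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    return (ma - mb) / math.sqrt((va + vb) / n)

# Toy example with hypothetical per-run error values of two algorithms:
t = t_statistic([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```

A negative t (with a small p value) indicates that the first algorithm obtains significantly smaller errors, i.e., the outcome counted as ‘B’ for EFLA.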

Table 7 gives more detailed comparison results from Wilcoxon’s test: ‘+’ means that EFLA is better than the other algorithm, ‘=’ means that EFLA obtains equal results, and ‘−’ means that EFLA is worse. The last three columns show the numbers of wins (w), equalities (e), and losses (l), together with the p values and z-values. Table 7 shows that the average excellent rate of EFLA against the other 12 algorithms over the 30 functions is 80.55%, computed as follows:

In addition, Friedman’s test is used. Table 8 shows the average ranks of the 13 algorithms; EFLA performs better than the other 12 algorithms, with the minimal average rank value (3.12). Table 9 shows that the differences among the 13 algorithms are significant, with p value = 9.43E−11 (<0.05).

To compare the convergence speed of the 13 algorithms, convergence curves are drawn in Figure 4. EFLA outperforms the 12 algorithms on the 6 selected optimization problems, including unrotated multimodal functions (F3, F7, and F11) and rotated multimodal functions (F23, F27, and F29). The proposed algorithm converges faster than the others over the whole iteration process for F7, F28, and F29. For the rotated Schwefel’s function (F23), SSA converges faster than EFLA in the early iterations; however, in the late iterations, EFLA reaches a better solution than SSA. This suggests that SSA has strong exploitation but weak exploration ability, whereas EFLA achieves a better balance between exploitation and exploration.

5.2. Scalability Analysis on the Second Test Suite for High Dimension (D = 100)

The performance of many heuristic optimization algorithms decreases drastically as the problem scale grows. In this experiment, we conduct a scalability analysis of the 13 algorithms on 100-D functions, including two unrotated functions (F1 and F3) and four rotated multimodal functions (F20, F23, F28, and F29). Experimental settings are the same as those described in Table 2. Table 10 reports the mean and standard deviation of the error values obtained by the 13 algorithms, with the best results marked in bold. The character ‘R’ in the first line of Table 10 abbreviates ‘rank’; each column headed ‘R’ after an algorithm gives that algorithm’s rank among the 13. Grey shading of a cell denotes that EFLA performs better than that algorithm. EFLA ranks first once, second twice, and third three times. For each of the other 12 algorithms, there are at least four optimization problems on which EFLA outperforms it. For hybrid function 4 (N = 4) (F28), EFLA obtains better accuracy than all 12 algorithms (LAPO, TSQPSO, WQPSO, GbABC, NNA, SaDE, SFLA, SSA, CMAES, SCA, AAA, and GWO). For the sphere function (F1), EFLA obtains better accuracy than all of them except GWO.

In particular, for the multimodal functions, the number of local optima increases drastically with the problem dimension, which makes some algorithms vulnerable to premature convergence. As shown in Tables 3, 4, and 10, EFLA is robust and maintains a good global search ability in such cases on the rotated Schwefel’s function (F23) and hybrid function 4 (N = 5) (F29), whereas the performance of the other algorithms deteriorates severely. Figure 5 shows the convergence curves of the 13 algorithms for F28 and F29. EFLA converges faster than the 12 algorithms on F28, but not on F29. Although EFLA converges more slowly than some algorithms, such as LAPO and GbABC, on F29, it achieves the best fitness value, or nearly the same as the best, in the last iterations.

In addition, Friedman’s test is used. Table 11 shows the average ranks of the 13 algorithms; EFLA performs better than the other 12 algorithms, with the minimal average rank value (2.50). Table 12 shows that the differences among the 13 algorithms are significant, with p value = 0.001 (<0.05).

5.3. Parameter Sensitivity Analysis

Three parameters of the proposed approach, namely popsize, m, and n, should be tuned: m is the number of submemeplexes and n is the number of frogs in each submemeplex, so the population size satisfies popsize = m × n. Twelve combinations of the three parameters (popsize, m, n) are considered. EFLA with each of the 12 parameter settings is run 30 times on the 30 benchmark optimization problems. Table 13 reports the mean and standard deviation of the error values over the 30 independent runs for each combination, with the winner in bold. The character ‘R’ in the first line of Table 13 abbreviates ‘rank’; each column headed ‘R’ after an EFLA variant gives its rank among the twelve parameter combinations, and the last line of Table 13 counts the winners for each combination. To analyze the impact of the population size, the twelve combinations are arranged into six groups: popsize = 20 (EFLA01, EFLA02, EFLA03, and EFLA04), popsize = 30 (EFLA05), popsize = 40 (EFLA06), popsize = 50 (EFLA07), popsize = 60 (EFLA08), and popsize = 100 (EFLA09, EFLA10, EFLA11, and EFLA12). Generally speaking, the performance of many metaheuristic algorithms is affected by the population size: a larger population enhances diversity but reduces convergence speed. Therefore, the population size of EFLA, like that of other algorithms, should be tuned to the optimization problem at hand. In Table 13, the variants with the minimal population size (popsize = 20: EFLA01–EFLA04) obtain the larger number of winners, especially on Type 1 (the basic unrotated benchmark functions, F1–F15).
In contrast, on Type 2 (F16–F30, the rotated, composition, and hybrid problems), the variants EFLA05 to EFLA12 (popsize from 30 to 100) obtain the larger number of winners. EFLA performs well on both Type 1 and Type 2 problems when the population size is increased from 30 to 60. In addition, for a fixed population size, the combination (m, n) also affects the performance of EFLA. According to equations (6) and (7), the number of submemeplexes m equals the number of best solutions, which influences the first potential well. With a small number of submemeplexes, EFLA easily becomes trapped in local optima on multimodal problems because only a few best solutions are available; conversely, a larger m increases population diversity but does not guarantee a better solution. Figure 6 shows the performance of the EFLA variants with different parameter combinations under Friedman’s test: EFLA performs poorly for small values of m (m = 2, whether popsize is 20 or 100). Table 14 shows the average ranks from the Friedman test; the best parameter combination is (popsize = 50, m = 10, n = 5). Table 15 reports the statistical value (p value = 0.003), indicating that the performance of EFLA differs significantly across the parameter combinations. In addition, we compare the 12 algorithms with the EFLA variants under the 12 parameter combinations by Friedman’s test; the statistical value is 200.829 and the p value is 2.40E−30, so there are significant differences among the 24 algorithms. Figure 7 shows the ranks among the 24 algorithms.
It is worth noting that EFLA outperforms all 12 algorithms no matter which parameter combination is chosen.

5.4. The Time Complexity of EFLA Algorithm
5.4.1. Theoretical Analysis for the Time Complexity

In this paper, the time complexity of the proposed method depends on the number of fitness evaluations, the number of iterations, the population size, the covariance matrix, and the eigen decomposition. The overall time complexity can therefore be expressed in terms of QE, the time complexity of the quantum evolution (QE) operator, and AEE, the time complexity of the adaptive eigenvector evolution (AEE) operator. For the quantum evolutionary operator, the time complexity of the potential-well update is linear in n, the number of frogs in each submemeplex; the total time complexity of QE then follows, where popsize is the population size, m is the number of submemeplexes, n is the number of frogs in each submemeplex (popsize = m × n), T is the number of iterations, and D is the dimension. For the eigenvector evolutionary operator (EE), the covariance matrix contains D² elements, each of which requires popsize cycles to compute. The eigen decomposition is solved by Jacobi’s method, whose time complexity is cubic in D. In the proposed method, the evolutionary operator is adaptive; thus, in the extreme case where all individuals in the population apply the eigenvector evolutionary operator, the overall time complexity of the eigenvector evolution is as follows:

In the other extreme case, where all individuals in the population apply the basic evolutionary operator, the overall time complexity is as follows:

So the actual cost of the adaptive eigenvector evolution lies between these two cases.

Therefore, in the worst case, the overall time complexity of EFLA is as follows:

In the best case, the overall time complexity of EFLA is as follows:
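Since the displayed complexity expressions are not reproduced above, the following is a plausible reconstruction under the stated definitions (T iterations, population size popsize = m·n, dimension D). This is an assumption based on the prose, not the authors' exact formulas:

```latex
% Per-iteration cost of the quantum evolution operator (linear in n per submemeplex):
T_{QE} = O(m \cdot n \cdot D) = O(\mathit{popsize} \cdot D)

% Per-iteration cost of the eigenvector evolution operator
% (covariance estimation plus Jacobi eigen decomposition):
T_{EE} = O(\mathit{popsize} \cdot D^{2} + D^{3})

% Worst case (every individual applies the eigenvector operator):
T_{EFLA}^{\max} = O\bigl(T \cdot (\mathit{popsize} \cdot D^{2} + D^{3})\bigr)

% Best case (every individual applies the basic operator):
T_{EFLA}^{\min} = O(T \cdot \mathit{popsize} \cdot D)
```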

5.4.2. Experimental Results and Analysis for the Running Times

All the algorithms are run in MATLAB R2016a on Windows 7. The hardware configuration is a four-CPU Intel(R) Core(TM) i5-4200U at 1.60 GHz with 4.00 GB of RAM. Five optimization problems (F26, F27, F28, F29, and F30) from Table 1 are adopted; these five are hybrid or composition functions, meaning they are more complex and require longer running times. Each algorithm is run 30 times per problem independently, and all the running times (in minutes) are recorded. The parameter settings of the 13 algorithms are as given above. The experimental results are shown in Table 16. The character ‘R’ in the first line of Table 16 abbreviates ‘rank’; each column headed ‘R’ after an algorithm gives that algorithm’s rank among the 13, and bold fonts denote the winner for running time. In addition, the second and third columns give the rank and average rank value from the Friedman test, where p value = 3.14E−4 and the statistical value = 36.0791. The ranking of the 13 algorithms is GbABC (R_All = 1), LAPO (R_All = 2), AAA (R_All = 3), EFLA (R_All = 4), WQPSO (R_All = 5), SFLA (R_All = 6), SSA and TSQPSO (R_All = 7), NNA and SCA (R_All = 8), GWO (R_All = 9), CMAES (R_All = 10), and SaDE (R_All = 11). Although LAPO and GbABC have better running times than the other algorithms, they do not achieve better solution quality than many of them (see Table 3). The classical algorithms SaDE and CMAES have the worst running times even though they perform well on some problems. EFLA achieves the best solution quality among the 13 algorithms (see Table 3) with an acceptable running time.

5.5. Experiment on the Parameter Optimization of SVM

Support vector machine (SVM) is a statistical classification method based on the VC (Vapnik–Chervonenkis) statistical learning theory and the structural risk minimization principle proposed by Vapnik et al. [77], and has been widely applied in many fields. Given a set of training data samples {(x_i, y_i)}, i = 1, …, N, where x_i represents the input vector and y_i represents the class label, the task of classification is to find the maximum-margin separating hyperplane. This is an optimization problem involving a kernel function K(x_i, x_j); the RBF kernel is commonly used, with a parameter γ. Experiments show that the parameters should be tuned before SVM is used; this can be seen as a black-box optimization problem and solved by metaheuristic algorithms.
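For reference, the optimization problem referred to above is the standard soft-margin SVM dual with an RBF kernel. This textbook formulation is shown here because the displayed equation is not reproduced; the symbols C and γ match the parameters tuned below:

```latex
\max_{\alpha}\; \sum_{i=1}^{N}\alpha_i
  - \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}
    \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad \text{s.t.}\quad 0 \le \alpha_i \le C,\;\;
  \sum_{i=1}^{N} \alpha_i y_i = 0,

\qquad K(x_i, x_j) = \exp\!\left(-\gamma \lVert x_i - x_j \rVert^{2}\right).
```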

In this test, six benchmark data sets chosen from machine learning databases [78] are used with the LIBSVM tool provided by Chih-Jen Lin [79]. For the parameter optimization, the search space is set as [1E−1, 1E3] for C and [1E−2, 1E3] for γ. The prediction accuracy (%) of SVM under 10-fold cross-validation is used as the fitness value. For a fair comparison, the parameter settings, including the population size, follow the related literature as listed in Table 17, and the maximum number of fitness evaluations is set as MAX_FES = 1000. Four methods, LSHADE [20], LSHADE-EpSin [65], TLBO [8], and LAPO [69], are selected for comparison, with the parameter settings of LSHADE and LSHADE-EpSin taken from the literature [20, 65]. Notably, LSHADE was the winner of the CEC 2014 competition and LSHADE-EpSin the winner of CEC 2016. Each algorithm is run 20 times, and the median, mean, and standard deviation are recorded for comparison.
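The black-box setup described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: `cv_accuracy` is a hypothetical stand-in for LIBSVM's 10-fold cross-validation accuracy, and plain random search stands in for the metaheuristics being compared:

```python
import random

# Search boxes for the SVM hyperparameters, as stated in the text.
C_RANGE = (1e-1, 1e3)
GAMMA_RANGE = (1e-2, 1e3)

def cv_accuracy(C, gamma):
    """Placeholder fitness. In the paper this would be LIBSVM's 10-fold
    cross-validation accuracy on the training data; here it is a smooth
    surrogate with a single maximum, for illustration only."""
    return -((C - 10.0) ** 2 + (gamma - 1.0) ** 2)

def random_search(fitness, n_evals=1000, seed=0):
    """Baseline black-box search over the (C, gamma) box,
    using the same evaluation budget (MAX_FES = 1000)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_evals):
        c = rng.uniform(*C_RANGE)
        g = rng.uniform(*GAMMA_RANGE)
        f = fitness(c, g)
        if best is None or f > best[0]:
            best = (f, c, g)
    return best

best_fit, best_C, best_gamma = random_search(cv_accuracy)
```

Any of the compared metaheuristics (EFLA, LSHADE, TLBO, LAPO) plugs into this setup by replacing `random_search` while keeping the same fitness function and budget.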

Table 18 gives the details of the six data sets with different dimensions. Table 19 shows the median and standard deviation of the five algorithms on the six data sets. EFLA obtains better prediction accuracy than LSHADE, LSHADE-EpSin, TLBO, and LAPO on the high-dimensional PBCW data set (F4) and the low-dimensional heart data set (F6), indicating the good scalability of EFLA for the parameter optimization problem of SVM. EFLA obtains the same accuracy as LSHADE, LSHADE-EpSin, TLBO, and LAPO on F2, F3, and F5.

In addition, Figure 8 shows the convergence curves, allowing a fair comparison of both search speed and accuracy. In the early iterations, the proposed algorithm does not converge faster than the other algorithms, which suggests stronger exploration ability and less susceptibility to local optima. In the later iterations, the proposed algorithm jumps out of local optima and finds better solutions. In summary, EFLA performs better than LSHADE, LSHADE-EpSin, TLBO, and LAPO.

6. Conclusions

In this paper, we propose an improved SFLA (called EFLA) based on a quantum evolutionary operator and an adaptive eigenvector evolutionary operator. The quantum evolutionary operator replaces the traditional guidance mechanism in the local search step, and the adaptive eigenvector evolutionary operator serves as the global search step instead of the shuffling operator alone. In addition, the basic search rule and the eigenvector search rule are chosen alternately so that a wider range of optimization problems can be solved in either the original or a rotated coordinate system; the adaptive eigenvector evolutionary operator enhances the search ability on problems posed in rotated coordinates.

We then compare the proposed approach, under its best parameter setting, with 15 well-known algorithms (NNA, SSA, SCA, SFLA, LAPO, CMAES, GWO, GbABC, WQPSO, TSQPSO, SaDE, AAA, TLBO, LSHADE, and LSHADE-EpSin) on the 30 benchmark problems and the real-world parameter optimization problem of SVM. The T-test, the Wilcoxon signed-rank test, and Friedman’s test are used to verify the performance of EFLA. In addition, we analyze the influence of the three major parameters of EFLA: popsize, m, and n. Based on the experimental results and analysis, we generally recommend setting (popsize, m, n) to (30, 6, 5) or (50, 10, 5). Finally, we analyze the time complexity of EFLA and compare its running time with that of the 12 other algorithms: EFLA attains the best solution quality with an acceptable running time (running-time rank = 4).

In the future, the proposed algorithm can be applied to more real-world continuous optimization problems in different fields; for example, it may be used for an experimental evolutionary model in the domain of magnetorheological fluids [80]. Second, the quantum evolutionary operator can be used to improve the performance of other heuristic algorithms. Third, the proposed eigenvector evolution method can be extended to other problems.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors’ Contributions

The corresponding author made the greatest contribution; the other authors contributed equally. All authors read and approved the final manuscript.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61976239 and 71871069); Guang Dong Provincial Natural Fund project (2020A1515010783); Guang Dong Province Precise Medicine Big Data of Traditional Chinese Medicine Engineering Technology Research Center; Guangdong Special Projects in Key Fields of Artificial Intelligence in universities (2019KZDZX1020); and Key Research Platforms and Projects of Colleges and Universities in Guangdong Province (2020ZDZX3060).