Complexity


Research Article | Open Access

Volume 2019 |Article ID 6291968 | 31 pages | https://doi.org/10.1155/2019/6291968

An Improved Squirrel Search Algorithm for Optimization

Academic Editor: Alex Alexandridis
Received: 15 Feb 2019
Revised: 05 May 2019
Accepted: 28 May 2019
Published: 01 Jul 2019

Abstract

Squirrel search algorithm (SSA) is a new biologically inspired optimization algorithm that has proved effective for solving unimodal, multimodal, and multidimensional optimization problems. However, like other swarm intelligence algorithms, SSA has its own disadvantages. To achieve better global convergence, an improved version of SSA, called ISSA, is proposed in this paper. Firstly, an adaptive strategy for the predator presence probability is proposed to balance the exploration and exploitation capabilities of the algorithm. Secondly, a normal cloud model is introduced to describe the randomness and fuzziness of the foraging behavior of flying squirrels. Thirdly, a selection strategy between successive positions is incorporated to preserve the best position of each flying squirrel individual. Finally, a dimensional search enhancement strategy is utilized to strengthen the local search ability of the algorithm. 32 benchmark functions, including unimodal, multimodal, and CEC 2014 functions, are used to test the global search ability of the proposed ISSA. Experimental results indicate that ISSA provides competitive performance compared with the basic SSA and four other well-known state-of-the-art optimization algorithms.

1. Introduction

Optimization is the process of determining the decision variables of a function such that the function attains its maximum or minimum value. Many real-life engineering problems are optimization problems [1–3] in which decision variables are determined so that the systems operate at their optimal points. Usually, these problems are discontinuous, nondifferentiable, multimodal, and nonconvex, and thus the classical gradient-based deterministic algorithms [4–6] are not applicable.

To overcome the drawbacks of the classical algorithms, a considerable number of stochastic optimization algorithms, known as metaheuristic algorithms [7–9], have been developed in recent decades. These algorithms are mainly inspired by biological behaviors or physical phenomena and can be roughly classified into three categories: evolutionary algorithms, swarm intelligence, and physics-based algorithms. Evolutionary algorithms simulate natural evolutionary processes such as reproduction, mutation, recombination, and selection, in which a population tries to survive based on the evaluation of fitness values in a given environment. Genetic algorithm (GA) [10] and evolution strategy (ES) [11] are the most popular evolutionary algorithms. Swarm intelligence algorithms, the second category, are based on the movement of a swarm of creatures and imitate the interaction of the swarm with its environment in order to enhance its knowledge of a goal (e.g., a food source). The most well-known swarm intelligence algorithms are particle swarm optimization (PSO) [12], artificial bee colony (ABC) [13], ant colony optimization (ACO) [14], and grey wolf optimization (GWO) [15], to name a few. The physics-based algorithms are inspired by the basic physical laws of the universe, such as gravitational force, electromagnetic force, and inertia force. Several representatives of this category are simulated annealing (SA) [16], big-bang big-crunch (BB-BC) [17], and gravitational search algorithm (GSA) [18]. Some of the recent nature-inspired algorithms are lightning attachment procedure optimization (LAPO) [19], spotted hyena optimizer (SHO) [20], weighted superposition attraction (WSA) [21], and many more [22, 23].

Unfortunately, most of the abovementioned basic metaheuristic algorithms fail to balance exploration and exploitation, thereby yielding unsatisfactory performance on complicated real-life optimization problems. Exploration stands for global search ability and ensures that the algorithm reaches all regions of the search space to find promising regions, whereas exploitation represents local search ability and ensures the search for the optimum within the identified promising regions. Emphasizing only exploration wastes computational resources on searching all over the search space and thus reduces the convergence rate; emphasizing only exploitation, by contrast, causes an early loss of population diversity and thus probably leads to premature convergence or to getting stuck in local optima. This fact motivates the introduction of various strategies for improving the convergence rate and precision of the basic metaheuristic algorithms. As an illustration, premature convergence of PSO was prevented in CLPSO by proposing a comprehensive learning strategy to maintain population diversity [24]; a social learning component called fitness-distance-ratio was employed to enhance the local search capability of PSO [25]; a self-organizing hierarchical PSO with time-varying acceleration coefficients (HPSO-TVAC) was introduced to efficiently control the local search and convergence to the global optimal solution [26]; and a distance-based locally informed PSO (LIPS) enabled the algorithm to quickly converge to the global optimal solution with high accuracy [27].
Likewise, many modified versions have been proposed to enhance the global search ability of the basic ABC: an improved-global-best-guide term with a nonlinear adjusting factor was employed to balance the exploration and exploitation [28]; a multiobjective covariance guided artificial bee colony algorithm (M-CABC) was proposed to obtain higher precision with quick convergence speed when solving portfolio problems [29]; the slow convergence and unsatisfactory solution accuracy were improved in the variant IABC [30]. As for fruit fly optimization algorithm (FOA) [31], an escape parameter was introduced in MFOA to escape from the local solution [32]; and modified versions for balancing between exploration and exploitation abilities include, for example, IFOA [33], MSFOA [34], IFFO [35], and CMFOA [36].

Squirrel search algorithm (SSA), proposed by Mohit et al. in 2018, is a new and powerful global optimization algorithm inspired by the natural dynamic foraging behavior of flying squirrels [37]. In comparison with other swarm intelligence optimization algorithms, SSA achieves better and more efficient search-space exploration because a seasonal monitoring condition is incorporated. Moreover, three types of trees (normal, oak, and hickory trees) are available in the forest region, preserving the population diversity and thus enhancing the exploration of the algorithm. Test results on 33 benchmark functions and a real-time controller design problem confirm the superiority of SSA in comparison with other well-known algorithms such as GA [10], PSO [25], BA (bat algorithm) [38], and FF (firefly algorithm) [39].

However, SSA still suffers from premature convergence and easily gets trapped in local optimal solutions, especially when solving highly complex problems. The convergence rate of SSA, like that of other swarm intelligence algorithms, depends on the balance between exploration and exploitation capabilities. In other words, excellent performance on optimization problems requires a fine balance between exploration and exploitation. According to the “no free lunch” (NFL) theorem [40], no single optimization algorithm can achieve the best performance on all problems, and SSA is no exception. Therefore, there still exists room for improving the accuracy and convergence rate of SSA.

Based on the discussion above, this study proposes an improved variant of SSA (ISSA) which employs four strategies to enhance the global search ability of SSA. In brief, the main contributions of this research can be summarized as follows:

(i) An adaptive strategy for the predator presence probability is proposed, in which the probability is dynamically adjusted over the iteration process. This strategy discourages premature convergence and improves the intensive search ability of the algorithm, especially in the later stages of the search. In this way, a balance between the exploration and exploitation capabilities can be properly managed.

(ii) The proposed ISSA employs a normal cloud generator [41] to generate new locations for flying squirrels during the course of gliding, which improves the exploration capability of SSA. This is motivated by the fact that the gliding behaviors of flying squirrels have characteristics of randomness and fuzziness, which can be simultaneously described by the normal cloud model [42].

(iii) A selection strategy between successive positions is proposed to maintain the best position of a flying squirrel individual throughout the optimization process, which enhances the exploitation ability of the algorithm.

(iv) A dimensional search enhancement strategy is put forward, which yields a better-quality best solution in each iteration and thereby strengthens the local search ability of the algorithm.

The general properties of ISSA are evaluated on 32 benchmark functions including unimodal, multimodal, and CEC 2014 functions [43]. Meanwhile, its performance is compared with the basic SSA and four other well-known state-of-the-art optimization algorithms.

The rest of this paper is organized as follows: Section 2 briefly recapitulates the basic SSA. Next, the proposed ISSA is presented in detail in Section 3. Experimental comparisons are illustrated in Section 4. Finally, Section 5 gives the concluding remarks.

2. The Basic Squirrel Search Optimization Algorithm

SSA mimics the dynamic foraging behavior of southern flying squirrels via gliding, an effective mechanism used by small mammals for travelling long distances, in the deciduous forests of Europe and Asia [37]. During warm weather, the squirrels change their locations by gliding from one tree to another in the forest and explore for food resources. They can easily find acorn nuts to meet their daily energy needs, after which they begin searching for hickory nuts (the optimal food source) that are stored for winter. During cold weather, they become less active and maintain their energy requirements with the stored hickory nuts. When the weather gets warm again, the flying squirrels become active once more. This process repeats throughout the life span of the squirrels and serves as the foundation of SSA. According to the food foraging strategy of flying squirrels, SSA can be modeled mathematically in the following phases.

2.1. Initialize the Algorithm Parameters

The main parameters of SSA are the maximum number of iterations $t_{max}$, the population size $NP$, the number of decision variables $n$, the predator presence probability $P_{dp}$, the scaling factor $sf$, the gliding constant $G_c$, and the upper and lower bounds of the decision variables, $FS_U$ and $FS_L$. These parameters are set at the beginning of the SSA procedure.

2.2. Initialize Flying Squirrels’ Locations and Their Sorting

The flying squirrels’ locations are randomly initialized in the search space as follows:

$$FS_i = FS_L + U(0,1)\times(FS_U - FS_L), \qquad i = 1, 2, \ldots, NP$$

where $U(0,1)$ is a uniformly distributed random number in the range (0, 1).

The fitness value of an individual flying squirrel’s location is calculated by substituting the values of the decision variables into a fitness function:

$$f_i = f(FS_{i,1}, FS_{i,2}, \ldots, FS_{i,n}), \qquad i = 1, 2, \ldots, NP$$

Then the quality of the food sources, defined by the fitness values of the flying squirrels’ locations, is sorted in ascending order. After sorting the food sources of each flying squirrel’s location, three types of trees are categorized: hickory tree (hickory nuts food source), oak tree (acorn nuts food source), and normal tree. The location with the best food source (i.e., minimal fitness value) is regarded as the hickory nut tree ($FS_{ht}$), the locations with the following three food sources are considered the acorn nut trees ($FS_{at}$), and the rest are considered normal trees ($FS_{nt}$).
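The initialization and sorting phases above can be sketched in a few lines of NumPy (a minimal illustration, not the authors’ code; the function names and the sphere fitness used here are assumptions):

```python
import numpy as np

def initialize_population(pop_size, n, fs_l, fs_u, rng):
    """Random initialization: FS_i = FS_L + U(0, 1) * (FS_U - FS_L)."""
    return fs_l + rng.random((pop_size, n)) * (fs_u - fs_l)

def categorize(population, fitness):
    """Sort locations by fitness (ascending) and split them into the
    hickory tree (best), acorn trees (next three), and normal trees."""
    ranked = population[np.argsort(fitness)]
    return ranked[0], ranked[1:4], ranked[4:]

rng = np.random.default_rng(42)
pop = initialize_population(50, 30, -10.0, 10.0, rng)
fit = np.sum(pop**2, axis=1)  # sphere function as a stand-in fitness
hickory, acorn, normal = categorize(pop, fit)
print(hickory.shape, acorn.shape, normal.shape)  # (30,) (3, 30) (46, 30)
```

With NP = 50, the split is 1 hickory-tree squirrel, 3 acorn-tree squirrels, and 46 normal-tree squirrels, matching the categorization described above.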

2.3. Generate New Locations through Gliding

Three scenarios may appear after the dynamic gliding process of flying squirrels.

Scenario 1. Flying squirrels on acorn nut trees tend to move towards the hickory nut tree. The new locations can be generated as follows:

$$FS_{at}^{t+1} = \begin{cases} FS_{at}^{t} + d_g\, G_c\,(FS_{ht}^{t} - FS_{at}^{t}) & R_1 \ge P_{dp} \\ \text{random location} & \text{otherwise} \end{cases}$$

where $d_g$ is the random gliding distance, $R_1$ is a function which returns a value from the uniform distribution on the interval [0, 1], and $G_c$ is a gliding constant.

Scenario 2. Some squirrels which are on normal trees may move towards an acorn nut tree to fulfill their daily energy needs. The new locations can be generated as follows:

$$FS_{nt}^{t+1} = \begin{cases} FS_{nt}^{t} + d_g\, G_c\,(FS_{at}^{t} - FS_{nt}^{t}) & R_2 \ge P_{dp} \\ \text{random location} & \text{otherwise} \end{cases}$$

where $R_2$ is a function which returns a value from the uniform distribution on the interval [0, 1].

Scenario 3. Some flying squirrels on normal trees may move towards the hickory nut tree if they have already fulfilled their daily energy requirements. In this scenario, the new locations of the squirrels can be generated as follows:

$$FS_{nt}^{t+1} = \begin{cases} FS_{nt}^{t} + d_g\, G_c\,(FS_{ht}^{t} - FS_{nt}^{t}) & R_3 \ge P_{dp} \\ \text{random location} & \text{otherwise} \end{cases}$$

where $R_3$ is a function which returns a value from the uniform distribution on the interval [0, 1].

In all scenarios, the gliding distance $d_g$ is considered to lie in the interval between 9 and 20 m [37]. However, this value is quite large and may introduce large perturbations in (7)-(9), causing unsatisfactory performance of the algorithm. To achieve acceptable performance, a scaling factor $sf$ is introduced as a divisor of $d_g$, and its value is chosen to be 18 [37].
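All three scenarios share the same update form, differing only in the squirrel group and the target tree. This can be sketched as follows (an illustrative sketch assuming $d_g$ drawn from [9, 20] m and divided by $sf$; the function and parameter names are assumptions):

```python
import numpy as np

def glide(fs, target, p_dp, g_c=1.9, sf=18.0, fs_l=-1.0, fs_u=1.0, rng=None):
    """One SSA position update: glide towards the target tree when no
    predator is sensed (random draw >= P_dp); otherwise relocate randomly."""
    rng = rng or np.random.default_rng()
    if rng.random() >= p_dp:
        d_g = rng.uniform(9.0, 20.0) / sf  # scaled random gliding distance
        return fs + d_g * g_c * (target - fs)
    return fs_l + rng.random(fs.shape) * (fs_u - fs_l)

rng = np.random.default_rng(0)
squirrel = np.zeros(5)   # e.g., a squirrel on an acorn tree
hickory = np.ones(5)     # the hickory-tree location (Scenario 1 target)
new_pos = glide(squirrel, hickory, p_dp=0.1, rng=rng)
print(new_pos.shape)  # (5,)
```

Scenarios 2 and 3 reuse the same update with `target` set to an acorn-tree or hickory-tree location, respectively.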

2.4. Check Seasonal Monitoring Condition

The foraging behavior of flying squirrels is significantly affected by seasonal variations [43]. Therefore, a seasonal monitoring condition is introduced to prevent the algorithm from being trapped in local optimal solutions.

A seasonal constant $S_c^t$ and its minimum value $S_{min}$ are calculated first:

$$S_c^t = \sqrt{\sum_{k=1}^{n}\left(FS_{at,k}^{t} - FS_{ht,k}^{t}\right)^2}, \qquad S_{min} = \frac{10E{-}6}{(365)^{t/(t_{max}/2.5)}}$$

Then the seasonal monitoring condition is checked. Under the condition of $S_c^t < S_{min}$, the winter is over, and the flying squirrels which have lost their ability to explore the forest randomly relocate their searching positions for food sources again:

$$FS_{nt}^{new} = FS_L + \text{Lévy}(n)\times(FS_U - FS_L)$$

where the Lévy distribution is a powerful mathematical tool to enhance the global exploration capability of most optimization algorithms [44]:

$$\text{Lévy}(x) = 0.01\times\frac{r_a\,\sigma}{|r_b|^{1/\beta}}$$

where $r_a$ and $r_b$ are two functions which return a value from the uniform distribution on the interval [0, 1], $\beta$ is a constant ($\beta$ = 1.5 in this paper), and $\sigma$ is calculated as follows:

$$\sigma = \left(\frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma\left((1+\beta)/2\right)\beta\, 2^{(\beta-1)/2}}\right)^{1/\beta}$$

where $\Gamma(x) = (x-1)!$.
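The winter relocation step can be sketched as follows (an illustrative sketch; the helper names are assumptions, and $r_a$, $r_b$ are drawn from U(0, 1) as stated in the text):

```python
import math
import numpy as np

def levy_step(n, beta=1.5, rng=None):
    """Levy-distributed step per dimension, using the sigma defined above."""
    rng = rng or np.random.default_rng()
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    r_a, r_b = rng.random(n), rng.random(n)  # uniform on [0, 1) per the text
    return 0.01 * r_a * sigma / np.abs(r_b) ** (1 / beta)

def seasonal_relocate(fs_l, fs_u, n, rng):
    """Random relocation applied when S_c drops below S_min (winter is over)."""
    return fs_l + levy_step(n, rng=rng) * (fs_u - fs_l)

rng = np.random.default_rng(1)
pos = seasonal_relocate(-5.0, 5.0, 30, rng)
print(pos.shape)  # (30,)
```

The heavy-tailed Lévy steps occasionally produce very long jumps, which is what restores exploration for squirrels stuck near a local optimum.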

2.5. Stopping Criterion

The algorithm terminates when the maximum number of iterations is reached. Otherwise, the steps of generating new locations and checking the seasonal monitoring condition are repeated.

2.6. Procedure of the Basic SSA

The pseudocode of SSA is provided in Algorithm 1.

Set $t_{max}$, $NP$, $n$, $P_{dp}$, $sf$, $G_c$, and $\beta$
Randomly initialize the flying squirrels’ locations $FS_i$ ($i$ = 1, 2, …, $NP$)
Calculate the fitness value of each location, sort the locations, and categorize them as $FS_{ht}$, $FS_{at}$, and $FS_{nt}$
while  $t \le t_{max}$
Generate new locations
for  i = 1: n1 (n1 = total number of squirrels on acorn trees)
if  $R_1 \ge P_{dp}$, glide towards $FS_{ht}$ via (7)
else  relocate to a random location in the search space
end
end
for  i = 1: n2 (n2 = total number of squirrels on normal trees moving towards acorn trees)
if  $R_2 \ge P_{dp}$, glide towards $FS_{at}$ via (8)
else  relocate to a random location in the search space
end
end
for  i = 1: n3 (n3 = total number of squirrels on normal trees moving towards hickory trees)
if  $R_3 \ge P_{dp}$, glide towards $FS_{ht}$ via (9)
else  relocate to a random location in the search space
end
end
if  $S_c^t < S_{min}$, randomly relocate the flying squirrels on normal trees via Lévy flight
end
Calculate fitness value of new locations
end

3. The Improved Squirrel Search Optimization Algorithm

This section presents an improved squirrel search optimization algorithm by introducing four strategies to enhance the searching capability of the algorithm. In the following, the four strategies will be presented in detail.

3.1. An Adaptive Strategy of Predator Presence Probability

When flying squirrels generate new locations, their natural behavior is affected by the presence of predators, and this is controlled by the predator presence probability $P_{dp}$. In the early search stage, the flying squirrel population is often far away from the food source and its distribution range is large; thus it faces a great threat from predators. As the evolution goes on, the flying squirrels’ locations get close to the food source (an optimal solution). In this case, the distribution range of the population becomes increasingly smaller and fewer threats from predators are expected. Thus, to enhance the exploitation capacity of SSA, an adaptive $P_{dp}$, which dynamically decreases with the iteration number, is adopted as follows:

$$P_{dp} = P_{dp,max} - (P_{dp,max} - P_{dp,min})\times\frac{t}{t_{max}}$$

where $P_{dp,max}$ and $P_{dp,min}$ are the maximum and minimum predator presence probability, respectively.
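A sketch of this adaptive law, assuming the linear decrease implied by the text and the Table 1 values $P_{dp,max}$ = 0.1 and $P_{dp,min}$ = 0.001 (the exact functional form in the paper may differ):

```python
def adaptive_pdp(t, t_max, pdp_max=0.1, pdp_min=0.001):
    """Decrease the predator presence probability with the iteration count:
    frequent random relocation (exploration) early, rare relocation late."""
    return pdp_max - (pdp_max - pdp_min) * t / t_max

print(adaptive_pdp(0, 10000))      # 0.1 at the first iteration
print(adaptive_pdp(10000, 10000))  # ~0.001 at the last iteration
```

A high $P_{dp}$ early makes the random-relocation branch of (7)-(9) fire more often (exploration); a low $P_{dp}$ late keeps the population gliding towards the best trees (exploitation).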

3.2. Flying Squirrels’ Random Position Generation Based on Cloud Generator

Under the condition of $R_i \ge P_{dp}$ ($i$ = 1, 2, 3), the flying squirrels glide to the next potential food locations; different individuals generally have different judgments, and their gliding directions and routes vary. In other words, the foraging behavior of flying squirrels has the characteristics of randomness and fuzziness. These characteristics can be synthetically described by a normal cloud model. In ISSA, a normal cloud generator instead of uniformly distributed random functions is used to reproduce the new location for each flying squirrel; that is, the gliding terms in (7)-(9) are replaced by cloud drops generated around the target tree location as the expectation $Ex$:

$$En' \sim N(En, He^2), \qquad FS^{t+1} \sim N(Ex, En'^2)$$

where $En$ (Entropy) represents the uncertainty measurement of a qualitative concept and $He$ (Hyper Entropy) is the uncertain degree of entropy [42]. Specifically, $En$ stands for the search radius, and $He$ is used to represent the stability of the search. In the early iterations, a large $En$ is requested because the flying squirrels’ locations are often far away from an optimal solution. In the final generations, where the population location is close to an optimal solution, a smaller $En$ is appropriate for the fine-tuning of solutions. Therefore, the search radius dynamically decreases with the iteration number:

$$En = En_{max}\times\left(1 - \frac{t}{t_{max}}\right)$$

where $En_{max}$ is the maximum search radius.
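A normal cloud generator produces a “drop” by first perturbing the entropy and then sampling around the expectation. A minimal sketch (the linear radius decay and the $He$ value used here are assumptions):

```python
import numpy as np

def normal_cloud(ex, en, he, rng):
    """One drop of a normal cloud generator: En' ~ N(En, He^2) captures
    fuzziness, then x ~ N(Ex, En'^2) captures randomness."""
    en_prime = rng.normal(en, he, size=np.shape(ex))
    return rng.normal(ex, np.abs(en_prime))

def search_radius(t, t_max, en_max):
    """Shrink the entropy (search radius) as iterations proceed."""
    return en_max * (1 - t / t_max)

rng = np.random.default_rng(7)
best = np.zeros(10)  # e.g., the hickory-tree location as the expectation Ex
new_location = normal_cloud(best, search_radius(100, 10000, 1.0), 0.01, rng)
print(new_location.shape)  # (10,)
```

Because the entropy itself is random, drops cluster around $Ex$ but with a slightly different spread each time, which is the combined randomness-and-fuzziness property the text describes.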

3.3. A Selection Strategy between Successive Positions

When new positions of flying squirrels are generated, it is possible that a new position is worse than the old one. This suggests that the fitness value of each individual needs to be checked after the generation of new positions by comparing it with the old one in each iteration. If the fitness value of the new position is better than that of the old one, the position of the corresponding flying squirrel is updated to the new position. Otherwise, the old position is retained. This strategy can be mathematically described by

$$FS_i^{t+1} = \begin{cases} FS_i^{new} & f(FS_i^{new}) < f(FS_i^{t}) \\ FS_i^{t} & \text{otherwise} \end{cases}$$
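This selection between successive positions is a greedy, elitist update; it can be sketched as below (the sphere fitness is illustrative):

```python
import numpy as np

def greedy_select(old_pos, new_pos, fitness):
    """Per squirrel, keep whichever of the old and new positions has the
    better (smaller) fitness value."""
    old_f = np.array([fitness(x) for x in old_pos])
    new_f = np.array([fitness(x) for x in new_pos])
    return np.where((new_f < old_f)[:, None], new_pos, old_pos)

sphere = lambda x: float(np.sum(x**2))
old = np.array([[1.0, 1.0], [0.1, 0.1]])  # old positions of two squirrels
new = np.array([[0.5, 0.5], [2.0, 2.0]])  # newly generated positions
print(greedy_select(old, new, sphere))     # keeps [0.5, 0.5] and [0.1, 0.1]
```

The first squirrel improved and keeps its new position, while the second worsened and falls back to its old one, so the population-best fitness can never deteriorate between iterations.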

3.4. Enhance the Intensive Dimensional Search

In the basic SSA, all dimensions of an individual flying squirrel are updated simultaneously. The main drawback of this procedure is that the dimensions are interdependent, and the change of one dimension may have negative effects on others, preventing them from reaching the optimal values in their own dimensions. To further enhance the intensive search of each dimension, the following steps are taken in each iteration: (i) find the best flying squirrel location; (ii) generate one more solution based on the best flying squirrel location by changing the value of one dimension while keeping the remaining dimensions unchanged; (iii) compare the fitness value of the newly generated solution with that of the original one, and retain the better one; (iv) repeat steps (ii) and (iii) for the other dimensions individually. The new solution is produced by regenerating the selected dimension of the best location within its bounds while all other dimensions remain unchanged.
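The dimension-by-dimension refinement of steps (i)-(iv) can be sketched as follows (the rule for regenerating a single coordinate, a uniform redraw within its bounds, is an assumption, as the paper’s exact formula was not recoverable here):

```python
import numpy as np

def dimensional_search(best, fitness, fs_l, fs_u, rng):
    """Refine the best location one dimension at a time, keeping a
    single-coordinate change only if it improves the fitness."""
    best = best.copy()
    best_f = fitness(best)
    for j in range(best.size):
        trial = best.copy()
        trial[j] = fs_l + rng.random() * (fs_u - fs_l)  # assumed per-dimension move
        trial_f = fitness(trial)
        if trial_f < best_f:                            # greedy acceptance
            best, best_f = trial, trial_f
    return best, best_f

sphere = lambda x: float(np.sum(x**2))
rng = np.random.default_rng(3)
x0 = np.full(5, 5.0)
x1, f1 = dimensional_search(x0, sphere, -10.0, 10.0, rng)
print(f1 <= sphere(x0))  # True: the result is never worse than the input
```

Because each coordinate change is accepted only on improvement, the refined best solution is guaranteed to be at least as good as the input, which is why this step strengthens local search without risking the population best.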

3.5. Procedure of ISSA

The pseudocode of ISSA is provided in Algorithm 2.

Set $t_{max}$, $NP$, $n$, $P_{dp,max}$, $P_{dp,min}$, $sf$, $G_c$, $En_{max}$, and $\beta$
Randomly initialize the flying squirrels’ locations $FS_i$ ($i$ = 1, 2, …, $NP$)
Calculate the fitness value of each location, sort the locations, and categorize them as $FS_{ht}$, $FS_{at}$, and $FS_{nt}$
while  $t \le t_{max}$
Update the adaptive predator presence probability $P_{dp}$
Generate new locations
for  i = 1: n1 (n1 = total number of squirrels on acorn trees)
if  $R_1 \ge P_{dp}$, generate the new location towards $FS_{ht}$ with the normal cloud generator
else  relocate to a random location in the search space
end
end
for  i = 1: n2 (n2 = total number of squirrels on normal trees moving towards acorn trees)
if  $R_2 \ge P_{dp}$, generate the new location towards $FS_{at}$ with the normal cloud generator
else  relocate to a random location in the search space
end
end
for  i = 1: n3 (n3 = total number of squirrels on normal trees moving towards hickory trees)
if  $R_3 \ge P_{dp}$, generate the new location towards $FS_{ht}$ with the normal cloud generator
else  relocate to a random location in the search space
end
end
if  $S_c^t < S_{min}$, randomly relocate the flying squirrels on normal trees via Lévy flight
end
Calculate fitness value of new locations
if  the new position of an individual is better than its old one, update the position; otherwise keep the old position
end
Enhance intensive dimensional search
Find  the best location $FS_{ht}$
for  j = 1: n
Generate a new solution by changing dimension j of $FS_{ht}$ only
Calculate fitness value of the new solution
if  the new solution is better than $FS_{ht}$, replace $FS_{ht}$ with it
end
end
end

4. Experimental Results and Analysis

The performance of the proposed ISSA is verified by comparison with five nature-inspired optimization algorithms: the basic SSA, PSO [12], the fruit fly optimization algorithm (FOA) [31], and two of its variants, the improved fruit fly optimization algorithm (IFFO) [35] and the cloud model based fruit fly optimization algorithm (CMFOA) [36]. 32 benchmark functions are tested with the dimension equal to 30, 50, or 100. These functions are frequently adopted for validating global optimization algorithms; among them, F1–F11 are unimodal, F12–F25 are multimodal, and F26–F32 are composite functions from the IEEE CEC 2014 special session [43]. Each function is calculated for ten independent runs in order to better compare the results of different algorithms. Common parameters are set the same for all algorithms, such as the population size NP = 50 and the maximal iteration number $t_{max}$ = 10,000. Meanwhile, the same set of initial random populations is used. The algorithm-specific parameters are chosen to be the same as those used in the literature that first introduced each algorithm: the parameters of PSO, FOA, IFFO, CMFOA, and SSA are chosen according to [12], [31], [35], [36], and [37], respectively. Table 1 summarizes both the common and algorithm-specific parameters of ISSA and the other five algorithms. The error value, defined as (f(x) − Fmin), is recorded for the solution x, where f(x) is the optimal fitness value of the function calculated by the algorithm and Fmin is the true minimal value of the function. The average and standard deviation of the error values over all independent runs are calculated.


Table 1: Common and algorithm-specific parameter settings.

| Parameter | ISSA | SSA | PSO | CMFOA | IFFO | FOA |
| Maximum iteration number | 10000 | 10000 | 10000 | 10000 | 10000 | 10000 |
| Population size NP | 50 | 50 | 50 | 50 | 50 | 50 |
| Gliding constant G_c | 1.9 | 1.9 | - | - | - | - |
| Scaling factor sf | 18 | 18 | - | - | - | - |
| P_dp,max | 0.1 | - | - | - | - | - |
| P_dp,min | 0.001 | - | - | - | - | - |
| P_dp | - | 0.1 | - | - | - | - |
| c1 and c2 | - | - | 2 | - | - | - |
| w | - | - | 0.9 | - | - | - |
| Minimum search radius | - | - | - | - | 0.00001 | - |

The remaining CMFOA- and IFFO-specific parameters are set as in [36] and [35], respectively.

4.1. Test 1: Unimodal Functions

Unimodal benchmark functions (Table 2) have only one global optimum, and they are commonly used for evaluating the exploitation capacity of optimization algorithms. Tables 3–5 list the mean error and standard deviation of the results obtained by each algorithm after ten runs at dimension n = 30, 50, and 100, respectively. The best values are highlighted in italic. It is noted that the difficulty of optimization rises with the dimension of a function because its search space grows exponentially [45]. It is clear from the results that on most unimodal functions ISSA has better accuracy and convergence precision than the five counterpart algorithms, which confirms that the proposed ISSA has good exploitation ability. As for F2 and F5, ISSA obtains the same level of mean error as IFFO, while the former outperforms the latter at n = 100. It is also found that both ISSA and CMFOA achieve the true minimal value of F3 at n = 30 and 50, while ISSA is superior at n = 100.


Table 2: Unimodal benchmark functions F1–F11 with their search ranges and true minima Fmin (Fmin = 0 for every function except F3, for which Fmin = −1).

Table 3: Mean error and standard deviation on unimodal functions (n = 30).

| Fun |      | ISSA | SSA | PSO | CMFOA | IFFO | FOA |
| F1  | Mean | 2.2612E-46 | 1.4374E-13 | 1.8031E+03 | 3.8546E-22 | 5.3419E-12 | 5.8887E+01 |
|     | Std  | 3.9697E-46 | 1.0187E-13 | 1.6038E+02 | 1.1464E-22 | 2.6506E-12 | 1.0717E+01 |
| F2  | Mean | 5.3333E-01 | 6.0000E-01 | 7.0475E+04 | 6.6667E-01 | 6.6667E-01 | 1.2221E+02 |
|     | Std  | 2.8109E-01 | 2.1082E-01 | 1.8027E+04 | 1.1102E-16 | 3.6414E-11 | 3.5452E+01 |
| F3  | Mean | 0.0000E+00 | 0.0000E+00 | 4.7300E-01 | 0.0000E+00 | 8.6042E-14 | 1.8586E-02 |
|     | Std  | 1.8504E-16 | 2.6168E-16 | 4.2225E-02 | 9.0649E-17 | 2.7669E-14 | 1.9010E-03 |
| F4  | Mean | 2.1268E-39 | 3.9943E-10 | 9.2617E+07 | 3.7704E-18 | 2.0345E-08 | 1.1112E+07 |
|     | Std  | 5.6486E-39 | 3.2855E-10 | 1.8784E+07 | 2.4165E-18 | 1.4277E-08 | 3.0272E+06 |
| F5  | Mean | 4.3164E-03 | 9.3054E-02 | 5.5376E+00 | 3.2231E-03 | 2.2754E-03 | 1.7965E-02 |
|     | Std  | 1.9931E-03 | 2.3058E-02 | 1.0210E+00 | 1.3321E-03 | 1.1046E-03 | 3.1904E-03 |
| F6  | Mean | 5.5447E-14 | 2.9695E+01 | 1.0669E+07 | 7.6347E+00 | 8.9052E+00 | 1.2862E+04 |
|     | Std  | 5.4894E-14 | 3.4074E+01 | 1.8313E+06 | 5.9003E+00 | 7.1150E+00 | 4.4338E+03 |
| F7  | Mean | 1.1996E-44 | 6.5616E-12 | 1.6688E+05 | 7.1055E-22 | 6.0744E-12 | 5.4708E+03 |
|     | Std  | 3.0449E-44 | 5.4085E-12 | 1.7265E+04 | 1.6506E-22 | 2.9515E-12 | 4.9070E+02 |
| F8  | Mean | 3.5080E-13 | 1.7850E-03 | 4.5639E+01 | 2.7020E-11 | 2.6440E-06 | 7.8561E+00 |
|     | Std  | 7.0894E-13 | 4.3281E-04 | 2.0578E+00 | 6.0413E-12 | 2.4822E-07 | 1.7334E+00 |
| F9  | Mean | 5.5772E-24 | 2.2148E-07 | 7.1792E+01 | 2.3880E-11 | 2.0415E-06 | 3.0453E+01 |
|     | Std  | 7.9227E-24 | 6.7302E-08 | 3.0539E+01 | 4.1360E-12 | 3.6636E-07 | 4.6515E+01 |
| F10 | Mean | 2.6748E-44 | 4.4745E-13 | 1.3098E+04 | 4.2105E-23 | 4.4042E-13 | 3.6501E+02 |
|     | Std  | 7.8461E-44 | 3.7055E-13 | 1.3116E+03 | 1.0825E-23 | 1.5887E-13 | 4.3608E+01 |
| F11 | Mean | 6.1803E-188 | 1.7833E-60 | 1.8391E-03 | 7.8265E-25 | 5.1176E-15 | 6.7271E-07 |
|     | Std  | 0.0000E+00 | 3.7202E-60 | 1.1147E-03 | 6.5934E-25 | 7.6076E-15 | 4.6836E-07 |


Table 4: Mean error and standard deviation on unimodal functions (n = 50).

| Fun |      | ISSA | SSA | PSO | CMFOA | IFFO | FOA |
| F1  | Mean | 1.9373E-45 | 2.4408E-07 | 8.9004E+03 | 3.6571E-21 | 3.2632E-11 | 2.8288E+02 |
|     | Std  | 4.0611E-45 | 7.2161E-08 | 1.0186E+03 | 9.0310E-22 | 8.2802E-12 | 1.8277E+01 |
| F2  | Mean | 6.6667E-01 | 2.1716E+00 | 8.0535E+05 | 1.2514E+00 | 6.6667E-01 | 8.2661E+02 |
|     | Std  | 8.2003E-16 | 2.2142E+00 | 1.1670E+05 | 1.2485E+00 | 2.5423E-10 | 7.7814E+01 |
| F3  | Mean | 0.0000E+00 | 7.6318E-11 | 8.6602E-01 | 0.0000E+00 | 4.1100E-13 | 5.1246E-02 |
|     | Std  | 2.2204E-16 | 2.2120E-11 | 1.8360E-02 | 2.3984E-16 | 1.4800E-13 | 4.0430E-03 |
| F4  | Mean | 3.0671E-38 | 9.7033E-04 | 4.6756E+08 | 1.9432E-17 | 1.0621E-07 | 3.8893E+07 |
|     | Std  | 6.9186E-38 | 3.4464E-04 | 6.0135E+07 | 1.1627E-17 | 7.6083E-08 | 7.3301E+06 |
| F5  | Mean | 7.1557E-03 | 2.9566E-01 | 5.3164E+01 | 1.0084E-02 | 7.3580E-03 | 1.0458E-01 |
|     | Std  | 2.3021E-03 | 3.3200E-02 | 6.4915E+00 | 2.6523E-03 | 1.8100E-03 | 2.6586E-02 |
| F6  | Mean | 4.3706E-11 | 9.5471E+01 | 7.0118E+07 | 7.7331E+01 | 4.7194E+01 | 6.5331E+04 |
|     | Std  | 9.5151E-11 | 3.5358E+01 | 5.3302E+06 | 3.4463E+01 | 4.0976E+01 | 1.3721E+04 |
| F7  | Mean | 1.2947E-41 | 2.3079E-05 | 9.1659E+05 | 6.9771E-21 | 6.4025E-11 | 2.8862E+04 |
|     | Std  | 3.7876E-41 | 5.8814E-06 | 9.3287E+04 | 3.1808E-21 | 2.6901E-11 | 2.8375E+03 |
| F8  | Mean | 6.0872E-11 | 1.2706E-01 | 6.7093E+01 | 8.4930E-11 | 7.1576E-06 | 1.1919E+01 |
|     | Std  | 2.5158E-11 | 3.3391E-02 | 2.5011E+00 | 1.1107E-11 | 6.9032E-07 | 1.0404E+00 |
| F9  | Mean | 1.8289E-23 | 3.8822E-04 | 2.0565E+10 | 6.0338E-11 | 6.3442E-06 | 1.1434E+05 |
|     | Std  | 2.8884E-23 | 7.2525E-05 | 6.3864E+10 | 6.4165E-12 | 9.0797E-07 | 2.4588E+05 |
| F10 | Mean | 1.2924E-44 | 1.4594E-06 | 4.1629E+04 | 2.2846E-22 | 2.0175E-12 | 1.0536E+03 |
|     | Std  | 2.5807E-44 | 6.0662E-07 | 3.7125E+03 | 4.1424E-23 | 5.2761E-13 | 5.9785E+01 |
| F11 | Mean | 4.4745E-163 | 1.9208E-58 | 9.2852E-03 | 1.1169E-24 | 5.1458E-15 | 1.6917E-06 |
|     | Std  | 0.0000E+00 | 2.2612E-58 | 3.5776E-03 | 1.4674E-24 | 8.2130E-15 | 9.4517E-07 |


Table 5: Mean error and standard deviation on unimodal functions (n = 100).

| Fun |      | ISSA | SSA | PSO | CMFOA | IFFO | FOA |
| F1  | Mean | 5.2002E-44 | 1.3601E-01 | 6.4559E+04 | 6.7630E-20 | 5.4042E-10 | 2.3688E+03 |
|     | Std  | 8.9565E-44 | 2.8398E-02 | 2.7675E+03 | 1.8636E-20 | 1.0843E-10 | 1.1782E+02 |
| F2  | Mean | 2.5837E+00 | 2.8507E+01 | 8.9058E+06 | 1.0018E+01 | 1.1850E+01 | 1.3921E+04 |
|     | Std  | 4.0923E+00 | 1.2482E+01 | 6.4125E+05 | 9.8260E+00 | 6.7378E+00 | 2.1479E+03 |
| F3  | Mean | 1.9984E-15 | 1.7506E-05 | 9.9937E-01 | 1.9984E-15 | 3.4570E-12 | 1.9460E-01 |
|     | Std  | 2.7940E-16 | 3.3607E-06 | 1.9874E-04 | 3.9686E-16 | 5.7323E-13 | 1.1259E-02 |
| F4  | Mean | 1.4073E-38 | 3.0139E+02 | 2.6151E+09 | 1.4261E-16 | 1.0212E-06 | 2.0499E+08 |
|     | Std  | 1.5382E-38 | 9.0551E+01 | 2.6783E+08 | 7.5737E-17 | 3.3853E-07 | 3.5388E+07 |
| F5  | Mean | 1.7615E-02 | 1.4345E+00 | 5.9603E+02 | 3.7550E-02 | 2.9349E-02 | 1.2443E+00 |
|     | Std  | 4.2239E-03 | 1.3985E-01 | 5.8412E+01 | 1.2602E-02 | 4.5825E-03 | 1.8471E-01 |
| F6  | Mean | 1.1417E+01 | 5.8422E+02 | 4.2299E+08 | 1.7988E+02 | 1.6578E+02 | 5.6078E+05 |
|     | Std  | 3.0258E+01 | 9.7884E+02 | 4.8581E+07 | 3.9022E+01 | 4.6685E+01 | 6.5477E+04 |
| F7  | Mean | 1.6881E-41 | 1.2984E+01 | 6.4707E+06 | 1.1852E-19 | 7.4831E-10 | 2.2865E+05 |
|     | Std  | 3.4134E-41 | 2.2729E+00 | 3.3435E+05 | 3.1718E-20 | 9.5033E-11 | 2.0650E+04 |
| F8  | Mean | 4.5259E-08 | 3.9819E+00 | 8.5137E+01 | 1.7042E-04 | 2.9244E-05 | 3.3956E+01 |
|     | Std  | 2.9104E-08 | 4.1522E-01 | 1.2566E+00 | 7.8665E-05 | 2.7018E-06 | 4.7713E+00 |
| F9  | Mean | 1.2222E-22 | 3.6814E-01 | 7.2469E+32 | 2.5070E-10 | 2.3795E-05 | 1.4112E+27 |
|     | Std  | 8.4369E-23 | 4.1963E-02 | 2.0633E+33 | 2.3874E-11 | 1.3879E-06 | 4.4627E+27 |
| F10 | Mean | 3.4254E-42 | 3.7261E-01 | 1.4940E+05 | 2.0714E-21 | 1.8486E-11 | 4.3041E+03 |
|     | Std  | 9.6608E-42 | 9.2561E-02 | 4.4643E+03 | 4.6407E-22 | 2.3956E-12 | 2.4348E+02 |
| F11 | Mean | 1.6400E-123 | 1.5040E-52 | 3.7278E-02 | 6.5780E-24 | 3.1174E-14 | 1.0720E-05 |
|     | Std  | 5.1861E-123 | 1.6670E-52 | 1.1165E-02 | 7.2960E-24 | 3.6245E-14 | 5.9475E-06 |

Figures 1–3 show several representative convergence graphs of ISSA and its competitors at n = 30, 50, and 100, respectively. It can be observed that ISSA is able to converge to the true value for most unimodal functions with the fastest convergence speed and highest accuracy, while the convergence results of PSO and FOA are far from satisfactory. IFFO and CMFOA, despite their improved search-radius strategies, yield better convergence rates and accuracy than FOA but still cannot outperform the proposed ISSA. It is also found that ISSA greatly improves the global convergence ability of SSA, mainly because of the introduction of the adaptive strategy of $P_{dp}$, the selection strategy between successive positions, and the enhanced dimensional search. In addition, the accuracy of all algorithms tends to decrease as the dimension increases, particularly on F6 and F11.