Abstract

A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed local search technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of times the selection is made equals the dimension of the optimization problem, and each selected particle donates the value in the location of its randomly selected dimension from its personal best. After constructing the potential particle position, a local search is performed around its neighbourhood and the result is compared with the current swarm global best position. It replaces the global best particle position if it is found to be better; otherwise no replacement is made. Using some well-studied benchmark problems with low and high dimensions, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four different PSO variants: two of the variants implement different local search techniques while the other two do not. Results show that the improved algorithms obtain better quality solutions while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants.

1. Introduction

Optimization comes into focus when there is a need to plan, take decisions, operate and control systems, design models, or seek optimal solutions to the variety of problems people face from day to day. A number of these problems, which can be formulated as continuous optimization problems, are often approached with limited resources. Dealing with such problems, especially when they are large scale and complex, has motivated the development of different nature-inspired optimization algorithms. These algorithms offer researchers the problem-solving capabilities needed to tackle complex and challenging optimization problems, with many success stories. Swarm-based techniques are a family of nature-inspired, population-based algorithms; they are also known as evolutionary computation techniques. The particle swarm optimization (PSO) technique is a member of this family which is capable of producing low-cost, fast, and robust solutions to several complex optimization problems. It is a stochastic, self-adaptive, and problem-independent optimization technique and was originally proposed in 1995 by Eberhart and Kennedy as a simulation of a flock of birds or the sociological behaviour of a group of people [1, 2]. Since this concept was brought into optimization, it has been used extensively in many fields, including function optimization and many difficult real-world optimization problems [3–5].

The PSO technique was initially implemented with a few lines of code using basic mathematical operations, with no major adjustment needed to adapt it to new problems, and it was almost independent of the initialization of the swarm [6]. It needs only a few parameters to operate successfully and efficiently in order to obtain quality solutions. To implement this technique, a number of particles called a swarm, each characterized by a position and a velocity, are randomly distributed in a solution search space according to the boundaries defined for the design variables of the problem being optimized. The number of design variables determines the dimensionality of the search space. If a $D$-dimensional space is considered, the position and velocity of particle $i$ are represented as the vectors $X_i = (x_{i1}, \ldots, x_{iD})$ and $V_i = (v_{i1}, \ldots, v_{iD})$, respectively. Every particle has a memory of its personal experience which is communicated to all reachable neighbours in the search space to guide the direction of movement of the swarm. Also, the quality of each particle (solution) is determined by the objective function of the problem being optimized, and the particle with the best quality is taken as the global solution towards which other particles converge. The common practice is for the technique to maintain a single swarm of particles throughout its operation. The process of seeking the optimal solution involves adjusting the velocity and position of each particle in each iteration using

$$v_{id}^{t+1} = \omega v_{id}^{t} + c_1 r_1 \left(p_{id}^{t} - x_{id}^{t}\right) + c_2 r_2 \left(p_{gd}^{t} - x_{id}^{t}\right), \tag{1}$$

$$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}. \tag{2}$$

In (1), $P_i = (p_{i1}, \ldots, p_{iD})$ and $P_g = (p_{g1}, \ldots, p_{gD})$ are vectors representing the $i$th particle's personal best and the swarm's global best positions, respectively, with $d = 1, \ldots, D$; $c_1$ and $c_2$ are acceleration factors known as cognitive and social scaling parameters that determine the magnitude of the random forces in the direction of $P_i$ and $P_g$; $r_1$ and $r_2$ are random numbers between 0 and 1; $t$ is the iteration index. The symbol $\omega$ is the inertia weight parameter which was introduced into the original PSO in [7]. The purpose of its introduction was to help the PSO algorithm balance its global and local search activities.
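For illustration, a minimal sketch of the update rules (1) and (2) for a single particle is given below in Python/NumPy; it is not the authors' C# implementation, and the function and variable names are introduced here only for the example:

import numpy as np

rng = np.random.default_rng()

def update_particle(x, v, pbest, gbest, w, c1=1.494, c2=1.494):
    # One application of (1) and (2) to a single particle.
    # x, v, pbest, and gbest are length-D arrays; w is the inertia weight;
    # c1 and c2 are the cognitive and social scaling parameters.
    r1, r2 = rng.random(x.size), rng.random(x.size)  # r1, r2 uniform in [0, 1]
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # equation (1)
    x_new = x + v_new                                              # equation (2)
    return x_new, v_new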

There is a possibility of the positions and velocities of the particles in the swarm growing beyond necessary when they are updated. As a countermeasure, the positions are clamped in each dimension to the search range $[X_{min}, X_{max}]$ of the design variables, where $X_{min}$ and $X_{max}$ represent the lower and upper bounds of a particle's position, respectively, while the velocities are controlled to stay within a specified range $[V_{min}, V_{max}]$, where $V_{min}$ and $V_{max}$ represent the lower and upper bounds of a particle's velocity, respectively. The idea of velocity clamping, which was introduced by [1, 2, 8] and extensively experimented with in [9], has led to significant improvements in the performance of PSO. This is so because the particles can concentrate, taking reasonably sized steps through the search space rather than bouncing about excessively.
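Continuing the previous sketch (again only an illustration, with assumed bound names), velocity and position clamping can be applied right after the update:

def clamp(x_new, v_new, x_min, x_max, v_min, v_max):
    # Keep the velocity within [v_min, v_max] and the position within
    # the search range [x_min, x_max] of the design variables.
    v_new = np.clip(v_new, v_min, v_max)   # velocity clamping
    x_new = np.clip(x_new, x_min, x_max)   # position clamping
    return x_new, v_new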

A major feature that characterizes an efficient optimization algorithm is the ability to strike a balance between local and global search. Global search means that the particles are able to advance from a solution to other parts of the search space and locate other promising candidates, while local search means that a particle is capable of exploiting the neighbourhood of the present solution for other promising candidates. In PSO, as the rate of information sharing among the particles increases, they migrate towards the same direction and region in the search space. If none of the particles can locate a better global solution after some time, they will eventually converge around the existing one, which may not be the global minimum, due to a lack of exploration power; this is known as premature convergence. This type of behaviour is more likely when the swarm of particles is overconcentrated. It can also occur when the optimization problem is of high dimension and/or nonconvex. One possible way to prevent this premature convergence is to embed a local search technique into the PSO algorithm to help improve the quality of each solution by searching its neighbourhood. After the improvement, better information is communicated among the particles, thereby increasing the algorithm's ability to locate a better global solution in the course of optimization. Hill climbing, modified Hooke and Jeeves, gradient descent, golden ratio, stochastic local search, adaptive local search, local interpolation, simulated annealing, and chaotic local search are different local search techniques that have been combined with PSO to improve its local search ability [10–18].

In this paper, a different local search technique is proposed to harness the global search ability of PSO and improve on its local search efforts. This technique is based on the collective effort of particles selected at random (with replacement) a number of times equal to the problem dimension. When a particle is selected, it contributes the value in the position of its randomly selected dimension from its personal best. The contributed values are then used to form a potential global best solution which is further refined. This concept offers PSO the ability to enhance its performance in terms of convergence speed, local search ability, robustness, and solution accuracy. The local search technique was hybridized with two of the existing PSO variants, namely, random inertia weight PSO (RIW-PSO) and linear decreasing inertia weight PSO (LDIW-PSO), to form two new variants. Numerical simulations were performed to validate the efficiency of each of them, and statistical analyses were performed to ascertain any statistically significant difference in performance between the proposed variants and the old ones. The results obtained show that the proposed variants are very efficient.

In the sections that follow, RIW-PSO and LDIW-PSO are briefly described in Section 2; the motivation and description of the proposed local search technique are presented in Section 3 while the improved PSO with local search technique is described in Section 4. Numerical simulations are performed in Section 5 and Section 6 concludes the paper.

2. The Particle Swarm Optimization Variants Used

Two PSO variants were used to validate the proposed improvement of the performance of the PSO technique. The variants are LDIW-PSO and RIW-PSO. These were chosen because of the evidence available in the literature that they are less efficient in optimizing many continuous optimization problems [19–21]. These variants are succinctly described below.

2.1. PSO Based on Linear Decreasing Inertia Weight (LDIW-PSO)

This variant was proposed in [9] after the inertia weight parameter was introduced into the original PSO by [7]. It implements the linear decreasing inertia weight strategy in (3), which decreases the weight from a high value $\omega_{max}$ that facilitates exploration to a low value $\omega_{min}$ that promotes exploitation. This greatly improved the performance of PSO. LDIW-PSO performs a global search at the beginning and converges quickly towards optimal positions, but it lacks exploitation power [9] and the ability required to jump out of a local minimum, especially in a multimodal landscape. Some improvements on LDIW-PSO exist in the literature [6, 9, 22]:

$$\omega^{(t)} = \left(\omega_{max} - \omega_{min}\right)\frac{\mathrm{MAXitr} - t}{\mathrm{MAXitr}} + \omega_{min}, \tag{3}$$

where $\omega_{max}$ and $\omega_{min}$ are the initial and final values of the inertia weight, $t$ is the current iteration number, MAXitr is the maximum iteration number, and $\omega^{(t)}$ is the inertia weight value in the $t$th iteration. Apart from the problem of premature convergence, this variant was found inefficient in tracking a nonlinear dynamic system because of the difficulty in predicting whether exploration (a larger inertia weight value) or exploitation (a smaller inertia weight) will be better at any given time in the search space of the nonlinear dynamic system [23].
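As a sketch, the schedule in (3) can be written as a one-line Python helper; the default bounds 0.9 and 0.4 follow the range mentioned in Section 2.2, and the function name is introduced only for the example:

def ldiw(t, max_itr, w_max=0.9, w_min=0.4):
    # Linear decreasing inertia weight of (3): w_max at t = 0, w_min at t = max_itr.
    return (w_max - w_min) * (max_itr - t) / max_itr + w_min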

2.2. PSO Based on Random Inertia Weight (RIW-PSO)

Due to the improved performance of PSO when the constant inertia weight was introduced into it [7], a new era of research was indirectly initiated and this has attracted the attention of many researchers in the field. The inefficiency of the linear decreasing inertia weight, which linearly decreases from 0.9 to 0.4, in tracking a nonlinear dynamic system prompted the introduction of RIW, which randomly varies within the same range of values. Random adjustment is one of the strategies that have been proposed to determine the inertia weight value to further improve the performance of PSO. This strategy is nonfeedback in nature, and the inertia weight takes a different random value at each iteration, drawn from a specified interval. In line with this, the random inertia weight strategy in (4) was introduced into PSO by [23] to enable the algorithm to track and optimize dynamic systems:

$$\omega = 0.5 + \frac{\mathrm{rand()}}{2}. \tag{4}$$

In the equation, rand() is a uniform random number in the interval [0, 1], which makes the formula generate a number varying randomly between 0.5 and 1.0, with a mean value of 0.75. When $c_1$ and $c_2$ are set to 1.494, the algorithm seems to demonstrate better optimizing efficiency. The motivation behind the selection of these values was Clerc's constriction factor [23]. Not much is recorded in the literature regarding the implementation of this variant of PSO; some of the few implementations found are reported in [6, 20–22].
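For completeness, the random strategy in (4) is equally simple to sketch (again a hypothetical helper, not the authors' code):

import numpy as np

def riw(rng=np.random.default_rng()):
    # Random inertia weight of (4): uniform in [0.5, 1.0] with mean 0.75.
    return 0.5 + rng.random() / 2.0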

3. Proposed Local Search Technique

The basic principle underlying the optimizing strategy of the PSO technique is that each particle in the swarm communicates its discoveries to its neighbours, and the particle with the best discovery attracts the others. While this strategy looks very promising, there is the risk of the particles being susceptible to premature convergence, especially when the problem to be optimized is multimodal and high in dimensionality. The reason is that the more the particles share their discoveries among themselves, the more alike their behaviour becomes, until they converge to the same area in the solution search space. If none of the particles can discover a better global best, after some time all the particles will converge about the existing global best, which may not be the global minimizer.

One of the motivations for this local search technique is the challenge of premature convergence associated with the PSO technique, which affects its reliability and efficiency. Another motivation is the decision-making strategy used by the swarm in searching for the optimal solution to optimization problems. The decision is dictated by a single particle in the swarm; that is, other particles follow the best particle among them to search for a better solution. Involving more than one particle in the decision making could lead to a promising region in the search space where the optimal solution could be obtained.

The description of the local search technique is as follows: after all the particles have obtained their various personal best positions, each particle has an equal chance of being selected to contribute its idea towards constructing a potential location in the search space where a better global best could be obtained. To this end, a number of particles equal to the dimension of the problem being optimized are randomly selected (with replacement). Each selected particle contributes an idea by donating the value in the location of its randomly selected dimension from its personal best. All the ideas contributed by the selected particles are collectively used (hybridized) to construct a potential solution in the solution search space. After constructing the potential solution, some searches are done locally around its neighbourhood with the hope of locating a better solution than the current global solution. If a better solution is found, it is used to replace the current global solution; otherwise no replacement is made.

In this local search, the potential new position is denoted by $x'$ and is sampled from the neighbourhood of the collectively constructed potential global solution, represented as $CP$, by

$$x' = CP + \vec{a}, \tag{5}$$

where $\vec{a}$ is a random vector picked uniformly from the range $[-r, r]$ and $r$ is the search radius, which is initially set to $r_{max}$ (the maximum radius for the local search). The local search technique moves from $CP$ to $x'$ when there is an improvement in fitness. If there is no improvement on the fitness of $CP$ by $x'$, the search radius is linearly decreased by multiplying the maximum radius with a factor $q$ using

$$r = q\, r_{max}, \qquad q = \frac{T_{max} - t}{T_{max}}, \tag{6}$$

where $T_{max}$ is the maximum number of times the neighbourhood of $CP$ is to be sampled, $t$ is the current sampling count, and $r_{min}$ is the minimum radius for the local search, below which $r$ is not allowed to decrease.

This proposed local search technique has been named the collective local unimodal search (CLUS) technique. It has some trace of similarity in operation with the local unimodal sampling (LUS) technique [24]. They are, however, quite different in the sense that, while LUS randomly picks a potential solution from the entire population, CLUS constructs a potential solution using the collective efforts of a randomly selected number of particles from the swarm. Also, CLUS uses a linear method to decrease the search radius (step size) in the neighbourhood of the potential solution, which is different from the method applied by LUS during optimization. The CLUS technique is presented in Algorithm 1. In the technique, $f_{gbest}$ and $P_{gbest}$ represent the current global best fitness value and its corresponding position in the search space.

  CP, x′ ← new arrays, each of length Dim
  t ← 0;  r ← r_max
  While  t < T_max  do
   t ← t + 1
   for  d ← 1  to problem Dimension
    randomly select any particle k
    randomly select a dimension j from the personal best of the selected particle
    CP[d] ← pbest_k[j]
   end for
   x′ ← CP + a⃗, with a⃗ picked uniformly from [−r, r];  validate x′ for search space boundary
   if  f(x′) < f_gbest  then
    f_gbest ← f(x′)
    P_gbest ← x′
   else
    q ← (T_max − t)/T_max
    r ← max(q · r_max, r_min)
   end if
  end while
  Return  f_gbest  and  P_gbest
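To make Algorithm 1 concrete, the following Python sketch implements the CLUS step, covering both the collective construction of CP and the neighbourhood sampling of (5) and (6); it is a reconstruction under the assumptions above (including the radius-decay formula and all names) and not the authors' C# implementation:

import numpy as np

def clus(f, pbest, gbest, gbest_fit, lower, upper,
         r_max, r_min, t_max, rng=np.random.default_rng()):
    # f          : objective function to minimize
    # pbest      : (N, D) array of personal best positions
    # gbest      : current global best position (length-D array)
    # gbest_fit  : fitness of gbest
    # lower/upper: search-space bounds; r_max/r_min: search radii
    # t_max      : maximum number of neighbourhood samples
    n, dim = pbest.shape
    r = r_max
    for t in range(1, t_max + 1):
        # Collectively construct CP: each of the D slots is donated by a
        # randomly chosen particle (with replacement) from a randomly
        # chosen dimension of its personal best.
        cp = np.empty(dim)
        for d in range(dim):
            k = rng.integers(n)           # random particle
            j = rng.integers(dim)         # random dimension of its pbest
            cp[d] = pbest[k, j]
        cand = cp + rng.uniform(-r, r, size=dim)   # neighbourhood sample, (5)
        cand = np.clip(cand, lower, upper)         # keep inside the search space
        fit = f(cand)
        if fit < gbest_fit:               # better than the current global best
            gbest_fit, gbest = fit, cand
        else:                             # no improvement: shrink the radius, (6)
            q = (t_max - t) / t_max
            r = max(q * r_max, r_min)
    return gbest, gbest_fit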

4. Improved PSO with Collective Unimodal Local Search (PSOCLUS)

RIW-PSO converges quickly in early iterations and performs mostly global search activities, but it soon gets stuck in local optima because it lacks local search ability. Likewise, LDIW-PSO performs a global search in the earlier part of its iterations but lacks enough momentum to do a local search as it gets towards its terminal point of execution. The aim of this paper is to make a general improvement on the performance of PSO which can be applied to any of its variants. To achieve this, the two PSO variants were hybridized with the proposed collective local unimodal search (CLUS) technique, which takes advantage of their global search abilities and adds some neighbourhood search for better results. The improved PSO algorithm is presented in Algorithm 2.

Begin Algorithm
Step  1. Definition Phase
      (1.1) function to optimize as  f(x)
      (1.2) Parameters
     (1.2.1) swarm size  N
     (1.2.2) problem dimension  Dim
     (1.2.3) solution search space  [X_min, X_max]
     (1.2.4) particle velocity range  [V_min, V_max]
Step  2. Initialization Phase
     For all particles randomly initialized in search space
     (2.1) position  X_i
     (2.2) velocity  V_i
     (2.3) pbest_i ← X_i
     (2.4) gbest ← best of  pbest_i
     (2.5) evaluate  f(X_i)  using objective function of problem
Step  3. Operation Phase
     Repeat until a stopping criterion is satisfied
     (3.1). Compute inertia weight  ω  using any inertia weight formula
     (3.2). For each particle  i
     (3.2.1). update  V_i  for particle  i  using (1)
     (3.2.2). validate  V_i  for velocity boundaries
     (3.2.3). update  X_i  for particle  i  using (2)
     (3.2.4). validate  X_i  for position boundaries
     (3.2.5). If  f(X_i) < f(pbest_i)  then  pbest_i ← X_i
     (3.3). gbest ← best of  pbest_i
     (3.4). Implement local search using CLUS in  Algorithm 1
Step  4. Solution Phase
     (4.1). P_gbest ← gbest
     (4.2). f_gbest ← f(gbest)
     (4.3). Return  f_gbest  and  P_gbest
End     Algorithm
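Putting Algorithm 2 together with the earlier sketches, a minimal PSOCLUS loop might look as follows; this is a Python reconstruction rather than the authors' C# program, the clus and riw helpers refer to the sketches above, and the velocity range tied to the search range is an assumption made only for the example:

import numpy as np

def psoclus(f, dim, lower, upper, swarm_size=20, max_itr=1000,
            c1=1.494, c2=1.494, r_max=1.0, r_min=1e-4, t_max=50,
            rng=np.random.default_rng()):
    # Improved PSO with collective local unimodal search (Algorithm 2, sketch).
    v_max = upper - lower                               # assumed velocity bound
    x = rng.uniform(lower, upper, (swarm_size, dim))    # Step 2: positions
    v = rng.uniform(-v_max, v_max, (swarm_size, dim))   # ... and velocities
    pbest = x.copy()
    pbest_fit = np.apply_along_axis(f, 1, x)
    g = int(np.argmin(pbest_fit))
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    for t in range(max_itr):                            # Step 3: operation phase
        w = (0.9 - 0.4) * (max_itr - t) / max_itr + 0.4 # (3.1) LDIW of (3); use riw() for R-PSOCLUS
        r1 = rng.random((swarm_size, dim))
        r2 = rng.random((swarm_size, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # (1)
        v = np.clip(v, -v_max, v_max)                   # (3.2.2) velocity boundaries
        x = np.clip(x + v, lower, upper)                # (2) and (3.2.4) position boundaries
        fit = np.apply_along_axis(f, 1, x)
        better = fit < pbest_fit                        # (3.2.5) update personal bests
        pbest[better], pbest_fit[better] = x[better], fit[better]
        g = int(np.argmin(pbest_fit))                   # (3.3) update global best
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
        gbest, gbest_fit = clus(f, pbest, gbest, gbest_fit, lower, upper,
                                r_max, r_min, t_max, rng)   # (3.4) CLUS local search
    return gbest, gbest_fit                             # Step 4: solution phase

As a usage example, psoclus(lambda z: np.sum(z**2), dim=30, lower=-100.0, upper=100.0) would minimize a 30-dimensional Sphere-type function of the kind used in the benchmarks.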

5. Numerical Simulations

In this section, the improved algorithm (PSOCLUS) was implemented using the inertia weight strategy of RIW-PSO, and this variant was labeled R-PSOCLUS. It was also implemented using the inertia weight strategy of LDIW-PSO, and this variant was labeled L-PSOCLUS. The performances of R-PSOCLUS and L-PSOCLUS were experimentally tested against those of RIW-PSO and LDIW-PSO, respectively. The maximum number of iterations allowed was 1000 for problems with dimensions less than or equal to 10, 2000 for 20-dimensional problems, and 3000 for 30-dimensional problems. A swarm size of 20 was used in all the experiments, and twenty-five independent runs were conducted to collect data for analysis. The termination criterion for all the algorithms was the maximum number of iterations, set relative to the problem dimension. A run in which an algorithm is able to satisfy the set success criteria (see Table 1) before or at the maximum iteration is considered successful. To further prove the efficiency of the proposed local search technique, the proposed PSO variants were also compared with some existing PSO variants hybridized with different local search techniques, namely, PSO with golden ratio local search [15] and PSO with local interpolation search [18]. A total of six experiments were conducted:
(i) R-PSOCLUS was compared with PSO with golden ratio local search (GLSPSO);
(ii) R-PSOCLUS was compared with PSO with local interpolation search (PSOlis);
(iii) R-PSOCLUS was compared with RIW-PSO;
(iv) L-PSOCLUS was compared with GLSPSO;
(v) L-PSOCLUS was compared with PSOlis;
(vi) L-PSOCLUS was compared with LDIW-PSO.

The application software was developed in Microsoft Visual C# programming language.

5.1. Test Problems

A total of 21 problems were used in the experiments. These problems have different degrees of complexity and multimodality, representing landscapes diverse enough to cover many of the situations that can arise in global optimization. Shown in Table 2 are the problem dimensions, optimal fitness values, and success thresholds. Presented in Table 3 are the definitions, characteristics (US: unimodal separable, UN: unimodal nonseparable, MS: multimodal separable, and MN: multimodal nonseparable), and search ranges of the problems. More details on the benchmark problems can be found in [22, 25–27].

5.2. Parameter Setting

The additional parameters that were set in the experiments are the inertia weight thresholds for LDIW-PSO ($\omega_{max}$ and $\omega_{min}$), the acceleration coefficients ($c_1$ and $c_2$), the velocity thresholds ($V_{min}$ and $V_{max}$), the minimum radius ($r_{min}$) and maximum radius ($r_{max}$) for the local search, as well as the maximum number of neighbourhood samplings ($T_{max}$) during the local search. The respective settings of these parameters are shown in Table 1. The parameters $r_1$ and $r_2$ were generated using the uniform random number generator. The values of $\omega_{max}$ and $\omega_{min}$ were chosen for LDIW-PSO based on the experiments conducted in [9]; the value of 1.494 for $c_1$ and $c_2$ was chosen for RIW-PSO based on the recommendation in [23] and was also used for LDIW-PSO because it was discovered in the course of the experiments in this paper that this value makes LDIW-PSO perform better than the commonly used value of 2.0. The settings for $V_{min}$ and $V_{max}$ were based on the outcome of the experimental studies in [8].

5.3. Performance Measurement

The efficiency of the algorithms was tested against the set of benchmark problems given in Table 2, and the numerical results obtained were analyzed using the criteria listed below. All the results are presented in Tables 4 to 20.
(i) Best fitness solution: the best of the fitness solutions obtained during the runs.
(ii) Mean best fitness solution: a measure of the precision (quality) of the result that the algorithm can attain within the given iterations over all 25 runs.
(iii) Standard deviation (Std. Dev.) of the mean best fitness solution over 25 runs: this measures the algorithm's stability and robustness.
(iv) Average number of iterations in which an algorithm was able to reach the success threshold.
(v) Success rate (SR) = (number of successful runs / total number of runs) × 100%: the rate at which the success threshold is met over the independent runs, which reflects the global search ability and robustness of the algorithm.

Statistical analysis using the Wilcoxon signed rank nonparametric test at the 0.05 level of significance [28, 29] was also performed on the numerical results obtained by the algorithms, while box plots were used to analyze the variability of the fitness values they obtained across all the runs.
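For reference, this kind of pairwise comparison can be reproduced with SciPy; the sketch below assumes the per-problem mean best fitness values of two algorithms are held in two equal-length lists (the numbers shown are placeholders, not results from the paper):

from scipy.stats import wilcoxon

# Placeholder per-problem mean best fitness values for two algorithms.
mean_fit_a = [1.2e-3, 4.5, 0.0, 2.1e-1, 7.8e+2]
mean_fit_b = [3.4e-2, 6.7, 1.0e-5, 2.5e-1, 9.1e+2]

stat, p_value = wilcoxon(mean_fit_a, mean_fit_b)   # paired, nonparametric test
print(f"W = {stat}, p = {p_value:.3f}")            # significant if p < 0.05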

5.4. Results and Discussions

Results obtained from all the experiments are discussed in this subsection to show the overall performance of the various algorithms. Presented in Tables 4, 5, 6, 7, 8, and 9 are the numerical results obtained and used to compare R-PSOCLUS and L-PSOCLUS with GLSPSO. R-PSOCLUS and L-PSOCLUS were also compared with PSOlis using the results presented in Table 10. The results in Tables 11–18 were obtained for the scaled and nonscaled test problems listed in Table 3; these results were used to validate RIW-PSO, R-PSOCLUS, LDIW-PSO, and L-PSOCLUS. In each of the tables, for ease of observation, bold values represent the better results and "–" means that the algorithm could not satisfy the success threshold in any of the runs. The Wilcoxon signed rank nonparametric test, which is used as an alternative to the paired t-test when the results cannot be assumed to be normally distributed, was applied to test the statistical significance of the differences between RIW-PSO and R-PSOCLUS as well as between LDIW-PSO and L-PSOCLUS.

5.4.1. Comparison of R-PSOCLUS and Golden Ratio Local Search Based PSO (GLSPSO)

The results in Tables 4–6 show the performance and abilities of R-PSOCLUS and GLSPSO in optimizing the test problems over three different problem dimensions. The results of GLSPSO were obtained from [15]. A large problem space was used for all the problems to compare the two different local search techniques hybridized with the PSO variants. From the results it is evident that R-PSOCLUS is superior to GLSPSO. Apart from the Ackley problem (across the three dimensions) and Rosenbrock (in dimension 100), R-PSOCLUS outperformed GLSPSO. It was able to obtain the optimal minimum for some of the problems, demonstrating better exploitation ability, convergence precision, and solution quality.

5.4.2. Comparison between L-PSOCLUS and GLSPSO

To further demonstrate the efficiency of the proposed local search technique, L-PSOCLUS was also implemented and its results were compared with the results of GLSPSO obtained from [15]. Three different dimensions were again used for the problems. As can be observed in Tables 7–9, across the three dimensions, GLSPSO was only able to perform better than L-PSOCLUS on the Ackley problem. Apart from the Griewank problem (dimension 10), GLSPSO was outperformed by L-PSOCLUS in the remaining four problems. L-PSOCLUS was able to obtain the global optimum for the Griewank and Sphere problems across the three dimensions, and for Rastrigin it was able to reach the global minimum in dimension 10. Again, the proposed local search technique demonstrates better exploitation ability than GLSPSO.

5.4.3. Comparison of R-PSOCLUS and L-PSOCLUS with PSOlis

Presented in Table 10 are the results obtained by L-PSOCLUS in comparison with the results for PSOlis from [18]. Again, the outstanding performance of L-PSOCLUS over its competitor is evidence that the proposed local search technique is very efficient and capable of complementing the global search ability of PSO to obtain quality results by making it overcome premature convergence.

5.4.4. Comparison between RIW-PSO and R-PSOCLUS

The results presented in Table 11 are for the nonscaled test problems as optimized by the two algorithms, while those in Tables 12–14 are for the scaled problems with 10, 20, and 30 dimensions, respectively. Table 19 shows the results obtained using the Wilcoxon signed rank nonparametric test.

(1)  Results for the Nonscaled Problems. For the 7 nonscaled problems, Table 11 shows that there are no performance differences between the two algorithms in optimizing the Booth, Easom, Shubert, and Trid problems. For Michalewicz, Schaffer's f6, and Salomon, R-PSOCLUS obtained better quality solutions and demonstrated better global search ability than RIW-PSO. The convergence curves in Figures 1(c) and 1(d) show that R-PSOCLUS has faster and better convergence. However, the p value (0.190) obtained from the Wilcoxon signed rank test shown in Table 19 revealed that there is no statistically significant difference in performance between the two algorithms for the nonscaled problems. Also, the two algorithms have equal median fitness.

(2)  Results for 10-Dimensional Problems. For the scaled problems with 10 dimensions, Table 12 clearly reveals great differences in performance between RIW-PSO and R-PSOCLUS. The two algorithms successfully optimized the Rastrigin, Rosenbrock, Rotated Ellipsoid, Schwefel 2.22, Sphere, and Sum Squares problems with an equal success rate of 100%, but R-PSOCLUS obtained significantly better mean fitness and standard deviation with fewer iterations. R-PSOCLUS was able to obtain the minimum optima for both the Rastrigin and Step problems. For the other problems, R-PSOCLUS clearly outperformed RIW-PSO in solution quality, convergence precision, global search ability, and robustness, though neither of them could meet the success threshold in optimizing the Noisy Quartic problem. The p value (0.001) obtained from the Wilcoxon signed rank test presented in Table 19 indicates that there is a statistically significant difference in performance between the two algorithms, with a large effect size in favour of R-PSOCLUS. The median fitness is also evidence of this.

(3)  Results for 20-Dimensional Problems. The same set of experiments was performed using the same scaled problems but with their dimensions increased to 20, which also increased their complexities, except for Griewank. The numerical results in Table 13 also show that there are great differences in performance between RIW-PSO and R-PSOCLUS. The two algorithms had an equal success rate of 100% in optimizing the Rosenbrock, Schwefel 2.22, Sphere, and Sum Squares problems, with R-PSOCLUS obtaining significantly better mean fitness (except for Rosenbrock), better standard deviation, and fewer iterations. Out of the remaining 10 problems, R-PSOCLUS outperformed RIW-PSO in 9 of them with better solution quality, convergence precision, global search ability, and robustness; it also had a success rate of 100% in 6 of these problems, in contrast with RIW-PSO, and was able to obtain the global minimum for the Griewank and Step problems. The algorithms could not meet the success criteria in optimizing the Dixon-Price, Noisy Quartic, and Schwefel problems, but R-PSOCLUS still performed better than RIW-PSO. The p value (0.023) from the Wilcoxon signed rank test, as shown in Table 19, also indicates that there is a statistically significant difference in performance between the two algorithms, with a medium effect size in favour of R-PSOCLUS. The median fitness value of R-PSOCLUS is smaller than that of RIW-PSO.

(4)  Results for 30-Dimensional Problems. Table 14 presents the experimental results obtained by the two algorithms using the same scaled problems but with their dimensions scaled to 30, which further increased their complexities, except for Griewank. The results further show the great differences in performance between RIW-PSO and R-PSOCLUS. Out of the 14 problems, R-PSOCLUS had a 100% success rate in 7 of them (4 multimodal and 3 unimodal) while RIW-PSO could only do so in 3 (all unimodal). The two algorithms had an equal success rate of 100% in optimizing the Schwefel 2.22, Sphere, and Sum Squares problems, with R-PSOCLUS obtaining significantly better mean fitness, better standard deviation, and fewer iterations. In optimizing the Dixon-Price, Noisy Quartic, Rotated Ellipsoid, and Schwefel problems, neither algorithm could meet the success criteria, yet R-PSOCLUS still obtained better results than RIW-PSO. In all the 14 problems except Rotated Ellipsoid, R-PSOCLUS outperformed RIW-PSO and was able to obtain the global minimum for the Griewank and Step problems. The p value (0.009) from the Wilcoxon signed rank test in Table 19 confirms that there is a statistically significant difference in performance between RIW-PSO and R-PSOCLUS, with a large effect size in favour of R-PSOCLUS. The median fitness value of R-PSOCLUS is also smaller than that of RIW-PSO. The convergence graphs of six 30-dimensional test problems shown in Figure 2 demonstrate the convergence speed and ability of the two algorithms. From the graphs it is clear that R-PSOCLUS demonstrates better convergence and global search ability than RIW-PSO. Besides, it also possesses a better ability to get out of local optima.

5.4.5. Comparison between LDIW-PSO and L-PSOCLUS

Presented in Table 15 are the results for the nonscaled test problems as optimized by the two algorithms, while those in Tables 16–18 are for the scaled problems with 10, 20, and 30 dimensions, respectively. The statistical analysis done by applying the Wilcoxon signed rank nonparametric test is presented in Table 20.

(1)  Results for the Nonscaled Problems. The results in Table 15 show that there are no clear performance differences between LDIW-PSO and L-PSOCLUS in optimizing the Booth, Easom, Shubert, and Trid problems; however, there are some small differences in their average number of iterations to reach the success thresholds and in their standard deviations; in Shubert, LDIW-PSO obtained 100% success but L-PSOCLUS did not. Figures 1(a), 1(b), 1(e), and 1(f) show their convergence behaviour. In optimizing Michalewicz, Schaffer's f6, and Salomon, L-PSOCLUS obtained better quality solutions and has better search ability than LDIW-PSO. Also, the convergence graphs in Figures 1(c) and 1(d) show that L-PSOCLUS has faster and better convergence for Schaffer's f6 and Salomon. The curves show that the two algorithms were trapped in local optima, as indicated by the flat parts of their curves, and were able to escape from some of them. The p value (0.190) in Table 20, obtained from the Wilcoxon signed rank test, indicates that there is no statistically significant difference in performance between the two algorithms.

(2)  Results for 10-Dimensional Problems. In optimizing the 10-dimensional scaled problems, L-PSOCLUS had 100% success in 10 of the 14 problems (4 multimodal and 6 unimodal) while LDIW-PSO had 100% success in 6 problems (1 multimodal and 5 unimodal), as shown in Table 16. Only L-PSOCLUS could successfully obtain the minimum optima for both the Rastrigin and Step problems, and neither algorithm could reach the success threshold for Dixon-Price and Noisy Quartic. In all the problems except Dixon-Price (where they have approximately equal performance) and Sum Squares, L-PSOCLUS clearly outperformed LDIW-PSO, obtaining better solution quality, convergence precision, global search ability, and robustness as well as fewer iterations. To confirm this, the Wilcoxon signed rank test was performed on the mean best fitness over all the problems and the results are presented in Table 20; the p value (0.003) obtained indicates that there is a statistically significant difference in performance between the two algorithms, with a large effect size in favour of L-PSOCLUS, which also has a lower median value for the mean best fitness.

(3)  Results for 20-Dimensional Problems. The numerical results in Table 17 also show that there are great differences in performance between LDIW-PSO and L-PSOCLUS when performing the same set of experiments but with the problem dimensions increased to 20. The two algorithms had an equal success rate of 100% in optimizing the Ackley, Rosenbrock, Schwefel 2.22, Sphere, and Sum Squares problems, with L-PSOCLUS obtaining significantly better mean fitness (except for Rosenbrock), better standard deviation, and fewer iterations. L-PSOCLUS outperformed LDIW-PSO in 7 (5 multimodal and 2 unimodal) of the remaining 9 problems, obtaining better solutions, convergence precision, global search ability, and robustness; it was also able to obtain the global minimum for the Step problem. The algorithms could not reach the success thresholds for the Dixon-Price, Noisy Quartic, and Schwefel problems. The nonparametric Wilcoxon signed rank test, with results shown in Table 20, also confirms a statistically significant difference in performance between the two algorithms, with a p value of 0.011 and a large effect size in the direction of L-PSOCLUS. The median fitness value of L-PSOCLUS is also smaller than that of LDIW-PSO.

(4)  Results for 30-Dimensional Problems. Scaling the dimensions of the test problems to 30 to further increase their complexities, except for Griewank which decreases in complexity with increased dimension, did not affect the better performance of L-PSOCLUS over LDIW-PSO. Presented in Table 18 are the experimental results obtained by the two algorithms optimizing the same scaled problems. The results indicate that there are great differences between LDIW-PSO and L-PSOCLUS in performance. Out of the 14 problems, L-PSOCLUS had a 100% success rate in 6 of them (3 multimodal and 3 unimodal) while LDIW-PSO could only do so in 3 (1 multimodal and 2 unimodal). They had an equal success rate of 100% in optimizing the Ackley, Sphere, and Sum Squares problems and 92% in Rosenbrock, with L-PSOCLUS obtaining significantly better results. In optimizing the Dixon-Price, Noisy Quartic, and Schwefel problems, neither algorithm could reach the success threshold, yet L-PSOCLUS still obtained better results than LDIW-PSO, except in Dixon-Price where they had approximately the same performance. LDIW-PSO was not able to reach the success threshold for the Noncontinuous Rastrigin and Rotated Ellipsoid problems, unlike L-PSOCLUS. In all the 14 problems, L-PSOCLUS conceded none to LDIW-PSO, and it was able to obtain the global minimum for the Griewank and Step problems. The p value (0.001) in Table 20 further confirms that there is a statistically significant difference between LDIW-PSO and L-PSOCLUS, with a large effect size in the direction of L-PSOCLUS. The median value for the mean fitness of L-PSOCLUS is also smaller than that of LDIW-PSO. Figure 1 shows the convergence graphs of the two algorithms. From the graphs it is clear that L-PSOCLUS demonstrates better convergence speed, better ability to escape premature convergence, and better global search ability than LDIW-PSO.

5.4.6. Box Plots Analysis

Other than using statistical tests to observe the performance of RIW-PSO, R-PSOCLUS, LDIW-PSO, and L-PSOCLUS, box plot analysis was also performed for 6 of the scaled test problems with 30 dimensions; the results are presented in Figures 3(a)–3(f). Box plots give a direct visual comparison of both the location and the dispersion of the data. The four algorithms are plotted together to optimize space. In each of the plots, RIW-PSO is compared with R-PSOCLUS while LDIW-PSO is compared with L-PSOCLUS. The plots strengthen and justify the better performance of PSO when used with the proposed local search technique.

6. Conclusion

A new local search technique has been proposed in this paper with the goal of addressing the problem of premature convergence associated with particle swarm optimization algorithms. The proposed local search was used to efficiently improve the performance of two existing PSO variants, RIW-PSO and LDIW-PSO. These variants have been known to be less efficient in optimizing continuous optimization problems. In this paper they were hybridized with the local search to form two new variants, R-PSOCLUS and L-PSOCLUS. Some well-studied benchmark problems with low and high dimensions were used to extensively validate the performance of these new variants, and comparisons were made with RIW-PSO and LDIW-PSO. They were also compared with two other PSO variants in the literature which are hybridized with different local search techniques. The experimental results obtained show that the proposed variants successfully obtain better quality results while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants. This shows that the proposed local search technique can help PSO algorithms carry out effective exploitation of the search space to obtain high quality results for complex continuous optimization problems. This local search technique can be used with any population-based optimization algorithm to obtain quality solutions to simple and complex optimization problems.

Further study is needed on the parameter tuning of the proposed local search technique. An empirical investigation of the behaviour of the technique in optimizing problems with noise also needs further study. The scalability of the algorithms to problems with dimensions greater than 100 should also be examined. Finally, the proposed algorithm can be applied to real-world optimization problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors acknowledge the College of Agriculture, Engineering and Sciences, University of KwaZulu-Natal, South Africa, for their support towards this work.