The Scientific World Journal

Special Issue: Bioinspired Computation and Its Applications in Operation Management

Research Article | Open Access

Volume 2014 |Article ID 798129 | 23 pages | https://doi.org/10.1155/2014/798129

Improved Particle Swarm Optimization with a Collective Local Unimodal Search for Continuous Optimization Problems

Academic Editor: T. Chen
Received: 31 Oct 2013
Accepted: 29 Dec 2013
Published: 25 Feb 2014

Abstract

A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed local search technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of times the selection is made varies with the dimension of the optimization problem, and each selected particle donates the value in the location of its randomly selected dimension from its personal best. After constructing the potential particle position, some local search is done around its neighbourhood in comparison with the current swarm global best position. It is then used to replace the global best particle position if it is found to be better; otherwise no replacement is made. Using some well-studied benchmark problems with low and high dimensions, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four different PSO variants; two of the variants implement different local search techniques while the other two do not. Results show that the improved algorithms could obtain better quality solutions while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants.

1. Introduction

Optimization comes into focus when there is a need to plan, take decisions, operate and control systems, design models, or seek optimal solutions to the variety of problems people face from day to day. A number of these problems, which can be formulated as continuous optimization problems, are often approached with limited resources. Dealing with such problems, especially when they are large scale and complex, has motivated the development of different nature-inspired optimization algorithms. These algorithms provide researchers with problem-solving capabilities for complex and challenging optimization problems, with many success stories. Swarm-based techniques are a family of nature-inspired algorithms and are population-based in nature; they are also known as evolutionary computation techniques. The particle swarm optimization (PSO) technique is a member of the swarm-based techniques which is capable of producing low-cost, fast, and robust solutions to several complex optimization problems. It is a stochastic, self-adaptive, and problem-independent optimization technique and was originally proposed in 1995 by Eberhart and Kennedy as a simulation of a flock of birds or the sociological behaviour of a group of people [1, 2]. From the time this concept was brought into optimization, it has been used extensively in many fields, which include function optimization and many difficult real-world optimization problems [3–5].

The PSO technique was initially implemented with a few lines of code using basic mathematical operations, with no major adjustment needed to adapt it to new problems, and it was almost independent of the initialization of the swarm [6]. It needs only a few parameters for successful and efficient behaviour in obtaining quality solutions. To implement the technique, a number of particles, characterized by positions and velocities and collectively called a swarm, are randomly distributed in a solution search space according to the boundaries defined for the design variables of the problem being optimized. The number of design variables determines the dimensionality of the search space. If a D-dimensional space is considered, the position and velocity of the ith particle are represented as the vectors X_i = (x_{i1}, ..., x_{iD}) and V_i = (v_{i1}, ..., v_{iD}), respectively. Every particle has a memory of its personal experience which is communicated to all reachable neighbours in the search space to guide the direction of movement of the swarm. Also, the quality of each particle (solution) is determined by the objective function of the problem being optimized, and the particle with the best quality is taken as the global solution towards which other particles will converge. The common practice is for the technique to maintain a single swarm of particles throughout its operation. The process of seeking the optimal solution involves adjusting the position and velocity of each particle in every iteration using

v_{id}^{t+1} = ω v_{id}^{t} + c_1 r_1 (p_{id}^{t} − x_{id}^{t}) + c_2 r_2 (p_{gd}^{t} − x_{id}^{t}),  (1)

x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}.  (2)

In (1) and (2), P_i = (p_{i1}, ..., p_{iD}) and P_g = (p_{g1}, ..., p_{gD}) are vectors representing the ith particle's personal best and the swarm's global best positions, respectively; i = 1, 2, ..., N and d = 1, 2, ..., D; c_1 and c_2 are acceleration factors known as the cognitive and social scaling parameters that determine the magnitude of the random forces in the direction of P_i and P_g; r_1 and r_2 are random numbers between 0 and 1; t is the iteration index. The symbol ω is the inertia weight parameter which was introduced into the original PSO in [7]. The purpose of its introduction was to help the PSO algorithm balance its global and local search activities.
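For readers who find code easier to follow, the sketch below shows the update in (1) and (2) for a single particle in Python. It is only an illustration under the definitions above; the function name, parameter defaults, and list-based representation are ours, not part of the original paper.

import random

def update_particle(x, v, pbest, gbest, w, c1=1.494, c2=1.494):
    # x, v, pbest, gbest are lists of length D (one entry per dimension).
    new_x, new_v = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()   # r1, r2 in [0, 1]
        vd = w * v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])  # eq. (1)
        new_v.append(vd)
        new_x.append(x[d] + vd)                      # eq. (2)
    return new_x, new_v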

There is a possibility of the positions and velocities of the particles in the swarm increasing in value beyond what is necessary when they are updated. As a control measure, the positions are clamped in each dimension to the search range [Xmin, Xmax] of the design variables, where Xmin and Xmax represent the lower and upper bounds of a particle's position, respectively, while the velocities are controlled to be within a specified range [Vmin, Vmax], where Vmin and Vmax represent the lower and upper bounds of a particle's velocity, respectively. The idea of velocity clamping, which was introduced by [1, 2, 8] and extensively experimented with in [9], has led to significant improvement in the performance of PSO. This is so because the particles can concentrate on taking reasonably sized steps to search through the search space rather than bouncing about excessively.
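A minimal illustration of the clamping just described (the helper name and its use are ours):

def clamp(vector, lower, upper):
    # Restrict every component to the interval [lower, upper].
    return [min(max(value, lower), upper) for value in vector]

# velocity = clamp(velocity, Vmin, Vmax)   # keep velocities in [Vmin, Vmax]
# position = clamp(position, Xmin, Xmax)   # keep positions in [Xmin, Xmax]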

A major feature that characterizes an efficient optimization algorithm is the ability to strike a balance between local and global search. Global search means that the particles are able to advance from a solution to other parts of the search space and locate other promising candidates, while local search means that a particle is capable of exploiting the neighbourhood of the present solution for other promising candidates. In PSO, as the rate of information sharing increases among the particles, they migrate towards the same direction and region in the search space. If none of the particles can locate a better global solution after some time, they will eventually converge around the existing one, which may not be the global minimum, because of a lack of exploration power; this is known as premature convergence. This type of behaviour is more likely when the swarm of particles is overconcentrated. It could also occur when the optimization problem is of high dimension and/or nonconvex. One possible way to prevent this premature convergence is to embed a local search technique into the PSO algorithm to help improve the quality of each solution by searching its neighbourhood. After the improvement, better information is communicated among the particles, thereby increasing the algorithm's ability to locate a better global solution in the course of optimization. Hill climbing, modified Hooke and Jeeves, gradient descent, golden ratio, stochastic local search, adaptive local search, local interpolation, simulated annealing, and chaotic local search are different local search techniques that have been combined with PSO to improve its local search ability [10–18].

In this paper, a different local search technique is proposed to harness the global search ability of PSO and improve on its local search efforts. The technique is based on the collective efforts of particles selected at random (with replacement) a number of times equal to the problem dimension. When a particle is selected, it contributes the value in the position of its randomly selected dimension from its personal best. The contributed values are then used to form a potential global best solution which is further refined. This concept could offer PSO the ability to enhance its performance in terms of convergence speed, local search ability, robustness, and solution accuracy. The local search technique was hybridized with two of the existing PSO variants, namely, random inertia weight PSO (RIW-PSO) and linear decreasing inertia weight PSO (LDIW-PSO), to form two new variants. Numerical simulations were performed to validate the efficiency of each of them, and some statistical analyses were performed to ascertain any statistically significant difference in performance between the proposed variants and the old ones. The results obtained show that the proposed variants are very efficient.

In the sections that follow, RIW-PSO and LDIW-PSO are briefly described in Section 2; the motivation and description of the proposed local search technique are presented in Section 3 while the improved PSO with local search technique is described in Section 4. Numerical simulations are performed in Section 5 and Section 6 concludes the paper.

2. The Particle Swarm Optimization Variants Used

Two PSO variants were used to validate the proposed improvement of the performance of PSO technique. The variants are LDIW-PSO and RIW-PSO. These were chosen because of the evidence available in the literature that they are less efficient in optimizing many continuous optimization problems [1921]. These variants are succinctly described below.

2.1. PSO Based on Linear Decreasing Inertia Weight (LDIW-PSO)

This variant was proposed in [9] after the inertia weight parameter was introduced into the original PSO by [7]. It implements the linear decreasing inertia weight strategy represented in (3), which decreases the inertia weight from a high value ω_start, which facilitates exploration, to a low value ω_end, which on the other hand promotes exploitation; this greatly improved the performance of PSO:

ω_t = (ω_start − ω_end) (MAXitr − t)/MAXitr + ω_end,  (3)

where ω_start and ω_end are the initial and final values of the inertia weight, t is the current iteration number, MAXitr is the maximum iteration number, and ω_t is the inertia weight value in the tth iteration. LDIW-PSO does global search at the beginning and converges quickly towards optimal positions but lacks exploitation power [9] and the ability required to jump out of a local minimum, most especially in multimodal landscapes. Some improvements on LDIW-PSO exist in the literature [6, 9, 22]. Apart from the problem of premature convergence, this variant was found inefficient in tracking a nonlinear dynamic system because of the difficulty in predicting whether exploration (a larger inertia weight value) or exploitation (a smaller inertia weight) will be better at any given time in the search space of the nonlinear dynamic system [23].
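As a one-line illustration of (3) (the helper name and defaults are ours):

def ldiw(t, max_itr, w_start=0.9, w_end=0.4):
    # Linear decreasing inertia weight of (3): from w_start down to w_end.
    return (w_start - w_end) * (max_itr - t) / max_itr + w_end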

2.2. PSO Based on Random Inertia Weight (RIW-PSO)

Due to the improved performance of PSO when the constant inertia weight was introduced into it [7], a new era of research was indirectly initiated and this has attracted the attention of many researchers in the field. The inefficiency of the linear decreasing inertia weight, which decreases linearly from 0.9 to 0.4, in tracking a nonlinear dynamic system prompted the introduction of RIW, which varies randomly within the same range of values. Random adjustment is one of the strategies that have been proposed to determine the inertia weight value to further improve the performance of PSO. This strategy is nonfeedback in nature, and the inertia weight takes a different random value at each iteration, drawn from a specified interval. In line with this, the random inertia weight strategy represented in (4) was introduced into PSO by [23] to enable the algorithm to track and optimize dynamic systems:

ω = 0.5 + rand()/2,  (4)

where rand() is a uniform random number in the interval [0, 1], which makes the formula generate a number varying randomly between 0.5 and 1.0, with a mean value of 0.75. When c_1 and c_2 are set to 1.494, the algorithm seems to demonstrate better optimizing efficiency. The motivation behind the selection of these values was Clerc's constriction factor [23]. Not much is recorded in the literature regarding the implementation of this variant of PSO. Some of the few implementations found in the literature are recorded in [6, 20–22].
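The corresponding one-liner for (4) (illustrative):

import random

def riw():
    # Random inertia weight of (4): uniform in [0.5, 1.0] with mean 0.75.
    return 0.5 + random.random() / 2.0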

3. Proposed Local Search Technique

The basic principle underlying the optimizing strategy of the PSO technique is that each particle in the swarm communicates its discoveries to its neighbours, and the particle with the best discovery attracts the others. While this strategy looks very promising, there is the risk of the particles being susceptible to premature convergence, especially when the problem to be optimized is multimodal and high in dimensionality. The reason is that the more the particles share their discoveries among themselves, the more alike their behaviour becomes, until they converge to the same area in the solution search space. If none of the particles can discover a better global best, after some time all the particles will converge around the existing global best, which may not be the global minimizer.

One of the motivations for this local search technique is the challenge of premature convergence associated with the PSO technique, which affects its reliability and efficiency. Another motivation is the decision-making strategy used by the swarm in searching for the optimal solution to optimization problems. The decision is dictated by a single particle in the swarm; that is, the other particles follow the best particle among them to search for better solutions. Involving more than one particle in the decision making could lead to a promising region in the search space where the optimal solution could be obtained.

The local search technique is described as follows: after all the particles have obtained their various personal best positions, each particle has an equal chance of being selected to contribute an idea towards a potential location in the search space where a better global best could be obtained. To this end, a number of particles equal to the dimension of the problem being optimized are randomly selected (with replacement). Each selected particle contributes an idea by donating the value in the location of its randomly selected dimension from its personal best. All the ideas contributed by the selected particles are collectively used (hybridized) to construct a potential solution in the solution search space. After constructing the potential solution, some searches are done locally around its neighbourhood with the hope of locating a better solution in comparison with the current global solution. If a better solution is found, it is then used to replace the current global solution; otherwise no replacement is made.
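A small sketch of the collective construction step in Python; the reading that each donated value fills the coordinate currently being constructed is our interpretation, and the names are illustrative.

import random

def construct_potential_solution(personal_bests):
    # personal_bests: one personal-best position (list of length Dim) per particle.
    dim = len(personal_bests[0])
    point = []
    for _ in range(dim):                       # one selection per problem dimension
        donor = random.choice(personal_bests)  # randomly selected particle (with replacement)
        k = random.randrange(dim)              # its randomly selected dimension
        point.append(donor[k])                 # donated value (assumed to fill the next coordinate)
    return point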

In this local search, the potential new position is denoted by X′ and is sampled from the neighbourhood of the collectively constructed potential global solution, represented as P, by

X′ = P + r · d,  (5)

where d is a random vector picked uniformly from the range [−1, 1] and r is the search radius, which is initially set to maxR (the maximum radius for the local search). The local search technique moves from position P to position X′ when there is an improvement in the fitness. If X′ brings no improvement in fitness, the search radius is linearly decreased by multiplying it with a factor q using

r ← q · r,  (6)

where q is set so that r decreases linearly from maxR towards minR as the current sampling count t approaches maxT (the maximum number of times the neighbourhood of P is to be sampled), minR being the minimum radius for the local search.
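The sampling step of (5), together with one plausible linear radius schedule standing in for (6), is sketched below; the exact decay factor is not reproduced in this text, so treat the schedule as an assumption.

import random

def sample_neighbour(P, r):
    # X' = P + r * d with d drawn uniformly from [-1, 1] in every dimension, as in (5).
    return [p + r * random.uniform(-1.0, 1.0) for p in P]

def next_radius(t, max_t, max_r=2.0, min_r=0.01):
    # Assumed linear decrease of the search radius from max_r towards min_r.
    return max(min_r, max_r - (max_r - min_r) * t / max_t)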

This proposed local search technique has been named the collective local unimodal search (CLUS) technique. It has some trace of similarity in operation with the local unimodal sampling (LUS) technique [24]. However, they are quite different in the sense that, while LUS randomly picks a potential solution from the entire population, CLUS constructs a potential solution using the collective efforts of a randomly selected number of particles from the swarm. Also, CLUS uses a linear method to decrease the search radius (step size) in the neighbourhood of the potential solution, which is different from the method applied by LUS during optimization. The CLUS technique is presented in Algorithm 1. In the technique, F_gbest and gbest represent the current global fitness value and its corresponding position in the search space.

  P, X ← new arrays, each of length Dim
  t ← 1;  r ← maxR
  While  t ≤ maxT  do
   d ← random vector picked uniformly from [−1, 1]^Dim
   for  j ← 1  to problem Dimension
    randomly select any particle i
    randomly select a dimension k from the personal best of the selected particle
    P[j] ← pbest_i[k]
   end for
   X ← P + r · d;  validate X for search space boundary
   if  f(X) < F_gbest  then
    gbest ← X
    F_gbest ← f(X)
   else
    r ← q · r  (decrease the search radius linearly towards minR)
    t ← t + 1
   end if
  end while
  Return  F_gbest  and  gbest
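Putting the pieces together, the following self-contained Python sketch is one reading of Algorithm 1; the function names, the per-iteration sampling budget, and the linear radius schedule are our assumptions, not the authors' code.

import random

def clus(personal_bests, gbest, f_gbest, objective, lower, upper,
         max_t=100, max_r=2.0, min_r=0.01):
    # Collective local unimodal search: build a point from the swarm's personal
    # bests, sample its neighbourhood, and keep the result if it beats gbest.
    dim = len(gbest)
    r = max_r
    for t in range(1, max_t + 1):
        # Collectively construct the potential solution P (selection with replacement).
        P = [random.choice(personal_bests)[random.randrange(dim)] for _ in range(dim)]
        # Sample the neighbourhood of P, as in (5), and clamp to the search space.
        X = [min(max(p + r * random.uniform(-1.0, 1.0), lower), upper) for p in P]
        fx = objective(X)
        if fx < f_gbest:
            gbest, f_gbest = X, fx          # better than the current global best: replace it
        else:
            # No improvement: shrink the radius linearly towards min_r (assumed schedule).
            r = max(min_r, max_r - (max_r - min_r) * t / max_t)
    return gbest, f_gbest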

4. Improved PSO with Collective Local Unimodal Search (PSOCLUS)

RIW-PSO converges quickly in early iterations and does more global search activities but soon gets stuck in local optima because it lacks local search ability. Also, LDIW-PSO does global search in the earlier part of its iterations but lacks enough momentum to do local search as it gets towards its terminal point of execution. The aim of this paper is to make a general improvement on the performance of PSO which can be applied to any of its variants. To achieve this, the two PSO variants were hybridized with the proposed collective local unimodal search (CLUS) technique, which takes advantage of their global search abilities to do some neighbourhood search for better results. The improved PSO algorithm is presented in Algorithm 2.

Begin Algorithm
Step  1. Definition Phase
      (1.1) define the function to optimize as f(X)
      (1.2) Parameters
     (1.2.1) swarm size N
     (1.2.2) problem dimension Dim
     (1.2.3) solution search space [Xmin, Xmax]
     (1.2.4) particle velocity range [Vmin, Vmax]
Step  2. Initialization Phase
     For all particles i randomly initialized in the search space
     (2.1) position X_i
     (2.2) velocity V_i
     (2.3) personal best pbest_i ← X_i
     (2.4) gbest ← best of all pbest_i
     (2.5) evaluate f(X_i) using the objective function of the problem
Step  3. Operation Phase
     Repeat until a stopping criterion is satisfied
     (3.1). Compute inertia weight ω using any inertia weight formula
     (3.2). For each particle i
     (3.2.1). update velocity V_i for particle i using (1)
     (3.2.2). validate V_i for velocity boundaries
     (3.2.3). update position X_i for particle i using (2)
     (3.2.4). validate X_i for position boundaries
     (3.2.5). If f(X_i) < f(pbest_i) then pbest_i ← X_i
     (3.3). gbest ← best of all pbest_i
     (3.4). Implement local search using CLUS in Algorithm 1
Step  4. Solution Phase
     (4.1). optimal position ← gbest
     (4.2). optimal fitness ← f(gbest)
     (4.3). Return optimal fitness and optimal position
End     Algorithm
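For completeness, a condensed, self-contained Python sketch of the whole loop in Algorithm 2 is given below, combining the updates (1) and (2), velocity and position clamping, the LDIW inertia weight of (3), and an inlined CLUS step. Everything not stated in the paper (names, the 0.05 velocity factor read from Table 1, the radius schedule) should be taken as an assumption rather than the authors' implementation.

import random

def psoclus(objective, dim, lower, upper, swarm_size=20, max_iter=1000,
            c1=1.494, c2=1.494, w_start=0.9, w_end=0.4,
            max_t=100, max_r=2.0, min_r=0.01):
    v_max, v_min = 0.05 * upper, 0.05 * lower          # velocity limits (assumed reading of Table 1)
    # Step 2: initialization
    x = [[random.uniform(lower, upper) for _ in range(dim)] for _ in range(swarm_size)]
    v = [[random.uniform(v_min, v_max) for _ in range(dim)] for _ in range(swarm_size)]
    pbest = [p[:] for p in x]
    pbest_f = [objective(p) for p in x]
    g = min(range(swarm_size), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    # Step 3: operation
    for k in range(max_iter):
        w = (w_start - w_end) * (max_iter - k) / max_iter + w_end     # LDIW, eq. (3)
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))          # eq. (1)
                v[i][d] = min(max(v[i][d], v_min), v_max)             # clamp velocity
                x[i][d] = min(max(x[i][d] + v[i][d], lower), upper)   # eq. (2) + clamp position
            fx = objective(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
        # CLUS local search step (see the sketch after Algorithm 1)
        r = max_r
        for t in range(1, max_t + 1):
            P = [random.choice(pbest)[random.randrange(dim)] for _ in range(dim)]
            X = [min(max(p + r * random.uniform(-1.0, 1.0), lower), upper) for p in P]
            fx = objective(X)
            if fx < gbest_f:
                gbest, gbest_f = X, fx
            else:
                r = max(min_r, max_r - (max_r - min_r) * t / max_t)
    return gbest, gbest_f

# Example (illustrative): 10-D Sphere
# best_x, best_f = psoclus(lambda z: sum(t * t for t in z), dim=10, lower=-100.0, upper=100.0)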

5. Numerical Simulations

In this section, the improved algorithm (PSOCLUS) was implemented using the inertia weight strategy of RIW-PSO, and the resulting variant was labeled R-PSOCLUS. It was also implemented using the inertia weight strategy of LDIW-PSO, and that variant was labeled L-PSOCLUS. The performances of R-PSOCLUS and L-PSOCLUS were experimentally tested against those of RIW-PSO and LDIW-PSO, respectively. The maximum number of iterations allowed was 1000 for problems with dimensions less than or equal to 10, 2000 for 20-dimensional problems, and 3000 for 30-dimensional problems. A swarm size of 20 was used in all the experiments, and twenty-five independent runs were conducted to collect data for analysis. The termination criterion for all the algorithms was the maximum number of iterations stated above for the respective problem dimensions. A run in which an algorithm is able to satisfy the set success criterion (see Table 2) at or before the maximum iteration is considered successful. To further prove the efficiency of the proposed local search technique, the proposed PSO variants were also compared with some existing PSO variants hybridized with different local search techniques, namely, PSO with golden ratio local search [15] and PSO with local interpolation search [18]. A total of 6 different experiments were conducted:
(i) R-PSOCLUS was compared with PSO with golden ratio local search (GLSPSO);
(ii) R-PSOCLUS was compared with PSO with local interpolation search (PSOlis);
(iii) R-PSOCLUS was compared with RIW-PSO;
(iv) L-PSOCLUS was compared with PSO with golden ratio local search (GLSPSO);
(v) L-PSOCLUS was compared with PSO with local interpolation search (PSOlis);
(vi) L-PSOCLUS was compared with LDIW-PSO.


Parameter | ω_start | ω_end | c_1 = c_2 | Vmax | Vmin | minR | maxR | maxT
Value | 0.9 | 0.4 | 1.494 | 0.05 * Xmax | 0.05 * Xmin | 0.01 | 2.0 | 100

The application software was developed in Microsoft Visual C# programming language.

5.1. Test Problems

A total of 21 problems were used in the experiments. These problems have different degrees of complexity and multimodality and represent landscapes diverse enough to cover many of the situations that can arise in global optimization. Shown in Table 2 are the problems' dimensions, optimal fitness values, and success thresholds. Presented in Table 3 are the characteristics (US: unimodal separable, UN: unimodal nonseparable, MS: multimodal separable, and MN: multimodal nonseparable) and search ranges of the problems. More details on the benchmark problems, including their formulations, can be found in [22, 25–27].


Number | Problem | Dimensions | Optimal value | Success threshold
1 | Ackley | 10, 20, 30 | 0 |
2 | Booth | 2 | 0 |
3 | Easom | 2 | −1 | −1
4 | Griewank | 10, 20, 30 | 0 |
5 | Dixon-Price | 10, 20, 30 | 0 |
6 | Levy | 10, 20, 30 | 0 |
7 | Michalewicz | 5 | −4.687 | −4.687
8 | Noisy Quartic | 10, 20, 30 | 0 |
9 | Noncontinuous Rastrigin | 10, 20, 30 | 0 | 20
10 | Rastrigin | 10, 20, 30 | 0 | 20
11 | Rosenbrock | 10, 20, 30 | 0 | 20
12 | Rotated Ellipsoid | 10, 20, 30 | 0 |
13 | Salomon | 5 | 0 |
14 | Schaffer's f6 | 2 | 0 |
15 | Schwefel | 10, 20, 30 | |
16 | Schwefel P2.22 | 10, 20, 30 | 0 |
17 | Shubert | 2 | −186.7309 | −186.7309
18 | Sphere | 10, 20, 30 | 0 |
19 | Step | 10, 20, 30 | 0 |
20 | Sum Squares | 10, 20, 30 | 0 |
21 | Trid | 6 | −50 | −50


Number | Problem | Feature | Search range
1 | Ackley | MN | ±32
2 | Booth | MN | ±10
3 | Easom | UN | ±100
4 | Griewank | MN | ±600
5 | Dixon-Price | UN | ±10
6 | Levy | MN | ±10
7 | Michalewicz | MS | [0, π]
8 | Noisy Quartic | US | ±1.28
9 | Noncontinuous Rastrigin | MS | ±5.12
10 | Rastrigin | MS | ±5.12
11 | Rosenbrock | UN | ±30
12 | Rotated Ellipsoid | UN | ±100
13 | Salomon | MN | ±100
14 | Schaffer's f6 | MN | ±100
15 | Schwefel | MS | ±500
16 | Schwefel P2.22 | UN | ±10
17 | Shubert | MN | ±10
18 | Sphere | US | ±100
19 | Step | US | ±10
20 | Sum Squares | US | ±10
21 | Trid | UN | ±d²

5.2. Parameter Setting

The additional parameters that were set in the experiments are the inertia weight thresholds for LDIW-PSO (ω_start and ω_end), the acceleration coefficients (c_1 and c_2), the velocity thresholds (Vmin and Vmax), the minimum radius (minR) and maximum radius (maxR) for the local search, as well as the maximum number of neighbourhood samplings (maxT) during the local search. The respective settings of these parameters are shown in Table 1. The parameters r_1 and r_2 were randomly generated using the uniform random number generator. The values of ω_start and ω_end were chosen for LDIW-PSO based on the experiments conducted in [9]; the value of 1.494 for c_1 and c_2 was chosen for RIW-PSO based on the recommendation in [23], and it was also used for LDIW-PSO because it was discovered in the course of the experiments in this paper that this value makes LDIW-PSO perform better than the commonly used value of 2.0. The settings for Vmin and Vmax were based on the outcome of the experimental studies in [8].

5.3. Performance Measurement

The efficiency of the algorithms was tested against the set of benchmark problems given in Table 2, and the numerical results obtained were analyzed using the criteria listed below. All the results are presented in Tables 4 to 20.
(i) Best fitness solution: the best of the fitness solutions among the solutions obtained during the runs.
(ii) Mean best fitness solution: this is a measure of the precision (quality) of the result that the algorithm can get within the given iterations over all the 25 runs.
(iii) Standard deviation (Std. Dev.) of the mean best fitness solution over 25 runs: this measures the algorithm's stability and robustness.
(iv) Average number of iterations in which an algorithm was able to reach the success threshold.
(v) Success rate (SR) = (number of successful runs / total number of runs) × 100: this is the rate at which the success threshold is met during the independent runs and is a reflection of the global search ability and robustness of the algorithm (see the sketch below).
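As a small illustration of criterion (v) (names and data are illustrative, and minimization against the thresholds in Table 2 is assumed):

def success_rate(final_fitnesses, threshold):
    # Percentage of independent runs whose final fitness met the success threshold.
    hits = sum(1 for f in final_fitnesses if f <= threshold)
    return 100.0 * hits / len(final_fitnesses)

# e.g. success_rate(best_fitness_per_run, threshold=20) for 30-D Rastrigin would
# return 88.0 if 22 of 25 runs reached the threshold.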


Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere
Algorithm | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS
Best fitness | 0.0364 | 0.0000 | | | 8.8062 | 0.0000 | 2.6188 | 0.0000 | |
Mean fitness | 0.3413 | 17.1371 | 0.0041 | 0.0016 | 29.4936 | 0.0000 | 9.0025 | 1.9971 | 0.0142 | 0.0000
Worst fitness | 1.2653 | 20.0888 | 0.0419 | 0.0791 | 50.4781 | 0.0000 | 18.9887 | 3.1444 | 0.0476 | 0.0000
Std. Dev. | 0.2762 | 6.7543 | 0.0061 | 0.0111 | 10.4372 | 0.0000 | 0.034 | 0.7262 | 0.0123 | 0.0000

Statistical analysis using the Wilcoxon signed rank nonparametric test with 0.05 level of significance [28, 29] was also performed using the numerical results obtained by the algorithms, while box plots were used to analyze their variability in obtaining fitness values in all the runs.
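For readers who want to reproduce such a comparison, a sketch with SciPy follows; the paired data shown are placeholders, not the paper's results.

from scipy.stats import wilcoxon

# Final fitness values from paired runs of two algorithms on one problem (placeholder data).
riw_pso   = [29.49, 31.02, 28.77, 30.11, 27.95, 33.40, 26.81]
r_psoclus = [0.00,  0.00,  0.99,  0.00,  0.00,  1.99,  0.00]

stat, p_value = wilcoxon(riw_pso, r_psoclus)
print(stat, p_value)   # the difference is significant at the 0.05 level if p_value < 0.05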

5.4. Results and Discussions

Results obtained from all the experiments are discussed in this subsection to show the overall performance of the various algorithms. Presented in Tables 4, 5, 6, 7, 8, and 9 are the numerical results obtained and used to compare R-PSOCLUS and L-PSOCLUS with GLSPSO. R-PSOCLUS and L-PSOCLUS were also compared with PSOlis using the results presented in Table 10. The results in Tables 11–18 were obtained for the scaled and nonscaled test problems listed in Table 3; these results were used to validate RIW-PSO, R-PSOCLUS, LDIW-PSO, and L-PSOCLUS. In each of the tables, for ease of observation, bold values represent the better results and “–” means that the algorithm could not satisfy the success threshold in any of the runs. The Wilcoxon signed rank nonparametric test, which is used as an alternative to the paired t-test when the results cannot be assumed to be normally distributed, was applied to test the statistical significance of differences between RIW-PSO and R-PSOCLUS as well as between LDIW-PSO and L-PSOCLUS.


Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere
Algorithm | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS
Best fitness | 2.2784 | 20.3075 | 0.0897 | 0.0000 | 109.5946 | 13.9247 | 175.8785 | 22.7589 | 1.9123 | 0.0000
Mean fitness | 2.8398 | 20.4778 | 0.1257 | 0.0000 | 185.5221 | 36.3715 | 218.4976 | 27.5147 | 2.7449 | 0.0000
Worst fitness | 3.2952 | 20.5792 | 0.2074 | 0.0000 | 229.6229 | 72.6581 | 259.2466 | 76.7433 | 3.9559 | 0.0000
Std. Dev. | 0.2273 | 0.0574 | 0.0274 | 0.0000 | 24.9829 | 16.4882 | 21.8027 | 9.9182 | 0.4840 | 0.0000


Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere
Algorithm | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS | GLSPSO | R-PSOCLUS
Best fitness | 3.5148 | 20.9666 | 0.3195 | 0.0022 | 792.0042 | 93.5795 | 1378.0 | 1867.2669 | 23.0614 | 0.1970
Mean fitness | 3.6709 | 21.0691 | 0.4242 | 0.0230 | 881.0822 | 688.0048 | 1602.0 | 24909.8486 | 27.2534 | 4.7232
Worst fitness | 3.7664 | 21.1306 | 0.4992 | 0.0923 | 934.9773 | 848.9927 | 1763.0 | 95519.4585 | 29.1615 | 16.1174
Std. Dev. | 0.0551 | 0.0316 | 0.0303 | 0.0255 | 35.2341 | 103.1854 | 90.2874 | 21083.5791 | 1.2253 | 4.2498


Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere
Algorithm | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS
Best fitness | 0.0364 | 0.0000 | | | 8.8062 | 0.0000 | 2.6188 | 0.0000 | |
Mean fitness | 0.3413 | 18.2504 | 0.0041 | 0.0042 | 29.4936 | 0.0000 | 9.0025 | 1.0516 | 0.0142 | 0.0000
Worst fitness | 1.2653 | 20.0771 | 0.0419 | 0.1008 | 50.4781 | 0.0000 | 18.9887 | 2.8033 | 0.0476 | 0.0000
Std. Dev. | 0.2762 | 5.4640 | 0.0061 | 0.0186 | 10.4372 | 0.0000 | 0.034 | 0.6449 | 0.0123 | 0.0000


Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere
Algorithm | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS
Best fitness | 2.2784 | 20.3184 | 0.0897 | 0.0000 | 109.5946 | 0.1444 | 175.8785 | 0.0000 | 1.9123 | 0.0000
Mean fitness | 2.8398 | 20.4631 | 0.1257 | 0.0000 | 185.5221 | 18.7372 | 218.4976 | 25.1359 | 2.7449 | 0.0000
Worst fitness | 3.2952 | 20.5734 | 0.2074 | 0.0000 | 229.6229 | 38.8433 | 259.2466 | 77.4444 | 3.9559 | 0.0000
Std. Dev. | 0.2273 | 0.0615 | 0.0274 | 0.0000 | 24.9829 | 8.4570 | 21.8027 | 13.2536 | 0.4840 | 0.0000


Problem | Ackley | Griewank | Rastrigin | Rosenbrock | Sphere
Algorithm | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS | GLSPSO | L-PSOCLUS
Best fitness | 3.5148 | 20.2136 | 0.3195 | 0.0000 | 792.0042 | 12.0416 | 1378.0 | 93.7390 | 23.0614 | 0.0000
Mean fitness | 3.6709 | 21.0491 | 0.4242 | 0.0000 | 881.0822 | 366.6521 | 1602.0 | 107.2300 | 27.2534 | 0.0000
Worst fitness | 3.7664 | 21.1152 | 0.4992 | 0.0000 | 934.9773 | 504.2204 | 1763.0 | 428.1758 | 29.1615 | 0.0000
Std. Dev. | 0.0551 | 0.1254 | 0.0303 | 0.0000 | 35.2341 | 68.2009 | 90.2874 | 56.9231 | 1.2253 | 0.0000


Problem | PSOlis | R-PSOCLUS | L-PSOCLUS

Ackley
Griewank
Rastrigin2.005
Rosenbrock3.987
Sphere


Problem | Booth | Easom | Michalewicz | Schaffer's f6 | Salomon | Shubert | Trid-6

Algorithm | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS

Best fitness + 00
Mean fitness
Std. Dev.
Av. iteration39.1237.9255.045.48133.83109.95923.4071.8107.16114.40110.20
SR (%)100100100100004888020100100100100


Problem | Ackley | Griewank | Dixon-Price | Levy | Noisy Quartic | Noncontinuous Rastrigin | Rastrigin

Algorithm | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS

Best fitness
Mean fitness
Std. Dev.
Av. iteration287.88263.68464.16295.50258.50127.3199.3240.4912.4449.4425.00
SR (%)9610009688521000092100100100

Problem | Rosenbrock | Rotated Ellipsoid | Schwefel | Schwefel P2.22 | Sphere | Step | Sum Squares

Algorithm | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS | RIW-PSO | R-PSOCLUS

Best fitness