JuiYu Wu, "Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches", Mathematical Problems in Engineering, vol. 2013, Article ID 256180, 15 pages, 2013. https://doi.org/10.1155/2013/256180
Solving Unconstrained Global Optimization Problems via Hybrid Swarm Intelligence Approaches
Abstract
Stochastic global optimization (SGO) algorithms such as the particle swarm optimization (PSO) approach have become popular for solving unconstrained global optimization (UGO) problems. The PSO approach, which belongs to the swarm intelligence domain, does not require gradient information and can therefore handle objective functions for which traditional nonlinear programming methods fail. Unfortunately, PSO implementation and performance depend on several parameters, such as the cognitive parameter, social parameter, and constriction coefficient, which are conventionally tuned by trial and error. To reduce the parametrization of a PSO method, this work presents two efficient hybrid SGO approaches, namely, a real-coded genetic algorithm-based PSO (RGAPSO) method and an artificial immune algorithm-based PSO (AIAPSO) method. The specific parameters of the internal PSO algorithm are optimized using the external RGA and AIA approaches, and the internal PSO algorithm is then applied to solve UGO problems. The performances of the proposed RGAPSO and AIAPSO algorithms are evaluated using a set of benchmark UGO problems. Numerical results indicate that, besides converging to the global minimum of each test UGO problem, the proposed RGAPSO and AIAPSO algorithms outperform many hybrid SGO algorithms. Thus, the RGAPSO and AIAPSO approaches can be considered alternative SGO approaches for solving standard-dimensional UGO problems.
1. Introduction
An unconstrained global optimization (UGO) problem can generally be formulated as follows:

min f(x), x = (x_1, x_2, ..., x_N),

where f(x) is an objective function and x represents a decision variable vector. Additionally, x ∈ S denotes the search space S, which is N-dimensional and bounded by the parametric constraints

x_j^L ≤ x_j ≤ x_j^U, j = 1, 2, ..., N,

where x_j^L and x_j^U are the lower and upper boundaries of the decision variables x_j, respectively.
Many conventional nonlinear programming (NLP) techniques, such as the golden section search, quadratic approximation, Nelder-Mead, steepest descent, Newton, and conjugate gradient methods, have been used to solve UGO problems [1]. Unfortunately, such NLP methods have difficulty solving an UGO problem when its objective function is nondifferentiable. Many stochastic global optimization (SGO) approaches have been developed to overcome this limitation of the traditional NLP methods, including genetic algorithms (GAs), particle swarm optimization (PSO), ant colony optimization (ACO), and artificial immune algorithms (AIAs). For instance, Hamzaçebi [2] developed an enhanced GA incorporating a local random search algorithm and tested it on eight continuous functions. Furthermore, Chen [3] presented a two-layer PSO method to solve nine UGO problems. Zhao [4] presented a perturbed PSO approach for 12 UGO problems. Meanwhile, Toksari [5] developed an ACO algorithm for solving UGO problems. Finally, Kelsey and Timmis [6] presented an AIA method based on the clonal selection principle for solving 12 UGO problems.
This work focuses on a PSO algorithm because it is effective, robust, and easy to use among SGO methods. Research on the PSO method has considered many critical issues, such as parameter selection, integration of the PSO algorithm with self-adaptation approaches, and integration with other intelligent optimizing methods [7]. This work surveys two issues: the first is a PSO approach that integrates with other intelligent optimizing methods, and the second is parameter selection for use in a PSO approach.
Regarding the first issue, the conventional PSO algorithm lacks the evolution operators of GAs, such as crossover and mutation operations. Therefore, PSO suffers from premature convergence, that is, a rapid loss of diversity during optimization [4]. To overcome this limitation, many hybrid SGO methods have been developed that create diverse candidate solutions to enhance the performance of a PSO approach. Hybrid algorithms have some advantages; for instance, hybrid algorithms outperform individual algorithms in solving certain problems and thus can solve general problems more efficiently [8]. Kao and Zahara [9] presented a hybrid GA and PSO algorithm to solve 17 multimodal test functions. Their study used the operations of GA and PSO methods to generate candidate solutions to improve solution quality and convergence rates. Furthermore, Shelokar et al. [10] presented a hybrid PSO and ACO algorithm to solve multimodal continuous optimization problems. Their study used an ACO algorithm to update the particle positions to enhance PSO performance. Chen et al. [11] presented a hybrid of PSO and extremal optimization based on the Bak–Sneppen model to solve unimodal and multimodal benchmark problems. Furthermore, Thangaraj et al. [12] surveyed many algorithms that combine the PSO algorithm with other search techniques and compared the performances obtained using hybrid differential evolution PSO (DEPSO), adaptive mutation PSO (AMPSO), and hybrid GA and PSO (GAPSO) approaches on nine conventional benchmark problems.
Regarding the second issue, a PSO algorithm has numerous parameters that must be set, such as the cognitive parameter, social parameter, inertia weight, and constriction coefficient. Traditionally, the optimal parameter settings of a PSO algorithm are tuned by trial and error, and the exploration and exploitation abilities of a PSO algorithm depend strongly on these settings [13, 14]. Therefore, Jiang et al. [15] used stochastic process theory to analyze the parameter settings (e.g., cognitive parameter, social parameter, and inertia weight) of a standard PSO algorithm.
This work focuses on the second issue related to the application of a PSO method. Fortunately, the optimization of the parameter settings of a PSO algorithm can itself be viewed as an UGO problem. Moreover, the real-coded GA (RGA) and AIA are efficient SGO approaches for solving UGO problems. Based on the advantages of hybrid algorithms [8], this work develops two hybrid SGO approaches. The first is a hybrid RGA and PSO (RGAPSO) algorithm, while the second is a hybrid AIA and PSO (AIAPSO) algorithm. The proposed RGAPSO and AIAPSO algorithms solve two nested optimization problems simultaneously. The first UGO problem (optimization of the cognitive parameter, social parameter, and constriction coefficient) is solved using the external RGA and AIA approaches, respectively. The second UGO problem is then solved using the internal PSO algorithm. The performances of the proposed RGAPSO and AIAPSO algorithms are evaluated using a set of benchmark UGO problems and compared with those of many hybrid algorithms [9, 10, 12].
The rest of this paper is organized as follows. Section 2 describes RGA, PSO, and AIA approaches. Section 3 then presents the proposed RGAPSO and AIAPSO methods. Next, Section 4 compares the experimental results of the proposed RGAPSO and AIAPSO approaches with those of many hybrid methods. Conclusions are finally drawn in Section 5.
2. Related Works
The SGO approaches relevant to this work, namely the RGA, PSO, and AIA [16], are described as follows.
2.1. Real-Coded Genetic Algorithm
GAs are based on the concepts of natural selection and use three genetic operations, that is, selection, crossover, and mutation, to explore and exploit the solution space. In solving continuous function optimization problems, the RGA method outperforms the binary-coded GA approach [17]. Therefore, this work describes the operators of a RGA method [18].
2.1.1. Selection Operation
A selection operation picks strong individuals from a current population based on their fitness function values and then reproduces these individuals into a crossover pool. The selection operations developed to date include the roulette wheel, ranking, and tournament methods [17, 18]. This work employs the normalized geometric ranking method:

P_i = q'(1 − q)^(r_i − 1), with q' = q / (1 − (1 − q)^ps_{RGA}), (3)

where P_i = probability of selecting individual i, q = probability of choosing the best individual, r_i = ranking of individual i based on fitness value (1 represents the best), r_i ∈ {1, 2, ..., ps_{RGA}}, and ps_{RGA} = population size of the RGA method.
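As an illustration, the normalized geometric ranking probabilities of (3) might be computed as follows (a Python sketch; the paper itself used MATLAB, and the value of q here is an assumption, not the paper's setting):

```python
import numpy as np

def geometric_ranking_probs(pop_size, q=0.08):
    # Normalized geometric ranking: rank 1 is the best individual;
    # q is the unnormalized probability of choosing the best individual.
    r = np.arange(1, pop_size + 1)
    q_norm = q / (1.0 - (1.0 - q) ** pop_size)  # normalizing constant q'
    return q_norm * (1.0 - q) ** (r - 1)

probs = geometric_ranking_probs(10)
# probs sums to 1 and decreases monotonically with rank
```

Sampling the crossover pool then amounts to drawing individuals with these probabilities, for example via `np.random.default_rng().choice(pop_size, p=probs)`.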
2.1.2. Crossover Operation
To explore the solution space by creating new offspring, the crossover operation randomly chooses two parents from the crossover pool and uses them to create two new offspring; this is repeated ps_{RGA}/2 times. The whole arithmetic crossover is easily performed as follows:

X'_1 = λX_1 + (1 − λ)X_2, X'_2 = (1 − λ)X_1 + λX_2, (4)

where X_1 and X_2 = parents (decision variable vectors), X'_1 and X'_2 = offspring (decision variable vectors), and λ = uniform random number in the interval [0, 1].
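The whole arithmetic crossover of (4) can be sketched as follows (a Python sketch; the paper used MATLAB):

```python
import numpy as np

def whole_arithmetic_crossover(p1, p2, rng=None):
    # lambda is drawn uniformly from [0, 1]; each offspring is a convex
    # combination of the two parents, so offspring stay inside the
    # hyper-rectangle spanned by the parents.
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.uniform(0.0, 1.0)
    o1 = lam * p1 + (1.0 - lam) * p2
    o2 = (1.0 - lam) * p1 + lam * p2
    return o1, o2
```

Note that the two offspring always sum to the two parents, so the operation preserves the parents' centroid.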
2.1.3. Mutation Operation
The mutation operation can improve the diversity of individuals (candidate solutions). The multi-non-uniform mutation perturbs every decision variable x_j of an individual as follows:

x'_j = x_j + (x_j^U − x_j)·λ(G) or x'_j = x_j − (x_j − x_j^L)·λ(G) (each chosen with equal probability), (5)

where λ(G) = 1 − r^((1 − G/G_{max,RGA})^b) is the perturbation factor, r = uniform random variable in the interval [0, 1], b = shape parameter, G_{max,RGA} = maximum generation of the RGA method, G = current generation of the RGA method, x_j = current decision variable, and x'_j = trial decision variable (candidate solution).
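A minimal sketch of Michalewicz-style multi-non-uniform mutation, assuming the common perturbation factor with shape parameter b (the exact factor in the paper's eq. (5) may differ; Python rather than the paper's MATLAB):

```python
import numpy as np

def multi_nonuniform_mutation(x, lb, ub, g, G, b=2.0, rng=None):
    # The perturbation shrinks as generation g approaches the maximum G,
    # shifting the operator from global exploration to local fine-tuning.
    rng = rng if rng is not None else np.random.default_rng()
    x = np.asarray(x, float).copy()
    for j in range(len(x)):
        r = rng.uniform()
        delta = 1.0 - r ** ((1.0 - g / G) ** b)  # perturbation factor lambda(G)
        if rng.uniform() < 0.5:
            x[j] += (ub[j] - x[j]) * delta  # move toward the upper bound
        else:
            x[j] -= (x[j] - lb[j]) * delta  # move toward the lower bound
    return x
```

At g = G the perturbation factor is zero, so the individual is returned unchanged, and for any g the trial point stays within [lb, ub] by construction.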
2.2. Particle Swarm Optimization
Kennedy and Eberhart [19] first presented a standard PSO algorithm, which is inspired by the social behavior of bird flocks and fish schools. Like GAs, a PSO method is a population-based algorithm, and a population of candidate solutions is called a particle swarm. The particle velocities are updated by (6) as follows:

v_ij(t+1) = v_ij(t) + c_1·r_1·(pbest_ij(t) − x_ij(t)) + c_2·r_2·(gbest_j(t) − x_ij(t)), i = 1, 2, ..., ps_{PSO}, (6)

where v_ij(t) = velocity of decision variable j of particle i at generation t, c_1 = cognitive parameter, c_2 = social parameter, x_ij(t) = position of decision variable j of particle i at generation t, r_1 and r_2 = independent uniform random numbers in the interval [0, 1] at generation t, pbest_i(t) = best local solution at generation t, gbest(t) = best global solution at generation t, and ps_{PSO} = population size of the PSO algorithm.
The particle positions are then obtained using (7) as follows:

x_ij(t+1) = x_ij(t) + v_ij(t+1). (7)
Shi and Eberhart [20] introduced a modified PSO algorithm by incorporating an inertia weight (w) into (8) to control the exploration and exploitation capabilities of a PSO algorithm as follows:

v_ij(t+1) = w·v_ij(t) + c_1·r_1·(pbest_ij(t) − x_ij(t)) + c_2·r_2·(gbest_j(t) − x_ij(t)). (8)
A constriction coefficient (K) in (9) is used to balance the exploration and exploitation trade-off [21–23] as follows:

v_ij(t+1) = K·[v_ij(t) + c_1·r_1·(pbest_ij(t) − x_ij(t)) + c_2·r_2·(gbest_j(t) − x_ij(t))], (9)

where K = 2 / |2 − φ − sqrt(φ^2 − 4φ)|, φ = c_1 + c_2, and φ > 4.
This work considers the parameters w and K jointly to modify the particle velocities as follows:

v_ij(t+1) = K·[w(t)·v_ij(t) + c_1·r_1·(pbest_ij(t) − x_ij(t)) + c_2·r_2·(gbest_j(t) − x_ij(t))], (11)

where w(t) decreases as the generation t increases (an increased t value reduces w(t)), and G_{max,PSO} = maximum generation of the PSO algorithm.
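A minimal sketch of a constriction-coefficient velocity and position update, assuming the Clerc–Kennedy form of K with the common choice c1 = c2 = 2.05 (the function name and defaults are illustrative; the paper used MATLAB):

```python
import numpy as np

def pso_velocity_update(x, v, pbest, gbest, c1=2.05, c2=2.05, rng=None):
    # Clerc-Kennedy constriction coefficient; requires phi = c1 + c2 > 4,
    # which gives K ~ 0.7298 for c1 = c2 = 2.05.
    rng = rng if rng is not None else np.random.default_rng()
    phi = c1 + c2
    K = 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))
    r1 = rng.uniform(size=np.shape(x))
    r2 = rng.uniform(size=np.shape(x))
    v_new = K * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    return x + v_new, v_new  # new position per (7), new velocity per (9)
```

The constriction factor damps the stochastic attraction toward pbest and gbest, which is what allows the swarm to converge without an explicit velocity clamp.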
According to (11), the optimal values of the parameters c_1, c_2, and K are difficult to obtain through trial and error. This work thus optimizes these parameter settings by using the RGA and AIA approaches.
2.3. Artificial Immune Algorithm
Wu [24] presented an AIA approach based on clonal selection and immune network theories to solve constrained global optimization problems. The AIA method consists of selection, hypermutation, receptor editing, and bone marrow operations. The selection operation is performed to reproduce strong antibodies (Abs). Also, diverse Abs are created using hypermutation, receptor editing, and bone marrow operations, as described in the following subsections.
2.3.1. Ab and Ag Representation
In the human immune system, an antigen (Ag) has multiple epitopes (antigenic determinants) on its surface, which can be recognized by many Abs through their paratopes (recognizers). In the AIA approach, an Ag represents the known parameters of a solved problem, and the Abs are the candidate solutions (i.e., decision variable vectors x) of the solved problem. The quality of a candidate solution is evaluated using an Ab-Ag affinity that is derived from the value of an objective function of the solved problem.
2.3.2. Selection Operation
The selection operation controls the number of antigen-specific Abs. This operation is defined according to Ab-Ag and Ab-Ab recognition information, which yields in (12) for each Ab_k the probability P_k that Ab_k recognizes Ab_best (the best solution), where Ab_best = the Ab with the highest Ab-Ag affinity, x = the decision variables of an Ab, and rs_{AIA} = repertoire (population) size of the AIA approach.
Each Ab_k is compared against Ab_best in the current Ab repertoire. A large P_k implies that Ab_k can effectively recognize Ab_best. The Abs whose P_k is equal to or larger than the threshold degree of the AIA approach are reproduced to generate an intermediate Ab repertoire.
2.3.3. Hypermutation Operation
The somatic hypermutation operation of (13) perturbs an Ab using a perturbation factor that decays as the current generation g of the AIA method approaches the maximum generation number G_{max,AIA}, together with uniform random numbers in the interval [0, 1].
This operation performs two tasks, that is, uniform search and local fine-tuning.
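One plausible sketch of such a generation-decaying hypermutation (the decay schedule here is an assumption, not the paper's exact eq. (13); Python rather than the paper's MATLAB):

```python
import numpy as np

def hypermutation(ab, lb, ub, g, G, rng=None):
    # The perturbation factor beta shrinks linearly with the generation g,
    # so the operator moves from broad uniform search early on to local
    # fine-tuning late in the run (an assumed schedule).
    rng = rng if rng is not None else np.random.default_rng()
    beta = 1.0 - g / G
    u = rng.uniform(-1.0, 1.0, size=np.shape(ab))
    return np.clip(ab + beta * u * (ub - lb), lb, ub)
```

With this schedule the perturbation vanishes at g = G, leaving an in-bounds Ab unchanged.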
2.3.4. ReceptorEditing Operation
A receptor-editing operation is developed based on the standard Cauchy distribution C(0, 1), in which the location parameter is zero and the scale parameter is one. Receptor editing is implemented using Cauchy random variables created from C(0, 1); owing to their heavy tails, they can provide a large jump in the Ab-Ag affinity landscape, increasing the probability of escaping from a local optimum of that landscape. Cauchy receptor editing, as in (14), perturbs an Ab by adding a vector of Cauchy random variables scaled by a uniform random number in the interval [0, 1].
This operation is used for local fine-tuning and large perturbations.
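A sketch of Cauchy receptor editing (the step-scale factor is an assumption, not the paper's exact eq. (14); Python rather than the paper's MATLAB):

```python
import numpy as np

def receptor_editing(ab, lb, ub, scale=0.1, rng=None):
    # Standard Cauchy C(0, 1) variates are usually small but occasionally
    # huge, so this operator mostly fine-tunes yet sometimes makes the
    # large jumps needed to escape local optima.
    rng = rng if rng is not None else np.random.default_rng()
    c = rng.standard_cauchy(size=np.shape(ab))
    return np.clip(ab + scale * c * (ub - lb), lb, ub)
```

Clipping to [lb, ub] keeps the edited Ab feasible even after an extreme Cauchy draw.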
2.3.5. Bone Marrow Operation
The paratope of an Ab can be created by recombining gene segments [25]. Based on this metaphor, diverse Abs are synthesized using a bone marrow operation. This operation randomly selects two Abs from the intermediate Ab repertoire and a recombination point from the gene segments of the paratopes of the selected Abs. The selected gene segments are reproduced to create a library of gene segments, and the selected gene segments in the paratope are then deleted. A new Ab is formed by inserting a gene segment from the library, perturbed by a random variable created from the standard normal distribution N(0, 1), at the recombination point. The details of the implementation of the bone marrow operation can be found in [24].
3. Methods
This work develops the RGAPSO and AIAPSO approaches for solving UGO problems. The implementation of the RGAPSO and AIAPSO methods is described as follows.
3.1. RGAPSO Algorithm
Figure 1 shows the pseudocode of the proposed RGAPSO algorithm. The best parameter setting of the internal PSO algorithm is obtained by using the external RGA method. Benchmark UGO problems are solved by using the internal PSO algorithm.
External RGA
Step 1 (initialize the parameter settings). Parameter settings such as the population size ps_{RGA}, the crossover probability of the RGA method, the mutation probability of the RGA approach, and the lower and upper boundaries of the parameters (c_1, c_2, and K) for a PSO algorithm are given. A candidate solution (individual) of the RGA method represents the optimized parameters of the internal PSO algorithm. Figure 2 illustrates the candidate solution of the RGA method.
Step 2 (calculate the fitness function value). The fitness function value of the external RGA method is the best objective function value obtained from the best solution of each internal PSO algorithm execution.
The candidate solution of the external RGA method is incorporated into the internal PSO algorithm, and then the internal PSO algorithm is used to solve an UGO problem. The internal PSO algorithm is executed as follows.
Internal PSO Algorithm
(1) Generate an initial particle swarm. An initial particle swarm is created by sampling uniformly from [x^L, x^U] of an UGO problem. A particle represents a candidate solution of an UGO problem.
(2) Compute the objective function value. The objective function value of particle i in the internal PSO algorithm is the objective function value of an UGO problem.
(3) Update the particle velocity and position. Equations (7) and (11) are used to update the particle position and velocity.
(4) Perform an elitist strategy. A new particle swarm is generated from internal step (3), and the objective function value of each candidate solution (particle i) in it is evaluated. This work makes a pairwise comparison between the candidate solutions in the new particle swarm and those in the current particle swarm: when a candidate solution in the new particle swarm is better than its counterpart in the current particle swarm, the stronger candidate solution replaces it. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then updated to the particle swarm of the next generation.
Internal steps (1) to (4) are repeated until the maximum generation number of the internal PSO algorithm is satisfied.
End
Step 3 (perform a selection operation). Equation (3) is used to select the parents into a crossover pool.
Step 4 (implement a crossover operation). The crossover operation performs a global search. The candidate solutions are created by using (4).
Step 5 (conduct a mutation operation). The mutation operation implements a local search. A solution space is exploited using (5).
Step 6 (perform an elitist strategy). This work presents an elitist strategy to update the population. When the fitness of a candidate solution in the new population is better than that of its counterpart in the current population, the weak candidate solution is replaced; when the fitness of a candidate solution in the new population is equal to or worse than that in the current population, the candidate solution in the current population survives. In addition to maintaining the strong candidate solutions, this strategy effectively eliminates weak candidate solutions.
External Steps 2 to 6 are repeated until the maximum generation number of the external RGA method is met.
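The nested external/internal structure above can be sketched as follows. This is a Python sketch (the paper used MATLAB) in which a simple random search over (c1, c2) stands in for the external RGA/AIA; all names, parameter ranges, and defaults are illustrative:

```python
import numpy as np

def run_pso(f, lb, ub, c1, c2, swarm=20, iters=100, rng=None):
    """Inner PSO with a constriction coefficient; returns the best objective value."""
    rng = rng if rng is not None else np.random.default_rng(0)
    phi = c1 + c2
    # Clerc-Kennedy constriction when phi > 4; otherwise a common fallback constant.
    K = 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi)) if phi > 4.0 else 0.729
    x = rng.uniform(lb, ub, size=(swarm, len(lb)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    gbest = pbest[int(np.argmin(pval))].copy()
    for _ in range(iters):
        r1 = rng.uniform(size=x.shape)
        r2 = rng.uniform(size=x.shape)
        v = K * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
        x = np.clip(x + v, lb, ub)
        val = np.array([f(p) for p in x])
        better = val < pval          # elitist, pairwise replacement
        pbest[better] = x[better]
        pval[better] = val[better]
        gbest = pbest[int(np.argmin(pval))].copy()
    return float(pval.min())

def tune_pso(f, lb, ub, trials=5, rng=None):
    """Outer loop: each candidate encodes the inner PSO parameters (c1, c2);
    its fitness is the best value found by one full inner PSO run."""
    rng = rng if rng is not None else np.random.default_rng(1)
    best_params, best_val = None, np.inf
    for _ in range(trials):
        c1, c2 = rng.uniform(1.0, 3.0, size=2)
        val = run_pso(f, lb, ub, c1, c2)
        if val < best_val:
            best_params, best_val = (c1, c2), val
    return best_params, best_val
```

In the paper, the outer loop is the RGA (or AIA) with selection, crossover, and mutation rather than this random search, but the fitness evaluation of an outer candidate is the same: one complete run of the inner PSO configured with that candidate's parameters.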
3.2. AIAPSO Algorithm
Figure 3 shows the pseudocode of the proposed AIAPSO algorithm. The external AIA method is used to optimize the parameter settings of the internal PSO method, which is employed to solve benchmark UGO problems.
External AIA
Step 1 (initialize the parameter settings). Several parameters must be predetermined, including the repertoire (population) size rs_{AIA} and the maximum generation number. An initial Ab repertoire (population) is randomly generated by sampling uniformly from the lower and upper boundaries of the parameters c_1, c_2, and K. Figure 4 shows the Ab and Ag representation.
Step 2 (evaluate the Ab-Ag affinity).
Internal PSO Algorithm. The external AIA approach offers the parameter settings c_1, c_2, and K to the internal PSO algorithm. Subsequently, internal steps (1)–(4) of the PSO algorithm are implemented, and the internal PSO method returns the best fitness value to the external AIA method.
(1) Generate an initial particle swarm. An initial particle swarm is created by sampling uniformly from [x^L, x^U] of an UGO problem. A particle represents a candidate solution of an UGO problem.
(2) Compute the fitness value. The fitness value of the internal PSO algorithm is the objective function value of an UGO problem.
(3) Update the particle velocity and position. Equations (7) and (11) are used to update the particle position and velocity.
(4) Perform an elitist strategy. A new particle swarm (population) is generated from internal step (3), and the objective function value of each candidate solution (particle i) in it is evaluated. This work makes a pairwise comparison between the candidate solutions in the new particle swarm and those in the current particle swarm. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then updated to the particle swarm of the next generation.
Internal steps (1) to (4) are repeated until the maximum generation number of the internal PSO algorithm is satisfied.
End
Consistent with the Ab-Ag affinity metaphor, an Ab-Ag affinity is determined by (16) from the best objective function value returned by the internal PSO algorithm.
Following the evaluation of the Ab-Ag affinities of the Abs in the current Ab repertoire, the Ab with the highest Ab-Ag affinity (Ab_best) is chosen to undergo clonal selection in external Step 3.
Step 3 (perform a clonal selection operation). To control the number of antigen-specific Abs, (12) is employed.
Step 4 (implement an Ab-Ag affinity maturation operation). The intermediate Ab repertoire created in external Step 3 is divided into two subsets: an Ab undergoes the somatic hypermutation operation of (13) when a uniform random number is 0.5 or less, and the receptor-editing operation of (14) when the random number exceeds 0.5.
Step 5 (introduce diverse Abs). Based on the bone marrow operation, diverse Abs are created to recruit the Abs suppressed in external Step 3.
Step 6 (update an Ab repertoire). A new Ab repertoire is generated from external Steps 3–5, and the Ab-Ag affinities of the Abs in it are evaluated. This work presents a strategy for updating the Ab repertoire: when the Ab-Ag affinity of an Ab in the new Ab repertoire exceeds that of its counterpart in the current Ab repertoire, the strong Ab in the new repertoire replaces the weak Ab in the current repertoire; when the Ab-Ag affinity of an Ab in the new repertoire is equal to or worse than that in the current repertoire, the Ab in the current repertoire survives. In addition to maintaining the strong Abs, this strategy eliminates nonfunctional Abs.
Repeat external Steps 2–6 until the termination criterion is satisfied.
4. Results
The proposed RGAPSO and AIAPSO algorithms were applied to a set of benchmark UGO problems taken from other studies [9, 10, 17, 26], as detailed in the Appendix. The proposed RGAPSO and AIAPSO approaches were coded in MATLAB and run on a Pentium D 3.0 GHz personal computer. One hundred independent runs were conducted for each test problem (TP). To obtain comparable numerical results, the accuracy was chosen based on the numerical results reported in [9, 10, 17, 26]. The summarized numerical results include the rate of successful minimizations (success rate %), the best, mean, and worst objective values, the mean computational CPU time (MCCT), and the mean error (ME; the average gap between the objective function values calculated using the AIAPSO and RGAPSO solutions and the known global minimum value). Table 1 lists the parameter settings for the proposed RGAPSO and AIAPSO approaches: 20,000 objective function evaluations of the internal PSO approach for a low-dimensional UGO problem and 60,000 objective function evaluations for a standard-dimensional UGO problem. Moreover, the external AIA and RGA methods stop when the maximum generation number of 20 is met or when the best fitness value of the RGA approach (or the best Ab-Ag affinity of the AIA method) does not significantly change over the past five generations.

4.1. Numerical Results Obtained Using the RGAPSO and AIAPSO Algorithms for Low-Dimensional UGO Problems
Table 2 lists the numerical results obtained using the proposed RGAPSO algorithm. The numerical results indicate that the RGAPSO algorithm can obtain the global minimum for each test UGO problem since these MEs equal or closely approximate “0,” and the RGAPSO algorithm has an acceptable MCCT for each TP. Table 3 lists the optimal parameter settings obtained using the proposed RGAPSO algorithm to solve 14 UGO problems.
 
Best: the objective value obtained using the best RGAPSO solution. Mean: the mean objective value obtained using the RGAPSO solutions. Worst: the objective value obtained using the worst RGAPSO solution.

Table 4 lists the numerical results obtained using the proposed AIAPSO algorithm. Numerical results indicate that the AIAPSO algorithm can obtain the global minimum for each test UGO problem since these MEs equal or closely approximate “0,” and that the AIAPSO algorithm has an acceptable MCCT for each TP. Table 5 lists the optimal parameter settings obtained using the proposed AIAPSO algorithm for solving 14 UGO problems.
 
Best: the objective value obtained using the best AIAPSO solution. Mean: the mean objective value obtained using the AIAPSO solutions. Worst: the objective value obtained using the worst AIAPSO solution.

4.2. Numerical Results Obtained Using the RGAPSO and AIAPSO Algorithms for a Standard-Dimensional UGO Problem (N = 30)
To investigate the effectiveness of the RGAPSO and AIAPSO methods in solving a standard-dimensional UGO problem, the Zakharov problem with 30 decision variables (ZA_{30}), as described in the Appendix, was solved using the RGAPSO and AIAPSO approaches. Fifty independent runs were performed. To increase the diversity of candidate solutions for use in the external RGA method, the parameter was set from 0.15 to 1. Table 6 lists the numerical results obtained using the RGAPSO and AIAPSO approaches. Both approaches converge to the global optimum value, since the MEs closely approximate 0, and the MCCT of the RGAPSO method is larger than that of the AIAPSO method. Moreover, the success rates of the proposed RGAPSO and AIAPSO approaches are 100%. The Wilcoxon test was performed on the difference between the median values of the MEs obtained using the RGAPSO and AIAPSO methods. The p value of the Wilcoxon test is 0.028, which is smaller than the significance level of 0.05, indicating that the performance of the RGAPSO method is statistically different from that of the AIAPSO method. Table 7 summarizes the optimal parameter settings obtained using the proposed RGAPSO and AIAPSO algorithms for the UGO problem ZA_{30}.
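The paired comparison above relies on the Wilcoxon signed-rank statistic, which can be sketched as follows (a simplified Python sketch without tie-averaging; a complete test, as used in the paper, would also convert W to a p value, e.g. via scipy.stats.wilcoxon):

```python
import numpy as np

def wilcoxon_signed_rank(a, b):
    # Paired samples: rank the absolute differences (zero differences are
    # dropped), then sum the ranks of positive and negative differences.
    # W = min(W+, W-); a small W suggests a real median difference.
    d = np.asarray(a, float) - np.asarray(b, float)
    d = d[d != 0]
    ranks = np.argsort(np.argsort(np.abs(d))) + 1  # ranks 1..n (ties not averaged)
    w_plus = ranks[d > 0].sum()
    w_minus = ranks[d < 0].sum()
    return float(min(w_plus, w_minus))
```

Here `a` and `b` would be the per-run MEs of the two methods over the same 50 runs.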


The UGO problem ZA_{50} was also solved using the RGAPSO and AIAPSO methods. Both methods failed to solve this problem, since the diversity of the particle swarm in the internal PSO method cannot be maintained. Hence, future work will focus on improving the diversity of the particle swarm by applying mutation operations.
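For reference, the Zakharov function ZA_N used throughout this section (and listed in the Appendix), with global minimum 0 at the origin, can be written as:

```python
import numpy as np

def zakharov(x):
    # f(x) = sum_j x_j^2 + (sum_j 0.5*j*x_j)^2 + (sum_j 0.5*j*x_j)^4,
    # j = 1..N; the global minimum is f = 0 at x = 0.
    x = np.asarray(x, float)
    j = np.arange(1, len(x) + 1)
    s = 0.5 * np.sum(j * x)
    return float(np.sum(x ** 2) + s ** 2 + s ** 4)
```

The quartic term makes the function increasingly ill-conditioned as N grows, which is consistent with the difficulty the methods encounter at N = 50.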
4.3. Comparison
Table 8 lists the results of the Wilcoxon test for the MEs obtained using the proposed RGAPSO and AIAPSO methods for 14 UGO problems. In this table, “**” indicates that the p value of the Wilcoxon test cannot be obtained, since the MEs obtained using the RGAPSO and AIAPSO methods for a TP are identical. Moreover, the median values of the MEs obtained using the RGAPSO and AIAPSO methods for TP 10 are not statistically different, since their p value is larger than the significance level of 0.05. Overall, the performances of the RGAPSO and AIAPSO methods are statistically identical.

Table 9 compares the numerical results obtained using the RGAPSO and AIAPSO methods with those obtained using other hybrid algorithms for 11 TPs. Specifically, the table lists the numerical results obtained using the Nelder-Mead simplex search and PSO (NMPSO) and GAPSO methods taken from [9], those obtained using particle swarm ant colony optimization (PSACO), the continuous hybrid algorithm (CHA), and continuous tabu simplex search (CTSS) taken from [10], and those obtained using DEPSO, AMPSO1, and AMPSO2 taken from [12]. Table 9 indicates that the RGAPSO and AIAPSO methods yield MEs of superior accuracy to those obtained using the NMPSO, GAPSO, DEPSO, AMPSO1, AMPSO2, CHA, and CTSS approaches for TPs 2, 3, 4, 5, 7, 9, 11, and 12, and that the RGAPSO and AIAPSO approaches yield MEs of superior accuracy to those obtained using the PSACO method for TPs 4, 5, 7, 9, 10, 11, 12, 13, and 14. Table 10 compares the percentage success rates of the proposed RGAPSO and AIAPSO approaches with those of the hybrid algorithms for 11 TPs, indicating that all algorithms except the CHA and CTSS methods achieved identical performance (a 100% success rate) for all TPs.
 
(“—” denotes unavailable information). 
 
(“—” denotes unavailable information). 
4.4. Summary of Results
The proposed RGAPSO and AIAPSO algorithms have the following benefits. (1) Parameter manipulation of the internal PSO algorithm is based on the solved UGO problems: owing to their ability to efficiently solve UGO problems, the external RGA and AIA approaches substitute for trial and error in manipulating the parameters (c_1, c_2, and K). (2) Besides obtaining the optimum parameter settings of the internal PSO algorithm, the RGAPSO and AIAPSO algorithms can yield a global minimum for an UGO problem. (3) Besides outperforming some published hybrid SGO methods, the proposed RGAPSO and AIAPSO approaches reduce the parametrization of the internal PSO algorithm, despite being more complex than individual SGO approaches.
The proposed RGAPSO and AIAPSO algorithms are limited in that they cannot solve high-dimensional UGO problems (such as ZA_{50}). Future work will focus on increasing the diversity of the particle swarm of the internal PSO method by applying mutation to solve high-dimensional UGO problems.
5. Conclusions
This work developed the RGAPSO and AIAPSO algorithms. The performances of the proposed RGAPSO and AIAPSO approaches were evaluated using a set of benchmark UGO problems. Numerical results indicate that the proposed RGAPSO and AIAPSO methods converge to the global minimum of each test UGO problem and obtain the best parameter settings of the internal PSO algorithm. Moreover, the numerical results obtained using the RGAPSO and AIAPSO algorithms are superior to those obtained using many alternative hybrid SGO methods. The RGAPSO and AIAPSO methods can thus be considered efficient SGO approaches for solving standard-dimensional UGO problems.
Appendix
(1) Six-hump camel back (SHCB) (two variables) [17]: one global minimum, attained at two different points.
(2) Goldstein-Price (GP) (two variables) [9, 10]: four local minima; one global minimum.
(3) Easom (ES) (two variables) [9]: several local minima (exact number unspecified in the usual literature); one global minimum.
(4) B2 (two variables) [9, 10, 17]: several local minima (exact number unspecified in the usual literature); one global minimum.
(5) De Jong (DJ) (three variables) [9, 10]: one global minimum.
(6) Booth (BO) (two variables) [26]: one global minimum.
(7) Branin RCOC (RC) (two variables) [9, 10]: no local minima; three global minima.
(8) Rastrigin (RA) (two variables) [10]: 50 local minima; one global minimum.
(9) Rosenbrock (RS_N) (N variables) [9, 10]: two instances were considered (RS_2 and RS_5); several local minima (exact number unspecified in the usual literature); one global minimum.
(10) Shubert (SH) (two variables) [9, 10]: 760 local minima; 18 global minima.
(11) Zakharov (ZA_N) (N variables) [9, 10]: four instances were considered (ZA_2, ZA_5, ZA_10, and ZA_30); several local minima (exact number unspecified in the usual literature); one global minimum.

Conflict of Interests
The author confirms that he does not have a conflict of interest with the MATLAB software.
Acknowledgment
The author would like to thank the National Science Council of the Republic of China, Taiwan, for financially supporting this research under Contract no. NSC 100-2622-E-262-006-CC3.
References
[1] W. Y. Yang, W. Cao, T.-S. Chung, and J. Morris, Applied Numerical Methods Using MATLAB, John Wiley & Sons, Hoboken, NJ, USA, 2005.
[2] C. Hamzaçebi, “Improving genetic algorithms' performance by local search for continuous function optimization,” Applied Mathematics and Computation, vol. 196, no. 1, pp. 309–317, 2008.
[3] C. C. Chen, “Two-layer particle swarm optimization for unconstrained optimization problems,” Applied Soft Computing Journal, vol. 11, no. 1, pp. 295–304, 2011.
[4] X. Zhao, “A perturbed particle swarm algorithm for numerical optimization,” Applied Soft Computing Journal, vol. 10, no. 1, pp. 119–124, 2010.
[5] M. D. Toksari, “Minimizing the multimodal functions with Ant Colony Optimization approach,” Expert Systems with Applications, vol. 36, no. 3, pp. 6030–6035, 2009.
[6] J. Kelsey and J. Timmis, “Immune inspired somatic contiguous hypermutation for function optimisation,” in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '03), pp. 207–208, Chicago, Ill, USA, 2003.
[7] M. Gang, Z. Wei, and C. Xiaolin, “A novel particle swarm optimization algorithm based on particle migration,” Applied Mathematics and Computation, vol. 218, no. 11, pp. 6620–6626, 2012.
[8] H. Poorzahedy and O. M. Rouhani, “Hybrid meta-heuristic algorithms for solving network design problem,” European Journal of Operational Research, vol. 182, no. 2, pp. 578–596, 2007.
[9] Y. T. Kao and E. Zahara, “A hybrid genetic algorithm and particle swarm optimization for multimodal functions,” Applied Soft Computing Journal, vol. 8, no. 2, pp. 849–857, 2008.
[10] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, “Particle swarm and ant colony algorithms hybridized for improved continuous optimization,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[11] M. R. Chen, X. Li, X. Zhang, and Y. Z. Lu, “A novel particle swarm optimizer hybridized with extremal optimization,” Applied Soft Computing Journal, vol. 10, no. 2, pp. 367–373, 2010.
[12] R. Thangaraj, M. Pant, A. Abraham, and P. Bouvry, “Particle swarm optimization: hybridization perspectives and experimental illustrations,” Applied Mathematics and Computation, vol. 217, no. 12, pp. 5208–5226, 2011.
[13] Z. H. Zhan, J. Zhang, Y. Li, and H. S. H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 39, no. 6, pp. 1362–1381, 2009.
[14] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
[15] M. Jiang, Y. P. Luo, and S. Y. Yang, “Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm,” Information Processing Letters, vol. 102, no. 1, pp. 8–16, 2007.
[16] J. Y. Wu, “Solving constrained global optimization problems by using hybrid evolutionary computing and artificial life approaches,” Mathematical Problems in Engineering, vol. 2012, Article ID 841410, 36 pages, 2012.
[17] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, New York, NY, USA, 1999.
[18] C. R. Houck, J. A. Joines, and M. G. Kay, “A genetic algorithm for function optimization: a MATLAB implementation,” Tech. Rep. NCSU-IE TR 95-09, North Carolina State University, Raleigh, NC, USA, 1995.
[19] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, WA, Australia, December 1995.
[20] Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, Anchorage, Alaska, USA, May 1998.
[21] M. Clerc, “The swarm and the queen: towards a deterministic and adaptive particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1951–1957, Washington, DC, USA, 1999.
[22] M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
[23] A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, 2005.
[24] J. Y. Wu, “Solving constrained global optimization via artificial immune system,” International Journal on Artificial Intelligence Tools, vol. 20, no. 1, pp. 1–27, 2011.
[25] L. N. de Castro and F. J. Von Zuben, “Artificial Immune Systems—Part I—Basic Theory and Applications,” FEEC/Universidade Estadual de Campinas, Campinas, Brazil, 1999, ftp://ftp.dca.fee.unicamp.br/pub/docs/vonzuben/tr_dca/trdca0199.pdf.
[26] S. K. S. Fan and E. Zahara, “A hybrid simplex search and particle swarm optimization for unconstrained optimization,” European Journal of Operational Research, vol. 181, no. 2, pp. 527–548, 2007.
Copyright
Copyright © 2013 Jui-Yu Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.