Research Article  Open Access
JuiYu Wu, "Solving Constrained Global Optimization Problems by Using Hybrid Evolutionary Computing and Artificial Life Approaches", Mathematical Problems in Engineering, vol. 2012, Article ID 841410, 36 pages, 2012. https://doi.org/10.1155/2012/841410
Solving Constrained Global Optimization Problems by Using Hybrid Evolutionary Computing and Artificial Life Approaches
Abstract
This work presents a hybrid real-coded genetic algorithm with a particle swarm optimization (RGAPSO) algorithm and a hybrid artificial immune algorithm with a PSO (AIAPSO) algorithm for solving 13 constrained global optimization (CGO) problems, including six nonlinear programming and seven generalized polynomial programming optimization problems. The external RGA and AIA approaches are used to optimize the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability of an internal PSO algorithm, and the CGO problems are then solved using the internal PSO algorithm. The performances of the proposed RGAPSO and AIAPSO algorithms are evaluated using 13 CGO problems. Moreover, the numerical results obtained using the proposed RGAPSO and AIAPSO algorithms are compared with those obtained using published individual GA and AIA approaches. Experimental results indicate that the proposed RGAPSO and AIAPSO algorithms converge to a global optimum solution of a CGO problem. Furthermore, the optimum parameter settings of the internal PSO algorithm can be obtained using the external RGA and AIA approaches. Also, the proposed RGAPSO and AIAPSO algorithms outperform some published individual GA and AIA approaches. Therefore, the proposed RGAPSO and AIAPSO algorithms are highly promising stochastic global optimization methods for solving CGO problems.
1. Introduction
Many scientific, engineering, and management problems can be expressed as constrained global optimization (CGO) problems of the form

minimize f(x)
subject to g_j(x) <= 0, j = 1, 2, ..., J,
h_k(x) = 0, k = 1, 2, ..., K,
x_i^l <= x_i <= x_i^u, i = 1, 2, ..., N,

where f(x) denotes an objective function; g_j(x) represents a set of nonlinear inequality constraints; h_k(x) refers to a set of nonlinear equality constraints; x = (x_1, x_2, ..., x_N) represents a vector of decision variables that take real values, and each decision variable x_i is constrained by its lower and upper boundaries [x_i^l, x_i^u]; N is the total number of decision variables. For instance, generalized polynomial programming (GPP) is a class of nonlinear programming (NLP) problem. A GPP formulation has a nonconvex objective function subject to nonconvex inequality constraints and a possibly disjoint feasible region. The GPP approach has been successfully used to solve problems including alkylation process design, heat exchanger design, optimal reactor design [1], the inventory decision problem (economic production quantity) [2], process synthesis and the design of separations, phase equilibrium, nonisothermal complex reactor networks, and molecular conformation [3].
Traditional local NLP optimization approaches based on gradient algorithms are inefficient for solving CGO problems when the objective function is nondifferentiable. Global optimization methods can be divided into deterministic and stochastic methods [4]. Often involving a sophisticated optimization process, deterministic global optimization methods typically make assumptions regarding the problem to be solved [5]. Stochastic global optimization methods, which require neither gradient information nor numerous assumptions, have therefore received considerable attention. For instance, Sun et al. [6] devised an improved vector particle swarm optimization (PSO) algorithm with a constraint-preserving method to solve CGO problems. Furthermore, Tsoulos [7] developed a real-coded genetic algorithm (RGA) with a penalty function approach for solving CGO problems. Additionally, Deep and Dipti [8] presented a self-organizing GA with a tournament selection method for solving CGO problems. Meanwhile, Wu and Chung [9] developed an RGA with a static penalty function approach for solving GPP optimization problems. Finally, Wu [10] introduced an artificial immune algorithm (AIA) with an adaptive penalty function method to solve CGO problems.
Zadeh [11] defined "soft computing" as the synergistic power of two or more fused computational intelligence (CI) schemes, which can be divided into several branches: granular computing (e.g., fuzzy sets, rough sets, and probabilistic reasoning), neurocomputing (e.g., supervised, unsupervised, and reinforcement neural learning algorithms), evolutionary computing (e.g., GAs, genetic programming, and PSO algorithms), and artificial life (e.g., artificial immune systems) [12]. Besides outperforming individual algorithms on certain problems, hybrid algorithms can solve general problems more efficiently [13]. Therefore, hybrid CI approaches have recently attracted considerable attention as a promising field of research. Various hybrid evolutionary computing (GA and PSO methods) and artificial life (such as AIA methods) approaches have been developed for solving optimization problems. These hybrid algorithms focus on developing diverse candidate solutions (such as chromosomes and particles) of a population/swarm to solve optimization problems more efficiently. They use two different algorithms to create diverse candidate solutions through their specific operations and then merge these solutions to increase the diversity of the candidate population. For instance, Abd-El-Wahed et al. [14] developed an integrated PSO algorithm and GA to solve nonlinear optimization problems. Additionally, Kuo and Han [15] presented a hybrid GA and PSO algorithm for bilevel linear programming to solve a supply chain distribution problem. Furthermore, Shelokar et al. [16] presented a hybrid PSO method and ant colony optimization method for solving continuous optimization problems. Finally, Hu et al. [17] developed an immune cooperative PSO algorithm for solving the fault-tolerant routing problem.
In contrast to the above hybrid CI algorithms, this work optimizes the parameter settings of one individual CI method by using another individual CI algorithm. A standard PSO algorithm has certain limitations [17, 18]. For instance, a PSO algorithm includes many parameters that must be set, such as the cognitive parameter, social parameter, and constriction coefficient. In practice, the optimal parameter settings of a PSO algorithm are tuned by trial and error, and prior knowledge is required to successfully manipulate the cognitive parameter, social parameter, and constriction coefficient. The exploration and exploitation capabilities of a PSO algorithm are limited by suboptimal parameter settings. Moreover, conventional PSO methods suffer from premature convergence, rapidly losing diversity during optimization.
Fortunately, the optimization of the parameter settings of a conventional PSO algorithm can be considered an unconstrained global optimization (UGO) problem, and the diversity of the candidate solutions of the PSO method can be increased using a multi-non-uniform mutation operation [19]. Moreover, the parameters of the GA and AIA methods are easy to manipulate without prior knowledge. Therefore, to overcome the limitations of a standard PSO algorithm, this work develops two hybrid CI algorithms to solve CGO problems efficiently. The first is a hybrid RGA and PSO (RGAPSO) algorithm, while the second is a hybrid AIA and PSO (AIAPSO) algorithm. The proposed RGAPSO and AIAPSO algorithms solve two optimization problems simultaneously. The UGO problem (optimization of the cognitive parameter, social parameter, constriction coefficient, penalty parameter, and mutation probability of an internal PSO algorithm based on a penalty function approach) is optimized using the external RGA and AIA approaches, respectively. A CGO problem is then solved using the internal PSO algorithm. The performances of the proposed RGAPSO and AIAPSO algorithms are evaluated using a set of CGO problems (i.e., six benchmark NLP and seven GPP optimization problems).
The rest of this paper is organized as follows. Section 2 describes the RGA, PSO algorithm, AIA, and penalty function approaches. Section 3 then introduces the proposed RGAPSO and AIAPSO algorithms. Next, Section 4 compares the experimental results of the proposed RGAPSO and AIAPSO algorithms with those of various published individual GAs and AIAs [9, 10, 20–22] and hybrid algorithms [23, 24]. Finally, conclusions are drawn in Section 5.
2. Related Works
2.1. Real-Coded Genetic Algorithm
GAs are stochastic global optimization methods based on the concepts of natural selection; they use three genetic operators, that is, selection, crossover, and mutation, to explore and exploit the solution space. An RGA outperforms a binary-coded GA in solving continuous function optimization problems [19]. This work thus describes the operators of an RGA [25].
2.1.1. Selection
A selection operation selects strong individuals from a current population based on their fitness function values and then reproduces these individuals into a crossover pool. The several selection operations developed include the roulette wheel, ranking, and tournament methods [19, 25]. This work uses the normalized geometric ranking method, as follows:

P_i = q'(1 - q)^(r_i - 1), with q' = q / (1 - (1 - q)^(ps_RGA)),

where P_i is the probability of selecting individual i; q is the probability of choosing the best individual; r_i is the ranking of individual i based on fitness value, where 1 represents the best; and ps_RGA is the population size of the RGA.
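As a concrete illustration, the normalized geometric ranking selection can be sketched in Python; the value q = 0.08 and the roulette-style draw over the ranking probabilities are illustrative assumptions, not the paper's exact settings.

```python
import random

def geometric_ranking_probs(pop_size, q=0.08):
    """Normalized geometric ranking: rank 1 is the best individual.

    q is the probability of choosing the best individual (an assumed
    typical setting, not necessarily the paper's value).
    """
    q_norm = q / (1.0 - (1.0 - q) ** pop_size)
    return [q_norm * (1.0 - q) ** (rank - 1) for rank in range(1, pop_size + 1)]

def select_parent(ranked_population, probs):
    """Roulette-style draw over the ranking probabilities.

    ranked_population must be sorted best-first, aligned with probs.
    """
    r, cum = random.random(), 0.0
    for individual, p in zip(ranked_population, probs):
        cum += p
        if r <= cum:
            return individual
    return ranked_population[-1]
```

Because the probabilities are normalized, they sum to one, and better-ranked individuals are strictly more likely to enter the crossover pool.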
2.1.2. Crossover
While exploring the solution space by creating new offspring, the crossover operation randomly selects two parents from the crossover pool and then uses these two parents to generate two new offspring. This operation is repeated until ps_RGA/2 pairs of offspring have been created. The whole arithmetic crossover is easily implemented, as follows:

x'_1 = lambda*x_1 + (1 - lambda)*x_2, x'_2 = (1 - lambda)*x_1 + lambda*x_2,

where x_1 and x_2 are the parents, x'_1 and x'_2 are the offspring, and lambda is a uniform random number in the interval [0, 1].
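A minimal sketch of the whole arithmetic crossover, assuming real-valued parents of equal length:

```python
import random

def whole_arithmetic_crossover(parent1, parent2):
    """Whole arithmetic crossover: both offspring are convex
    combinations of the two parents with one shared uniform weight."""
    lam = random.random()  # uniform random number in [0, 1]
    child1 = [lam * a + (1 - lam) * b for a, b in zip(parent1, parent2)]
    child2 = [(1 - lam) * a + lam * b for a, b in zip(parent1, parent2)]
    return child1, child2
```

A useful property of this operator is that each offspring coordinate stays within the interval spanned by the parents, so box constraints on the decision variables are never violated by crossover alone.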
2.1.3. Mutation
A mutation operation can increase the diversity of the individuals (candidate solutions). Multi-non-uniform mutation is described as follows:

x'_i = x_i + (x_i^u - x_i)[1 - r^((1 - gen/T_RGA)^b)] if a random digit is 0,
x'_i = x_i - (x_i - x_i^l)[1 - r^((1 - gen/T_RGA)^b)] if the random digit is 1,

where b is the perturbation factor, r is a uniform random variable in the interval [0, 1], T_RGA is the maximum generation of the RGA, gen is the current generation of the RGA, x_i is the current decision variable, and x'_i is the trial candidate solution.
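The multi-non-uniform mutation can be sketched as follows; the perturbation factor b = 2 and the fair coin flip choosing the perturbation direction are common choices assumed here rather than the paper's exact settings.

```python
import random

def multi_non_uniform_mutation(x, lower, upper, gen, max_gen, b=2.0):
    """Multi-non-uniform mutation: every decision variable is perturbed,
    and the perturbation magnitude shrinks as gen -> max_gen, so the
    operator moves from a uniform search to local fine-tuning.

    b is the perturbation (shape) factor; b = 2 is an assumed default.
    """
    def delta(y):
        # Shrinking perturbation: at gen == max_gen this is exactly 0.
        r = random.random()
        return y * (1.0 - r ** ((1.0 - gen / max_gen) ** b))

    trial = []
    for xi, lo, hi in zip(x, lower, upper):
        if random.random() < 0.5:
            trial.append(xi + delta(hi - xi))  # push toward upper bound
        else:
            trial.append(xi - delta(xi - lo))  # push toward lower bound
    return trial
```

Note that the trial solution always stays within the variable bounds, and in the final generation the perturbation vanishes entirely, leaving only fine-tuning in earlier generations.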
2.2. Particle Swarm Optimization
Kennedy and Eberhart [26] first introduced a conventional PSO algorithm, which is inspired by the social behavior of bird flocks or fish schools. Like GAs, a PSO algorithm is a population-based algorithm; a population of candidate solutions is called a particle swarm. The particle velocities can be updated by (2.5), as follows:

v_{i,d}^{gen+1} = v_{i,d}^{gen} + c_1 r_1 (p_{i,d}^{gen} - x_{i,d}^{gen}) + c_2 r_2 (p_{g,d}^{gen} - x_{i,d}^{gen}),

where v_{i,d}^{gen} is the particle velocity of decision variable d of particle i at generation gen; c_1 is the cognitive parameter; c_2 is the social parameter; x_{i,d}^{gen} is the particle position of decision variable d of particle i at generation gen; r_1 and r_2 are independent uniform random numbers in the interval [0, 1] at generation gen; p_i^{gen} is the best local solution at generation gen; p_g^{gen} is the best global solution at generation gen; and i = 1, 2, ..., ps_PSO, where ps_PSO is the population size of the PSO algorithm.
The particle positions can be computed using (2.6), as follows:

x_{i,d}^{gen+1} = x_{i,d}^{gen} + v_{i,d}^{gen+1}.
Shi and Eberhart [27] developed a modified PSO algorithm by incorporating an inertia weight (w) into (2.7) to control the exploration and exploitation capabilities of a PSO algorithm, as follows:

v_{i,d}^{gen+1} = w v_{i,d}^{gen} + c_1 r_1 (p_{i,d}^{gen} - x_{i,d}^{gen}) + c_2 r_2 (p_{g,d}^{gen} - x_{i,d}^{gen}).
A constriction coefficient (K) was inserted into (2.8) to balance the exploration and exploitation tradeoff [28–30], as follows:

v_{i,d}^{gen+1} = K[v_{i,d}^{gen} + c_1 r_1 (p_{i,d}^{gen} - x_{i,d}^{gen}) + c_2 r_2 (p_{g,d}^{gen} - x_{i,d}^{gen})],

where r_1 and r_2 are uniform random variables in the interval [0, 1].
This work considers the constriction coefficient K together with the cognitive and social parameters to update the particle velocities, as in (2.10), where K is gradually decreased over the generations (a larger decrement value reduces K more quickly) and T_PSO is the maximum generation of the PSO algorithm.
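A single-particle sketch of the constriction-coefficient velocity update and the position update is given below. Because the paper's exact schedule for decreasing K over the generations is not reproduced here, this sketch simply takes K as an argument, and c_1 = c_2 = 2.05 are common defaults assumed for illustration.

```python
import random

def pso_step(x, v, pbest, gbest, K, c1=2.05, c2=2.05):
    """One PSO update for a single particle (lists of decision variables).

    K is the constriction coefficient; c1 and c2 are the cognitive and
    social parameters. The caller is responsible for any schedule that
    decreases K with the generation count.
    """
    r1, r2 = random.random(), random.random()
    # Velocity update with constriction coefficient.
    v_new = [K * (vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi))
             for vi, xi, pb, gb in zip(v, x, pbest, gbest)]
    # Position update: move along the new velocity.
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

One sanity check on this form: a particle whose position already coincides with both its personal best and the global best, and whose velocity is zero, does not move.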
According to (2.10), the optimal values of the constriction coefficient and the cognitive and social parameters are difficult to obtain through trial and error. This work thus optimizes these parameter settings by using the RGA and AIA approaches.
2.3. Artificial Immune Algorithm
Wu [10] presented an AIA based on clonal selection and immune network theories to solve CGO problems. The AIA approach comprises selection, hypermutation, receptor editing, and bone marrow operations. The selection operation is performed to reproduce strong antibodies (Abs). Also, diverse Abs are created using hypermutation, receptor editing, and bone marrow operations, as described in the following subsections.
2.3.1. Ab and Ag Representation
In the human immune system, an antigen (Ag) has on its surface multiple epitopes (antigenic determinants), which can be recognized by various Abs with paratopes (recognizers). In the AIA approach, an Ag represents the known parameters of a solved problem. The Abs are the candidate solutions (i.e., vectors of decision variables) of the solved problem. The quality of a candidate solution is evaluated using an Ab-Ag affinity that is derived from the value of an objective function of the solved problem.
2.3.2. Selection Operation
The selection operation, which is based on the immune network principle [31], controls the number of antigen-specific Abs. This operation is defined according to Ab-Ag and Ab-Ab recognition information, as in (2.11): the probability that an Ab recognizes the best Ab (the Ab with the highest Ab-Ag affinity) is computed over the decision variables of the Abs in a repertoire (population) of the AIA.
The best Ab is recognized by the other Abs in the current Ab repertoire. A large recognition probability implies that an Ab can effectively recognize the best Ab. Each Ab whose recognition probability is equal to or larger than the threshold degree is reproduced to generate an intermediate Ab repertoire.
2.3.3. Hypermutation Operation
Multi-non-uniform mutation [19] is used as the somatic hypermutation operation, which can be expressed as follows:

Ab'_i = Ab_i + (x_i^u - Ab_i)[1 - r^((1 - gen/T_AIA)^b)] if a random digit is 0,
Ab'_i = Ab_i - (Ab_i - x_i^l)[1 - r^((1 - gen/T_AIA)^b)] if the random digit is 1,

where b is the perturbation factor, gen is the current generation of the AIA, T_AIA is the maximum generation number of the AIA, and r is a uniform random number in the interval [0, 1].
This operation has two tasks, that is, a uniform search and local fine-tuning.
2.3.4. Receptor Editing Operation
A receptor editing operation is developed using the standard Cauchy distribution C(0, 1), in which the location parameter is zero and the scale parameter is one. Receptor editing is performed using Cauchy random variables generated from C(0, 1): owing to their heavy tails, they can provide a large jump in the Ab-Ag affinity landscape, increasing the probability of escaping from a local optimum of the Ab-Ag affinity landscape. Cauchy receptor editing, as defined in (2.13), adds a vector of Cauchy random variables, scaled by a factor derived from a uniform random number in the interval [0, 1], to the current Ab. This operation is employed in local fine-tuning and large perturbation.
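A minimal sketch of Cauchy receptor editing follows; the (1 - gen/T_AIA) damping of the Cauchy step is an assumption about how the scaling factor behaves over the generations, made so that early generations allow large jumps while late generations only fine-tune.

```python
import math
import random

def cauchy_receptor_editing(ab, gen, max_gen):
    """Receptor editing with standard Cauchy C(0, 1) perturbations.

    Heavy Cauchy tails occasionally produce very large jumps that help
    escape local optima. The linear damping over generations is an
    assumed schedule, not necessarily the paper's exact form.
    """
    scale = 1.0 - gen / max_gen
    # Inverse-CDF sampling of C(0, 1): tan(pi * (U - 1/2)) for U ~ U(0, 1).
    cauchy = [math.tan(math.pi * (random.random() - 0.5)) for _ in ab]
    return [xi + scale * ci for xi, ci in zip(ab, cauchy)]
```

With this schedule, the perturbation vanishes at the final generation, and any edited Ab should afterwards be clipped to the variable bounds before its affinity is evaluated.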
2.3.5. Bone Marrow Operation
The paratope of an Ab can be generated by recombining gene segments [32]. Therefore, based on this metaphor, diverse Abs are synthesized using a bone marrow operation. This operation randomly chooses two Abs from the intermediate Ab repertoire and a recombination point from the gene segments of the paratopes of the selected Abs. The selected gene segments (e.g., a gene segment of Ab 1 and a gene segment of Ab 2) are reproduced to create a library of gene segments. The selected gene segments in the paratope are then deleted. The new Ab 1 is formed by inserting, at the recombination point, the gene segment of Ab 2 from the library plus a random variable created from the standard normal distribution N(0, 1). The literature details the implementation of the bone marrow operation [10].
2.4. Penalty Function Methods
Stochastic global optimization approaches, including GAs, AIAs, and PSO, are naturally unconstrained optimization methods. Penalty function methods, which are constraint handling approaches, are commonly used to create feasible solutions to a CGO problem by transforming it into an unconstrained optimization problem. Two popular classes of penalty functions exist, namely, exterior and interior penalty functions. Exterior penalty functions use an infeasible solution as a starting point, and convergence is from the infeasible region to the feasible one. Interior penalty functions start from a feasible solution and then move from the feasible region to the constraint boundaries. Exterior penalty functions are favored over interior penalty functions because they do not require a feasible starting point and are easily implemented. The exterior penalty functions developed to date include static, dynamic, adaptive, and death penalty functions [33]. This work uses the form of a static penalty function, as follows:

phi(x) = f(x) + rho [Sum_j max(0, g_j(x))^2 + Sum_k h_k(x)^2],

where phi(x) is the pseudo-objective function obtained using the original objective function plus a penalty term, and rho is the penalty parameter.
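The static exterior penalty can be sketched as follows; the squared-violation form and the tolerance used to relax equality constraints into inequalities are assumptions about the exact penalty term, made explicit in the comments.

```python
def pseudo_objective(f, g_list, h_list, x, rho, eps=1e-5):
    """Static exterior penalty: pseudo-objective = f(x) + rho * penalty.

    Inequalities g_j(x) <= 0 contribute max(0, g_j(x))^2; equalities
    h_k(x) = 0 are relaxed to |h_k(x)| <= eps, a common treatment
    (both the squared form and eps are assumptions for this sketch).
    """
    penalty = sum(max(0.0, g(x)) ** 2 for g in g_list)
    penalty += sum(max(0.0, abs(h(x)) - eps) ** 2 for h in h_list)
    return f(x) + rho * penalty
```

A feasible point incurs no penalty, so the pseudo-objective reduces to the original objective there; infeasible points are pushed back toward the feasible region in proportion to rho.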
Unfortunately, the penalty function scheme is limited by the need to fine-tune the penalty parameter [8]. To overcome this limitation, this work attempts to find the optimum penalty parameter for each CGO problem by using the RGA and AIA approaches. Additionally, to obtain high-quality RGAPSO and AIAPSO solutions that are accurate to at least five decimal places for the violation of each constraint of a specific CGO problem, the penalty parameter is restricted to a bounded search space.
3. Method
3.1. RGAPSO Algorithm
Figure 1 shows the pseudocode of the proposed RGAPSO algorithm. The external RGA approach is used to search for the best parameter settings of the internal PSO algorithm, and the internal PSO algorithm is employed to solve CGO problems.
External RGA
Step 1 (initialize the parameter settings). Parameter settings are given, such as the population size, crossover probability, and mutation probability of the external RGA approach, as well as the lower and upper boundaries of the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability of the internal PSO algorithm. The candidate solutions (individuals) of the external RGA represent the optimized parameters of the internal PSO algorithm. Finally, Figure 2 illustrates the candidate solution of the external RGA approach.
Step 2 (compute the fitness function value). The fitness function value of the external RGA approach is the best objective function value obtained from the best solution of each internal PSO algorithm execution, as defined by (3.1).
Candidate solution of the external RGA approach is incorporated into the internal PSO algorithm, and a CGO problem is then solved using the internal PSO algorithm, which is executed as follows.
Internal PSO Algorithm
Step (1) (create an initial particle swarm). An initial particle swarm is created by sampling the decision variables from their boundaries [x_i^l, x_i^u] of a CGO problem. A particle represents a candidate solution of a CGO problem, as shown in Figure 3.

Step (2) (calculate the objective function value). According to (2.14), the pseudo-objective function value of the internal PSO algorithm is defined by (3.2).
Step (3) (update the particle velocity and position). The particle position and velocity can be updated using (2.6) and (2.10), respectively.

Step (4) (implement a mutation operation). The standard PSO algorithm lacks the evolution operations of GAs, such as crossover and mutation. To maintain the diversity of the particles, this work uses the multi-non-uniform mutation operator defined by (2.4).

Step (5) (perform an elitist strategy). A new particle swarm is created from internal steps (3) and (4). The pseudo-objective function value of each candidate solution j (particle j) in the particle swarm is evaluated. Here, a pairwise comparison is made between the values of the candidate solutions in the new and current particle swarms. If candidate solution j in the new particle swarm is superior to candidate solution j in the current particle swarm, the strong candidate solution in the new particle swarm replaces the candidate solution in the current particle swarm. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then updated to the particle swarm of the next generation.
Internal steps (2) to (5) are repeated until the maximum generation of the internal PSO algorithm is reached.
End
Step 3 (implement selection operation). The parents in a crossover pool are selected using (2.1).
Step 4 (perform crossover operation). In GAs, the crossover operation performs a global search. Thus, the crossover probability usually exceeds 0.5. Additionally, candidate solutions are created using (2.3).
Step 5 (conduct mutation operation). In GAs, the mutation operation implements a local search. Additionally, a solution space is exploited using (2.4).
Step 6 (implement an elitist strategy). This work updates the population using an elitist strategy. If the fitness of a candidate solution in the new population is better than that of its counterpart in the current population, the weak candidate solution is replaced. Conversely, if the fitness of a candidate solution in the new population is equal to or worse than that of its counterpart in the current population, the candidate solution in the current population survives. In addition to maintaining the strong candidate solutions, this strategy eliminates weak candidate solutions.
External Steps 2 to 6 are repeated until the maximum generation of the external RGA approach is reached.
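The nesting of the external RGA around the internal PSO (external Steps 1–6) can be sketched as the following skeleton. The genetic operators are deliberately reduced to random re-sampling plus the pairwise elitist update, and solve_cgo_with_pso stands in for a full internal PSO run that returns its best pseudo-objective value; both simplifications are assumptions made for illustration, not the paper's implementation.

```python
import random

def hybrid_rga_pso(solve_cgo_with_pso, bounds, pop_size=10, max_gen=20):
    """Skeleton of the RGAPSO nesting.

    Each external individual encodes internal-PSO parameters (e.g., the
    constriction coefficient, cognitive/social parameters, penalty
    parameter, and mutation probability); its fitness is the best
    pseudo-objective value the internal PSO reaches with those settings.
    """
    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    population = [random_individual() for _ in range(pop_size)]
    fitness = [solve_cgo_with_pso(ind) for ind in population]
    for _ in range(max_gen):
        # Stand-in for selection/crossover/mutation: fresh random trials.
        trial = [random_individual() for _ in range(pop_size)]
        trial_fit = [solve_cgo_with_pso(ind) for ind in trial]
        # Pairwise elitist update: keep whichever of old/new is better.
        for j in range(pop_size):
            if trial_fit[j] < fitness[j]:
                population[j], fitness[j] = trial[j], trial_fit[j]
    best = min(range(pop_size), key=lambda j: fitness[j])
    return population[best], fitness[best]
```

The key structural point is that every external fitness evaluation is itself a complete internal PSO run, which is why the hybrid is more expensive per generation than either algorithm alone.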
3.2. AIAPSO Algorithm
Figure 4 shows the pseudocode of the proposed AIAPSO algorithm, in which the external AIA approach is used to optimize the parameter settings of the internal PSO algorithm and the PSO algorithm is used to solve CGO problems.
External AIA
Step 1 (initialize the parameter settings). Several parameters must be predetermined. These include the repertoire size of the external AIA approach and the threshold for Ab-Ab recognition, as well as the lower and upper boundaries of the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability of the internal PSO algorithm. Figure 5 shows the Ab and Ag representation.
Step 2 (evaluate the Ab-Ag affinity).
Internal PSO Algorithm
The external AIA approach provides the parameter settings (the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability) for the internal PSO algorithm, subsequently leading to the implementation of internal steps (1)–(5) of the PSO algorithm. The PSO algorithm returns its best fitness value to the external AIA approach.

Step (1) (create an initial particle swarm). An initial particle swarm is created by sampling the decision variables from their boundaries [x_i^l, x_i^u] of a CGO problem. A particle represents a candidate solution of a CGO problem.

Step (2) (calculate the objective function value). Equation (3.2) is used as the pseudo-objective function value of the internal PSO algorithm.

Step (3) (update the particle velocity and position). Equations (2.6) and (2.10) can be used to update the particle position and velocity.

Step (4) (implement a mutation operation). The diversity of the particle swarm is increased using (2.4).

Step (5) (perform an elitist strategy). A new particle swarm (population) is generated from internal steps (3) and (4). The pseudo-objective function value of each candidate solution (particle) in the particle swarm is evaluated, and a pairwise comparison is made between the values of the candidate solutions in the new and current particle swarms. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then updated to the particle swarm of the next generation.
Internal steps (2) to (5) are repeated until the maximum generation of the internal PSO algorithm is reached.
End
Consistent with the Ab-Ag affinity metaphor, an Ab-Ag affinity is determined using (3.3).
Following the evaluation of the Ab-Ag affinities of the Abs in the current Ab repertoire, the Ab with the highest Ab-Ag affinity (the best Ab) is chosen to undergo the clonal selection operation in external Step 3.
Step 3 (perform clonal selection operation). To control the number of antigen-specific Abs, (2.11) is used.
Step 4 (implement Ab-Ag affinity maturation). The intermediate Ab repertoire created in external Step 3 is divided into two subsets. The Abs undergo the somatic hypermutation operation by using (2.12) when a random number is 0.5 or less; otherwise, the Abs undergo the receptor editing operation by using (2.13).
Step 5 (introduce diverse Abs). Based on the bone marrow operation, diverse Abs are created to recruit the Abs suppressed in external Step 3.
Step 6 (update an Ab repertoire). A new Ab repertoire is generated from external Steps 3–5, and the Ab-Ag affinities of the Abs in the generated Ab repertoire are evaluated. This work presents a strategy for updating the Ab repertoire. If the Ab-Ag affinity of an Ab in the new Ab repertoire exceeds that of its counterpart in the current Ab repertoire, the strong Ab in the new Ab repertoire replaces the weak Ab in the current Ab repertoire. Conversely, if the Ab-Ag affinity of an Ab in the new Ab repertoire is equal to or worse than that of its counterpart in the current Ab repertoire, the Ab in the current Ab repertoire survives. In addition to maintaining the strong Abs, this strategy eliminates nonfunctional Abs.
External Steps 2–6 are repeated until the termination criterion is satisfied.
4. Results
The 13 CGO problems were taken from other studies [1, 20, 21, 23, 34]. The set comprises six benchmark NLP problems (TPs 1–4 and 12–13) and seven GPP problems; among them, TP 5 (alkylation process design in chemical engineering), TP 6 (optimal reactor design), TP 12 (a tension/compression string design problem), and TP 13 (a pressure vessel design problem) are constrained engineering problems. These problems were used to evaluate the performances of the proposed RGAPSO and AIAPSO algorithms. The appendix describes the objective function, constraints, boundary conditions of the decision variables, and known global optimum for TPs 1–11, and further details the problem characteristics of TPs 5, 12, and 13.
The proposed RGAPSO and AIAPSO algorithms were coded in MATLAB and executed on a Pentium D 3.0 GHz personal computer. Fifty independent runs were conducted to solve each test problem (TP). The numerical results summarized include the best, median, mean, and worst objective function values obtained using the RGAPSO and AIAPSO solutions, as well as the standard deviation (S.D.), mean computational CPU times (MCCTs), and the mean absolute percentage error (MAPE), defined by

MAPE (%) = (1/N_run) Sum_{i=1}^{N_run} |(f* - f_i)/f*| x 100,

where f* is the objective function value of the known global solution, f_i denotes the values obtained from the solutions of the stochastic global optimization approaches (e.g., the RGAPSO and AIAPSO algorithms), and N_run is the number of independent runs.
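Assuming the standard MAPE definition over N independent runs, the computation is simply:

```python
def mape(f_known, f_obtained):
    """Mean absolute percentage error over a list of run results:
    MAPE = (1/N) * sum(|f_known - f_i| / |f_known|) * 100."""
    n = len(f_obtained)
    return 100.0 * sum(abs(f_known - fi) / abs(f_known) for fi in f_obtained) / n
```

A MAPE near zero therefore means that nearly every run converged to the known global optimum, which is how the comparison tables use the measure.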
Table 1 lists the parameter settings for the RGAPSO and AIAPSO algorithms.
(The table gives the lower and upper boundaries of the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability of the internal PSO algorithm.)
4.1. Comparison of the Results Obtained Using the RGAPSO and AIAPSO Algorithms
Table 2 summarizes the numerical results obtained using the proposed RGAPSO and AIAPSO algorithms for TPs 1–13. The numerical results indicate that the RGAPSO and AIAPSO algorithms can obtain the global minimum solution to TPs 1–11, since each MAPE% is small. Moreover, the best, median, worst, and S.D. of the objective function values obtained using the RGAPSO and AIAPSO solutions are identical for TPs 1, 2, 3, 4, 6, 7, 8, 9, and 11. Furthermore, the worst values obtained using the AIAPSO algorithm for TPs 5 and 13 are smaller than those obtained using the RGAPSO algorithm. Additionally, a t-test is performed for each TP, indicating that the mean values obtained using the RGAPSO and AIAPSO algorithms are statistically different for TPs 5, 10, 12, and 13, since the p value is smaller than the significance level of 0.05. Based on the results of the t-test, the AIAPSO algorithm yields better mean values than the RGAPSO algorithm for TPs 5, 12, and 13, and a worse mean value than the RGAPSO algorithm for TP 10.
(The "—" denotes unavailable information, and * represents that the mean values obtained using the RGAPSO and AIAPSO algorithms are statistically different.)
Tables 3 and 4 list the best solutions obtained using the RGAPSO and AIAPSO algorithms for TPs 1–13, respectively, indicating that every constraint is satisfied (i.e., the violation of each constraint is accurate to at least five decimal places) for every TP. Tables 5 and 6 list the best parameter settings of the internal PSO algorithm obtained using the external RGA and AIA approaches, respectively.
4.2. Comparison of the Results for the Proposed RGAPSO and AIAPSO Algorithms with Those Obtained Using the Published Individual GA and AIA Approaches and Hybrid Algorithms
Table 7 compares the numerical results of the proposed RGAPSO and AIAPSO algorithms with those obtained using published individual GA and AIA approaches for TPs 1–4. In this table, GA1 is a GA with a penalty function method, as used by Michalewicz [20]. Notably, GA2 represents a GA with a penalty function, but without any penalty parameter, as used by Deb [21]. Also, GA3 is an RGA with a static penalty function, as developed by Wu and Chung [9]. Notably, AIA1 is an AIA method called CLONALG, as proposed by Cruz-Cortés et al. [22]. Finally, AIA2 is an AIA approach based on an adaptive penalty function, as developed by Wu [10]. The numerical results of the GA1, GA2, and AIA1 methods for solving TPs 1–4 were collected from the published literature [20–22]. Furthermore, the GA1, GA2, and AIA1 approaches were executed for 350,000 objective function evaluations. To fairly compare the performances of the proposed hybrid CI algorithms and the individual GA and AIA approaches, the GA3 and AIA2 methods, the internal PSO algorithm of the RGAPSO method, and the internal PSO algorithm of the AIAPSO method were independently executed 50 times with 350,000 objective function evaluations each for solving TPs 1–4.
(The "—" denotes unavailable information.)
For solving TP 1, the median values obtained using the RGAPSO and AIAPSO algorithms are smaller than those obtained using the GA1, GA3, and AIA2 approaches, and the worst values obtained using the RGAPSO and AIAPSO algorithms are smaller than those obtained using the GA1, GA2, GA3, AIA1, and AIA2 approaches. For solving TP 2, the median and worst values obtained using the RGAPSO and AIAPSO algorithms are smaller than those obtained using the GA3 method. For solving TP 3, the median and worst values obtained using the RGAPSO and AIAPSO algorithms are smaller than those obtained using the GA1 and GA3 approaches. For solving TP 4, the median and worst values obtained using the RGAPSO and AIAPSO algorithms are smaller than those obtained using the GA3 method, and the worst values obtained using the RGAPSO and AIAPSO algorithms are smaller than those obtained using the AIA1 approach. Moreover, the GA3 method obtained the worst MAPE% for TP 1 and TP 4. Table 8 lists the results of the t-test for the GA3, AIA2, RGAPSO, and AIAPSO methods. This table indicates that the mean values of the RGAPSO and AIAPSO algorithms are not statistically different, since the p values are larger than the significance level of 0.05, whereas the mean values of GA3 versus AIA2, GA3 versus RGAPSO, GA3 versus AIAPSO, AIA2 versus RGAPSO, and AIA2 versus AIAPSO are statistically different. According to Tables 7 and 8, the mean values obtained using the RGAPSO and AIAPSO algorithms are better than those obtained using the GA3 and AIA1 methods for TPs 1–4.
(* represents that the mean values obtained using two algorithms are statistically different.)
Table 9 compares the numerical results obtained using the proposed RGAPSO and AIAPSO algorithms with those obtained using the AIA2 and GA3 methods for solving TPs 5–13. The AIA2 and GA3 methods, the internal PSO algorithm of the RGAPSO approach, and the internal PSO algorithm of the AIAPSO approach were independently executed 50 times with 300,000 objective function evaluations each. Table 9 shows that the MAPE% obtained using the proposed RGAPSO and AIAPSO algorithms is close to or smaller than 1% for TPs 5–11, indicating that the proposed RGAPSO and AIAPSO algorithms can converge to the global optimum for TPs 5–11. Moreover, the worst values obtained using the RGAPSO and AIAPSO algorithms are significantly smaller than those obtained using the GA3 method for TPs 5, 6, 11, and 13. Additionally, the worst values obtained using the RGAPSO and AIAPSO algorithms are smaller than those obtained using the AIA2 method for TPs 5, 6, and 13.
(The "—" denotes unavailable information.)
Table 10 summarizes the results of the t-test for TPs 5–13. According to Tables 9 and 10, the mean values of the RGAPSO and AIAPSO algorithms are smaller than those of the GA3 approach for TPs 5, 6, 7, 8, 9, 10, 11, and 13. Moreover, the mean values obtained using the RGAPSO and AIAPSO algorithms are smaller than those of the AIA2 approach for TPs 6, 7, 8, 10, and 12. Overall, according to Tables 7–10, the performance of the hybrid CI methods is superior to that of the individual GA and AIA methods.
 
(A marked entry indicates that the mean values obtained using the two algorithms are statistically different.)
TPs 12 and 13 have been solved by many hybrid algorithms. For instance, Huang et al. [23] presented a coevolutionary differential evolution (CDE) method that integrates a coevolution mechanism with a DE approach. Zahara and Kao [24] developed a hybrid of the Nelder-Mead simplex search method and a PSO algorithm (NMPSO). Table 11 compares the numerical results of the CDE, NMPSO, RGAPSO, and AIAPSO methods for solving TPs 12 and 13. The table indicates that the best, mean, and worst values obtained using the NMPSO method are superior to those obtained using the CDE, RGAPSO, and AIAPSO approaches for TP 12. Moreover, for TP 13, the best, mean, and worst values obtained using the AIAPSO algorithm are better than those of the CDE, NMPSO, and RGAPSO algorithms.

According to the No Free Lunch theorem [35], if algorithm A outperforms algorithm B on average for one class of problems, then the average performance of the former must be worse than that of the latter over the remaining problems. Therefore, it is unlikely that any unique stochastic global optimization approach exists that performs best for all CGO problems.
4.3. Summary of Results
The proposed RGAPSO and AIAPSO algorithms with a penalty function method have the following benefits.
(1) Parameter manipulation of the internal PSO algorithm is based on the solved CGO problems. Owing to their ability to efficiently solve an unconstrained global optimization (UGO) problem, the external RGA and AIA approaches replace trial-and-error manipulation of the parameters (the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability).
(2) Besides obtaining the optimum parameter settings of the internal PSO algorithm, the RGAPSO and AIAPSO algorithms can yield a global optimum for a CGO problem.
(3) In addition to outperforming some published individual GA and AIA approaches, the proposed RGAPSO and AIAPSO algorithms reduce the parameterization effort for the internal PSO algorithm, although they are more complex than individual GA and AIA approaches.
The proposed RGAPSO and AIAPSO algorithms have the following limitations.
(1) The proposed RGAPSO and AIAPSO algorithms increase the computational CPU time, as shown in Table 2.
(2) The proposed RGAPSO and AIAPSO algorithms are designed to solve CGO problems with continuous decision variables. Therefore, they cannot be applied to combinatorial optimization problems in manufacturing, such as job shop scheduling and quadratic assignment problems.
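The external-internal structure described above can be sketched as a nested loop: an outer search tunes the internal PSO's parameters, and each candidate parameter set is scored by running the PSO on a penalized objective. The sketch below is illustrative only: it uses a toy constrained problem (not one of the paper's TPs) and plain random search as a stand-in for the RGA/AIA outer loop.

```python
import random

def penalized(x, rho):
    # Toy CGO problem (illustrative, not from the paper):
    # minimize (x1-1)^2 + (x2-2)^2  subject to  x1 + x2 <= 2
    f = (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
    g = max(0.0, x[0] + x[1] - 2.0)        # inequality-constraint violation
    return f + rho * g ** 2                # quadratic penalty term

def pso(chi, c1, c2, rho, pm, iters=200, swarm=20, seed=0):
    # Internal PSO with constriction coefficient chi, cognitive/social
    # parameters c1/c2, penalty parameter rho, and mutation probability pm
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    X = [[rng.uniform(lo, hi) for _ in range(2)] for _ in range(swarm)]
    V = [[0.0, 0.0] for _ in range(swarm)]
    P = [x[:] for x in X]                  # personal bests
    Pf = [penalized(x, rho) for x in X]
    gb = min(range(swarm), key=lambda i: Pf[i])
    G, Gf = P[gb][:], Pf[gb]               # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(2):
                V[i][d] = chi * (V[i][d]
                                 + c1 * rng.random() * (P[i][d] - X[i][d])
                                 + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                if rng.random() < pm:      # mutation, as in the hybrid scheme
                    X[i][d] = rng.uniform(lo, hi)
            fi = penalized(X[i], rho)
            if fi < Pf[i]:
                P[i], Pf[i] = X[i][:], fi
                if fi < Gf:
                    G, Gf = X[i][:], fi
    return Gf

# External loop: random search stands in for the RGA/AIA that tunes
# (chi, c1, c2, rho, pm) for the internal PSO
rng = random.Random(1)
best = min(
    pso(chi=rng.uniform(0.4, 0.9), c1=rng.uniform(0.5, 2.5),
        c2=rng.uniform(0.5, 2.5), rho=rng.uniform(10, 1000),
        pm=rng.uniform(0.0, 0.1))
    for _ in range(10)
)
print(best)  # best penalized objective over the sampled parameter sets
```

For this toy problem the constrained minimum lies near (0.5, 1.5) with objective value 0.5, so a well-tuned internal PSO should return a penalized value close to that; the RGA/AIA outer loop would evolve the parameter sets instead of drawing them at random.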
5. Conclusions
This work presents novel RGAPSO and AIAPSO algorithms. The synergy of the RGA with a PSO algorithm and of the AIA with a PSO algorithm is demonstrated using 13 CGO problems. Numerical results indicate that, in addition to converging to the global minimum of each test CGO problem, the proposed RGAPSO and AIAPSO algorithms obtain the optimum parameter settings of the internal PSO algorithm. Moreover, the numerical results obtained using the RGAPSO and AIAPSO algorithms are superior to those obtained using alternative stochastic global optimization methods such as individual GA and AIA approaches. The RGAPSO and AIAPSO algorithms are therefore highly promising stochastic global optimization approaches for solving CGO problems.
Appendices
A. TP 1 [20, 21]
TP 1 has ten decision variables, eight inequality constraints, and 20 boundary conditions, as follows: The global solution to TP 1 is as follows:
B. TP 2 [21]
TP 2 involves five decision variables, six inequality constraints, and ten boundary conditions, as follows:
The global solution to TP 2 is
C. TP 3 [20, 21]
TP 3 has seven decision variables, four inequality constraints, and 14 boundary conditions, as follows: The global solution to TP 3 is
D. TP 4 [20, 21]
TP 4 involves 13 decision variables, nine inequality constraints, and 26 boundary conditions, as follows: The global solution to TP 4 is
E. TP 5 (Alkylation Process Design Problem in Chemical Engineering) [1]
TP 5 has seven decision variables subject to 12 nonconvex inequality constraints, two linear constraints, and 14 boundary constraints. The objective is to improve the octane number of an olefin feed by reacting it with isobutane in the presence of acid. The decision variables are the olefin feed rate (barrels/day), the acid addition rate (thousands of pounds/day), the alkylate yield (barrels/day), the acid strength, the motor octane number, the external isobutane-to-olefin ratio, and the F4 performance number: where the positive parameters are given in Table 12. The global solution to TP 5 is

F. TP 6 (Optimal Reactor Design Problem) [1]
TP 6 contains eight decision variables subject to four nonconvex inequality constraints and 16 boundary conditions, as follows: The global solution to TP 6 is
G. TP 7 [1]
TP 7 has four decision variables, two nonconvex inequality constraints, and eight boundary conditions, as follows:
The global solution to TP 7 is
H. TP 8 [1]
TP 8 contains three decision variables subject to one nonconvex inequality constraint and six boundary conditions, as follows: The global solution to TP 8 is
I. TP 9 [1]
TP 9 contains eight decision variables subject to four nonconvex inequality constraints and 16 boundary conditions, as follows: The global solution to TP 9 is