Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 841410, 36 pages
http://dx.doi.org/10.1155/2012/841410
Research Article

Solving Constrained Global Optimization Problems by Using Hybrid Evolutionary Computing and Artificial Life Approaches

Jui-Yu Wu

Department of Business Administration, Lunghwa University of Science and Technology, No. 300, Section 1, Wanshou Road, Guishan, Taoyuan County 333, Taiwan

Received 28 February 2012; Revised 15 April 2012; Accepted 19 April 2012

Academic Editor: Jung-Fa Tsai

Copyright © 2012 Jui-Yu Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This work presents a hybrid real-coded genetic algorithm with a particle swarm optimization (RGA-PSO) algorithm and a hybrid artificial immune algorithm with a PSO (AIA-PSO) algorithm for solving 13 constrained global optimization (CGO) problems, including six nonlinear programming and seven generalized polynomial programming optimization problems. External RGA and AIA approaches are used to optimize the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability of an internal PSO algorithm. CGO problems are then solved using the internal PSO algorithm. The performances of the proposed RGA-PSO and AIA-PSO algorithms are evaluated using 13 CGO problems. Moreover, numerical results obtained using the proposed RGA-PSO and AIA-PSO algorithms are compared with those obtained using published individual GA and AIA approaches. Experimental results indicate that the proposed RGA-PSO and AIA-PSO algorithms converge to a global optimum solution to a CGO problem. Furthermore, the optimum parameter settings of the internal PSO algorithm can be obtained using the external RGA and AIA approaches. Also, the proposed RGA-PSO and AIA-PSO algorithms outperform some published individual GA and AIA approaches. Therefore, the proposed RGA-PSO and AIA-PSO algorithms are highly promising stochastic global optimization methods for solving CGO problems.

1. Introduction

Many scientific, engineering, and management problems can be expressed as constrained global optimization (CGO) problems of the form

\[
\begin{aligned}
\text{Minimize } & f(\mathbf{x}), \\
\text{s.t. } & g_m(\mathbf{x}) \le 0, \quad m = 1, 2, \ldots, M, \\
& h_k(\mathbf{x}) = 0, \quad k = 1, 2, \ldots, K, \\
& x_n^l \le x_n \le x_n^u, \quad n = 1, 2, \ldots, N,
\end{aligned}
\tag{1.1}
\]

where $f(\mathbf{x})$ denotes an objective function; $g_m(\mathbf{x})$ represents a set of $M$ nonlinear inequality constraints; $h_k(\mathbf{x})$ refers to a set of $K$ nonlinear equality constraints; $\mathbf{x}$ represents a vector of real-valued decision variables, each decision variable $x_n$ being constrained by its lower and upper boundaries $[x_n^l, x_n^u]$; and $N$ is the total number of decision variables $x_n$. For instance, generalized polynomial programming (GPP) belongs to the nonlinear programming (NLP) class: a GPP problem has a nonconvex objective function subject to nonconvex inequality constraints and a possibly disjointed feasible region. The GPP approach has been successfully used to solve problems including alkylation process design, heat exchanger design, optimal reactor design [1], inventory decisions (economic production quantity) [2], process synthesis and the design of separations, phase equilibrium, nonisothermal complex reactor networks, and molecular conformation [3].

Traditional local NLP optimization approaches based on gradient algorithms are inefficient for solving CGO problems when the objective function is nondifferentiable. Global optimization methods can be divided into deterministic and stochastic methods [4]. Often involving a sophisticated optimization process, deterministic global optimization methods typically make assumptions regarding the problem to be solved [5]. Stochastic global optimization methods, which require neither gradient information nor numerous assumptions, have therefore received considerable attention. For instance, Sun et al. [6] devised an improved vector particle swarm optimization (PSO) algorithm with a constraint-preserving method to solve CGO problems. Furthermore, Tsoulos [7] developed a real-coded genetic algorithm (RGA) with a penalty function approach for solving CGO problems. Additionally, Deep and Dipti [8] presented a self-organizing GA with a tournament selection method for solving CGO problems. Meanwhile, Wu and Chung [9] developed an RGA with a static penalty function approach for solving GPP optimization problems. Finally, Wu [10] introduced an artificial immune algorithm (AIA) with an adaptive penalty function method to solve CGO problems.

Zadeh [11] defined “soft computing” as the synergistic power of two or more fused computational intelligence (CI) schemes, which can be divided into several branches: granular computing (e.g., fuzzy sets, rough sets, and probabilistic reasoning), neurocomputing (e.g., supervised, unsupervised, and reinforcement neural learning algorithms), evolutionary computing (e.g., GAs, genetic programming, and PSO algorithms), and artificial life (e.g., artificial immune systems) [12]. Besides outperforming individual algorithms on certain problems, hybrid algorithms can solve general problems more efficiently [13]. Therefore, hybrid CI approaches have recently attracted considerable attention as a promising field of research. Various hybrid evolutionary computing (GA and PSO) and artificial life (such as AIA) approaches have been developed for solving optimization problems. These hybrid algorithms focus on generating diverse candidate solutions (such as chromosomes and particles) of a population/swarm to solve optimization problems more efficiently: two different algorithms create diverse candidate solutions using their specific operations, and these candidate solutions are then merged to increase the diversity of the candidate population. For instance, Abd-El-Wahed et al. [14] developed an integrated PSO algorithm and GA to solve nonlinear optimization problems. Additionally, Kuo and Han [15] presented a hybrid GA and PSO algorithm for bilevel linear programming to solve a supply chain distribution problem. Furthermore, Shelokar et al. [16] presented a hybrid PSO and ant colony optimization method for solving continuous optimization problems. Finally, Hu et al. [17] developed an immune cooperative PSO algorithm for solving the fault-tolerant routing problem.

In contrast to the above hybrid CI algorithms, this work optimizes the parameter settings of one individual CI method by using another individual CI algorithm. A standard PSO algorithm has certain limitations [17, 18]. For instance, a PSO algorithm includes many parameters that must be set, such as the cognitive parameter, social parameter, and constriction coefficient. In practice, the optimal parameter settings of a PSO algorithm are tuned by trial and error, and prior knowledge is required to successfully manipulate the cognitive parameter, social parameter, and constriction coefficient. The exploration and exploitation capabilities of a PSO algorithm are limited unless its parameters are set appropriately. Moreover, conventional PSO methods suffer from premature convergence, rapidly losing diversity during optimization.

Fortunately, optimizing the parameter settings of a conventional PSO algorithm can be formulated as an unconstrained global optimization (UGO) problem, and the diversity of candidate solutions of the PSO method can be increased using a multi-nonuniform mutation operation [19]. Moreover, the parameter manipulation of GA and AIA methods is easy to implement without prior knowledge. Therefore, to overcome the limitations of a standard PSO algorithm, this work develops two hybrid CI algorithms to solve CGO problems efficiently. The first is a hybrid RGA and PSO (RGA-PSO) algorithm, and the second is a hybrid AIA and PSO (AIA-PSO) algorithm. The proposed RGA-PSO and AIA-PSO algorithms solve two optimization problems simultaneously. The UGO problem (optimization of the cognitive parameter, social parameter, constriction coefficient, penalty parameter, and mutation probability of an internal PSO algorithm based on a penalty function approach) is solved using the external RGA and AIA approaches, respectively. A CGO problem is then solved using the internal PSO algorithm. The performances of the proposed RGA-PSO and AIA-PSO algorithms are evaluated using a set of CGO problems (six benchmark NLP and seven GPP optimization problems).

The rest of this paper is organized as follows. Section 2 describes the RGA, PSO algorithm, AIA, and penalty function approaches. Section 3 then introduces the proposed RGA-PSO and AIA-PSO algorithms. Next, Section 4 compares the experimental results of the proposed RGA-PSO and AIA-PSO algorithms with those of various published individual GAs and AIAs [9, 10, 20–22] and hybrid algorithms [23, 24]. Finally, conclusions are drawn in Section 5.

2. Related Works

2.1. Real-Coded Genetic Algorithm

GAs are stochastic global optimization methods based on the concept of natural selection; they use three genetic operators, namely selection, crossover, and mutation, to explore and exploit the solution space. An RGA outperforms a binary-coded GA in solving continuous function optimization problems [19]. This work thus describes the operators of an RGA [25].

2.1.1. Selection

A selection operation selects strong individuals from the current population based on their fitness function values and then reproduces these individuals into a crossover pool. Selection operations developed to date include the roulette wheel, ranking, and tournament methods [19, 25]. This work uses the normalized geometric ranking method, as follows:

\[
p_j = q' (1 - q)^{r - 1}, \quad j = 1, 2, \ldots, \mathrm{ps}_{\mathrm{RGA}},
\tag{2.1}
\]

\[
q' = \frac{q}{1 - (1 - q)^{\mathrm{ps}_{\mathrm{RGA}}}},
\tag{2.2}
\]

where $p_j$ = probability of selecting individual $j$; $q$ = probability of choosing the best individual (here $q = 0.35$); $r$ = ranking of the individual based on its fitness value, where 1 represents the best, $r = 1, 2, \ldots, \mathrm{ps}_{\mathrm{RGA}}$; and $\mathrm{ps}_{\mathrm{RGA}}$ = population size of the RGA.
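The following Python sketch illustrates how the normalized geometric ranking of (2.1)-(2.2) can be implemented; the function name and the NumPy-based interface are illustrative assumptions, and the fitness values are treated as objective values to be minimized.

```python
import numpy as np

def normalized_geometric_selection(fitness, q=0.35, rng=None):
    """Normalized geometric ranking selection, (2.1)-(2.2).

    fitness : objective values of the current population (smaller is better).
    q       : probability of choosing the best-ranked individual.
    Returns ps_RGA indices selected into the crossover pool.
    """
    rng = np.random.default_rng() if rng is None else rng
    ps = len(fitness)
    ranks = np.argsort(np.argsort(fitness)) + 1      # rank 1 = best individual
    q_prime = q / (1.0 - (1.0 - q) ** ps)            # normalization constant, (2.2)
    p = q_prime * (1.0 - q) ** (ranks - 1)           # selection probabilities, (2.1)
    p /= p.sum()                                     # guard against rounding error
    return rng.choice(ps, size=ps, p=p)
```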

2.1.2. Crossover

The crossover operation explores the solution space by creating new offspring: it randomly selects two parents from the crossover pool and uses them to generate two new offspring. This operation is repeated until $\mathrm{ps}_{\mathrm{RGA}}/2$ pairs of offspring have been produced. The whole arithmetic crossover is easily implemented, as follows:

\[
\mathbf{v}_1' = \beta \times \mathbf{v}_1 + (1 - \beta) \times \mathbf{v}_2, \qquad
\mathbf{v}_2' = (1 - \beta) \times \mathbf{v}_1 + \beta \times \mathbf{v}_2,
\tag{2.3}
\]

where $\mathbf{v}_1$ and $\mathbf{v}_2$ = parents, $\mathbf{v}_1'$ and $\mathbf{v}_2'$ = offspring, and $\beta$ = uniform random number in the interval [0, 1.5].
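A minimal sketch of the whole arithmetic crossover in (2.3), assuming the parents are NumPy vectors; the sampling range [0, 1.5] for β follows the text.

```python
import numpy as np

def whole_arithmetic_crossover(v1, v2, rng=None):
    """Whole arithmetic crossover, (2.3): blend two parent vectors."""
    rng = np.random.default_rng() if rng is None else rng
    beta = rng.uniform(0.0, 1.5)                 # beta ~ U[0, 1.5]
    child1 = beta * v1 + (1.0 - beta) * v2
    child2 = (1.0 - beta) * v1 + beta * v2
    return child1, child2
```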

2.1.3. Mutation

The mutation operation increases the diversity of individuals (candidate solutions). Multi-nonuniform mutation is described as follows:

\[
x_{\mathrm{trial},n} =
\begin{cases}
x_{\mathrm{current},n} + \left(x_n^u - x_{\mathrm{current},n}\right)\,\mathrm{pert}(g_{\mathrm{RGA}}), & \text{if } U_1(0,1) < 0.5, \\
x_{\mathrm{current},n} - \left(x_{\mathrm{current},n} - x_n^l\right)\,\mathrm{pert}(g_{\mathrm{RGA}}), & \text{if } U_1(0,1) \ge 0.5,
\end{cases}
\tag{2.4}
\]

where $\mathrm{pert}(g_{\mathrm{RGA}}) = \left[U_2(0,1)\left(1 - g_{\mathrm{RGA}}/g_{\max,\mathrm{RGA}}\right)\right]^2$ is the perturbation factor; $U_1(0,1)$ and $U_2(0,1)$ = uniform random variables in the interval [0, 1]; $g_{\max,\mathrm{RGA}}$ = maximum generation of the RGA; $g_{\mathrm{RGA}}$ = current generation of the RGA; $x_{\mathrm{current},n}$ = current value of decision variable $x_n$; and $x_{\mathrm{trial},n}$ = trial value of decision variable $x_n$.
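A Python sketch of the multi-nonuniform mutation in (2.4); the vectorized interface and argument names are illustrative assumptions.

```python
import numpy as np

def multi_nonuniform_mutation(x, x_lower, x_upper, g, g_max, rng=None):
    """Multi-nonuniform mutation, (2.4): perturbations shrink as g approaches g_max."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float).copy()
    for n in range(x.size):
        pert = (rng.uniform() * (1.0 - g / g_max)) ** 2   # pert(g), squared shrink factor
        if rng.uniform() < 0.5:
            x[n] += (x_upper[n] - x[n]) * pert            # move toward the upper bound
        else:
            x[n] -= (x[n] - x_lower[n]) * pert            # move toward the lower bound
    return x
```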

2.2. Particle Swarm Optimization

Kennedy and Eberhart [26] first introduced a conventional PSO algorithm, which is inspired by the social behavior of bird flocks or fish schools. Like GAs, a PSO algorithm is a population-based algorithm. A population of candidate solutions is called a particle swarm. The particle velocities can be updated by (2.5), as follows:

\[
v_{j,n}(g_{\mathrm{PSO}}+1) = v_{j,n}(g_{\mathrm{PSO}})
+ c_1 r_{1j}(g_{\mathrm{PSO}})\left[p^{lb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right]
+ c_2 r_{2j}(g_{\mathrm{PSO}})\left[p^{gb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right],
\quad j = 1, 2, \ldots, \mathrm{ps}_{\mathrm{PSO}},\; n = 1, 2, \ldots, N,
\tag{2.5}
\]

where $v_{j,n}(g_{\mathrm{PSO}}+1)$ = velocity of decision variable $x_n$ of particle $j$ at generation $g_{\mathrm{PSO}}+1$; $v_{j,n}(g_{\mathrm{PSO}})$ = velocity of decision variable $x_n$ of particle $j$ at generation $g_{\mathrm{PSO}}$; $c_1$ = cognitive parameter; $c_2$ = social parameter; $x_{j,n}(g_{\mathrm{PSO}})$ = position of decision variable $x_n$ of particle $j$ at generation $g_{\mathrm{PSO}}$; $r_{1j}(g_{\mathrm{PSO}})$ and $r_{2j}(g_{\mathrm{PSO}})$ = independent uniform random numbers in the interval [0, 1] at generation $g_{\mathrm{PSO}}$; $p^{lb}_{j,n}(g_{\mathrm{PSO}})$ = best local (personal) solution at generation $g_{\mathrm{PSO}}$; $p^{gb}_{j,n}(g_{\mathrm{PSO}})$ = best global solution at generation $g_{\mathrm{PSO}}$; and $\mathrm{ps}_{\mathrm{PSO}}$ = population size of the PSO algorithm.

The particle positions can be computed using (2.6), as follows:

\[
x_{j,n}(g_{\mathrm{PSO}}+1) = x_{j,n}(g_{\mathrm{PSO}}) + v_{j,n}(g_{\mathrm{PSO}}+1),
\quad j = 1, 2, \ldots, \mathrm{ps}_{\mathrm{PSO}},\; n = 1, 2, \ldots, N.
\tag{2.6}
\]

Shi and Eberhart [27] developed a modified PSO algorithm by incorporating an inertia weight ($\omega_{\mathrm{in}}$) into (2.7) to control the exploration and exploitation capabilities of a PSO algorithm, as follows:

\[
v_{j,n}(g_{\mathrm{PSO}}+1) = \omega_{\mathrm{in}} v_{j,n}(g_{\mathrm{PSO}})
+ c_1 r_{1j}(g_{\mathrm{PSO}})\left[p^{lb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right]
+ c_2 r_{2j}(g_{\mathrm{PSO}})\left[p^{gb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right],
\quad j = 1, 2, \ldots, \mathrm{ps}_{\mathrm{PSO}},\; n = 1, 2, \ldots, N.
\tag{2.7}
\]

A constriction coefficient ($\chi$) was introduced in (2.8) to balance the exploration and exploitation tradeoff [28–30], as follows:

\[
v_{j,n}(g_{\mathrm{PSO}}+1) = \chi\left[ v_{j,n}(g_{\mathrm{PSO}})
+ \rho_1(g_{\mathrm{PSO}})\left(p^{lb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right)
+ \rho_2(g_{\mathrm{PSO}})\left(p^{gb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right)\right],
\quad j = 1, 2, \ldots, \mathrm{ps}_{\mathrm{PSO}},\; n = 1, 2, \ldots, N,
\tag{2.8}
\]

where

\[
\chi = \frac{2\,U_3(0,1)}{\left|2 - \tau - \sqrt{\tau^2 - 4\tau}\right|}, \quad (\tau \ge 4),
\tag{2.9}
\]

$U_3(0,1)$ = uniform random variable in the interval [0, 1], $\tau = \tau_1 + \tau_2$, $\tau_1 = c_1 r_{1j}$, and $\tau_2 = c_2 r_{2j}$.

This work considers both $\omega_{\mathrm{in}}$ and $\chi$ when updating the particle velocities, as follows:

\[
v_{j,n}(g_{\mathrm{PSO}}+1) = \chi\left[\omega_{\mathrm{in}} v_{j,n}(g_{\mathrm{PSO}})
+ c_1 r_{1j}(g_{\mathrm{PSO}})\left(p^{lb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right)
+ c_2 r_{2j}(g_{\mathrm{PSO}})\left(p^{gb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right)\right],
\quad j = 1, 2, \ldots, \mathrm{ps}_{\mathrm{PSO}},\; n = 1, 2, \ldots, N,
\tag{2.10}
\]

where $\omega_{\mathrm{in}} = (g_{\max,\mathrm{PSO}} - g_{\mathrm{PSO}})/g_{\max,\mathrm{PSO}}$, so an increased $g_{\mathrm{PSO}}$ value reduces $\omega_{\mathrm{in}}$, and $g_{\max,\mathrm{PSO}}$ = maximum generation of the PSO algorithm.

According to (2.10), the optimal values of the parameters $c_1$, $c_2$, and $\chi$ are difficult to obtain through trial and error. This work thus optimizes these parameter settings by using the RGA and AIA approaches.
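The sketch below shows one velocity and position update according to (2.10) and (2.6), using NumPy arrays; the interpretation that χ multiplies the whole bracketed term, and the use of one random number per particle, follow the reconstruction above and are assumptions of this sketch.

```python
import numpy as np

def pso_update(x, v, p_lb, p_gb, c1, c2, chi, g, g_max, rng=None):
    """One PSO velocity/position update, (2.10) and (2.6).

    x, v  : (ps_PSO, N) particle positions and velocities.
    p_lb  : (ps_PSO, N) personal-best positions.
    p_gb  : (N,) swarm-best position.
    """
    rng = np.random.default_rng() if rng is None else rng
    omega = (g_max - g) / g_max                      # linearly decreasing inertia weight
    r1 = rng.uniform(size=(x.shape[0], 1))           # one random number per particle
    r2 = rng.uniform(size=(x.shape[0], 1))
    v_new = chi * (omega * v
                   + c1 * r1 * (p_lb - x)
                   + c2 * r2 * (p_gb - x))           # velocity update, (2.10)
    x_new = x + v_new                                # position update, (2.6)
    return x_new, v_new
```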

2.3. Artificial Immune Algorithm

Wu [10] presented an AIA based on clonal selection and immune network theories to solve CGO problems. The AIA approach comprises selection, hypermutation, receptor editing, and bone marrow operations. The selection operation is performed to reproduce strong antibodies (Abs). Also, diverse Abs are created using hypermutation, receptor editing, and bone marrow operations, as described in the following subsections.

2.3.1. Ab and Ag Representation

In the human immune system, an antigen (Ag) has multiple epitopes (antigenic determinants), which can be recognized by various Abs with paratopes (recognizers), on its surface. In the AIA approach, an Ag represents known parameters of a solved problem. The Abs are the candidate solutions (i.e., decision variables 𝑥𝑛, 𝑛=1,2,,𝑁) of the solved problem. The quality of a candidate solution is evaluated using an Ab-Ag affinity that is derived from the value of an objective function of the solved problem.

2.3.2. Selection Operation

The selection operation, which is based on the immune network principle [31], controls the number of antigen-specific Abs. This operation is defined according to Ab-Ag and Ab-Ab recognition information, as follows:

\[
pr_j = \frac{1}{N}\sum_{n=1}^{N}\frac{1}{e^{\,d_{nj}}}, \qquad
d_{nj} = \left|\frac{x_n^* - x_{nj}}{x_n^*}\right|, \qquad
j = 1, 2, \ldots, \mathrm{rs},\; n = 1, 2, \ldots, N,
\tag{2.11}
\]

where $pr_j$ = probability that $\mathbf{Ab}_j$ recognizes $\mathbf{Ab}^*$ (the best solution); $x_n^*$ = decision variable $x_n$ of the best antibody $\mathbf{Ab}^*$ with the highest Ab-Ag affinity; $x_{nj}$ = decision variable $x_n$ of $\mathbf{Ab}_j$; and $\mathrm{rs}$ = repertoire (population) size of the AIA.

The best antibody $\mathbf{Ab}^*$ is recognized by the other antibodies $\mathbf{Ab}_j$ in the current Ab repertoire. A large $pr_j$ implies that $\mathbf{Ab}_j$ can effectively recognize $\mathbf{Ab}^*$. Each $\mathbf{Ab}_j$ whose $pr_j$ is equal to or larger than the threshold degree $pr_t$ is reproduced to generate an intermediate Ab repertoire.
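A small Python sketch of the recognition probabilities in (2.11), assuming the repertoire is stored as a NumPy matrix and that the components of Ab* are nonzero; reading (2.11) as (1/N) Σ e^{-d_nj} follows the reconstruction above.

```python
import numpy as np

def recognition_probabilities(repertoire, ab_best):
    """Ab-Ab recognition probabilities pr_j, (2.11).

    repertoire : (rs, N) array, one antibody (candidate solution) per row.
    ab_best    : (N,) antibody with the highest Ab-Ag affinity (Ab*).
    """
    d = np.abs((ab_best - repertoire) / ab_best)   # normalized distances d_nj to Ab*
    return np.exp(-d).mean(axis=1)                 # pr_j = (1/N) * sum_n exp(-d_nj)

# Antibodies with pr_j >= pr_t form the intermediate repertoire:
# intermediate = repertoire[recognition_probabilities(repertoire, ab_best) >= pr_t]
```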

2.3.3. Hypermutation Operation

Multi-nonuniform mutation [19] is used as the somatic hypermutation operation, which can be expressed as follows:

\[
x_{\mathrm{trial},n} =
\begin{cases}
x_{\mathrm{current},n} + \left(x_n^u - x_{\mathrm{current},n}\right)\,\mathrm{pert}(g_{\mathrm{AIA}}), & \text{if } U_4(0,1) < 0.5, \\
x_{\mathrm{current},n} - \left(x_{\mathrm{current},n} - x_n^l\right)\,\mathrm{pert}(g_{\mathrm{AIA}}), & \text{if } U_4(0,1) \ge 0.5,
\end{cases}
\tag{2.12}
\]

where $\mathrm{pert}(g_{\mathrm{AIA}}) = \left\{U_5(0,1)\left(1 - g_{\mathrm{AIA}}/g_{\max,\mathrm{AIA}}\right)\right\}^2$ = perturbation factor; $g_{\mathrm{AIA}}$ = current generation of the AIA; $g_{\max,\mathrm{AIA}}$ = maximum generation number of the AIA; and $U_4(0,1)$ and $U_5(0,1)$ = uniform random numbers in the interval [0, 1].

This operation has two tasks, that is, a uniform search and local fine-tuning.

2.3.4. Receptor Editing Operation

A receptor editing operation is developed using the standard Cauchy distribution $C(0,1)$, in which the location parameter is zero and the scale parameter is one. Receptor editing is performed using Cauchy random variables generated from $C(0,1)$, owing to their ability to provide a large jump in the Ab-Ag affinity landscape, which increases the probability of escaping from a local Ab-Ag affinity landscape. Cauchy receptor editing can be defined by

\[
\mathbf{x}_{\mathrm{trial}} = \mathbf{x}_{\mathrm{current}} + \frac{U_5(0,1)}{2}\times\boldsymbol{\sigma},
\tag{2.13}
\]

where $\boldsymbol{\sigma} = [\sigma_1, \sigma_2, \ldots, \sigma_N]^T$ is a vector of Cauchy random variables, and $U_5(0,1)$ = uniform random number in the interval [0, 1].

This operation is employed in local fine-tuning and large perturbation.
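A sketch of the Cauchy receptor editing step in (2.13); the U(0,1)/2 scaling reflects the reconstruction above, and the function name is illustrative.

```python
import numpy as np

def cauchy_receptor_editing(x, rng=None):
    """Receptor editing, (2.13): add a scaled standard-Cauchy step to each gene."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = rng.standard_cauchy(size=np.shape(x))   # heavy-tailed steps allow large jumps
    return np.asarray(x, dtype=float) + 0.5 * rng.uniform() * sigma
```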

2.3.5. Bone Marrow Operation

The paratope of an Ab can be generated by recombining the gene segments $V_H D_H J_H$ and $V_L J_L$ [32]. Based on this metaphor, diverse Abs are synthesized using a bone marrow operation. This operation randomly chooses two Abs from the intermediate Ab repertoire and a recombination point within the gene segments of the paratopes of the selected Abs. The selected gene segments (e.g., gene $x_1$ of Ab 1 and gene $x_1$ of Ab 2) are copied into a library of gene segments, and the selected gene segments in the paratope are then deleted. The new Ab 1 is formed by inserting, at the recombination point, the gene segment corresponding to gene $x_1$ of Ab 2 in the library plus a random variable generated from the standard normal distribution $N(0,1)$. The literature details the implementation of the bone marrow operation [10].

2.4. Penalty Function Methods

Stochastic global optimization approaches, including GAs, AIAs, and PSO algorithms, are inherently unconstrained optimization methods. Penalty function methods, which are constraint-handling approaches, are commonly used to transform a CGO problem into an unconstrained optimization problem while driving the search toward feasible solutions. Two popular classes of penalty functions exist, namely exterior and interior penalty functions. Exterior penalty functions use an infeasible solution as a starting point, and convergence proceeds from the infeasible region to the feasible one. Interior penalty functions start from a feasible solution and then move from the feasible region toward the constraint boundaries. Exterior penalty functions are favored over interior penalty functions because they do not require a feasible starting point and are easily implemented. The exterior penalty functions developed to date include static, dynamic, adaptive, and death penalty functions [33]. This work uses the form of a static penalty function, as follows:

\[
\text{Minimize } f_{\mathrm{pseudo}}(\mathbf{x}, \rho) = f(\mathbf{x})
+ \rho\left\{\sum_{m=1}^{M}\left[\max\left(0, g_m(\mathbf{x})\right)\right]^2
+ \sum_{k=1}^{K}\left[h_k(\mathbf{x})\right]^2\right\},
\tag{2.14}
\]

where $f_{\mathrm{pseudo}}(\mathbf{x}, \rho)$ = pseudo-objective function obtained as the original objective function plus a penalty term, and $\rho$ = penalty parameter.

Unfortunately, the penalty function scheme is limited by the need to fine-tune the penalty parameter $\rho$ [8]. To overcome this limitation, this work attempts to find the optimum $\rho$ for each CGO problem using the RGA and AIA approaches. Additionally, to obtain high-quality RGA-PSO and AIA-PSO solutions in which the violation of each constraint of a specific CGO problem is accurate to at least five decimal places, the parameter $\rho$ is restricted to the search space $[1\times 10^{9}, 1\times 10^{11}]$.
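The static penalty transformation in (2.14) can be sketched in Python as follows; the callable-based interface and the toy constraint at the end are illustrative assumptions rather than part of the original method.

```python
def pseudo_objective(f, g_list, h_list, x, rho):
    """Static exterior penalty, (2.14): objective plus quadratic constraint violations."""
    penalty = sum(max(0.0, g(x)) ** 2 for g in g_list)   # inequality violations, g(x) <= 0
    penalty += sum(h(x) ** 2 for h in h_list)            # equality violations, h(x) = 0
    return f(x) + rho * penalty

# Toy example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1, written as g(x) <= 0.
# f = lambda x: x[0] ** 2 + x[1] ** 2
# g_list = [lambda x: 1.0 - x[0] - x[1]]
# pseudo_objective(f, g_list, [], [0.2, 0.3], 1e9)   # large value: the point is infeasible
```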

3. Method

3.1. RGA-PSO Algorithm

Figure 1 shows the pseudocode of the proposed RGA-PSO algorithm. The external RGA approach is used to find the best parameter settings of the internal PSO algorithm, and the internal PSO algorithm is employed to solve CGO problems.

Figure 1: The pseudocode of the proposed RGA-PSO algorithm.

External RGA

Step 1 (initialize the parameter settings). Parameter settings are given, such as $\mathrm{ps}_{\mathrm{RGA}}$, the crossover probability $p_c$, the mutation probability of the external RGA approach $p_{m,\mathrm{RGA}}$, the lower and upper boundaries of the parameters $c_1$, $c_2$, $\chi$, and $\rho$, and the mutation probability of the internal PSO algorithm $p_{m,\mathrm{PSO}}$. The candidate solutions (individuals) of the external RGA represent the optimized parameters of the internal PSO algorithm. Figure 2 illustrates the candidate solution of the external RGA approach.

Figure 2: Chromosome representation of the external RGA.

Step 2 (compute the fitness function value). The fitness function value $\mathrm{fitness}_j$ of the external RGA approach is the best objective function value $f(\mathbf{x}^*_{\mathrm{PSO}})$ obtained from the best solution $\mathbf{x}^*_{\mathrm{PSO}}$ of each internal PSO algorithm execution, as follows:

\[
\mathrm{fitness}_j = f\left(\mathbf{x}^*_{\mathrm{PSO}}\right), \quad j = 1, 2, \ldots, \mathrm{ps}_{\mathrm{RGA}}.
\tag{3.1}
\]

Candidate solution 𝑗 of the external RGA approach is incorporated into the internal PSO algorithm, and a CGO problem is then solved using the internal PSO algorithm, which is executed as follows.

Internal PSO Algorithm
Step (1) (create an initial particle swarm). An initial particle swarm of size $\mathrm{ps}_{\mathrm{PSO}}$ is created from $[x_n^l, x_n^u]$ of a CGO problem. A particle represents a candidate solution of the CGO problem, as shown in Figure 3.

Step (2) (calculate the objective function value). According to (2.14), the pseudo-objective function value of the internal PSO algorithm is defined by

\[
f_{\mathrm{pseudo},j}(\mathbf{x}) = f\left(\mathbf{x}_{\mathrm{PSO},j}\right)
+ \rho\times\sum_{m=1}^{M}\left[\max\left(0, g_m\left(\mathbf{x}_{\mathrm{PSO},j}\right)\right)\right]^2,
\quad j = 1, 2, \ldots, \mathrm{ps}_{\mathrm{PSO}}.
\tag{3.2}
\]

Step (3) (update the particle velocity and position). The particle position and velocity can be updated using (2.6) and (2.10), respectively.

Step (4) (implement a mutation operation). The standard PSO algorithm lacks the evolution operations of GAs, such as crossover and mutation. To maintain the diversity of particles, this work uses the multi-nonuniform mutation operator defined by (2.4).

Step (5) (perform an elitist strategy). A new particle swarm is created from internal Step (3). The value $f(\mathbf{x}_{\mathrm{PSO},j})$ of each candidate solution $j$ (particle $j$) in the particle swarm is evaluated, and a pairwise comparison is made between the $f(\mathbf{x}_{\mathrm{PSO},j})$ values of candidate solutions in the new and current particle swarms. If candidate solution $j$ ($j = 1, 2, \ldots, \mathrm{ps}_{\mathrm{PSO}}$) in the new particle swarm is superior to candidate solution $j$ in the current particle swarm, the stronger candidate solution $j$ in the new particle swarm replaces candidate solution $j$ in the current particle swarm. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then updated to the particle swarm of the next generation.

Figure 3: Candidate solution of the internal PSO algorithm.

Internal steps (2) to (5) are repeated until the 𝑔max,PSO value of the internal PSO algorithm is satisfied.

End

Step 3 (implement selection operation). The parents in a crossover pool are selected using (2.1).

Step 4 (perform crossover operation). In GAs, the crossover operation performs a global search. Thus, the crossover probability 𝑝c usually exceeds 0.5. Additionally, candidate solutions are created using (2.3).

Step 5 (conduct mutation operation). In GAs, the mutation operation implements a local search. Additionally, a solution space is exploited using (2.4).

Step 6 (implement an elitist strategy). This work updates the population using an elitist strategy: the $\mathrm{fitness}_j$ values of candidate solution $j$ in the new and current populations are compared pairwise, the better of the two candidate solutions $j$ is retained in the population, and the weaker one is discarded. In addition to maintaining strong candidate solutions, this strategy eliminates weak candidate solutions.

External Steps 2 to 6 are repeated until the 𝑔max,RGA value of the external RGA approach is met.
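To make the nesting concrete, the following Python sketch runs one internal PSO evaluation for a parameter vector proposed by the external optimizer, reusing the pseudo_objective and pso_update sketches above; the problem interface (.f, .g_list, .lower, .upper), the swarm size, and the omission of the mutation step and equality constraints are simplifying assumptions of this sketch.

```python
import numpy as np

def internal_pso(params, problem, ps_pso=30, g_max=100, rng=None):
    """Evaluate one external candidate: run the internal PSO and return its best value.

    params : (c1, c2, chi, rho, pm_pso) proposed by one external RGA/AIA individual.
    """
    rng = np.random.default_rng() if rng is None else rng
    c1, c2, chi, rho, pm_pso = params                        # pm_pso unused: mutation omitted here
    lo, hi = np.asarray(problem.lower), np.asarray(problem.upper)
    x = rng.uniform(lo, hi, size=(ps_pso, lo.size))          # internal Step (1)
    v = np.zeros_like(x)
    fit = np.array([pseudo_objective(problem.f, problem.g_list, [], xi, rho) for xi in x])
    p_lb, p_lb_fit = x.copy(), fit.copy()                    # personal bests
    for g in range(g_max):
        p_gb = p_lb[p_lb_fit.argmin()]                       # swarm best
        x, v = pso_update(x, v, p_lb, p_gb, c1, c2, chi, g, g_max, rng)  # internal Step (3)
        x = np.clip(x, lo, hi)
        fit = np.array([pseudo_objective(problem.f, problem.g_list, [], xi, rho) for xi in x])
        better = fit < p_lb_fit                              # internal Step (5): pairwise elitism
        p_lb[better], p_lb_fit[better] = x[better], fit[better]
    return p_lb_fit.min()   # returned to the external loop as fitness_j, (3.1)
```

In the external RGA loop, each individual encodes (c1, c2, chi, rho, pm_PSO); its fitness is the value returned by internal_pso, and the selection, crossover, and mutation operators of Section 2.1 evolve these parameter vectors until g_max,RGA is reached.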

3.2. AIA-PSO Algorithm

Figure 4 shows the pseudocode of the proposed AIA-PSO algorithm, in which the external AIA approach is used to optimize the parameter settings of the internal PSO algorithm and the PSO algorithm is used to solve CGO problems.

Figure 4: The pseudocode of the AIA-PSO algorithm.

External AIA

Step 1 (initialize the parameter settings). Several parameters must be predetermined. These include rs and the threshold for Ab-Ab recognition 𝑝𝑟𝑡, as well as the lower and upper boundaries of these parameters 𝑐1, 𝑐2, 𝜒, 𝜌, and 𝑝𝑚,PSO. Figure 5 shows the Ab and Ag representation.

Figure 5: Ag and Ab representation.

Step 2 (evaluate the Ab-Ag affinity).   
Internal PSO Algorithm
The external AIA approach provides the parameter settings $c_1$, $c_2$, $\chi$, $\rho$, and $p_{m,\mathrm{PSO}}$ to the internal PSO algorithm, which then executes internal Steps (1)–(5). The internal PSO algorithm returns its best objective function value $f(\mathbf{x}^*_{\mathrm{PSO}})$ to the external AIA approach.

Step (1) (create an initial particle swarm). An initial particle swarm of size $\mathrm{ps}_{\mathrm{PSO}}$ is created from $[x_n^l, x_n^u]$ of a CGO problem. A particle represents a candidate solution of the CGO problem.

Step (2) (calculate the objective function value). Equation (3.2) is used as the pseudo-objective function of the internal PSO algorithm.

Step (3) (update the particle velocity and position). Equations (2.6) and (2.10) are used to update the particle position and velocity.

Step (4) (implement a mutation operation). The diversity of the particle swarm is increased using (2.4).

Step (5) (perform an elitist strategy). A new particle swarm (population) is generated from internal Step (3). The value $f(\mathbf{x}_{\mathrm{PSO},j})$ of each candidate solution $j$ (particle $j$) in the particle swarm is evaluated, and a pairwise comparison is made between the $f(\mathbf{x}_{\mathrm{PSO},j})$ values of candidate solutions in the new and current particle swarms. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then updated to the particle swarm of the next generation.

Internal steps (2) to (5) are repeated until the 𝑔max,PSO value of the internal PSO algorithm is satisfied.

End
Consistent with the Ab-Ag affinity metaphor, the Ab-Ag affinity is determined using (3.3), as follows:

\[
\max\ \mathrm{affinity}_j = -1\times f\left(\mathbf{x}^*_{\mathrm{PSO}}\right), \quad j = 1, 2, \ldots, \mathrm{rs}.
\tag{3.3}
\]

Following the evaluation of the Ab-Ag affinities of the Abs in the current Ab repertoire, the Ab with the highest Ab-Ag affinity ($\mathbf{Ab}^*$) is chosen to undergo the clonal selection operation in external Step 3.

Step 3 (perform clonal selection operation). To control the number of antigen-specific Abs, (2.11) is used.

Step 4 (implement Ab-Ag affinity maturation). The intermediate Ab repertoire created in external Step 3 is divided into two subsets: an Ab undergoes the somatic hypermutation operation of (2.12) when a uniform random number is 0.5 or less, and undergoes the receptor editing operation of (2.13) when the random number exceeds 0.5. A sketch of this step is given below.
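A minimal sketch of external Step 4, reusing the multi_nonuniform_mutation and cauchy_receptor_editing sketches above; the 0.5 split follows the text, while the function signature is an assumption.

```python
import numpy as np

def affinity_maturation(intermediate, lower, upper, g, g_max, rng=None):
    """External Step 4: hypermutation (2.12) or receptor editing (2.13), chosen at random."""
    rng = np.random.default_rng() if rng is None else rng
    matured = []
    for ab in intermediate:
        if rng.uniform() <= 0.5:
            matured.append(multi_nonuniform_mutation(ab, lower, upper, g, g_max, rng))
        else:
            matured.append(cauchy_receptor_editing(ab, rng))
    return np.array(matured)
```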

Step 5 (introduce diverse Abs). Based on the bone marrow operation, diverse Abs are created to recruit the Abs suppressed in external Step 3.

Step 6 (update the Ab repertoire). A new Ab repertoire is generated from external Steps 3–5, and the Ab-Ag affinities of its Abs are evaluated. This work uses the following strategy for updating the Ab repertoire: if the Ab-Ag affinity of Ab j in the new Ab repertoire exceeds that in the current Ab repertoire, the strong Ab in the new Ab repertoire replaces the weak Ab in the current Ab repertoire; if the Ab-Ag affinity of Ab j in the new Ab repertoire is equal to or worse than that in the current Ab repertoire, Ab j in the current Ab repertoire survives. In addition to maintaining strong Abs, this strategy eliminates nonfunctional Abs.

External Steps 2–6 are repeated until the termination criterion $g_{\max,\mathrm{AIA}}$ is satisfied.

4. Results

The 13 CGO problems were taken from other studies [1, 20, 21, 23, 34]. The set of CGO problems comprises six benchmark NLP problems (TPs 1–4, 12, and 13) and seven GPP problems (TPs 5–11); among them, TP 5 (alkylation process design in chemical engineering), TP 6 (optimal reactor design), TP 12 (a tension/compression spring design problem), and TP 13 (a pressure vessel design problem) are constrained engineering problems. These problems were used to evaluate the performances of the proposed RGA-PSO and AIA-PSO algorithms. The appendix describes the objective function, constraints, boundary conditions of the decision variables, and known global optimum of TPs 1–11 and further details the problem characteristics of TPs 5, 12, and 13.

The proposed RGA-PSO and AIA-PSO algorithms were coded in MATLAB and executed on a Pentium D 3.0 GHz personal computer. Fifty independent runs were conducted for each test problem (TP). The numerical results summarized include the best, median, mean, and worst objective function values and the standard deviation (S.D.) of the objective function values obtained from the RGA-PSO and AIA-PSO solutions, the mean computational CPU times (MCCTs), and the mean absolute percentage error (MAPE), defined by

\[
\mathrm{MAPE} = \frac{1}{50}\sum_{s=1}^{50}\left|\frac{f(\mathbf{x}^*) - f\left(\mathbf{x}_s^{\mathrm{stochastic}}\right)}{f(\mathbf{x}^*)}\right|\times 100\%,
\tag{4.1}
\]

where $f(\mathbf{x}^*)$ = value of the known global solution, and $f(\mathbf{x}_s^{\mathrm{stochastic}})$ = values obtained from the solutions of the stochastic global optimization approaches (e.g., the RGA-PSO and AIA-PSO algorithms).
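The MAPE in (4.1) can be computed with a few lines of Python; the function name and the example values are illustrative.

```python
import numpy as np

def mape(f_star, f_runs):
    """Mean absolute percentage error over independent runs, (4.1)."""
    f_runs = np.asarray(f_runs, dtype=float)
    return np.mean(np.abs((f_star - f_runs) / f_star)) * 100.0

# e.g. mape(-15.0, [-15.0, -14.98, -15.0]) is roughly 0.044 (%)
```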

Table 1 lists the parameter settings for the RGA-PSO and AIA-PSO algorithms.

Table 1: The parameter settings for the RGA-PSO and AIA-PSO algorithms.
4.1. Comparison of the Results Obtained Using the RGA-PSO and AIA-PSO Algorithms

Table 2 summarizes the numerical results obtained using the proposed RGA-PSO and AIA-PSO algorithms for TPs 1–13. The numerical results indicate that the RGA-PSO and AIA-PSO algorithms can obtain the global minimum solution to TPs 1–11, since each MAPE% is small. Moreover, the best, median, worst, and S.D. of the objective function values obtained using the RGA-PSO and AIA-PSO solutions are identical for TPs 1, 2, 3, 4, 6, 7, 8, 9, and 11. Furthermore, the worst values obtained using the AIA-PSO algorithm for TPs 5 and 13 are smaller than those obtained using the RGA-PSO algorithm. Additionally, a t-test was performed for each TP, indicating that the differences between the mean values obtained using the RGA-PSO and AIA-PSO algorithms are statistically significant for TPs 5, 10, 12, and 13, since the P values are smaller than the significance level of 0.05. Based on the t-test results, the AIA-PSO algorithm yields better mean values than the RGA-PSO algorithm for TPs 5, 12, and 13, and a worse mean value than the RGA-PSO algorithm for TP 10.

Table 2: Numerical results of the proposed RGA-PSO and AIA-PSO algorithms for TPs 1–13.

Tables 3 and 4 list the best solutions obtained using the RGA-PSO and AIA-PSO algorithms for TPs 1–13, respectively, indicating that each constraint is satisfied (i.e., the violation of each constraint is accurate to at least five decimal places) for every TP. Tables 5 and 6 list the best parameter settings of the internal PSO algorithm obtained using the external RGA and AIA approaches, respectively.

Table 3: The best solutions obtained using the RGA-PSO algorithm from TPs 1–13.
Table 4: The best solutions obtained using the AIA-PSO algorithm from TPs 1–13.
Table 5: The best parameter settings of the best solution obtained using the RGA-PSO algorithm from TPs 1–13.
Table 6: The best parameter settings of the best solution obtained using the AIA-PSO algorithm from TPs 1–13.
4.2. Comparison of the Results for the Proposed RGA-PSO and AIA-PSO Algorithms with Those Obtained Using the Published Individual GA and AIA Approaches and Hybrid Algorithms

Table 7 compares the numerical results of the proposed RGA-PSO and AIA-PSO algorithms with those obtained using published individual GA and AIA approaches for TPs 1–4. In this table, GA-1 is a GA with a penalty function method, as used by Michalewicz [20]. GA-2 represents a GA with a penalty function but without any penalty parameter, as used by Deb [21]. GA-3 is an RGA with a static penalty function, as developed by Wu and Chung [9]. AIA-1 is an AIA method called CLONALG, as proposed by Cruz-Cortés et al. [22]. Finally, AIA-2 is an AIA approach based on an adaptive penalty function, as developed by Wu [10]. The numerical results of the GA-1, GA-2, and AIA-1 methods for solving TPs 1–4 were collected from the published literature [20–22]. The GA-1, GA-2, and AIA-1 approaches were executed under 350,000 objective function evaluations. To fairly compare the performances of the proposed hybrid CI algorithms and the individual GA and AIA approaches, the GA-3 and AIA-2 methods and the internal PSO algorithms of the RGA-PSO and AIA-PSO methods were each independently executed 50 times under 350,000 objective function evaluations for solving TPs 1–4.

Table 7: Comparison of the results of the proposed RGA-PSO and AIA-PSO algorithms and those of the published individual GA and AIA approaches for TPs 1–4.

For solving TP 1, the median values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those obtained using the GA-1, GA-3, and AIA-2 approaches, and the worst values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those obtained using the GA-1, GA-2, GA-3, AIA-1, and AIA-2 approaches. For solving TP 2, the median and worst values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those obtained using the GA-3 method. For solving TP 3, the median and worst values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those obtained using the GA-1 and GA-3 approaches. For solving TP 4, the median and worst values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those obtained using the GA-3 method, and the worst values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those obtained using the AIA-1 approach. Moreover, the GA-3 method yields the worst MAPE% for TPs 1 and 4. Table 8 lists the results of the t-test for the GA-3, AIA-2, RGA-PSO, and AIA-PSO methods. The table indicates that the difference between the mean values of the RGA-PSO and AIA-PSO algorithms is not statistically significant, since the P values exceed the significance level of 0.05, whereas the differences between the mean values of GA-3 versus AIA-2, GA-3 versus RGA-PSO, GA-3 versus AIA-PSO, AIA-2 versus RGA-PSO, and AIA-2 versus AIA-PSO are statistically significant. According to Tables 7 and 8, the mean values obtained using the RGA-PSO and AIA-PSO algorithms are better than those obtained using the GA-3 and AIA-1 methods for TPs 1–4.

Table 8: Results of the t-test for TPs 1–4.

Table 9 compares the numerical results obtained using the proposed RGA-PSO and AIA-PSO algorithms with those obtained using the AIA-2 and GA-3 methods for solving TPs 5–13. The AIA-2 and GA-3 methods and the internal PSO algorithms of the RGA-PSO and AIA-PSO approaches were each independently executed 50 times under 300,000 objective function evaluations. Table 9 shows that the MAPE% obtained using the proposed RGA-PSO and AIA-PSO algorithms is close to or smaller than 1% for TPs 5–11, indicating that the proposed RGA-PSO and AIA-PSO algorithms converge to the global optimum for TPs 5–11. Moreover, the worst values obtained using the RGA-PSO and AIA-PSO algorithms are significantly smaller than those obtained using the GA-3 method for TPs 5, 6, 11, and 13. Additionally, the worst values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those obtained using the AIA-2 method for TPs 5, 6, and 13.

Table 9: Comparison of the numerical results of the proposed RGA-PSO and AIA-PSO algorithms and those of the published individual AIA and RGA for TPs 5–13.

Table 10 summarizes the results of the t-test for TPs 5–13. According to Tables 9 and 10, the mean values of the RGA-PSO and AIA-PSO algorithms are smaller than those of the GA-3 approach for TPs 5, 6, 7, 8, 9, 10, 11, and 13. Moreover, the mean values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those of the AIA-2 approach for TPs 6, 7, 8, 10, and 12. Overall, according to Tables 7–10, the performances of the hybrid CI methods are superior to those of the individual GA and AIA methods.

Table 10: Results of the t-test for TPs 5–13.

TPs 12 and 13 have been solved by many hybrid algorithms. For instance, Huang et al. [23] presented a coevolutionary differential evolution (CDE) algorithm that integrates a coevolution mechanism and a DE approach. Zahara and Kao [24] developed a hybrid Nelder-Mead simplex search method and PSO algorithm (NM-PSO). Table 11 compares the numerical results of the CDE, NM-PSO, RGA-PSO, and AIA-PSO methods for solving TPs 12 and 13. The table indicates that the best, mean, and worst values obtained using the NM-PSO method are superior to those obtained using the CDE, RGA-PSO, and AIA-PSO approaches for TP 12. Moreover, for TP 13, the best, mean, and worst values obtained using the AIA-PSO algorithm are better than those of the CDE, NM-PSO, and RGA-PSO algorithms.

Table 11: Comparison of the numerical results of the proposed RGA-PSO and AIA-PSO algorithms and those of the published hybrid algorithms for TPs 12-13.

According to the No Free Lunch theorem [35], if algorithm A outperforms algorithm B on average for one class of problems, then the average performance of the former must be worse than that of the latter over the remaining problems. Therefore, it is unlikely that any unique stochastic global optimization approach exists that performs best for all CGO problems.

4.3. Summary of Results

The proposed RGA-PSO and AIA-PSO algorithms with a penalty function method have the following benefits.

(1) Parameter manipulation of the internal PSO algorithm is tailored to the CGO problem being solved. Owing to their ability to efficiently solve a UGO problem, the external RGA and AIA approaches replace trial and error in manipulating the parameters ($\chi$, $c_1$, $c_2$, $\rho$, and $p_{m,\mathrm{PSO}}$).

(2) Besides obtaining the optimum parameter settings of the internal PSO algorithm, the RGA-PSO and AIA-PSO algorithms yield a global optimum for a CGO problem.

(3) In addition to performing better than some published individual GA and AIA approaches, the proposed RGA-PSO and AIA-PSO algorithms reduce the parametrization effort for the internal PSO algorithm, although the RGA-PSO and AIA-PSO algorithms are more complex than individual GA and AIA approaches.

The proposed RGA-PSO and AIA-PSO algorithms have the following limitations.

(1) The proposed RGA-PSO and AIA-PSO algorithms increase the computational CPU time, as shown in Table 2.

(2) The proposed RGA-PSO and AIA-PSO algorithms are designed to solve CGO problems with continuous decision variables $x_n$. Therefore, the proposed algorithms cannot be directly applied to manufacturing problems such as job shop scheduling and quadratic assignment problems (combinatorial optimization problems).

5. Conclusions

This work presents novel RGA-PSO and AIA-PSO algorithms. The synergistic power of the RGA with PSO algorithm and the AIA with PSO algorithm is also demonstrated by using 13 CGO problems. Numerical results indicate that, in addition to converging to a global minimum for each test CGO problem, the proposed RGA-PSO and AIA-PSO algorithms obtain the optimum parameter settings of the internal PSO algorithm. Moreover, the numerical results obtained using the RGA-PSO and AIA-PSO algorithms are superior to those obtained using alternative stochastic global optimization methods such as individual GA and AIA approaches. The RGA-PSO and AIA-PSO algorithms are highly promising stochastic global optimization approaches for solving CGO problems.

Appendices

A. TP 1 [20, 21]

TP 1 has ten decision variables, eight inequality constraints, and 20 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & x_1^2 + x_2^2 + x_1 x_2 - 14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2 \\
& + 2(x_6 - 1)^2 + 5x_7^2 + 7(x_8 - 11)^2 + 2(x_9 - 10)^2 + (x_{10} - 7)^2 + 45 \\
\text{Subject to } g_1(\mathbf{x}) \equiv {} & -105 + 4x_1 + 5x_2 - 3x_7 + 9x_8 \le 0, \\
g_2(\mathbf{x}) \equiv {} & 10x_1 - 8x_2 - 17x_7 + 2x_8 \le 0, \\
g_3(\mathbf{x}) \equiv {} & -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \le 0, \\
g_4(\mathbf{x}) \equiv {} & 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \le 0, \\
g_5(\mathbf{x}) \equiv {} & 5x_1^2 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 40 \le 0, \\
g_6(\mathbf{x}) \equiv {} & x_1^2 + 2(x_2 - 2)^2 - 2x_1 x_2 + 14x_5 - 6x_6 \le 0, \\
g_7(\mathbf{x}) \equiv {} & 0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \le 0, \\
g_8(\mathbf{x}) \equiv {} & -3x_1 + 6x_2 + 12(x_9 - 8)^2 - 7x_{10} \le 0, \\
& -10 \le x_n \le 10, \quad n = 1, 2, \ldots, 10.
\end{aligned}
\tag{A.1}
\]

The global solution to TP 1 is as follows:

\[
\mathbf{x}^* = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927),
\quad f(\mathbf{x}^*) = 24.306.
\tag{A.2}
\]

B. TP 2 [21]

TP 2 involves five decision variables, six inequality constraints, and ten boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 5.3578547x_3^2 + 0.8356891x_1 x_5 + 37.293239x_1 - 40792.141 \\
\text{Subject to } g_1(\mathbf{x}) \equiv {} & -85.334407 - 0.0056858x_2 x_5 - 0.0006262x_1 x_4 + 0.0022053x_3 x_5 \le 0, \\
g_2(\mathbf{x}) \equiv {} & -6.665593 + 0.0056858x_2 x_5 + 0.0006262x_1 x_4 - 0.0022053x_3 x_5 \le 0, \\
g_3(\mathbf{x}) \equiv {} & 9.48751 - 0.0071371x_2 x_5 - 0.0029955x_1 x_2 - 0.0021813x_3^2 \le 0, \\
g_4(\mathbf{x}) \equiv {} & -29.48751 + 0.0071371x_2 x_5 + 0.0029955x_1 x_2 + 0.0021813x_3^2 \le 0, \\
g_5(\mathbf{x}) \equiv {} & 10.669039 - 0.0047026x_3 x_5 - 0.0012547x_1 x_3 - 0.0019085x_3 x_4 \le 0, \\
g_6(\mathbf{x}) \equiv {} & -15.699039 + 0.0047026x_3 x_5 + 0.0012547x_1 x_3 + 0.0019085x_3 x_4 \le 0, \\
& 78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_n \le 45, \; n = 3, 4, 5.
\end{aligned}
\tag{B.1}
\]

The global solution to TP 2 is

\[
\mathbf{x}^* = (78.0, 33.0, 29.995256, 45.0, 36.775812), \quad f(\mathbf{x}^*) = -30665.539.
\tag{B.2}
\]

C. TP 3 [20, 21]

TP 3 has seven decision variables, four inequality constraints, and 14 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 - 4x_6 x_7 - 10x_6 - 8x_7 \\
\text{Subject to } g_1(\mathbf{x}) \equiv {} & -127 + 2x_1^2 + 3x_2^4 + x_3 + 4x_4^2 + 5x_5 \le 0, \\
g_2(\mathbf{x}) \equiv {} & -282 + 7x_1 + 3x_2 + 10x_3^2 + x_4 - x_5 \le 0, \\
g_3(\mathbf{x}) \equiv {} & -196 + 23x_1 + x_2^2 + 6x_6^2 - 8x_7 \le 0, \\
g_4(\mathbf{x}) \equiv {} & 4x_1^2 + x_2^2 - 3x_1 x_2 + 2x_3^2 + 5x_6 - 11x_7 \le 0, \\
& -10 \le x_n \le 10, \quad n = 1, 2, \ldots, 7.
\end{aligned}
\tag{C.1}
\]

The global solution to TP 3 is

\[
\mathbf{x}^* = (2.330499, 1.951372, -0.4775414, 4.365726, -0.6244870, 1.038131, 1.594227),
\quad f(\mathbf{x}^*) = 680.630.
\tag{C.2}
\]

D. TP 4 [20, 21]

TP 4 involves 13 decision variables, nine inequality constraints, and 26 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 5\sum_{n=1}^{4} x_n - 5\sum_{n=1}^{4} x_n^2 - \sum_{n=5}^{13} x_n \\
\text{Subject to } g_1(\mathbf{x}) \equiv {} & 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0, \\
g_2(\mathbf{x}) \equiv {} & 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0, \\
g_3(\mathbf{x}) \equiv {} & 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0, \\
g_4(\mathbf{x}) \equiv {} & -8x_1 + x_{10} \le 0, \\
g_5(\mathbf{x}) \equiv {} & -8x_2 + x_{11} \le 0, \\
g_6(\mathbf{x}) \equiv {} & -8x_3 + x_{12} \le 0, \\
g_7(\mathbf{x}) \equiv {} & -2x_4 - x_5 + x_{10} \le 0, \\
g_8(\mathbf{x}) \equiv {} & -2x_6 - x_7 + x_{11} \le 0, \\
g_9(\mathbf{x}) \equiv {} & -2x_8 - x_9 + x_{12} \le 0, \\
& 0 \le x_n \le 1, \; n = 1, 2, \ldots, 9, \quad 0 \le x_n \le 100, \; n = 10, 11, 12, \quad 0 \le x_{13} \le 1.
\end{aligned}
\tag{D.1}
\]

The global solution to TP 4 is

\[
\mathbf{x}^* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1), \quad f(\mathbf{x}^*) = -15.
\tag{D.2}
\]

E. TP 5 (Alkylation Process Design Problem in Chemical Engineering) [1]

TP 5 has seven decision variables subject to 12 nonconvex inequality constraints, two linear constraints, and 14 boundary conditions. The objective is to improve the octane number of some olefin feed by reacting it with isobutane in the presence of acid. The decision variables $x_n$ are the olefin feed rate (barrels/day) $x_1$, acid addition rate (thousands of pounds/day) $x_2$, alkylate yield (barrels/day) $x_3$, acid strength $x_4$, motor octane number $x_5$, external isobutane-to-olefin ratio $x_6$, and F-4 performance number $x_7$:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & \omega_1 x_1 + \omega_2 x_1 x_6 + \omega_3 x_3 + \omega_4 x_2 + \omega_5 - \omega_6 x_3 x_5 \\
\text{s.t. } g_1(\mathbf{x}) = {} & \omega_7 x_6^2 + \omega_8 x_1^{-1} x_3 - \omega_9 x_6 \le 1, \\
g_2(\mathbf{x}) = {} & \omega_{10} x_1 x_3^{-1} + \omega_{11} x_1 x_3^{-1} x_6 - \omega_{12} x_1 x_3^{-1} x_6^2 \le 1, \\
g_3(\mathbf{x}) = {} & \omega_{13} x_6^2 + \omega_{14} x_5 - \omega_{15} x_4 - \omega_{16} x_6 \le 1, \\
g_4(\mathbf{x}) = {} & \omega_{17} x_5^{-1} + \omega_{18} x_5^{-1} x_6 + \omega_{19} x_4 x_5^{-1} - \omega_{20} x_5^{-1} x_6^2 \le 1, \\
g_5(\mathbf{x}) = {} & \omega_{21} x_7 + \omega_{22} x_2 x_3^{-1} x_4^{-1} - \omega_{23} x_2 x_3^{-1} \le 1, \\
g_6(\mathbf{x}) = {} & \omega_{24} x_7^{-1} + \omega_{25} x_2 x_3^{-1} x_7^{-1} - \omega_{26} x_2 x_3^{-1} x_4^{-1} x_7^{-1} \le 1, \\
g_7(\mathbf{x}) = {} & \omega_{27} x_5^{-1} + \omega_{28} x_5^{-1} x_7 \le 1, \\
g_8(\mathbf{x}) = {} & \omega_{29} x_5 - \omega_{30} x_7 \le 1, \\
g_9(\mathbf{x}) = {} & \omega_{31} x_3 - \omega_{32} x_1 \le 1, \\
g_{10}(\mathbf{x}) = {} & \omega_{33} x_1 x_3^{-1} + \omega_{34} x_3^{-1} \le 1, \\
g_{11}(\mathbf{x}) = {} & \omega_{35} x_2 x_3^{-1} x_4^{-1} - \omega_{36} x_2 x_3^{-1} \le 1, \\
g_{12}(\mathbf{x}) = {} & \omega_{37} x_4 + \omega_{38} x_2^{-1} x_3 x_4 \le 1, \\
g_{13}(\mathbf{x}) = {} & \omega_{39} x_1 x_6 + \omega_{40} x_1 - \omega_{41} x_3 \le 1, \\
g_{14}(\mathbf{x}) = {} & \omega_{42} x_1^{-1} x_3 + \omega_{43} x_1^{-1} - \omega_{44} x_6 \le 1, \\
& 1500 \le x_1 \le 2000, \quad 1 \le x_2 \le 120, \quad 3000 \le x_3 \le 3500, \quad 85 \le x_4 \le 93, \\
& 90 \le x_5 \le 95, \quad 3 \le x_6 \le 12, \quad 145 \le x_7 \le 162,
\end{aligned}
\tag{E.1}
\]

where $\omega_l$ ($l = 1, 2, \ldots, 44$) denotes the positive parameters given in Table 12. The global solution to TP 5 is

\[
\mathbf{x}^* = (1698.18, 53.66, 3031.30, 90.11, 10.50, 153.53), \quad f(\mathbf{x}^*) = 1227.1978.
\tag{E.2}
\]

Table 12: Coefficients for TP 5.

F. TP 6 (Optimal Reactor Design Problem) [1]

TP 6 contains eight decision variables subject to four nonconvex inequality constraints and 16 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 0.4 x_1^{0.67} x_7^{-0.67} + 0.4 x_2^{0.67} x_8^{-0.67} + 10 - x_1 - x_2 \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.0588 x_5 x_7 + 0.1 x_1 \le 1, \\
g_2(\mathbf{x}) = {} & 0.0588 x_6 x_8 + 0.1 x_1 + 0.1 x_2 \le 1, \\
g_3(\mathbf{x}) = {} & 4 x_3 x_5^{-1} + 2 x_3^{-0.71} x_5^{-1} + 0.0588 x_3^{-1.3} x_7 \le 1, \\
g_4(\mathbf{x}) = {} & 4 x_4 x_6^{-1} + 2 x_4^{-0.71} x_6^{-1} + 0.0588 x_4^{-1.3} x_8 \le 1, \\
& 0.1 \le x_n \le 10, \quad n = 1, 2, \ldots, 8.
\end{aligned}
\tag{F.1}
\]

The global solution to TP 6 is

\[
\mathbf{x}^* = (6.4747, 2.2340, 0.6671, 0.5957, 5.9310, 5.5271, 1.0108, 0.4004), \quad f(\mathbf{x}^*) = 3.9511.
\tag{F.2}
\]

G. TP 7 [1]

TP 7 has four decision variables, two nonconvex inequality constraints, and eight boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & -x_1 + 0.4 x_1^{0.67} x_3^{-0.67} \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.05882 x_3 x_4 + 0.1 x_1 \le 1, \\
g_2(\mathbf{x}) = {} & 4 x_2 x_4^{-1} + 2 x_2^{-0.71} x_4^{-1} + 0.05882 x_2^{-1.3} x_3 \le 1, \\
& 0.1 \le x_1, x_2, x_3, x_4 \le 10.
\end{aligned}
\tag{G.1}
\]

The global solution to TP 7 is

\[
\mathbf{x}^* = (8.1267, 0.6154, 0.5650, 5.6368), \quad f(\mathbf{x}^*) = -5.7398.
\tag{G.2}
\]

H. TP 8 [1]

TP 8 contains three decision variables subject to one nonconvex inequality constraint and six boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 0.5 x_1 x_2^{-1} - x_1 - 5 x_2^{-1} \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.01 x_2 x_3^{-1} + 0.01 x_1 + 0.0005 x_1 x_3 \le 1, \\
& 1 \le x_n \le 100, \quad n = 1, 2, 3.
\end{aligned}
\tag{H.1}
\]

The global solution to TP 8 is

\[
\mathbf{x}^* = (88.2890, 7.7737, 1.3120), \quad f(\mathbf{x}^*) = -83.2540.
\tag{H.2}
\]

I. TP 9 [1]

TP 9 contains eight decision variables subject to four nonconvex inequality constraints and 16 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & -x_1 - x_5 + 0.4 x_1^{0.67} x_3^{-0.67} + 0.4 x_5^{0.67} x_7^{-0.67} \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.05882 x_3 x_4 + 0.1 x_1 \le 1, \\
g_2(\mathbf{x}) = {} & 0.05882 x_7 x_8 + 0.1 x_1 + 0.1 x_5 \le 1, \\
g_3(\mathbf{x}) = {} & 4 x_2 x_4^{-1} + 2 x_2^{-0.71} x_4^{-1} + 0.05882 x_2^{-1.3} x_3 \le 1, \\
g_4(\mathbf{x}) = {} & 4 x_6 x_8^{-1} + 2 x_6^{-0.71} x_8^{-1} + 0.05882 x_6^{-1.3} x_7 \le 1, \\
& 0.01 \le x_n \le 10, \quad n = 1, 2, \ldots, 8.
\end{aligned}
\tag{I.1}
\]

The global solution to TP 9 is

\[
\mathbf{x}^* = (6.4225, 0.6686, 1.0239, 5.9399, 2.2673, 0.5960, 0.4029, 5.5288), \quad f(\mathbf{x}^*) = -6.0482.
\tag{I.2}
\]

J. TP 10 [34]

TP 10 contains three decision variables subject to one nonconvex inequality constraint and six boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 5 x_1 + 50000 x_1^{-1} + 20 x_2 + 72000 x_2^{-1} + 10 x_3 + 144000 x_3^{-1} \\
\text{s.t. } g_1(\mathbf{x}) = {} & 4 x_1^{-1} + 32 x_2^{-1} + 120 x_3^{-1} \le 1, \\
& 1 \le x_n \le 1000, \quad n = 1, 2, 3.
\end{aligned}
\tag{J.1}
\]

The global solution to TP 10 is

\[
\mathbf{x}^* = (107.4, 84.9, 204.5), \quad f(\mathbf{x}^*) = 6300.
\tag{J.2}
\]

K. TP 11 [1, 34]

TP 11 involves five decision variables, six inequality constraints, and ten boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } g_0(\mathbf{x}) = {} & 5.3578 x_3^2 + 0.8357 x_1 x_5 + 37.2392 x_1 \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.00002584 x_3 x_5 - 0.00006663 x_2 x_5 - 0.0000734 x_1 x_4 \le 1, \\
g_2(\mathbf{x}) = {} & 0.000853007 x_2 x_5 + 0.00009395 x_1 x_4 - 0.00033085 x_3 x_5 \le 1, \\
g_4(\mathbf{x}) = {} & 0.00024186 x_2 x_5 + 0.00010159 x_1 x_2 + 0.00007379 x_3^2 \le 1, \\
g_3(\mathbf{x}) = {} & 1330.3294 x_2^{-1} x_5^{-1} - 0.42 x_1 x_5^{-1} - 0.30586 x_2^{-1} x_3^2 x_5^{-1} \le 1, \\
g_5(\mathbf{x}) = {} & 2275.1327 x_3^{-1} x_5^{-1} - 0.2668 x_1 x_5^{-1} - 0.40584 x_4 x_5^{-1} \le 1, \\
g_6(\mathbf{x}) = {} & 0.00029955 x_3 x_5 + 0.00007992 x_1 x_3 + 0.00012157 x_3 x_4 \le 1, \\
& 78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_3 \le 45, \quad 27 \le x_4 \le 45, \quad 27 \le x_5 \le 45.
\end{aligned}
\tag{K.1}
\]

The global solution to TP 11 is

\[
\mathbf{x}^* = (78.0, 33.0, 29.998, 45.0, 36.7673), \quad f(\mathbf{x}^*) = 10122.6964.
\tag{K.2}
\]

L. TP 12 (a Tension/Compression String Design Problem) [23]

TP 12 involves three decision variables, four inequality constraints, and six boundary conditions. This problem is taken from Huang et al. [23]. It attempts to minimize the weight $f(\mathbf{x})$ of a tension/compression spring subject to constraints on minimum deflection, shear stress, and surge frequency. The design variables are the mean coil diameter $x_2$, the wire diameter $x_1$, and the number of active coils $x_3$:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & (x_3 + 2)\, x_2 x_1^2 \\
\text{s.t. } g_1(\mathbf{x}) = {} & 1 - \frac{x_2^3 x_3}{71785\, x_1^4} \le 0, \\
g_2(\mathbf{x}) = {} & \frac{4x_2^2 - x_1 x_2}{12566\left(x_2 x_1^3 - x_1^4\right)} + \frac{1}{5108\, x_1^2} - 1 \le 0, \\
g_3(\mathbf{x}) = {} & 1 - \frac{140.45\, x_1}{x_2^2 x_3} \le 0, \\
g_4(\mathbf{x}) = {} & \frac{x_1 + x_2}{1.5} - 1 \le 0, \\
& 0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15.
\end{aligned}
\tag{L.1}
\]

M. TP 13 (Pressure Vessel Design Problem) [23]

TP 13 involves four decision variables, four inequality constraints, and eight boundary conditions. This problem attempts to minimize the total cost $f(\mathbf{x})$, including the cost of the material, forming, and welding. A cylindrical vessel is capped at both ends by hemispherical heads. Four design variables exist: the thickness of the shell $x_1$, the thickness of the head $x_2$, the inner radius $x_3$, and the length of the cylindrical section of the vessel, excluding the head, $x_4$:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3 \\
\text{s.t. } g_1(\mathbf{x}) = {} & -x_1 + 0.0193 x_3 \le 0, \\
g_2(\mathbf{x}) = {} & -x_2 + 0.00954 x_3 \le 0, \\
g_3(\mathbf{x}) = {} & -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0, \\
g_4(\mathbf{x}) = {} & x_4 - 240 \le 0, \\
& 0 \le x_1 \le 100, \quad 0 \le x_2 \le 100, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200.
\end{aligned}
\tag{M.1}
\]

Acknowledgment

The author would like to thank the National Science Council of the Republic of China, Taiwan, for financially supporting this research under Contract no. NSC 100-2622-E-262-006-CC3.

References

  1. C. A. Floudas, P. M. Pardalos, C. S. Adjiman, and W. R. Esposito, Handbook of Test Problems in Local and Global Optimization, Kluwer, Boston, Mass, USA, 1999.
  2. L. Kit-Nam Francis, “A generalized geometric-programming solution to economic production quantity model with flexibility and reliability considerations,” European Journal of Operational Research, vol. 176, no. 1, pp. 240–251, 2007.
  3. J. F. Tsai, “Treating free variables in generalized geometric programming problems,” Computers & Chemical Engineering, vol. 33, no. 1, pp. 239–243, 2009.
  4. P. Xu, “A hybrid global optimization method: the multi-dimensional case,” Journal of Computational and Applied Mathematics, vol. 155, no. 2, pp. 423–446, 2003.
  5. C. A. Floudas, Deterministic Global Optimization, Kluwer Academic, Boston, Mass, USA, 1999.
  6. C. I. Sun, J. C. Zeng, J. S. Pan, et al., “An improved vector particle swarm optimization for constrained optimization problems,” Information Sciences, vol. 181, no. 6, pp. 1153–1163, 2011.
  7. I. G. Tsoulos, “Solving constrained optimization problems using a novel genetic algorithm,” Applied Mathematics and Computation, vol. 208, no. 1, pp. 273–283, 2009.
  8. K. Deep and Dipti, “A self-organizing migrating genetic algorithm for constrained optimization,” Applied Mathematics and Computation, vol. 198, no. 1, pp. 237–250, 2008.
  9. J. Y. Wu and Y. K. Chung, “Real-coded genetic algorithm for solving generalized polynomial programming problems,” Journal of Advanced Computational Intelligence and Intelligent Informatics, vol. 11, no. 4, pp. 358–364, 2007.
  10. J. Y. Wu, “Solving constrained global optimization via artificial immune system,” International Journal on Artificial Intelligence Tools, vol. 20, no. 1, pp. 1–27, 2011.
  11. L. A. Zadeh, “Fuzzy logic, neural networks, and soft computing,” Communications of the ACM, vol. 37, no. 3, pp. 77–84, 1994.
  12. A. Konar, Computational Intelligence-Principles, Techniques and Applications, Springer, New York, NY, USA, 2005.
  13. H. Poorzahedy and O. M. Rouhani, “Hybrid meta-heuristic algorithms for solving network design problem,” European Journal of Operational Research, vol. 182, no. 2, pp. 578–596, 2007.
  14. W. F. Abd-El-Wahed, A. A. Mousa, and M. A. El-Shorbagy, “Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems,” Journal of Computational and Applied Mathematics, vol. 235, no. 5, pp. 1446–1453, 2011.
  15. R. J. Kuo and Y. S. Han, “A hybrid of genetic algorithm and particle swarm optimization for solving bi-level linear programming problem—a case study on supply chain model,” Applied Mathematical Modelling, vol. 35, no. 8, pp. 3905–3917, 2011.
  16. P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, “Particle swarm and ant colony algorithms hybridized for improved continuous optimization,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
  17. Y. Hu, Y. Ding, and K. Hao, “An immune cooperative particle swarm optimization algorithm for fault-tolerant routing optimization in heterogeneous wireless sensor networks,” Mathematical Problems in Engineering, vol. 2012, Article ID 743728, 19 pages, 2012.
  18. X. Zhao, “A perturbed particle swarm algorithm for numerical optimization,” Applied Soft Computing, vol. 10, no. 1, pp. 119–124, 2010.
  19. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, New York, NY, USA, 1994.
  20. Z. Michalewicz, “Genetic algorithm, numerical optimization, and constraints,” in Proceedings of the 6th International Conference on Genetic Algorithms, pp. 151–158, San Mateo, Calif, USA, 1995.
  21. K. Deb, “An efficient constraint handling method for genetic algorithms,” Computer Methods in Applied Mechanics and Engineering, vol. 186, no. 2–4, pp. 311–338, 2000.
  22. N. Cruz-Cortés, D. Trejo-Pérez, and C. A. Coello Coello, “Handling constraints in global optimization using an artificial immune system,” in Proceedings of the 4th International Conference on Artificial Immune Systems, pp. 234–247, Banff, Canada, 2005.
  23. F.-Z. Huang, L. Wang, and Q. He, “An effective co-evolutionary differential evolution for constrained optimization,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 340–356, 2007.
  24. E. Zahara and Y. T. Kao, “Hybrid Nelder-Mead simplex search and particle swarm optimization for constrained engineering design problems,” Expert Systems with Applications, vol. 36, no. 2, part 2, pp. 3880–3886, 2009.
  25. C. R. Houck, J. A. Joines, and M. G. Kay, “A genetic algorithm for function optimization: a matlab implementation,” in NSCU-IE TR 95-09, North Carolina State University, Raleigh, NC, USA, 1995.
  26. J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Perth, Australia, 1995.
  27. Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69–73, Anchorage, Alaska, USA, 1998.
  28. M. Clerc, “The swarm and the queen: towards a deterministic and adaptive particle swarm optimization,” in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1951–1957, Washington, DC, USA, 1999.
  29. M. Clerc and J. Kennedy, “The particle swarm-explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 1, pp. 58–73, 2002.
  30. A. P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley & Sons, 2005.
  31. N. K. Jerne, “Idiotypic networks and other preconceived ideas,” Immunological Reviews, vol. 79, pp. 5–24, 1984.
  32. L. N. de Castro and F. J. Von Zuben, “Artificial Immune Systems: Part I—Basic Theory and Applications,” FEEC/Univ. Campinas, Campinas, Brazil, 1999, ftp://ftp.dca.fee.unicamp.br/pub/docs/vonzuben/tr_dca/trdca0199.pdf.
  33. C. A. Coello Coello, “Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art,” Computer Methods in Applied Mechanics and Engineering, vol. 191, no. 11-12, pp. 1245–1287, 2002.
  34. M. J. Rijckaert and X. M. Martens, “Comparison of generalized geometric programming algorithms,” Journal of Optimization Theory and Applications, vol. 26, no. 2, pp. 205–242, 1978.
  35. D. H. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67–82, 1997.