Abstract

This work presents a hybrid real-coded genetic algorithm with particle swarm optimization (RGA-PSO) algorithm and a hybrid artificial immune algorithm with PSO (AIA-PSO) algorithm for solving 13 constrained global optimization (CGO) problems, comprising six nonlinear programming and seven generalized polynomial programming optimization problems. External RGA and AIA approaches are used to optimize the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability of an internal PSO algorithm, and the CGO problems are then solved using the internal PSO algorithm. The performances of the proposed RGA-PSO and AIA-PSO algorithms are evaluated using the 13 CGO problems, and the numerical results are compared with those obtained using published individual GA and AIA approaches. Experimental results indicate that the proposed RGA-PSO and AIA-PSO algorithms converge to a global optimum solution of a CGO problem, that the optimum parameter settings of the internal PSO algorithm can be obtained using the external RGA and AIA approaches, and that the proposed algorithms outperform several published individual GA and AIA approaches. Therefore, the proposed RGA-PSO and AIA-PSO algorithms are highly promising stochastic global optimization methods for solving CGO problems.

1. Introduction

Many scientific, engineering, and management problems can be expressed as constrained global optimization (CGO) problems, as follows:

\[
\begin{aligned}
\text{Minimize}\quad & f(\mathbf{x}) \\
\text{s.t.}\quad & g_m(\mathbf{x}) \le 0, \quad m = 1,2,\dots,M, \\
& h_k(\mathbf{x}) = 0, \quad k = 1,2,\dots,K, \\
& x_n^l \le x_n \le x_n^u, \quad n = 1,2,\dots,N,
\end{aligned}
\tag{1.1}
\]

where f(x) denotes an objective function; g_m(x) represents a set of M nonlinear inequality constraints; h_k(x) refers to a set of K nonlinear equality constraints; x represents a vector of real-valued decision variables, each decision variable x_n being constrained by its lower and upper boundaries [x_n^l, x_n^u]; and N is the total number of decision variables. For instance, generalized polynomial programming (GPP) is a class of nonlinear programming (NLP) problems in which a nonconvex objective function is minimized subject to nonconvex inequality constraints over a possibly disjoint feasible region. The GPP approach has been successfully used to solve problems including alkylation process design, heat exchanger design, optimal reactor design [1], inventory decision problems (economic production quantity) [2], process synthesis and the design of separations, phase equilibrium, nonisothermal complex reactor networks, and molecular conformation [3].
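To make the generic form (1.1) concrete, the following is a minimal sketch of how a CGO problem can be encoded for the algorithms discussed below, using TP 8 from Appendix H (a GPP problem with one inequality constraint). The GPP-style constraint g_1(x) ≤ 1 is shifted to g_1(x) − 1 ≤ 0 to match (1.1); the Python encoding itself is illustrative and not part of the original paper.

```python
import numpy as np

# TP 8 (Appendix H) written in the generic CGO form (1.1):
# minimize f(x) subject to g_m(x) <= 0 and bound constraints.

def f(x):
    x1, x2, x3 = x
    return 0.5 * x1 / x2 - x1 - 5.0 / x2

def g1(x):
    # GPP constraint g1(x) <= 1, shifted so that g1(x) - 1 <= 0
    x1, x2, x3 = x
    return 0.01 * x2 / x3 + 0.01 * x1 + 0.0005 * x1 * x3 - 1.0

lower = np.array([1.0, 1.0, 1.0])        # x_n^l, n = 1, 2, 3
upper = np.array([100.0, 100.0, 100.0])  # x_n^u, n = 1, 2, 3

x_star = np.array([88.2890, 7.7737, 1.3120])  # known global solution (H.2)
print(f(x_star))   # about -83.254; g1(x_star) is about 0 (constraint active)
```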

Traditional local NLP optimization approaches based on gradient algorithms are inefficient for solving CGO problems when the objective function is nondifferentiable. Global optimization methods can be divided into deterministic and stochastic methods [4]. Often involving a sophisticated optimization process, deterministic global optimization methods typically make assumptions regarding the problem to be solved [5]. Stochastic global optimization methods, which require neither gradient information nor numerous assumptions, have therefore received considerable attention. For instance, Sun et al. [6] devised an improved vector particle swarm optimization (PSO) algorithm with a constraint-preserving method to solve CGO problems. Furthermore, Tsoulos [7] developed a real-coded genetic algorithm (RGA) with a penalty function approach for solving CGO problems. Additionally, Deep and Dipti [8] presented a self-organizing GA with a tournament selection method for solving CGO problems. Meanwhile, Wu and Chung [9] developed an RGA with a static penalty function approach for solving GPP optimization problems. Finally, Wu [10] introduced an artificial immune algorithm (AIA) with an adaptive penalty function method to solve CGO problems.

Zadeh [11] defined "soft computing" as the synergistic power of two or more fused computational intelligence (CI) schemes, which can be divided into several branches: granular computing (e.g., fuzzy sets, rough sets, and probabilistic reasoning), neurocomputing (e.g., supervised, unsupervised, and reinforcement neural learning algorithms), evolutionary computing (e.g., GAs, genetic programming, and PSO algorithms), and artificial life (e.g., artificial immune systems) [12]. Besides outperforming individual algorithms on certain problems, hybrid algorithms can solve general problems more efficiently [13]. Therefore, hybrid CI approaches have recently attracted considerable attention as a promising field of research. Various hybrid evolutionary computing (GA and PSO methods) and artificial life (such as AIA methods) approaches have been developed for solving optimization problems. These hybrid algorithms focus on developing diverse candidate solutions (such as chromosomes and particles) of a population/swarm to solve optimization problems more efficiently: two different algorithms create diverse candidate solutions using their specific operations, and these solutions are then merged to increase the diversity of the candidate population. For instance, Abd-El-Wahed et al. [14] developed an integrated PSO algorithm and GA to solve nonlinear optimization problems. Additionally, Kuo and Han [15] presented a hybrid GA and PSO algorithm for bilevel linear programming to solve a supply chain distribution problem. Furthermore, Shelokar et al. [16] presented a hybrid PSO method and ant colony optimization method for solving continuous optimization problems. Finally, Hu et al. [17] developed an immune cooperative PSO algorithm for solving the fault-tolerant routing problem.

Compared with the above hybrid CI algorithms, this work optimizes the parameter settings of one individual CI method by using another individual CI algorithm. A standard PSO algorithm has certain limitations [17, 18]. For instance, a PSO algorithm includes many parameters that must be set, such as the cognitive parameter, social parameter, and constriction coefficient. In practice, the optimal parameter settings of a PSO algorithm are tuned by trial and error, and prior knowledge is required to successfully manipulate these parameters. The exploration and exploitation capabilities of a PSO algorithm depend strongly on whether its parameters are set well. Moreover, conventional PSO methods suffer from premature convergence, rapidly losing diversity during optimization.

Fortunately, the optimization of the parameter settings of a conventional PSO algorithm can be considered an unconstrained global optimization (UGO) problem, and the diversity of the candidate solutions of a PSO method can be increased using a multi-nonuniform mutation operation [19]. Moreover, the parameter manipulation of GA and AIA methods is easy to implement without prior knowledge. Therefore, to overcome the limitations of a standard PSO algorithm, this work develops two hybrid CI algorithms to solve CGO problems efficiently: a hybrid RGA and PSO (RGA-PSO) algorithm and a hybrid AIA and PSO (AIA-PSO) algorithm. The proposed RGA-PSO and AIA-PSO algorithms solve two optimization problems simultaneously. The UGO problem (optimization of the cognitive parameter, social parameter, constriction coefficient, penalty parameter, and mutation probability of an internal PSO algorithm based on a penalty function approach) is optimized using the external RGA and AIA approaches, respectively, and a CGO problem is then solved using the internal PSO algorithm. The performances of the proposed RGA-PSO and AIA-PSO algorithms are evaluated using a set of CGO problems (six benchmark NLP and seven GPP optimization problems).

The rest of this paper is organized as follows. Section 2 describes the RGA, PSO algorithm, AIA, and penalty function approaches. Section 3 then introduces the proposed RGA-PSO and AIA-PSO algorithms. Next, Section 4 compares the experimental results of the proposed RGA-PSO and AIA-PSO algorithms with those of various published individual GAs and AIAs [9, 10, 20–22] and hybrid algorithms [23, 24]. Finally, conclusions are drawn in Section 5.

2. Computational Intelligence and Penalty Function Approaches

2.1. Real-Coded Genetic Algorithm

GAs are stochastic global optimization methods based on the concepts of natural selection; they use three genetic operators, namely selection, crossover, and mutation, to explore and exploit the solution space. An RGA outperforms a binary-coded GA in solving continuous function optimization problems [19]. This work thus uses the operators of an RGA [25], described below.

2.1.1. Selection

A selection operation selects strong individuals from the current population based on their fitness function values and reproduces these individuals into a crossover pool. Selection operations developed to date include the roulette wheel, ranking, and tournament methods [19, 25]. This work uses the normalized geometric ranking method, as follows:

\[
p_j = q'(1-q)^{r-1}, \quad j = 1,2,\dots,\mathrm{ps}_{\mathrm{RGA}},
\tag{2.1}
\]

\[
q' = \frac{q}{1-(1-q)^{\mathrm{ps}_{\mathrm{RGA}}}},
\tag{2.2}
\]

where p_j = probability of selecting individual j, q = probability of choosing the best individual (here q = 0.35), r = rank of the individual based on its fitness value (r = 1 is the best, r = 1, 2, ..., ps_RGA), and ps_RGA = population size of the RGA.
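As a sketch, (2.1) and (2.2) can be implemented as follows; the function names and NumPy-based population layout are assumptions for illustration, and rank 1 is taken as the smallest fitness value because the external RGA minimizes (3.1).

```python
import numpy as np

def geometric_ranking_probs(fitness, q=0.35):
    """Normalized geometric ranking, (2.1)-(2.2); rank r = 1 is the best
    (smallest fitness, since the external RGA minimizes). The
    probabilities sum to 1 by construction of q'."""
    ps = len(fitness)
    q_prime = q / (1.0 - (1.0 - q) ** ps)              # (2.2)
    ranks = np.empty(ps, dtype=int)
    ranks[np.argsort(fitness)] = np.arange(1, ps + 1)  # rank by fitness
    return q_prime * (1.0 - q) ** (ranks - 1)          # (2.1)

def reproduce_into_pool(population, fitness, rng=np.random.default_rng()):
    """Sample ps_RGA individuals (with replacement) into the crossover pool."""
    p = geometric_ranking_probs(fitness)
    idx = rng.choice(len(population), size=len(population), p=p)
    return population[idx]
```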

2.1.2. Crossover

While exploring the solution space by creating new offspring, the crossover operation randomly selects two parents from the crossover pool and uses them to generate two new offspring; this is repeated until ps_RGA/2 pairs have been processed. The whole arithmetic crossover is easily implemented, as follows:

\[
\mathbf{v}'_1 = \beta \mathbf{v}_1 + (1-\beta)\mathbf{v}_2, \qquad
\mathbf{v}'_2 = (1-\beta)\mathbf{v}_1 + \beta \mathbf{v}_2,
\tag{2.3}
\]

where v_1 and v_2 = parents, v'_1 and v'_2 = offspring, and β = uniform random number in the interval [0, 1.5].
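A minimal sketch of (2.3) follows. With β drawn from [0, 1.5] as stated above, offspring can lie outside the segment joining the parents, so clipping to the variable bounds afterward (an assumption here, not stated in the text) may be needed.

```python
import numpy as np

def whole_arithmetic_crossover(v1, v2, rng=np.random.default_rng()):
    """Whole arithmetic crossover (2.3) on two parent vectors."""
    beta = rng.uniform(0.0, 1.5)          # beta ~ U[0, 1.5]
    off1 = beta * v1 + (1.0 - beta) * v2
    off2 = (1.0 - beta) * v1 + beta * v2
    return off1, off2
```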

2.1.3. Mutation

Mutation can increase the diversity of individuals (candidate solutions). Multi-nonuniform mutation is defined as follows:

\[
x_{\mathrm{trial},n} =
\begin{cases}
x_{\mathrm{current},n} + \left(x_n^u - x_{\mathrm{current},n}\right)\mathrm{pert}(g_{\mathrm{RGA}}) & \text{if } U_1(0,1) < 0.5,\\[2pt]
x_{\mathrm{current},n} - \left(x_{\mathrm{current},n} - x_n^l\right)\mathrm{pert}(g_{\mathrm{RGA}}) & \text{if } U_1(0,1) \ge 0.5,
\end{cases}
\tag{2.4}
\]

where pert(g_RGA) = [U_2(0,1)(1 − g_RGA/g_max,RGA)]^2 is the perturbation factor; U_1(0,1) and U_2(0,1) = uniform random variables in the interval [0,1]; g_max,RGA = maximum generation of the RGA; g_RGA = current generation of the RGA; x_current,n = current decision variable x_n; and x_trial,n = trial candidate solution x_n.
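A sketch of (2.4) applied to a whole vector follows; applying the perturbation independently to every coordinate is an interpretation of the "multi" in multi-nonuniform mutation, and the function name is an assumption.

```python
import numpy as np

def multi_nonuniform_mutation(x, lower, upper, g, g_max,
                              rng=np.random.default_rng()):
    """Multi-nonuniform mutation (2.4). pert(g) = [U(0,1)(1 - g/g_max)]^2
    shrinks toward zero as g -> g_max, shifting the operator from a
    uniform search to local fine-tuning."""
    x_trial = x.astype(float).copy()
    for n in range(len(x_trial)):
        pert = (rng.uniform() * (1.0 - g / g_max)) ** 2
        if rng.uniform() < 0.5:
            x_trial[n] += (upper[n] - x_trial[n]) * pert
        else:
            x_trial[n] -= (x_trial[n] - lower[n]) * pert
    return x_trial
```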

2.2. Particle Swarm Optimization

Kennedy and Eberhart [26] first introduced a conventional PSO algorithm, which is inspired by the social behavior of bird flocks and fish schools. Like GAs, a PSO algorithm is population based; a population of candidate solutions is called a particle swarm. The particle velocities are updated by (2.5), as follows:

\[
v_{j,n}(g_{\mathrm{PSO}}+1) = v_{j,n}(g_{\mathrm{PSO}})
+ c_1 r_{1j}(g_{\mathrm{PSO}})\left[p^{lb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right]
+ c_2 r_{2j}(g_{\mathrm{PSO}})\left[p^{gb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right],
\quad j = 1,2,\dots,\mathrm{ps}_{\mathrm{PSO}},\; n = 1,2,\dots,N,
\tag{2.5}
\]

where v_{j,n}(g_PSO + 1) = velocity of decision variable x_n of particle j at generation g_PSO + 1; v_{j,n}(g_PSO) = velocity of decision variable x_n of particle j at generation g_PSO; c_1 = cognitive parameter; c_2 = social parameter; x_{j,n}(g_PSO) = position of decision variable x_n of particle j at generation g_PSO; r_{1j}(g_PSO) and r_{2j}(g_PSO) = independent uniform random numbers in the interval [0,1] at generation g_PSO; p^{lb}_{j,n}(g_PSO) = best local solution at generation g_PSO; p^{gb}_{j,n}(g_PSO) = best global solution at generation g_PSO; and ps_PSO = population size of the PSO algorithm.

The particle positions are computed using (2.6), as follows:

\[
x_{j,n}(g_{\mathrm{PSO}}+1) = x_{j,n}(g_{\mathrm{PSO}}) + v_{j,n}(g_{\mathrm{PSO}}+1),
\quad j = 1,2,\dots,\mathrm{ps}_{\mathrm{PSO}},\; n = 1,2,\dots,N.
\tag{2.6}
\]

Shi and Eberhart [27] developed a modified PSO algorithm by incorporating an inertia weight ω_in into the velocity update to control the exploration and exploitation capabilities of the algorithm, as in (2.7):

\[
v_{j,n}(g_{\mathrm{PSO}}+1) = \omega_{\mathrm{in}} v_{j,n}(g_{\mathrm{PSO}})
+ c_1 r_{1j}(g_{\mathrm{PSO}})\left[p^{lb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right]
+ c_2 r_{2j}(g_{\mathrm{PSO}})\left[p^{gb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right],
\quad j = 1,2,\dots,\mathrm{ps}_{\mathrm{PSO}},\; n = 1,2,\dots,N.
\tag{2.7}
\]

A constriction coefficient χ was introduced into the velocity update to balance the exploration-exploitation tradeoff [28–30], as in (2.8):

\[
v_{j,n}(g_{\mathrm{PSO}}+1) = \chi\left\{ v_{j,n}(g_{\mathrm{PSO}})
+ \rho_1(g_{\mathrm{PSO}})\left[p^{lb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right]
+ \rho_2(g_{\mathrm{PSO}})\left[p^{gb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right]\right\},
\quad j = 1,2,\dots,\mathrm{ps}_{\mathrm{PSO}},\; n = 1,2,\dots,N,
\tag{2.8}
\]

where

\[
\chi = \frac{2\,U_3(0,1)}{\left|2 - \tau - \sqrt{\tau(\tau-4)}\right|},
\tag{2.9}
\]

U_3(0,1) = uniform random variable in the interval [0,1], τ = τ_1 + τ_2, τ_1 = c_1 r_{1j}, and τ_2 = c_2 r_{2j}.

This work considers both parameters ω_in and χ to update the particle velocities, as follows:

\[
v_{j,n}(g_{\mathrm{PSO}}+1) = \chi\left\{\omega_{\mathrm{in}} v_{j,n}(g_{\mathrm{PSO}})
+ c_1 r_{1j}(g_{\mathrm{PSO}})\left[p^{lb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right]
+ c_2 r_{2j}(g_{\mathrm{PSO}})\left[p^{gb}_{j,n}(g_{\mathrm{PSO}}) - x_{j,n}(g_{\mathrm{PSO}})\right]\right\},
\quad j = 1,2,\dots,\mathrm{ps}_{\mathrm{PSO}},\; n = 1,2,\dots,N,
\tag{2.10}
\]

where ω_in = (g_max,PSO − g_PSO)/g_max,PSO, so that ω_in decreases as g_PSO increases, and g_max,PSO = maximum generation of the PSO algorithm.
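A vectorized sketch of the updates (2.10) and (2.6) for a whole swarm follows. Here c_1, c_2, and χ are taken as given, since in the proposed algorithms they are supplied by the external RGA or AIA, and the array shapes and function name are illustrative assumptions.

```python
import numpy as np

def update_swarm(x, v, p_lb, p_gb, c1, c2, chi, g, g_max,
                 rng=np.random.default_rng()):
    """One sweep of (2.10) and (2.6). x, v, p_lb: (ps_PSO, N) arrays;
    p_gb: (N,) best global position; w_in decreases linearly with g."""
    ps_pso = x.shape[0]
    w_in = (g_max - g) / g_max              # inertia weight
    r1 = rng.uniform(size=(ps_pso, 1))      # r_1j, one per particle
    r2 = rng.uniform(size=(ps_pso, 1))      # r_2j, one per particle
    v_new = chi * (w_in * v
                   + c1 * r1 * (p_lb - x)   # cognitive term
                   + c2 * r2 * (p_gb - x))  # social term
    return x + v_new, v_new                 # position update (2.6)
```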

According to (2.10), optimal values of the parameters c_1, c_2, and χ are difficult to obtain through trial and error. This work therefore optimizes these parameter settings using the RGA and AIA approaches.

2.3. Artificial Immune Algorithm

Wu [10] presented an AIA based on clonal selection and immune network theories to solve CGO problems. The AIA approach comprises selection, hypermutation, receptor editing, and bone marrow operations. The selection operation is performed to reproduce strong antibodies (Abs). Also, diverse Abs are created using hypermutation, receptor editing, and bone marrow operations, as described in the following subsections.

2.3.1. Ab and Ag Representation

In the human immune system, an antigen (Ag) has multiple epitopes (antigenic determinants) on its surface, which can be recognized by various Abs through their paratopes (recognizers). In the AIA approach, an Ag represents the known parameters of the problem to be solved, and the Abs are candidate solutions (i.e., decision variables x_n, n = 1,2,...,N) of the solved problem. The quality of a candidate solution is evaluated using an Ab-Ag affinity that is derived from the value of the objective function of the solved problem.

2.3.2. Selection Operation

The selection operation, which is based on the immune network principle [31], controls the number of antigen-specific Abs. This operation is defined according to Ab-Ag and Ab-Ab recognition information, as follows:

\[
pr_j = \frac{1}{N}\sum_{n=1}^{N} \frac{1}{e^{d_{nj}}}, \qquad
d_{nj} = \left|\frac{x^*_n - x_{nj}}{x^*_n}\right|,
\quad j = 1,2,\dots,\mathrm{rs},\; n = 1,2,\dots,N,
\tag{2.11}
\]

where pr_j = probability that Ab_j recognizes Ab* (the best solution), x*_n = decision variable x_n of the best antibody Ab* with the highest Ab-Ag affinity, x_{nj} = decision variable x_n of Ab_j, and rs = repertoire (population) size of the AIA.

The π€π›βˆ— is recognized by other Ab  𝑗 in a current Ab repertoire. Large π‘π‘Ÿπ‘— implies that Ab  𝑗 can effectively recognize π€π›βˆ—. The Ab  𝑗 with π‘π‘Ÿπ‘— that is equivalent to or larger than the threshold degree π‘π‘Ÿπ‘‘ is reproduced to generate an intermediate Ab repertoire.

2.3.3. Hypermutation Operation

Multi-nonuniform mutation [19] is used as the somatic hypermutation operation, which can be expressed as follows:

\[
x_{\mathrm{trial},n} =
\begin{cases}
x_{\mathrm{current},n} + \left(x_n^u - x_{\mathrm{current},n}\right)\mathrm{pert}(g_{\mathrm{AIA}}) & \text{if } U_4(0,1) < 0.5,\\[2pt]
x_{\mathrm{current},n} - \left(x_{\mathrm{current},n} - x_n^l\right)\mathrm{pert}(g_{\mathrm{AIA}}) & \text{if } U_4(0,1) \ge 0.5,
\end{cases}
\tag{2.12}
\]

where pert(g_AIA) = [U_5(0,1)(1 − g_AIA/g_max,AIA)]^2 = perturbation factor, g_AIA = current generation of the AIA, g_max,AIA = maximum generation number of the AIA, and U_4(0,1) and U_5(0,1) = uniform random numbers in the interval [0,1].

This operation performs two tasks: uniform search and local fine-tuning.

2.3.4. Receptor Editing Operation

A receptor editing operation is developed using the standard Cauchy distribution C(0,1), whose location parameter is zero and scale parameter is one. Receptor editing is performed using Cauchy random variables generated from C(0,1), since their heavy tails can provide large jumps in the Ab-Ag affinity landscape and thereby increase the probability of escaping from local optima of that landscape. Cauchy receptor editing is defined by

\[
\mathbf{x}_{\mathrm{trial}} = \mathbf{x}_{\mathrm{current}} + U_5(0,1)^2 \times \boldsymbol{\sigma},
\tag{2.13}
\]

where σ = [σ_1, σ_2, ..., σ_N]^T is a vector of Cauchy random variables and U_5(0,1) = uniform random number in the interval [0,1].

This operation is employed for both local fine-tuning and large perturbations.
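A sketch of (2.13) follows; NumPy's standard_cauchy draws from C(0,1) as required, and the function name is an assumption.

```python
import numpy as np

def receptor_editing(x_current, rng=np.random.default_rng()):
    """Cauchy receptor editing (2.13): the heavy tails of C(0,1) give
    occasional large jumps in the Ab-Ag affinity landscape, while the
    squared uniform factor keeps most edits small (local fine-tuning)."""
    sigma = rng.standard_cauchy(size=x_current.shape)  # C(0,1) variates
    return x_current + rng.uniform() ** 2 * sigma
```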

2.3.5. Bone Marrow Operation

The paratope of an Ab can be generated by recombining gene segments V_H D_H J_H and V_L J_L [32]. Based on this metaphor, diverse Abs are synthesized using a bone marrow operation. This operation randomly chooses two Abs from the intermediate Ab repertoire and a recombination point within the gene segments of the paratopes of the selected Abs. The selected gene segments (e.g., gene x_1 of Ab 1 and gene x_1 of Ab 2) are reproduced to create a library of gene segments and then deleted from the paratopes. A new Ab 1 is formed by inserting, at the recombination point, the gene segment of Ab 2 from the library plus a random variable generated from the standard normal distribution N(0,1). The implementation of the bone marrow operation is detailed in the literature [10].
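The following is a rough sketch of the recombination just described, simplified from [10]; the bookkeeping of the gene-segment library is condensed here, so this should be read as an illustration rather than the paper's exact operation.

```python
import numpy as np

def bone_marrow(intermediate_repertoire, rng=np.random.default_rng()):
    """Pick two antibodies and a recombination point, move the selected
    gene segments into a library, and reinsert each antibody's segment
    into the other antibody perturbed by an N(0,1) variate."""
    rs = len(intermediate_repertoire)
    i, j = rng.choice(rs, size=2, replace=False)
    ab1 = intermediate_repertoire[i].copy()
    ab2 = intermediate_repertoire[j].copy()
    point = rng.integers(ab1.shape[0])               # recombination point
    library = (ab1[point], ab2[point])               # library of gene segments
    ab1[point] = library[1] + rng.standard_normal()  # gene of Ab 2 + N(0,1)
    ab2[point] = library[0] + rng.standard_normal()  # gene of Ab 1 + N(0,1)
    return ab1, ab2
```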

2.4. Penalty Function Methods

Stochastic global optimization approaches, including GAs, AIAs, and PSO algorithms, are naturally unconstrained optimization methods. Penalty function methods are constraint-handling approaches commonly used to create feasible solutions to a CGO problem by transforming it into an unconstrained optimization problem. Two popular classes of penalty functions exist, namely, exterior and interior penalty functions. Exterior penalty functions start from an infeasible solution and converge from the infeasible region toward the feasible one, whereas interior penalty functions start from a feasible solution and move from within the feasible region toward the constraint boundaries. Exterior penalty functions are favored over interior penalty functions because they do not require a feasible starting point and are easily implemented. The exterior penalty functions developed to date include static, dynamic, adaptive, and death penalty functions [33]. This work uses the form of a static penalty function, as follows:

\[
\text{Minimize } f_{\mathrm{pseudo}}(\mathbf{x}, \rho) = f(\mathbf{x})
+ \rho\left\{\sum_{m=1}^{M}\left[\max\left(0, g_m(\mathbf{x})\right)\right]^2
+ \sum_{k=1}^{K}\left[h_k(\mathbf{x})\right]^2\right\},
\tag{2.14}
\]

where f_pseudo(x, ρ) = pseudo-objective function obtained from the original objective function plus a penalty term, and ρ = penalty parameter.

Unfortunately, the penalty function scheme is limited by the need to fine-tune the penalty parameter ρ [8]. To overcome this limitation, this work finds the optimum ρ for each CGO problem using the RGA and AIA approaches. Additionally, to obtain high-quality RGA-PSO and AIA-PSO solutions in which the violation of each constraint is accurate to at least five decimal places for a specific CGO problem, the parameter ρ is searched within [1 × 10^9, 1 × 10^11].
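A sketch of (2.14), which is reused later as (3.2), follows; the callable-based interface is an assumption for illustration.

```python
def pseudo_objective(f, g_list, h_list, x, rho):
    """Static penalty function (2.14): the original objective plus rho
    times the summed squared constraint violations."""
    violation = sum(max(0.0, g(x)) ** 2 for g in g_list)   # inequalities
    violation += sum(h(x) ** 2 for h in h_list)            # equalities
    return f(x) + rho * violation
```

With the TP 8 functions sketched in Section 1, for example, pseudo_objective(f, [g1], [], x, 1e9) penalizes any violation of g_1(x) − 1 ≤ 0.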

3. Method

3.1. RGA-PSO Algorithm

Figure 1 shows the pseudocode of the proposed RGA-PSO algorithm. The external RGA approach is used to optimize the best parameter settings of the internal PSO algorithm, and the internal PSO algorithm is employed to solve CGO problems.

External RGA

Step 1 (initialize the parameter settings). Parameter settings are given, including ps_RGA, the crossover probability p_c, the mutation probability p_m,RGA of the external RGA approach, and the lower and upper boundaries of the internal PSO parameters c_1, c_2, χ, ρ, and p_m,PSO (the mutation probability of the internal PSO algorithm). The candidate solutions (individuals) of the external RGA represent the optimized parameters of the internal PSO algorithm. Figure 2 illustrates a candidate solution of the external RGA approach.

Step 2 (compute the fitness function value). The fitness function value fitness_j of the external RGA approach is the best objective function value f(x*_PSO) obtained from the best solution x*_PSO of each internal PSO algorithm execution, as follows:

\[
\mathrm{fitness}_j = f(\mathbf{x}^*_{\mathrm{PSO}}), \quad j = 1,2,\dots,\mathrm{ps}_{\mathrm{RGA}}.
\tag{3.1}
\]

Candidate solution 𝑗 of the external RGA approach is incorporated into the internal PSO algorithm, and a CGO problem is then solved using the internal PSO algorithm, which is executed as follows.

Internal PSO Algorithm
Step (1) (create an initial particle swarm). An initial particle swarm is created based on ps_PSO from [x_n^l, x_n^u] of a CGO problem. A particle represents a candidate solution of a CGO problem, as shown in Figure 3.

Step (2) (calculate the objective function value). According to (2.14), the pseudo-objective function value of the internal PSO algorithm is defined by

\[
f_{\mathrm{pseudo},j} = f(\mathbf{x}_{\mathrm{PSO},j})
+ \rho \times \sum_{m=1}^{M}\left[\max\left(0, g_m(\mathbf{x}_{\mathrm{PSO},j})\right)\right]^2,
\quad j = 1,2,\dots,\mathrm{ps}_{\mathrm{PSO}}.
\tag{3.2}
\]

Step (3) (update the particle velocity and position). The particle position and velocity are updated using (2.6) and (2.10), respectively.

Step (4) (implement a mutation operation). The standard PSO algorithm lacks the evolution operations of GAs, such as crossover and mutation. To maintain the diversity of particles, this work uses the multi-nonuniform mutation operator defined by (2.4).

Step (5) (perform an elitist strategy). A new particle swarm is created from internal steps (3) and (4). The f(x_PSO,j) value of each candidate solution j (particle j) is evaluated, and a pairwise comparison is made between the candidate solutions in the new and current particle swarms: when candidate solution j (j = 1,2,...,ps_PSO) in the new particle swarm is superior to candidate solution j in the current particle swarm, it replaces the latter. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then taken as the particle swarm of the next generation.

Internal steps (2) to (5) are repeated until the 𝑔max,PSO value of the internal PSO algorithm is satisfied.

End
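For concreteness, internal steps (1)–(5) above can be condensed into a single routine, as in the following sketch; the swarm size, generation count, bound clipping, and per-particle mutation trigger are illustrative assumptions rather than the paper's settings (Table 1).

```python
import numpy as np

def internal_pso(f, g_list, lower, upper, c1, c2, chi, rho, pm_pso,
                 ps_pso=40, g_max=200, seed=0):
    """Internal PSO sketch: penalized evaluation per (3.2), velocity and
    position updates (2.10)/(2.6), multi-nonuniform mutation (2.4), and
    pairwise elitist replacement."""
    rng = np.random.default_rng(seed)
    n = len(lower)

    def f_pseudo(x):                                   # (3.2), inequalities only
        return f(x) + rho * sum(max(0.0, g(x)) ** 2 for g in g_list)

    x = rng.uniform(lower, upper, size=(ps_pso, n))    # Step (1)
    v = np.zeros((ps_pso, n))
    cost = np.array([f_pseudo(p) for p in x])          # Step (2)
    p_lb, lb_cost = x.copy(), cost.copy()              # local bests
    gb = x[np.argmin(cost)].copy()                     # global best
    for g in range(g_max):
        w_in = (g_max - g) / g_max
        r1, r2 = rng.uniform(size=(2, ps_pso, 1))
        v = chi * (w_in * v + c1 * r1 * (p_lb - x) + c2 * r2 * (gb - x))
        x_new = np.clip(x + v, lower, upper)           # Step (3)
        for j in range(ps_pso):                        # Step (4): mutation
            if rng.uniform() < pm_pso:
                pert = (rng.uniform() * (1.0 - g / g_max)) ** 2
                up = rng.uniform(size=n) < 0.5
                x_new[j] = np.where(up,
                                    x_new[j] + (upper - x_new[j]) * pert,
                                    x_new[j] - (x_new[j] - lower) * pert)
        new_cost = np.array([f_pseudo(p) for p in x_new])
        better = new_cost < cost                       # Step (5): elitism
        x[better], cost[better] = x_new[better], new_cost[better]
        improved = cost < lb_cost
        p_lb[improved], lb_cost[improved] = x[improved], cost[improved]
        gb = x[np.argmin(cost)].copy()
    return gb, f_pseudo(gb)
```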

Step 3 (implement selection operation). The parents in a crossover pool are selected using (2.1).

Step 4 (perform crossover operation). In GAs, the crossover operation performs a global search. Thus, the crossover probability 𝑝c usually exceeds 0.5. Additionally, candidate solutions are created using (2.3).

Step 5 (conduct mutation operation). In GAs, the mutation operation implements a local search. Additionally, a solution space is exploited using (2.4).

Step 6 (implement an elitist strategy). This work updates the population using an elitist strategy. When the fitness_j of candidate solution j in the new population is better than that in the current population, the weaker candidate solution j in the current population is replaced; when the fitness_j of candidate solution j in the new population is equal to or worse than that in the current population, the candidate solution j in the current population survives. In addition to maintaining strong candidate solutions, this strategy eliminates weak candidate solutions.

External Steps 2 to 6 are repeated until the 𝑔max,RGA value of the external RGA approach is met.

3.2. AIA-PSO Algorithm

Figure 4 shows the pseudocode of the proposed AIA-PSO algorithm, in which the external AIA approach is used to optimize the parameter settings of the internal PSO algorithm and the PSO algorithm is used to solve CGO problems.

External AIA

Step 1 (initialize the parameter settings). Several parameters must be predetermined, including rs, the threshold pr_t for Ab-Ab recognition, and the lower and upper boundaries of the parameters c_1, c_2, χ, ρ, and p_m,PSO. Figure 5 shows the Ab and Ag representation.

Step 2 (evaluate the Ab-Ag affinity).   
Internal PSO Algorithm
The external AIA approach provides the parameter settings c_1, c_2, χ, ρ, and p_m,PSO to the internal PSO algorithm, which then carries out internal steps (1)–(5). The internal PSO algorithm returns its best objective function value f(x*_PSO) to the external AIA approach.

Step (1) (create an initial particle swarm). An initial particle swarm is created based on ps_PSO from [x_n^l, x_n^u] of a CGO problem. A particle represents a candidate solution of a CGO problem.

Step (2) (calculate the objective function value). Equation (3.2) is used as the pseudo-objective function value of the internal PSO algorithm.

Step (3) (update the particle velocity and position). Equations (2.6) and (2.10) are used to update the particle position and velocity, respectively.

Step (4) (implement a mutation operation). The diversity of the particle swarm is increased using (2.4).

Step (5) (perform an elitist strategy). A new particle swarm (population) is generated from internal steps (3) and (4). The f(x_PSO,j) value of each candidate solution j (particle j) is evaluated, and a pairwise comparison is made between the candidate solutions in the new and current particle swarms. The elitist strategy guarantees that the best candidate solution is always preserved in the next generation. The current particle swarm is then taken as the particle swarm of the next generation.

Internal steps (2) to (5) are repeated until the 𝑔max,PSO value of the internal PSO algorithm is satisfied.

End
Consistent with the Ab-Ag affinity metaphor, the Ab-Ag affinity is determined using (3.3), as follows:

\[
\text{Maximize affinity}_j = -1 \times f(\mathbf{x}^*_{\mathrm{PSO}}), \quad j = 1,2,\dots,\mathrm{rs}.
\tag{3.3}
\]

Following the evaluation of the Ab-Ag affinities of the Abs in the current Ab repertoire, the Ab with the highest Ab-Ag affinity (Ab*) is chosen to undergo the clonal selection operation in external Step 3.

Step 3 (perform clonal selection operation). To control the number of antigen-specific Abs, (2.11) is used.

Step 4 (implement Ab-Ag affinity maturation). The intermediate Ab repertoire created in external Step 3 is divided into two subsets: Abs undergo the somatic hypermutation operation (2.12) when a uniform random number is 0.5 or less and the receptor editing operation (2.13) when the random number exceeds 0.5.

Step 5 (introduce diverse Abs). Based on the bone marrow operation, diverse Abs are created to recruit the Abs suppressed in external Step 3.

Step 6 (update the Ab repertoire). A new Ab repertoire is generated from external Steps 3–5, and the Ab-Ag affinities of its Abs are evaluated. This work updates the Ab repertoire as follows: when the Ab-Ag affinity of Ab_j in the new Ab repertoire exceeds that in the current Ab repertoire, the strong Ab in the new repertoire replaces the weak Ab in the current repertoire; when the Ab-Ag affinity of Ab_j in the new repertoire is equal to or worse than that in the current repertoire, the Ab_j in the current repertoire survives. In addition to maintaining strong Abs, this strategy eliminates nonfunctional Abs.

External Steps 2–6 are repeated until the termination criterion 𝑔max,AIA is satisfied.
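To show how the external loop drives the internal one, the following sketch evaluates one antibody per external Step 2 and (3.3), reusing the internal_pso sketch given after the RGA-PSO pseudocode. All names are assumptions, and the search ranges other than the one for ρ (stated in Section 2.4) are assumed for illustration.

```python
import numpy as np

# Assumed search ranges for the optimized parameters; the paper states
# only that rho is searched within [1e9, 1e11].
PARAM_BOUNDS = np.array([[0.0, 4.0],     # c1     (assumed range)
                         [0.0, 4.0],     # c2     (assumed range)
                         [0.0, 1.0],     # chi    (assumed range)
                         [1e9, 1e11],    # rho    (from Section 2.4)
                         [0.0, 1.0]])    # pm_PSO (assumed range)

def ab_ag_affinity(ab, f, g_list, lower, upper):
    """External Step 2 with (3.3): run the internal PSO under the
    antibody's parameter settings and return -f(x*_PSO) as the affinity."""
    c1, c2, chi, rho, pm_pso = ab
    x_best, _ = internal_pso(f, g_list, lower, upper,
                             c1, c2, chi, rho, pm_pso)
    return -1.0 * f(x_best)   # affinity_j = -f(x*_PSO), per (3.3)
```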

4. Results

The 13 CGO problems were taken from other studies [1, 20, 21, 23, 34]. The set comprises six benchmark NLP problems (TPs 1–4, 12, and 13) and seven GPP problems (TPs 5–11); TP 5 (alkylation process design in chemical engineering), TP 6 (optimal reactor design), TP 12 (a tension/compression spring design problem), and TP 13 (a pressure vessel design problem) are constrained engineering problems. These problems were used to evaluate the performances of the proposed RGA-PSO and AIA-PSO algorithms. The appendix describes the objective function, constraints, boundary conditions of the decision variables, and known global optimum for TPs 1–11 and further details the problem characteristics of TPs 5, 12, and 13.

The proposed RGA-PSO and AIA-PSO algorithms were coded in MATLAB and executed on a Pentium D 3.0 GHz personal computer. Fifty independent runs were conducted for each test problem (TP). The numerical results summarized include the best, median, mean, and worst objective function values obtained from the RGA-PSO and AIA-PSO solutions, their standard deviation (S.D.), the mean computational CPU times (MCCTs), and the mean absolute percentage error (MAPE), defined by

\[
\mathrm{MAPE} = \frac{\sum_{s=1}^{50}\left|\left(f(\mathbf{x}^*) - f(\mathbf{x}^s_{\mathrm{stochastic}})\right)/f(\mathbf{x}^*)\right|}{50} \times 100\%,
\quad s = 1,2,\dots,50,
\tag{4.1}
\]

where f(x*) = value of the known global solution and f(x^s_stochastic) = values obtained from the solutions of stochastic global optimization approaches (e.g., the RGA-PSO and AIA-PSO algorithms).
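A sketch of (4.1) follows; f_star is assumed nonzero, as holds for all 13 TPs.

```python
import numpy as np

def mape(f_star, f_runs):
    """MAPE (4.1) over the 50 independent runs: the mean absolute
    relative deviation of the obtained values from the known optimum."""
    f_runs = np.asarray(f_runs, dtype=float)
    return np.mean(np.abs((f_star - f_runs) / f_star)) * 100.0
```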

Table 1 lists the parameter settings for the RGA-PSO and AIA-PSO algorithms.

4.1. Comparison of the Results Obtained Using the RGA-PSO and AIA-PSO Algorithms

Table 2 summarizes the numerical results obtained using the proposed RGA-PSO and AIA-PSO algorithms for TPs 1–13. The results indicate that the RGA-PSO and AIA-PSO algorithms can obtain the global minimum solution to TPs 1–11, since each MAPE% is small. Moreover, the best, median, worst, and S.D. of the objective function values obtained from the RGA-PSO and AIA-PSO solutions are identical for TPs 1, 2, 3, 4, 6, 7, 8, 9, and 11. Furthermore, the worst values obtained using the AIA-PSO algorithm for TPs 5 and 13 are smaller than those obtained using the RGA-PSO algorithm. Additionally, a t-test performed for each TP indicates that the difference between the mean values obtained using the RGA-PSO and AIA-PSO algorithms is statistically significant for TPs 5, 10, 12, and 13, since the P value is smaller than the significance level of 0.05. Based on the t-test results, the AIA-PSO algorithm yields better mean values than the RGA-PSO algorithm for TPs 5, 12, and 13 and a worse mean value than the RGA-PSO algorithm for TP 10.

Tables 3 and 4 list the best solutions obtained using the RGA-PSO and AIA-PSO algorithms for TPs 1–13, respectively, indicating that every constraint is satisfied (i.e., each constraint violation is accurate to at least five decimal places) for every TP. Tables 5 and 6 list the best parameter settings of the internal PSO algorithm obtained using the external RGA and AIA approaches, respectively.

4.2. Comparison of the Results for the Proposed RGA-PSO and AIA-PSO Algorithms with Those Obtained Using the Published Individual GA and AIA Approaches and Hybrid Algorithms

Table 7 compares the numerical results of the proposed RGA-PSO and AIA-PSO algorithms with those obtained using published individual GA and AIA approaches for TPs 1–4. In this table, GA-1 is a GA with a penalty function method, as used by Michalewicz [20]; GA-2 is a GA with a penalty function but without any penalty parameter, as used by Deb [21]; GA-3 is an RGA with a static penalty function, as developed by Wu and Chung [9]; AIA-1 is an AIA method called CLONALG, as proposed by Cruz-Cortés et al. [22]; and AIA-2 is an AIA approach based on an adaptive penalty function, as developed by Wu [10]. The numerical results of the GA-1, GA-2, and AIA-1 methods for TPs 1–4 were collected from the published literature [20–22]; these approaches were executed under 350,000 objective function evaluations. To fairly compare the performances of the proposed hybrid CI algorithms with the individual GA and AIA approaches, the GA-3 and AIA-2 methods and the internal PSO algorithms of the RGA-PSO and AIA-PSO methods were each independently executed 50 times under 350,000 objective function evaluations for TPs 1–4.

For TP 1, the median values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those obtained using the GA-1, GA-3, and AIA-2 approaches, and the worst values are smaller than those obtained using the GA-1, GA-2, GA-3, AIA-1, and AIA-2 approaches. For TP 2, the median and worst values obtained using the RGA-PSO and AIA-PSO algorithms are smaller than those obtained using the GA-3 method. For TP 3, the median and worst values are smaller than those obtained using the GA-1 and GA-3 approaches. For TP 4, the median and worst values are smaller than those obtained using the GA-3 method, and the worst values are smaller than those obtained using the AIA-1 approach. Moreover, the GA-3 method yielded the worst MAPE% for TPs 1 and 4. Table 8 lists the results of the t-tests for the GA-3, AIA-2, RGA-PSO, and AIA-PSO methods. The table indicates that the difference between the mean values of the RGA-PSO and AIA-PSO algorithms is not statistically significant, since the P values are larger than the significance level of 0.05, whereas the differences between GA-3 and AIA-2, GA-3 and RGA-PSO, GA-3 and AIA-PSO, AIA-2 and RGA-PSO, and AIA-2 and AIA-PSO are statistically significant. According to Tables 7 and 8, the mean values obtained using the RGA-PSO and AIA-PSO algorithms are better than those obtained using the GA-3 and AIA-1 methods for TPs 1–4.

Table 9 compares the numerical results obtained using the proposed RGA-PSO and AIA-PSO algorithms with those obtained using the AIA-2 and GA-3 methods for TPs 5–13. The AIA-2 and GA-3 methods and the internal PSO algorithms of the RGA-PSO and AIA-PSO approaches were each independently executed 50 times under 300,000 objective function evaluations. Table 9 shows that the MAPE% obtained using the proposed RGA-PSO and AIA-PSO algorithms is close to or smaller than 1% for TPs 5–11, indicating that both algorithms converge to the global optimum for these problems. Moreover, the worst values obtained using the RGA-PSO and AIA-PSO algorithms are significantly smaller than those obtained using the GA-3 method for TPs 5, 6, 11, and 13 and smaller than those obtained using the AIA-2 method for TPs 5, 6, and 13.

Table 10 summarizes the results of the t-tests for TPs 5–13. According to Tables 9 and 10, the mean values of the RGA-PSO and AIA-PSO algorithms are smaller than those of the GA-3 approach for TPs 5–11 and 13 and smaller than those of the AIA-2 approach for TPs 6, 7, 8, 10, and 12. Overall, according to Tables 7–10, the performances of the hybrid CI methods are superior to those of the individual GA and AIA methods.

TPs 12 and 13 have been solved by many hybrid algorithms. For instance, Huang et al. [23] presented a coevolutionary differential evolution (CDE) algorithm that integrates a coevolution mechanism and a DE approach, and Zahara and Kao [24] developed a hybrid Nelder-Mead simplex search and PSO method (NM-PSO). Table 11 compares the numerical results of the CDE, NM-PSO, RGA-PSO, and AIA-PSO methods for TPs 12 and 13. The table indicates that the best, mean, and worst values obtained using the NM-PSO method are superior to those obtained using the CDE, RGA-PSO, and AIA-PSO approaches for TP 12, whereas the best, mean, and worst values obtained using the AIA-PSO algorithm are better than those of the CDE, NM-PSO, and RGA-PSO algorithms for TP 13.

According to the No Free Lunch theorem [35], if algorithm A outperforms algorithm B on average for one class of problems, then the average performance of the former must be worse than that of the latter over the remaining problems. Therefore, it is unlikely that any unique stochastic global optimization approach exists that performs best for all CGO problems.

4.3. Summary of Results

The proposed RGA-PSO and AIA-PSO algorithms with a penalty function method have the following benefits.

(1) Parameter manipulation of the internal PSO algorithm is adapted to the solved CGO problem. Owing to their ability to efficiently solve a UGO problem, the external RGA and AIA approaches are substituted for trial and error in manipulating the parameters (χ, c_1, c_2, ρ, and p_m,PSO).

(2) Besides obtaining the optimum parameter settings of the internal PSO algorithm, the RGA-PSO and AIA-PSO algorithms can yield a global optimum for a CGO problem.

(3) In addition to performing better than some published individual GA and AIA approaches, the proposed RGA-PSO and AIA-PSO algorithms reduce the parameter-tuning effort for the internal PSO algorithm, despite being more complex than individual GA and AIA approaches.

The proposed RGA-PSO and AIA-PSO algorithms have the following limitations.

(1) They increase the computational CPU time, as shown in Table 2.

(2) They are designed to solve CGO problems with continuous decision variables x_n and therefore cannot be directly applied to combinatorial optimization problems, such as job shop scheduling and quadratic assignment problems, that arise in manufacturing.

5. Conclusions

This work presents novel RGA-PSO and AIA-PSO algorithms. The synergistic power of the RGA with PSO algorithm and the AIA with PSO algorithm is also demonstrated by using 13 CGO problems. Numerical results indicate that, in addition to converging to a global minimum for each test CGO problem, the proposed RGA-PSO and AIA-PSO algorithms obtain the optimum parameter settings of the internal PSO algorithm. Moreover, the numerical results obtained using the RGA-PSO and AIA-PSO algorithms are superior to those obtained using alternative stochastic global optimization methods such as individual GA and AIA approaches. The RGA-PSO and AIA-PSO algorithms are highly promising stochastic global optimization approaches for solving CGO problems.

Appendices

A. TP 1 [20, 21]

TP 1 has ten decision variables, eight inequality constraints, and 20 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & x_1^2 + x_2^2 + x_1 x_2 - 14x_1 - 16x_2 + (x_3-10)^2 + 4(x_4-5)^2 + (x_5-3)^2 \\
& + 2(x_6-1)^2 + 5x_7^2 + 7(x_8-11)^2 + 2(x_9-10)^2 + (x_{10}-7)^2 + 45 \\
\text{Subject to } g_1(\mathbf{x}) \equiv {} & -105 + 4x_1 + 5x_2 - 3x_7 + 9x_8 \le 0, \\
g_2(\mathbf{x}) \equiv {} & 10x_1 - 8x_2 - 17x_7 + 2x_8 \le 0, \\
g_3(\mathbf{x}) \equiv {} & -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \le 0, \\
g_4(\mathbf{x}) \equiv {} & 3(x_1-2)^2 + 4(x_2-3)^2 + 2x_3^2 - 7x_4 - 120 \le 0, \\
g_5(\mathbf{x}) \equiv {} & 5x_1^2 + 8x_2 + (x_3-6)^2 - 2x_4 - 40 \le 0, \\
g_6(\mathbf{x}) \equiv {} & x_1^2 + 2(x_2-2)^2 - 2x_1 x_2 + 14x_5 - 6x_6 \le 0, \\
g_7(\mathbf{x}) \equiv {} & 0.5(x_1-8)^2 + 2(x_2-4)^2 + 3x_5^2 - x_6 - 30 \le 0, \\
g_8(\mathbf{x}) \equiv {} & -3x_1 + 6x_2 + 12(x_9-8)^2 - 7x_{10} \le 0, \\
& -10 \le x_n \le 10, \quad n = 1,2,\dots,10.
\end{aligned}
\tag{A.1}
\]

The global solution to TP 1 is

\[
\mathbf{x}^* = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927), \quad f(\mathbf{x}^*) = 24.306.
\tag{A.2}
\]

B. TP 2 [21]

TP 2 involves five decision variables, six inequality constraints, and ten boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 5.3578547x_3^2 + 0.8356891x_1 x_5 + 37.293239x_1 - 40792.141 \\
\text{Subject to } g_1(\mathbf{x}) \equiv {} & -85.334407 - 0.0056858x_2 x_5 - 0.0006262x_1 x_4 + 0.0022053x_3 x_5 \le 0, \\
g_2(\mathbf{x}) \equiv {} & -6.665593 + 0.0056858x_2 x_5 + 0.0006262x_1 x_4 - 0.0022053x_3 x_5 \le 0, \\
g_3(\mathbf{x}) \equiv {} & 9.48751 - 0.0071371x_2 x_5 - 0.0029955x_1 x_2 - 0.0021813x_3^2 \le 0, \\
g_4(\mathbf{x}) \equiv {} & -29.48751 + 0.0071371x_2 x_5 + 0.0029955x_1 x_2 + 0.0021813x_3^2 \le 0, \\
g_5(\mathbf{x}) \equiv {} & 10.699039 - 0.0047026x_3 x_5 - 0.0012547x_1 x_3 - 0.0019085x_3 x_4 \le 0, \\
g_6(\mathbf{x}) \equiv {} & -15.699039 + 0.0047026x_3 x_5 + 0.0012547x_1 x_3 + 0.0019085x_3 x_4 \le 0, \\
& 78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_n \le 45, \; n = 3,4,5.
\end{aligned}
\tag{B.1}
\]

The global solution to TP 2 is

\[
\mathbf{x}^* = (78.0, 33.0, 29.995256, 45.0, 36.775812), \quad f(\mathbf{x}^*) = -30665.539.
\tag{B.2}
\]

C. TP 3 [20, 21]

TP 3 has seven decision variables, four inequality constraints, and 14 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & (x_1-10)^2 + 5(x_2-12)^2 + x_3^4 + 3(x_4-11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 \\
& - 4x_6 x_7 - 10x_6 - 8x_7 \\
\text{Subject to } g_1(\mathbf{x}) \equiv {} & -127 + 2x_1^2 + 3x_2^4 + x_3 + 4x_4^2 + 5x_5 \le 0, \\
g_2(\mathbf{x}) \equiv {} & -282 + 7x_1 + 3x_2 + 10x_3^2 + x_4 - x_5 \le 0, \\
g_3(\mathbf{x}) \equiv {} & -196 + 23x_1 + x_2^2 + 6x_6^2 - 8x_7 \le 0, \\
g_4(\mathbf{x}) \equiv {} & 4x_1^2 + x_2^2 - 3x_1 x_2 + 2x_3^2 + 5x_6 - 11x_7 \le 0, \\
& -10 \le x_n \le 10, \quad n = 1,2,\dots,7.
\end{aligned}
\tag{C.1}
\]

The global solution to TP 3 is

\[
\mathbf{x}^* = (2.330499, 1.951372, -0.4775414, 4.365726, -0.6244870, 1.038131, 1.594227), \quad f(\mathbf{x}^*) = 680.630.
\tag{C.2}
\]

D. TP 4 [20, 21]

TP 4 involves 13 decision variables, nine inequality constraints, and 26 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 5\sum_{n=1}^{4} x_n - 5\sum_{n=1}^{4} x_n^2 - \sum_{n=5}^{13} x_n \\
\text{Subject to } g_1(\mathbf{x}) \equiv {} & 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0, \\
g_2(\mathbf{x}) \equiv {} & 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0, \\
g_3(\mathbf{x}) \equiv {} & 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0, \\
g_4(\mathbf{x}) \equiv {} & -8x_1 + x_{10} \le 0, \\
g_5(\mathbf{x}) \equiv {} & -8x_2 + x_{11} \le 0, \\
g_6(\mathbf{x}) \equiv {} & -8x_3 + x_{12} \le 0, \\
g_7(\mathbf{x}) \equiv {} & -2x_4 - x_5 + x_{10} \le 0, \\
g_8(\mathbf{x}) \equiv {} & -2x_6 - x_7 + x_{11} \le 0, \\
g_9(\mathbf{x}) \equiv {} & -2x_8 - x_9 + x_{12} \le 0, \\
& 0 \le x_n \le 1, \; n = 1,2,\dots,9, \quad 0 \le x_n \le 100, \; n = 10,11,12, \quad 0 \le x_{13} \le 1.
\end{aligned}
\tag{D.1}
\]

The global solution to TP 4 is

\[
\mathbf{x}^* = (1,1,1,1,1,1,1,1,1,3,3,3,1), \quad f(\mathbf{x}^*) = -15.
\tag{D.2}
\]

E. TP 5 (Alkylation Process Design Problem in Chemical Engineering) [1]

TP 5 has seven decision variables subject to 12 nonconvex inequality constraints, two linear inequality constraints, and 14 boundary constraints. The objective is to improve the octane number of some olefin feed by reacting it with isobutane in the presence of acid. The decision variables x_n are the olefin feed rate (barrels/day) x_1, acid addition rate (thousands of pounds/day) x_2, alkylate yield (barrels/day) x_3, acid strength x_4, motor octane number x_5, external isobutane-to-olefin ratio x_6, and F-4 performance number x_7:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & \omega_1 x_1 + \omega_2 x_1 x_6 + \omega_3 x_3 + \omega_4 x_2 + \omega_5 - \omega_6 x_3 x_5 \\
\text{s.t. } g_1(\mathbf{x}) = {} & \omega_7 x_6^2 + \omega_8 x_1^{-1} x_3 - \omega_9 x_6 \le 1, \\
g_2(\mathbf{x}) = {} & \omega_{10} x_1 x_3^{-1} + \omega_{11} x_1 x_3^{-1} x_6 - \omega_{12} x_1 x_3^{-1} x_6^2 \le 1, \\
g_3(\mathbf{x}) = {} & \omega_{13} x_6^2 + \omega_{14} x_5 - \omega_{15} x_4 - \omega_{16} x_6 \le 1, \\
g_4(\mathbf{x}) = {} & \omega_{17} x_5^{-1} + \omega_{18} x_5^{-1} x_6 + \omega_{19} x_4 x_5^{-1} - \omega_{20} x_5^{-1} x_6^2 \le 1, \\
g_5(\mathbf{x}) = {} & \omega_{21} x_7 + \omega_{22} x_2 x_3^{-1} x_4^{-1} - \omega_{23} x_2 x_3^{-1} \le 1, \\
g_6(\mathbf{x}) = {} & \omega_{24} x_7^{-1} + \omega_{25} x_2 x_3^{-1} x_7^{-1} - \omega_{26} x_2 x_3^{-1} x_4^{-1} x_7^{-1} \le 1, \\
g_7(\mathbf{x}) = {} & \omega_{27} x_5^{-1} + \omega_{28} x_5^{-1} x_7 \le 1, \\
g_8(\mathbf{x}) = {} & \omega_{29} x_5 - \omega_{30} x_7 \le 1, \\
g_9(\mathbf{x}) = {} & \omega_{31} x_3 - \omega_{32} x_1 \le 1, \\
g_{10}(\mathbf{x}) = {} & \omega_{33} x_1 x_3^{-1} + \omega_{34} x_3^{-1} \le 1, \\
g_{11}(\mathbf{x}) = {} & \omega_{35} x_2 x_3^{-1} x_4^{-1} - \omega_{36} x_2 x_3^{-1} \le 1, \\
g_{12}(\mathbf{x}) = {} & \omega_{37} x_4 + \omega_{38} x_2^{-1} x_3 x_4 \le 1, \\
g_{13}(\mathbf{x}) = {} & \omega_{39} x_1 x_6 + \omega_{40} x_1 - \omega_{41} x_3 \le 1, \\
g_{14}(\mathbf{x}) = {} & \omega_{42} x_1^{-1} x_3 + \omega_{43} x_1^{-1} - \omega_{44} x_6 \le 1, \\
& 1500 \le x_1 \le 2000, \quad 1 \le x_2 \le 120, \quad 3000 \le x_3 \le 3500, \quad 85 \le x_4 \le 93, \\
& 90 \le x_5 \le 95, \quad 3 \le x_6 \le 12, \quad 145 \le x_7 \le 162,
\end{aligned}
\tag{E.1}
\]

where ω_l (l = 1, 2, ..., 44) denotes positive parameters given in Table 12. The global solution to TP 5 is

\[
\mathbf{x}^* = (1698.18, 53.66, 3031.30, 90.11, 10.50, 153.53), \quad f(\mathbf{x}^*) = 1227.1978.
\tag{E.2}
\]

F. TP 6 (Optimal Reactor Design Problem) [1]

TP 6 contains eight decision variables subject to four nonconvex inequality constraints and 16 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 0.4 x_1^{0.67} x_7^{-0.67} + 0.4 x_2^{0.67} x_8^{-0.67} + 10 - x_1 - x_2 \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.0588 x_5 x_7 + 0.1 x_1 \le 1, \\
g_2(\mathbf{x}) = {} & 0.0588 x_6 x_8 + 0.1 x_1 + 0.1 x_2 \le 1, \\
g_3(\mathbf{x}) = {} & 4 x_3 x_5^{-1} + 2 x_3^{-0.71} x_5^{-1} + 0.0588 x_3^{-1.3} x_7 \le 1, \\
g_4(\mathbf{x}) = {} & 4 x_4 x_6^{-1} + 2 x_4^{-0.71} x_6^{-1} + 0.0588 x_4^{-1.3} x_8 \le 1, \\
& 0.1 \le x_n \le 10, \quad n = 1,2,\dots,8.
\end{aligned}
\tag{F.1}
\]

The global solution to TP 6 is

\[
\mathbf{x}^* = (6.4747, 2.2340, 0.6671, 0.5957, 5.9310, 5.5271, 1.0108, 0.4004), \quad f(\mathbf{x}^*) = 3.9511.
\tag{F.2}
\]

G. TP 7 [1]

TP 7 has four decision variables, two nonconvex inequality constraints, and eight boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & -x_1 + 0.4 x_1^{0.67} x_3^{-0.67} \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.05882 x_3 x_4 + 0.1 x_1 \le 1, \\
g_2(\mathbf{x}) = {} & 4 x_2 x_4^{-1} + 2 x_2^{-0.71} x_4^{-1} + 0.05882 x_2^{-1.3} x_3 \le 1, \\
& 0.1 \le x_1, x_2, x_3, x_4 \le 10.
\end{aligned}
\tag{G.1}
\]

The global solution to TP 7 is

\[
\mathbf{x}^* = (8.1267, 0.6154, 0.5650, 5.6368), \quad f(\mathbf{x}^*) = -5.7398.
\tag{G.2}
\]

H. TP 8 [1]

TP 8 contains three decision variables subject to one nonconvex inequality constraint and six boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 0.5 x_1 x_2^{-1} - x_1 - 5 x_2^{-1} \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.01 x_2 x_3^{-1} + 0.01 x_1 + 0.0005 x_1 x_3 \le 1, \\
& 1 \le x_n \le 100, \quad n = 1,2,3.
\end{aligned}
\tag{H.1}
\]

The global solution to TP 8 is

\[
\mathbf{x}^* = (88.2890, 7.7737, 1.3120), \quad f(\mathbf{x}^*) = -83.2540.
\tag{H.2}
\]

I. TP 9 [1]

TP 9 contains eight decision variables subject to four nonconvex inequality constraints and 16 boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & -x_1 - x_5 + 0.4 x_1^{0.67} x_3^{-0.67} + 0.4 x_5^{0.67} x_7^{-0.67} \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.05882 x_3 x_4 + 0.1 x_1 \le 1, \\
g_2(\mathbf{x}) = {} & 0.05882 x_7 x_8 + 0.1 x_1 + 0.1 x_5 \le 1, \\
g_3(\mathbf{x}) = {} & 4 x_2 x_4^{-1} + 2 x_2^{-0.71} x_4^{-1} + 0.05882 x_2^{-1.3} x_3 \le 1, \\
g_4(\mathbf{x}) = {} & 4 x_6 x_8^{-1} + 2 x_6^{-0.71} x_8^{-1} + 0.05882 x_6^{-1.3} x_7 \le 1, \\
& 0.01 \le x_n \le 10, \quad n = 1,2,\dots,8.
\end{aligned}
\tag{I.1}
\]

The global solution to TP 9 is

\[
\mathbf{x}^* = (6.4225, 0.6686, 1.0239, 5.9399, 2.2673, 0.5960, 0.4029, 5.5288), \quad f(\mathbf{x}^*) = -6.0482.
\tag{I.2}
\]

J. TP 10 [34]

TP 10 contains three decision variables subject to one nonconvex inequality constraint and six boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 5 x_1 + 50000 x_1^{-1} + 20 x_2 + 72000 x_2^{-1} + 10 x_3 + 144000 x_3^{-1} \\
\text{s.t. } g_1(\mathbf{x}) = {} & 4 x_1^{-1} + 32 x_2^{-1} + 120 x_3^{-1} \le 1, \\
& 1 \le x_n \le 1000, \quad n = 1,2,3.
\end{aligned}
\tag{J.1}
\]

The global solution to TP 10 is

\[
\mathbf{x}^* = (107.4, 84.9, 204.5), \quad f(\mathbf{x}^*) = 6300.
\tag{J.2}
\]

K. TP 11 [1, 34]

TP 11 involves five decision variables, six inequality constraints, and ten boundary conditions, as follows:

\[
\begin{aligned}
\text{Minimize } g_0(\mathbf{x}) = {} & 5.3578 x_3^2 + 0.8357 x_1 x_5 + 37.2392 x_1 \\
\text{s.t. } g_1(\mathbf{x}) = {} & 0.00002584 x_3 x_5 - 0.00006663 x_2 x_5 - 0.0000734 x_1 x_4 \le 1, \\
g_2(\mathbf{x}) = {} & 0.000853007 x_2 x_5 + 0.00009395 x_1 x_4 - 0.00033085 x_3 x_5 \le 1, \\
g_3(\mathbf{x}) = {} & 1330.3294 x_2^{-1} x_5^{-1} - 0.42 x_1 x_5^{-1} - 0.30586 x_2^{-1} x_3^2 x_5^{-1} \le 1, \\
g_4(\mathbf{x}) = {} & 0.00024186 x_2 x_5 + 0.00010159 x_1 x_2 + 0.00007379 x_3^2 \le 1, \\
g_5(\mathbf{x}) = {} & 2275.1327 x_3^{-1} x_5^{-1} - 0.2668 x_1 x_5^{-1} - 0.40584 x_4 x_5^{-1} \le 1, \\
g_6(\mathbf{x}) = {} & 0.00029955 x_3 x_5 + 0.00007992 x_1 x_3 + 0.00012157 x_3 x_4 \le 1, \\
& 78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_3 \le 45, \quad 27 \le x_4 \le 45, \quad 27 \le x_5 \le 45.
\end{aligned}
\tag{K.1}
\]

The global solution to TP 11 is

\[
\mathbf{x}^* = (78.0, 33.0, 29.998, 45.0, 36.7673), \quad f(\mathbf{x}^*) = 10122.6964.
\tag{K.2}
\]

L. TP 12 (a Tension/Compression Spring Design Problem) [23]

TP 12 involves three decision variables, four inequality constraints, and six boundary conditions. This problem, taken from Huang et al. [23], attempts to minimize the weight f(x) of a tension/compression spring subject to constraints on minimum deflection, shear stress, and surge frequency. The design variables are the wire diameter x_1, mean coil diameter x_2, and number of active coils x_3:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & (x_3 + 2)\, x_2 x_1^2 \\
\text{s.t. } g_1(\mathbf{x}) = {} & 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0, \\
g_2(\mathbf{x}) = {} & \frac{4x_2^2 - x_1 x_2}{12566\left(x_2 x_1^3 - x_1^4\right)} + \frac{1}{5108 x_1^2} - 1 \le 0, \\
g_3(\mathbf{x}) = {} & 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0, \\
g_4(\mathbf{x}) = {} & \frac{x_1 + x_2}{1.5} - 1 \le 0, \\
& 0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15.
\end{aligned}
\tag{L.1}
\]

M. TP 13 (Pressure Vessel Design Problem) [23]

TP 13 involves four decision variables, four inequality constraints, and eight boundary conditions. This problem attempts to minimize the total cost f(x), including the cost of materials, forming, and welding, of a cylindrical vessel capped at both ends by hemispherical heads. Four design variables exist: the thickness of the shell x_1, thickness of the head x_2, inner radius x_3, and length of the cylindrical section of the vessel excluding the head x_4:

\[
\begin{aligned}
\text{Minimize } f(\mathbf{x}) = {} & 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3 \\
\text{s.t. } g_1(\mathbf{x}) = {} & -x_1 + 0.0193 x_3 \le 0, \\
g_2(\mathbf{x}) = {} & -x_2 + 0.00954 x_3 \le 0, \\
g_3(\mathbf{x}) = {} & -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0, \\
g_4(\mathbf{x}) = {} & x_4 - 240 \le 0, \\
& 0 \le x_1 \le 100, \quad 0 \le x_2 \le 100, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200.
\end{aligned}
\tag{M.1}
\]

Acknowledgment

The author would like to thank the National Science Council of the Republic of China, Taiwan, for financially supporting this research under Contract no. NSC 100-2622-E-262-006-CC3.