Research Article  Open Access
Jinn-Tsong Tsai, Jyh-Horng Chou, Wen-Hsien Ho, "Improved Quantum-Inspired Evolutionary Algorithm for Engineering Design Optimization", Mathematical Problems in Engineering, vol. 2012, Article ID 836597, 27 pages, 2012. https://doi.org/10.1155/2012/836597
Improved Quantum-Inspired Evolutionary Algorithm for Engineering Design Optimization
Abstract
An improved quantum-inspired evolutionary algorithm is proposed for solving mixed discrete-continuous nonlinear problems in engineering design. The proposed Latin square quantum-inspired evolutionary algorithm (LSQEA) combines Latin squares and the quantum-inspired genetic algorithm (QGA). The novel contribution of the proposed LSQEA is the use of a QGA to explore the optimal feasible region in macrospace and the use of the systematic reasoning mechanism of the Latin square to exploit better solutions in microspace. By combining the advantages of exploration and exploitation, the LSQEA provides higher computational efficiency and robustness compared to the QGA and real-coded GA when solving global numerical optimization problems with continuous variables. Additionally, the proposed LSQEA approach effectively solves mixed discrete-continuous nonlinear design optimization problems in which the design variables are integers, discrete values, and continuous values. The computational experiments show that the proposed LSQEA approach obtains better results compared to existing methods reported in the literature.
1. Introduction
Solving engineering design optimization problems usually requires consideration of many different types of design variables and many constraints. Practical problems in engineering design often involve a mix of integer, discrete, and continuous variables. These constraints are often problematic during the engineering design optimization process. Since the 1960s, researchers have attempted to solve this problem, which is known as the mixed discrete nonlinear programming (MDNLP) problem. One of the most effective solutions reported so far is a nonlinear branch-and-bound method (BBM) for solving nonlinear and discrete programming in mechanical design optimization [1, 2]. In the BBM, however, subproblems result from partitioning the feasible domain to obtain solutions by ignoring discrete conditions, and the number of times the problem needs to be resolved increases exponentially with the number of variables [3]. An improved method, sequential linear programming (SLP), was developed by Bremicker et al. [4] and by Loh and Papalambros [5] to solve general MDNLP problems. The linearized discrete problem is solved by the simplex method to obtain information at each node of the tree. Their SLP approach was compared with the pure BBM, in which sequential quadratic programming is used to solve the nonlinear problem to obtain information at each node; the study showed the SLP method to be superior to the pure BBM. Other approaches to solving MDNLP problems include the penalty function approach [6–8] and the Lagrangian relaxation approach [9]. The penalty function approach treats the requirement of discreteness by defining additional constraints and constructing a penalty function for them. This term imposes a penalty for deviations from the discrete values. The difficulties with a penalty approach are the introduction of additional local minima and the repeated minimizations required to adjust the penalty parameters [3].
The Lagrangian relaxation method is similar to the penalty function method. The main difference is that the additional terms due to discrete variables are added to a Lagrangian function instead of a penalty function. The Lagrangian relaxation approach does not guarantee finding a global solution, even if the problem is convex before discrete variables are introduced. Some of the methods for discrete variable optimization use the structure of the problem to speed up the search for the discrete solution; these methods are not suitable for implementation in general-purpose applications. The BBM is the most general method; however, it is time-consuming. In recent years, the focus has shifted to applications of soft-computing optimization techniques that naturally handle mixed discrete and continuous variables for solving practical engineering problems. These approaches include genetic algorithms (GAs) [10–18], simulated annealing [19], differential evolution [20, 21], and evolutionary programming [22]. The major challenge when solving MDNLP problems is that numerous local optima can cause the methods to become trapped in local optima of the objective functions [12]. Therefore, an efficient and robust algorithm is needed to solve mixed discrete-continuous nonlinear design optimization problems in the engineering design field.
In the past decade, the emerging field of quantum-inspired computing has motivated intensive studies of algorithms such as the Shor factorizing algorithm [23] and the Grover quantum search algorithm [24, 25]. By applying quantum mechanical principles such as quantum-bit representation and superposition of states, quantum-inspired computing can process huge numbers of quantum states simultaneously and in parallel. To introduce strong parallelism into the evolutionary algorithm, Han et al. [26] and Han and Kim [27–29] proposed the quantum-inspired genetic algorithm (QGA). For solving combinatorial optimization problems, the QGA has proven superior to conventional GAs. Malossini et al. [30] showed that, by taking advantage of quantum phenomena, the QGA improves the speed and efficiency of genetic procedures. Quantum-inspired evolutionary algorithms have also been used to solve optimization problems such as partition function calculation [31], nonlinear blind source separation [32], filter design [33], numerical optimization [34–36], hyperspectral anomaly detection [37], multiple sequence alignment [38], thermal process identification [39], and multiobjective optimization [40]. However, the performance of the simple QGA is often unsatisfactory, and it is easily trapped in local optima, which results in premature convergence. That is, the quantum-inspired bit (Q-bit) search with the quantum mechanism must be well coordinated with the genetic search with the evolution mechanism, and the exploration and exploitation behaviors must also be well balanced [35]. Therefore, a major challenge is to improve the exploration and exploitation capabilities of the QGA and to develop an efficient and robust algorithm.
The efficient and robust Latin square quantum-inspired evolutionary algorithm (LSQEA) proposed in this study solves global numerical optimization problems with continuous variables and mixed discrete-continuous nonlinear design optimization problems. The LSQEA approach integrates Latin squares [41–43] and the QGA (i.e., quantum-inspired individuals and mechanisms with GAs). The concept of using the QGA came from the works of Han et al. [26] and Han and Kim [27–29], while the development steps were implemented by the authors and are shown in Section 3. The role of the Latin square is to generate better individuals through Latin square-based recombination: the systematic reasoning ability of the Latin square from the Taguchi method is incorporated into the recombination operation to select better Q-bits. This role is important for improving the efficiency of the crossover operation in generating representative individuals and better-fit trial individuals. The Latin square is applied to recombine the better Q-bits so that potential individuals in microspace can be exploited, while the QGA is used to explore the optimal feasible region in macrospace. Therefore, the LSQEA approach is highly robust and achieves quick convergence.
The paper is organized as follows. Section 2 gives the problem statements. The LSQEA for solving mixed discrete-continuous nonlinear design optimization problems is described in Section 3. In Section 4, the proposed LSQEA approach is compared with the QGA and real-coded GA (RGA) [44–47] in terms of performance in solving global numerical optimization problems with continuous variables. The LSQEA approach is then used to solve mixed discrete-continuous nonlinear design problems encountered in the engineering design field, and the results obtained by the LSQEA are compared with those obtained by existing methods reported in the literature. Finally, Section 5 concludes the study.
2. Problem Statements
This section states the considered problems, which include a global numerical optimization problem with continuous variables and a mixed discrete-continuous nonlinear programming problem.
The following global numerical optimization problem with continuous variables is considered:

minimize f(X), X = (x_1, x_2, ..., x_D),
subject to g_j(X) <= 0, j = 1, 2, ..., p, and h_k(X) = 0, k = 1, 2, ..., q, (2.1)

where X is a continuous variable vector, f(X) is an objective function, and g_j(X) and h_k(X) define the feasible solution vector spaces. The domain of each x_i is denoted by [x_i(min), x_i(max)], and the feasible solution space is defined by these bounds together with the constraints. For this problem, g_j(X) and h_k(X) are the constraint functions. Although many design problems can be cast as the above optimization problem, efficiently obtaining the optimal solution is difficult because the problem involves designs that are high-dimensional, nondifferentiable, and multimodal [35].
The mixed discrete-continuous nonlinear programming problem is expressed as follows [12]:

minimize f(X), X = (x_1, ..., x_n1, x_(n1+1), ..., x_n2, x_(n2+1), ..., x_n),
subject to g_j(X) <= 0, j = 1, 2, ..., p, (2.2)

where x_1, ..., x_n1 are nonnegative discrete variables with equally spaced permissible values; x_(n1+1), ..., x_n2 are nonnegative discrete variables with unequally spaced permissible values; and x_(n2+1), ..., x_n are nonnegative continuous variables. Practical optimization problems encountered in the engineering design field often have many constraints and require consideration of different types of design variables, as shown in (2.2). Again, because these problems involve high-dimensional, nondifferentiable, and multimodal properties, an effective algorithm is needed to solve them optimally and efficiently. According to the above statements, the problem in (2.2) is a special case of the problem in (2.1).
For the mixed discrete-continuous design problem in (2.2), the discrete variables with equal spacing (i.e., values in arithmetic progression) are x_i = l_i + (j_i − 1)Δ_i for i = 1, 2, ..., n1, where l_i is the lower bound of x_i, j_i is the natural number corresponding to x_i, Δ_i is the discrete increment of the i-th discrete variable, and n1 is the number of discrete variables with equal spacing. Let x_i = V_i(j_i) for the discrete variables with unequal spacing, where j_i ∈ {1, 2, ..., m_i} denotes the natural number corresponding to x_i, x_i is generated from V_i, V_i(j_i) denotes the j_i-th element of the vector V_i, V_i represents the vector of permissible values of the discrete variable with unequal spacing, and m_i is the maximum permissible number of discrete values for the i-th discrete variable with unequal spacing.
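As an illustration, the two discrete-variable mappings above can be sketched as follows. The function names and the example wire-diameter table are our own illustrative stand-ins, not values from the paper.

```python
def decode_equally_spaced(lower, delta, j):
    """j-th permissible value (j = 1, 2, ...) of an equally spaced
    discrete variable: lower bound plus (j - 1) increments."""
    return lower + (j - 1) * delta

def decode_unequally_spaced(table, j):
    """j-th permissible value of an unequally spaced discrete variable,
    looked up in a vector of allowed values (e.g., a design-code table)."""
    return table[j - 1]
```

For instance, decode_equally_spaced(1.0, 0.5, 3) returns 2.0, the third value of the progression 1.0, 1.5, 2.0, ...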
3. The LSQEA Approach for Solving Mixed Discrete-Continuous Design Problems
This section describes the details of the LSQEA approach for solving mixed discrete-continuous nonlinear programming problems.
3.1. Q-Bit Representation
In quantum-inspired computing, the smallest unit of information stored in a two-state quantum computer is called a quantum bit. It may be in the "0" state, the "1" state, or any superposition of the two. The state of a quantum bit can be represented as |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex numbers that describe the probability amplitudes of the corresponding states. The values |α|² and |β|² are the probabilities of the quantum bit being in the "0" state and the "1" state, respectively, such that |α|² + |β|² = 1.
The use of a Q-bit to represent an individual in this study was inspired by quantum computing concepts. The advantage of this representation is the capability to use linear superposition to generate any possible solution. A Q-bit individual can be represented by a string of m Q-bits, that is, by the m amplitude pairs (α_1, β_1), (α_2, β_2), ..., (α_m, β_m), where |α_i|² + |β_i|² = 1 for i = 1, 2, ..., m. Since Q-bits represent a linear superposition of states, a Q-bit representation provides better population diversity compared to other representations used in evolutionary computing. For example, consider a three-Q-bit system whose three pairs of amplitudes are (1/√2, 1/√2), (1/√2, 1/√2), and (1/2, √3/2). The state of the system can be represented as (1/4)(|000⟩ + |010⟩ + |100⟩ + |110⟩) + (√3/4)(|001⟩ + |011⟩ + |101⟩ + |111⟩). This result means that the probabilities of representing the states |000⟩, |001⟩, |010⟩, |011⟩, |100⟩, |101⟩, |110⟩, and |111⟩ are 1/16, 3/16, 1/16, 3/16, 1/16, 3/16, 1/16, and 3/16, respectively. Consequently, the above three-Q-bit system contains the information of eight states.
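The probability bookkeeping for a three-Q-bit system with probabilities 1/16 and 3/16 can be checked with a short sketch. The amplitudes below are an assumed choice consistent with those probabilities (the first two Q-bits with |α|² = |β|² = 1/2, the third with |α|² = 1/4 and |β|² = 3/4); the function name is ours.

```python
import math
from itertools import product

# Assumed amplitudes for a three-Q-bit example.
qbits = [(1 / math.sqrt(2), 1 / math.sqrt(2)),
         (1 / math.sqrt(2), 1 / math.sqrt(2)),
         (1 / 2, math.sqrt(3) / 2)]

def state_probabilities(qbits):
    """Return {bitstring: probability} for the superposition encoded by a
    list of (alpha, beta) amplitude pairs with alpha^2 + beta^2 = 1."""
    probs = {}
    for bits in product((0, 1), repeat=len(qbits)):
        p = 1.0
        for (alpha, beta), b in zip(qbits, bits):
            p *= beta ** 2 if b else alpha ** 2
        probs["".join(map(str, bits))] = p
    return probs

probs = state_probabilities(qbits)  # eight states; probabilities sum to 1
```

Here probs["000"] is 1/16 and probs["001"] is 3/16, matching the text.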
3.2. Initial Population
The initialization procedure produces the Q-bit individuals of the initial population, the number of which equals the population size.
3.3. Crossover Operation
The crossover operator is the one-cut-point operator, which randomly determines one cut point and exchanges the Q-bit parts of the two parents to the right of the cut point to generate new offspring, as shown in (3.5). For example, in the following four-Q-bit system, if the 2nd position is selected as the cut point, the Q-bit pairs at positions 3 and 4 of the two parents are exchanged.
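A minimal sketch of the one-cut-point operator, with Q-bit strings represented simply as lists of (alpha, beta) pairs; the helper name is ours.

```python
import random

def one_cut_point_crossover(parent1, parent2, cut=None):
    """Randomly choose one cut point and exchange the parts of the two
    parents to the right of it, producing two offspring."""
    if cut is None:
        cut = random.randint(1, len(parent1) - 1)
    return parent1[:cut] + parent2[cut:], parent2[:cut] + parent1[cut:]
```

With the 2nd position as the cut point, positions 3 and 4 of two four-element parents are exchanged, as in the example above.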
3.4. Mutation Operation
Mutation of Q-bits is performed by randomly determining one position and then exchanging the corresponding α and β at that position. For example, in the following four-Q-bit system, if the 2nd position is selected for mutation, the amplitude pair (α_2, β_2) is swapped to (β_2, α_2).
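The amplitude-swap mutation can be sketched as follows (function name ours); swapping α and β exchanges the probabilities of the "0" and "1" states.

```python
import random

def qbit_mutation(individual, pos=None):
    """Swap the (alpha, beta) amplitudes of one randomly chosen Q-bit,
    exchanging the probabilities of the "0" and "1" states."""
    if pos is None:
        pos = random.randrange(len(individual))
    mutated = list(individual)
    alpha, beta = mutated[pos]
    mutated[pos] = (beta, alpha)
    return mutated
```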
3.5. Q-Bit Rotation Operation
The purpose of a rotation gate is to update a Q-bit individual by rotating each Q-bit toward the direction of the corresponding Q-bit of a better solution. The amplitude pair (α_i, β_i) of the i-th Q-bit is updated as follows:

α_i' = cos(θ_i)·α_i − sin(θ_i)·β_i,
β_i' = sin(θ_i)·α_i + cos(θ_i)·β_i,

where θ_i is the rotation angle.
For example, if (α, β) = (1/√2, 1/√2) = (cos(π/4), sin(π/4)) and θ = 0.05π, the Q-bit rotation operation yields (α', β') = (cos(0.3π), sin(0.3π)). The probability |β'|² of the "1" state becomes larger, and the probability |α'|² of the "0" state becomes smaller.
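The rotation update can be sketched as follows (function name ours); the rotation is norm-preserving, so the amplitudes remain a valid Q-bit.

```python
import math

def qbit_rotation(alpha, beta, theta):
    """Apply the 2x2 rotation gate to one Q-bit's amplitudes; the norm
    alpha^2 + beta^2 = 1 is preserved."""
    new_alpha = math.cos(theta) * alpha - math.sin(theta) * beta
    new_beta = math.sin(theta) * alpha + math.cos(theta) * beta
    return new_alpha, new_beta
```

Rotating (1/√2, 1/√2) by a positive angle raises β² above 1/2 and lowers α² below it, as described above.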
3.6. Penalty Function
When an evolutionary method is used to solve a constrained optimization problem, a penalty function is used to relax the constraints by penalizing the infeasible individuals in the population. This method improves the probability of approaching a feasible region of the search space by navigating through infeasible regions and by reducing the penalty as a feasible region is approached. To clarify this point, it is important to distinguish between feasible and infeasible individuals. The infeasible individuals violate one or more of the design constraints; the greater this violation is, the larger the penalty should be. Given these considerations, the penalty value is defined in (3.10) in terms of the value computed from each constraint function once the values of the design variables are determined, the upper and lower bounds of that constraint function, a threshold distinguishing the feasible from the infeasible individuals, and two weights. Equation (3.10) generally requires that each value calculated for the constraint function be limited to its upper and lower bounds. If the value is located within the feasible region, the value is not punished. Otherwise, the value is punished by being multiplied by a large number. The penalty value equals 0 when the optimization process is complete since the values of the design variables no longer violate the design constraints.
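The gist of such a penalty term can be sketched as follows. The bound-violation form and the weight BIG_M below are our illustrative assumptions, not the exact definition in (3.10).

```python
BIG_M = 1e6  # assumed large penalty weight (illustrative)

def penalty_value(constraint_values, lower_bounds, upper_bounds, big_m=BIG_M):
    """Sum weighted violations of constraint values outside their
    [lower, upper] bounds; feasible individuals incur zero penalty."""
    total = 0.0
    for g, lo, hi in zip(constraint_values, lower_bounds, upper_bounds):
        if g > hi:
            total += big_m * (g - hi)   # violates the upper bound
        elif g < lo:
            total += big_m * (lo - g)   # violates the lower bound
    return total
```

A feasible individual (all constraint values within bounds) receives penalty 0, so its fitness is unaffected.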
3.7. Latin Square
The Latin square experimental design method screens for the important factors that affect product performance. Therefore, it can be used to study a large number of decision variables with a small number of experiments. The design variables (parameters) are called factors, and the parameter settings are called levels. The name Latin square originates from Leonhard Euler, who used Latin characters as symbols. Details regarding the experimental design method can be found in texts by Phadke [41], Montgomery [42], and Park [43]. For an orthogonal array, matrix experiments are conducted by randomly choosing two individuals from the Q-bit population pool. Each factor of one Q-bit individual is designated level 1, and each factor of the other Q-bit individual is designated level 2. The two-level orthogonal arrays of Latin squares applied here have n = 2^k rows, corresponding to the individual experiments, and n − 1 columns, where k is a positive integer. Each of the F design factors has two levels, and the array must satisfy F ≤ n − 1. If F < n − 1, only the first F columns are used while the other columns are ignored. For example, if each of six factors has two levels, only six columns are needed to allocate these factors. In this case, the L_8(2^7) array is sufficient for this purpose because it has seven columns.
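A standard two-level orthogonal array of this family can be constructed programmatically. The parity-based recipe below is one common construction (our sketch, not the paper's procedure); for k = 3 it yields an L_8(2^7) array.

```python
def two_level_oa(k):
    """Build a two-level orthogonal array with n = 2**k rows (experiments)
    and n - 1 columns (factors), entries in {1, 2}. Entry (r, c) is derived
    from the parity of the bitwise AND of the row and column indices."""
    n = 2 ** k
    return [[1 + bin(r & c).count("1") % 2 for c in range(1, n)]
            for r in range(n)]
```

two_level_oa(3) has 8 rows and 7 columns; every column contains each level four times, and every pair of columns contains each level combination exactly twice, which is the defining balance property of an orthogonal array.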
The better combinations of decision variables are also determined by integrating the orthogonal array of the Latin square with the signal-to-noise ratio of the Taguchi method. The concept of the Taguchi method is to maximize signal-to-noise ratios, used as performance measures, by using the orthogonal array to run a partial set of experiments. The signal-to-noise ratio reflects the mean-square deviation in the objective function. For the larger-the-better characteristic and the smaller-the-better characteristic, Taguchi defined the signal-to-noise ratio η, expressed in decibels, as η = −10·log10((1/n)·Σ(1/y_i²)) and η = −10·log10((1/n)·Σ y_i²), respectively, where the y_i denote a set of characteristics. Further details can be found in the works by Phadke [41], Montgomery [42], and Park [43].
If only the magnitude of y in the orthogonal array experiments is of interest, the previous equations can be modified as η_t = y_t² if the objective function is to be maximized (larger-the-better) and as η_t = 1/y_t² if the objective function is to be minimized (smaller-the-better). Let y_t denote the evaluation value of the objective function in experiment t, where t = 1, 2, ..., n, and n is the number of orthogonal array experiments. The effect of factor f at level l can then be defined as E_(f,l), the sum of η_t over all experiments t in which factor f is set to level l, where f is the factor name or number and l is the level number.
The main objective of the matrix experiments is to choose a new Q-bit individual from the two Q-bit individuals at each locus (factor). At each locus, the Q-bit whose level has the larger effect E_(f,l) in the experimental region is chosen. That is, the objective is to determine the best level for each factor, namely, the level that maximizes the value of E_(f,l) in the experimental region. For the two-level problem, if E_(f,1) > E_(f,2), the better level is level 1 for factor f; otherwise, level 2 is better. After the best level is determined for each factor, the best levels are combined to obtain the new individual. Therefore, the systematic reasoning ability of the orthogonal array of the Latin square, combined with the signal-to-noise ratio of the Taguchi method, ensures that the new Q-bit individual has the best or close-to-best evaluation value of the objective function among the 2^F combinations of factor levels, where 2^F is the total number of experiments needed for all combinations of factor levels.
For the matrix experiments of an orthogonal array of Latin squares, the generation of better individuals requires random selection of two Q-bit individuals at a time from the Q-bit population pool generated by the one-cut-point crossover operation. A new individual generated by each matrix experiment is superior to its parents because of the systematic reasoning ability of an orthogonal array of Latin squares, following the algorithm below [46]. The two individuals recombine their better Q-bits into a better-fit individual, so that potential individuals in microspace can be exploited. The detailed steps for each matrix experiment are described as follows.
Algorithm
Step 1. Set t = 1. Generate two sets Q1 and Q2, each of which has F design factors (variables). Allocate the F design factors to the first F columns of the orthogonal array, where F ≤ n − 1.
Step 2. Designate sets Q1 and Q2 as level 1 and level 2, respectively, by using a uniformly distributed random method to choose two Q-bit individuals from the Q-bit population pool generated by the crossover operation.
Step 3. Assign the level 1 values obtained from Q1 and the level 2 values obtained from Q2 to the level cells of the t-th experiment in the orthogonal array.
Step 4. Calculate the fitness value and the signal-to-noise ratio η_t for the new individual.
Step 5. If t = n, then go to Step 6. Otherwise, set t = t + 1 and repeat Steps 3–5.
Step 6. Calculate the effects of the various factors E_(f,1) and E_(f,2), where f = 1, 2, ..., F.
Step 7. The Q-bit at locus f of the new Q-bit individual is obtained from Q1 if E_(f,1) > E_(f,2). Otherwise, it is obtained from Q2, where f = 1, 2, ..., F. Implementing this process for each Q-bit at each locus then yields the new Q-bit individual.
3.8. Steps of LSQEA
The LSQEA approach integrates Latin squares and the QGA. The Latin square method is performed between the one-cut-point crossover operation and the mutation operation. For a constrained problem, the penalty function is considered when the fitness value is calculated. The steps of the LSQEA approach are described as follows.
Step 1. Set the parameters, including the population size, crossover rate, mutation rate, and number of generations.
Step 2. Generate an initial Q-bit population, and calculate the fitness values for the population.
Step 3. Perform the selection operation by the roulette wheel approach [44].
Step 4. Perform the one-cut-point crossover operation. Select Q-bit individuals for crossover according to the crossover rate.
Step 5. Perform the matrix experiments of the Latin squares method, and use signal-to-noise ratios to generate the better offspring.
Step 6. Repeat Step 5 until the expected loop count has been reached.
Step 7. Generate the Q-bit population via the Latin squares method.
Step 8. Perform the mutation operation in the Q-bit population. Select Q-bits for mutation according to the mutation rate.
Step 9. Except for the best individual, select Q-bit individuals for the Q-bit rotation operation.
Step 10. Generate the new Q-bit population.
Step 11. Has the stopping criterion been met? If so, go to Step 12. Otherwise, repeat Steps 3–11.
Step 12. Display the best individual and fitness value.
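The roulette wheel selection of Step 3 (fitness-proportionate selection) can be sketched as follows; the function name is ours, and positive fitness values are assumed.

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Select one individual with probability proportional to its fitness
    (fitness-proportionate, i.e., roulette wheel, selection)."""
    pick = random.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for individual, fitness in zip(population, fitnesses):
        acc += fitness
        if acc >= pick:
            return individual
    return population[-1]  # guard against floating-point rounding
```

Individuals with larger fitness occupy larger slices of the wheel and are therefore selected more often, which biases the next generation toward fitter Q-bit individuals.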
4. Design Examples and Comparisons
This section first describes the performance evaluation results for the proposed LSQEA approach. The performance of the LSQEA is then compared with those of the QGA and RGA methods in solving nonlinear programming optimization problems with continuous variables. Finally, the LSQEA approach is used to solve mixed discrete-continuous nonlinear design problems in the engineering design field, and its solutions are compared with those of other methods reported in the literature.
4.1. Solving Nonlinear Programming Optimization Problems with Continuous Variables
For performance evaluation, the proposed LSQEA approach was used to solve the nonlinear programming optimization problems shown in Table 1 [48–50]. The test functions included quadratic, linear, polynomial, and nonlinear forms. The constraints of these four test functions were linear and nonlinear inequalities, and their dimensions were 13, 8, 7, and 10, respectively. The penalty function of (3.10) was used to handle the linear and nonlinear inequality constraints during optimization. The test functions had sufficient local minima to provide challenging problems for the purpose of performance evaluation. To identify any performance improvements obtained by applying the Latin square and quantum computing-inspired concepts, the QGA and RGA approaches were also used to solve the test functions.

Optimizing the main parameters in evolutionary environments continues to be an area of active research in this field. Studies have shown how the performance of a GA can be improved by modifying its main parameters [51, 52]. For example, Chou et al. [53] applied an experimental design method to improve the performance of a GA by optimizing its evolutionary parameters. Therefore, this study adjusted the evolutionary parameters by using the same experimental design method applied in Chou et al. [53]. The evolutionary environments used for experimental computation by the LSQEA, QGA, and RGA approaches were as follows. For all four test functions, the population size was 300, the crossover rate was 0.9, and the mutation rate was 0.1. For three of the test functions, the stopping criterion for all methods was 540,000 function calls. For the remaining test function, however, the stopping criterion was set to only 300,000 function calls because the methods approached the optimal value fastest on it. Additionally, each test function was evaluated in 30 independent runs, and the data collected included the best value, the mean function value, and the standard deviation of the function values.
Table 1 shows that the test functions involved 13, 8, 7, or 10 variables (factors), which required 13, 8, 7, or 10 columns, respectively, to allocate them in the Latin square used in the LSQEA approach. The L_8(2^7) Latin square was used for 7 variables because it has 7 columns. The L_16(2^15) Latin square was used for 13, 8, or 10 variables because it has 15 columns; in these cases, the first 13, 8, or 10 columns were used, whereas the remaining 2, 7, or 5 columns, respectively, were ignored. The computational procedures and evolutionary environments used to solve the test functions by the QGA and RGA approaches were the same as those used in the LSQEA approach. However, the Latin square was not used in the QGA and RGA, and quantum-inspired computing was not used in the RGA.
In Table 2, the comparison of results obtained by the LSQEA, QGA, and RGA approaches reveals the following. (1) The proposed LSQEA finds optimal or near-optimal solutions. (2) For all test functions, the LSQEA solutions are closer to optimal than the QGA and RGA solutions. (3) For all test functions, the deviations in function values are smaller in the proposed LSQEA than in the QGA and RGA; that is, the proposed LSQEA has a relatively more stable solution quality. Since the RGA is largely based on stochastic search techniques, the standard deviations in all evaluations of the test functions are higher in the RGA than in the LSQEA and QGA.

Figure 1 shows the convergence results on the four test functions obtained by the LSQEA, QGA, and RGA. The LSQEA requires fewer function calls to reach the best value and shows a sharper decline than the QGA and RGA; that is, the LSQEA converges faster than the QGA and RGA.
The computational experiments confirmed that, by using the systematic reasoning ability of Latin squares, a new individual generated by each matrix experiment is superior to its parents (two Q-bit individuals); that is, potential individuals in microspace can be exploited. In the micro Q-bit space, the systematic reasoning mechanism of the Latin square with the signal-to-noise ratio enhanced the performance of the LSQEA by accelerating convergence to the global solution. In the macro Q-bit space, quantum-inspired computing with the GA enhanced the performance of the LSQEA. Table 2 shows that the QGA outperformed the RGA, which indicates that quantum-inspired computing with the GA improves performance. Therefore, the LSQEA outperforms the QGA and RGA methods in both exploration and exploitation.
García et al. [54, 55] advocated the use of powerful nonparametric statistical tests to carry out multiple comparisons. Therefore, this study used the nonparametric Wilcoxon matched-pairs signed-rank test [56] for a multiple-problem analysis, comparing two algorithms over a set of problems simultaneously. Let d_i be the difference between the performance scores of the two algorithms on the i-th of N different runs. The differences are ranked according to their absolute values, and average ranks are assigned in case of ties. Let R+ be the sum of ranks for the runs on which the second algorithm outperformed the first, and let R− be the sum of ranks for the opposite case. Ranks of d_i = 0 are split evenly between the two sums; if there is an odd number of them, one is ignored.
Let T be the smaller of the two sums, T = min(R+, R−). If T is less than or equal to the critical value of the Wilcoxon distribution for N degrees of freedom, the null hypothesis of equality of means is rejected [54, 55]. Also, to calculate the significance of the test statistic T, the mean and standard error were defined as follows [57]: the mean is N(N + 1)/4 and the standard error is sqrt(N(N + 1)(2N + 1)/24), where N is the sample size. Therefore, z = (T − N(N + 1)/4)/sqrt(N(N + 1)(2N + 1)/24). If z is larger than 1.96 in absolute value, then the result is significant at p < 0.05.
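The statistic T and its normal-approximation z score can be computed in a few lines of pure Python. This sketch (function names ours) assumes no zero differences and assigns average ranks to tied absolute differences, as described above.

```python
import math

def wilcoxon_T(xs, ys):
    """Wilcoxon matched-pairs signed-rank statistic T = min(R+, R-),
    assuming no zero differences; tied |d| values get average ranks."""
    diffs = [y - x for x, y in zip(xs, ys)]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for t in range(i, j + 1):          # average rank over the tie group
            ranks[order[t]] = (i + j) / 2 + 1
        i = j + 1
    r_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    r_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(r_plus, r_minus)

def wilcoxon_z(T, n):
    """Normal approximation: z = (T - n(n+1)/4) / sqrt(n(n+1)(2n+1)/24)."""
    mean = n * (n + 1) / 4
    se = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return (T - mean) / se
```

For the paired samples [1, 2, 3, 4, 5] and [2, 1, 5, 7, 9], the two tied differences of magnitude 1 each receive rank 1.5, giving R− = 1.5 and T = 1.5.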
The sample data obtained from Table 2 include the best values, the mean function values, and the standard deviations of the function values. The Wilcoxon test results for LSQEA versus QGA, LSQEA versus RGA, and QGA versus RGA all indicate significant differences. Thus, there is a significant difference between the LSQEA and each of the two other algorithms, QGA and RGA; that is, by the Wilcoxon test, the performance of the LSQEA is significantly better than those of the QGA and RGA. There is also a significant difference between the QGA and RGA, and the performance of the QGA is superior to that of the RGA.
To evaluate the LSQEA on problems of relatively high dimensionality, two 100-dimensional test examples with domain −5 ≤ x_i ≤ 10 were minimized. To ensure a fair comparison of the performance of the LSQEA with that of two recently proposed algorithms, particle swarm optimization (PSO) [58] and the artificial immune algorithm (AIA) [59], this study used the same population size of 200 for 50 independent runs. For the first function, the results of the LSQEA in terms of mean function value (standard deviation) and mean function calls (standard deviation) are −92.830 (0) and 178347 (14362), respectively; for the second, they are 0.7 (0) and 60377 (3368). The corresponding results of the PSO are −92.825 (0.03) and 330772 (29516) on the first function and 0.752 (0.02) and 168736 (19325) on the second, while those of the AIA are −90.54 (0.93) and 346972 (29842) on the first function and 2.95 (1.29) and 178048 (75619) on the second. Figure 2 shows the convergence results on the two test functions obtained by the LSQEA, PSO, and AIA. The LSQEA requires fewer function calls to reach the best value and shows a sharper decline than the PSO and AIA; that is, the LSQEA converges faster than the PSO and AIA. In general, the performance of the LSQEA is superior to those of the PSO and AIA on the two examples. Additionally, the nonparametric Wilcoxon matched-pairs signed-rank test was used to compare the algorithms pairwise. The Wilcoxon z values are all 2.521 (p = 0.0117) for LSQEA versus PSO and LSQEA versus AIA, so the tests indicate a significant difference between the LSQEA and the two algorithms, PSO and AIA; that is, the LSQEA significantly outperforms the PSO and AIA. There is also a significant difference between the PSO and AIA, since the z value is 2.521 and the p value is 0.0117. The performance of the PSO is superior to that of the AIA in the two examples.
Hence, the performance improvement in the proposed LSQEA is achieved by using quantum-inspired computing and the systematic reasoning mechanism of the Latin square with the signal-to-noise ratio. Therefore, we conclude that the proposed LSQEA approach effectively solves the six nonlinear programming optimization problems with continuous variables. After this capability was confirmed, the LSQEA approach was evaluated for use in solving mixed discrete-continuous nonlinear design problems.
4.2. Solving Mixed Discrete-Continuous Nonlinear Design Problems
To evaluate the use of the LSQEA approach for solving mixed discrete-continuous nonlinear design problems in the engineering design field, this study applied the experimental design method reported in Chou et al. [53] for parameter adjustment in the different evolutionary environments.
Example 4.1 (compression coil spring design). Figure 3 shows the first example, the design of a compression coil spring under a constant load for minimum volume of material. The three design variables can be expressed as the vector X = (x_1, x_2, x_3), where x_1 is an integer representing the number of spring coils, x_2 is a discrete value representing the wire diameter according to the ASTM code, and x_3 is a continuous variable representing the winding diameter. As described by Sandgren [2], the objective and constraint equations can be mathematically derived as follows:
subject to
where
The assigned parameter values are given in [12].
The evolutionary environmental parameters applied in the computational experiments performed using the proposed LSQEA approach were a population size of 100, a crossover rate of 0.9, a mutation rate of 0.3, and a generation number of 100. The design optimization was performed in 30 independent runs. Table 3 shows that the computational results obtained by the proposed LSQEA approach are superior to those obtained by the methods developed by Sandgren [2] and by Rao and Xiong [12]. Another observed advantage is that, unlike the approach presented in Sandgren [2], the LSQEA can use arbitrary starting points, which enhances its versatility and effectiveness. Table 4 further shows that the robustness analysis of the LSQEA obtains a very small standard deviation over the 30 independent runs, which confirms its robustness for designing a compression coil spring.
Example 4.2 (pressure vessel design). Figure 4 shows a compressed air storage tank consisting of a cylindrical pressure vessel capped by hemispherical heads at both ends [2]. The vessel design problem is formulated according to the ASME boiler and pressure vessel code. The four design variables are the shell thickness, the spherical head thickness, the shell radius, and the shell length. The objective function minimizes the total cost, including the cost of the material and the cost of forming and welding the pressure vessel. The problem can be modeled as
where the two thicknesses are discrete values (integer multiples of 0.0625 inches), while the shell radius and the shell length are continuous variables.
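As a point of reference, the formulation of this benchmark most commonly cited in the literature (originating with Sandgren [2]) can be sketched as below. The cost coefficients and constraint constants are taken from that standard statement of the problem and may differ in detail from the equations in this paper.

```python
import math

def pressure_vessel_cost(ts, th, r, l):
    """Total cost of material, forming, and welding (standard formulation):
    ts = shell thickness, th = head thickness, r = radius, l = length (in)."""
    return (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)

def constraints(ts, th, r, l):
    """Constraint values; g <= 0 means feasible."""
    return [
        -ts + 0.0193 * r,                                        # min shell thickness
        -th + 0.00954 * r,                                       # min head thickness
        -math.pi * r**2 * l - 4.0 / 3.0 * math.pi * r**3 + 1_296_000,  # min volume
        l - 240.0,                                               # max length
    ]

def snap_to_plate(t, step=0.0625):
    """Thicknesses must be integer multiples of 0.0625 in (plate sizes)."""
    return max(step, round(t / step) * step)

ts, th, r, l = snap_to_plate(1.1), snap_to_plate(0.6), 56.0, 112.0
print(pressure_vessel_cost(ts, th, r, l), constraints(ts, th, r, l))
```

The `snap_to_plate` helper shows how the two discrete thickness variables can be handled inside an otherwise continuous search.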
In the computational experiments to evaluate the proposed LSQEA approach, the population size is 300, the crossover rate is 0.9, the mutation rate is 0.3, and the generation number is 200. The design optimization was performed in 30 independent runs. Table 5 compares the computational results obtained by the proposed LSQEA approach with those of the methods introduced by Sandgren [2], Fu et al. [7], Rao and Xiong [12], and Shih and Lai [60]; the comparison shows that the proposed LSQEA approach outperforms all four methods. Additionally, unlike those methods, the LSQEA approach can use arbitrary starting points, which enhances its versatility and effectiveness. Table 6 further shows the results of the robustness analysis of the LSQEA: the standard deviation obtained in the 30 independent runs is small, and the average value is better than the best values obtained by the methods reported in Sandgren [2], Fu et al. [7], Rao and Xiong [12], and Shih and Lai [60].
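The robustness statistics reported in the tables (best, mean, and standard deviation over 30 independent runs) can be computed with a harness like the following. The `run_once` body is a placeholder returning hypothetical cost values; a real harness would execute one full LSQEA run per seed.

```python
import random
import statistics

def run_once(seed: int) -> float:
    """Placeholder for one independent optimization run; returns the
    best objective value found. A real run would execute the LSQEA."""
    random.seed(seed)
    return 7006.0 + random.random()  # hypothetical cost values

results = [run_once(s) for s in range(30)]
best = min(results)
mean = statistics.mean(results)
std = statistics.pstdev(results)  # a small std indicates a robust method
print(f"best={best:.4f} mean={mean:.4f} std={std:.6f}")
```

Seeding each run independently keeps the 30 trials reproducible while still sampling different random initial populations.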
Example 4.3 (welded beam design). The objective in this example, which was given in Ragsdell and Phillips [61], is to find the structural welded beam design with the lowest cost (Figure 5). The considered constraints are weld stress, buckling load, beam deflection, and beam bending stress. The four design variables are the bar thickness, the bar breadth, the weld thickness, and the weld length. The major cost components of such a welded beam are the setup labor cost, the welding labor cost, and the material cost. The objective function can be described as
The following behavior constraints are considered [12].
(a) Upper bound of maximum shear stress on the weld:
with
(b) Upper bound of maximum normal stress on the beam:
with
(c) Lower bound of buckling load on the beam:
with
(d) Upper bound of end deflection DEL() on the beam:
with
Additionally, side constraints bounding each design variable are considered along with the following numerical data [12]:
Two of the design variables are considered discrete (integer multiples of 0.5), and the other two are considered integers.
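The cost objective and the weld shear stress used in constraint (a) are commonly stated as follows in the literature on this benchmark (after Ragsdell and Phillips [61]). This is a hedged sketch: the load `P` and the beam overhang length used below are the values typically paired with this problem and are assumptions here, as are the variable names.

```python
import math

P, L_OVERHANG = 6000.0, 14.0  # applied load (lb) and beam overhang (in), assumed

def cost(h, l, t, b):
    """Setup + welding labor cost plus material cost:
    h = weld thickness, l = weld length, t = bar thickness, b = bar breadth."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def weld_shear_stress(h, l, t):
    """Resultant shear stress in the weld group."""
    tau_p = P / (math.sqrt(2.0) * h * l)                # primary (direct) shear
    m = P * (L_OVERHANG + l / 2.0)                      # moment at weld centroid
    r = math.sqrt(l**2 / 4.0 + ((h + t) / 2.0) ** 2)    # centroid-to-weld distance
    j = 2.0 * math.sqrt(2.0) * h * l * (l**2 / 12.0 + ((h + t) / 2.0) ** 2)
    tau_pp = m * r / j                                  # secondary (torsional) shear
    return math.sqrt(tau_p**2 + tau_p * tau_pp * l / r + tau_pp**2)

print(cost(0.5, 1.5, 8.0, 0.5), weld_shear_stress(0.5, 1.5, 8.0))
```

Constraint (a) then requires the returned stress to stay below the allowable weld shear stress.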
The parameters used in the computational experiments performed to evaluate the proposed LSQEA approach are a population size of 10, a crossover rate of 0.9, a mutation rate of 0.3, and a generation number of 20. The design optimization was performed in 30 independent runs. The computational results obtained by the proposed LSQEA approach are comparable to those obtained by Rao and Xiong [12]; the minimum cost is 5.67334. Table 7 also shows the results of the robustness analysis of the LSQEA. The standard deviation of 0 obtained in the 30 independent runs indicates that the LSQEA finds the optimal solution in every run; that is, the LSQEA is a very robust and stable method for designing the welded beam. Additionally, the solution space in this welded beam design problem is small, since only four design variables are used and all are discrete (integer multiples or integers). Therefore, the LSQEA easily finds the optimal solution.

Example 4.4 (twenty-five-bar truss design). Figure 6 shows Example 4.4, a twenty-five-bar truss, a classic test case often used to evaluate optimization algorithms on both continuous and discrete structural optimization problems. Table 8 gives the two load conditions for this truss, which is designed under constraints on both member stress and Euler buckling stress. Since the truss is symmetrical, the member areas can be divided into eight groups; thus, eight independent cross-sectional areas are selected as continuous or discrete design variables. Three objective functions are considered in this example: (i) minimization of weight, (ii) minimization of deflection at nodes 1 and 2, and (iii) maximization of the fundamental natural frequency of vibration of the truss. The objective functions are as follows [12]:
where the objective functions involve the length of each member, the weight density of the material, the x, y, and z components of the deflection at each node, and the fundamental natural frequency of vibration. The constraints can be stated as
where the constrained quantity is the tension or compression stress in each member under each load condition; the allowable stress is set to 40,000 psi; the lower and upper bounds of each member area are set to 0.1 and 5.0, respectively; and the Euler buckling stress in each member is
in which E is the Young's modulus of the material.
If each member area is considered a discrete variable, it is restricted to a prescribed set of discrete values. The length of each bar is calculated according to the coordinates of the nodes of the truss. The nodal deflections, the fundamental natural vibration frequency, and the stress in each member under each load condition are obtained by finite-element analysis of the truss with the ANSYS CAE Toolbox [62].
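The grouping of the eight independent areas and the weight objective can be sketched as below. The weight density, the discrete area set (a uniform grid between the 0.1 and 5.0 bounds stated above), the member-to-group table, and the member lengths are illustrative assumptions, not values from the paper; the member stresses and frequencies would come from the finite-element analysis.

```python
RHO = 0.1  # weight density (assumed value, common for this benchmark)
# Uniform discrete grid over the stated bounds 0.1..5.0 (assumed spacing).
DISCRETE_AREAS = [round(0.1 + 0.1 * k, 1) for k in range(50)]

# member index -> group index (first few members shown, illustrative)
GROUP_OF_MEMBER = {0: 0, 1: 1, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2, 7: 2, 8: 2}

def truss_weight(group_areas, member_lengths):
    """W = sum over members of rho * A(group of member) * member length."""
    return sum(RHO * group_areas[GROUP_OF_MEMBER[m]] * length
               for m, length in member_lengths.items())

def snap_area(a):
    """Restrict a continuous area to the nearest allowed discrete value."""
    return min(DISCRETE_AREAS, key=lambda v: abs(v - a))

areas = [snap_area(a) for a in [0.37, 1.62, 2.9]]
print(areas, truss_weight(areas, {0: 75.0, 1: 130.5, 5: 106.6}))
```

Keeping one area per symmetry group (rather than one per member) is what reduces the problem to eight design variables.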
Table 9 shows the best continuous and discrete results obtained by the proposed LSQEA approach with the ANSYS CAE Toolbox, while Table 10 shows the best results obtained by the Rao and Xiong [12] approach using the same toolbox. The comparison of the results in Tables 9 and 10 confirms that the proposed LSQEA approach provides better results for each main objective function than the method developed by Rao and Xiong [12]. Moreover, for each specified design objective function, the other two objectives are also better than those reported in Rao and Xiong [12]. An interesting question is why the discrete results given in Rao and Xiong [12] achieve a higher optimal natural frequency than the continuous results (Table 10); intuitively, the solutions found in a continuous space should be at least as good. That is, for the case in which the natural frequency is maximized, the results presented by Rao and Xiong [12] are inapplicable. Another issue is that the three individual objective functions are apparently interdependent: reducing the weight of the truss increases the deflection and decreases the fundamental natural frequency, and vice versa. Although reaching the global optimum is possible in single-objective optimization, it is reached at the expense of performance in the other two objectives. Therefore, a reasonable design specification is an essential consideration for a practical engineering application. If the defined objective is maximizing the fundamental natural frequency of vibration of the truss subject to limited deflections at nodes 1 and 2, reasonable results are obtained. In this example, the results of the third case (maximizing the fundamental natural frequency of vibration of the truss) show these design characteristics.
In summary, the above results confirm that the LSQEA approach obtains robust and stable results. Therefore, we conclude that the LSQEA is highly feasible for solving the four examples, which are mixed discrete-continuous nonlinear design optimization problems.

