Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 836597, 27 pages
http://dx.doi.org/10.1155/2012/836597
Research Article

Improved Quantum-Inspired Evolutionary Algorithm for Engineering Design Optimization

1Department of Computer Science, National Pingtung University of Education, 4-18 Min-Sheng Road, Pingtung 900, Taiwan
2Institute of System Information and Control, National Kaohsiung First University of Science and Technology, 1 University Road, Yenchao, Kaohsiung 824, Taiwan
3Department of Electrical Engineering, National Kaohsiung University of Applied Sciences, 415 Chien-Kung Road, Kaohsiung 807, Taiwan
4Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, 100 Shi-Chuan 1st Road, Kaohsiung 807, Taiwan

Received 31 August 2012; Revised 26 October 2012; Accepted 31 October 2012

Academic Editor: Jung-Fa Tsai

Copyright © 2012 Jinn-Tsong Tsai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An improved quantum-inspired evolutionary algorithm is proposed for solving mixed discrete-continuous nonlinear problems in engineering design. The proposed Latin square quantum-inspired evolutionary algorithm (LSQEA) combines Latin squares and the quantum-inspired genetic algorithm (QGA). The novel contribution of the proposed LSQEA is the use of a QGA to explore the optimal feasible region in macrospace and the use of the systematic reasoning mechanism of the Latin square to exploit better solutions in microspace. By combining the advantages of exploration and exploitation, the LSQEA provides higher computational efficiency and robustness compared to QGA and real-coded GA when solving global numerical optimization problems with continuous variables. Additionally, the proposed LSQEA approach effectively solves mixed discrete-continuous nonlinear design optimization problems in which the design variables are integers, discrete values, and continuous values. The computational experiments show that the proposed LSQEA approach obtains better results compared to existing methods reported in the literature.

1. Introduction

Solving engineering design optimization problems usually requires consideration of many different types of design variables and many constraints. Practical problems in engineering design often involve a mix of integer, discrete, and continuous variables. These constraints are often problematic during the engineering design optimization process. Since the 1960s, researchers have attempted to solve this problem, which is known as the mixed discrete nonlinear programming (MDNLP) problem. One of the most effective solutions reported so far is a nonlinear branch and bound method (BBM) for solving nonlinear and discrete programming in mechanical design optimization [1, 2]. In the BBM, however, subproblems result from partitioning the feasible domain to obtain solutions by ignoring discrete conditions, and the number of times the problem needs to be resolved increases exponentially with the number of variables [3]. A better method, sequential linear programming (SLP), was developed by Bremicker et al. [4] and by Loh and Papalambros [5] to solve general MDNLP problems. The linearized discrete problem is solved by the simplex method to obtain information at each node of the tree. Their SLP approach is compared with the pure BBM, in which sequential quadratic programming is used to solve the nonlinear problem at each node. The study shows the SLP method to be superior to the pure BBM. Other approaches to solving MDNLP problems include the penalty function approach [6–8] and the Lagrangian relaxation approach [9]. The penalty function approach treats the requirement of discreteness by defining additional constraints and constructing a penalty function for them. This term imposes a penalty for deviations from the discrete values. The difficulties with a penalty approach are the introduction of additional local minima and the repeated minimizations needed to adjust the penalty parameters [3].
The Lagrangian relaxation method is similar to the penalty function method. The main difference is that the additional terms due to discrete variables are added to a Lagrangian function instead of a penalty function. The Lagrangian relaxation approach does not guarantee finding a global solution, even if the problem is convex before discrete variables are introduced. Some of the methods for discrete variable optimization use the structure of the problem to speed up the search for the discrete solution; these methods are not suitable for implementation in general-purpose applications. The BBM is the most general of these methods; however, it is time-consuming. In recent years, the focus has shifted to applications of soft-computing optimization techniques that naturally handle mixed discrete and continuous variables when solving practical engineering problems. These approaches include genetic algorithms (GAs) [10–18], simulated annealing [19], differential evolution [20, 21], and the evolutionary programming approach [22]. The major challenge when solving MDNLP problems is that numerous local optima can cause the methods to become trapped in local optima of the objective functions [12]. Therefore, an efficient and robust algorithm is needed to solve mixed discrete-continuous nonlinear design optimization problems in the engineering design field.

In the past decade, the emerging field of quantum-inspired computing has motivated intensive studies of algorithms such as the Shor factorizing algorithm [23] and the Grover quantum search algorithm [24, 25]. By applying quantum mechanical principles such as quantum-bit representation and superposition of states, quantum-inspired computing can process huge numbers of quantum states simultaneously and in parallel. To introduce strong parallelism into the evolutionary algorithm, Han et al. [26] and Han and Kim [27–29] proposed the quantum-inspired genetic algorithm (QGA). For solving combinatorial optimization problems, QGA has proven superior to conventional GAs. Malossini et al. [30] showed that, by taking advantage of quantum phenomena, QGA improves the speed and efficiency of genetic procedures. Quantum-inspired evolutionary algorithms have also been used to solve optimization problems such as partition function calculation [31], nonlinear blind source separation [32], filter design [33], numerical optimization [34–36], hyperspectral anomaly detection [37], multiple sequence alignment [38], thermal process identification [39], and multiobjective optimization [40]. However, the performance of the simple QGA is often unsatisfactory, and it is easily trapped in local optima, which results in premature convergence. That is, the quantum-inspired bit (Q-bit) search with its quantum mechanism must be well coordinated with the genetic search with its evolution mechanism, and the exploration and exploitation behaviors must also be well balanced [35]. Therefore, a major challenge is to improve the exploration and exploitation capabilities of QGA and to develop an efficient and robust algorithm.

The efficient and robust Latin square quantum-inspired evolutionary algorithm (LSQEA) proposed in this study solves global numerical optimization problems with continuous variables and mixed discrete-continuous nonlinear design optimization problems. The LSQEA approach integrates Latin squares [41–43] and QGA (i.e., the quantum-inspired individual and mechanism with GAs). The concept of using QGA came from the works of Han et al. [26] and Han and Kim [27–29], while the development steps were implemented by the authors and are shown in Section 3. The role of the Latin square is to generate better individuals through Latin square-based recombination: the systematic reasoning ability of the Latin square of the Taguchi method is incorporated into the recombination operation to select better Q-bits. This role is important for improving the efficiency of the crossover operation in generating representative individuals and better-fit trial individuals. The Latin square is applied to recombine the better Q-bits so that potential individuals in microspace can be exploited, while the QGA is used to explore the optimal feasible region in macrospace. Therefore, the LSQEA approach is highly robust and achieves quick convergence.

The paper is organized as follows. Section 2 gives the problem statements. The LSQEA for solving the mixed discrete-continuous nonlinear design optimization problems is described in Section 3. In Section 4, the proposed LSQEA approach is compared with QGA and real-coded GA (RGA) [44–47] in terms of performance in solving global numerical optimization problems with continuous variables. The LSQEA approach is then used to solve mixed discrete-continuous nonlinear design problems encountered in the engineering design field, and the results obtained by LSQEA are compared with those obtained by existing methods reported in the literature. Finally, Section 5 concludes the study.

2. Problem Statements

This section states the considered problems, which include a global numerical optimization problem with continuous variables and a mixed discrete-continuous nonlinear programming problem.

The following global numerical optimization problem with continuous variables is considered:

minimize f(x), x = (x1, x2, …, xD),
subject to gj(x) ≤ 0, j = 1, 2, …, J,
hk(x) = 0, k = 1, 2, …, K,

where x is a continuous variable vector, f(x) is an objective function, and gj(x) and hk(x) define the feasible solution vector spaces. The domain of x is denoted by [xl, xu], and the feasible solution space is defined by [xl, xu] together with the constraints. For this problem, gj(x) and hk(x) are the constraint functions. Although many design problems can be cast as the above optimization problem, efficiently obtaining the optimal solution is difficult because the problem involves designs that are high-dimensional, nondifferentiable, and multimodal [35].

The mixed discrete-continuous nonlinear programming problem is expressed as follows [12]:

minimize f(X), X = (X1, X2, …, Xn),
subject to gj(X) ≤ 0, j = 1, 2, …, J,

where X1, …, Xp are nonnegative discrete variables with permissible values equally spaced; Xp+1, …, Xq are nonnegative discrete variables with permissible values unequally spaced; and Xq+1, …, Xn are nonnegative continuous variables. Practical optimization problems encountered in the engineering design field often have many constraints and require consideration of different types of design variables, as shown in (2.2). Again, because these problems involve high-dimensional, nondifferentiable, and multimodal properties, an effective algorithm is needed to solve them optimally and efficiently. According to the above statements, the problem in (2.2) is a special case of the problem in (2.1).

For the mixed discrete-continuous design problem in (2.2), the discrete variables with equal spacing (i.e., values in arithmetic progression) are Xi = li + ji di, i = 1, 2, …, p, where li is the lower bound of Xi; ji is the natural number corresponding to Xi; di is the discrete increment of the discrete variable; p is the number of discrete variables with equal spacing. Let Xi = Vi(ji) for the discrete variables with unequal spacing, where ji, i = p + 1, …, q, denotes the natural number corresponding to Xi; Xi is generated from Vi; Vi(ji) denotes the ji-th element of the vector Vi; Vi represents the vector of values of the discrete variable with unequal spacing; and qi is the maximum permissible number of discrete values for the discrete variable with unequal spacing.
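Decoding a discrete design variable from its natural-number index can be sketched as follows; this is an illustrative helper, not code from the paper, and the function names are hypothetical:

```python
def decode_equal(lower, increment, j):
    """Equally spaced discrete variable: value = lower + j * increment."""
    return lower + j * increment

def decode_unequal(values, j):
    """Unequally spaced discrete variable: look up the j-th permissible value."""
    return values[j]
```

For example, a spring wire diameter restricted to an ASTM table would use `decode_unequal`, while a coil count restricted to integers with step 1 would use `decode_equal`.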

3. The LSQEA Approach for Solving Mixed Discrete-Continuous Design Problems

This section describes the details of the LSQEA approach for solving mixed discrete-continuous nonlinear programming problems.

3.1. Q-Bit Representation

In quantum-inspired computing, the smallest unit of information stored in a two-state quantum computer is called a quantum bit. It may be in the “0” state, the “1” state, or any superposition of the two. The state of a quantum bit can be represented as

|ψ⟩ = α|0⟩ + β|1⟩,

where α and β are complex numbers that describe the probability amplitudes of the corresponding states. The values |α|² and |β|² are the probabilities of the quantum bit being in the “0” state and the “1” state, respectively, such that |α|² + |β|² = 1.

The use of a Q-bit to represent an individual in this study was inspired by quantum computing concepts. The advantage of the representation is the capability to use the linear superposition of states to generate any possible solution. A Q-bit individual can be represented by a string of m Q-bits,

[α1 | α2 | … | αm]
[β1 | β2 | … | βm],

where |αi|² + |βi|² = 1 for i = 1, 2, …, m. Since Q-bits represent a linear superposition of states, a Q-bit representation provides better population diversity compared to other representations used in evolutionary computing. For example, consider the following three-Q-bit system with three pairs of amplitudes,

[1/√2 | 1/√2 | 1/2 ]
[1/√2 | 1/√2 | √3/2].

The state of the system can be represented as

(1/4)|000⟩ + (√3/4)|001⟩ + (1/4)|010⟩ + (√3/4)|011⟩ + (1/4)|100⟩ + (√3/4)|101⟩ + (1/4)|110⟩ + (√3/4)|111⟩.

This result means that the probabilities of the eight states are 1/16, 3/16, 1/16, 3/16, 1/16, 3/16, 1/16, and 3/16, respectively. Consequently, the above three-Q-bit system contains the information of eight states.
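As an illustrative sketch (not code from the paper), the amplitude pairs below are one choice consistent with the probabilities 1/16 and 3/16 quoted in the three-Q-bit example; enumerating the basis states shows how three Q-bits carry the information of eight states:

```python
from itertools import product

# One choice of amplitude pairs (alpha, beta) per Q-bit, each satisfying
# |alpha|^2 + |beta|^2 = 1, consistent with the 1/16 and 3/16 probabilities.
qbits = [(2 ** -0.5, 2 ** -0.5), (2 ** -0.5, 2 ** -0.5), (0.5, 3 ** 0.5 / 2)]

def state_probabilities(qbits):
    """Probability of each basis state: product of |alpha|^2 or |beta|^2."""
    probs = {}
    for bits in product((0, 1), repeat=len(qbits)):
        p = 1.0
        for (alpha, beta), b in zip(qbits, bits):
            p *= (beta if b else alpha) ** 2
        probs["".join(map(str, bits))] = p
    return probs
```

The eight probabilities sum to 1, illustrating that a single Q-bit individual encodes a distribution over all eight classical bit strings.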

3.2. Initial Population

The initialization procedure produces N Q-bit individuals, where N denotes the population size.

3.3. Crossover Operation

The crossover operator used is the one-cut-point operator, which randomly determines one cut point and exchanges the Q-bits of two parents to the right of the cut point to generate new offspring. For example, in a four-Q-bit system in which the second position is selected as the cut point, the third and fourth amplitude pairs of the two parents are exchanged to produce two new offspring.
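A minimal sketch of the one-cut-point operator, with Q-bit strings represented as lists of (alpha, beta) pairs (a representation assumed here for illustration):

```python
import random

def one_cut_point_crossover(parent1, parent2, cut=None):
    """Exchange the parts of two Q-bit strings to the right of one cut point."""
    if cut is None:
        cut = random.randint(1, len(parent1) - 1)  # cut between loci
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2
```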

3.4. Mutation Operation

Mutation of Q-bits is performed by randomly determining one position (e.g., position j) and then exchanging the corresponding αj and βj. For example, in a four-Q-bit system in which the second position is selected for mutation, α2 and β2 are swapped, which inverts the probabilities of observing “0” and “1” at that position.
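The mutation operator can be sketched as follows (same illustrative list-of-pairs representation as before, not the authors' code):

```python
import random

def qbit_mutation(individual, position=None):
    """Swap alpha and beta at one locus, inverting P("0") and P("1") there."""
    if position is None:
        position = random.randrange(len(individual))
    mutated = list(individual)
    alpha, beta = mutated[position]
    mutated[position] = (beta, alpha)
    return mutated
```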

3.5. Q-Bit Rotation Operation

The purpose of a rotation gate is to update a Q-bit individual by rotating the Q-bit toward the direction of the corresponding Q-bit to obtain a better value. The pair (αi, βi) of the i-th Q-bit is updated as follows:

[αi'] = [cos θi  −sin θi] [αi]
[βi']   [sin θi   cos θi] [βi],

where θi is the rotation angle.

For example, when a Q-bit is rotated toward the “1” state by the rotation operation, the probability |β|² becomes larger and the probability |α|² becomes smaller.
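A minimal sketch of the rotation-gate update, assuming the standard 2×2 rotation matrix form:

```python
import math

def rotate_qbit(alpha, beta, theta):
    """Apply a rotation gate of angle theta to one Q-bit (alpha, beta)."""
    new_alpha = math.cos(theta) * alpha - math.sin(theta) * beta
    new_beta = math.sin(theta) * alpha + math.cos(theta) * beta
    return new_alpha, new_beta
```

Because the rotation matrix is orthogonal, the normalization |α|² + |β|² = 1 is preserved; a positive angle shifts probability mass toward the “1” state when α and β are both positive.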

3.6. Penalty Function

When using an evolutionary method to solve a constrained optimization problem, a penalty function is used to relax the constraints by penalizing the infeasible individuals in the population. This method improves the probability of approaching a feasible region of the search space by navigating through infeasible regions and by reducing the penalty as a feasible region is approached. To clarify this point, it is important to distinguish between feasible and infeasible individuals. The infeasible individuals violate constraints gk, k = 1, 2, …, K, where K is the number of design constraints. The greater the violation is, the larger the penalty should be. Given these considerations, the penalty value P is defined as follows:

P = Σ_{k=1}^{K} wk δk Δk,

where gk is a value computed from the k-th constraint function when the values of the design variables are determined; gkU and gkL are the upper and lower bounds, respectively, of the constraint function; δk is a value distinguishing the feasible from the infeasible individuals; and the wk denote the weights. Additionally, if gkL ≤ gk ≤ gkU, then δk = 0 and Δk = 0; if gk > gkU, then δk = M and Δk = gk − gkU; and if gk < gkL, then δk = M and Δk = gkL − gk. Equation (3.10) generally requires that each value gk calculated for the constraint function be limited to its upper and lower bounds. If the value is located within the feasible region, the value is not penalized. Otherwise, the value is penalized by being multiplied by a large number M. The penalty value equals 0 when the optimization process is complete, since the values of the design variables no longer violate the design constraints.
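The penalty idea can be sketched as follows; the notation (per-constraint bounds, weights, and a large multiplier `M`) is assumed for illustration and is not taken verbatim from the paper:

```python
M = 1e6  # large multiplier applied to constraint violations (assumed value)

def penalty(constraint_values, bounds, weights):
    """Sum weighted, M-scaled violations of each constraint's [lo, hi] range."""
    total = 0.0
    for g, (lo, hi), w in zip(constraint_values, bounds, weights):
        if g > hi:
            total += w * M * (g - hi)
        elif g < lo:
            total += w * M * (lo - g)
        # within [lo, hi]: feasible, no penalty added
    return total
```

An individual whose constraint values all lie within their bounds receives a penalty of 0, so the penalized fitness coincides with the true objective at feasibility.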

3.7. Latin Square

The Latin square experimental design method screens for the important factors that affect product performance. Therefore, it can be used to study a large number of decision variables with a small number of experiments. The design variables (parameters) are called factors, and the parameter settings are called levels. The name Latin square originates from Leonhard Euler, who used Latin characters as symbols. Details regarding the experimental design method can be found in texts by Phadke [41], Montgomery [42], and Park [43]. For an orthogonal array, matrix experiments are conducted by randomly choosing two individuals from the Q-bit population pool. Each factor of one Q-bit individual is designated level 1, and each factor of the other Q-bit individual is designated level 2. The two-level orthogonal array of Latin squares applied here is Ln(2^(n−1)). Additionally, each of the F design factors has two levels. To establish a two-level orthogonal array of F factors, let n − 1 represent the number of columns, with individual experiments corresponding to the n rows, where n = 2^k, k is a positive integer, and n − 1 ≥ F. If F < n − 1, only the first F columns are used while the other columns are ignored. For example, if each of six factors has two levels, only six columns are needed to allocate these factors. In this case, L8(2^7) is sufficient for this purpose because it has seven columns.

Better combinations of decision variables are also determined by integrating the orthogonal array of the Latin square and the signal-to-noise ratio of the Taguchi method. The concept of the Taguchi method is to maximize signal-to-noise ratios, used as performance measures, by using the orthogonal array to run a partial set of experiments. The signal-to-noise ratio refers to the mean-square deviation in the objective function. For cases of the larger-the-better characteristic and the smaller-the-better characteristic, Taguchi defined the signal-to-noise ratio η, expressed in decibels, as

η = −10 log10((1/k) Σ_{i=1}^{k} 1/yi²)  (larger-the-better)

and

η = −10 log10((1/k) Σ_{i=1}^{k} yi²)  (smaller-the-better),

respectively, where y1, y2, …, yk denote a set of characteristics. Further details can be found in works by Phadke [41], Montgomery [42], and Park [43].
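As a concrete sketch of the two decibel-scale ratios (standard Taguchi definitions, not code from the paper):

```python
import math

def snr_larger_the_better(ys):
    """eta = -10 log10( (1/k) * sum(1/y_i^2) ), in decibels."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def snr_smaller_the_better(ys):
    """eta = -10 log10( (1/k) * sum(y_i^2) ), in decibels."""
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))
```

For a single observation y = 10, the larger-the-better ratio is +20 dB and the smaller-the-better ratio is −20 dB, matching the intuition that large values are rewarded by the first measure and punished by the second.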

If only the relative degree of η in the orthogonal array experiments is of interest, the previous equations can be simplified as ηt = yt² if the objective function is to be maximized (larger-the-better) and as ηt = 1/yt² if the objective function is to be minimized (smaller-the-better). Let yt denote the evaluation value of the objective function of experiment t, where t = 1, 2, …, n, and n is the number of orthogonal array experiments. The effects of the various factors (variables or individuals) can be defined as follows:

Efl = sum of ηt over all experiments t in which factor f is set at level l,

where n is the number of experiments, f is the factor name or number, and l is the level number.

The main objective of the matrix experiments is to choose a new Q-bit individual from the two Q-bit individuals at each locus (factor). At each locus (factor), the Q-bit whose level has the higher effect Efl in the experimental region is chosen. That is, the objective is to determine the best level for each factor. The best level for a factor is the level that maximizes the value of Efl in the experimental region. For the two-level problem, if Ef1 > Ef2, the better level is level 1 for factor f. Otherwise, level 2 is better. After the best level is determined for each factor, the best levels are combined to obtain the new individual. Therefore, the systematic reasoning ability of the orthogonal array of the Latin square combined with the signal-to-noise ratio of the Taguchi method ensures that the new Q-bit individual has the best or close-to-best evaluation value of the objective function among the 2^F combinations of factor levels, where 2^F is the total number of experiments needed for all combinations of factor levels.

For the matrix experiments of an orthogonal array of Latin squares, the generation of better individuals requires the random selection of two Q-bit individuals at a time from the Q-bit population pool generated by the one-cut-point crossover operation. A new individual generated by each matrix experiment is superior to its parents because of the systematic reasoning ability of an orthogonal array of Latin squares and the algorithm below [46]. The two individuals recombine the better Q-bits into a better-fit individual, so that potential individuals in microspace can be exploited. The detailed steps for each matrix experiment are described as follows.

Algorithm

Step 1. Let t = 1. Generate two sets S1 and S2, each of which has F design factors (variables). From the first F columns of the orthogonal array Ln(2^(n−1)), allocate the F design factors, where n − 1 ≥ F.

Step 2. Designate sets S1 and S2 as level 1 and level 2, respectively, by using a uniformly distributed random method to choose two Q-bit individuals from the Q-bit population pool generated by the crossover operation.

Step 3. Assign the level 1 values obtained from S1 and the level 2 values obtained from S2 to the level cells of experiment t in the orthogonal array.

Step 4. Calculate the fitness value yt and the signal-to-noise ratio ηt for the new individual.

Step 5. If t = n, then go to Step 6. Otherwise, let t = t + 1 and repeat Steps 3–5.

Step 6. Calculate the effects Ef1 and Ef2 of the various factors, where f = 1, 2, …, F.

Step 7. The Q-bit at locus f of the new Q-bit individual is obtained from S1 if Ef1 > Ef2. Otherwise, it is obtained from S2, where f = 1, 2, …, F. Implementing this process for each Q-bit at each locus then yields the new Q-bit individual.
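The matrix-experiment steps above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses the genuine L4(2^3) orthogonal array for a three-factor case, the simplified larger-the-better ratio η = y², and treats each "factor" as a directly evaluable gene rather than a Q-bit pair that must first be observed.

```python
# L4(2^3) two-level orthogonal array: rows are experiments, columns are
# factor levels (1 = take the value from parent s1, 2 = from parent s2).
L4 = [[1, 1, 1], [1, 2, 2], [2, 1, 2], [2, 2, 1]]

def matrix_experiment(s1, s2, fitness):
    """Steps 1-7: build trial individuals from the array, accumulate the
    per-level effects E_f1 / E_f2, and keep the better level per factor."""
    n_factors = len(s1)
    effects = [{1: 0.0, 2: 0.0} for _ in range(n_factors)]
    for row in L4:
        trial = [s1[f] if row[f] == 1 else s2[f] for f in range(n_factors)]
        eta = fitness(trial) ** 2            # larger-the-better SNR: y^2
        for f in range(n_factors):
            effects[f][row[f]] += eta
    return [s1[f] if effects[f][1] > effects[f][2] else s2[f]
            for f in range(n_factors)]
```

Note that only 4 of the 2³ = 8 level combinations are evaluated, yet the per-factor effect sums still identify the better parent at every locus.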

3.8. Steps of LSQEA

The LSQEA approach is a method of integrating Latin squares and QGA. The Latin square method is performed between the one-cut-point crossover operation and the mutation operation. For a constrained problem, the penalty function is included when the fitness value is calculated. The steps of the LSQEA approach are described as follows.

Step 1. Set parameters, including population size N, crossover rate pc, mutation rate pm, and number of generations.

Step 2. Generate an initial Q-bit population, and calculate the fitness values for the population.

Step 3. Perform the selection operation by the roulette wheel approach [44].

Step 4. Perform the one-cut-point crossover operation for each Q-bit individual. Select Q-bit individuals for crossover according to crossover rate pc.

Step 5. Perform matrix experiments for the Latin squares method, and use signal-to-noise ratios to generate better offspring.

Step 6. Repeat Step 5 until the loop number has been met.

Step 7. Generate the Q-bit population via the Latin squares method.

Step 8. Perform the mutation operation in the Q-bit population. Select Q-bits for mutation according to mutation rate pm.

Step 9. Except for the best individual, select Q-bit individuals for the Q-bit rotation operation.

Step 10. Generate the new Q-bit population.

Step 11. Has the stopping criterion been met? If so, then go to Step 12. Otherwise, repeat Step 3 to Step 11.

Step 12. Display the best individual and fitness value.
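The steps above can be sketched as a skeletal quantum-inspired evolutionary loop. This is a structural illustration only (not the authors' code): it shows the Q-bit observation and rotation core on a toy OneMax objective (maximize the number of 1 bits); the selection, crossover, mutation, and Latin-square steps of the full LSQEA would slot in between, and the rotation-angle step Δθ is an assumed value.

```python
import math, random

random.seed(0)
N, M, GENS = 10, 16, 60                  # population, Q-bits, generations
DTHETA = 0.05 * math.pi                  # rotation-angle step (assumed)

def observe(ind):
    """Collapse each Q-bit to 0 or 1, with probability |beta|^2 of a 1."""
    return [1 if random.random() < beta ** 2 else 0 for _, beta in ind]

def rotate(ind, bits, best_bits):
    """Rotate each Q-bit toward the corresponding bit of the best individual."""
    out = []
    for (a, b), x, bx in zip(ind, bits, best_bits):
        theta = 0.0 if x == bx else (DTHETA if bx == 1 else -DTHETA)
        out.append((math.cos(theta) * a - math.sin(theta) * b,
                    math.sin(theta) * a + math.cos(theta) * b))
    return out

# Step 2: initial population, every Q-bit in the uniform superposition.
pop = [[(2 ** -0.5, 2 ** -0.5)] * M for _ in range(N)]
best_bits, best_fit = None, -1
for _ in range(GENS):                    # Steps 3-11 (simplified)
    for i, ind in enumerate(pop):
        bits = observe(ind)
        fit = sum(bits)                  # OneMax fitness
        if fit > best_fit:
            best_fit, best_bits = fit, bits
        pop[i] = rotate(ind, bits, best_bits)   # Step 9
```

Over the generations, the amplitudes drift toward the best observed bit string, so the best fitness typically approaches M.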

4. Design Examples and Comparisons

This section first describes the performance evaluation results for the proposed LSQEA approach. The performance of the LSQEA is then compared with those of the QGA and RGA methods in solving nonlinear programming optimization problems with continuous variables. Finally, the LSQEA approach is used to solve mixed discrete-continuous nonlinear design problems in the engineering design field, and its solutions are compared with those of other methods reported in the literature.

4.1. Solving Nonlinear Programming Optimization Problems with Continuous Variables

For performance evaluation, the proposed LSQEA approach was used to solve the nonlinear programming optimization problems shown in Table 1 [48–50]. The test functions included quadratic, linear, polynomial, and nonlinear forms. The constraints of these four test functions were linear and nonlinear inequalities, and their dimensions were 13, 8, 7, and 10, respectively. The penalty function of (3.10) was used to handle the linear and nonlinear inequality constraints during optimization. Therefore, the test functions had sufficient local minima to provide challenging problems for the purpose of performance evaluation. To identify any performance improvements obtained by application of Latin square and quantum computing-inspired concepts, the QGA and RGA approaches were also used to solve the test functions.

tab1
Table 1: Test functions.

Optimizing the main parameters in evolutionary environments continues to be an area of active research in this field. Studies have shown how the performance of a GA can be improved by modifying its main parameters [51, 52]. For example, Chou et al. [53] applied an experimental design method to improve the performance of a GA by optimizing its evolutionary parameters. Therefore, this study adjusted evolutionary parameters by using the same experimental design method applied in Chou et al. [53]. The evolutionary environments used for experimental computation by the LSQEA, QGA, and RGA approaches were as follows. For all four test functions, the population size was 300, the crossover rate was 0.9, and the mutation rate was 0.1. For three of the test functions, the stopping criterion for all methods was 540,000 function calls. For the remaining test function, however, the stopping criterion was set to only 300,000 function calls because it approached the optimal value fastest. Additionally, each test function was run in 30 independent runs, and data collection included the best value, the mean function value, and the standard deviation of the function values.

Table 1 shows that the test functions involved 13, 8, 7, or 10 variables (factors), which required 13, 8, 7, or 10 columns, respectively, to allocate them in the Latin square used in the LSQEA approach. The L8(2^7) Latin square was used for 7 variables because it has 7 columns. The L16(2^15) Latin square was used for 13, 8, or 10 variables because it has 15 columns. In those cases, the first 13, 8, or 10 columns were used, whereas the remaining 2, 7, or 5 columns, respectively, were ignored. The computational procedures and evolutionary environments used to solve the test functions by the QGA and RGA approaches were the same as those used in the LSQEA approach. However, the Latin square was not used in the QGA and RGA, and quantum-inspired computing was not used in the RGA.

In Table 2, the comparison of the results obtained by the LSQEA, QGA, and RGA approaches reveals the following.
(1) The proposed LSQEA finds optimal or near-optimal solutions.
(2) For all test functions, the LSQEA solutions are closer to optimal than the QGA and RGA solutions.
(3) For all test functions, the deviations in function values are smaller in the proposed LSQEA than in the QGA and RGA. That is, the proposed LSQEA has a relatively more stable solution quality. Since the RGA is largely based on stochastic search techniques, the standard deviations in all evaluations of the test functions are higher in the RGA than in the LSQEA and QGA.

tab2
Table 2: Results of performance comparisons of LSQEA, QGA, and RGA.

Figure 1 shows the convergence results on the four test functions obtained by using the LSQEA, QGA, and RGA. The LSQEA requires fewer function calls to reach the best value and has a sharper decline than the QGA and RGA. That is, the LSQEA has a faster convergence speed than the QGA and RGA.

fig1
Figure 1: Convergence results on the four test functions obtained by using the LSQEA, QGA, and RGA.

The computational experiments confirmed that, by using the systematic reasoning ability of Latin squares, a new individual generated by each matrix experiment is superior to its parents, the two Q-bit individuals. That is, potential individuals in microspace can be exploited. In micro Q-bit space, the systematic reasoning mechanism of the Latin square with the signal-to-noise ratio enhanced the performance of the LSQEA by accelerating convergence to the global solution. In macro Q-bit space, quantum-inspired computing with the GA enhanced the performance of the LSQEA. Table 2 shows that the QGA outperformed the RGA, which indicates that quantum-inspired computing with the GA improves the performance of the QGA. Therefore, the LSQEA outperforms the QGA and RGA methods in both exploration and exploitation.

García et al. [54, 55] confirmed the use of the most powerful nonparametric statistical tests to carry out multiple comparisons. Therefore, this study used the nonparametric Wilcoxon matched-pairs signed-rank test [56] for a multiple-problem analysis that compares two algorithms over a set of problems simultaneously. Let di be the difference between the performance scores of the two algorithms on the i-th of n different runs. The differences are ranked according to their absolute values, and average ranks are assigned in case of ties. Let R+ be the sum of ranks for the runs on which the second algorithm outperformed the first, and let R− be the sum of ranks for the opposite. Ranks of di = 0 are split evenly among the sums, and if there is an odd number of them, one is ignored.

Let T be the smaller of the two sums, T = min(R+, R−). If T is less than or equal to the critical value of the Wilcoxon distribution for n degrees of freedom, the null hypothesis of equality of means is rejected [54, 55]. Also, to calculate the significance of the test statistic T, the mean and standard error were defined as follows [57]:

μT = n(n + 1)/4,
σT = √(n(n + 1)(2n + 1)/24),

where n is the sample size. Therefore, z = (T − μT)/σT. If z is bigger than 1.96 (ignoring the minus sign), then the result is significant at p < 0.05.
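The normal-approximation computation can be sketched as follows (standard formulas; the helper name is illustrative):

```python
import math

def wilcoxon_z(T, n):
    """z statistic for the Wilcoxon signed-rank sum T over n paired samples."""
    mu = n * (n + 1) / 4.0                              # mean of T under H0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0) # standard error of T
    return (T - mu) / sigma
```

For example, with n = 10 and T = 8, z is about −1.99, so |z| > 1.96 and the difference is significant at the 0.05 level.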

The sample data obtained from Table 2 include the best values, mean function values, and standard deviations of the mean function values. The p values for LSQEA versus QGA, LSQEA versus RGA, and QGA versus RGA all indicate significance. Thus, the tests show that there is a significant difference between the LSQEA and the two algorithms, QGA and RGA. That is, the performance of the LSQEA truly outperforms those of the QGA and RGA according to the Wilcoxon test. There is also a significant difference between the QGA and RGA; the performance of the QGA is superior to that of the RGA.

To evaluate the LSQEA on problems of relatively larger dimensionality, two 100-dimensional test examples were minimized, with variable bounds including −5 ≤ xi ≤ 10. To ensure a fair comparison of the performance of the LSQEA with that of recently proposed algorithms, namely particle swarm optimization (PSO) [58] and the artificial immune algorithm (AIA) [59], the study used the same population size of 200 for 50 independent runs. The results of the LSQEA on the first example in terms of mean function value (standard deviation) and mean function calls (standard deviation) are −92.830 (0) and 178347 (14362), respectively, and on the second example are 0.7 (0) and 60377 (3368), respectively. The results of PSO on the first example are −92.825 (0.03) and 330772 (29516), respectively, and on the second example are 0.752 (0.02) and 168736 (19325), respectively, while the results of AIA on the first example are −90.54 (0.93) and 346972 (29842), respectively, and on the second example are 2.95 (1.29) and 178048 (75619), respectively. Figure 2 shows the convergence results on the two test functions obtained by using the LSQEA, PSO, and AIA. The LSQEA requires fewer function calls to reach the best value and has a sharper decline than the PSO and AIA. That is, the LSQEA has a faster convergence speed than the PSO and AIA. In general, the performance of the LSQEA is superior to those of the PSO and AIA on the two examples. Additionally, the nonparametric Wilcoxon matched-pairs signed-rank test was used to compare the algorithms pairwise. The Wilcoxon z values are all 2.521 (p = 0.0117) for LSQEA versus PSO and LSQEA versus AIA, so the tests show a significant difference between the LSQEA and the two algorithms, PSO and AIA. That is, the performance of the LSQEA truly outperforms those of the PSO and AIA. There is also a significant difference between PSO and AIA, since the z value is 2.521 and the p value is 0.0117.
The performance of PSO is superior to that of AIA on the two examples.

fig2
Figure 2: Convergence results on the two 100-dimensional test functions obtained by using the LSQEA, PSO, and AIA.

Hence, the performance improvement in the proposed LSQEA is achieved by using quantum-inspired computing and the systematic reasoning mechanism of the Latin square with the signal-to-noise ratio. Therefore, we conclude that the proposed LSQEA approach effectively solves the six nonlinear programming optimization problems with continuous variables. After confirming this capability, the LSQEA approach was then evaluated for use in solving mixed discrete-continuous nonlinear design problems.

4.2. Solving Mixed Discrete-Continuous Nonlinear Design Problems

To evaluate the use of the LSQEA approach for solving mixed discrete-continuous nonlinear design problems in the engineering design field, this study applied the experimental design method reported in Chou et al. [53] for parameter adjustment in different evolutionary environments.

Example 4.1 (compression coil spring design). Figure 3 shows the first example, the design of a compression coil spring under a constant load for minimum material volume. The relationships among the three design variables can be expressed as , where is an integer representing the number of spring coils; is a discrete value representing the wire diameter according to the ASTM code; is a continuous variable representing the winding diameter. As described by Sandgren [2], the objective and constraint equations can be mathematically derived as follows: subject to where
The assigned parameter values are , [12].
The evolutionary environment parameters applied in the computational experiments performed with the proposed LSQEA approach are (population size) 100, (crossover rate) 0.9, (mutation rate) 0.3, and generation number 100. The design function was performed in 30 independent runs. Table 3 shows that the computational results obtained by the proposed LSQEA approach are superior to those obtained by the methods of Sandgren [2] and of Rao and Xiong [12]. Another observed advantage is that, unlike the approach presented in Sandgren [2], the LSQEA can use arbitrary starting points, which enhances its versatility and effectiveness. Table 4 further shows that the robustness analysis of the LSQEA yields a very small standard deviation over 30 independent runs, which confirms its robustness for designing a compression coil spring.
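Handling the three kinds of design variables in one chromosome can be sketched with a decoder that maps real-valued genes onto an integer, a table-lookup discrete value, and a bounded continuous value. The gene bounds and the wire-diameter table below are hypothetical placeholders, not the actual ASTM data or the bounds used in the paper:

```python
# hypothetical subset of ASTM wire diameters (inches), for illustration only
ASTM_DIAMETERS = [0.207, 0.225, 0.244, 0.263, 0.283, 0.307, 0.331, 0.362]

def decode(genes):
    """Map three genes in [0, 1) to (integer, discrete, continuous) variables."""
    n_coils = 1 + int(genes[0] * 70)                 # integer: number of coils
    # discrete: index into the allowed diameter table (genes[1] < 1 keeps it in range)
    wire_d = ASTM_DIAMETERS[int(genes[1] * len(ASTM_DIAMETERS))]
    wind_d = 1.0 + genes[2] * (3.0 - 1.0)            # continuous: winding diameter
    return n_coils, wire_d, wind_d

print(decode([0.5, 0.99, 0.25]))  # → (36, 0.362, 1.5)
```

With such a decoder, the evolutionary operators act only on the real-valued genes, so the same crossover and mutation machinery serves all three variable types.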

tab3
Table 3: Optimal design of compression coil spring.
tab4
Table 4: Robustness analysis results for compression coil spring design obtained by LSQEA.
836597.fig.003
Figure 3: Compression coil spring.

Example 4.2 (pressure vessel design). Figure 4 shows a compressed air storage tank consisting of a cylindrical pressure vessel capped by hemispherical heads at both ends [2]. The vessel design problem is formulated according to the ASME boiler and pressure vessel code. The relationships among the design variables can be expressed as , where is shell thickness, is spherical head thickness, is shell radius, and is shell length. The purpose of the objective function is to minimize the total cost, including the cost of the material and the cost of forming and welding the pressure vessel. The problem can be modeled as where and represent discrete values, integer multiples of 0.0625 inches, while and are continuous variables.
In the computational experiments performed to evaluate the proposed LSQEA approach, the population size is 300, the crossover rate is 0.9, the mutation rate is 0.3, and the generation number is 200. The design function was performed in 30 independent runs. Table 5 compares the computational results obtained by the proposed LSQEA approach with those obtained by the methods of Sandgren [2], Fu et al. [7], Rao and Xiong [12], and Shih and Lai [60]; the comparison shows that the proposed LSQEA approach outperforms all four methods. Additionally, unlike those methods, the LSQEA approach can use arbitrary starting points, which enhances its versatility and effectiveness. Table 6 further shows the results of the robustness analysis of the LSQEA. The standard deviation obtained in 30 independent runs is small, and the average value is better than the best values obtained by the four compared methods.
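For reference, the cost function and constraints of this benchmark as widely quoted in the literature can be sketched as follows; the coefficients are the standard published ones (which the paper's elided equations should match in spirit), and the sample design point is purely illustrative:

```python
import math

def vessel_cost(ts, th, r, l):
    """Total cost of material, forming, and welding, using the coefficients
    commonly quoted for this benchmark in the literature."""
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def vessel_feasible(ts, th, r, l):
    """ASME-code constraints in g(x) <= 0 form."""
    g = [
        -ts + 0.0193 * r,     # shell thickness vs. radius
        -th + 0.00954 * r,    # head thickness vs. radius
        # enclosed volume must be at least 1,296,000 in^3
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000,
        l - 240,              # shell length limit
    ]
    return all(gi <= 0 for gi in g)

# thicknesses are integer multiples of 0.0625 in; radius and length continuous
ts, th = 13 * 0.0625, 7 * 0.0625
print(vessel_feasible(ts, th, 42.0, 180.0), round(vessel_cost(ts, th, 42.0, 180.0), 1))
# → True 6121.7
```

The mixed character of the problem is visible in the last two lines: the thicknesses are snapped to the 0.0625-inch grid before evaluation, while the radius and length remain free.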

tab5
Table 5: Optimal design for pressure vessel.
tab6
Table 6: Robustness analysis of pressure vessel design obtained by LSQEA.
836597.fig.004
Figure 4: Pressure vessel.

Example 4.3 (welded beam design). The objective in this example, which was given in Ragsdell and Phillips [61], is to find the structural welded beam design with the lowest cost (Figure 5). The considered constraints are weld stress, buckling load, beam deflection, and beam bending stress. The relationships among the design variables can be expressed as , where is bar thickness, is bar breadth, is weld thickness, and is weld length. The objective of the problem is to minimize design cost. The major cost components of such a welded beam include the setup labor cost, welding labor cost, and material cost. The objective function can be described as The following behavior constraints are considered [12].
(a) Upper bound of maximum shear stress on the weld: with
(b) Upper bound of maximum normal stress on the beam: with
(c) Lower bound of buckling load on the beam: with
(d) Upper bound of end deflection DEL() on the beam: with Additionally, the side constraints, which are expressed as , are considered along with the following numerical data [12]: The variables and are considered discrete variables (integer multiples of 0.5); and are considered integers.

836597.fig.005
Figure 5: Welded beam.

The parameters used in the computational experiments performed to evaluate the proposed LSQEA approach are a population size of 10, a crossover rate of 0.9, a mutation rate of 0.3, and a generation number of 20. The design function was performed in 30 independent runs. The computational results obtained by the proposed LSQEA approach are comparable to those obtained by Rao and Xiong [12]. The minimum cost is 5.67334, and . Table 7 also shows the results of the robustness analysis of the LSQEA. The standard deviation of 0 obtained in 30 independent runs indicates that the LSQEA finds the optimal solution in every run; that is, the LSQEA is a very robust and stable method for designing the welded beam. Additionally, the solution space in this welded beam design problem is small since only 4 design variables are used, and all are discrete (integer multiples and integers). Therefore, the LSQEA easily finds the optimal solution.

tab7
Table 7: Robustness analysis of welded beam design obtained by LSQEA.

Example 4.4 (twenty-five bar truss design). Figure 6 shows Example 4.4, a twenty-five bar truss, which is a classic test case often used to evaluate optimization algorithms on both continuous and discrete structural optimization problems. Table 8 gives the two load conditions for this truss, which is designed under constraints on both member stress and Euler buckling stress. Since the truss is symmetrical, the member areas can be divided into eight groups: ; thus, eight independent areas are selected as continuous or discrete design variables. Three objective functions are considered in this example: (i) minimization of weight, (ii) minimization of deflection at nodes 1 and 2, and (iii) maximization of the fundamental natural frequency of vibration of the truss. The objective functions are as follows [12]: where denotes the length of member ; is the weight density; , , and are the , , and components, respectively, of the deflections at node ; is the fundamental natural frequency of vibration. The constraints can be stated as where denotes the tension or compression stress in member under load condition ; the allowable stress is set to 40000 psi; and are the lower and upper bounds of , which are set to 0.1  and 5.0 , respectively; the Euler buckling stress in member is in which is the Young modulus of .
If each is considered a discrete variable, the following expression is used: , where . The length of each bar is calculated according to the coordinates of the nodes of the truss. The deflections of node , the fundamental natural vibration frequency , and the stress in member under the load condition are obtained by finite-element analysis of the truss with ANSYS CAE Toolbox [62].
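When each area is treated as a discrete variable, a candidate produced in continuous space must be mapped onto the allowed set before the finite-element evaluation. A minimal sketch of that mapping follows; the 0.1 in² grid below is a hypothetical stand-in for the paper's actual discrete set:

```python
def snap_to_discrete(x, allowed):
    """Map a continuous candidate value to the nearest allowed discrete value."""
    return min(allowed, key=lambda a: abs(a - x))

# hypothetical grid of available cross-section areas (in^2): 0.1 to 5.0 in steps of 0.1
AREAS = [round(0.1 * k, 1) for k in range(1, 51)]
print(snap_to_discrete(2.3471, AREAS))  # → 2.3
```

Snapping each group's area this way keeps the search operators unchanged while guaranteeing that every evaluated truss uses only manufacturable sections.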
Table 9 shows the best continuous and discrete results obtained by the proposed LSQEA approach with the ANSYS CAE Toolbox. In contrast, Table 10 shows the best results obtained by the approach of Rao and Xiong [12] using the ANSYS CAE Toolbox. The comparison of the results in Tables 9 and 10 confirms that the proposed LSQEA approach provides better results for each main objective function than the method developed by Rao and Xiong [12]. Moreover, for each specified design objective function, the other two objectives are also better than those reported in Rao and Xiong [12]. An interesting question is why the discrete results given in Rao and Xiong [12] achieve a higher optimal natural frequency than the continuous results (Table 10); intuitively, the answers found in a continuous space should be best. That is, in the case in which the natural frequency is maximized, the results presented by Rao and Xiong [12] are inapplicable. Another issue is that the three individual objective functions are clearly interdependent: reducing the weight of the truss increases the deflection and decreases the fundamental natural frequency, and vice versa. Although reaching the global optimum is possible in single-objective optimization problems, it is reached at the expense of performance in the other two objectives. Therefore, a reasonable design specification is an essential consideration in practical engineering applications. If the defined objective is maximizing the fundamental natural frequency of vibration of the truss subject to limited deflections at nodes 1 and 2, reasonable results are obtained. In this example, the results in the third case (maximizing the fundamental natural frequency of vibration of the truss) illustrate these design characteristics.
In summary, the above results confirm that the LSQEA approach obtains robust and stable results. Therefore, we conclude that the LSQEA is highly feasible for solving the four mixed discrete-continuous nonlinear design optimization examples.

tab8
Table 8: Load conditions for twenty-five bar truss.
tab9
Table 9: Best results for individual objective functions obtained by LSQEA.
tab10
Table 10: Best results for individual objective functions [12].
836597.fig.006
Figure 6: Twenty-five bar truss.

5. Conclusions

The LSQEA approach proposed in this study solves the problems of mixed discrete-continuous nonlinear design optimization. The approach combines the merit of QGA, which is its powerful global exploration capability for exploring the optimal feasible region, with that of the Latin square, which is exploitation of the better offspring. In this study, the LSQEA approach efficiently solved global numerical optimization problems with continuous variables. The computational experiments show that, compared to QGA and RGA, the proposed LSQEA is more efficient in finding optimal or near-optimal solutions for nonlinear programming optimization problems with continuous variables. Additional advantages of the LSQEA approach include its superior robustness compared to QGA and RGA and its global exploration capability. Finally, applications of the LSQEA to solve various mixed discrete-continuous nonlinear design optimization problems in computational experiments in this study confirm its superior solution quality and superior robustness compared to existing methods. Therefore, the proposed LSQEA approach can be used as a global optimization method for solving mixed discrete-continuous nonlinear design problems complicated by multiple constraints.

Acknowledgment

This work was partially supported by the National Science Council, Taiwan, under Grants no. NSC96-2221-E-153-002-MY2, NSC99-2221-E-151-071-MY3, NSC100-2221-E-153-001, NSC101-2221-E-153-003, and NSC101-2320-B-037-022.

References

  1. P. Hajela and C. J. Shih, “Optimal design of laminated composites using a modified mixed integer and discrete programming algorithm,” Computers and Structures, vol. 32, no. 1, pp. 213–221, 1989.
  2. E. Sandgren, “Nonlinear integer and discrete programming in mechanical design optimization,” ASME Journal of Mechanical Design, vol. 112, no. 2, pp. 223–229, 1990.
  3. J. S. Arora, M. W. Huang, and C. C. Hsieh, “Methods for optimization of nonlinear problems with discrete variables: a review,” Structural Optimization, vol. 8, no. 2-3, pp. 69–85, 1994.
  4. M. Bremicker, P. Y. Papalambros, and H. T. Loh, “Solution of mixed-discrete structural optimization problems with a new sequential linearization algorithm,” Computers and Structures, vol. 37, no. 4, pp. 451–461, 1990.
  5. H. T. Loh and P. Y. Papalambros, “Sequential linearization approach for solving mixed-discrete nonlinear design optimization problems,” ASME Journal of Mechanical Design, vol. 113, no. 3, pp. 325–334, 1991.
  6. D. K. Shin, Z. Gurdal, and O. H. Griffin, “A penalty approach for nonlinear optimization with discrete design variables,” Engineering Optimization, vol. 16, pp. 29–42, 1990.
  7. J. F. Fu, R. G. Fenton, and W. L. Cleghorn, “A mixed integer-discrete continuous programming method and its application to engineering design optimization,” Engineering Optimization, vol. 17, pp. 263–280, 1991.
  8. J. Cai and G. Thierauf, “Discrete optimization of structures using an improved penalty function method,” Engineering Optimization, vol. 17, pp. 293–306, 1993.
  9. O. Jonsson and T. Larsson, “Lagrangean relaxation and sub-gradient optimization applied to optimal design with discrete sizing,” Engineering Optimization, vol. 16, pp. 221–233, 1990.
  10. S. S. Lin, C. Zhang, and H. P. Wang, “On mixed-discrete nonlinear optimization problems: a comparative study,” Engineering Optimization, vol. 23, pp. 287–300, 1995.
  11. S. J. Wu and P. T. Chow, “Applications of genetic algorithms to discrete optimization problems,” Journal of the Chinese Society of Mechanical Engineers, vol. 16, no. 6, pp. 587–598, 1995.
  12. S. S. Rao and Y. Xiong, “A hybrid genetic algorithm for mixed-discrete design optimization,” Journal of Mechanical Design, vol. 127, no. 6, pp. 1100–1112, 2005.
  13. W. Tang and Q. Yuan, “Improved genetic algorithm for shape optimization of truss structures,” Chinese Journal of Theoretical and Applied Mechanics, vol. 38, no. 6, pp. 843–849, 2006.
  14. R. L. Haupt, “Antenna design with a mixed integer genetic algorithm,” IEEE Transactions on Antennas and Propagation, vol. 55, no. 3, pp. 577–582, 2007.
  15. K. Deep, K. P. Singh, M. L. Kansal, and C. Mohan, “A real coded genetic algorithm for solving integer and mixed integer optimization problems,” Applied Mathematics and Computation, vol. 212, no. 2, pp. 505–518, 2009.
  16. K. M. Lee, J. T. Tsai, T. K. Liu, and J. H. Chou, “Improved genetic algorithm for mixed-discrete-continuous design optimization problems,” Engineering Optimization, vol. 42, no. 10, pp. 927–941, 2010.
  17. W. H. Ho and C. S. Chang, “Genetic-algorithm-based artificial neural network modeling for platelet transfusion requirements on acute myeloblastic leukemia patients,” Expert Systems with Applications, vol. 38, no. 5, pp. 6319–6323, 2011.
  18. W. H. Ho, J. X. Chen, I. N. Lee, and H. C. Su, “An ANFIS-based model for predicting adequacy of vancomycin regimen using improved genetic algorithm,” Expert Systems with Applications, vol. 38, no. 10, pp. 13050–13056, 2011.
  19. C. Zhang and H. P. Wang, “Mixed-discrete nonlinear optimization with simulated annealing,” Engineering Optimization, vol. 21, pp. 277–291, 1993.
  20. W.-H. Ho, J.-H. Chou, and C.-Y. Guo, “Parameter identification of chaotic systems using improved differential evolution algorithm,” Nonlinear Dynamics, vol. 61, no. 1-2, pp. 29–41, 2010.
  21. W.-H. Ho and A. L.-F. Chan, “Hybrid Taguchi-differential evolution algorithm for parameter estimation of differential equation models with application to HIV dynamics,” Mathematical Problems in Engineering, vol. 2011, Article ID 514756, 14 pages, 2011.
  22. Y. J. Cao, L. Jiang, and Q. H. Wu, “An evolutionary programming approach to mixed-variable optimization problems,” Applied Mathematical Modelling, vol. 24, no. 12, pp. 931–942, 2000.
  23. P. W. Shor, “Algorithms for quantum computation: discrete logarithms and factoring,” in Proceedings of the 35th Annual Symposium on Foundations of Computer Science, pp. 124–134, Santa Fe, NM, USA, 1994.
  24. L. K. Grover, “Fast quantum mechanical algorithm for database search,” in Proceedings of the 28th Annual ACM Symposium on the Theory of Computing, pp. 212–219, New York, NY, USA, May 1996.
  25. L. K. Grover, “Quantum mechanics helps in searching for a needle in a haystack,” Physical Review Letters, vol. 79, no. 2, pp. 325–328, 1997.
  26. K. H. Han, K. H. Park, C. H. Lee, and J. H. Kim, “Parallel quantum-inspired genetic algorithm for combinatorial optimization problem,” in Proceedings of IEEE Conference on Evolutionary Computation, pp. 1422–1429, Seoul, Korea, May 2001.
  27. K. H. Han and J. H. Kim, “Genetic quantum algorithm and its application to combinatorial optimization problem,” in Proceedings of the Congress on Evolutionary Computation, pp. 1354–1360, San Diego, Calif, USA, July 2000.
  28. K. H. Han and J. H. Kim, “Quantum-inspired evolutionary algorithm for a class of combinatorial optimization,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 6, pp. 580–593, 2002.
  29. K. H. Han and J. H. Kim, “Quantum-inspired evolutionary algorithms with a new termination criterion, Hε gate, and two-phase scheme,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 2, pp. 156–169, 2004.
  30. A. Malossini, E. Blanzieri, and T. Calarco, “Quantum genetic optimization,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 231–241, 2008.
  31. I. Grigorenko and M. E. Garcia, “Calculation of the partition function using quantum genetic algorithms,” Physica A, vol. 291, pp. 463–470, 2001.
  32. J. A. Yang, B. Li, and Z. Zhuang, “Multi-universe parallel quantum genetic algorithm and its application to blind source separation,” in Proceedings of the International Conference on Neural Networks and Signal Processing (ICNNSP '03), pp. 393–398, Nanjing, China, December 2003.
  33. G. Zhang, W. Jin, and L. Hu, “A novel parallel quantum genetic algorithm,” in Proceedings of the 4th International Conference on Parallel and Distributed Computing, Applications and Technologies, pp. 693–697, Chengdu, China, August 2003.
  34. C. Hui, Z. Jiashu, and Z. Chao, “Chaos updating rotated gates quantum-inspired genetic algorithm,” in Proceedings of the International Conference on Communications, Circuits and Systems, pp. 1108–1112, Chengdu, China, June 2004.
  35. L. Wang, F. Tang, and H. Wu, “Hybrid genetic algorithm based on quantum computing for numerical optimization and parameter estimation,” Applied Mathematics and Computation, vol. 171, no. 2, pp. 1141–1156, 2005.
  36. Q. Yang and S. Ding, “Methodology and case study of hybrid quantum-inspired evolutionary algorithm for numerical optimization,” in Proceedings of the 3rd International Conference on Natural Computation (ICNC '07), pp. 608–612, Haikou, China, August 2007.
  37. N. Li, P. Du, and H. Zhao, “Independent component analysis based on improved quantum genetic algorithm: application in hyperspectral images,” in Proceedings of IEEE International Geoscience and Remote Sensing Symposium (IGARSS '05), pp. 4323–4326, Seoul, Korea, July 2005.
  38. L. Abdesslem, M. Soham, and B. Mohamed, “Multiple sequence alignment by quantum genetic algorithm,” in Proceedings of the 20th International Parallel and Distributed Processing Symposium, pp. 8–15, Rhodes Island, Greece, 2006.
  39. Z. Dong, Y. Huang, and P. Han, “Thermal process identification with radial basis function network based on quantum genetic algorithm,” Proceedings of the Chinese Society of Electrical Engineering, vol. 28, no. 17, pp. 99–104, 2008.
  40. J. Gao and J. Wang, “A hybrid quantum-inspired immune algorithm for multiobjective optimization,” Applied Mathematics and Computation, vol. 217, no. 9, pp. 4754–4770, 2011.
  41. M. S. Phadke, Quality Engineering Using Robust Design, Prentice-Hall, Upper Saddle River, NJ, USA, 1989.
  42. D. C. Montgomery, Design and Analysis of Experiments, John Wiley & Sons, New York, NY, USA, 1991.
  43. S. H. Park, Robust Design and Analysis for Quality Engineering, Chapman and Hall, London, UK, 1996.
  44. D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Boston, Mass, USA, 1989.
  45. M. Gen and R. Cheng, Genetic Algorithms and Engineering Design, John Wiley & Sons, New York, NY, USA, 1997.
  46. J. T. Tsai, T. K. Liu, and J. H. Chou, “Hybrid Taguchi-genetic algorithm for global numerical optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, no. 4, pp. 365–377, 2004.
  47. J. T. Tsai, J. H. Chou, and T. K. Liu, “Tuning the structure and parameters of a neural network by using hybrid Taguchi-genetic algorithm,” IEEE Transactions on Neural Networks, vol. 17, no. 1, pp. 69–80, 2006.
  48. W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, vol. 187 of Lecture Notes in Economics and Mathematical Systems, Springer, Berlin, Germany, 1981.
  49. C. A. Floudas and P. M. Pardalos, Recent Advances in Global Optimization, Princeton University Press, Princeton, NJ, USA, 1992.
  50. Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin, Germany, 1994.
  51. J. J. Grefenstette, “Optimization of control parameters for genetic algorithms,” IEEE Transactions on Systems, Man and Cybernetics, vol. 16, no. 1, pp. 122–128, 1986.
  52. L. Davis, “Adapting operator probabilities in genetic algorithms,” in Proceedings of the International Conference on Genetic Algorithms (ICGA '89), pp. 61–69, San Mateo, Calif, USA, 1989.
  53. J. H. Chou, W. H. Liao, and J. J. Li, “Application of Taguchi-genetic method to design optimal grey-fuzzy controller of a constant turning force system,” in Proceedings of the 15th CSME Annual Conference, pp. 31–38, Taiwan, 1998.
  54. S. García, A. Fernández, J. Luengo, and F. Herrera, “A study of statistical techniques and performance measures for genetics-based machine learning: accuracy and interpretability,” Soft Computing, vol. 13, no. 10, pp. 959–977, 2009.
  55. S. García, D. Molina, M. Lozano, and F. Herrera, “A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 Special Session on Real Parameter Optimization,” Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
  56. F. Wilcoxon, “Individual comparisons by ranking methods,” Biometrics, vol. 1, pp. 80–83, 1945.
  57. A. Field, Discovering Statistics Using SPSS, SAGE Publications, London, UK, 2006.
  58. J. Kennedy, R. C. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann, San Francisco, Calif, USA, 2001.
  59. L. N. de Castro and J. Timmis, Artificial Immune Systems: A New Computational Intelligence Approach, Springer, London, UK, 2002.
  60. C. J. Shih and T. K. Lai, “Mixed-discrete fuzzy programming for nonlinear engineering optimization,” Engineering Optimization, vol. 23, pp. 187–199, 1995.
  61. K. M. Ragsdell and D. T. Phillips, “Optimal design of a class of welded structure using geometric programming,” ASME Journal of Engineering for Industry, vol. 98, no. 3, pp. 1021–1025, 1976.
  62. ANSYS, APDL Programmer's Guide: ANSYS Release 10.0, ANSYS, Canonsburg, Pa, USA, 2005.