Abstract

In real-world structural problems, several factors may introduce geometric imperfections, load variability, or uncertainties in material properties. A deterministic optimization procedure may therefore fail to account for such uncertainties present in the actual system, leading to optimum designs that are not reliable: the designed system may be overly conservative or, conversely, lack the reliability needed to carry the applied loads. In this paper, we introduce a hybrid reliability-based design optimization (RBDO) algorithm that combines the genetic operators of the Genetic Algorithm and the position and velocity updates of Particle Swarm Optimization for global exploration with sequential quadratic programming for local search. The First-Order Reliability Method is used to account for uncertainty in design and parameter variables and to evaluate the associated reliability. The hybrid method is assessed on RBDO benchmark examples ranging from simple to complex truss parametric sizing optimizations with deterministic and probabilistic constraints on stress, displacement, and natural frequency. The final proposed problem, which cannot be handled by single-loop RBDO algorithms, highlights the importance of the proposed approach in cases where the discrete design variables are also random variables.

1. Introduction

In engineering practice, cost reduction in manufacturing is pursued in order to gain efficiency and save production time. Although the use of optimization methods is increasing, the design parameters and/or design variables employed in deterministic optimization procedures may, in practice, present uncertainties. As a consequence, when the responses obtained by deterministic optimization procedures are examined in terms of failure probability, unacceptable levels may arise from an engineering point of view. According to [1], uncertainties in real-world optimization problems stem from factors such as data incompleteness, mathematical model inaccuracies, and variation of environmental conditions, to name a few. This directly affects the optimization problem, since in structural engineering design economy and safety are competing goals [2]. One way to incorporate such uncertainties into an optimization problem is through a reliability index (β), a measure of the degree of reliability of the design, according to [1]. This procedure is referred to as reliability-based design optimization (RBDO); in this case, the reliability index becomes an additional constraint of the optimization problem. According to [3], the significance and the conceptual and mathematical complexity of RBDO have been intensively studied over the last decade. The optimization procedure may demand a high computational cost [4]. For this reason, several authors have studied single-loop RBDO methods [3, 5-7]. These authors applied reliability-based design optimization strategies such as SAP (Sequential Approximate Programming) and SORA (Sequential Optimization and Reliability Assessment), which are capable of solving the majority of practical cases, even when multiple failure modes are present, but mainly for linear and smooth functions. According to [8], the use of approximation procedures in the sequential optimization and the evaluation of the reliability index in a single step may result in spurious optimal points. This is expected, since these methods are deterministic and tend to converge to local minima when the functions involved in probabilistic problems behave nonlinearly. In order to increase robustness, the authors of [9] proposed constructing a response surface, defined as the product of individual performance functions, to be used as a surrogate for obtaining optimal design solutions. Despite the large body of literature on the theme, a good introduction to the RBDO subject can be found in [10].

In this article, a hybrid RBDO methodology is proposed and applied to a range of spatial trusses in order to find the optimum parameters for minimum mass, considering maximum allowable deflections and limited stress. The First-Order Reliability Method (FORM) is used to evaluate the reliability index (β) along the optimization steps. Due to the high computational cost of traditional algorithms for solving nonconvex optimization problems, a hybrid optimization method is proposed. Comparisons of computational cost and solution quality are performed using standard structural problems as baselines. The proposed hybrid global optimization method, built upon existing global search (metaheuristic) and deterministic algorithms, is detailed and discussed through reliability-based optimization of structural truss examples.

2. Reliability Index Evaluation

As stated by [11], reliability analysis can be applied to many engineering fields, such as aeronautical, mechanical, and civil problems. By definition, reliability is the complement of the probability of failure, that is, of the likelihood of failure of a specific event or set of events of a complex system. A limit state function that relates the failure event (violation of a specific set of constraints) to several variables is stated by the mathematical expression

G(X) = 0,   (1)

where G denotes the limit state function that defines a constraint and X is the vector of variables on which it depends. Some design variables may present random components. If X is a set of random variables that affects that constraint, the limit state function also becomes random and some probability of violation is implicit [11, 12]. G(X) ≤ 0 means that the system is in the failure domain Ω_f, and G(X) > 0 means that the system is in the safe domain (the constraint is not violated). The probability of failure can be evaluated by the integration of the joint probability density function f_X(x) over the failure domain, as indicated in [11]:

P_f = ∫_(Ω_f) f_X(x) dx = Φ(−β),   (2)

where the failure domain Ω_f is defined by G(X) ≤ 0, Φ is the cumulative standard Gaussian distribution function, and β is a safety index metric. Equation (2) has a closed-form solution only in particular cases, for instance when X is Gaussian and G is linear or quadratic. However, it is difficult to handle in the case of several random variables X. In this case, Monte Carlo (MC) simulation can be used to obtain approximate solutions. Moreover, the statistics of the function G are not known a priori and the number of MC samples frequently is not enough to ensure confidence. So, a very simple but not very robust way to estimate the reliability index (which is related to the probability of survival) is to use the first and second probability moments (mean and variance) of the limit state function G. When the limit state function is linear and the random variables are normally distributed and uncorrelated, the reliability index can be approximated by the following (see Figure 1):

β = μ_G / σ_G,   (3)

where μ_G and σ_G represent, respectively, the mean value and the standard deviation (square root of the variance) of the function G.

Let S be the mechanical stress measured in a loaded component, and assume a failure situation in which this value exceeds the imposed material strength limit R. Equation (1) can be rewritten, for this limit state function, as follows:

G(X) = R − S.   (4)
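As a numerical illustration of equations (2)-(4), the following Python sketch compares the first-order estimate of equation (3) with a crude Monte Carlo estimate for a strength-minus-stress limit state with independent Gaussian variables; the means and standard deviations are arbitrary illustrative values, not data from the examples analyzed later.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed illustrative statistics for strength R and stress S, both Gaussian.
mu_R, sig_R = 250.0, 20.0    # e.g., MPa
mu_S, sig_S = 150.0, 25.0

# First-order estimate (equation (3)): G = R - S is linear in Gaussian variables,
# so mu_G = mu_R - mu_S and sig_G = sqrt(sig_R**2 + sig_S**2) are exact here.
mu_G = mu_R - mu_S
sig_G = np.hypot(sig_R, sig_S)
beta = mu_G / sig_G
pf_form = norm.cdf(-beta)            # equation (2): P_f = Phi(-beta)

# Crude Monte Carlo check of the same probability of failure.
n = 1_000_000
R = rng.normal(mu_R, sig_R, n)
S = rng.normal(mu_S, sig_S, n)
pf_mc = np.mean(R - S <= 0.0)        # fraction of samples in the failure domain

print(beta, pf_form, pf_mc)          # beta ~ 3.12, P_f ~ 9e-4 for these values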

Linearization of the function G by a Taylor expansion up to linear terms can be used to obtain approximate values for μ_G and σ_G when the limit state function is nonlinear. The point around which the linearization is performed affects the resulting μ_G and σ_G values. A method to obtain a reliability index that is independent of the limit state function formulation is known as AFOSM (Advanced First-Order Second Moment) and was first proposed by [13]. Uncorrelated random variables X_i are transformed into normalized ones Z_i by the transformation

Z_i = Φ^(−1)(F_Xi(X_i)),   (5)

where F_Xi and Φ^(−1) are the cumulative distribution function of the random variable X_i and the inverse of the cumulative standard Gaussian distribution, respectively (for Gaussian variables this reduces to Z_i = (X_i − μ_Xi)/σ_Xi). In this way, the limit state function G, defined in the real space X, is transformed to the uncorrelated normalized space Z, so

g(Z) = G(X(Z)).   (6)

The linearization of the limit state function is performed at the point z* that presents the shortest distance to the origin of the uncorrelated space Z and that satisfies g(z*) = 0. The point z* is called the design point, and the reliability index is evaluated, as mentioned previously, as

β = ||z*|| = min ||z||  subject to  g(z) = 0.   (7)
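To make equation (7) concrete, the following sketch finds the design point of a hypothetical linear limit state, already expressed in the standard normal space, using a general-purpose constrained optimizer; the dedicated algorithm of Section 2.1 is normally preferred for this task.

import numpy as np
from scipy.optimize import minimize

# Hypothetical linear limit state, already written in the standard normal space Z.
def g(z):
    return 3.0 - z[0] - z[1]

# Design point: the point on g(z) = 0 closest to the origin (equation (7)).
res = minimize(lambda z: np.linalg.norm(z),
               x0=np.array([1.0, 1.0]),
               constraints={"type": "eq", "fun": g},
               method="SLSQP")
z_star = res.x                        # design point, here (1.5, 1.5)
beta = np.linalg.norm(z_star)         # reliability index, 3/sqrt(2) ~ 2.121
print(z_star, beta)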

2.1. Rackwitz-Fiessler Algorithm

In order to solve (7), the efficient algorithm proposed by Rackwitz and Fiessler [13] is used. It can be described in the following steps.

Step 1. Define the limit state function G(X) for the problem.

Step 2. Assume initial values for the design point x* in the real space and evaluate the corresponding value of the limit state function (e.g., take the mean values of the random variables as the initial design point).

Step 3. Evaluate the equivalent Gaussian mean value and standard deviation of the random variables at the current design point:

σ^N_Xi = φ(Φ^(−1)(F_Xi(x_i*))) / f_Xi(x_i*),  μ^N_Xi = x_i* − Φ^(−1)(F_Xi(x_i*)) σ^N_Xi,

where φ and f_Xi are the standard Gaussian probability density function and the actual probability density function of X_i, respectively.

Step 4. Transform the random variables from the real space to the normal uncorrelated space. The values of the design variables at the design point are evaluated as follows:

z_i* = (x_i* − μ^N_Xi) / σ^N_Xi.

Step 5. Evaluate the sensitivities ∂G/∂X_i at the design point x*.

Step 6. Evaluate the partial derivatives in the normal uncorrelated space using the chain rule:

∂g/∂z_i = (∂G/∂X_i) σ^N_Xi.

Step 7. Evaluate the new value of the design point in the normal uncorrelated space using the recurrent equation:

z*_(k+1) = [∇g(z*_k)·z*_k − g(z*_k)] ∇g(z*_k) / ||∇g(z*_k)||².

Step 8. Evaluate the distance from the origin to this new point and estimate the new reliability index:

β_(k+1) = ||z*_(k+1)||.

Step 9. Check the convergence of β along the iterations using a predefined tolerance.

Step 10. Evaluate the random variables at the new design point using

x_i* = μ^N_Xi + z_i* σ^N_Xi.

Step 11. Evaluate the limit state function G at the new design point and verify a final convergence criterion, for instance, |G(x*)| ≤ ε and |β_(k+1) − β_k| ≤ ε.

Step 12. If both criteria are met, stop iterating; otherwise, repeat Steps 3-11.
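A minimal implementation sketch of Steps 1-12 is given below for the particular case of independent Gaussian random variables (so that the equivalent-normal transformation of Step 3 reduces to the identity), with the gradients of Steps 5 and 6 obtained by finite differences; the nonlinear limit state used at the end is a hypothetical example.

import numpy as np

def hlrf(g, mu, sigma, tol=1e-6, max_iter=100):
    """Rackwitz-Fiessler (HL-RF) iteration for independent Gaussian variables.

    g     : limit state function in the real space, g(x) -> float
    mu    : array of mean values of the random variables
    sigma : array of standard deviations
    Returns the reliability index beta and the design point in the real space."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    u = np.zeros_like(mu)                    # Step 2: start at the mean (u = 0)
    beta_old = np.inf
    for _ in range(max_iter):
        x = mu + sigma * u                   # Steps 4/10: map U-space -> X-space
        gx = g(x)
        # Steps 5/6: gradient in U-space via finite differences and the chain rule
        h = 1e-6
        grad_u = np.array([(g(x + h * sigma * e) - gx) / h
                           for e in np.eye(len(mu))])
        # Step 7: HL-RF recurrence for the new iterate in U-space
        u = (grad_u @ u - gx) / (grad_u @ grad_u) * grad_u
        beta = np.linalg.norm(u)             # Step 8: distance from the origin
        # Steps 9/11: convergence on beta and on the limit state value
        if abs(beta - beta_old) < tol and abs(g(mu + sigma * u)) < tol:
            break
        beta_old = beta
    return beta, mu + sigma * u

# Hypothetical nonlinear limit state: g(x) = x1 * x2 - 1500 (a resistance-demand margin).
beta, x_star = hlrf(lambda x: x[0] * x[1] - 1500.0,
                    mu=[40.0, 50.0], sigma=[5.0, 2.5])
print(beta, x_star)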

This algorithm assumes that all random variables are uncorrelated in the original (actual) space. If there is correlation between random variables, they can be transformed into uncorrelated ones using the Cholesky decomposition of the covariance matrix, and the algorithm remains valid [8, 11, 14]. This transformation is given by

X = μ_X + L U,  with  C_X = L Lᵀ,

where L is the lower-triangular Cholesky factor of the covariance matrix C_X and U is the vector of uncorrelated standard variables.
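A minimal sketch of this decorrelation for jointly Gaussian variables, using an assumed illustrative mean vector and covariance matrix:

import numpy as np

# Assumed illustrative mean vector and (positive definite) covariance matrix.
mu = np.array([40.0, 50.0])
cov = np.array([[25.0, 7.5],
                [7.5, 6.25]])

L = np.linalg.cholesky(cov)           # cov = L @ L.T
x = np.array([43.0, 51.0])            # one realization of the correlated variables

u = np.linalg.solve(L, x - mu)        # uncorrelated standard variables
x_back = mu + L @ u                   # inverse transformation recovers x
print(u, x_back)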

3. Reliability-Based Design Optimization (RBDO)

In RBDO, the objective function must satisfy predefined probabilistic constraints, which are added as new constraints to the problem. Probabilistic failure analyses are performed along the optimization process in order to check whether the probabilistic constraints are met. This information guides the optimization toward the minimum weight that complies with the target reliability levels. The simplest formulation for RBDO implements the algorithm as a double loop, in which the optimization is split into two stages: (a) in the first stage, the objective function optimization is performed with respect to the design variables; (b) in the second stage, the optimization is performed with respect to the random variables, starting from the design variables of the outer loop. More details can be found in [15]. A deterministic minimization model can be generally defined as follows [16]:

minimize  f(d, p)
subject to  h_j(d, p) = 0,  j = 1, …, m,
            g_k(d, p) ≤ 0,  k = 1, …, n,
            d_low ≤ d ≤ d_up,

where d is the vector of design variables, p is the vector of parameters of the optimization problem, h_j is the jth equality constraint of the model from a total of m equality constraints, g_k are the n inequality constraints, and d_up and d_low are the vectors that contain the upper and lower values of the design variables. However, a deterministic optimization considers uncertainties neither in the design variables nor in the fixed design parameters. In RBDO, the probabilistic constraints are added, increasing the set of constraint equations. Since the reliability index can be defined in terms of the cumulative probability of the limit state function (and vice versa), the following holds:

P[G(X) ≤ 0] = Φ(−β),

where Φ is the cumulative standard distribution function. In this article, the reliability constraint is formulated as follows:

g_β = β / β_t ≥ 1,

where g_β is the probabilistic constraint defined by the dimensionless ratio between the evaluated reliability index β and the target reliability index β_t. This means that if the reliability index evaluated during the optimization is larger than the target reliability index β_t, then g_β ≥ 1 and the probabilistic criterion is met. Otherwise, a penalization of the objective function takes place. Numerically, during the optimization process, the failure function must be evaluated a number of times so that the probability of failure (or, conversely, the reliability index) can be found. In RBDO, both the parameter vector and the design variables can be random variables.
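As an illustration of how the ratio-type probabilistic constraint can be enforced through penalization of the objective function, the following sketch uses hypothetical mass and reliability values and an assumed penalty factor; the reliability indexes themselves would come from the FORM procedure of Section 2.

def penalized_objective(mass, betas, beta_targets, penalty=1e3):
    """Penalized fitness for RBDO with ratio-type probabilistic constraints.

    mass         : structural mass evaluated for the current design
    betas        : reliability indexes evaluated for the current design (e.g., by FORM)
    beta_targets : corresponding target reliability indexes
    penalty      : assumed penalty factor
    """
    f = mass
    for beta, beta_t in zip(betas, beta_targets):
        ratio = beta / beta_t            # probabilistic constraint: ratio >= 1
        if ratio < 1.0:                  # violated: penalize the objective
            f += penalty * (1.0 - ratio) ** 2
    return f

# Hypothetical values: mass of 1300 kg, FORM indexes 2.7 and 3.4 against targets
# of 3.0 (the first constraint is violated and therefore penalized).
print(penalized_objective(1300.0, betas=[2.7, 3.4], beta_targets=[3.0, 3.0]))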

Figure 2 shows a geometric interpretation for the difference between a deterministic and a reliability-based optimization. The implementation of the optimization may be performed using two different approaches: RIA (Reliability Index Approach) or PMA (Performance Measure Approach).

3.1. Reliability Index Approach (RIA)

In this approach, the reliability constraint is treated as an extra constraint formulated directly in terms of the reliability index (for the sake of simplicity, in the uncorrelated design space). So, the following can be written:

minimize  f(d, E[X])
subject to  h_j(d, E[X]) = 0,  g_k(d, E[X]) ≤ 0,
            β_i(d, X) ≥ β_t,i,

where d is the vector of design variables, E[·] denotes expected values of the random variables, h_j and g_k are the deterministic equality and inequality constraints, and the conditions β_i ≥ β_t,i are the corresponding probabilistic constraints. In order to find each β_i, the reliability subproblem is solved in the space of standard uncorrelated random variables u: β_i = min ||u||, subject to G_i(u) = 0.

3.2. Performance Measure Approach (PMA)

This formulation is the inverse of the previous RIA analysis, in such a way that the following can be written:

minimize  f(d, E[X])
subject to  h_j(d, E[X]) = 0,  g_k(d, E[X]) ≤ 0,
            G_p,i(d, X) ≥ 0,

where u is the vector of normalized uncorrelated random variables, h_j and g_k are the equality and inequality constraints, and the conditions G_p,i ≥ 0 are the probabilistic constraints, with the performance measure defined by G_p,i = min G_i(u) subject to ||u|| = β_t,i. So, differently from RIA, the target reliability index β_t,i is fixed beforehand, and a line search must be performed for the point where the hypersphere of radius β_t,i cuts the limit state function (i.e., for the minimum of G_i on that hypersphere). The advantages and disadvantages of using this or the previous (RIA) formulation can be found in [17].
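A minimal sketch of the PMA inner problem under assumed conditions is given below: the limit state is a hypothetical function already written in the standard normal space, and a general-purpose constrained optimizer stands in for the line search mentioned above. The performance measure is the minimum of the limit state function on the hypersphere of radius equal to the target index; a nonnegative result means that the target reliability is met.

import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

beta_t = 3.0                                   # target reliability index

# Hypothetical limit state, already written in the standard normal space U.
def g(u):
    return 3.5 - u[0] - 0.5 * u[1] ** 2

# PMA inner problem: minimize g(u) on the hypersphere of radius beta_t.
sphere = NonlinearConstraint(lambda u: np.linalg.norm(u), beta_t, beta_t)
res = minimize(g, x0=np.array([2.0, 2.0]), constraints=[sphere], method="SLSQP")

g_p = res.fun    # performance measure: g_p >= 0 means the target reliability is met
print(res.x, g_p)    # here g_p is about -1.5, so the target would not be met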

4. Proposed Hybrid Optimization Method

Evolutionary algorithms are widely considered powerful and robust techniques for global optimization and can be used to solve full-scale problems containing several local optima [18]. Although such methods are easy to implement, they can demand a high computational cost due to the large number of function calls. Therefore, they are not recommended for problems in which the objective function is computationally expensive [19]. Besides, it is important that the algorithm parameters be tuned adequately in order to avoid premature convergence of the algorithm.

Pioneering work on hybrid optimization algorithms was performed by [20] with the so-called Augmented Lagrangian Genetic Algorithm, which handles constraints through penalization functions. The algorithm has an outer loop that updates the penalization parameters based on the constraint values and an inner loop running a traditional GA on the penalized fitness function. It thereby avoids the trial-and-error selection and manual increase of penalty constants in the optimization problem. The hybrid method proposed here is different: it is based on the genetic operators of the GA (mutation, crossover, and elitism) associated with a gradient search performed by the SQP algorithm on the best individual of the GA population. In addition, 20% of the best individuals undergo a PSO operation, which updates their positions based on velocity and on global and local cognitive coefficients; these operators are inherited from the traditional PSO algorithm [21]. Using elitism, the best individual of the group has its cost function further improved by sequential quadratic programming (SQP) and is then passed to the next generation. This method is hereafter called HGPS. The main goal of the hybrid method is to accelerate the convergence rate of the global search while maintaining its exploration capability. The use of SQP is justified by its intrinsic speed in finding local optima, while GA and PSO preserve diversity in the exploration of new regions of the search space.

Although the convergence rate can be accelerated, getting stuck in local optima remains possible. For this reason, adaptive mutation is inherited from the GA. PSO operators (like momentum) are also used to help escape from local minima. Initial tests carried out in [22, 23] showed that loss of diversity may still occur, resulting in premature convergence; thus the adaptive mutation operation is recommended mainly in cases where the problem has a complex solution (e.g., nonsmooth or discontinuous functions). Its use is also supported by the good results found in preliminary tests in the literature [24].

A simple sketch illustrating the idea of mixing the best features of several algorithms is shown in Figure 3, considering a population of individuals evolving over generations. Algorithm 1 presents the corresponding pseudocode with the steps followed by the proposed algorithm. It is important to notice that the coding of the design variables (in a GA sense) is real valued; that is, each individual i of generation k is represented by the vector x_i^k = (x_i1^k, …, x_ing^k), where ng is the number of genes (design variables).

Initialize the number of generations; initialize the population size "n", the mutation probability "p_m", the crossover probability "p_c",
the number of genes per individual "ng", and the upper and lower bound values for each gene, "x_up", "x_low".
Main Loop (while the stopping criteria are not met)
Evaluate and rank the population of individuals by the objective function vector.
Generation of the new population
(3.1) SQP method: the best individual of the population is the starting point of the SQP search, whose result
replaces that individual in the next generation.
(3.2) PSO method: 20% of the best individuals undergo PSO operations, with positions constrained to x_low and x_up,
according to:
  v_i^(k+1) = w v_i^k + c1 r1 (x_sbest^k − x_i^k) + c2 r2 (x_gbest^k − x_i^k),  x_i^(k+1) = x_i^k + v_i^(k+1)
(this work assumes the inertia weight w and the cognitive coefficients c1 and c2 according to [25]).
(3.3) GA method (genetic operators)
(3.3.1) Crossover: one-point recombination is assumed, according to [26]
Loop over the pairs of remaining individuals
If a random number is smaller than p_c, exchange the genes beyond a randomly chosen cut point
End If
End Loop
(3.3.2) Adaptive mutation [22]
Loop over the remaining individuals
If the individual is worse than the population average, assign a large mutation probability p_m (such solutions are strongly disrupted)
Else, reduce p_m in proportion to how close the individual is to the best one
End If
For each gene, if a random number is smaller than p_m, replace the gene by a random value within [x_low, x_up]
End Loop
End of the Main Loop

In the pseudocode in Algorithm 1, x_i^k is the current value of the design variable vector of particle i at generation k of the GA, and v_i^(k+1) is its updated velocity. The vector x_sbest^k is the best design variable vector among the selected 20% of best individuals, x_gbest^k is the best design variable vector of the generation, and r1 and r2 are random numbers uniformly distributed in [0, 1]. Regarding the stopping criteria, a criterion based on the coefficient of variation (CoV) of the objective function of the elite individuals within a defined number of generations is adopted. For the following problems, it is considered that the convergence criterion is met if, within five consecutive generations, the coefficient of variation of the objective function of the elite individuals changes by less than a defined tolerance.
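A minimal sketch of this stopping rule, with the five-generation window quoted above and an assumed tolerance value (the helper name is arbitrary):

import numpy as np

def cov_converged(elite_history, window=5, tol=1e-3):
    """True when the coefficient of variation of the elite objective values over
    the last `window` generations falls below `tol` (tolerance value assumed)."""
    if len(elite_history) < window:
        return False
    recent = np.asarray(elite_history[-window:], dtype=float)
    return recent.std() / abs(recent.mean()) < tol

# Elite (best) objective values recorded at the end of each generation.
history = [1500.0, 1391.2, 1300.4, 1265.8, 1253.1, 1252.6, 1252.4, 1252.3, 1252.3]
print(cov_converged(history))    # True once the elite values have stabilized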

This hybridized methodology (HGPS) was proposed aiming at accelerating convergence while maintaining the diversity of individuals to avoid premature convergence. The approach therefore takes advantage of adaptive mutation, of the PSO operator applied to individuals across consecutive generations, and of SQP improvements on the best solution found so far. The probability of mutation is not a constant value but varies according to the fitness function used by the GA, that is, with the individual objective function value relative to the population. According to [22], the value of p_m should depend not only on the spread between the best and the mean fitness of the population (a measure of convergence) but also on the fitness value of the solution itself: the closer an individual's fitness is to the best fitness of the population, the smaller p_m should be, reaching zero for the best individual. To prevent p_m from exceeding 1.0, individuals with fitness below the population average receive a fixed, larger mutation probability (since such solutions need to be strongly disrupted); otherwise, p_m is scaled down according to how close the individual is to the best one.
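A minimal sketch of one HGPS generation is given below, under simplifying assumptions: real-coded individuals, scipy's SLSQP standing in for the SQP step, a simplified choice of the PSO attractors, and illustrative values for the inertia, cognitive, crossover, and mutation parameters (the values actually used in this work are those of Table 1 and [25]).

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def hgps_generation(pop, vel, fitness, lb, ub,
                    w=0.7, c1=1.5, c2=1.5, p_cross=0.8):
    """One generation of the HGPS scheme sketched in Algorithm 1 (simplified).
    `fitness` is the (penalized) objective to be minimized; w, c1, c2 and
    p_cross are assumed values, not the parameters of Table 1."""
    n, ngenes = pop.shape
    f = np.apply_along_axis(fitness, 1, pop)
    order = np.argsort(f)                       # rank the population (best first)
    pop, vel, f = pop[order], vel[order], f[order]

    # (3.1) SQP refinement of the elite individual (local search).
    res = minimize(fitness, pop[0], method="SLSQP", bounds=list(zip(lb, ub)))
    pop[0] = res.x

    # (3.2) PSO update of the best 20% of the population.
    n_pso = max(2, n // 5)
    g_best, s_best = pop[0], pop[1]             # generation best / best of the group
    for i in range(1, n_pso):
        r1, r2 = rng.random(ngenes), rng.random(ngenes)
        vel[i] = (w * vel[i] + c1 * r1 * (s_best - pop[i])
                  + c2 * r2 * (g_best - pop[i]))
        pop[i] = np.clip(pop[i] + vel[i], lb, ub)

    # (3.3) GA operators on the remaining individuals.
    for i in range(n_pso, n - 1, 2):            # one-point crossover on neighbor pairs
        if rng.random() < p_cross:
            cut = rng.integers(1, ngenes) if ngenes > 1 else 0
            pop[i, cut:], pop[i + 1, cut:] = (pop[i + 1, cut:].copy(),
                                              pop[i, cut:].copy())
    f_min, f_mean = f.min(), f.mean()
    for i in range(n_pso, n):                   # adaptive mutation (minimization form)
        if f[i] > f_mean:
            p_mut = 0.5                         # assumed value for below-average designs
        else:
            p_mut = 0.5 * (f[i] - f_min) / max(f_mean - f_min, 1e-12)
        mask = rng.random(ngenes) < p_mut
        pop[i, mask] = (lb + rng.random(ngenes) * (ub - lb))[mask]
    return pop, vel

# Usage on a hypothetical smooth 2-variable test function (minimum at [3, 3]).
lb, ub = np.zeros(2), 10.0 * np.ones(2)
pop = rng.uniform(lb, ub, size=(20, 2))
vel = np.zeros_like(pop)
for _ in range(30):
    pop, vel = hgps_generation(pop, vel, lambda x: float(np.sum((x - 3.0) ** 2)), lb, ub)
print(pop[0])                                   # elite individual, close to [3, 3]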

5. Results

The comparisons are performed using simple optimization algorithms, namely, GA (Genetic Algorithm), DE (Differential Evolution), PSO (Particle Swarm Optimization), FMA (Firefly Metaheuristic Algorithm), and the proposed HGPS algorithm. When possible, the deterministic SQP (sequential quadratic programming) algorithm is also used for comparison; specifically, in example 3, where discrete design variables are present, this comparison is not possible. Since there is no theorem giving the best parameter values of these algorithms for an arbitrary problem, some effort was devoted to finding suitable parameters for each metaheuristic algorithm, based on literature recommendations and trial and error. In all studied cases, in order to ensure fair comparisons, the same set of optimization parameters, previously tuned for the best performance of each algorithm, was applied. Table 1 shows these parameters for the tested algorithms.

5.1. Case 1: 10 Bar Truss Problem

This classical model was presented by [6], who performed the minimization of the truss mass with stress and displacement constraints, taking into account reliability indexes for stress and displacements. The analyzed truss is presented in Figure 4. The bar length is 9.144 m (360 in). The ten bar cross sections are the design variables, bounded by lower and upper limits. The members' elastic modulus is assumed deterministic. The applied loads are assumed random and uncorrelated, following a lognormal distribution with prescribed mean and standard deviation. The material strength is assumed to be a Gaussian random variable with prescribed mean and standard deviation. A limit is imposed on the vertical displacement.

The deterministic constraints for stress and displacement are treated as limit state functions from the probabilistic point of view. The target reliability indexes for the stress in any member (tension or compression) and for the maximum vertical displacement at any node are set to 3.0. Therefore, the mathematical RBDO problem can be stated as follows:

minimize  mass(A)
subject to  β_σ,i(A, X) ≥ β_σ^t,  i = 1, …, 10,
            β_δ,j(A, X) ≥ β_δ^t,  j = 1, …, nn,
            A_low ≤ A ≤ A_up,

where A is the vector of design variables (member areas), β_σ^t = β_δ^t = 3.0 are the target reliability indexes, and β_σ,i and β_δ,j are the reliability indexes for stress and displacement, respectively, which are functions of the vector of design variables and of the vector of random variables X (two loads and the material strength). The parameters A_low and A_up represent the lower and upper values of the design variables, respectively, and nn is the total number of nodes.

Table 2 shows the results obtained with the optimization methods, averaged over 20 independent runs. In the same table, the efficiency parameter (the ratio between the total number of objective function (deterministic) and limit state function (probabilistic) evaluations and the corresponding value for the best valid solution found, i.e., the lowest weight with no constraint violation) and the quality parameter (ratio of best objective function values) are presented for each result.

The results from GA, FMA, and SQP are heavier than the one reported by [6] using the SORA/SQP method, but both PSO and HGPS (particularly HGPS) achieved a lighter truss solution. The author of [8] reported that methods based on SORA may fall into local optima, which originate mainly from the gradient-based methods involved in the optimization of the objective function and constraints. The better results are thus attributed to the metaheuristic capabilities of the proposed HGPS algorithm.

In addition, the GA and FMA solutions were not better (in objective function) than the SORA solution, meaning that their reported configurations do not seem to be the best for this problem; apparently, GA and FMA got stuck in local minima. It was also observed that the PSO, SQP, and HGPS methods found slightly different design variable solutions with similar objective function values, all satisfying the reliability index constraints, which suggests a flat design space near the optimum solution. It is also possible to notice the relatively poor efficiency of HGPS when compared to the PSO algorithm or SQP (the most efficient), although HGPS provides a superior solution (quality). It should be emphasized that none of the deterministic or probabilistic constraints is violated by the HGPS solution. Over the 20 independent runs, HGPS presented a best mass value of 1252.31 kg, a mean value of 1262.15 kg (median 1264.97 kg), a worst mass value of 1269.68 kg, and a coefficient of variation of 0.012.

5.2. Case 2: 37 Bar Truss Problem

In this example, a Pratt-type truss with 37 members is analyzed. The goal of the deterministic optimization problem is to minimize the mass. Nonstructural masses are attached to each node of the bottom chord; see Figure 5. The lower chord is modelled with bar elements of fixed cross-sectional area (4 × 10−3 m2). The remaining symmetric bars, also modelled as bar elements, are design variables. This problem is considered a truss optimization on size and geometry, since all nodes of the upper chord are allowed to vary along the vertical axis in a symmetric way and all the diagonal and upper bars are allowed to vary their cross-sectional areas within lower and upper bounds. Figure 5 describes the structure. In this example, there are 19 design variables (five upper chord nodal coordinates and 14 member areas), with lower and upper limits imposed on the coordinates and on the areas. A deterministic constraint on the first 3 natural frequencies is imposed on the original problem; see Table 3. The optimum mass found for the deterministic problem is 360.56 kg, with the design variables being (0.93911 1.3327 1.5211 1.6656 1.7584 2.9941 × 10−4 1.0019 × 10−4 1.0033 × 10−4 2.4837 × 10−4 1.1804 × 10−4 1.2643 × 10−4 2.5903 × 10−4 1.5983 × 10−4 1.5209 × 10−4 2.5021 × 10−4 1.2443 × 10−4 1.3579 × 10−4 2.3928 × 10−4 1.0014 × 10−4) (coordinates in m and areas in m2), and the natural frequency constraints are satisfied.

The probabilistic problem is then adapted from this deterministic example, described in [27, 28], following the same geometry and deterministic frequency constraint. Displacement and stress constraints as well as concentrated loads are added to the original problem in order to result in extra probabilistic constraints for stress and displacements.

In this problem, the first three natural frequencies are of interest and a reliability constraint on them is considered in the probabilistic problem. Therefore, the goal of the optimization problem is to minimize the mass of the truss taking into account constraints on the reliability indexes for stress, displacement, and natural frequency. All remaining symmetric member areas are considered random variables with a lognormal distribution and a coefficient of variation of 0.01. Therefore, in this problem, the mean values of the cross-sectional areas act both as design variables for the optimization problem and as random variables for the probabilistic problem. The modulus of elasticity is considered to follow a normal distribution with mean 2.1 × 1011 Pa and coefficient of variation of 0.05 (as reported in [29]). The areas of the lower chord bars are equal to 4 × 10−3 m2, are assumed deterministic, and are not design variables. The physical properties and the constraints are shown in Table 3. The violation of any of these constraints represents a failure mode. In this particular model, as previously described, three reliability indexes are considered as extra constraints for the problem, representing the reliability of the frequency, stress, and displacement constraints, respectively.

Equation (21) represents, in this case, the RBDO mathematical model, where β_f^t, β_σ^t, and β_δ^t are the target reliability indexes for the frequency, stress, and displacement constraints:

minimize  mass(d)
subject to  β_f(d, X) ≥ β_f^t,
            β_σ,i(d, X) ≥ β_σ^t  for every member i,
            β_δ,j(d, X) ≥ β_δ^t,  j = 1, …, nn,
            d_low ≤ d ≤ d_up,   (21)

where d represents the vector of design variables (five coordinates and 14 member areas), X represents the random variable vector (14 member areas and the Young modulus), d_low and d_up are the lower and upper limits for the design variables, and nn is the total number of nodes.

Table 4 presents a summary of the results and Table 5 shows the obtained values for each design variable for the tested optimization algorithms. For the 20 independent runs, the HGPS presented the best mass value of 374.187 kg, mean value of 396.680 kg (median 398.13 kg), worst mass value of 405.097 kg, and a coefficient of variation of 0.037.

Figure 6 presents the superimposed geometric configurations obtained with the optimization methods. HGPS gives the best mass value (quality) in this case, although with worse efficiency than the PSO algorithm. The reliability indexes for the displacement and stress constraints resulted in high values (corresponding to a negligible probability of failure), indicating that the frequency constraint is, in this case, the dominant mode of failure, prevailing over the two other constraints. As expected, the displacement, stress, and frequency values of the optimum designs are comfortably away from the corresponding limits.

5.3. Case 3: 25 Bar Space Truss Problem

In this example, the mass minimization of a 25 bar truss is performed, subject to constraints on the reliability indexes for stress, displacement, and the first natural frequency. The deterministic optimization problem was previously presented by [30], where the mass minimization was performed without taking uncertainties into account; see Figure 7. The problem is described as a discrete optimization problem, since the member areas vary over a table of discrete values ranging from 6.4516 × 10−5 m2 (0.1 in2) to 2.2580 × 10−3 m2 (3.5 in2) in increments of 6.4516 × 10−5 m2 (0.1 in2). There are stress constraints in all members, such that a strength limit of ±2.758 × 108 Pa (±40 ksi) must be verified. A displacement constraint of ±8.89 × 10−3 m (±0.35 in) is imposed on nodes 1 and 2. The fundamental frequency is also constrained (the limit is given in Table 6). Figure 7 shows the dimensions of the 25 bar spatial truss example. The physical properties of the material, the upper and lower limits for the design variables, and the deterministic constraints are listed in Table 6.
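Real-coded operators such as those used in HGPS produce continuous gene values, so one simple device for enforcing the tabulated areas is to snap each value to the nearest allowed discrete area before every structural evaluation; the sketch below, with assumed names and the conversions quoted above, illustrates this idea (it is not necessarily the exact mechanism adopted here).

import numpy as np

# Allowed discrete areas: 0.1 in2 to 3.5 in2 in steps of 0.1 in2, converted to m2.
ALLOWED_AREAS = np.arange(1, 36) * 0.1 * 0.0254 ** 2    # 6.4516e-5 ... 2.2580e-3 m2

def snap_to_discrete(areas):
    """Map a vector of continuous areas to the nearest tabulated discrete values."""
    areas = np.atleast_1d(areas)
    idx = np.abs(areas[:, None] - ALLOWED_AREAS[None, :]).argmin(axis=1)
    return ALLOWED_AREAS[idx]

# Example: a continuous candidate produced by the crossover/PSO operators.
print(snap_to_discrete([7.1e-5, 9.8e-4, 2.4e-3]))
# -> approximately [6.4516e-05, 9.6774e-04, 2.2580e-03]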

In this example, eight sets of grouped bars are assumed, resulting in eight design variables, namely, set 1, member 1-2; set 2, members 1-4, 2-3, 1-5, and 2-6; set 3, members 2-5, 2-4, 1-3, and 1-6; set 4, members 3-6 and 4-5; set 5, members 3-4 and 5-6; set 6, members 3-10, 6-7, 4-9, and 5-8; set 7, members 3-8, 4-7, 6-9, and 5-10; set 8, members 3-7, 4-8, 5-9, and 6-10. Table 7 presents the load directions and magnitudes. The spatial truss is fixed at the lower nodes and the loads are applied as indicated in Figure 7. The final mass obtained by the deterministic optimization is 217.60 kg (479.72 lbm), with the optimum design variable vector (discrete areas) being (6.4516 × 10−5 1.9354 × 10−4 2.2580 × 10−3 6.4516 × 10−5 1.0322 × 10−3 5.8064 × 10−4 3.2258 × 10−4 2.2580 × 10−3) m2.

For the RBDO problem, the areas act both as design variables (deterministic) and as random variables (probabilistic). They are assumed uncorrelated, following a lognormal distribution with a coefficient of variation of 0.02; the elastic modulus follows a Gaussian distribution with a coefficient of variation of 0.05. Thus, there are nine random variables in total. Target reliability index levels are prescribed for each constraint (frequency, stress, and displacement). Therefore, the reliability-based optimization problem is written as follows:

minimize  mass(d)
subject to  β_f(d, X) ≥ β_f^t,
            β_σ,i(d, X) ≥ β_σ^t  for every member i,
            β_δ,j(d, X) ≥ β_δ^t,  j = 1, …, nn,
            d_low ≤ d ≤ d_up,

where d represents the vector of design variables (eight discrete member areas), X represents the random variable vector (the 8 discrete member areas and the elastic modulus), d_low and d_up are the lower and upper limits for the design variables, and nn is the total number of nodes.

Table 8 presents a summary of the results and Table 9 shows the design variables obtained with each optimization method. Over the 20 independent runs, HGPS presented a best mass value of 253.80 kg, a mean value of 255.76 kg (median 254.01 kg), a worst mass value of 265.73 kg, and a coefficient of variation of 0.014. In this example, PSO and HGPS found nearly identical results (best quality indexes), although HGPS achieved a better efficiency index. All reliability constraints were satisfied, and the displacement constraint proved to be the dominant mode of failure; the stress and frequency constraints resulted in high reliability indexes (negligible probability of failure).

6. Final Remarks and Conclusions

In this article, a hybridized method for RBDO is proposed. It takes advantage of the main features of heuristic algorithms such as PSO and GA and of gradient-based methods such as SQP. Truss examples of gradually increasing complexity are analyzed using the proposed methodology and compared with literature examples. The best result is obtained with the hybrid HGPS method in the analyzed examples, although in the last example PSO and HGPS presented the same result. Moreover, it is relevant to mention that, due to the complexity of optimization problems with uncertainties, in most situations the heuristic methods alone were not able to find the optimal design, although they reached objective function values of the same order of magnitude. This suggests that a different convergence criterion may be needed for each algorithm in order to avoid premature convergence.

In all problems, the resulting final mass obtained using RBDO is larger than that obtained by deterministic optimization; however, the corresponding reliability levels in stress, displacements, or frequency have been met. These reliability levels are not attained with the original deterministic optimization.

Even with a large number of design variables, all the presented global search methods were able to find feasible solutions. The optimization methods found different optimum design variables for similar objective function values. It can be argued that those design points are not fully converged and may represent local minima; however, for problems with nonexplicit limit state functions, this can only be assessed by checking the results with different algorithms.

The hybrid HGPS method, when compared to the conventional GA, presented a lower mass value (quality of the result) with a lower number of function evaluations (efficiency of the optimization process). The best efficiency was attained with the gradient-based methods, at the expense of worse quality.

The final proposed problem, which cannot be handled by single-loop RBDO algorithms, highlights the importance of the proposed approach in cases where the discrete design variables are also random variables. Some improvements to the HGPS method, such as parallelization of the code, are planned for future research, and more challenging problems are foreseen to test the proposed method.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors acknowledge Brazilian Agencies CNPq and CAPES for the partial financial support for this research.