Abstract

The uncertainty of an engineering system increases with its growing complexity; tolerance to this uncertainty is therefore essential. In the design phase, the output performance should reach the design criterion even under large variations of the design parameters. The tolerance to design parameter variations may be measured by the size of a solution space in which the output performance is guaranteed to deliver the required performance. In order to decouple the dimensions, a maximum solution hyperbox, expressed by intervals with respect to each design parameter, is sought. The proposed approach combines a metaheuristic algorithm with the DIRECT algorithm, where the former is used to seek the maximum size of the hyperbox and the latter is used as a checking technique that guarantees the obtained hyperbox is indeed a solution hyperbox. The proposed approach has three advantages. First, it performs a global search and has a considerably high probability of producing the globally maximum solution hyperbox. Second, it can be used for both analytically known and black-box performance functions. Third, it guarantees that any point selected within the obtained hyperbox satisfies the performance criterion as long as the performance function is continuous. Furthermore, the proposed approach is illustrated by numerical examples and real examples of complex systems. Results show that the proposed approach outperforms the GHZ and CES-IA methods from the literature.

1. Introduction

With the rapid development of technology in recent years, the systems applied in various engineering areas such as electronics, communication, and networking have become more and more complex. The increasing complexity of a system provides a new source of uncertainty. Uncertainty arises because some design parameters cannot yet be specified exactly or may change over the course of development [1, 2]. Moreover, the higher requirements placed on such systems make analyzing this uncertainty all the more important.

Traditional optimization techniques seek an optimum in the design space. Without considering the uncertainty, the optimum design is frequently pushed to the constraint boundary of the design. Consequently, this type of optimum design may be nonrobust and sensitive to parameter variabilities. Some authors even believe that optimization is actually just the opposite of robustness [3]. As reliability is nevertheless mandatory, engineers have to look for robust designs in order to avoid unexpected deviations from the nominal performance [4]. To this end, more advanced methods have been developed to include uncertainties of the parameters and robustness criteria in the optimization. These include
(i) methods for selecting a design that is insensitive to the variations without eliminating possible variations of design parameters, i.e., robust design optimization (RDO), see [5-7]
(ii) methods for identifying optimum designs which are characterized by a low probability of failure, i.e., reliability-based design optimization (RBDO), see [1, 2, 8]
(iii) methods for computing the effect of variability of input parameters on the variability of the objective function value, i.e., sensitivity analysis (SA), see [9]

Unfortunately, RDO, RBDO, and SA suffer from certain disadvantages which exclude them from some important applications. RBDO and RDO deal with models in which the variability of the design parameters is given. If, however, the variability of the design parameters is not known completely, it has to be estimated, which is not always possible. When applying SA, information on how to improve a nonrobust solution is limited: it is unknown which design parameter needs to be adjusted and what value it should assume.

In this paper, we aim to infer the maximum uncertainty or variability of design parameters that we can tolerate without violating the required performance. That is, we focus on finding a maximum box-shaped solution space rather than a single optimum solution. The box-shaped solution space, representing a hyperbox, can be expressed by intervals for each design parameter and has the following benefits:
(i) The performance function delivers the required performance as long as the design parameters lie in the box-shaped solution space.
(ii) For a design to be good, the choice of the value of one parameter within its assigned interval does not depend on the values of other parameters as long as they are within their respective intervals.
(iii) The intervals may be used to assess the robustness and sensitivity of the uncertain design parameters by the widths of the associated intervals.
(iv) The intervals are independent of each other and may be combined with intervals of other disciplines, enabling distributed design in separate disciplines.

Two categories of approaches that solve similar problems can be identified in the literature. The first category is based on data-mining and machine-learning techniques [10-13]. The stochastic approach proposed in [10] combines query and online learning: it probes a box-shaped candidate solution space by stochastic sampling and then readjusts its boundaries in order to remove designs with insufficient performance and to explore design space that has not been probed before. The quality of the results and the efficiency of this stochastic approach are studied in detail in [11]. This stochastic approach, however, produces a box-shaped solution space that may contain some bad designs and may not have the best size because Monte Carlo sampling is used to estimate the locations of good and bad designs. The second category is based on an analytical technique called interval arithmetic [14]. The algorithm presented in [15], for example, applies interval arithmetic within an iterative optimization scheme to compute the feasibility of candidate boxes. Interval arithmetic, however, limits the applicability of the algorithm, as the accuracy of the results depends on the problem and cannot be assessed for general cases. Another drawback of the algorithm in [15] is that the performance function needs to be analytically known and cannot be treated as a black box [16].

The maximum box-shaped solution space must satisfy both the "quantitative" viewpoint (i.e., its volume is maximum) and the "feasible" viewpoint (i.e., it is indeed a solution space). To address these problems, we propose an innovative approach which combines a metaheuristic algorithm with the DIRECT algorithm. More specifically, we use the metaheuristic algorithm to search for the maximum hyperbox and use the DIRECT algorithm to guarantee that all designs in the obtained hyperbox satisfy the performance criterion (i.e., the obtained hyperbox is indeed a solution hyperbox). In the proposed approach, the metaheuristic algorithms do not rely on mathematical properties to be applied, and the DIRECT algorithm only requires evaluations of the performance function. Therefore, our approach in essence treats the performance function as a black box and can be applied to various types of practical engineering problems for complex systems, such as reliability allocation, availability allocation, and maintenance analysis. As a real example of a complex system, the reliability allocation problem of the power-shift steering transmission control system (PSSTCS) of a heavy vehicle is analyzed by the proposed approach in this paper.

The proposed approach is similar to traditional RDO, as the variations of design parameters are considered. However, it differs from the traditional method in that the optimization seeks the intervals of a permissible design range rather than a single design, and the interval boundaries, rather than the design parameters, are used as degrees of freedom.

The paper is organized as follows. In Section 2, motivated by a problem from engineering practice, the optimization problem is formulated for identifying the maximum hyperbox which guarantees the performance requirement. Section 3 introduces the DIRECT algorithm. Section 4 presents in detail the proposed approach. Section 5 discusses the numerical performance of the proposed approach. In Section 6, engineering cases are investigated. Section 7 presents some concluding remarks.

2. Motivation and Problem Statement

2.1. Example Problem

We consider the voltage divider shown in Figure 1 in [15]. The terminal voltage is expressed as and the resistance .

Good designs fulfilling the design goals should satisfy

Figure 2 shows the complete solution space (gray region). However, the classical optimum that minimizes and maximizes is not robust. The immediate neighborhood of the optimum includes designs that violate condition (1) and thus fail to meet at least one design goal.

The classical optimum would not be a suitable choice because of the lack of robustness. Using the complete solution space instead would also be impractical, as the tolerable range for would depend on the choice of . For decoupling the design parameters, hyperboxes contained in the complete solution space, expressed by intervals for each design parameter, are desirable.

The robustness of uncertain design parameters [17] is defined as the widths of the associated intervals whereby the performance criterion (1) is met. Maximum robustness is provided if the intervals are as large as possible. The choice of intervals is not unique, and the size of the interval for depends on the choice of the interval for . The approach proposed here is to maximize the volume of the solution hyperbox, and the resulting unique intervals are shown in Figure 2.

2.2. Problem Statement

Let represent the design parameters or design points or designs, where is the number of dimensions. The output performance at is given by (2), with being the performance function. If the analytical form of is not known, it is considered as a black box, and may have to be computed numerically. In typical optimization problems, the performance is optimized, that is, a design is sought such that obtains an extreme value. In the approach presented here, the performance has to be sufficient by satisfying the performance criterion (3), with the threshold value . The performance criterion is the requirement on the performance. For example, if the performance function is the reliability, the performance criterion is that the reliability should be above the given threshold (e.g., 0.99).

Design points satisfying (3) are called good; otherwise, they are called bad. The design space is assumed to be continuous and defined by the lower and upper bounds of the design parameters, given by and , that is, . The complete solution space is the set of all good design points in the design space. The shape of the complete solution space depends on the output performance and the performance criterion given by (2) and (3), respectively. In general, the boundaries of the range of one parameter depend on the values of other parameters. In order to obtain interval boundaries of each parameter that are independent of the other parameters, only subspaces that are hyperboxes are considered. The hyperbox can be expressed by (4); that is, the lower bound and upper bound of the hyperbox are and , respectively. Moreover, let denote the bounds of the hyperbox, namely, (5).

The volume of the hyperbox is thus given by equation (6).

A hyperbox that includes only good design points is called a solution hyperbox. In other words, a hyperbox which meets the condition for all is a solution hyperbox. Furthermore, is defined as the global minimum of the performance function on the hyperbox, that is, .
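To make these definitions concrete, the sketch below (Python, with illustrative names such as is_solution_hyperbox that are not from the paper) stores a hyperbox as its bound vectors, computes the volume as the product of the interval widths, and approximates the box-wise minimum by a dense grid, which is only practical in low dimensions; in the proposed approach this minimum is instead obtained by the DIRECT algorithm, which provides the actual guarantee for continuous performance functions.

```python
import numpy as np

def volume(l, u):
    """Volume of the hyperbox [l1, u1] x ... x [ld, ud]."""
    return float(np.prod(np.asarray(u, float) - np.asarray(l, float)))

def is_solution_hyperbox(f, l, u, f_c, n_grid=50):
    """Crude check that f(x) >= f_c everywhere in the hyperbox.

    The box-wise minimum of f is approximated here by a dense grid sample
    (low-dimensional illustration only); the paper performs this check with
    the DIRECT algorithm instead.
    """
    l, u = np.asarray(l, float), np.asarray(u, float)
    grids = [np.linspace(a, b, n_grid) for a, b in zip(l, u)]
    pts = np.stack(np.meshgrid(*grids), axis=-1).reshape(-1, len(l))
    f_min = min(f(x) for x in pts)
    return f_min >= f_c

# example: performance f(x) = x1 + x2 with threshold f_c = 1
f = lambda x: x[0] + x[1]
print(volume([0.6, 0.6], [1.0, 1.0]))                        # 0.16
print(is_solution_hyperbox(f, [0.6, 0.6], [1.0, 1.0], 1.0))  # True
```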

With these preparations at hand, the problem of searching for a maximum solution hyperbox can be formulated as the following constrained optimization problem:

If is analytically known and (i.e., ) can be explicitly expressed by and , the optimization problem (7) can be solved by traditional optimization algorithms. Unfortunately, in most engineering problems, the following two cases are more common. First, the performance function is not analytically known; it is instead produced by numerical simulations. Second, cannot be explicitly expressed by and . To cope with these two cases, this paper aims to present a new approach based on performance function samples (i.e., the performance function can be analytically known or a black box). There are two key difficulties in solving this optimization problem. The first is how to maximize the size of the hyperbox. The second is how to guarantee that the obtained hyperbox is a solution hyperbox.

Remark 1. If the performance criterion is , the optimization problem formulation (7) sets and .

Remark 2. Typically, engineering problems have multiple performance functions . In the optimization problem formulation (7), is replaced by .

3. The DIRECT Algorithm

We consider the global optimization problem of the form (8). As mentioned in Section 2, is an -dimensional hyperbox (hyper-rectangle), and is a performance function.

In the literature, there are numerous methods for dealing with problem (8) (see, e.g., [18-27]). DIRECT (DIviding RECTangles) is one of the best-known partitioning-based algorithms that balance local and global search in an attempt to find the global minimizer efficiently [24-27]. The main procedure of the DIRECT algorithm involves partitioning potentially optimal (the most promising) hyper-rectangles and evaluating the performance function at the centers of these hyper-rectangles [28-31].

DIRECT’s performance is independent of the scaling of and the problem is typically scaled as follows:

Thus, the bound-constrained problem (8) can be reduced to the following problem over the unit hypercube . That is, the search space is transformed to the -dimensional unit hypercube.
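As a small illustration of this scaling (a minimal sketch with assumed function names; the affine mapping itself is standard), the original hyperbox and the unit hypercube can be mapped back and forth as follows; DIRECT works entirely in the unit cube and maps center points back to the original space only when the performance function is evaluated.

```python
import numpy as np

def to_unit(x, l, u):
    """Map a point of the hyperbox [l, u] to the unit hypercube [0, 1]^n."""
    l, u = np.asarray(l, float), np.asarray(u, float)
    return (np.asarray(x, float) - l) / (u - l)

def from_unit(z, l, u):
    """Map a point of the unit hypercube back to the original hyperbox."""
    l, u = np.asarray(l, float), np.asarray(u, float)
    return l + np.asarray(z, float) * (u - l)

l, u = np.array([0.5, 2.0]), np.array([1.5, 6.0])
z = to_unit([1.0, 4.0], l, u)        # [0.5, 0.5]
print(z, from_unit(z, l, u))         # round trip recovers [1.0, 4.0]
```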

3.1. Selection Scheme

To select the potentially optimal hyper-rectangles, DIRECT assesses the goodness based on the lower bound estimates for the performance function over each hyper-rectangle. The requirement of potential optimality is stated formally in Definition 1.

Definition 1. (potentially optimal hyper-rectangle). Suppose that we have a partition of the unit hypercube into hyper-rectangles. Let denote the center point of the th hyper-rectangle, denote the distance from the center point to the vertices, be a positive constant, and be the current best function value. A hyper-rectangle is said to be potentially optimal if there exists some such that conditions (11) and (12) hold. The hyper-rectangle is potentially optimal if the lower Lipschitz bound for the performance function, computed by the left-hand side of (11), is the smallest one for some positive constant among all hyper-rectangles. In (12), the parameter is used to protect against an excessive refinement of the local minimum [24, 31].
DIRECT identifies potentially optimal hyper-rectangles in at least two different ways: using the modified Graham's scan algorithm [32] or the rule described by Lemma 2.3 in [33]. Usually this does not make a significant difference; thus, in this paper, we use the modified Graham's scan algorithm.
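As an illustration of Definition 1, the sketch below checks potential optimality directly via the rule of Lemma 2.3 in [33]; it is an O(m^2) transcription intended for readability, the value eps = 1e-4 and all names are assumptions for the example, and production DIRECT codes use the modified Graham scan with the same outcome.

```python
import numpy as np

def potentially_optimal(f_vals, d_vals, f_best, eps=1e-4):
    """Indices of potentially optimal hyper-rectangles (Definition 1).

    f_vals[i] is the value at the center of rectangle i, d_vals[i] its
    center-to-vertex distance, f_best the current best value, and eps the
    positive constant of condition (12).
    """
    f_vals = np.asarray(f_vals, float)
    d_vals = np.asarray(d_vals, float)
    optimal = []
    for i in range(len(f_vals)):
        same = np.isclose(d_vals, d_vals[i])
        smaller, larger = d_vals < d_vals[i], d_vals > d_vals[i]
        if f_vals[i] > f_vals[same].min():          # not best among equal sizes
            continue
        lo = (np.max((f_vals[i] - f_vals[smaller]) / (d_vals[i] - d_vals[smaller]))
              if smaller.any() else 0.0)
        hi = (np.min((f_vals[larger] - f_vals[i]) / (d_vals[larger] - d_vals[i]))
              if larger.any() else np.inf)
        if lo > hi:                                 # no admissible constant K
            continue
        if f_best != 0:                             # condition (12)
            if eps > (f_best - f_vals[i]) / abs(f_best) + d_vals[i] * hi / abs(f_best):
                continue
        elif f_vals[i] > d_vals[i] * hi:
            continue
        optimal.append(i)
    return optimal

# three rectangles: the largest one and the one with the lowest value qualify
print(potentially_optimal([1.0, 0.5, 0.8], [0.5, 0.25, 0.25], f_best=0.5))  # [0, 1]
```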

3.2. Division and Sampling Scheme

We start by sampling the points , , where is the center point of the hypercube, is one-third the side length of the hypercube, and is the th unit vector (i.e., a vector with a one in the th position and zeros elsewhere). The strategy in [31] is used to divide the hypercube.

More precisely, we adopt the following rule for subdividing a hypercube. Let be the best of the function values sampled along dimension . Then, we start by splitting along the dimension with the smallest value. Once this is done, we split the rectangle containing into thirds along the dimension with the next smallest value. We continue in this way until we have split on all long dimensions. By dividing only along the long dimensions, we ensure that the rectangles shrink in every dimension. A formal description of the rectangle division procedure is given in Algorithm 1.

(1)Identify the set of dimensions with the maximum side length. Let equal one-third of this maximum side length.
(2)Sample the function at the points for all , where is the center of rectangle and is the th unit vector.
(3)Divide the rectangle containing into thirds along the dimensions in , starting with the dimension with the lowest value of and continuing to the dimension with the next smallest until we have split on all dimensions .
3.3. The DIRECT Algorithm Flowchart

We now have all the ingredients for the DIRECT algorithm. We initialize the search by sampling at the center of the unit hypercube. Each iteration begins with identifying the set of potentially optimal hyper-rectangles as described in Section 3.1. These hyper-rectangles are then sampled and subdivided as described in Section 3.2. The process continues until a prespecified iteration limit is reached. A formal statement of the DIRECT algorithm is given in Algorithm 2.

(0) Parameter setting: set the maximum number of iterations .
(1) Normalize the search space to be the unit hypercube. Let be the center point of this hypercube and evaluate . Set , , and .
(2) Identify the set of potentially optimal rectangles.
(3) Select any rectangle .
(4) Use Algorithm 1 to determine where to sample within rectangle and how to divide the rectangle into subrectangles. Update , set , where is the number of new points sampled.
(5) Set ; If go to Step 3.
(6) Set . If , stop and output the minimum value ; Otherwise, go to Step 2.

In order to better illustrate how the DIRECT algorithm works, Figure 3 shows the first three iterations of the algorithm on the two-dimensional Branin function in [34]. For each iteration, the first row shows the set of potentially optimal rectangles (gray colored), and the second row shows how these rectangles are sampled (red points) and subdivided.
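For readers who prefer code to pseudocode, the following is a compact, simplified sketch of the loop just described (all names are illustrative). For brevity it selects one best rectangle per distinct size instead of applying the full potentially optimal test of Section 3.1, so it should be read as an illustration rather than a faithful DIRECT implementation.

```python
import numpy as np

def direct_minimize(f, l, u, max_iter=20):
    """Minimal DIRECT-style sketch: normalize to the unit hypercube,
    trisect selected rectangles along their longest sides (Algorithm 1),
    and return the best sampled value and point."""
    l, u = np.asarray(l, float), np.asarray(u, float)
    n = len(l)
    scale = lambda c: l + c * (u - l)               # unit cube -> design space

    centers = [np.full(n, 0.5)]                     # rectangle centers
    sides = [np.full(n, 0.5)]                       # half side lengths
    fvals = [f(scale(centers[0]))]

    for _ in range(max_iter):
        d = np.array([np.linalg.norm(s) for s in sides])
        selected = []
        for size in np.unique(np.round(d, 12)):     # one candidate per size
            idx = np.where(np.isclose(d, size))[0]
            selected.append(int(idx[np.argmin(np.array(fvals)[idx])]))

        for i in sorted(set(selected), reverse=True):
            c, s, fc = centers[i], sides[i], fvals[i]
            del centers[i], sides[i], fvals[i]
            longest = np.where(np.isclose(s, s.max()))[0]
            delta = 2.0 * s.max() / 3.0             # one third of the side length
            new = [(j, c + sign * delta * np.eye(n)[j])
                   for j in longest for sign in (+1.0, -1.0)]
            new = [(j, cj, f(scale(cj))) for j, cj in new]
            # split first along the dimension with the best sampled value
            order = sorted(set(longest),
                           key=lambda j: min(v for jj, _, v in new if jj == j))
            s_child = s.copy()
            for j in order:
                s_child = s_child.copy()
                s_child[j] /= 3.0
                for jj, cj, v in new:
                    if jj == j:
                        centers.append(cj); sides.append(s_child.copy()); fvals.append(v)
            centers.append(c); sides.append(s_child.copy()); fvals.append(fc)

    best = int(np.argmin(fvals))
    return fvals[best], scale(centers[best])

# usage on a simple two-dimensional function
fmin, xmin = direct_minimize(lambda x: (x[0] - 0.2) ** 2 + (x[1] - 0.7) ** 2,
                             l=[0.0, 0.0], u=[1.0, 1.0], max_iter=15)
print(fmin, xmin)
```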

3.4. Convergence

The DIRECT algorithm is guaranteed to converge to the globally optimal function value if the performance function is continuous, or at least continuous in the neighborhood of a global optimum [31]. This follows from the fact that, as the number of iterations goes to infinity, the set of points sampled by DIRECT forms a dense subset of the unit hypercube. That is, given any point in the unit hypercube and any , DIRECT will eventually sample a point within the neighborhood of with radius . It has been proved that DIRECT with a large enough iteration number can converge with high accuracy [31, 35-37].

4. The Proposed Approach

As metaheuristic algorithms are general and do not rely on mathematical properties to be applied, and the DIRECT algorithm converges to the global optimum with high accuracy, an innovative approach which combines metaheuristic algorithms with the DIRECT algorithm is proposed in this paper to tackle the two challenges mentioned in Section 2.2. First, the metaheuristic algorithm is used for seeking the largest hyperbox. Second, the DIRECT algorithm is used as a checking technique to ensure that all designs in the obtained hyperbox satisfy the performance criterion.

More specifically, two methods are proposed in this paper: one combines simulated annealing with the DIRECT algorithm, and the other combines the distributed covariance matrix adaptation evolution strategy with the DIRECT algorithm. They will be referred to as the simulated annealing DIRECT algorithm (SA-DIRECT) and the distributed covariance matrix adaptation evolution strategy DIRECT algorithm (CMA-ES-DIRECT), respectively. Of course, one can use other metaheuristic algorithms, but these two are derivative-free, insensitive to the initial solution, and capable of global search.

4.1. Simulated Annealing DIRECT Algorithm (SA-DIRECT)

Simulated annealing (SA), proposed by Kirkpatrick et al. [38], is based on the similarity between the solid annealing process and solving global optimization problems. It is a generic probabilistic meta-algorithm for global optimization. SA has been applied to a wide range of problems, especially in cases where traditional optimization techniques have shown poor performance or have simply failed [39].

The simulated annealing DIRECT (SA-DIRECT) algorithm is illustrated in Algorithm 3. Initially, SA-DIRECT sets the design space and builds an initial solution for problem (7). More specifically, we generate a solution , where and are stochastically generated from the range . Then, we evaluate by the DIRECT algorithm in order to check whether this generated solution meets the constraints. If the constraint conditions, i.e., and , are met, we calculate the associated value of the fitness function (i.e., the volume expressed by equation (6)). Otherwise, we discard this generated solution and generate a new initial solution.

(0) Parameter setting:
    Set the initial temperature , the cooling ratio , the maximum number of the inner iterations and the maximum number of the outer iterations .
(1) Initialization:
 (1.1) Set the lower and upper bounds of design space, and , and the threshold level .
 (1.2) Generate randomly and from the range .
 (1.3) Evaluate by Algorithm 2 with .
 (1.4) If all the constraints are satisfied, i.e., and , then calculate the volume by equation (6) and store it in , i.e., ; Otherwise, go to Step 1.2.
 (1.5) Set , , , and .
(2) For do:
 (2.1) Set .
 (2.2) While do:
  (2.2.1) Generate a new solution in the neighborhood of , i.e., , , where and are randomly and uniformly generated from range , and denotes componentwise multiplication.
  (2.2.2) Evaluate by Algorithm 2 with the parameter setting .
  (2.2.3) If constraints and are all met, then evaluate the volume by equation (6), i.e., ; Otherwise go to Step 2.2.1.
  (2.2.4) Evaluate , if , then set and ; If , then set , .
  (2.2.5) If , then generate a random number in and evaluate ; If , then set , , .
  (2.2.6) Set .
 (2.3) Set .
(3) Verification:
 (3.1) Evaluate by Algorithm 2 with the parameter setting .
 (3.2) If , output the best solution ; Otherwise go to Step 1.

Next, during the process of optimization, in each iteration, a neighbor solution to the current solution is generated according to a predefined neighborhood structure, and is evaluated by the DIRECT algorithm. If the condition is not satisfied, this neighbor solution is rejected. Otherwise, the volume is calculated by equation (6). An improving move (the volume of the neighbor is larger than that of the current solution) is always accepted, whilst a worse neighbor is accepted with a certain probability determined by the Boltzmann probability [40], , where is the difference between the volume of the current solution and the generated neighbor, and is randomly generated from the range . Moreover, is a parameter (called the temperature) which is periodically lowered during the search process according to the temperature updating rule. The temperature updating rule (as adopted in [41]) is , where is the initial temperature, is the cooling ratio, and is the number of times the temperature has been lowered. The halting criterion of SA is reaching the maximum number of times the temperature can be lowered.
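A minimal sketch of the acceptance and cooling rules just described follows; it assumes the common geometric cooling schedule (the paper's exact update formula is the one from [41] and is not reproduced here), and the function and variable names are illustrative only.

```python
import math
import random

def sa_accept(delta_v, temperature):
    """Simulated-annealing acceptance rule.

    delta_v is the volume loss (current volume minus neighbour volume):
    improving moves (delta_v <= 0) are always accepted, worse moves are
    accepted with the Boltzmann probability exp(-delta_v / T).
    """
    if delta_v <= 0:
        return True
    return random.random() < math.exp(-delta_v / temperature)

# geometric cooling schedule T_k = alpha**k * T0 (assumed form),
# with T0 = 100 and alpha = 0.94 as in the parameter study of Section 5
T0, alpha = 100.0, 0.94
print([T0 * alpha ** k for k in range(5)])
```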

Finally, the SA-DIRECT verifies whether the best solution meets the constraints by the DIRECT algorithm with a relatively large maximum number of iterations. If the constraints are not met, the SA-DIRECT goes back to Step 1. Otherwise, the SA-DIRECT outputs the solution.

In this paper, the maximum number of the iterations at each temperature and the maximum number of times the temperature could be lowered .

4.2. Distributed Covariance Matrix Adaptation Evolution Strategy DIRECT Algorithm (CMA-ES-DIRECT)

The evolution strategy (ES) has been developed as a powerful tool for numerical optimization tasks [42]. The covariance matrix adaptation evolution strategy (CMA-ES) is an improved, robust form of the evolution strategy [43]. The main feature of the CMA-ES is its invariance to landscape transformations and scaling modulation. The CMA-ES is also invariant to rotation, reflection, and translation, besides maintaining order and monotonicity [43]. Its behavior is consistent across functions of varied nature, and it can be easily generalized. The update schemes of the CMA-ES largely reduce the algorithmic complexity, and thus it offers a very promising means of maximization over fitness-function landscapes [44, 45]. The implementation details of CMA-ES are given in the appendix.

The CMA-ES is powerful and performs well. However, better results can be obtained by the distributed covariance matrix adaptation evolution strategy (CMA-ES) with multiple subpopulations and a proper migration strategy [46]. In particular, the population model of the CMA-ES divides the large population into multiple small demes. These demes evolve independently of each other for a certain number of generations, and then a number of individuals are migrated from one deme to another. The CMA-ES preserves diversity in the population through the multiple demes, while increasing the selection pressure through periodic migration [47].

The migration operator includes four parameters: (a) migration topology that defines the topology of the connections between demes, (b) the migration rate (the fraction of the population that migrates) that controls how many individuals migrate, (c) migration interval that affects the frequency of migrations, and (d) migration policy that selects emigrants and replaces existing individuals with incoming migrants [48].

In this paper, the ring topology is selected as the migration scheme [47]. In other words, individuals are transferred between directionally adjacent demes. The migration policy is that the best individuals (those with the largest volume values) are selected as migrants to replace the worst individuals in the receiving demes. The migration interval is set to 20 generations to permit the demes to partially converge prior to migration. The migration rate is set to 5 percent to provide a reasonable selection pressure. The remaining experimental parameters directly related to the CMA-ES are the deme size and deme count.
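A sketch of this migration step under the stated settings follows (illustrative names; in the algorithm it would be invoked once every 20 generations with a 5 percent migration rate).

```python
import numpy as np

def migrate_ring(demes, fitness, migration_rate=0.05):
    """One ring-topology migration step.

    demes[k] is an array of individuals of deme k, fitness[k] the matching
    volume values.  The best fraction of each deme replaces the worst
    individuals of the directionally adjacent deme on the ring.
    """
    n_demes = len(demes)
    n_mig = max(1, int(round(migration_rate * len(demes[0]))))
    # pick emigrants (largest volume) before any replacement happens
    emigrants = [demes[k][np.argsort(fitness[k])[::-1][:n_mig]].copy()
                 for k in range(n_demes)]
    for k in range(n_demes):
        target = (k + 1) % n_demes                  # next deme on the ring
        worst = np.argsort(fitness[target])[:n_mig]
        demes[target][worst] = emigrants[k]
    return demes
```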

The distributed covariance matrix adaptation evolution strategy DIRECT (CMA-ES-DIRECT) algorithm is illustrated in Algorithm 4. The CMA-ES-DIRECT begins by generating the initial mean, covariance matrix, and step size. Subsequently, a new individual is randomly generated by sampling from a multivariate normal distribution, and the DIRECT algorithm checks whether the constraints are satisfied. If the constraints are not satisfied, resampling is performed until the generated individual becomes good. The fitness, i.e., the volume of the hyperbox in this paper, is then calculated. The next step involves storing the individual with the maximum volume. Then, the migration operator is performed. Next, the update schemes given by equations (A.2)-(A.6) in the appendix are applied to equip the individuals for the next generation.

(0) Parameter setting:
  Set the default strategy parameters according to Appendix A.
(1) Initialization:
 (1.1) Set the lower and upper bounds of the design space, and , the best volume , the maximum number of the outer iterations , and the threshold level .
 (1.2) Set the deme size , the deme count , the migration rate , the migration interval .
 (1.3) Set the initial covariance matrix , the step-size , the evolution paths and , (details in the appendix).
 (1.4) Generate the initial mean uniformly and randomly from the range , .
(2) For do:
 (2.1) For do:
  (2.1.1) For do:
   (2.1.1.1) Set , then generate a random number from normal distribution . Based on equation (5), and .
   (2.1.1.2) Evaluate by Algorithm 2 with parameter setting .
   (2.1.1.3) If all constraints are satisfied, i.e., , and , then calculate the volume by equation (6), i.e., ; Otherwise go to Step 2.1.1.1.
  (2.1.2) Set and . Sort individuals based on the value of the volume from largest to smallest. If , set , and , where denotes the index of th ranked individual.
 (2.2) Migrate:
  (2.2.1) If is divisible by do:
   (2.2.1.1) For do:
    (2.2.1.1.1) Set and . If , then set and ; Otherwise set and .
    (2.2.1.1.2) Select best individual out of to replace worst individual out of .
 (2.3) For do:
  (2.3.1) Recombination:
   (2.3.1.1) Update , according to equation (A.3) in the appendix.
  (2.3.2) Step-size control
   (2.3.2.1) Update , according to equation (A.4) in the appendix.
   (2.3.2.2) Update , according to equation (A.5) in the appendix.
  (2.3.3) Covariance matrix adaptation
   (2.3.3.1) Update , according to equation (A.6) in the appendix.
   (2.3.3.2) Update , according to equation (A.7) in the appendix.
(3) Verification:
 (3.1) Evaluate by Algorithm 2 with the parameter setting .
 (3.2) If , output the individual ; Otherwise go to Step 1.

Finally, the CMA-ES-DIRECT verifies whether the best individual meets the constraints by the DIRECT algorithm with a relatively large maximum number of iterations. If the constraints are not met, the CMA-ES-DIRECT goes back to Step 1. Otherwise, the CMA-ES-DIRECT outputs the best individual .

4.3. Characteristics of the Proposed Approach

As these two metaheuristic algorithms are insensitive to the initial solution, derivative-free, and capable of global search [49], and the DIRECT algorithm with a large enough iteration number converges to the global optimum with high accuracy [31], the proposed methods which combine metaheuristic algorithms with the DIRECT algorithm have the following good characteristics:
(1) The quality of the finally obtained hyperbox is not affected by the initial solution, except that the computational time may increase with improper starting designs.
(2) The proposed methods do not get trapped in a local optimum and have a high probability of reaching the globally maximum solution hyperbox.
(3) Because of the discrete nature of the performance function evaluations in the DIRECT algorithm, the proposed methods can be used for both analytically known and black-box performance functions.
(4) The proposed methods guarantee that any point selected within the finally obtained hyperbox satisfies the performance criterion as long as the performance function is continuous.

In the majority of practical engineering problems, the continuity of the performance function is easily satisfied, no matter whether it is analytically known or a black box. Therefore, the proposed approach has strong applicability and is valuable for practical engineering applications.

From Algorithms 3 and 4, we see that the DIRECT algorithm with a relatively small maximum number of iterations (i.e., ) is adopted in the iterative optimization phase, while the DIRECT algorithm with a relatively large maximum number of iterations (i.e., ) is adopted in the verification phase. This setting reduces the computation time while ensuring that any point selected within the obtained hyperbox satisfies the performance criterion.
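The two-budget constraint check can be sketched as follows. Here box_minimum is only a Monte Carlo stand-in for Algorithm 2 (it lacks DIRECT's guarantee for continuous functions), so the sketch is purely illustrative of how a small budget is used inside the optimization loop and a large one in the verification phase; all names are assumptions.

```python
import numpy as np

def box_minimum(f, l, u, budget, seed=0):
    """Stand-in for Algorithm 2: estimate min f on [l, u] with `budget` samples.
    (In the proposed methods this role is played by the DIRECT algorithm.)"""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(l, u, size=(budget, len(l)))
    return min(f(x) for x in pts)

def feasible(f, l, u, f_c, final=False, k_small=200, k_large=20000):
    """Cheap check inside the search loop, expensive check for verification."""
    return box_minimum(f, l, u, k_large if final else k_small) >= f_c

f = lambda x: x[0] + x[1]
print(feasible(f, [0.6, 0.6], [1.0, 1.0], 1.0))               # quick check
print(feasible(f, [0.6, 0.6], [1.0, 1.0], 1.0, final=True))   # verification
```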

Due to the stochastic nature of metaheuristic algorithms, it is important to show the robustness of the proposed approaches. Therefore, as adopted in [15], the proposed approach is run 20 times, and the best solution among the 20 runs is used in practice. In addition, the boxplot containing the minimum, the maximum, the median, and the first and third quartiles of the 20 runs is shown.

5. Numerical Examples and Comparisons

A stochastic approach (we denote this method by “GHZ” hereafter) that computes the solution hyperbox has been discussed in [11]. An approach (we denote this method by “CES-IA” hereafter) based on the use of cellular evolutionary strategies (CES) and interval arithmetic (IA) has been proposed in [15].

In order to compare the proposed SA-DIRECT and CMA-ES-DIRECT methods with the existing GHZ and CES-IA methods, the following two numerical examples are considered:
(i) Example 1 studies multiple performance functions. Since each performance function is monotone, the exact solution of the optimization problem (7) exists and can be considered globally optimal.
(ii) Example 2 studies a performance function which has multiple local minima. Unfortunately, the exact solution of the optimization problem (7) does not exist.

In addition, for comparison purposes, the solutions of the CMA-ES-IA method, which combines the distributed covariance matrix adaptation evolution strategy (CMA-ES) with interval arithmetic (IA), are also shown.

5.1. Example 1: Multiple Performance Functions

Example 1 comes from [11], including multiple performance functions, where and are the lower and upper bounds of the design space, respectively, and

Since is monotone, as the definition , we have

That is, has an explicit expression, for .

Therefore, the optimization problem (7) can be restated as follows:

Obviously, (16) is an optimization problem with inequality constraints. The exact solution can be found by the Lagrange multiplier method [50] and has the value tabulated in the second column of Table 1.

The result in [11] obtained by the GHZ method is listed in the third column of Table 1. The CES-IA method adopts the same parameter setting as in [15] and the best solution among its 20 runs is shown in the fourth column of Table 1. The best solution among 20 runs of the CMA-ES-IA method is listed in the fifth column of Table 1. The row entitled “Volume” contains the volume of the obtained hyperbox, and the row entitled “Error” contains the relative error between the volume and the exact volume.

In the proposed SA-DIRECT method, the initial temperature is set to 100, 150, and 200, and the cooling ratio is set to 0.94 and 0.98. In the proposed CMA-ES-DIRECT method, two settings of deme size and deme count are chosen: (8, 1) and (8, 3). The best solutions among the respective 20 runs of the SA-DIRECT and CMA-ES-DIRECT methods are presented in Table 1. A visualization of the results of these methods in Table 1 is shown in Figure 4. Moreover, we magnify region A in Figure 4 and show it in Figure 5.

According to Table 1, the volumes of the obtained hyperboxes by the CES-IA, CMA-ES-IA, SA-DIRECT, and CMA-ES-DIRECT methods are approximately the same as the exact value, whereas the volume of the obtained hyperbox by the GHZ method has a somewhat large error. Moreover, we can see that the SA-DIRECT method is insensitive to parameter variations of simulated annealing.

From Figure 4, we observe that the obtained hyperboxes by the SA-DIRECT and CMA-ES-DIRECT methods are in close proximity to the exact solution. From Figure 5, it can be seen that the green lines exceed the gray region, which means that the hyperbox obtained by the GHZ method contains some design points which are not within the complete solution space, and therefore is not a solution hyperbox. However, the hyperboxes obtained by the CES-IA, CMA-ES-IA, SA-DIRECT, and CMA-ES-DIRECT methods all locate within the complete solution space and therefore are solution hyperboxes.

Consequently, the hyperboxes obtained by the CES-IA, CMA-ES-IA, SA-DIRECT, and CMA-ES-DIRECT methods not only are close to the exact hyperbox but also guarantee that any point selected within those hyperboxes satisfies the performance criterion.

Furthermore, we sketch the boxplots using the volume of the hyperbox of 20 runs, which are presented in Figure 6. From Figure 6, we observe that, in each case of parameter setting, the volumes of the obtained hyperboxes are all above 2.2000 after eliminating outliers, and the range of the volumes is relatively small; therefore, the two proposed approaches are robust.

5.2. Numerical Example 2: Multiple Local Minima

The maximization of the solution hyperbox is considered in the case of the Michalewicz function:

Stripinis et al. [51] have pointed out that the Michalewicz function has two local minima and one global minimum. In this example, the lower and upper bounds of the design space are and , respectively, and the threshold value is .

Unfortunately, the bound-constrained optimization problem has no explicit expression; therefore, we cannot obtain the exact solution of the optimization problem (7) in this example. However, can be solved by the FMINCON function of MATLAB. For comparison purposes, an approach (we denote this method by "SA-FMINCON" hereafter) which combines simulated annealing and the FMINCON function is presented to solve problem (7).

The best solutions of their respective 20 runs of the GHZ, CES-IA, CMA-ES-IA, SA-DIRECT, CMA-ES-DIRECT, and SA-FMINCON methods are shown in Table 2. A visualization of these results is shown in Figure 7. According to Table 2 and Figure 7, the hyperbox obtained by the SA-FMINCON method is the maximum and the hyperboxes obtained by the SA-DIRECT and CMA-ES-DIRECT methods are much larger than those by the GHZ, CES-IA, and CMA-ES-IA methods. Besides, we see that the SA-DIRECT method is relatively insensitive to the initial parameters of simulated annealing.

The values of the performance function on the obtained hyperboxes by the GHZ, CES-IA, CMA-ES-IA, SA-DIRECT, CMA-ES-DIRECT, and SA-FMINCON methods are shown in Figure 8. We can see that the values of the performance function on the hyperboxes by the CES-IA, CMA-ES-IA, SA-DIRECT, and CMA-ES-DIRECT methods are all above the performance criterion while some values of the performance function on those by the GHZ and SA-FMINCON methods are below the performance criterion. This indicates that the obtained hyperboxes by the CES-IA, CMA-ES-IA, SA-DIRECT, and CMA-ES-DIRECT methods only include good design points while those by the GHZ and SA-FMINCON methods include some bad design points.

The boxplots of the volume of the hyperbox over 20 runs are given in Figure 9. From Figure 9, we see that, in each case of parameter setting, the volumes of the obtained hyperboxes are all above 3.1400 after eliminating outliers, and the range of the volumes is relatively small; therefore, the robustness of the proposed approaches is verified.

5.3. Discussion

The CES-IA, CMA-ES-IA, and SA-FMINCON methods require an analytically known performance function, while GHZ, SA-DIRECT, and CMA-ES-DIRECT methods can be used for both analytically known and black-box performance functions.

The hyperboxes obtained by the CES-IA and CMA-ES-IA methods in Example 1 are very close to those obtained by the proposed SA-DIRECT and CMA-ES-DIRECT methods, while the hyperboxes obtained by the CES-IA and CMA-ES-IA methods in Example 2 are much smaller than those obtained by the proposed SA-DIRECT and CMA-ES-DIRECT methods. This is mainly caused by the dependency problem of interval arithmetic. In the CES-IA and CMA-ES-IA methods, the range of the performance function inside the generated hyperbox is calculated by interval arithmetic. In Example 1, each design parameter appears only once in the performance function; thus, interval arithmetic can determine the range of the performance function very accurately. However, in Example 2, the performance function cannot be described by the output interval directly; instead, the Taylor interval extension [52] is used. In this case, the design parameters occur several times in the calculation, and each occurrence is treated independently. Consequently, the range of the performance function inside the generated hyperbox is overestimated, and the actual solution hyperbox is discarded. Rocco et al. [15] have pointed out that the main drawback of the interval arithmetic adopted in the CES-IA method is that results can be overestimated due to the dependency problem; this overestimation cannot be quantified and could cause no solution hyperbox to be found.

Numerical examples show that the hyperboxes obtained by the GHZ method may include some bad design points. This is mainly because the GHZ method uses Monte Carlo sampling to estimate the locations of good and bad designs. Monte Carlo sampling cannot guarantee that each design point in the obtained hyperbox is good even for a large number of samples, especially when the performance function has multiple local minima. In addition, the numerical examples show that the hyperboxes obtained by the GHZ method are smaller than those obtained by the proposed SA-DIRECT and CMA-ES-DIRECT methods. This is because the GHZ method removes bad sample points sequentially in only one order, and the order has an effect on the result. Zimmermann and von Hoessle [10] have concluded that the GHZ method is not necessarily globally optimal, or even locally optimal.

In Example 2, although the hyperbox obtained by the SA-FMINCON method is larger than those by the proposed SA-DIRECT and CMA-ES-DIRECT methods, it includes some bad design points, and therefore is not a solution hyperbox. This demonstrates that it is improper to use the FMINCON function of MATLAB to solve the optimization problem (7) especially when the performance function has multiple local minima.

From the “quantitative” viewpoint, the solution hyperboxes obtained by the proposed SA-DIRECT and CMA-ES-DIRECT methods are nearly the same as the exact result in Example 1, and in Example 2, which has no exact result, they are much larger than those obtained by the CES-IA and GHZ methods. From the “feasible” viewpoint, in both Example 1 and Example 2, any design point selected within the solution hyperboxes obtained by the SA-DIRECT and CMA-ES-DIRECT methods satisfies the performance criterion. Therefore, the numerical examples demonstrate that the proposed global search is effective.

Moreover, from Figures 6 and 9, we observe that the SA-DIRECT and CMA-ES-DIRECT methods are robust. As the SA-DIRECT method with has a relatively large maximum value and a relatively small interquartile range, we set and in the remainder of the paper. Similarly, the CMA-ES-DIRECT method with (i.e., in Examples 1 and 2) and has a relatively large maximum value and a relatively small interquartile range; we therefore set and in the rest of the paper.

6. Case Studies

The first case is the life-support system in a space capsule, which has been studied by Rocco et al. [15] and is reinvestigated here with the proposed approach. The second case is the power-shift steering transmission control system (PSSTCS), which is a typical complex mechatronic control system with a price of approximately 500,000 USD. It is therefore of great significance to improve the robustness of the PSSTCS.

6.1. Life-Support System in a Space Capsule
6.1.1. Life-Support System in a Space Capsule: Reliability Constraint

The complex system (life-support system in a space capsule) in [15], as shown in Figure 10, consists of four components. The reliability of the system is given by the following expression, where and for .

Rocco et al. [15] have applied the CES-IA method to find a largest symmetric hyperbox using a specified point as symmetry center, whereby any point selected within this hyperbox satisfies the performance criterion. Particularly, the center point is , the lower and upper bounds of the design space are and , respectively, and the performance criterion is . The best solution among the 20 runs of the CES-IA method shown in [15] is given in the fourth column of Table 3 in terms of the intervals of design parameters.

For comparison purposes, the optimization problem considered in this paper is the same as that considered in [15], and it is stated similarly to the optimization problem (7), where .

The best solutions among the 20 runs of the GHZ, SA-DIRECT, and CMA-ES-DIRECT methods are shown in the third, fifth, and sixth columns of Table 3, respectively. To show how close to the global solution are the results obtained by the proposed methods, the second column of Table 3 presents the exact solution obtained by the Lagrange multiplier method. To show whether the obtained hyperboxes satisfy the performance criterion, the intervals of the performance function on the hyperboxes obtained by the exact, GHZ, CES-IA, SA-DIRECT, and CMA-ES-DIRECT methods are, respectively, calculated and listed in Table 4.

Based on Tables 3 and 4, the SA-DIRECT and CMA-ES-DIRECT methods outperform the GHZ and CES-IA methods in that their volume errors are smaller and the minimum values of the system reliability on their obtained hyperboxes are above the reliability criterion (0.99). Although the hyperbox obtained by the GHZ method is the largest, from the third column of Table 4, we can see that it includes some design points whose corresponding reliabilities are below the reliability criterion. Although the hyperbox obtained by the CES-IA method includes only good design points, its volume is smaller than those obtained by the SA-DIRECT and CMA-ES-DIRECT methods.

Besides, the average volumes are, respectively, and in 20 runs of the SA-DIRECT and CMA-ES-DIRECT methods, and the standard deviations are, respectively, and . This implies that the proposed methods are robust.

6.1.2. Life-Support System in a Space Capsule: Additional Cost Constraint

Normally, the design problem also seeks to minimize the cost function :

In [15], , , , , and for all . Using the above values and the previous SA-DIRECT ranges of , the corresponding range of is . Using the exact results of , the corresponding range of is .

Rocco et al. [15] studied the problem to derive the ranges for each subject to and . We solve the same problem by the proposed methods for comparison. The best solution shown in [15] by the CES-IA method is given in the fourth column of Table 5.

Since the cost function is monotone, the exact solution can be obtained by the Lagrange multiplier method and is shown in the second column of Table 5. The best solutions among the 20 runs of the GHZ, SA-DIRECT, and CMA-ES-DIRECT methods are also shown in the third, fifth, and sixth columns of Table 5, respectively. The intervals of and on the obtained hyperbox, denoted by and , respectively, are shown in Table 6. From Tables 5 and 6, we can see that the proposed methods are better than the GHZ and CES-IA methods because the volume errors are smaller and the performance criterion is satisfied in the obtained hyperboxes.

The relative errors between the exact solution and the best solutions of the proposed SA-DIRECT and CMA-ES-DIRECT methods are both 0.02. This indicates that the results are globally optimal.

6.2. The Power-Shift Steering Transmission Control System (PSSTCS)

The power-shift steering transmission control system (PSSTCS) is a key complex system with multiple functions in a heavy vehicle, used to control steering, speed changing, fan driving, and lubrication. The PSSTCS is composed of a hydraulic oil supply system, an integrated pump-motor system, a fan control system, an electronic control system, and a hydraulic control system. The hydraulic oil supply system consists of a fill-oil and constant-pressure system for the pressure oil tank, a pump group, and a fill-oil system for transmission control and fan control. The functional composition and the structural principle drawing of the PSSTCS are shown in Figures 11 and 12, respectively. The PSSTCS is a nonmonotonic coherent system; therefore, its reliability function is not analytically known and is evaluated by a graphical inductive analysis method based on the principles of the decision tree, i.e., the goal-oriented (GO) reliability assessment method, as illustrated in the appendix.

As shown in Table 7, the number of design parameters is 86. The optimization problem (7) is given as follows, where , , , and is the reliability function evaluated by the goal-oriented (GO) reliability assessment method. Note that the volume here is replaced by the log-volume (the logarithmic transformation of the volume) for computational convenience.
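The log-volume transformation mentioned above can be sketched as follows (illustrative names only). With 86 design parameters, summing the logarithms of the interval widths is numerically much safer than multiplying them directly.

```python
import numpy as np

def log_volume(l, u):
    """Log-volume of a hyperbox: sum of log interval widths.
    Avoids the tiny products that arise with many design parameters."""
    widths = np.asarray(u, float) - np.asarray(l, float)
    return float(np.sum(np.log(widths)))

# 86 narrow intervals: the plain product would be about 1e-258,
# close to the limits of double precision, while the log-volume is benign
l = np.zeros(86)
u = np.full(86, 1e-3)
print(log_volume(l, u))   # about -594.1
```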

Since the reliability function of the PSSTCS is not analytically known, the CES-IA method is not suitable for this complex system. The best solutions among the 20 runs of the GHZ, SA-DIRECT, and CMA-ES-DIRECT methods are shown in the third, fourth, and fifth columns of Table 7, respectively. The log-volumes of the GHZ, SA-DIRECT, and CMA-ES-DIRECT hyperboxes are listed in Table 8. We see that the log-volumes of the SA-DIRECT and CMA-ES-DIRECT hyperboxes are both much larger than that of the GHZ hyperbox.

To show whether the obtained hyperboxes satisfy the performance criterion, sample design points are randomly chosen from each of the obtained hyperboxes using Latin hypercube sampling; then, the rate of good sample design points (the performance criterion satisfaction probability for the hyperbox) is calculated as follows, where is the indicator function, i.e., .
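A sketch of this check follows (illustrative names; it uses SciPy's Latin hypercube sampler and reports the fraction of samples meeting the criterion, which is the satisfaction probability defined above).

```python
import numpy as np
from scipy.stats import qmc

def satisfaction_probability(f, l, u, f_c, n_samples=1000, seed=0):
    """Rate of good design points in a hyperbox via Latin hypercube sampling."""
    l, u = np.asarray(l, float), np.asarray(u, float)
    sampler = qmc.LatinHypercube(d=len(l), seed=seed)
    pts = qmc.scale(sampler.random(n_samples), l, u)   # map samples into [l, u]
    good = sum(1 for x in pts if f(x) >= f_c)
    return good / n_samples

# toy performance function; a rate of 1.0 means every sample is good
f = lambda x: x[0] + x[1]
print(satisfaction_probability(f, [0.6, 0.6], [1.0, 1.0], 1.0))   # 1.0
```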

The performance criterion satisfaction probabilities of the obtained hyperboxes by the GHZ, SA-DIRECT, and CMA-ES-DIRECT methods are shown under different sample sizes in Figure 13. It can be observed that the performance criterion satisfaction probabilities for the SA-DIRECT and CMA-ES-DIRECT hyperboxes are all 1 while those for the GHZ hyperbox are below 1. This implies that the obtained hyperboxes by the SA-DIRECT and CMA-ES-DIRECT methods are solution hyperboxes. Therefore, our approach is superior to the GHZ method.

7. Conclusions

To improve the design robustness, rather than optimizing one single design point, the approach presented in this paper maximizes the region in the design space where all design points meet the required performance goal. Moreover, regions expressed by hyperboxes are considered in order to decouple parameters.

To this end, an innovative global approach that combines metaheuristic algorithms and the DIRECT algorithm is proposed to seek a maximum solution hyperbox. The metaheuristic algorithm is used to obtain a maximum hyperbox and the DIRECT algorithm is used as a checking technique to guarantee that the obtained hyperbox is a solution hyperbox. More specifically, the SA-DIRECT and CMA-ES-DIRECT methods are presented in detail. The results of studies on complex numerical examples and engineering cases have shown that these two methods have better performance than the GHZ and CES-IA methods. Since the DIRECT algorithm only requires evaluating the values of the performance function at the sample points, the proposed approach can be used for both analytically known and black-box performance functions. The performance function is continuous in most engineering problems; therefore, the proposed approach is a powerful tool that engineering designers can use to obtain a maximum solution hyperbox.

The optimality of the proposed approach mainly depends on the properties of the metaheuristic algorithms. The two metaheuristic algorithms we adopt perform global search and have a high probability of reaching the globally optimal solution. Our work can be further improved as more advanced global search algorithms are developed.

Appendix

A. Covariance Matrix Adaptation Evolution Strategy

An in-depth discussion of the initial parameter values of the covariance matrix adaptation evolution strategy is given in [53]. In the th deme, , the evolution paths and , the initial distribution mean is chosen randomly and uniformly in the design space , and the step size . However, the initial covariance matrix is problem dependent. Specifically, if the design interval is , different design intervals for different variables can be reflected by an initialization of , in which the main diagonal elements of obey and the entries outside the main diagonal are all zero. Note that the 's should not differ by several orders of magnitude; otherwise, a scaling of the variables should be applied. Therefore, in this paper, we set .

The termination condition for CMA-ES-DIRECT is the number of generations. The total number of generations for the CMA-ES-DIRECT method is the same as the number of the outer iterations for the SA-DIRECT method.

In the covariance matrix adaptation evolution strategy, a population of new individuals is generated by sampling from a multivariate normal distribution. In the optimization problem (7), the basic equation for sampling individuals, for generation number and the th deme, is the following, where is a normal distribution with mean and covariance matrix . As defined in Section 2.2, . The CMA-ES depends on selection and recombination, step-size control, and covariance matrix adaptation, i.e., on how to calculate , , and for the next generation and the th deme.

The new mean value is computed as follows, where is the recombination weight, , denotes the th best individual out of , , and .

The step size is updated using cumulative step-size adaptation (CSA). The evolution path is updated first, where is the variance effective selection mass, , denotes the identity matrix, and is the damping parameter.

Finally, the covariance matrix is updated. The evolution path is updated first, where denotes the transpose, is of minor relevance, and

The default strategy parameters are , , , , , , and . The default strategy parameter values are given in [53] and shown in Table 9. Hansen [53] has pointed out that these default parameters are chosen to be a robust setting and thus are not recommended to be changed.
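Since the update equations (A.2)-(A.7) are not reproduced in this extract, the following is a hedged sketch of the standard (mu/mu_w, lambda)-CMA-ES updates as given in Hansen's tutorial [53]: weighted recombination of the mean, cumulative step-size adaptation, and the rank-one plus rank-mu covariance update. It minimizes a function (whereas Algorithm 4 maximizes the hyperbox volume), and all names and the quadratic test function are illustrative.

```python
import numpy as np

def cma_es(f, x0, sigma0=1.0, n_iter=300, seed=0):
    """Minimal (mu/mu_w, lambda)-CMA-ES sketch following Hansen's tutorial [53]."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    lam = 4 + int(3 * np.log(n))
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()
    mu_eff = 1.0 / np.sum(w ** 2)

    # default strategy parameters (robust setting, see Table 9 / [53])
    c_s = (mu_eff + 2) / (n + mu_eff + 5)
    d_s = 1 + 2 * max(0.0, np.sqrt((mu_eff - 1) / (n + 1)) - 1) + c_s
    c_c = (4 + mu_eff / n) / (n + 4 + 2 * mu_eff / n)
    c_1 = 2 / ((n + 1.3) ** 2 + mu_eff)
    c_mu = min(1 - c_1, 2 * (mu_eff - 2 + 1 / mu_eff) / ((n + 2) ** 2 + mu_eff))
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))

    m, sigma = np.array(x0, float), sigma0
    C, p_s, p_c = np.eye(n), np.zeros(n), np.zeros(n)

    for g in range(1, n_iter + 1):
        eigval, B = np.linalg.eigh(C)
        D = np.sqrt(np.maximum(eigval, 1e-20))
        z = rng.standard_normal((lam, n))
        y = (z * D) @ B.T                         # y ~ N(0, C)
        x = m + sigma * y                         # sample lambda offspring
        order = np.argsort([f(xi) for xi in x])[:mu]
        y_w = w @ y[order]                        # weighted recombination step
        m = m + sigma * y_w                       # mean update

        # cumulative step-size adaptation
        C_inv_sqrt = B @ np.diag(1.0 / D) @ B.T
        p_s = (1 - c_s) * p_s + np.sqrt(c_s * (2 - c_s) * mu_eff) * C_inv_sqrt @ y_w
        sigma *= np.exp((c_s / d_s) * (np.linalg.norm(p_s) / chi_n - 1))

        # covariance matrix adaptation (rank-one + rank-mu)
        h = (np.linalg.norm(p_s) / np.sqrt(1 - (1 - c_s) ** (2 * g))
             < (1.4 + 2 / (n + 1)) * chi_n)
        p_c = (1 - c_c) * p_c + h * np.sqrt(c_c * (2 - c_c) * mu_eff) * y_w
        rank_mu = sum(wi * np.outer(yi, yi) for wi, yi in zip(w, y[order]))
        C = ((1 - c_1 - c_mu) * C
             + c_1 * (np.outer(p_c, p_c) + (1 - h) * c_c * (2 - c_c) * C)
             + c_mu * rank_mu)
    return m

# quick check on a simple quadratic (should approach the optimum at 3)
print(cma_es(lambda x: np.sum((x - 3) ** 2), x0=np.zeros(5)))
```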

B. The Goal-Oriented (GO) Reliability Assessment Method

The goal-oriented (GO) reliability assessment method for the power-shift steering transmission control system (PSSTCS) is given in [54] in detail.

The function GO operators, logical GO operators, and auxiliary GO operators are selected to describe the unit itself, its logical relationships, and auxiliary GO operation in the PSSTCS, respectively, as presented in Tables 10 and 11.

The GO model of the PSSTCS is developed by using the signal flows to connect the above GO operators, as shown in Figure 14. The signal flow 129 is the system’s reliability output.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant 71801196, National Major Project from the Shanghai Nuclear Engineering Research and Design Institute under Grant 2017ZX06002006, and China Postdoctoral Science Foundation under Grant 2018M631606.