Research Article  Open Access
A Novel Latin Hypercube Algorithm via Translational Propagation
Abstract
Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulations. The accuracy of a metamodel is directly related to the experimental design used to build it. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties; however, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via a translational propagation and successive local enumeration algorithm (TPSLE) is developed without using formal optimization. The TPSLE algorithm is based on the insight that a near-optimal Latin hypercube design can be constructed by using, as a building block, a simple initial block with a few points generated by the SLE algorithm. In effect, TPSLE offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared with two existing algorithms and is found to be much more efficient in terms of computation time while retaining acceptable space-filling and projective properties.
1. Introduction
In engineering, manufacturing companies strive to produce better and cheaper products more quickly. However, engineering systems are now large and complicated, and the design requirements for such systems, especially multidisciplinary design optimization systems such as those in aerospace, are rigorous and stringent. These analysis and design problems usually involve expensive computer simulations. For example, it is reported that it takes Ford Motor Company about 36–160 h to run one crash simulation [1], which is unacceptable in practice. Although computing capacity keeps increasing, the complexity of analysis software, for example, finite element analysis (FEA) and computational fluid dynamics (CFD), seems to keep pace with computing advances [2]. To alleviate the computational burden, metamodels, often called surrogate models or response surfaces, are widely used in optimization and design analysis as approximate models that replace the expensive computer simulations. Because the accuracy of a metamodel depends directly on the samples of the computer simulations, it is important to obtain efficient designs of computer experiments.
In recent decades, various sampling designs have been developed for computer experiments. Classical experimental designs, including alphabetical optimal designs [3], factorial or fractional factorial designs [4], central composite designs (CCD) [5], and so forth, were widely used at first. However, they do not perform well in terms of both space-filling and projective properties. As recognized by many researchers, designs for computer experiments should satisfy at least the following two criteria (see [6–10]). First, the design should be space-filling in some sense. When no details on the functional behavior of the response are available, it is necessary to be able to obtain information from the entire design space, so the design points should be "evenly spread" over the entire region. Second, the design should be noncollapsing. When one of the design variables has almost no effect on the function value, two design points that differ only in this variable will "collapse"; that is, they can be considered the same point evaluated twice. Because evaluating a deterministic black-box function is often time-consuming, this is not a desirable situation. Therefore, two design points should not share any coordinate value when it is not known a priori which dimensions are important; furthermore, the projections of the points onto each axis should be separated as much as possible. Based on these two properties, a space-filling Latin hypercube design, termed LHD in this paper, is an appropriate and popular choice.
Latin hypercube designs (LHDs) play an important role in computer experiments. The Latin hypercube structure satisfies both the space-filling requirement and the noncollapsing condition. Each column of an m-dimensional LHD of n points is a random permutation of the n levels, and by scaling, an LHD can be used for any rectangular design space. A randomly selected LHD has good projective properties on any single dimension but poor space-filling properties, so optimal LHDs have been widely studied. Koehler and Owen [11] showed that the projection of an optimal LHD onto a subset of variables retains good spatial properties. Morris and Mitchell [7] employed the simulated annealing (SA) algorithm for constructing optimal LHDs. Ye et al. [12] studied the columnwise-pairwise (CP) algorithm for constructing optimal symmetric LHDs. Jin et al. [13] introduced the enhanced stochastic evolutionary (ESE) algorithm for finding various space-filling designs, including approximate maximin LHDs. Bates [14] described a method for generating optimal LHDs using PermGA by minimizing the potential energy U. Liefvendahl and Stocki [15] compared the efficiency of CP and the genetic algorithm (GA) for the optimization of LHDs. Grosso et al. [16] adopted iterated local search (ILS) to improve the objective function of the maximin LHD problem. Jourdan and Franco [17] presented an optimal LHD based on the Kullback-Leibler criterion.
Although the aforementioned methods provide effective ways to produce samples with good space-filling and projective properties, they are computationally inefficient for problems with large dimensions and sample sizes. For example, Ye et al. [12] reported that generating an optimal LHD using CP could take several hours on a Sun SPARC 20 workstation; the search for a larger design would take even longer and may be computationally prohibitive. Thus, search processes often stop before finding a good design. This situation motivated us to look for alternatives that require less computing time. In recent years, some methods without expensive optimization procedures have been investigated. Van Dam et al. [18] presented general formulas for obtaining maximin LHDs, but they apply only to two-dimensional problems and are limited in the number of sampling points. Viana et al. [19] presented a new method to obtain near-optimal LHDs without going through an expensive optimization process, although the projective property of the sampling points is unsatisfactory except for some special problem sizes. Zhu et al. [20] presented a novel algorithm for maximin Latin hypercube design using successive local enumeration.
In this paper, we propose a method that can quickly construct a good design of experiments given limited computational resources. Two major algorithms are involved. One is the translational propagation algorithm (TP) [19], which requires minimal computational effort and does not use formal optimization; it solves the optimization problem in an approximate sense, that is, it obtains a good Latin hypercube quickly rather than finding the best possible solution. The other is the successive local enumeration algorithm (SLE) [20], which maximizes the minimal distance, defined as the minimum of all distances between the point to be generated and the existing points. The sampling points produced by SLE are evenly distributed in the design space, and their projections onto lower dimensions are almost uniform [20]. The algorithm proposed in this paper is a combination of the TP and SLE algorithms and is termed TPSLE. It is a compromise between computational efficiency and sampling performance, that is, space-filling and projective properties. The TPSLE algorithm is based on the insight that a near-optimal Latin hypercube design can be constructed from a simple initial block, used as a building block, with a few points generated by the SLE algorithm. Test results compared with the MATLAB function LHSDESIGN and with SLE indicate that this method effectively generates sampling points with good space-filling and projective properties; in addition, the sampling efficiency of TPSLE is the highest among the three. In this paper, the MATLAB function LHSDESIGN is termed LHSD.
The remainder of the paper is organized as follows. The proposed TPSLE algorithm for obtaining Latin hypercube designs is described in Section 2, and test results compared with LHSD and SLE are then presented in Section 3 to show its acceptable sampling performance and high efficiency. Section 4 provides a further comparative study to show the advantages of the proposed TPSLE algorithm in improving metamodel accuracy and solving a mechanical design optimization problem. Finally, conclusions are drawn in Section 5, where the shortcomings of TPSLE and future work are also pointed out.
2. Description of TPSLE Algorithm
To explain the algorithm in detail, the basic procedure of TPSLE is introduced first, followed by the application of the algorithm to a two-dimensional problem. A method for generating experimental designs of any size is then proposed, and a summary of the TPSLE algorithm is given at the end.
2.1. Basic Process of TPSLE Algorithm
The proposed algorithm is based on the idea of constructing an n-point, m-dimensional Latin hypercube design from a fairly small optimal Latin hypercube design used as an initial block via translational propagation [19]. To aid understanding, a simple example of a 16 × 2 Latin hypercube design (i.e., sixteen sampling points in two dimensions) is used to elaborate the methodology below.
Assume that a Latin hypercube design of n points and m dimensions is to be constructed from an initial block design of n_p points and m_b dimensions. Each dimension is partitioned into the same number of divisions, n_d = 2, so the design space is divided into a total of n_b blocks such that

n_b = n_d^m. (1)

Meanwhile, the number of points of the block design is defined as

n_p = n / n_b, (2)

where the dimensionality of the block design must equal the dimensionality of the Latin hypercube design; that is, m_b = m.
In the example of the 16 × 2 Latin hypercube design (i.e., n = 16 and m = 2), one obtains n_d = 2, n_b = 4, and n_p = 4 from (1) and (2).
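These relations are simple enough to check in code. The following Python sketch (function and variable names are illustrative, not from the paper) evaluates (1) and (2), reproduces the 16 × 2 example, and shows why a 13-point request needs rounding up:

```python
def block_parameters(n, m, nd=2):
    """Block layout of Eqs. (1)-(2): with nd divisions per dimension,
    the space splits into nb = nd**m blocks, and the seed block must
    hold np = n / nb points (a fractional np means resizing is needed)."""
    nb = nd ** m        # Eq. (1): total number of blocks
    np_exact = n / nb   # Eq. (2): points per block
    return nb, np_exact

print(block_parameters(16, 2))  # -> (4, 4.0): proper, no resizing needed
print(block_parameters(13, 2))  # -> (4, 3.25): round up to 4, then resize
```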
Next, the n_p points of the initial block are generated by the optimal Latin hypercube design algorithm SLE, which is introduced in the next section. The entire design space is then filled with copies of the initial block via the translational propagation algorithm. Figure 1 shows the division of the design space for the 16 × 2 Latin hypercube design, and Figure 2 illustrates the process step by step. First, the initial block is filled with points determined by the SLE algorithm, as shown in Figure 2(a). Next, the initial block is shifted by several levels in one of the dimensions. Every time the old block is shifted, a new block is added to the experimental design to produce a new, larger block (with twice as many points as the old block). Figure 2(b) shows the shift of the initial block (chosen to be in the horizontal direction). To preserve the noncollapsing property of the Latin hypercube, that is, only a single point per level, there also has to be a one-level shift in the vertical direction, as shown in Figure 2(b). In the general case, a displacement vector is built for each shift, accounting for the shifting in the dimension of interest (the horizontal direction in the example above) as well as a shift in all other dimensions to preserve the Latin hypercube properties (the vertical direction in our example). In the next step, the current set of points (the newly filled division) is used as a new block, and the shifting procedure is repeated in the next dimension. Figure 2(c) illustrates the shifting procedure in the vertical direction.
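The shifting procedure above can be sketched compactly. The sketch below is a simplified reconstruction covering only the paper's two-dimensional example with n_d = 2 (the shifts for general dimensions follow the displacement vectors described in the text): each pass doubles the current block by shifting it half the final range in the dimension of interest and one level in the other dimension, exactly as in Figures 2(b) and 2(c).

```python
import numpy as np

def propagate_2d(seed, n_final):
    """Translational propagation sketch for the 2D, nd = 2 case.
    Each pass doubles the current block: shift by half the final
    range in the dimension of interest and by one level in the
    other dimension, preserving one point per level."""
    block = np.asarray(seed, dtype=int)
    for d in range(2):
        shift = np.ones(2, dtype=int)   # one-level shift in the other dim
        shift[d] = n_final // 2         # half-range shift in dimension d
        block = np.vstack([block, block + shift])
    return block

# Seed of n_p = 4 points with two-level spacing, as in Figure 2(a).
seed = [[1, 1], [3, 3], [5, 5], [7, 7]]
lhd = propagate_2d(seed, 16)
# Each column is a permutation of 1..16, so the result is a valid LHD.
assert sorted(lhd[:, 0]) == list(range(1, 17))
```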
(a) Step 1
(b) Step 2
(c) Step 3
The greatest advantage of this approach is that there are no calculations to perform once the initial block is completed: all operations can be viewed as simple translations of the block designs in the m-dimensional hypercube. Although efficient for generating sampling designs, the algorithm as described so far does not provide the flexibility to obtain an arbitrary sample size in the final Latin hypercube design, because (1) and (2) must hold. The strategy to overcome this limitation and generate sample designs of arbitrary size is described in Section 2.3.
2.2. The Novel Optimal Latin Hypercube Design Algorithm SLE
In this section, the novel maximin LHD algorithm SLE is introduced briefly; for details, see [20]. Unlike existing LHD methods, which employ global objective functions, the sequential local objective function maximizes the minimal distance, that is, the minimum of all distances between the point to be generated and the points already generated by SLE. The points produced by this method are evenly distributed in the design space, and their projections onto lower dimensions are almost uniform. The sampling points produced by the SLE algorithm in a two-dimensional plane are shown in Figure 3(a); the projections onto each coordinate axis are uniform. A comparison with the LHSD function provided by MATLAB using its default settings is shown in Figure 3(b). The comparative plots show that the sampling points generated by the SLE algorithm have better space-filling and projective properties.
(a) SLE
(b) LHSD
Similarly, assume that an initial Latin hypercube design of n_p sampling points in m dimensions is to be generated by the SLE algorithm. The problem of finding a set of n_p sampling points in m-dimensional space can be described as positioning n_p points in a unit hypercube, where each point has m coordinate values, so that all the points possess good performance, that is, space-filling and projective properties. According to the SLE algorithm, the design space is divided into cells within the unit hypercube. The sampling points are determined cell by cell (when m = 2, a cell is a column), and for each cell only one point can be designated. A cell can itself be considered a unit hypercube composed of hyperboxes (when m = 2, a hyperbox is a square, and when m = 3, a hyperbox is a cube).
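As a rough illustration of the local maximin idea behind SLE (not the full cell-by-cell enumeration of [20]), a two-dimensional design can be built greedily: in each column, the still-unused row level that maximizes the minimum distance to the points already placed is selected.

```python
def sle_like_2d(n):
    """Greedy 2D sketch of the SLE idea: build the design column by
    column, and in each column pick the unused row maximizing the
    minimum (squared, which preserves the ordering) distance to the
    points already placed."""
    rows = list(range(n))
    pts = [(0, rows.pop(0))]              # arbitrary start in column 0
    for col in range(1, n):
        def min_dist2(r):
            return min((col - x) ** 2 + (r - y) ** 2 for x, y in pts)
        best = max(rows, key=min_dist2)   # local maximin choice
        rows.remove(best)
        pts.append((col, best))
    return pts

design = sle_like_2d(8)
# One point per column and per row -> Latin hypercube structure.
print(sorted(r for _, r in design))
```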
When the SLE algorithm is used to construct an initial block for the TPSLE algorithm, note that the level spacing differs from that of the SLE algorithm in the literature [20]: Figure 2(a) shows that the interval between points in the initial block is two levels, which is equal to n_d.
2.3. Constructing Designs of Experiment with Arbitrary Size
To generate the improved Latin hypercube design proposed in this paper with any number of points, the first step is to generate a TPSLE design that has at least as many points as required. The experimental design is complete without resizing if the design of n points and m dimensions is proper, that is, if n_p calculated by (2) is an integer. Otherwise, an experimental design larger than required is created by rounding n_p up, and a resizing process is then used to reduce the number of points to the desired value. The points are removed one by one from the initially created TPSLE design by discarding the points that are furthest from the center of the hypercube and reallocating the remaining points to fill the whole design (preserving the Latin hypercube properties). In the proposed algorithm, removing the points furthest from the center does not reduce the area of exploration, because after the points are removed, the final design is rescaled to cover the whole design space. The detailed resizing process is described in [19]. In the next paragraphs, the construction of a 13 × 2 Latin hypercube design is illustrated step by step.
To construct a 13 × 2 Latin hypercube (i.e., n = 13 and m = 2), the corresponding initial block should be created first. From (1) and (2), one obtains n_p = 3.25, which is not an integer. Rounding up to n_p = 4, the size of the larger design that can be constructed is 16 × 2, as illustrated in Figure 1. The resizing process begins by calculating the distance between each of the 16 points and the center of the design space. To create a 13 × 2 design out of a 16 × 2 one, the three points furthest from the center have to be eliminated. In practice, this means the points of the original TPSLE design are ranked according to their distance from the center of the design space, and the three points furthest from the center are eliminated one by one. When two points are equally far from the center, symmetry makes it unimportant which of them is removed first; here the point further from the origin is removed, although the point nearer the origin could equally be chosen. Thus the red point marked 16 is eliminated first rather than point 1, as illustrated in Figure 4. However, merely removing a point may break the Latin hypercube property that only a single point occupies each level, so once a point is removed, the levels occupied by its projection along each of the dimensions also have to be eliminated. Figure 4 illustrates the resizing process step by step: the number of points in the design progressively shrinks, but the final design still samples the same design space. Figure 4(a) shows that, in the 16 × 2 design, the red point furthest from the center and its corresponding levels, marked with green shading, are eliminated. When levels are eliminated, the remaining points are moved to occupy the emptied levels; in Figure 4(a), the two points marked 10 and 14 at the top move downward to occupy the empty level between the points marked 10 and 12.
Next, all points are rescaled to cover the original design space. After each step, a new Latin hypercube design with one point fewer is obtained, and the process continues until the 13 × 2 design is achieved. Figures 4(b) and 4(c) display the same process for eliminating the points marked 1 and 14 and their corresponding levels. Removing points and levels reduces the number of points of the experimental design so that sample designs of arbitrary size are obtained while the Latin hypercube properties are preserved; note that no dimension is eliminated when a point is removed. The rescaling, in turn, makes the experimental design fit the original design space again.
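The removal-and-collapse step can be sketched as follows; this is a simplified reconstruction of the resizing idea, not the authors' exact implementation.

```python
import numpy as np

def resize(lhd, n_target):
    """Shrink an n x m Latin hypercube of levels 1..n to n_target points:
    repeatedly drop the point furthest from the centre of the design
    space, then close the freed level in every dimension by shifting
    higher levels down, which keeps the Latin hypercube property."""
    pts = np.asarray(lhd, dtype=int)
    while len(pts) > n_target:
        n = len(pts)
        center = (n + 1) / 2.0
        far = np.argmax(((pts - center) ** 2).sum(axis=1))  # furthest point
        dropped = pts[far]
        pts = np.delete(pts, far, axis=0)
        for d in range(pts.shape[1]):                       # collapse levels
            pts[:, d] -= (pts[:, d] > dropped[d]).astype(int)
    return pts

# Shrinking a 5-point diagonal design to 3 points keeps levels 1..3.
small = resize([[i + 1, i + 1] for i in range(5)], 3)
print(small.tolist())  # -> [[1, 1], [2, 2], [3, 3]]
```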
(a) Step 1. New design of size 15 × 2 obtained by eliminating point 16 and its corresponding levels
(b) Step 2. New design of size 14 × 2 obtained by eliminating point 1 and its corresponding levels
(c) Step 3. New design of size 13 × 2 obtained by eliminating point 14 and its corresponding levels
2.4. Summary of the TPSLE Algorithm
The proposed algorithm is motivated by the trade-off between the performance and the efficiency of experimental sampling design. In practice, good Latin hypercube designs are expected to be obtained efficiently because the available time is limited; this is particularly critical for large numbers of points in high dimensions. A TPSLE design, generated from a fairly small optimal Latin hypercube used as an initial block via the translational propagation algorithm, is comparatively superior. Figure 5 illustrates the TPSLE algorithm. The given design parameters are the number of points n of the required Latin hypercube design and the number of variables m. The first step is to calculate the design variables n_d, n_p and the number of blocks n_b from (1) and (2), and then to check whether n_p is an integer. If it is, the initial block of size n_p × m is constructed by the SLE algorithm, and the required experimental design is then obtained via translational propagation. If n_p is not an integer, the initial block cannot be constructed immediately, and n_p is rounded up (the ceiling operator in Figure 5 represents this rounding). In this algorithm, the parameter n_p controls the number of sampling points, that is, n = n_p × n_b. A larger Latin hypercube design is thus obtained from the initial block via translational propagation and is then resized to the required one. This completes the construction of a Latin hypercube design of arbitrary size.
Based on the above TPSLE algorithm, the sampling points illustrated in Figure 6 are generated in two-dimensional space and compared with the sampling points produced by SLE and LHSD shown in Figure 3. The comparative plots show that the space-filling and projective properties of the design generated by the TPSLE algorithm are better than those of LHSD and roughly coincide with those of SLE, while the efficiency of the TPSLE algorithm is far superior to that of the SLE algorithm, as discussed in the next section.
3. Results and Discussion
The sampling points generated by the TPSLE algorithm meet the two desired features, namely, space-filling and projective properties: the produced points are evenly distributed in the design space, and their projections onto lower dimensions, especially onto each coordinate axis, are almost uniform. In the sampling process of the TPSLE algorithm, the initial block constructed by SLE is used to generate the sampling points via translational propagation, which is quite different from existing LHD sampling methods. TPSLE optimizes no global objective function, such as φ_p or the potential energy U, and therefore employs no expensive optimization algorithm such as a genetic algorithm or simulated annealing, so its efficiency is superior to that of sampling methods built on optimization algorithms. In this section, both the performance and the efficiency of the TPSLE algorithm are tested against two existing Latin hypercube algorithms.
3.1. Test Criteria
In recent years, several optimality criteria have been widely employed to achieve good performance in the design of computer experiments. Designs constructed under these criteria have been shown to perform well; in other words, the same criteria can be used to test whether an experimental design performs well. Four widely used test criteria are considered in this work.
3.1.1. Maximin Distance Criterion
The maximin distance criterion was proposed by Johnson et al. [6]. As the term suggests, the objective of the criterion is to maximize the minimum intersite distance d:

max min_{1 ≤ i < j ≤ n} d(x_i, x_j), (3)

where n is the number of points and d(x_i, x_j) is the distance between two arbitrary points:

d(x_i, x_j) = [ sum_{k=1}^{m} |x_ik − x_jk|^t ]^{1/t}, (4)

where m is the number of variables. In this paper, the Euclidean distance (t = 2) is considered. The parameters n, m, and d are the same as those used in the test criteria below.
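For reference, the minimum intersite distance of (3)-(4) with t = 2 can be computed as follows (a straightforward numpy sketch):

```python
import numpy as np

def min_intersite_distance(X):
    """Minimum Euclidean distance (t = 2 in Eq. (4)) over all point
    pairs; the maximin criterion prefers designs making this large."""
    X = np.asarray(X, dtype=float)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return d[np.triu_indices(len(X), k=1)].min()  # off-diagonal pairs only

# Unit-square corners: the closest pair is one edge apart.
print(min_intersite_distance([[0, 0], [0, 1], [1, 0], [1, 1]]))  # -> 1.0
```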
3.1.2. Centered Discrepancy Criterion
The centered discrepancy criterion is one of the discrepancy criteria, which measure the difference between the uniform cumulative distribution function and the empirical cumulative distribution function of an experimental design; that is, the discrepancy is a measure of the nonuniformity of a design, and it is the most widely used such measure. Hickernell [21] proposed an interesting formula of discrepancy, termed the centered L2 discrepancy (CD), expressed as follows:

[CD(X)]^2 = (13/12)^m − (2/n) sum_{i=1}^{n} prod_{k=1}^{m} (1 + (1/2)|x_ik − 1/2| − (1/2)|x_ik − 1/2|^2) + (1/n^2) sum_{i=1}^{n} sum_{j=1}^{n} prod_{k=1}^{m} (1 + (1/2)|x_ik − 1/2| + (1/2)|x_jk − 1/2| − (1/2)|x_ik − x_jk|). (5)
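A direct implementation of the centered L2 discrepancy (for a design scaled to the unit hypercube; smaller values indicate a more uniform design) might look like this sketch:

```python
import numpy as np

def centered_l2_discrepancy(X):
    """Hickernell's centered L2 discrepancy CD for a design in [0, 1]^m;
    smaller values indicate a more uniform design."""
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    a = np.abs(X - 0.5)                      # |x_ik - 1/2| terms
    term1 = (13.0 / 12.0) ** m
    term2 = (2.0 / n) * np.prod(1 + 0.5 * a - 0.5 * a ** 2, axis=1).sum()
    prod = np.prod(
        1 + 0.5 * a[:, None, :] + 0.5 * a[None, :, :]
        - 0.5 * np.abs(X[:, None, :] - X[None, :, :]),
        axis=2,
    )
    term3 = prod.sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

# Single centered point in 1D: CD^2 = 13/12 - 2 + 1 = 1/12.
print(round(float(centered_l2_discrepancy([[0.5]])), 4))  # -> 0.2887
```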
3.1.3. φ_p Criterion
In 1995, Morris and Mitchell [7] proposed an intuitively appealing extension of the maximin distance criterion:

φ_p = [ sum_{i=1}^{s} J_i d_i^{−p} ]^{1/p}, (6)

where d_1 < d_2 < ⋯ < d_s are the distinct distance values, J_i is the number of pairs of sites in the design separated by d_i, p is a positive integer, and s is the number of distinct distance values. Jin et al. [13] provided a new expression to evaluate φ_p efficiently:

φ_p = [ sum_{1 ≤ i < j ≤ n} d_ij^{−p} ]^{1/p}, (7)

where d_ij can be obtained from (4) and the value of p is taken as advised in the literature [13].
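The pairwise form (7) is straightforward to implement; the default p = 50 below is an illustrative choice only, not a value fixed by the paper. As p grows, minimizing φ_p approaches the maximin criterion (φ_p tends to the reciprocal of the minimum distance).

```python
import numpy as np

def phi_p(X, p=50):
    """phi_p in the pairwise form of Jin et al. [13]:
    phi_p = (sum over pairs of d_ij^(-p))^(1/p); smaller is better."""
    X = np.asarray(X, dtype=float)
    i, j = np.triu_indices(len(X), k=1)
    d = np.sqrt(((X[i] - X[j]) ** 2).sum(axis=1))   # Eq. (4), t = 2
    return float(np.sum(d ** (-float(p))) ** (1.0 / p))

# Pairwise distances 2, 3, 5: phi_50 is dominated by the smallest one.
print(phi_p([[0, 0], [2, 0], [5, 0]]))  # -> approximately 0.5
```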
3.1.4. Potential Energy Criterion
In optimal LHD algorithms, the Audze-Eglais objective function [17], namely, the potential energy criterion U, is usually used to check whether sampling points perform well. It is inspired by the following physical analogy: a system of masses exerting repulsive forces on one another reaches equilibrium when the potential energy of those forces is at a minimum. The potential energy criterion is inversely proportional to the squared distance between the points:

U = sum_{i=1}^{n−1} sum_{j=i+1}^{n} 1 / d_ij^2. (8)
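Equation (8) translates directly into code:

```python
import numpy as np

def potential_energy(X):
    """Audze-Eglais potential energy U = sum over point pairs of
    1 / d_ij^2; a smaller U means more evenly spread points."""
    X = np.asarray(X, dtype=float)
    i, j = np.triu_indices(len(X), k=1)
    d2 = ((X[i] - X[j]) ** 2).sum(axis=1)   # squared pair distances
    return float(np.sum(1.0 / d2))

# Collinear points 0, 1, 2: U = 1/1 + 1/4 + 1/1 = 2.25.
print(potential_energy([[0], [1], [2]]))  # -> 2.25
```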
3.2. Performance of TPSLE Algorithm
To assess the performance, that is, the space-filling and projective properties of the sampling points, the four aforementioned criteria, namely, d, CD, φ_p, and the potential energy U, are employed to compare TPSLE with other existing LHD methods. In this work, the LHSD function in MATLAB and the SLE algorithm [20] are used for the comparison.
Various sampling designs are generated by the three Latin hypercube design methods TPSLE, LHSD, and SLE. To reduce the randomness of the sampling designs, sampling points are generated 50 times with the TPSLE and SLE algorithms and 500 times with the LHSD algorithm using its default settings in MATLAB (for some large design sizes, only 10 repetitions are used), and the best, worst, and mean values of the different criteria are calculated. Note that the sampling designs are scaled to [0, 1]. The testing and comparison results based on the four test criteria, minimal distance d, centered discrepancy CD, φ_p, and potential energy U, are shown in Tables 1 and 2. The larger the values of d and the smaller the values of CD, φ_p, and U, marked in bold italic in Tables 1–3, the better the sampling design.



According to the comparative study with the SLE algorithm and the LHSD function for various numbers of points in two dimensions in Table 1, most of the mean values of CD, φ_p, and U for the sampling designs produced by the TPSLE algorithm are smaller than those for the LHSD function, and the mean values of d for TPSLE designs are all larger, which demonstrates that sampling designs from the TPSLE algorithm perform better than those from the LHSD function. Furthermore, some of the worst values of d, CD, φ_p, and U for TPSLE designs are better than the corresponding mean values for LHSD, especially for the criteria φ_p and U. To show the good performance of TPSLE designs more comprehensively, comparisons are also made with designs produced by the SLE algorithm, which is time-consuming. The results in Table 1 show that the designs produced by the TPSLE algorithm are comparable to those of the SLE algorithm in terms of performance; notably, for one design size the TPSLE design is better than those of the other two methods in terms of three of the test criteria.
Similarly, the results for sampling designs in three dimensions in Table 2 lead to the same conclusion. To observe the performance of the designs produced by the TPSLE algorithm more intuitively, the 3D space-filling points and the corresponding 2D projections generated by the three sampling methods are shown in Figures 7, 8, and 9.
To further demonstrate the good performance of TPSLE, its test criteria d, φ_p, and U are compared with those of LHSD in high dimensions, as shown in Table 3. The minimum distances between any two points of the high-dimensional sampling designs generated by TPSLE are all larger than those of LHSD, and the φ_p criterion is smaller. However, the potential energy U of the designs generated by LHSD is smaller in some cases, which indicates that different optimal sampling designs may be obtained under different optimality criteria. Overall, the comparison in Table 3 shows that the sampling designs generated by TPSLE perform better than those of LHSD in most high-dimensional cases.
In short, TPSLE obtains good space-filling and projective properties in comparison with LHSD and SLE under the different criteria d, CD, φ_p, and U.
3.3. Efficiency Study of TPSLE Algorithm
In this section, an illustrative comparison among the proposed TPSLE algorithm, the LHSD function in MATLAB, and the SLE algorithm of Zhu et al. [20] is provided to show the significant savings achieved by our method.
The time consumed by the sampling designs of the different algorithms is compared in Table 4; the computational time is measured on a PC with an Intel Core i3 3.3 GHz CPU. For sampling designs of various sizes, with one exception, the time of TPSLE is close to zero, which is more efficient than the SLE algorithm, especially for larger sample sizes. For some design sizes, TPSLE is even more efficient than LHSD. When the sampling dimension increases, the computational time increases rapidly but remains acceptable.

4. Application Study of TPSLE Algorithm
In this section, five mathematical examples listed in Appendix A and one engineering problem are used to study the validity of the TPSLE algorithm. The TPSLE algorithm is applied to construct metamodels and to solve an engineering design optimization problem.
4.1. Comparative Study Based on Metamodel Accuracy
Sampling designs are very important for constructing metamodels: poor sampling designs not only lead to poor metamodel accuracy but also reduce efficiency. In this paper, five widely accepted mathematical examples are employed to test the accuracy of metamodels built with the two sampling methods, LHSD and TPSLE. As one of the most effective approximation methods, radial basis function (RBF) interpolation [23–25] is a good choice for constructing metamodels or finding the global optima of computationally expensive functions with a limited number of sampling points. In this paper, RBF is used to construct the metamodels, with the multiquadric basis function.
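The paper does not list its RBF implementation, so the following sketch shows multiquadric RBF interpolation in its generic form, solving the linear system Phi w = y; the shape parameter c = 1.0 is an illustrative default, not a value from the paper.

```python
import numpy as np

def rbf_fit(X, y, c=1.0):
    """Fit a multiquadric RBF interpolant through (X, y): solve
    Phi w = y with Phi_ij = sqrt(||x_i - x_j||^2 + c^2)."""
    X = np.asarray(X, dtype=float)
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.sqrt(r2 + c ** 2), np.asarray(y, dtype=float))
    return X, w, c

def rbf_predict(model, Xnew):
    """Evaluate the fitted interpolant at new points."""
    X, w, c = model
    r2 = ((np.asarray(Xnew, float)[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.sqrt(r2 + c ** 2) @ w

# An interpolant reproduces the training responses exactly.
Xs = np.array([[0.0], [0.5], [1.0]])
ys = np.array([0.0, 0.25, 1.0])
model = rbf_fit(Xs, ys)
print(rbf_predict(model, Xs))  # -> approximately [0, 0.25, 1]
```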
To make a fair comparison of the two methods, the total number of sampling points is the same for each method in each test problem. As mentioned in the last section, 50 sampling procedures are conducted for each design method, so there are 50 sets of accuracy results for each sampling method. The accuracy measures NRMSE and NMAX [26, 27] (see Appendix B for definitions) summarized in Table 5 are average values; note that a value of zero for both measures would indicate a perfect fit.
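For concreteness, common normalized forms of these two error measures are sketched below; the paper's exact definitions are in its Appendix B, so the range-based normalization here is an assumption.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Root mean square error of the metamodel over the test points,
    normalized by the spread of the true responses (one common
    definition of NRMSE; the paper's exact form is in Appendix B)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

def nmax(y_true, y_pred):
    """Normalized maximum absolute error over the test points."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.abs(y_true - y_pred).max() / (y_true.max() - y_true.min())

# A perfect fit gives zero for both measures.
print(nrmse([0, 1, 2], [0, 1, 2]), nmax([0, 1, 2], [0, 1, 2]))  # -> 0.0 0.0
```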

From the results shown in Table 5, the values of both NRMSE and NMAX for the TPSLE algorithm are smaller than those for the LHSD function, which indicates that metamodels based on TPSLE sampling provide better approximations of the black-box functions. This improved metamodeling accuracy is attributed to the better space-filling and LHD projective properties achieved by the TPSLE method. Therefore, TPSLE performs better than the LHSD function provided by MATLAB in constructing metamodels.
4.2. Engineering Problem
The performance of the TPSLE algorithm is tested on a typical mechanical design optimization problem involving three design variables: pressure vessel design. This problem is modified from the original problem in [28–30]; the schematic of the pressure vessel is shown in Figure 10. A cylindrical pressure vessel with two hemispherical heads is to be designed for minimum fabrication cost. Three design variables are identified: the thickness of the head, the inner radius of the pressure vessel, and the length of the vessel without the heads, all given in inches.
The objective function is the combined cost of the materials, forming, and welding of the pressure vessel; the mathematical model of the optimization problem follows [28–30].
The ranges of the design variables are taken from the literature [30], where the minimum objective function value is reported as 7021.3.
The problem formulated above is a simple nonlinear constrained problem. Now assume that the cost objective function is computation-intensive, so the number of function evaluations should be reduced. Hence, a metamodel of the objective function is constructed by RBF, with the initial sampling points generated by the TPSLE algorithm; for comparison, the LHSD sampling function is also employed. The average optimal results over 50 runs are listed in Table 6, which shows that TPSLE outperforms LHSD in terms of both the minimum objective function value and the efficiency, that is, the number of design iterations: the TPSLE method requires 7.28 iterations on average to reach the optimum, whereas the LHSD method needs 11.8. Therefore, TPSLE performs better than the LHSD function in solving this engineering design optimization problem.

5. Conclusion
In this paper, a methodology for creating novel Latin hypercube designs via translational propagation and successive local enumeration (TPSLE) is proposed. The approach is inspired by the idea that a simple initial block with a few points, generated by the SLE algorithm, can be used as a building block to construct a near-optimal Latin hypercube design. The TPSLE algorithm offers a balanced tradeoff between efficiency and performance, that is, space-filling and projective properties. The greatest advantage of the proposed methodology is that it requires very little computational time: unlike existing LHD sampling methods, no global objective function is optimized. The performance of the sampling points generated by TPSLE is studied through comparison with other sampling methods under different test criteria, and the efficiency of the methods is compared as well.
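The translational-propagation principle can be illustrated with a minimal sketch. This is not the authors' TPSLE (which uses an SLE-generated multi-point seed and handles arbitrary design sizes); it uses the simplest possible seed, a single point, and replicates it with power-of-`nd` shifts chosen so that every column remains a permutation, which is the Latin property.

```python
import numpy as np

def tp_lhd(nv, nd):
    """Latin hypercube of nd**nv points in nv dimensions, built by
    translational propagation of a one-point seed.

    At stage i the current block is replicated nd times along dimension i;
    the j-th copy is shifted by j * step, where the step in each dimension
    is a distinct power of nd.  Each coordinate then becomes a mixed-radix
    expansion of the copy indices, so every column is a permutation of
    0 .. nd**nv - 1 (the Latin property holds by construction).
    """
    points = [np.zeros(nv, dtype=int)]   # seed: a single point at the origin
    for i in range(nv):                  # propagate along each dimension in turn
        step = np.array([nd ** ((k - i - 1) % nv) for k in range(nv)])
        points = [p + j * step for j in range(nd) for p in points]
    return np.array(points)              # integer levels 0 .. nd**nv - 1
```

For example, `tp_lhd(2, 3)` yields a 9-point design whose first column visits levels 0, 3, 6, 1, 4, 7, 2, 5, 8 while the second counts 0 through 8, so both columns are permutations. The full TPSLE replaces the one-point seed with a small SLE-optimized block, which is what gives the resulting design its better space-filling behavior.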
It is found that the space-filling and projective properties of the sampling points generated by TPSLE are better than those of LHSD in most cases. Although TPSLE does not match SLE in the quality of the sampling points, its efficiency is far superior. TPSLE is thus a novel LHD sampling algorithm with acceptable space-filling and projective properties and superior efficiency. To further examine the validity of the proposed algorithm, five typical mathematical examples and one mechanical design optimization problem have been tested, using NRMSE and NMAX as accuracy measures for the metamodels. Compared with the traditional LHSD sampling method, TPSLE results in more accurate metamodels, and it is also superior in locating the global minimum of metamodels in engineering design optimization.
The proposed sampling algorithm TPSLE offers a sensible tradeoff between the performance and the efficiency of sampling design and significantly outperforms the conventional sampling methods. However, some shortcomings remain. First, the performance of the sampling points in high dimensions is sometimes unsatisfactory. Second, TPSLE cannot directly construct sampling designs of arbitrary size. These problems will be addressed in future work.
Appendices
A. Mathematical Examples
Branin function (BR):
$$f(\mathbf{x}) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10.$$
Alpine function (AF):
$$f(\mathbf{x}) = \sum_{i=1}^{n} \left| x_i \sin x_i + 0.1 x_i \right|.$$
Peaks function:
$$f(x_1, x_2) = 3(1 - x_1)^2 e^{-x_1^2 - (x_2 + 1)^2} - 10\left(\frac{x_1}{5} - x_1^3 - x_2^5\right) e^{-x_1^2 - x_2^2} - \frac{1}{3} e^{-(x_1 + 1)^2 - x_2^2}.$$
Hartman function (HN):
$$f(\mathbf{x}) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{n} a_{ij} \left(x_j - p_{ij}\right)^2\right),$$
where $c_i$, $a_{ij}$, and $p_{ij}$ are the standard Hartman parameters.
MATH function:
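For reference, the conventional forms of three of these benchmarks (Branin, Alpine, and the MATLAB peaks function) can be evaluated as follows. These are the standard textbook definitions, assumed here since the printed formulas did not survive extraction; the Hartman parameter tables and the MATH function are omitted.

```python
import numpy as np

def branin(x1, x2):
    """Branin function; global minimum ~0.3979, attained e.g. at (pi, 2.275)."""
    return ((x2 - 5.1 * x1**2 / (4 * np.pi**2) + 5 * x1 / np.pi - 6) ** 2
            + 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10)

def alpine(x):
    """Alpine function; f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.abs(x * np.sin(x) + 0.1 * x)))

def peaks(x1, x2):
    """MATLAB peaks function, a common 2-D multimodal test surface."""
    return (3 * (1 - x1) ** 2 * np.exp(-x1**2 - (x2 + 1) ** 2)
            - 10 * (x1 / 5 - x1**3 - x2**5) * np.exp(-x1**2 - x2**2)
            - np.exp(-(x1 + 1) ** 2 - x2**2) / 3)
```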
B. Metamodel Accuracy Measures
Root mean squared error (RMSE):
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y(\mathbf{x}_i) - \hat{y}(\mathbf{x}_i) \right)^2},$$
where $n$ is the number of additional testing points and $y(\mathbf{x}_i)$ and $\hat{y}(\mathbf{x}_i)$ are the true function value and the predicted metamodel value at the $i$th testing point $\mathbf{x}_i$, respectively.
Normalized root mean squared error (NRMSE):
$$\mathrm{NRMSE} = \sqrt{\frac{\sum_{i=1}^{n} \left( y(\mathbf{x}_i) - \hat{y}(\mathbf{x}_i) \right)^2}{\sum_{i=1}^{n} y(\mathbf{x}_i)^2}}.$$
RMSE measures only the absolute error of each function; NRMSE allows the metamodel errors of different functions to be compared.
Maximum absolute error:
$$\mathrm{MAX} = \max_{1 \le i \le n} \left| y(\mathbf{x}_i) - \hat{y}(\mathbf{x}_i) \right|.$$
Normalized maximum absolute error:
$$\mathrm{NMAX} = \frac{\max_{1 \le i \le n} \left| y(\mathbf{x}_i) - \hat{y}(\mathbf{x}_i) \right|}{\bar{y}},$$
where $\bar{y}$ is the mean of the actual function values at the test points.
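The four measures can be computed together from the true and predicted values at the testing points. The normalizations below follow common metamodeling conventions (sum of squares of the true values for NRMSE, mean of the true values for NMAX, as the text describes); since the printed formulas did not survive extraction, treat them as assumed conventions rather than the paper's exact definitions.

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """Return (RMSE, NRMSE, MAX, NMAX) over a set of testing points."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))                      # absolute error scale
    nrmse = np.sqrt(np.sum(err ** 2) / np.sum(y_true ** 2))  # scale-free version
    mx = np.max(np.abs(err))                               # worst-case error
    nmax = mx / np.mean(y_true)                            # normalized by mean response
    return rmse, nrmse, mx, nmax
```

For instance, with true values (1, 2, 3, 4) and predictions (1, 2, 3, 5), RMSE is 0.5, MAX is 1, and NMAX is 1/2.5 = 0.4.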
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank everybody for their encouragement and support. The support grants from the National Science Foundation (CMMI51375389 and 51279165) are gratefully acknowledged.
References
1. L. Gu, "A comparison of polynomial based regression models in vehicle safety analysis," in Proceedings of the ASME Design Engineering Technical Conference and Computers and Information in Engineering Conference (DAC '01), pp. 509–514, September 2001.
2. P. N. Koch, T. W. Simpson, J. K. Allen, and F. Mistree, "Statistical approximations for multidisciplinary design optimization: the problem of size," Journal of Aircraft, vol. 36, no. 1, pp. 275–286, 1999.
3. T. J. Mitchell, "An algorithm for the construction of 'D-optimal' experimental designs," Technometrics, vol. 16, no. 2, pp. 203–210, 1974.
4. R. H. Myers and D. C. Montgomery, Response Surface Methodology: Process and Product Optimization Using Designed Experiments, John Wiley & Sons, 1995.
5. W. Chen, A Robust Concept Exploration Method for Configuring Complex Systems [Ph.D. thesis], Mechanical Engineering, Georgia Institute of Technology, Atlanta, Ga, USA, 1995.
6. M. E. Johnson, L. M. Moore, and D. Ylvisaker, "Minimax and maximin distance designs," Journal of Statistical Planning and Inference, vol. 26, no. 2, pp. 131–148, 1990.
7. M. D. Morris and T. J. Mitchell, "Exploratory designs for computational experiments," Journal of Statistical Planning and Inference, vol. 43, no. 3, pp. 381–402, 1995.
8. T. W. Simpson, J. D. Peplinski, P. N. Koch, and J. K. Allen, "Metamodels for computer-based engineering design: survey and recommendations," Engineering with Computers, vol. 17, no. 2, pp. 129–150, 2001.
9. G. Rennen, B. Husslage, E. R. van Dam, and D. den Hertog, "Nested maximin Latin hypercube designs," Structural and Multidisciplinary Optimization, vol. 41, no. 3, pp. 371–395, 2010.
10. G. Rennen, E. R. van Dam, and D. den Hertog, Space-Filling Latin Hypercube Designs for Computer Experiments, Tilburg University, 2006.
11. J. R. Koehler and A. B. Owen, "Computer experiments," in Handbook of Statistics, vol. 13, pp. 261–308, 1996.
12. K. Q. Ye, W. Li, and A. Sudjianto, "Algorithmic construction of optimal symmetric Latin hypercube designs," Journal of Statistical Planning and Inference, vol. 90, no. 1, pp. 149–159, 2000.
13. R. Jin, W. Chen, and A. Sudjianto, "An efficient algorithm for constructing optimal design of computer experiments," Journal of Statistical Planning and Inference, vol. 134, no. 1, pp. 268–287, 2005.
14. S. J. Bates, J. Sienz, and V. V. Toropov, "Formulation of the optimal Latin hypercube design of experiments using a permutation genetic algorithm," AIAA Journal, vol. 2011, pp. 1–7, 2004.
15. M. Liefvendahl and R. Stocki, "A study on algorithms for optimization of Latin hypercubes," Journal of Statistical Planning and Inference, vol. 136, no. 9, pp. 3231–3247, 2006.
16. A. Grosso, A. R. M. J. U. Jamali, and M. Locatelli, "Finding maximin Latin hypercube designs by iterated local search heuristics," European Journal of Operational Research, vol. 197, no. 2, pp. 541–547, 2009.
17. A. Jourdan and J. Franco, "Optimal Latin hypercube designs for the Kullback-Leibler criterion," AStA Advances in Statistical Analysis, vol. 94, no. 4, pp. 341–351, 2010.
18. E. R. van Dam, B. Husslage, D. den Hertog et al., "Maximin Latin hypercube designs in two dimensions," Operations Research, vol. 55, no. 1, pp. 158–169, 2007.
19. F. A. C. Viana, G. Venter, and V. Balabanov, "An algorithm for fast optimal Latin hypercube design of experiments," International Journal for Numerical Methods in Engineering, vol. 82, no. 2, pp. 135–156, 2010.
20. H. Zhu, L. Liu, T. Long, and L. Peng, "A novel algorithm of maximin Latin hypercube design using successive local enumeration," Engineering Optimization, vol. 44, no. 5, pp. 551–564, 2012.
21. F. J. Hickernell, "A generalized discrepancy and quadrature error bound," Mathematics of Computation, vol. 67, no. 221, pp. 299–322, 1998.
22. A. A. Mullur and A. Messac, "Extended radial basis functions: more flexible and effective metamodeling," AIAA Journal, vol. 43, no. 6, pp. 1306–1315, 2005.
23. G. S. Babu and S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 2, pp. 194–206, 2013.
24. W. Yao, X. Q. Chen, Y. Y. Huang, and M. van Tooren, "A surrogate-based optimization method with RBF neural network enhanced by linear interpolation and hybrid infill strategy," Optimization Methods & Software, vol. 29, no. 2, pp. 406–429, 2014.
25. N. Vuković and Z. Miljković, "A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation," Neural Networks, vol. 46, pp. 210–226, 2013.
26. R. Jin, X. Du, and W. Chen, "The use of metamodeling techniques for optimization under uncertainty," Structural and Multidisciplinary Optimization, vol. 25, no. 2, pp. 99–116, 2003.
27. A. A. Mullur and A. Messac, "Metamodeling using extended radial basis functions: a comparative approach," Engineering with Computers, vol. 21, no. 3, pp. 203–217, 2006.
28. L. D. S. Coelho, "Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems," Expert Systems with Applications, vol. 37, no. 2, pp. 1676–1683, 2010.
29. Y. J. Cao and Q. H. Wu, "Mechanical design optimization by mixed-variable evolutionary programming," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 443–446, April 1997.
30. X. Wei, Y. Wu, and L. Chen, "A global optimization algorithm based on incremental metamodel method," China Mechanical Engineering, vol. 24, no. 5, pp. 623–627, 2013.
Copyright
Copyright © 2014 Guang Pan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.