Abstract

To truly reflect ship performance under the influence of uncertainties, uncertainty-based design optimization (UDO) for ships, which fully considers various uncertainties in the early design stage, has gradually received more attention. At the same time, it brings high-dimensionality problems, which may make the optimization inefficient or even impractical. Sensitivity analysis (SA) is a feasible way to alleviate this problem: it qualitatively or quantitatively evaluates the influence of each uncertain model input on the model output, so that uninfluential uncertain variables can be identified and fixed to achieve dimension reduction. In this paper, polynomial chaos expansions (PCE), which require less computational cost, are chosen to obtain Sobol' global sensitivity indices directly from the polynomial coefficients; that is, once the polynomial of the output variable is established, computing the sensitivity indices is merely postprocessing of the polynomial coefficients. In addition, to further reduce the computational cost of solving the PCE coefficients, an improved probabilistic collocation method (IPCM) based on the linear independence principle is proposed, which exploits the properties of orthogonal polynomials to reduce the number of sample points. Finally, the proposed method is applied to the UDO of a bulk carrier preliminary design to ensure the robustness and reliability of the ship.

1. Introduction

Ship uncertainty-based design optimization (UDO) is a method of searching the design space according to robustness and reliability requirements under the influence of uncertainty [1, 2]; it includes robust design optimization (RDO) and reliability-based design optimization (RBDO). Papanikolaou et al. [3] considered uncertainties related to a ship's seakeeping responses and wave-induced loads and presented recent advances in modelling the combined hydrodynamic responses of ship structures using cross-spectral combination methods, as well as uncertainty models for modern decision support systems that provide guidance to ships' masters. Recently, Plessas and Papanikolaou [4] developed a procedure that may be used as a Decision Support Tool (DST) for interested ship investors. It includes a Life Cycle Assessment (LCA) model that accounts for uncertainties of the most dominant economic parameters in tanker operation and demonstrates the optimization of alternative hull/engine/propeller setups for a defined operational scenario of a tanker, covering the ship's operational profile in calm seas and representative weather conditions as well as all relevant safety and efficiency regulatory and technical constraints. Hou et al. [5] took an Autonomous Underwater Vehicle (AUV) as the research object, the resistance as the optimization objective, and the characteristic parameters of the AUV as uncertainties, and then evaluated the performance of the AUV with the 6-sigma design criterion to ensure its operational reliability under environmental interference. After that, 4 different forms of uncertainty and their applications in RDO were considered [6], where the standard Wigley ship model was taken as the research object and the EEOI of the whole route was used as the optimization objective to verify the feasibility and superiority of the method. The same method was also applied to the hull form optimization of a bulk carrier [7].

In addition, Matteo Diez and his team have led research on ship uncertainty-based design optimization for years. They applied the PSO algorithm to the robust concept design optimization of a bulk carrier [8] and proposed a two-stage ship design optimization method considering uncertainty [9]. They then systematically demonstrated RDO, introduced an integral-form uncertainty quantification method, and compared various robust optimization formulations to verify the superiority of the method through a robust optimization example of a bulk carrier [10]. For epistemic uncertainty, a Bayesian method was proposed to quantify this kind of uncertainty, while probability density functions were still used to model aleatory uncertainty [11]. Subsequently, a variable-accuracy metamodel-based architecture was proposed to improve the efficiency of multidisciplinary robust design optimization and applied to the hydrostatic resistance optimization of the DTMB5415 model [12]. In order to establish the probability distributions of the uncertain variables accurately, Diez and his team obtained a large amount of operational data through a two-month data collection campaign on five sister ships and derived probability distributions of displacement and speed [13], based on which modifications of a small portion of the hull were proposed in order to significantly increase the performance of the hull and decrease the operating cost of the ship.

When the ship design process is tackled, uncertain variables of the mission profile, main dimensions, etc., need to be defined. In order to truly reflect ship performance under the influence of uncertainties, designers gradually consider more and more uncertainties in the ship UDO process to obtain a more robust and reliable solution. It should be acknowledged that more uncertainties help to simulate the real operating environment, but the high-dimensional problems that come with them also raise considerable challenges. For UDO, the uncertainty must be quantified for every case; therefore, adding another uncertain parameter will inevitably increase the computational burden and lower the optimization efficiency. Sensitivity analysis (SA), as a dimension reduction technique, is a feasible way to alleviate this problem and has been adopted for this purpose. SA can qualitatively or quantitatively evaluate the influence of each uncertain model input on the model output, so that uninfluential uncertain variables can be identified and fixed to achieve dimension reduction. SA methods can be divided into local sensitivity analysis (LSA) methods and global sensitivity analysis (GSA) methods. The former investigates the effects of variations of the input factors in the vicinity of nominal values, whereas the latter aims to quantify the output uncertainty due to variations of the input factors over their entire domain. Unlike LSA, GSA does not require the model to be linear and also accounts for interactions between model parameters, so it is currently more widely applied.

Among GSA methods, there are the Fourier amplitude sensitivity test (FAST), extended FAST (EFAST), random balance design (RBD), the variance-based Sobol' global sensitivity method, etc. Among these, GSA with Sobol' sensitivity indices is of interest here [14]; it belongs to the broader class of variance-based methods. These methods do not assume any kind of linearity or monotonicity of the model and rely upon the decomposition of the response variance into a sum of contributions of each input factor or combinations thereof, that is, partial variances. Various methods have been investigated for computing Sobol' indices; among them, the Monte Carlo (MC) method is still the most commonly used for sampling and estimating the integrals in the indices. A prominent drawback of this classical method, however, is that it often requires thousands of model evaluations, which demands substantial computational resources and makes it impractical in terms of time consumption. Therefore, it is infeasible to use the MC method directly; in other words, a method as accurate as MC but with lower computational cost is desired.

In recent years, PCE has gradually been introduced in this field. When the PCE method is used to perform SA in UDO, the polynomial coefficients not only yield the stochastic properties of the outputs required in UDO (mean, standard deviation, skewness, and kurtosis), but also directly yield Sobol' sensitivity indices. In other words, once a PCE representation is available, calculating Sobol' indices is only postprocessing of the PCE coefficients; therefore, the indices can be calculated analytically at almost no additional computational cost. This approach was originally introduced by Sudret [15] and applied to replace the traditional Sobol' MC method for calculating Sobol' indices. In the field of stochastic modelling of subsurface flow and mass transport, PCE has been widely used and proved able to perform SA comprehensively and reliably at low computational cost. Fajraoui et al. [16] and Younes et al. [17] applied PCE-based global SA to flow and mass transport in a heterogeneous porous medium and established the transient effect of uncertain flow boundary conditions, hydraulic conductivities, and dispersivities; in radionuclide transport simulation in aquifers, Ciriello et al. [18] analyzed the statistical moments of the peak solute concentration measured at a specific location; Sochala and Le Maitre [19] considered the uncertain effects of soil parameters on 3 different physical models of subsurface unsaturated flow; Formaggia et al. [20] applied PCE-based sensitivity indices to evaluate the influence of the uncertainty in hydrogeological variables on a basin-scale sedimentation process; and Deman et al. [21] combined the computation of Sobol' indices with a sparse PCE method to effectively analyze the influence of groundwater input parameter uncertainty over the life cycle on the model outputs.

Compared with the MC method, the PCE method can greatly reduce the computational cost; however, there is still potential to reduce the amount of calculation further. In the process of solving the polynomial coefficients of PCE, the probabilistic collocation method (PCM) is used instead of common statistical methods, such as Latin hypercube sampling, which select a large number of sample points to maintain the calculation accuracy. Generally, the number of sample points should be greater than the number of undetermined coefficients. However, unlike statistical methods, owing to the orthogonality of the polynomials, the input sample points of PCM are not selected randomly but according to certain rules, which may lead to fewer sample points. Xiu et al. [22] discussed the use of PCM to solve elliptic stochastic partial differential equations; Foo et al. [23] used PCM to study a three-dimensional problem with random loads and spatially variable material; Li et al. [24] applied PCM to the simulation of groundwater seepage and solute transport; Isukapalli [25, 26] suggested that the number of collocation points should be twice the number of undetermined coefficients and then used the regression method to calculate the coefficients. Unfortunately, Jiang et al. [27] found that sometimes increasing the number of collocation points to more than 8 times the number of undetermined coefficients still cannot maintain the calculation accuracy. In order to deal with this imbalance between the number of sample points and the calculation accuracy, an improved probabilistic collocation method (IPCM) based on the linear independence principle is adopted, which gives the optimal number of collocation points by comparing the rank of the coefficient matrix with the number of collocation points. This reduces the collocation points needed to solve the polynomial coefficients of PCE, thereby improving the efficiency of the uncertainty-based optimization.

This paper takes ship uncertainty-based design as the research object, analyzes the influence of multiple random variables on the optimization objective and the constraints, and proposes a ship uncertainty-based design method built on multidimensional polynomial chaos expansions. The method is finally applied to the practical engineering optimization design of a bulk carrier to ensure the robustness and reliability of the ship.

The authors have done some previous work on ship UDO [28]. In contrast to that work, this paper uses Sobol' indices based on PCE for dimension reduction, reduces the number of sample points by using an improved probabilistic collocation method, and solves the failure probability via the maximum entropy method (MEM). The paper is organized as follows. In Section 2, a brief outline of uncertainty-based design optimization and its formulation is presented. Section 3 introduces PCE theory and summarizes the process of solving the polynomial; moreover, it provides the concepts of SA with Sobol' indices and the computation of those indices using PCE. In Section 4, the basic theory of PCM is introduced and its improvement based on the linear independence principle is given, with numerical examples to verify its feasibility. Finally, the proposed method is applied to the uncertainty-based optimization of the bulk carrier preliminary design in Section 5, and the conclusions are given in Section 6.

2. Uncertainty-Based Design Optimization

According to different design requirements, uncertainty-based design optimization can be divided into robust design optimization, reliability-based design optimization, and reliability-based robust design optimization.

2.1. Robust Design Optimization (RDO)

Compared with traditional design optimization, RDO considers the interference of external factors on the outputs, and its solutions are not easily perturbed by such factors.

As shown in Figure 1, point 1 is the optimal point of deterministic optimization (DO) while point 2 is the optimal point of RDO. If the influence of uncertain factors is not considered, the objective function value at point 1 is the minimum, so it should be the optimal solution. However, when the design variable is perturbed within a given range, the resulting perturbation of the objective function at point 1 violates the constraints, resulting in failure of the product. The corresponding perturbation at point 2 is obviously smaller, and all the objective outputs remain within the constraints. Therefore, the robust optimal solution is a relatively stable optimal solution.

RDO not only improves the objective performance but also reduces its sensitivity to the uncertainty. Therefore, considering the influence of the uncertainty, its typical measurement indexes are the mean and the standard deviation of the objective function. Its mathematical expression can be concisely written as

where $\mu_f$ and $\sigma_f$ are the mean and the standard deviation of the objective function, respectively, $g_i(\mathbf{x})$ is the $i$th inequality constraint, and $h_j(\mathbf{x})$ is the $j$th equality constraint.
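For concreteness, a representative weighted-sum form of the RDO problem, consistent with the quantities just defined, is sketched below; the weights $w_1$, $w_2$ and the exact layout of (1) are assumptions of this sketch rather than the paper's original expression:

```latex
\begin{aligned}
\min_{\mathbf{x}} \quad & w_1\,\mu_f(\mathbf{x}) + w_2\,\sigma_f(\mathbf{x}) \\
\text{s.t.} \quad & g_i(\mathbf{x}) \le 0, \qquad i = 1, \dots, m, \\
                  & h_j(\mathbf{x}) = 0,   \qquad j = 1, \dots, l, \\
                  & \mathbf{x}^{L} \le \mathbf{x} \le \mathbf{x}^{U}.
\end{aligned}
```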

2.2. Reliability-Based Design Optimization (RBDO)

The core of RBDO is to deal with the influence of the uncertainty on the constraints, and its mathematical expression is as follows:

where $P\{g_i(\mathbf{x}) \le 0\}$ denotes the probability that the $i$th inequality constraint is less than or equal to 0, $R_i$ denotes the reliability that the $i$th constraint is required to reach, and $\mu_{h_j}$ is the mean of the $j$th equality constraint.
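A corresponding sketch of the RBDO formulation, using the probabilities and target reliabilities described above (again only a representative form, not necessarily identical to the paper's equation (2)):

```latex
\begin{aligned}
\min_{\mathbf{x}} \quad & f(\mathbf{x}) \\
\text{s.t.} \quad & P\{\,g_i(\mathbf{x}, \boldsymbol{\xi}) \le 0\,\} \ge R_i, \qquad i = 1, \dots, m, \\
                  & \mu_{h_j}(\mathbf{x}, \boldsymbol{\xi}) = 0, \qquad\qquad\;\; j = 1, \dots, l.
\end{aligned}
```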

2.3. Reliability-Based Robust Design Optimization

By changing the constraints in RDO into probabilistic constraints, reliability-based robust design optimization (RBRDO) takes the influence of the uncertainty on both the objective and the constraints into account simultaneously. Its mathematical expression is as follows:

3. Polynomial Chaos Expansions for Sensitivity Analysis

3.1. Polynomial Chaos Expansions (PCE)

Compared with the MC method, PCE [29] can quantify the stochastic properties of the output under the influence of multiple uncertain parameters at a lower computational cost, which gives it an advantage in ship uncertainty-based design optimization.

PCE constructs the random output as a surrogate model that preserves its randomness. For a random output variable $Y$, it can be approximated as

where $\boldsymbol{\xi} = (\xi_1, \ldots, \xi_n)$ are independent $n$-dimensional random input variables, $Y$ is the random output variable, $c_j$ are the coefficients of the polynomial chaos expansion terms, and $\Psi_j(\boldsymbol{\xi})$ is the multidimensional orthogonal polynomial with respect to $\boldsymbol{\xi}$, which satisfies

In order to estimate the uncertainty of the outputs, (4) is truncated at a certain order $p$; therefore, the $p$th-order polynomial chaos expansion of $Y$ is

where the total number of PCE terms is $P = (n+p)!/(n!\,p!)$. Since the $n$-dimensional random variables are mutually independent, the multidimensional orthogonal polynomial can be decomposed into the product of one-dimensional orthogonal polynomials:

where $\phi_{l_t}(\xi_t)$ is the $l_t$th-order one-dimensional orthogonal polynomial with respect to $\xi_t$.

In summary, in order to construct the multidimensional orthogonal polynomial chaos expansion, the expansion coefficients $c_j$ and the one-dimensional orthogonal polynomial basis need to be obtained. The construction of the one-dimensional orthogonal polynomial basis for different probability distributions can be found in [29]; the multidimensional polynomial $\Psi_j(\boldsymbol{\xi})$ is then obtained from (7). As for the expansion coefficients $c_j$, given a set of input sample points and the corresponding outputs, they can be calculated using

where $\mathbf{A}$ is the polynomial coefficient matrix (the basis polynomials evaluated at the sample points), $\mathbf{c}$ is the coefficient vector, and $\mathbf{Y}$ is the output vector. Meanwhile, after the coefficients are obtained, according to the orthogonality of the polynomials, the mean and the standard deviation of the output are

Higher statistical moments can be calculated similarly.
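As an illustration of this regression step, the following minimal Python sketch solves the least-squares system for the coefficients and then reads off the mean and standard deviation from them. It assumes an orthonormal Legendre basis for inputs uniform on [-1, 1]; the paper's actual basis depends on the input distributions, and the toy function is purely illustrative.

```python
import numpy as np
from numpy.polynomial import legendre
from itertools import product

def legendre_orthonormal(order, x):
    """Orthonormal Legendre polynomial of the given order, for inputs uniform on [-1, 1]."""
    c = np.zeros(order + 1)
    c[order] = 1.0
    return legendre.legval(x, c) * np.sqrt(2 * order + 1)

def multi_indices(n_vars, p):
    """All multi-indices with total degree <= p; there are P = (n+p)!/(n! p!) of them."""
    return [m for m in product(range(p + 1), repeat=n_vars) if sum(m) <= p]

def fit_pce(samples, outputs, p):
    """Least-squares fit of the PCE coefficients: solve A c = Y in the least-squares sense."""
    idx = multi_indices(samples.shape[1], p)
    A = np.array([[np.prod([legendre_orthonormal(m, x) for m, x in zip(mi, xi)])
                   for mi in idx] for xi in samples])
    coeffs, *_ = np.linalg.lstsq(A, outputs, rcond=None)
    return idx, coeffs

# Toy example: Y = 1 + xi1 + 0.5*xi1*xi2 with xi1, xi2 uniform on [-1, 1]
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Y = 1.0 + X[:, 0] + 0.5 * X[:, 0] * X[:, 1]
idx, c = fit_pce(X, Y, p=2)

# With an orthonormal basis: mean = coefficient of the constant term,
# variance = sum of the squares of all other coefficients
mean = c[idx.index((0, 0))]
std = np.sqrt(sum(ck**2 for mi, ck in zip(idx, c) if mi != (0, 0)))
print(mean, std)
```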

3.2. Sobol' Decomposition and Indices

First, an input random vector $\mathbf{x} = (x_1, \ldots, x_n)$ with joint PDF $p(\mathbf{x})$ is assumed. Due to the assumption of independent components of $\mathbf{x}$, $p(\mathbf{x}) = \prod_{i=1}^{n} p_i(x_i)$, where $p_i(x_i)$ is the marginal PDF of $x_i$. The square-integrable function $f(\mathbf{x})$ can be decomposed into summands of its main effects and interactions as

where $f_0$ is a constant and $1 \le i_1 < \cdots < i_s \le n$; $\mathbf{x}_{\mathbf{u}}$, with $\mathbf{u} = \{i_1, \ldots, i_s\}$, represents the subvector of $\mathbf{x}$ containing only those components whose indices belong to $\mathbf{u}$.

The Sobol' decomposition is unique under the condition that the integral of each expansion term with respect to any one of its own variables is 0, as in

Equation (12) leads to the orthogonality property:

Squaring and integrating both sides of (11) gives

The left side of (14) is the total variance $D$ of the output, and the terms on the right side are the partial variances of the output associated with the corresponding variables; the first-order partial variance is denoted by $D_i$ and the second-order partial variance by $D_{ij}$. Meanwhile, the uniqueness and orthogonality properties allow the variance of $f(\mathbf{x})$ to be decomposed as

where $D_{\mathbf{u}}$ denotes the partial variance associated with the subset $\mathbf{u}$. Then the Sobol' index is defined as

where $S_{\mathbf{u}} = D_{\mathbf{u}}/D$. The first-order index $S_i$ describes the influence of each variable $x_i$ considered separately (its main effect). The second-order index $S_{ij}$ describes the effect of the interaction between the variables $x_i$ and $x_j$. Higher-order indices can be obtained similarly. However, it is obviously impossible to consider all higher-order effects; therefore, in practical applications, the higher-order effects are often measured by the total effect sensitivity index $S_i^{T}$ of each variable, accounting for its main effect and all interactions with other parameters.
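In compact form, and consistent with the definitions above, the decomposition, the variance split, and the indices can be summarized as follows (a restatement in standard notation):

```latex
f(\mathbf{x}) = f_0 + \sum_{i} f_i(x_i) + \sum_{i<j} f_{ij}(x_i, x_j) + \cdots + f_{12\ldots n}(\mathbf{x}),
\qquad
D = \sum_{i} D_i + \sum_{i<j} D_{ij} + \cdots + D_{12\ldots n},

S_{\mathbf{u}} = \frac{D_{\mathbf{u}}}{D},
\qquad
S_i^{T} = \sum_{\mathbf{u} \,\ni\, i} S_{\mathbf{u}},
\qquad
\sum_{\mathbf{u}} S_{\mathbf{u}} = 1 .
```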

3.3. Sobol’ Indices from Polynomial Chaos Expansions

As mentioned in Section 3.1, each PCE term $\Psi_j(\boldsymbol{\xi})$ is a product of one-dimensional polynomials $\phi_{l_t}(\xi_t)$ whose degrees form the multi-index $(l_1, \ldots, l_n)$. Let us define $\mathcal{I}_{\mathbf{u}}$ as the set of multi-indices such that

only the degrees $l_t$ with $t \in \mathbf{u}$ are nonzero. Note that $\mathcal{I}_{\mathbf{u}}$ corresponds to the polynomials depending only on the variables in $\mathbf{u}$; for example, the polynomials depending only on the first variable correspond to multi-indices of the form $(l_1, 0, \ldots, 0)$ with $l_1 > 0$. Therefore, the terms in (6) may now be gathered according to the parameters they depend on:

Comparing (18) with (11), the summands in the Sobol' decomposition can be read off straightforwardly:

It is now easy to derive sensitivity indices from the above representation. According to (10), (11), (16), and (19), PCE-based Sobol’ indices are defined as
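In code, this postprocessing amounts to grouping the squared coefficients by the set of variables on which each basis polynomial depends. A minimal sketch, reusing the multi-index and orthonormal-coefficient conventions assumed in the sketch of Section 3.1, could look like this:

```python
import numpy as np
from collections import defaultdict

def sobol_from_pce(indices, coeffs):
    """Sobol' indices from the coefficients of an orthonormal PCE.

    indices : list of multi-index tuples (degree of each variable in each term)
    coeffs  : array of the corresponding PCE coefficients
    """
    partial = defaultdict(float)                  # partial variances D_u, keyed by the active set u
    for mi, ck in zip(indices, coeffs):
        u = tuple(i for i, deg in enumerate(mi) if deg > 0)
        if u:                                     # skip the constant term (it carries the mean)
            partial[u] += ck**2
    D = sum(partial.values())                     # total variance of the PCE surrogate
    S = {u: Du / D for u, Du in partial.items()}  # S_u = D_u / D
    n_vars = len(indices[0])
    S_total = [sum(s for u, s in S.items() if i in u) for i in range(n_vars)]
    return S, S_total

# Example, reusing idx and c from the regression sketch in Section 3.1:
# S, S_total = sobol_from_pce(idx, c)
```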

3.4. Numerical Examples
3.4.1. Polynomial Model

First, consider a simple polynomial function:

where the input variables are independently and uniformly distributed. The analytical solutions of the Sobol' indices [14] are

This numerical example is carried out for a case in which the order of the polynomial model is 6. Therefore, the exact values of the sensitivity indices are

In practice, the true form of the model output is unknown; thus, the order of the PCE has to be chosen a priori. Therefore, 4 cases are considered, with expansion orders of 3, 4, 5, and 6, and the sensitivity indices under the different orders are listed in Table 1.

As shown in Table 1, compared with the analytical solution, a 3rd-order PCE already provides a good estimation of the sensitivity indices, since the relative error is less than 0.2% for the first-order indices and 1.2% for the second-order indices. For the third-order indices, the relative error is larger, reaching 17.6%. When the expansion order is 4, these relative errors all drop to 1% or less. With a 5th-order expansion, the relative error of the first-order indices becomes practically 0, while that of the second- and third-order indices decreases to 0.1%. When the order of the PCE reaches 6, the true order of the function, the relative error is 0 and the exact values are obtained.

3.4.2. Sobol’ Function

Then the Sobol' function is considered:

where the input variables are independently and uniformly distributed. The analytical solution of the Sobol' indices [30] is

In this case, 8 input variables are selected with prescribed function coefficients. At this time, the order of the Sobol' function is 8. If an 8th-order PCE with 8 variables is performed, there are 12870 coefficients to be calculated, leading to a high calculation cost. In order to reduce the computational burden, a 2nd-order PCE with 45 coefficients is used instead. The resulting first-order and total sensitivity indices, $S_i$ and $S_i^{T}$, are reported in Table 2 and Figure 2.
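As a reference for such comparisons, the analytical first-order indices of the classical Sobol' g-function can be computed directly; whether the paper uses exactly this form and these coefficient values is an assumption of the sketch below:

```python
import numpy as np

def g_function_first_order_indices(a):
    """Analytical first-order Sobol' indices of g(x) = prod_i (|4x_i - 2| + a_i)/(1 + a_i),
    with x_i ~ U(0, 1): partial variances D_i = 1/(3(1+a_i)^2), total D = prod(1 + D_i) - 1."""
    a = np.asarray(a, dtype=float)
    D_i = 1.0 / (3.0 * (1.0 + a) ** 2)
    D = np.prod(1.0 + D_i) - 1.0
    return D_i / D

# A common (assumed) choice of coefficients: small a_i -> influential variable,
# so the first variables dominate the output variance
a = np.array([0, 1, 4.5, 9, 99, 99, 99, 99])
print(g_function_first_order_indices(a))
```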

It can be seen that the first 4 variables have the main influence on the output, whether judged from $S_i$ or $S_i^{T}$. Therefore, the last 4 variables are fixed at their mean values while the first 4 variables keep their original uniform distributions. This simplified model is used to replace the original function for analysis with different expansion orders, and the results are shown in Table 3.

Compared with analytical solutions, some conclusions can be drawn.

(1) When the expansion order is low, the relative errors of the sensitivity indices are large and the maximum error reaches nearly 11%. However, as the expansion order increases, the sensitivity indices gradually converge to the exact values. Specifically, for the two higher-order expansions considered, the relative errors of $S_i$ and $S_i^{T}$ are less than 3% and 1%, respectively, which shows that the simplified model is able to replace the original model with guaranteed accuracy.

(2) For the latter 2 cases, the numbers of model evaluations are 171 (45+126) and 375 (45+330), respectively, which is a significant reduction compared with the 8th-order PCE with 8 variables (12870 evaluations). Therefore, the simplified model can greatly reduce the computational cost of SA under the premise of ensuring accuracy, so this method can be applied in practical applications.

In conclusion, PCE-based Sobol' indices have proven efficient and accurate for SA. The computation of Sobol' indices from a proper expansion is precise enough, while the computational cost is transferred to obtaining the PCE coefficients, the subsequent postprocessing being almost costless.

4. Probabilistic Collocation Method (PCM)

As mentioned in Section 3, a set of sampling points needs to be generated to solve the coefficients of the PCE. The number of sampling points determines how many times the output must be evaluated, which directly affects the calculation cost. In addition, the distribution of the sampling points also affects the fitting quality of the PCE and thus the accuracy of the subsequent results (statistical moments). Due to the orthogonality of the PCE basis, the probabilistic collocation method (PCM) is adopted instead of commonly used statistical sampling methods (LHS, etc.).

4.1. Implementation of PCM

Considering a one-dimensional random variable $\xi$, the residual between the truncated model and the actual model is defined as

The coefficients can be calculated by the weighted residual method; the residual should therefore be orthogonal to each one-dimensional orthogonal polynomial:

This orthogonality condition can be solved by using the Gaussian quadrature approximation:

where $w_k$ and $\xi_k$ are the weights and abscissas of the Gaussian integration points, respectively. If the weighting factors have the same sign and are nonzero for all points, the condition can be simplified as

Equation (29) involves only the coefficients and the abscissas of the Gaussian integration points, which means that the coefficients can be solved by using the known Gaussian integration points.

Therefore, for a $p$th-order polynomial expansion of a one-dimensional random variable, the collocation points are simply the roots of the next higher-order one-dimensional orthogonal polynomial. For instance, Hermite polynomials are the optimal polynomials for the normal distribution, which has its highest probability density at the origin; therefore, collocation points should be arranged as close and symmetric to the origin as possible. In addition, an odd-order PCE requires the roots of an even-order Hermite polynomial as collocation points, whereas even-order Hermite polynomials have no zero root; therefore, the zero root is added as a collocation point for odd-order PCE [31]. The roots of the first 4 orders of Hermite polynomials are shown in Table 4.

For $n$-dimensional random variables, the collocation points are the combinations of the roots of the next higher-order orthogonal polynomial in each dimension. The number of all available collocation points is then
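The following Python sketch illustrates this construction for standard normal inputs: the one-dimensional candidate abscissas are the roots of the next higher-order (probabilists') Hermite polynomial, augmented with the origin for odd expansion orders, and the multidimensional candidates are all tensor-product combinations. It is a minimal sketch of the standard construction, not the paper's exact implementation:

```python
import numpy as np
from numpy.polynomial import hermite_e
from itertools import product

def hermite_candidate_points(p):
    """1-D candidate collocation points for a p-th order PCE of a standard normal input:
    the roots of He_{p+1}, plus the origin when He_{p+1} has no zero root (odd p)."""
    c = np.zeros(p + 2)
    c[p + 1] = 1.0                                # coefficient vector of He_{p+1}
    roots = hermite_e.hermeroots(c)
    if not np.any(np.isclose(roots, 0.0)):        # even-degree polynomial: add the origin
        roots = np.append(roots, 0.0)
    return np.sort(roots)

def candidate_grid(n_vars, p):
    """All tensor-product combinations of the 1-D points, sorted by distance from the
    origin, i.e., by decreasing probability density of the standard normal input."""
    pts = hermite_candidate_points(p)
    grid = np.array(list(product(pts, repeat=n_vars)))
    return grid[np.argsort(np.linalg.norm(grid, axis=1))]

print(hermite_candidate_points(2))                # roots of He_3: 0 and +/- sqrt(3)
print(candidate_grid(2, 2).shape)                 # 9 candidate points for n = 2, p = 2
```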

4.2. Improvement in PCM

Input samples of PCM are selected according to certain rules. Generally speaking, the number of sample points should be greater than the number of undetermined coefficients. Isukapalli [25, 26] suggested that the number of collocation points should be twice the number of undetermined coefficients and then used the regression method to calculate the coefficients. Unfortunately, Jiang et al. [27] found that sometimes increasing the number of collocation points to more than 8 times the number of undetermined coefficients still cannot maintain the calculation accuracy. The reason is that the polynomial coefficient matrix contains many linearly dependent collocation points, and these points do not help to improve the calculation accuracy. Rejecting these linearly dependent points improves the computational efficiency; therefore, based on the principle of linear independence, an improved probabilistic collocation method (IPCM) is proposed to reject these linearly dependent points and ensure the linear independence of the coefficient matrix, that is, to make the matrix a full-row-rank matrix. Figure 3 is the flow chart of this method and the steps are as follows.

Step 1. The expansion order, the distribution type, and the number of variables are determined.

Step 2. The collocation points are arranged in order of decreasing probability density (i.e., in ascending order of their distance from the origin). For instance, the point closest to the origin should be selected first.

Step 3. Then the sorted collocation points are selected one by one to establish the polynomial coefficient matrix row by row.

Step 4. For the candidate (i+1)th collocation point, the (i+1)th row of the matrix should be linearly independent of the previous i rows. Therefore, the rank of the matrix is calculated and compared with its number of rows. If they are equal, this candidate is preserved; otherwise, it is abandoned and the point with the next highest probability density is tested.

Step 5. The process ends when the number of rows equals the number of undetermined coefficients.

When the number of random variables and the degree of PCE are both high, the cost of searching for linearly independent points by following the above steps can also be high, because the polynomial coefficient matrix has to be composed and its rank calculated many times. Fortunately, if the same polynomial chaos basis is adopted, then for a given number of random variables and PCE order, the collocation points for computing different outputs are the same, which means that the linearly independent collocation points can be searched for and saved in advance and reused whenever needed, saving the time of repeatedly searching for the same collocation points. A minimal code sketch of the selection procedure in Steps 1–5 is given below.
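The sketch below follows the rank-based selection idea just described; it reuses the candidate-grid and basis conventions assumed in the earlier sketches, and the paper's own implementation details may differ. Each candidate row is kept only if it increases the rank of the coefficient matrix, and the search stops once the number of rows equals the number of undetermined coefficients:

```python
import numpy as np

def select_collocation_points(candidates, basis_row, n_coeffs):
    """Greedy IPCM-style selection of collocation points.

    candidates : candidate points, already sorted by decreasing probability density
    basis_row  : function mapping one point to its row of basis-polynomial values
    n_coeffs   : number of undetermined PCE coefficients (target number of rows)
    """
    rows, points = [], []
    for x in candidates:
        trial = np.vstack(rows + [basis_row(x)])
        if np.linalg.matrix_rank(trial) == len(rows) + 1:    # new row is linearly independent
            rows.append(basis_row(x))
            points.append(x)
            if len(rows) == n_coeffs:                        # square, full-rank system reached
                break
    return np.array(points), np.vstack(rows)

# Usage idea (names from the earlier sketches; row_fn and outputs_at are placeholders):
# points, A = select_collocation_points(candidate_grid(n_vars, p), row_fn, P)
# coeffs = np.linalg.solve(A, outputs_at(points))
```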

4.3. Numerical Examples

To demonstrate the uncertainty quantification capability and illustrate the application of IPCM, the nonlinear, nonmonotonic Ishigami function is considered, with IPCM and PCM used to select collocation points, respectively:

where the input variables are uniformly distributed and the two constants of the function take fixed values in this case. Due to its nonlinearity, its true order cannot be determined; thus 4 cases are considered, namely, expansion orders of 3, 5, 7, and 9, with corresponding numbers of expansion terms of 20, 56, 120, and 220, respectively. Taking the result of the MC method (1000 sample points) as the exact value, PCM (with 125, 343, 729, and 1331 collocation points, respectively) and IPCM are used to select sample points and to calculate the mean and the standard deviation of the output. Their variation with the expansion order is presented in Figure 4.
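For reference, the Ishigami function in its commonly used form, together with a plain MC estimate of the mean and standard deviation (1000 samples, as in the text), is sketched below. The constants a = 7 and b = 0.1 and the input range [-π, π] are the values commonly used in the literature and are assumptions of this sketch:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami function in its common form (a and b are assumed standard values)."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(1000, 3))    # 1000 MC samples, as in the text
Y = ishigami(X)
print(Y.mean(), Y.std(ddof=1))                    # reference mean and standard deviation
```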

From Figure 4, it can be seen that the errors of the mean and the standard deviation calculated by PCM and IPCM are large when the expansion order is low. When the order rises, the mean and the standard deviation tend to converge to the result of the MC method. Specifically, once the expansion order reaches 5, the mean and the standard deviation calculated by these 2 methods are within a 1% error band of the MC result. Increasing the order to 7 improves the accuracy of the mean and the standard deviation only slightly compared with order 5; however, the number of collocation points is more than 2 times that of order 5. Therefore, PCE has a different optimal expansion order for each problem; if the expansion order is increased blindly, a slightly more accurate result may be obtained, but at a much higher computational cost.

Meanwhile, under the same expansion order, although the accuracy of PCM is slightly higher than that of IPCM, the number of collocation points required by IPCM is much smaller than that of PCM. As the order increases, for example at order 9, the number of collocation points required by PCM (1331) is about 6 times that of IPCM (220). In conclusion, IPCM has an obvious advantage in the calculation; that is, it can guarantee the calculation accuracy with far fewer collocation points than PCM and the MC method.

As can be seen, the number of collocation points selected by PCM is greater than the number of undetermined coefficients, whereas IPCM can maintain the accuracy by selecting only a subset of the collocation points generated by PCM. In order to further explain the relationship between the number of collocation points and the calculation accuracy, subsets of the collocation points generated by PCM, namely, 2 times, 3 times, and 4 times the number of undetermined coefficients, are selected to calculate the mean and the standard deviation by the regression method.

Tables 5 and 6 give the comparison of the mean and the standard deviation for the different numbers of collocation points. For the lower expansion orders, increasing the number of collocation points does not improve the calculation accuracy; for d = 7, the calculation accuracy is improved only when the number of collocation points increases to 3P.

In order to explain the above relationship between the calculation accuracy and the number of collocation points, Figure 5 gives the relationship between the rank of the coefficient matrix and the number of collocation points. It can be observed that even when the number of collocation points is greater than the number of undetermined coefficients, the polynomial coefficient matrix may not reach full rank; in such cases the errors of the results in Tables 5 and 6 are large. As shown in Figure 5, when d = 3, the polynomial coefficient matrix reaches full rank once the number of collocation points exceeds 30. The rank of the coefficient matrix then remains unchanged even if the number of collocation points increases, which means that adding more collocation points no longer improves the calculation accuracy. When N = 4P = 80, the relative errors of the mean and the standard deviation are 22.9% and 19.9%, respectively, while the relative errors when N = 30 are 19.3% and 18.3%. In these 2 cases, the polynomial coefficient matrices have both already reached full rank, so the accuracy does not improve significantly, even though the number of collocation points of the former is more than 2 times that of the latter. Similar conclusions can be drawn for d = 5.

However, when d = 7, the polynomial coefficient matrix reaches full rank only when the number of collocation points is greater than 247, which explains the improvement in the calculation accuracy at N = 3P = 360. When N = 2P, the number of collocation points is less than 247 and the polynomial coefficient matrix does not reach full rank, resulting in large errors or even no solution. Besides, the rank of the coefficient matrix does not increase monotonically with the number of collocation points. Taking the d = 7 case as an example, from the 172nd to the 235th collocation point, the rank of the polynomial coefficient matrix stays at 108, which indicates that these collocation points are linearly dependent; only one of them needs to be selected while the others can be eliminated.

When using the regression method, the minimum number of collocation points that guarantees the full rank of the coefficient matrix is 30 (d = 3), 84 (d = 5), 247 (d = 7), and 404 (d = 9). Using IPCM, however, the number of collocation points that guarantees the full rank of the coefficient matrix is 20 (d = 3), 56 (d = 5), 120 (d = 7), and 220 (d = 9). It can be seen that IPCM can greatly reduce the number of collocation points required for the calculation, thereby lowering the computational cost. The accuracy of the former is slightly higher than that of the latter because the latter uses only a subset of the former's collocation points; nevertheless, compared with the exact solution, the result of IPCM fully meets the accuracy requirement, and the computational efficiency is improved.

In summary, when selecting a subset of the collocation points generated by PCM to solve the undetermined coefficients of the polynomial, the selected collocation points should satisfy the condition that the rank of the coefficient matrix equals the number of undetermined coefficients. For high-dimensional, high-order problems and complex uncertainty analyses and simulations, a large amount of calculation is required; in these cases, IPCM has great advantages, because IPCM, based on the linear independence principle, needs only as many collocation points as there are undetermined coefficients, an obvious reduction compared with PCM.

5. Preliminary Design of the Bulk Carrier

5.1. Description of Optimization Design

The proposed method is applied to ship UDO. The bulk carrier optimization model proposed by Hannapel and Vlahopoulos is considered; details of this model can be found in [32]. Deterministic optimization (DO) and uncertainty-based optimization are both applied to this model.

5.2. Specific Process of Bulk Carrier Uncertainty-Based Design Optimization

The specific process is shown in Figure 6. The steps of this process are as follows.

Step 1. Preparation work: determine the optimal PCE order of the original model; perform sensitivity analysis on the uncertain variables and rank them according to their degree of influence on the objective and the constraints; fix the uncertain variables with little influence at their mean values to reduce the dimension of the model; and redetermine the optimal PCE order for the simplified model.

Step 2. Design variables and their ranges are determined.

Step 3. A set of design variables, in other words, a bulk carrier scheme, is selected and transferred to the inner layer, the uncertainty analysis module.

Step 4. According to the expansion order, the distribution type, and the number of uncertain variables, which are determined in advance, IPCM is used to select collocation points for uncertain variables.

Step 5. PCE is used to perform the expansion of the objective and the constraint.

Step 6. According to the design requirement, select one of 3 types of UDO. Then the mean and the standard deviation of the objective and the failure probability of the constraint are obtained and transferred to the outer layer, optimization module.

Step 7. The optimizer judges whether the optimization process has reached the specified number of iterations. If so, the optimization ends; if not, Steps 3–6 are repeated.

5.3. Optimization Analyses
5.3.1. Deterministic Optimization

In order to provide a baseline for comparison with the later uncertainty-based optimization results, DO is performed first. DO uses a multi-island genetic algorithm with a subpopulation size of 10, 10 islands, and 50 generations. Design variables, constraints, and objective functions are presented in Table 7, and the comparison between the results of DO and the initial design [32] is listed in Table 12.

5.3.2. Uncertainty-Based Optimization

Selection of Uncertain Variables. In real operation, due to unavoidable disturbances such as wind and waves, the ship resistance in waves increases and ship motions become more violent; therefore, the service/operational speed decreases compared with the calm-water condition. Consequently, the speed of a sailing ship is not a fixed value but an uncertain variable that perturbs around the design speed. Monitoring of ship speed over a period of time [6, 13] shows that speed perturbation is universally present. Although the perturbation magnitudes are relatively small in most cases, their accumulation and coupling effects with other parameters can cause large deviations of the system response in the optimization design process. Therefore, the ship speed is selected as an uncertain variable, as in

where $V$ is the actual speed, $V_d$ is the design speed, and $\Delta V$ is the speed perturbation, which follows a normal distribution with a mean of 0 and a given standard deviation [6].

Then the uncertainty in parameters is introduced. The steel weight of the bulk carrier is given by

Equation (33) is obtained by statistical regression on real ship data. Considering the limited number of samples and the uncontrollable error of the regression method, the regression exponents cannot accurately reflect the actual engineering situation; therefore, uncertainty is applied to the exponents [32], which are assumed to be random parameters, namely, a normal distribution with mean 1.7 and standard deviation 0.17, a normal distribution with mean 0.7 and standard deviation 0.07, a normal distribution with mean 0.4 and standard deviation 0.04, and a normal distribution with mean 0.5 and standard deviation 0.05, as presented in
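As an illustration of how such exponent uncertainty propagates, the sketch below samples the four exponents from the normal distributions just listed and evaluates a power-law steel-weight regression of the usual form $W_s = k\,L^{e_1} B^{e_2} D^{e_3} C_B^{e_4}$. The regression constant $k$, the illustrative hull parameter values, and the assignment of the exponents to the particular hull parameters are assumptions of this sketch, not taken from (33):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# Exponents as normal random parameters (means / standard deviations as listed above)
e1 = rng.normal(1.7, 0.17, N)
e2 = rng.normal(0.7, 0.07, N)
e3 = rng.normal(0.4, 0.04, N)
e4 = rng.normal(0.5, 0.05, N)

# Hypothetical hull parameters and regression constant (illustrative values only)
L, B, D, CB, k = 280.0, 45.0, 25.0, 0.80, 0.034

Ws = k * L**e1 * B**e2 * D**e3 * CB**e4           # sampled steel weight
print(Ws.mean(), Ws.std(ddof=1))                  # its mean and standard deviation
```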

Determination of the Optimal PCE Order. Considering the influence of the uncertainty on the objective and the constraints for the DO optimal case, the objective and the constraints are expanded using PCE with expansion orders of 2, 3, and 4, for which the corresponding numbers of expansion terms are 21, 56, and 126, respectively. IPCM is used to select the collocation points. For the optimization objective and the constraints, the values of the undetermined coefficients of the 4th-order expansion are shown in Figure 7, where the abscissa is the serial number of the coefficient and the ordinate is the coefficient value.

It can be seen from Figure 7 that, for one of the responses, the undetermined coefficient values after number 72 are almost all 0, so the 4th-order expansion basically meets the accuracy requirement. For another, the coefficient values of the different orders are nearly the same, so the 2nd-order expansion is already optimal for it. For a third, the coefficient values after number 56 (the start of the 4th-order terms) are almost all 0, which means the 4th-order terms make no contribution, so the 3rd-order expansion is selected. For the last one, the coefficient values are close to 0 from number 107 onwards, so the 4th-order expansion basically meets the accuracy requirement. In summary, the optimal PCE order is taken as the 4th order.

Sensitivity Analysis. Then, for the objective and the constraints, sensitivity analysis is performed using the PCE method and the traditional Sobol' method. The sensitivity indices, namely, the first-order and total indices of the objective and the constraints with respect to the 5 uncertain variables (the speed and the 4 regression exponents), are shown in Table 8 and Figure 8.

According to the first-order and total indices, the 5 uncertain variables are ranked. For the objective, a small subset of the variables plays the dominant role, while the influence of the remaining variables can be neglected. In contrast, for the constraint, one variable appears to be the most important in the decomposition of the variance of the response, while the effects of the other 4 variables need not be considered, which is consistent with the formula of the constraint. In conclusion, whether judged from the first-order or the total indices, three of the variables have the main effects on the objective and the constraint. Therefore, the other two are fixed at their mean values, while the three influential variables retain their original distributions. This simplified model is used in place of the original model to perform the optimization.

For this simplified model, the process of determining the optimal PCE order is repeated, and the 3rd-order expansion is found to be optimal. The mean and the standard deviation from the 3rd-order expansion are compared with the results of the MC method in Table 9.

From the results in Table 9, taking the MC result as the exact value, the errors of the mean and the standard deviation obtained by PCE are within 2% and 3.5%, respectively. Meanwhile, the required sample points of PCE (20) are far fewer than those of the MC method (100). Therefore, the advantage of PCE in computational effort is obvious: the required sample points are reduced greatly while the calculation accuracy is not significantly degraded.

Then, based on the first 4 statistical moments of the constraints, the maximum entropy method (MEM) is adopted to obtain the PDFs of the constraints, from which the failure probabilities are solved. The results, compared with the MC method, are listed in Table 10.
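In the maximum entropy method, the PDF consistent with the first four moments while maximizing the entropy takes an exponential-polynomial form. A standard statement of the problem (not the paper's specific implementation) is:

```latex
\max_{p(\cdot)} \; -\!\int p(g)\,\ln p(g)\,\mathrm{d}g
\quad \text{s.t.} \quad
\int g^{k}\, p(g)\,\mathrm{d}g = m_k, \;\; k = 0, 1, \dots, 4
\;\;\Longrightarrow\;\;
p(g) = \exp\Bigl(-\sum_{k=0}^{4} \lambda_k\, g^{k}\Bigr),
```

where $m_k$ are the computed moments ($m_0 = 1$) and the Lagrange multipliers $\lambda_k$ are determined numerically; the failure probability then follows by integrating $p(g)$ over the failure region (e.g., $g > 0$ under the sign convention of Section 2).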

As shown in Table 10, when the influence of uncertainties is taken into account, the deterministic optimal solution violates the constraint, resulting in failure of the solution; thus, it is necessary to consider uncertain factors in ship design. Meanwhile, compared with the MC result, the error of the failure probability obtained by PCE is small; therefore, the proposed method can replace the MC method for solving the failure probability.

Parameter Setting of UDO. All 3 types of UDO use the NSGA-II algorithm, with a population size of 40 and 40 generations. Design variables, constraints, and objective functions are presented in Table 11. The models before and after dimension reduction, namely, the 5-dimensional 4th-order PCE and the 3-dimensional 3rd-order PCE, are both adopted for UDO and validated by the MC method. The results of these 2 methods are presented in Table 12, and the iteration histories of the objective are shown in Figures 9, 10, and 11. Due to the randomness of NSGA-II, repeated runs of the optimization algorithm on the same problem give somewhat different results; in this paper, the average of 5 independent optimization runs is used as the optimal value.

5.3.3. Analyses of Optimization Results

Some conclusions can be drawn.

(1) As can be seen from Figures 9, 10, and 11, for RDO, the 5D PCE model starts to converge around the 900th iteration while the 3D PCE model begins to converge around the 400th iteration, which indicates that the model after dimension reduction converges faster than the original model. For RBDO and RBRDO, due to the complexity of solving the failure probability, the convergence rate of the reduced model is nearly the same as that of the original model. However, in terms of the total time for completing the whole optimization process, because the collocation points of the reduced model are decreased from 126 to 21, the time for completing 1600 iterations is much less than that of the original models, reflecting the remarkable effect of dimension reduction. Besides, the difference in objective values before and after dimension reduction is small; concretely, the errors of the 3 cases are 1.2%, 2.7%, and 1.8%, respectively. The main reason is that the sensitivity indices of the two fixed variables are relatively small, so they can be regarded as uninfluential; therefore, fixing them has an extremely limited effect on the optimal solution and the quality of the optimal solution is not greatly affected. In conclusion, the model after dimension reduction can replace the original model for UDO.

(2) Table 12 also gives the optimal solutions after dimension reduction (3D) of RDO, RBDO, and RBRDO, solved by PCE and the MC method. The optimal cases obtained by these 2 methods are almost the same, which validates the feasibility of PCE again. Moreover, the advantage of the PCE method can be seen from Figures 9, 10, and 11: because PCE can maintain the accuracy with fewer sample points, for RDO the PCE objective decreases greatly in the early stage of the iteration and tends to converge at the 400th iteration; PCE completes the entire 1600 iterations in 400 min while the MC method takes 1800 min. Similarly, for RBDO, PCE completes the 1600 iterations in 1800 min versus 2600 min for MC. For RBRDO, the difference is even more obvious: 3600 min for PCE versus 5600 min for MC. The time of PCE is significantly lower than that of the MC method for all 3 types of UDO, showing obvious advantages in engineering applications.

(3) Taking the PCE results as an example, compared with the initial design, the objective of the 3 uncertainty-based optimal cases is reduced by 14.5% (RDO), 13.6% (RBDO), and 13.2% (RBRDO), slightly less than the 14.9% reduction of DO. However, for the 3 uncertainty-based optimal cases, the influence of the uncertainty is taken into account during the optimization process, so the standard deviation is greatly reduced compared with that of DO. More specifically, the RDO optimal case has a lower mean and standard deviation than the RBDO case, which means the perturbation of the objective caused by the uncertainty is much smaller, leading to a more robust design. The RBDO case has failure probabilities of all constraints equal to 0, lower than those of RDO, leading to a design with higher reliability. RBRDO combines the advantages of RDO and RBDO: the mean and the standard deviation are reduced and the failure probability of the constraints is less than 0.5%, which shows that the RBRDO optimal case can meet the reliability demand and reduce the perturbation caused by the uncertainty at the same time, thus ensuring both the reliability and the robustness of the design.

6. Conclusions

This paper proposed a sensitivity analysis method based on polynomial chaos expansions and its application to ship UDO. Combined with a multiobjective genetic algorithm, an efficient UDO system is constructed and applied to ship design, and some conclusions can be drawn:

(1) PCE-based Sobol' indices are used to perform SA of the model, and the uncertain variables are ranked based on the indices to achieve dimension reduction. Compared with the MC method, the proposed method has proven efficient and accurate for SA. Specifically, the computation of Sobol' indices from a proper expansion is precise enough, while the computational cost is transferred to obtaining the PCE coefficients, the subsequent postprocessing being almost costless.

(2) When PCE is used, IPCM can give the optimal number of collocation points by comparing the rank of the coefficient matrix with the number of collocation points, which greatly reduces the number of collocation points and overcomes the large computational complexity of traditional statistical methods.

(3) In the bulk carrier uncertainty-based design optimization, the perturbations of the objective and the constraints under the influence of multiple random variables are considered. Analysis of the optimization results shows, first, that it is necessary to take the influence of uncertainties into account, since the deterministic optimal solution violates the constraint, resulting in failure of the solution. Meanwhile, in the optimization process, while maintaining accuracy, the time required by the models after dimension reduction is significantly lower than that of the original models, and the time of PCE is significantly lower than that of the commonly used MC method, which shows clear advantages in engineering applications.

However, this paper models all uncertain variables using only probabilistic methods and only discusses the application of the proposed method to ship preliminary design optimization. Future work will use both probabilistic and nonprobabilistic methods to model uncertain variables according to their different characteristics, and will not be limited to ship conceptual design but will also be carried out to guide hull form design.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China [Grant nos. 51720105011, 51709213, 51609187, and 51479150] and Fundamental Research Funds for the Central Universities [WUT:2018IVB068].