Abstract

Parameter sensitivity analyses have been widely applied to industrial problems for evaluating parameter significance, effects on responses, uncertainty influence, and so forth. In the interest of simple implementation and computational efficiency, this study develops two sensitivity analysis methods corresponding to the situations with and without sufficient probability information. The probabilistic method is established with the aid of the stochastic response surface, and the mathematical derivation proves that the coefficients of the first-order terms embody the parameter main effects on the response. A nonprobabilistic method based on interval analysis is also put forward for the circumstance in which the parameter probability distributions are unknown. The two methods have been verified against a numerical beam example, with their accuracy compared to that of a traditional variance-based method. The analysis results demonstrate the reliability and accuracy of the developed methods, and their suitability for different situations is also discussed.

1. Introduction

Numerical modeling plays a critical role in industrial simulations for the purpose of cost saving in product R&D. In scientific research, mathematical models are also frequently used to approximate complex engineering, social, and economic phenomena [1, 2]. For simulation models having numerous uncertain parameters, involving all of the parameters indiscriminately often produces merely plausible results at the cost of considerable computational expense [3]. For this reason, a parameter sensitivity analysis (SA) procedure is often essential before the formal analyses in order to determine (a) the parameters whose uncertainties contribute the most to output (i.e., response) variability; (b) insignificant parameters that can be eliminated from the analysis model; and (c) interaction effects among different parameters [4]. The main goals of an SA concern parameter effect evaluation and model simplification. To be more specific, SA results provide useful information for identifying parameters whose uncertainties strongly influence the variance of the response. On the other hand, insignificant parameters are discarded or fixed at their deterministic values in order to reduce the model complexity. Meanwhile, interaction effects can also be evaluated when strong correlations exist among parameters.

Presently there are many SA methods that place particular emphasis on certain aspects of model sensitivities [1, 2, 5]. Mathematical or statistical techniques are popularly employed for such analyses, and different methods may provide different judgments. According to [6], SA methods can be divided into three groups: (a) those which evaluate an individual parameter at a time, (b) those which utilize random sampling methods, and (c) those which require a partitioning of a particular parameter vector. Alternatively, Saltelli et al. [2] give more specific categories represented by scatterplots, derivatives, and regression coefficients, as well as variance-based methods. Different methods correspond to different linearity assumptions about the model. Scatterplots may provide visual insight into parameter-response relationships; however, such judgments are subjective and cannot be made automatically with comparable indices [7]. Therefore, mathematical or statistical treatments should be involved for practicability. The fundamental treatment relies on partial differentiation if the parameter-response relationship can be formulated. For linear models, sigma-normalized derivatives [8] are computationally efficient, but they are only informative at the design points and do not provide global sensitivity information. Regression methods [9] search the entire parameter space, and the standardized regression coefficients can be regarded as sensitivity indices. Their SA results are similar to those of derivative-based methods but are more robust and reliable when nonlinear models are detected. Unfortunately, such methods require numerous samples for the regression if the SA precision is to be guaranteed, which may result in prohibitive computation time. More recently, variance-based approaches have revealed their value in providing a better comprehension of a model's sensitivity patterns [10, 11], especially for a model of unknown linearity, monotonicity, and additivity. These approaches are based on the analysis of conditional variances, and high-order interaction effects can be studied. However, the implementation of variance-based methods is often complex and requires a high computational cost. For complex problems, traditional SA methods therefore often struggle with analysis complexity and computational expense, and metamodels such as the Kriging model can be used to alleviate the computational burden [12, 13]. With metamodels, uncertainties can be efficiently propagated between parameters and responses, and the metamodels can be constructed through regression of samples generated by Monte Carlo sampling or other algorithms, which is another advantage over "classical" variance-based methods requiring ad hoc sampling [2]. Unfortunately, total effect estimation is a weak element of metamodelling, since the precision of a metamodel is often not adequate for the estimation of high-order interaction terms [2]. Hence, suitable metamodeling techniques should be developed for such analyses. The aforementioned methods are mostly probabilistic, assuming that parameter uncertainties follow certain probability distributions such as the normal distribution. In the real world, however, it is often difficult to acquire sufficient probability information for the parameters, and nonprobabilistic SA methods then come into play.
SA procedures incorporating interval analysis (IA) [14] have received attention recently; they do not focus on the local first- or second-order behavior of the response [15]. Interval SA methods explore the full range of parameter uncertainties, and the interval sensitivities provide a measure of the individual effect of each interval parameter. Such methods require an approximately linear (monotonic) parameter-response relationship within the intervals [16, 17], and the sensitivity calculation is relatively easy to implement.

From the above discussion one can conclude that a qualified SA method should provide global sensitivity information with respect to the full range of parameter uncertainties. Meanwhile, for practical application the sensitivity calculation should be efficient, without the need for extensive sampling or heavy computation. With these considerations in mind, this study develops two SA methods belonging to the probabilistic and nonprobabilistic strategies, respectively. The probabilistic method relies on the stochastic response surface (SRS) theory [18], and the sensitivity indices are obtained from the derivation of the SRS expressions. The nonprobabilistic method employs the IA theory and is suitable for uncertain interval parameters. The feasibility and accuracy of the two methods have been verified against a numerical example and compared with the traditional analysis of variance (ANOVA) based method [19, 20]. Lastly, it is noted that most SA applications to date concern problems such as risk assessment, environmental evaluation, and economic prediction, whereas this study applies SA to the civil engineering field, where large-scale structures with numerous components and complicated material properties frequently appear and for which systematic research is still lacking in the literature.

2. Probabilistic Sensitivity Analysis

As aforementioned, probabilistic SA methods usually demand numerous samples for adequate precision. Even with the assistance of metamodels, sometimes a high computational cost still cannot be avoided. Hence this study attempts to embrace parameter uncertainties in a so-called stochastic response surface modeled through regression of a limited number of samples (also called "collocation points" (CPs)) [18]. The SRS theory is extended from the deterministic response surface methodology in order to correlate the uncertain parameters x with the response y. An SRS is expressed as a polynomial chaos expansion in terms of Hermite polynomials [18]:

y = a_0 + \sum_{i_1=1}^{n} a_{i_1}\Gamma_1(\xi_{i_1}) + \sum_{i_1=1}^{n}\sum_{i_2=1}^{i_1} a_{i_1 i_2}\Gamma_2(\xi_{i_1},\xi_{i_2}) + \sum_{i_1=1}^{n}\sum_{i_2=1}^{i_1}\sum_{i_3=1}^{i_2} a_{i_1 i_2 i_3}\Gamma_3(\xi_{i_1},\xi_{i_2},\xi_{i_3}) + \cdots ,   (1)

where \xi = (\xi_1, \xi_2, \ldots, \xi_n) denotes a set of standard random variables following the normal distribution N(0, 1), obtained by transformation of the original parameter vector x, a_{i_1 \cdots i_p} are the deterministic coefficients to be estimated by regression, and \Gamma_p(\xi_{i_1}, \ldots, \xi_{i_p}) denotes the multidimensional p-degree Hermite polynomial:

\Gamma_p(\xi_{i_1}, \ldots, \xi_{i_p}) = (-1)^p \, e^{\frac{1}{2}\xi^{T}\xi}\, \frac{\partial^{p}}{\partial \xi_{i_1} \cdots \partial \xi_{i_p}}\, e^{-\frac{1}{2}\xi^{T}\xi} .

Transforming x to \xi aims to maintain the orthogonality of the Hermite polynomials, namely E[\Gamma_i \Gamma_j] = 0 for i \neq j, which is important for the equal evaluation of different parameters. Then, with the predefined form of an SRS, the CPs can be generated using the probabilistic collocation method, and the coefficients a are estimated by regression on all the CPs. After that, the explicit polynomial expression of the SRS is determined.
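For illustration, the sketch below fits such a second-order Hermite expansion by regression for a hypothetical three-parameter model. The model function, the parameter means and standard deviations, and the use of plain standard-normal samples in place of the probabilistic collocation roots are all illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np
from itertools import combinations

# Toy "model": response as a function of the physical parameters x
# (a stand-in for the FE solver; in the paper the response is a frequency).
def model(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1] + x[2]

mu = np.array([2.0, 1.0, 0.5])      # assumed parameter means
sigma = np.array([0.2, 0.1, 0.05])  # assumed parameter standard deviations

def hermite_basis(xi):
    """Second-order Hermite PC basis: 1, xi_i, (xi_i^2 - 1), xi_i * xi_j."""
    terms = [1.0]
    terms += list(xi)                                   # first-order terms
    terms += [v * v - 1.0 for v in xi]                  # squared terms
    terms += [xi[i] * xi[j] for i, j in combinations(range(len(xi)), 2)]
    return np.array(terms)

# "Collocation points": here simply standard-normal samples (the probabilistic
# collocation method would instead use roots of a higher-order Hermite
# polynomial; plain sampling keeps the sketch short). 30 samples > 10 terms.
rng = np.random.default_rng(0)
xi_cp = rng.standard_normal((30, 3))
y_cp = np.array([model(mu + sigma * xi) for xi in xi_cp])

# Regression for the deterministic coefficients a.
Psi = np.vstack([hermite_basis(xi) for xi in xi_cp])
a, *_ = np.linalg.lstsq(Psi, y_cp, rcond=None)
print("first-order coefficients a_1..a_3:", a[1:4])
```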

The subsequent analysis explores the parameter sensitivity based on (1), starting from the partial derivatives of y with respect to \xi instead of x. The benefit of doing so lies in the fact that different types of parameters may have distinct dimensions, so operating on the standard random variables guarantees the equal evaluation of all the parameters. On the other hand, partial differentiation of the parameter-response equations is conceptually straightforward but difficult to carry out for complicated problems, where complex numerical procedures are often necessary [5]. The employment of the SRS can at least partially avoid such difficulty. Furthermore, the full range of parameter uncertainties can be explored for a global SA purpose. For example, the popularly used second-order SRS is expressed as

y = a_0 + \sum_{i=1}^{n} a_i \xi_i + \sum_{i=1}^{n} a_{ii}\,(\xi_i^2 - 1) + \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} a_{ij}\,\xi_i \xi_j .

The expectation of the partial derivative of y with respect to \xi_i is

E\!\left[\frac{\partial y}{\partial \xi_i}\right] = E\!\left[a_i + 2 a_{ii}\xi_i + \sum_{j \neq i} a_{ij}\xi_j\right] = a_i ,

since E[\xi_i] = 0 for all i.

For the response vector y = (y_1, y_2, \ldots, y_m), the derivative matrix can be written as

S = \frac{\partial y}{\partial \xi} = \left[\frac{\partial y_k}{\partial \xi_i}\right]_{m \times n} ,

with its expectation written as

E[S] = \left[a_{ki}\right]_{m \times n} ,

where the a_{ki} are the coefficients of the first-order terms in (1), which therefore embody the parameter main effects on the response. This observation can be easily demonstrated using a second-order SRS having three uncertain parameters \xi_1, \xi_2, \xi_3 (n = 3):

y = a_0 + a_1\xi_1 + a_2\xi_2 + a_3\xi_3 + a_{11}(\xi_1^2 - 1) + a_{22}(\xi_2^2 - 1) + a_{33}(\xi_3^2 - 1) + a_{12}\xi_1\xi_2 + a_{13}\xi_1\xi_3 + a_{23}\xi_2\xi_3 .

It is easy to obtain

\frac{\partial y}{\partial \xi_1} = a_1 + 2a_{11}\xi_1 + a_{12}\xi_2 + a_{13}\xi_3, \qquad
\frac{\partial y}{\partial \xi_2} = a_2 + 2a_{22}\xi_2 + a_{12}\xi_1 + a_{23}\xi_3, \qquad
\frac{\partial y}{\partial \xi_3} = a_3 + 2a_{33}\xi_3 + a_{13}\xi_1 + a_{23}\xi_2 ,

and hence E[\partial y / \partial \xi_i] = a_i for i = 1, 2, 3.

Meanwhile, the partial derivatives of y with respect to the original parameters x_i are

\frac{\partial y}{\partial x_i} = \frac{\partial y}{\partial \xi_i}\frac{\partial \xi_i}{\partial x_i} = \frac{1}{\sigma_i}\frac{\partial y}{\partial \xi_i}, \qquad x_i = \mu_i + \sigma_i \xi_i ,

where \sigma_i (i = 1, 2, 3) is the standard deviation of parameter x_i and \mu_i its mean. Then one has

E\!\left[\frac{\partial y}{\partial x_i}\right] = \frac{a_i}{\sigma_i} .

Since the probability distribution of x_i is known beforehand, the expectation of \partial y / \partial x_i is in fact a constant (say c_i) corresponding to the mean of x_i. Therefore, one has

a_i = c_i \sigma_i .

It can be seen that the coefficient a_i simultaneously takes into account x_i's mean (through c_i) and its standard deviation \sigma_i (uncertainty). Hence, a_i embodies the sensitivity of the response to the parameter uncertainty. Meanwhile, it should be noted that the coefficient a_i comes from a robust approximation, and its value changes little when a higher-order SRS is constructed. Thus a second-order SRS is adequate for the SA of the first-order (main) effects.
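As a quick numerical check of this derivation, the sketch below evaluates E[∂y/∂ξ_1] for an arbitrary second-order SRS by Monte Carlo simulation and confirms that it reduces to the first-order coefficient a_1, and that the physical-space derivative scales by 1/σ_1; all coefficient and σ values are made up purely for the check.

```python
import numpy as np

# Explicit second-order SRS in three standard normal variables, with
# arbitrarily chosen coefficients.
a0, a1, a2, a3 = 1.0, 0.7, -0.3, 0.2          # constant and first-order terms
a11, a22, a33 = 0.10, 0.05, -0.02             # squared terms (xi^2 - 1)
a12, a13, a23 = 0.04, -0.06, 0.03             # cross terms

rng = np.random.default_rng(1)
xi = rng.standard_normal((200_000, 3))

# Analytical derivative with respect to xi_1 and its Monte Carlo expectation:
dy_dxi1 = a1 + 2 * a11 * xi[:, 0] + a12 * xi[:, 1] + a13 * xi[:, 2]
print(dy_dxi1.mean(), "~", a1)   # E[dy/dxi_1] -> a_1

# With x_1 = mu_1 + sigma_1 * xi_1 we have dy/dx_1 = (1/sigma_1) dy/dxi_1,
# so the expected physical-space derivative c_1 satisfies a_1 = c_1 * sigma_1.
sigma1 = 0.2
c1 = dy_dxi1.mean() / sigma1
print(a1, "~", c1 * sigma1)
```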

Lastly, it is mentioned that the SRS based method assumes the parameters to follow normal (Gaussian) distributions, which has proved reasonable for most engineering problems. In the case of nonnormal distributions, transformation techniques such as the Rosenblatt transformation can be adopted to transform the random variables into uncorrelated Gaussian variables [21]. Therefore, the normal distribution assumption can be regarded as a general one for the method.
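A minimal sketch of such an isoprobabilistic transform for a single independent nonnormal parameter is given below; the lognormal distribution and its parameters are assumed purely for illustration, and the full Rosenblatt transformation would additionally handle dependence by conditioning.

```python
import numpy as np
from scipy import stats

# Assumed lognormal parameter (e.g., a stiffness that must stay positive).
x_dist = stats.lognorm(s=0.1, scale=np.exp(0.0))

# Marginal transform for an independent variable: xi = Phi^{-1}(F_X(x)) maps
# x to a standard normal variable; the inverse map is used when evaluating
# the model at collocation points expressed in xi.
x_samples = x_dist.rvs(size=5, random_state=0)
xi = stats.norm.ppf(x_dist.cdf(x_samples))
x_back = x_dist.ppf(stats.norm.cdf(xi))

print(xi)
print(np.allclose(x_samples, x_back))  # True: the transform is invertible
```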

3. Interval Sensitivity Analysis

The SRS based method is suitable for uncertain parameters following normal distributions. However, for real-world problems sufficient probability information on structural parameters is often unavailable, meaning that only the parameter uncertainty intervals are known. In such cases IA should be involved in the SA implementation, which defines a nonprobabilistic strategy seldom reported in the literature. For this reason, this study extends the interval SA method of previous research [16] to account for the situation in which extreme points exist within the parameter intervals.

Suppose an interval vector x^I = [\underline{x}, \overline{x}], where the superscript I denotes an interval number; its lower and upper bounds together with the median and interval radius can be written as

\underline{x} = x^c - x^r, \qquad \overline{x} = x^c + x^r, \qquad x^c = \frac{\underline{x} + \overline{x}}{2}, \qquad x^r = \frac{\overline{x} - \underline{x}}{2} ,

where \underline{x} and \overline{x} denote the lower and upper bounds and x^c and x^r represent the median and interval radius, respectively. The normalized variability of x_i^I can be defined as

\delta_{x_i} = \frac{x_i^r}{x_i^c} .

Then for the response vector y^I, there exists a mapping relationship between x^I and y^I [16]:

y^r = S\, x^r ,

where S can be regarded as the compositive effect matrix of x^I on y^I. The direct determination of S is impractical since different parameters may have different dimensions. Therefore, the evaluation of an individual parameter's effect on a single response is more feasible. Namely, if the effect of x_i^I on y_j^I is \Delta y_{ji}^I, which is actually a subinterval of y_j^I, then the interval sensitivity can be defined as

s_{ji} = \frac{\Delta y_{ji}^{r} / y_j^{c}}{x_i^{r} / x_i^{c}} ,   (15)

with s_{ji} \geq 0. Here s_{ji} < 1 indicates a decreasing sensitivity of y_j to the variation of x_i; s_{ji} > 1 presents an increasing sensitivity. \Delta y_{ji}^I can be determined by using the combination method, which first establishes the different combinations of the parameter bounds and then calculates the response intervals obtained by sweeping x_i^I over its own bounds; subsequently the largest of these intervals is regarded as \Delta y_{ji}^I.
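The following sketch illustrates one possible implementation of this combination (vertex) procedure and of the normalized sensitivity ratio for a toy monotonic response; the response function, the bounds, and the exact normalization of the index follow the form given above and are assumptions for illustration rather than the paper's own code.

```python
import numpy as np
from itertools import product

# Toy monotonic response (a stand-in for the FE frequency solver).
def response(x):
    return x[0] ** 0.5 * x[1] + 0.1 * x[2]

lower = np.array([0.9, 1.8, 4.0])   # interval lower bounds (assumed)
upper = np.array([1.1, 2.2, 6.0])   # interval upper bounds (assumed)
center = 0.5 * (lower + upper)      # medians  x^c
radius = 0.5 * (upper - lower)      # radii    x^r

def effect_interval(i):
    """Widest response interval produced by sweeping parameter i over its
    bounds, for all bound combinations of the remaining parameters."""
    others = [k for k in range(len(lower)) if k != i]
    widest = (0.0, 0.0)
    for combo in product(*[(lower[k], upper[k]) for k in others]):
        x = center.copy()
        x[others] = combo
        ys = [response(np.where(np.arange(len(x)) == i, b, x))
              for b in (lower[i], upper[i])]
        if max(ys) - min(ys) > widest[1] - widest[0]:
            widest = (min(ys), max(ys))
    return widest

for i in range(3):
    lo, hi = effect_interval(i)
    dy_c, dy_r = 0.5 * (lo + hi), 0.5 * (hi - lo)
    # Ratio of normalized variabilities used as the interval sensitivity index.
    s = (dy_r / dy_c) / (radius[i] / center[i])
    print(f"parameter {i}: effect interval [{lo:.3f}, {hi:.3f}], s = {s:.3f}")
```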

The above handling is comprehensible and easy to implement. However, a linear and monotonic parameter-response relationship within the interval must be satisfied; in the case of a nonlinear relationship or the existence of extreme points, (15) may fail. For this reason, this study extends the method to account for the existence of extreme points. The basic idea is to divide the original interval into several subintervals at the extreme points x_{e1} < x_{e2} < \cdots < x_{ek} of x_i^I:

x_i^I = [\underline{x}_i,\, x_{e1}] \cup [x_{e1},\, x_{e2}] \cup \cdots \cup [x_{ek},\, \overline{x}_i] .

The lower and upper bounds of the response corresponding to each subinterval are then calculated as

\underline{y}_j^{(q)} = \min_{x_i \in [x_{e(q-1)},\, x_{eq}]} f_j(x_i), \qquad \overline{y}_j^{(q)} = \max_{x_i \in [x_{e(q-1)},\, x_{eq}]} f_j(x_i), \qquad q = 1, 2, \ldots, k+1

(with x_{e0} = \underline{x}_i and x_{e(k+1)} = \overline{x}_i), where f_j denotes the mapping function between x_i and y_j. Subsequently, the maximum and minimum values of y_j can be easily obtained as

\underline{y}_j = \min_{q} \underline{y}_j^{(q)}, \qquad \overline{y}_j = \max_{q} \overline{y}_j^{(q)} .

After that, (15) can again be used to evaluate the interval sensitivity. It should be mentioned that the interval SA can in fact be regarded as a special case of the probabilistic method, in which the tails of the probability distributions are taken as the bounds (intervals).
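A compact sketch of this subdivision strategy for a single parameter is shown below; the quadratic parameter-response map and the grid-based search for extreme points are illustrative choices only (a root finder on the derivative would serve equally well).

```python
import numpy as np

# One-dimensional parameter-to-response map with an interior extreme point
# (a quadratic, so the relationship is not monotonic over the interval).
def f(x):
    return (x - 1.0) ** 2 + 0.5

x_lo, x_hi = 0.2, 1.6   # assumed parameter interval

# Locate interior extreme points from sign changes of the derivative on a
# dense grid.
xs = np.linspace(x_lo, x_hi, 2001)
dfs = np.gradient(f(xs), xs)
idx = np.where(np.sign(dfs[:-1]) * np.sign(dfs[1:]) < 0)[0]
extremes = xs[idx].tolist()

# Split the original interval at the extreme points; within each subinterval
# the map is monotonic, so its response bounds come from the endpoints.
breakpoints = [x_lo] + extremes + [x_hi]
sub_bounds = [(min(f(a), f(b)), max(f(a), f(b)))
              for a, b in zip(breakpoints[:-1], breakpoints[1:])]

y_lower = min(lo for lo, _ in sub_bounds)
y_upper = max(hi for _, hi in sub_bounds)
print(sub_bounds)           # per-subinterval response bounds
print(y_lower, y_upper)     # overall bounds: about 0.5 and f(0.2) = 1.14
```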

4. ANOVA Based Sensitivity Analysis

The reliability and accuracy of the two proposed methods should be validated. For this purpose, the factorial design (FD) [19, 20] has also been adopted for the SA. The core of the FD is the analysis of variance, and such designs have been used for parameter screening purposes [22–24]. The ANOVA examines the differences between the means of different sample groups using the statistical F-test. The variance of a particular response can be decomposed into several components attributable to different sources of parameter variation; by this means, parameters whose uncertainties significantly affect the responses can be identified. In implementation, the FD establishes a linear additive model closely related to the ANOVA, with coefficient estimates and standard errors. In the FD each parameter has only two levels, representing the lower and upper bounds of the parameter. The linear model is obtained from the regression of samples generated using the design of experiments [19, 20]; the samples come from a random population following a normal distribution. The main and interaction effects are then evaluated simultaneously. This method is traditional and well known, so further details can be found in the literature and are not repeated here.
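As an illustration of the two-level factorial logic, the sketch below builds a full 2^3 design in coded units and estimates the main effects from the usual high-minus-low contrasts; the toy response and the factor levels are assumed for demonstration, and the F-test step of a full ANOVA is omitted for brevity.

```python
import numpy as np
from itertools import product

# Toy response in three factors (a stand-in for a frequency from the FE model).
def response(x):
    return 4.0 * x[0] + 2.0 * x[1] + 0.0 * x[2] + 0.5 * x[0] * x[1]

low = np.array([0.99, 0.99, 0.99])   # lower level of each factor (assumed)
high = np.array([1.01, 1.01, 1.01])  # upper level of each factor (assumed)

# Full 2^3 factorial design in coded units (-1 / +1).
design = np.array(list(product([-1.0, 1.0], repeat=3)))
runs = np.where(design > 0, high, low)
y = np.array([response(x) for x in runs])

# Main effect of factor j = mean(y at +1) - mean(y at -1); the regression
# coefficient of the coded linear model is half of this contrast.
for j in range(3):
    effect = y[design[:, j] > 0].mean() - y[design[:, j] < 0].mean()
    print(f"factor {j}: main effect = {effect:.4f}")
```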

5. Validation

The performance of the proposed SA methods has been validated using a numerical cantilever beam with a length of 2 m (Figure 1). The beam had a uniform cross section of 0.2 m × 0.2 m (nominal values), and its nominal material properties were defined as a Young's modulus (E) of 30 GPa, a density (ρ) of 2400 kg/m³, and a Poisson's ratio (ν) of 0.2. The FE model was divided into 20 identical elements, and element 10 was assumed to have uncertain geometric and material properties. The uncertainty effects of E, ρ, ν, and the section inertia (I) on the first five frequencies were investigated, with the corresponding vibrational modes illustrated in Figure 2. It is seen that there are four flexural vibrational modes and one axial mode (the deformation shape of the axial mode is plotted together with its undeformed edge for a clear illustration).

Two scenarios were analyzed: one with an identical uncertainty level for all parameters and one with two different levels. In the first scenario each parameter had a variability of 1% of its nominal value. To give an illustration, for the probabilistic method the parameter means were set to the nominal values of E, ρ, ν, and I, and the corresponding standard deviations of the four parameters were defined as 0.01 times the means. Meanwhile, for the interval SA method the nominal values were used as the parameter medians, while the corresponding interval radii were 0.01 times the medians. In the second scenario the variability of I was increased to 2% while that of the remaining parameters was unchanged. For the sake of better comparison, the parameter effects on the frequencies are expressed as their percentage contributions to a unit variation of each frequency.

5.1. Scenario 1

In this scenario, a second-order SRS in the 4 parameters was constructed through regression of 20 CPs, and the coefficients of the first-order terms were taken to reflect the parameter main effects. Meanwhile, the interval SA was performed by assuming the lower and upper bounds of each parameter to be 0.99x₀ and 1.01x₀, where x₀ denotes the parameter's nominal value. Each individual parameter effect was then compared with the total effect to evaluate its contribution to the unit frequency variation. The SA results for the two methods are listed in Tables 1 and 2. Simultaneously, a 2⁴ FD was also carried out for comparison, with its results given in Table 3.

It is observed that the three methods present consistent SA results for this simple beam example, demonstrating the correctness of the proposed probabilistic and interval methods. For the first frequency, the main effect contributions of E and I are identical and 40% larger than that of ρ. For the second and fifth frequencies, the contributions of E, ρ, and I are almost the same. It is also found that ν shows no effect on any of the frequencies. The above observations can be explained by the theoretical expressions for the natural frequencies of a cantilever beam [25]:

f_n^{\,\mathrm{b}} = \frac{(\beta_n L)^2}{2\pi L^2}\sqrt{\frac{EI}{\rho A}}, \qquad f_n^{\,\mathrm{a}} = \frac{2n - 1}{4L}\sqrt{\frac{E}{\rho}} ,   (19)

where f_n^{\,\mathrm{b}} and f_n^{\,\mathrm{a}} denote the nth flexural and axial frequencies, respectively, L is the beam length, and \beta_n L are the characteristic roots of the cantilever (\beta_1 L = 1.875, \beta_2 L = 4.694, \ldots).

It can be seen that for the flexural frequencies E, I, and ρA (A denotes the section area) should present the same effects, while Poisson's ratio (ν) does not appear in these expressions, indicating no influence on the frequencies. The slight differences among the first, second, and fifth frequencies are caused by the position of the uncertain element 10 within the corresponding mode shapes, which affects the parameter significance. This point is corroborated by the SA results of the fourth frequency, for which the contribution of I turns out to be zero because element 10 is located at the modal node, as illustrated in Figure 2. Moreover, for this frequency the effect of ρ is around 2.5 times that of E, implying that the influence of the bending stiffness variability has been strongly weakened by the element's location at the modal node. Meanwhile, for the first axial frequency, only E and ρ show effects, because the section area, rather than the section inertia, controls the axial vibration. This observation is supported by both the theoretical expression and the analysis results in Tables 1, 2, and 3.
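As a rough cross-check of these expressions, the snippet below evaluates the closed-form Euler-Bernoulli flexural frequencies and the first axial frequency for the nominal beam properties; shear deformation and rotary inertia are neglected, so the values will differ somewhat from the FE results of the example.

```python
import numpy as np

# Nominal beam properties from the example.
E, rho, nu = 30e9, 2400.0, 0.2       # Pa, kg/m^3, dimensionless (nu unused)
b = h = 0.2                          # m
L = 2.0                              # m
A, I = b * h, b * h ** 3 / 12.0      # section area and inertia

# Euler-Bernoulli cantilever flexural frequencies (no shear deformation):
# f_n = (beta_n L)^2 / (2 pi L^2) * sqrt(E I / (rho A)).
betaL = np.array([1.8751, 4.6941, 7.8548, 10.9955])
f_flex = betaL ** 2 / (2 * np.pi * L ** 2) * np.sqrt(E * I / (rho * A))

# First axial frequency of a fixed-free bar: f = sqrt(E / rho) / (4 L).
f_axial = np.sqrt(E / rho) / (4 * L)

print("flexural (Hz):", np.round(f_flex, 1))
print("first axial (Hz):", round(f_axial, 1))
# Note that nu does not enter either expression, and E, I, and rho A carry
# the same 1/2-power weight in the flexural formula.
```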

Lastly, it should be mentioned that the interaction effects among the four parameters were not taken into account in this example since they are very small: the summed effects of the E–ρ, E–I, and ρ–I interactions were only 0.40%, 0.28%, 0.02%, 0.51%, and 0.31% for the five frequencies, respectively. Therefore, involving the interaction effects is unnecessary and would only increase the SA complexity.

5.2. Scenario 2

The first scenario evaluates the parameter effects under equal variability. In practice, however, different structural parameters may have different uncertainty levels, which means that the parameter effects should be investigated according to the true variability. Accordingly, the second scenario assumed the variability of I to be 2% while that of E and ρ remained 1% (ν was discarded here due to its insignificance). The SA results are given in Tables 4, 5, and 6, demonstrating that for the first, second, and fifth frequencies the main effect of I is nearly double the effects of E and ρ. This is rational since the three parameters have equal weights in (19). For the third and fourth frequencies, the effect of I still does not exist. For the third (axial) frequency especially, although I's variability is doubled, the effect evaluations do not change in comparison with Tables 1–3. Meanwhile, with respect to the fourth frequency, the effect of ρ is again around 2.5 times that of E.

5.3. Discussions

The SA results of this beam example demonstrate that all three methods can accurately evaluate the parameter effects on the frequencies. One may then ask why the SRS and IA based methods are needed if the ANOVA based method already performs satisfactorily. The answer has three aspects. Firstly, the FD and the SRS based method are both probabilistic methods suitable for uncertain parameters following normal distributions; when only the bounds of the uncertain parameters, rather than their probability information, can be measured, the IA method shows its superiority. Secondly, the computational and analysis expenses are a critical point. For example, with 10 uncertain parameters the FD requires at least 2^10 = 1024 samples for a complete evaluation without losing precision, whereas the SRS method needs only 132 samples (namely, CPs), which is far fewer. Thirdly, although the IA method is relatively simple to implement, it ignores the distributions within the intervals, and its interaction effect analysis has to be implemented in a more complicated form than (15). In conclusion, it is suggested that in practical applications suitable SA methods be chosen on the basis of the specific problem, and that two or even more methods be used for better judgment.
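The run counts quoted above can be reproduced directly: a two-level full factorial in n parameters needs 2^n runs, while a second-order expansion in n parameters has (n+1)(n+2)/2 coefficients, and the 132 CPs quoted for 10 parameters correspond to twice that coefficient count.

```python
from math import comb

n = 10                       # number of uncertain parameters
fd_runs = 2 ** n             # two-level full factorial design
pce_terms = comb(n + 2, 2)   # coefficients of a second-order expansion
cps = 2 * pce_terms          # collocation points at twice the term count

print(fd_runs)    # 1024
print(pce_terms)  # 66
print(cps)        # 132
```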

6. Conclusions

A probabilistic SA method and an interval SA method have been developed and verified against the traditional ANOVA based method. The developed methods address problems with and without sufficient probability information on the uncertain parameters. Through the study of a numerical beam, the uncertainty effects of different geometric and material parameters on the beam frequencies were investigated. The three methods were found to give the same effect evaluations, demonstrating the correctness of the developed methods, and the sensitivities of the different parameters can be analyzed in a rational way. Finally, the applicability of the developed methods has been discussed, and cross verification between different methods is suggested for the sake of better SA evaluations.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research is supported by the National Natural Science Foundation of China (Grant no. 51108090) and by the Natural Science Foundation of Fujian Province (Grant no. 2011J05129). The financial support from the Scientific Research Foundation for the Returned Overseas Chinese Scholars (Grant no. LXKQ201201) and the Cultivation Project of Outstanding Youth Researchers in Universities of Fujian Province (Grant no. JA12020) is also acknowledged.