Abstract

To reduce runtime while ensuring sufficient computational accuracy, this paper proposes a structural reliability assessment method based on sensitivity analysis (SA) and the support vector machine (SVM). Sensitivity analysis is first applied to assess the effect of the random variables on the values of the performance function, and the small-influence variables are excluded from the input vectors of the SVM. Then, the trained SVM is used to classify input vectors produced by sampling the residual variables according to their distributions. Finally, the reliability assessment is implemented with the aid of reliability theory. A 10-bar planar truss is used to validate the feasibility and efficiency of the proposed method, and a performance comparison is made with other existing methods. The results show that the proposed method largely reduces the runtime with little loss of accuracy; furthermore, the accuracy of the proposed method is the highest among the methods employed.

1. Introduction

In recent years, a number of structural reliability assessment methods, including the first-order reliability method (FORM) [1], the response surface method (RSM) [2], and the Monte-Carlo simulation method (MCSM) [3], have been developed and applied to practical engineering structures. Among these, FORM is usually used to directly estimate the structural failure probability when the limit state function is explicit. In contrast, RSM and MCSM are widely used when the limit state functions are complex and implicit. The main idea of RSM is to replace the original implicit limit state function with an approximate explicit expression, which is then used to assess the failure probability with the aid of FORM. However, in most cases, hypothetical explicit expressions can hardly represent the original nonlinear and complex functions precisely; thus RSM often causes an unacceptable error or even a wrong assessment result. MCSM is the most accurate method for failure probability assessment and can, in principle, solve all reliability problems. However, MCSM is time-consuming, especially when the structural failure probability is small, because a large number of samples are then required to obtain a reasonable result.
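As a hedged illustration of why MCSM becomes expensive for small failure probabilities, the following sketch estimates a failure probability by crude Monte-Carlo sampling. The function names and the toy limit state are illustrative assumptions, not the paper's truss model:

```python
import random

def mc_failure_probability(g, sample, n):
    """Crude Monte-Carlo estimate of P_f = P(g(x) <= 0)."""
    failures = sum(1 for _ in range(n) if g(sample()) <= 0)
    return failures / n

# Toy limit state g(x) = 3 - x with x ~ N(0, 1); failure is the rare event x >= 3.
random.seed(0)
pf = mc_failure_probability(lambda x: 3.0 - x,
                            lambda: random.gauss(0.0, 1.0),
                            100_000)
```

Since the standard error of the estimate scales as sqrt(P_f(1 - P_f)/n), estimating a failure probability of order 10^-3 to within a few percent already requires on the order of 10^5 to 10^6 samples, each of which may involve a full FEA run.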

To overcome the low fidelity of RSM and the low computational efficiency of MCSM, several researchers have attempted to construct the limit state function using intelligent techniques, such as artificial neural networks [4, 5] and the SVM [6, 7]. Owing to its strong small-sample learning ability and generalization capability [8, 9], the SVM has been widely used for structural reliability analysis in various fields. Hurtado and Alvarez [10] regarded reliability problems as classification problems and combined the SVM with FEA to assess structural failure probability. Hurtado [11] used statistical learning theory to prove the feasibility of the SVM for reliability problems. Jin et al. [12] combined RSM with the SVM for structural failure probability assessment, and the results showed that this method is more accurate and efficient than other conventional methods. Guo and Bai [13] introduced least squares SVM regression into reliability analysis to deal with huge computational costs and space demands.

The input vectors of the SVM model are the variables influencing the structural reliability assessment. For a large-scale civil structure, reliability is affected by a large number of variables owing to complex service environments and loading situations. If all influencing variables are taken into consideration regardless of their importance, the sample size of the input variables increases, the SVM model becomes more complicated, and the data storage demands grow, while the classification accuracy (CA) of the SVM model decreases. In fact, some input variables have only a slight effect on the reliability assessment results. Therefore, it is necessary to eliminate the small-influence variables before assessing the structural reliability. Recently, a series of SA techniques have been developed for the purpose of quantifying the importance of input variables. These techniques fall into two classes: global SA methods and local SA methods [14]. Local SA methods are usually based on differential and/or finite-difference theory and ignore the probability distributions of the variables; they are thus not suitable for analyzing the random variables in a limit state function. Global SA methods consider not only the effects of the probability distributions of individual input variables on the output, but also the contribution of the interactions among input variables to the output. In the past decades, Sobol's SA method [15-20] has drawn researchers' attention as an invaluable tool because it works well without simplifying approximations, even for functions with a large number of variables.

In order to reduce the dimension of the input samples and simplify the SVM model while ensuring computational accuracy, this paper presents a novel reliability analysis method based on SA and the SVM. The small-influence variables in the limit state function are identified by means of Sobol's SA method and excluded from the input vectors of the SVM model. The SVM model is then trained and tested with samples of the residual variables, and the reliability assessment is implemented with the aid of reliability theory. To validate the applicability and efficiency of the proposed method, the reliability assessment of a 10-bar planar truss is carried out, and comparisons with other existing methods are also made.

2. Structural Reliability Assessment Methodologies

2.1. Sobol’s Global SA Method

Sobol’s method is a variance-based global SA technique that has been applied to assess the relative importance of input variables on the output. It is able to decompose the variance of the output into terms due to individual input variables and terms due to the interactions between input variables.

Consider a square integrable function, f(x), of the vector of input variables x = (x_1, x_2, \ldots, x_n) \in I^n, where I^n is the n-dimensional unit hypercube. If the input variables are mutually independent, then there exists a decomposition of f(x):

f(x) = f_0 + \sum_{i=1}^{n} f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) + \cdots + f_{12 \cdots n}(x_1, x_2, \ldots, x_n),  (1)

where f_0 is a constant. The total number of summands in (1) is 2^n.

If the following condition is imposed for every summand f_{i_1 \cdots i_s}, where 1 \le i_1 < \cdots < i_s \le n,

\int_0^1 f_{i_1 \cdots i_s}(x_{i_1}, \ldots, x_{i_s}) \, dx_{i_k} = 0, \quad k = 1, \ldots, s,  (2)

the decomposition described in (1) is unique. Moreover, all summands are mutually orthogonal and can be obtained with the aid of multiple integrals:

f_0 = \int f(x) \, dx,
f_i(x_i) = \int f(x) \, dx_{\sim i} - f_0,  (3)
f_{ij}(x_i, x_j) = \int f(x) \, dx_{\sim ij} - f_i(x_i) - f_j(x_j) - f_0,

where x_{\sim i} denotes the set of input variables except x_i and x_{\sim ij} denotes the set of input variables except x_i and x_j. The higher-dimensional summands are found similarly, except for the last one, which is calculated using (1).

Therefore, the partial variances, D_{i_1 \cdots i_s}, representing the contribution of each summand to the total variance of the output, can be expressed as

D_{i_1 \cdots i_s} = \int f_{i_1 \cdots i_s}^2(x_{i_1}, \ldots, x_{i_s}) \, dx_{i_1} \cdots dx_{i_s},  (4)

with the total variance equal to

D = \int f^2(x) \, dx - f_0^2,  (5)

which can also be expressed as

D = \sum_{i=1}^{n} D_i + \sum_{1 \le i < j \le n} D_{ij} + \cdots + D_{12 \cdots n}.  (6)

The relative importance of the input variables is quantified by a set of indices, namely, the first-order and total sensitivity indices. The former represents the contribution of the individual variable x_i to the total variance without any interactions with other input variables,

S_i = \frac{D_i}{D},  (7)

while the latter accounts for the full contribution of x_i, including all of its interactions with the other input variables. In addition, the s-order (1 < s \le n) sensitivity index, S_{i_1 \cdots i_s}, represents the coupled contribution of the interaction among the input variables x_{i_1}, \ldots, x_{i_s} to the total variance. It is given by

S_{i_1 \cdots i_s} = \frac{D_{i_1 \cdots i_s}}{D}.  (8)

In order to investigate the total sensitivity index of the input variable x_i, the total variance, D, can be divided into two complementary terms,

D = D_i^{T} + D_{\sim i},  (9)

where D_{\sim i} denotes the variance due to all input variables except x_i. Therefore, the total sensitivity index of the input variable x_i is expressed as

S_{T_i} = \frac{D_i^{T}}{D} = 1 - \frac{D_{\sim i}}{D}.  (10)

The variances in (9) and (10) can be calculated approximately by Monte-Carlo numerical integration, particularly when the function f(x) is highly nonlinear and/or implicit [16, 21]. The Monte-Carlo approximations for f_0, D, and D_{\sim i} are given by

\hat{f}_0 = \frac{1}{N} \sum_{m=1}^{N} f(x_m^{(1)}),  (11)

\hat{D} = \frac{1}{N} \sum_{m=1}^{N} f^2(x_m^{(1)}) - \hat{f}_0^2,  (12)

\hat{D}_{\sim i} = \frac{1}{N} \sum_{m=1}^{N} f(x_{\sim i,m}^{(1)}, x_{i,m}^{(1)}) f(x_{\sim i,m}^{(1)}, x_{i,m}^{(2)}) - \hat{f}_0^2,  (13)

where N is the sample size, x_m^{(1)} and x_m^{(2)} are the m-th points of two different sample sets in the n-dimensional unit hypercube, denoted by the superscripts (1) and (2), respectively. In (13), the second factor keeps the values of all input variables except x_i from sample set (1) and takes the value of x_i from sample set (2).
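The Monte-Carlo estimators above can be sketched in a few lines of Python. The test function, sample size, and variable names below are illustrative assumptions, not the paper's truss model:

```python
import random

def total_sensitivity_indices(f, n_vars, n_samples, rng):
    """Monte-Carlo estimate of the Sobol total indices S_Ti = 1 - D_~i / D."""
    # Two independent sample sets, playing the roles of superscripts (1) and (2).
    A = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_vars)] for _ in range(n_samples)]
    fA = [f(x) for x in A]
    f0 = sum(fA) / n_samples                                # estimate of f_0
    D = sum(y * y for y in fA) / n_samples - f0 ** 2        # estimate of D
    indices = []
    for i in range(n_vars):
        acc = 0.0
        for m in range(n_samples):
            # Keep all coordinates from A except x_i, which is taken from B.
            x = list(A[m])
            x[i] = B[m][i]
            acc += fA[m] * f(x)
        D_not_i = acc / n_samples - f0 ** 2                 # estimate of D_~i
        indices.append(1.0 - D_not_i / D)
    return indices

# x1 dominates the variance of this toy function, so S_T1 ~ 1 and S_T2 ~ 0.
rng = random.Random(42)
S = total_sensitivity_indices(lambda x: x[0] + 0.05 * x[1],
                              n_vars=2, n_samples=20_000, rng=rng)
```

Note that each total index costs one extra model evaluation per sample point, so for n variables the whole screening needs roughly N(n + 1) evaluations of f.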

Usually, input variables whose total sensitivity indices are less than 0.3 are considered to make only a slight contribution to the output of f(x) [22]. In this study, a more conservative threshold value of 0.05 is therefore defined to eliminate the small-influence input variables.
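The screening step reduces to a simple filter on the index values; the example indices below are made up for illustration, while the 0.05 threshold follows the text:

```python
def screen_variables(total_indices, threshold=0.05):
    """Return the positions of variables to keep (total index >= threshold)."""
    return [i for i, s in enumerate(total_indices) if s >= threshold]

kept = screen_variables([0.62, 0.003, 0.31, 0.01, 0.08])
# Variables at positions 0, 2, and 4 are retained; 1 and 3 are rejected.
```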

2.2. SVM

SVM is an emerging machine learning technique that has been successfully applied to pattern classification and regression analysis. It is based on the Vapnik-Chervonenkis dimension theory of statistical learning and the principle of structural risk minimization; thus it has a better generalization capability than conventional classification methods, which are based on the principle of empirical risk minimization.

Suppose a set of training examples \{(x_i, y_i)\}_{i=1}^{l}, where the x_i \in R^n are input vectors with associated labels y_i \in \{-1, +1\} (-1: label for class I; +1: label for class II). A kernel function, K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j), is used to map the input samples into a high-dimensional feature space through the mapping \phi(\cdot), so that samples overlapping in the original space become linearly separable in the feature space. Therefore, there exist hyperplanes, w \cdot \phi(x) + b = 0, separating the positive examples on one side and the negative examples on the other side, which satisfy

y_i (w \cdot \phi(x_i) + b) \ge 1, \quad i = 1, 2, \ldots, l,  (14)

where w and b are the weight vector and the bias of the hyperplane, respectively.

Among these separating hyperplanes, the so-called optimal separating hyperplane (OSH) separates all vectors without error while the distance between the closest vectors and the hyperplane is maximal. The OSH is found by minimizing \|w\|^2 under the constraints of (14). Therefore, the primal form of the objective function is

\min_{w, b} \Phi(w) = \frac{1}{2} \|w\|^2, \quad \text{subject to (14)}.  (15)

The Lagrange multipliers, \alpha_i \ge 0, are employed to solve the above problem. Consequently, the optimization problem is rewritten in a dual form:

\max_{\alpha} W(\alpha) = \sum_{i=1}^{l} \alpha_i - \frac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l} \alpha_i \alpha_j y_i y_j K(x_i, x_j), \quad \text{subject to } \sum_{i=1}^{l} \alpha_i y_i = 0, \; \alpha_i \ge 0.  (16)

In the case of linearly nonseparable training data, slack variables, \xi_i \ge 0, are introduced, and the objective problem becomes

\min_{w, b, \xi} \Phi(w, \xi) = \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{l} \xi_i, \quad \text{subject to } y_i (w \cdot \phi(x_i) + b) \ge 1 - \xi_i,  (17)

where C is the regularizing (margin) parameter that determines the trade-off between maximizing the margin and minimizing the classification error.

Similarly, the corresponding dual problem is expressed as

\max_{\alpha} W(\alpha) = \sum_{i=1}^{l} \alpha_i - \frac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l} \alpha_i \alpha_j y_i y_j K(x_i, x_j), \quad \text{subject to } \sum_{i=1}^{l} \alpha_i y_i = 0, \; 0 \le \alpha_i \le C.  (18)

With the OSH found, the decision function can be written as

f(x) = \operatorname{sgn}\left( \sum_{i=1}^{l} \alpha_i^* y_i K(x_i, x) + b^* \right),  (19)

where \alpha_i^* and b^* are the optimal Lagrange multipliers and bias of the OSH, respectively.
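The dual problem is usually solved with a quadratic-programming solver. As a hedged, self-contained sketch, the minimal linear soft-margin SVM below is instead trained by subgradient descent on the primal objective; the toy data, step size, and parameter values are illustrative assumptions, not the paper's settings:

```python
def train_linear_svm(X, y, C=10.0, lr=0.005, epochs=1000):
    """Subgradient descent on (1/2)||w||^2 + C * sum_i max(0, 1 - y_i(w.x_i + b))."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1.0:
                # Hinge term is active: step along the negative subgradient.
                w = [wj - lr * (wj - C * yi * xj) for wj, xj in zip(w, xi)]
                b += lr * C * yi
            else:
                w = [wj - lr * wj for wj in w]  # only the regularizer acts
    return w, b

def decide(w, b, x):
    """Decision function sign(w.x + b): +1 -> class II, -1 -> class I."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0.0 else -1

# Toy separable data: class +1 lies roughly above the line x1 + x2 = 1.
X = [[0.0, 0.0], [0.2, 0.1], [0.3, 0.4], [1.0, 1.0], [0.9, 0.8], [0.7, 0.9]]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(X, y)
preds = [decide(w, b, x) for x in X]
```

A kernelized solver (as in the dual form above) replaces the dot products with K(x_i, x); the linear case is shown only to keep the sketch short.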

2.3. Methodology for Structural Reliability Assessment

The reliability assessment based on SA and SVM can be implemented as follows.

Step 1. Calculate the total sensitivity index of each input variable in the limit state function, g(x), by Monte-Carlo numerical integration, and eliminate the variables whose total sensitivity indices are less than 0.05.

Step 2. The samples used for the SA in Step 1 are also selected as the training samples for the SVM model. Note that the columns of the training samples correspond only to the residual input variables. The sets of failure and nonfailure samples are labeled class I and class II, respectively. The training samples and their associated class labels are used to train the SVM model.

Step 3. Generate the test samples of the residual variables according to their distributions, and classify them with the trained SVM model. The number of test samples is denoted by N_t.

Step 4. Count the number of test samples classified into class I. Consequently, the failure probability, P_f, is

P_f = \frac{N_f}{N_t},  (20)

where N_f is the number of test samples classified into class I and N_t is the total number of test samples.
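Steps 3 and 4 can be sketched as follows; the stand-in classifier plays the role of the trained SVM, and its threshold rule is a made-up assumption used only so the sketch runs end to end:

```python
import random

def assess_failure_probability(classify, sample, n_test):
    """Steps 3-4: classify n_test sampled points; label -1 (class I) marks failure."""
    n_fail = sum(1 for _ in range(n_test) if classify(sample()) == -1)
    return n_fail / n_test

# Stand-in for the trained SVM: declare failure when x1 + x2 > 1.5.
random.seed(1)
classify = lambda x: -1 if x[0] + x[1] > 1.5 else 1
pf = assess_failure_probability(classify,
                                lambda: [random.random(), random.random()],
                                20_000)
# Exact P(x1 + x2 > 1.5) over the unit square is 0.125.
```

The point of the method is that each call to `classify` is cheap, whereas a direct MCSM would need a full FEA run per test sample.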

3. Case Study: 10-Bar Planar Truss

3.1. General Description

A numerical 10-bar planar truss is adopted to validate the proposed reliability assessment method. The Young's modulus of each bar is  kN/m². The sectional area of each bar, A_i (i = 1, ..., 10), and the loads applied to the truss (as shown in Figure 1) were assumed to be random variables. Their distribution types are listed in Table 1.

The ultimate strength of the material is assumed to be 480 MPa. Consequently, the limit state function, g(x), can be expressed as

g(x) = 480 - \sigma_{\max}(x),  (21)

where \sigma_{\max}(x) is the maximum bar stress (in MPa) obtained by FEA.
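Evaluating the limit state for the truss requires an FEA run. As a hedged single-bar illustration of the stress check g = sigma_u - sigma (the load, area, and unit conventions below are assumptions, not the paper's data):

```python
def limit_state(P_kN, A_m2, sigma_u_MPa=480.0):
    """g > 0: safe; g <= 0: failure (sigma = P / A for one axially loaded bar)."""
    sigma_MPa = (P_kN * 1e3 / A_m2) / 1e6   # kN -> N, then Pa -> MPa
    return sigma_u_MPa - sigma_MPa

g_safe = limit_state(P_kN=100.0, A_m2=1e-3)   # stress 100 MPa -> safe
g_fail = limit_state(P_kN=600.0, A_m2=1e-3)   # stress 600 MPa -> failure
```

For the full truss, `limit_state` would instead call the FEA to obtain the maximum bar stress.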

3.2. SA of Input Variables

Sobol's method was employed to analyze the contribution of each variable to the output variance of the limit state function. First, the Latin hypercube sampling technique [23] was used to produce two sets of samples, each of size 50 × 13. Then, the estimates \hat{f}_0, \hat{D}, and \hat{D}_{\sim i} were calculated based on (11)-(13). Finally, the total sensitivity index of each variable was obtained from (10). The SA results of the input variables are shown in Figure 2. It is found that the total sensitivity indices of six of the input variables are less than 0.01, which indicates that these variables contribute little to the limit state function. Therefore, these variables are excluded from the input vectors of the SVM model.

3.3. Support Vector Classifier

It is noted that the samples evaluated in the process of the global SA in Section 3.2 are also used as training data for the SVM model, with the small-influence variables excluded from the input vectors. As mentioned in Section 2.3, class I and class II denote the failure and nonfailure sample sets, respectively. The SVM model is trained using the training samples and the corresponding sample labels. The Gaussian radial basis function is adopted as the kernel function of the SVM model, and the values of the penalty term, C, and the kernel parameter, σ, are determined to be  and 0.002, respectively.

A total of 20000 test samples were generated and input into the trained SVM model. The classification results of the test samples are shown in Table 2: 1,406 test samples are classified into class I. Consequently, the corresponding structural failure probability is 7.03%.

3.4. Comparisons and Discussions
3.4.1. Computational Accuracy

It is observed in Table 2 that the total CA for the test samples is 96.34%, showing an excellent classification capability. However, the CA for class I is only 72.13%. The main reasons for this can be summarized in two aspects: (a) the OSH of the SVM model only approximates the limit state function, so samples near the limit state surface may be misclassified; however, the number of misclassified samples in class I is 389, almost equal to the number of misclassified samples in class II (399), which indicates that the misclassified samples have only a slight effect on the assessment results; (b) the number of class I samples is far smaller than that of class II, so when the same number of misclassified samples appears in classes I and II, the CA of class I drops much more rapidly than that of class II.

In order to validate the applicability of the proposed method, three other structural reliability assessment methods (i.e., RSM, MCSM, and a conventional SVM model) were employed to evaluate the structural failure probability. The failure probabilities evaluated by the four methods are listed in Table 3. The result from MCSM is usually regarded as the exact solution. It is found that the result evaluated by the proposed method is the closest to that of MCSM, indicating that the proposed method is the most accurate of the methods employed apart from MCSM.

The classification results of the conventional SVM model are also listed in Table 2. Its total CA for the test samples is 95.28%, slightly lower than that of the proposed method (96.06%). The main reason is that the small-influence input variables affect the shape of the OSH, causing more distortion of the OSH than in the proposed method.

3.4.2. Computational Efficiency

The number of FEA runs required by each method is also listed in Table 3. RSM requires the fewest FEA runs, only 27; however, its accuracy is the worst. The numbers of FEA runs required by the proposed method and the SVM model are far smaller than that of MCSM, while their relative errors are 0.72% and 3.01%, respectively. However, the sample size required by the SVM model is two times that of the proposed method. This indicates that, compared with the conventional SVM model, the proposed method can largely reduce the runtime while ensuring computational accuracy. It can be expected that the proposed method will also largely reduce the data storage requirements for the reliability assessment of more complex structures.

4. Concluding Remarks

In this study, a novel reliability assessment method based on SA and SVM has been developed and successfully applied to the reliability assessment of a 10-bar planar truss. The results show that the proposed method not only reduces the data storage requirements while maintaining sufficient computational accuracy, but also has a better assessment capability than the other methods considered.

The proposed assessment method integrating SA and SVM has proved successful. However, it should be noted that this success was achieved only through numerical simulations; more field tests should be carried out to verify its feasibility and efficiency in practice.

Conflict of Interests

We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

Acknowledgments

The work is supported by the National Natural Science Foundation of China (nos. 51278127 and 50878057), the Ph.D. Programs Foundation of Ministry of Education (no. 20093514110005), and the National 12th Five-Year Research Program of China (no. 2012BAJ14B05), China.