Abstract

Metamodel-based seismic fragility analysis methods can overcome the challenge of high computational costs of problems considering the uncertainties of earthquakes and structural parameters; however, the accuracy of metamodels is difficult to control. To enhance the efficiency of analyses without compromising accuracy, a metamodeling method using Gaussian process regression (GPR) and active learning (AL) for seismic fragility analysis is proposed. In this method, a GPR metamodel is built to estimate the stochastic seismic response of a structure, in which the record-to-record variability is considered as in the dual-metamodel-based fragility analysis approach. The metamodel can also predict the estimation error. Taking advantage of this ability, we present an AL strategy for adaptive sampling, so that the metamodel can be improved adaptively according to the problem. Using this metamodel and Monte Carlo simulation, seismic fragility curves can be obtained with a small number of calls for time history analysis. To verify its effectiveness, the proposed method was applied to three examples of nonlinear structures and compared with existing methods. The results show that this method has high computational efficiency and can ensure the accuracy of fragility curves without making the metamodel globally accurate.

1. Introduction

Structural seismic fragility analysis is an important part of the performance-based earthquake engineering framework [1]. From the analysis, the fragility function of a structure can be obtained in the form of fragility curves that describe the conditional probability that the engineering demand parameter (EDP) exceeds a certain limit state for a given intensity measure (IM). The EDP is usually a structural response, such as the maximum displacement or maximum interstorey angle [2]. Seismic fragility analysis can be used to assist designers in improving the seismic performance of structures according to their degree of damage and provide a basis for postearthquake loss assessment. Fragility curves can be obtained by experience, expert judgment, or analytical methods [3]. The former two approaches tend to be limited by the lack of seismic damage data; therefore, analytical methods are more widely used in engineering. In analytical methods, a number of time history analyses need to be performed to consider the uncertainties of seismic ground motions [1].

Besides the uncertainties of earthquake ground motions, the uncertainties of structural parameters also have a significant impact on the structural response [4]; therefore, they need to be considered in the fragility analysis as well. Such problems can be addressed by Monte Carlo simulation (MCS) methods combined with incremental dynamic analysis [2, 5–7]. However, numerous nonlinear time history analyses are required in these methods, which results in high computational costs for large complex structures. To reduce the calculation cost of MCS, metamodel (also called surrogate model) approaches have been introduced into fragility analysis. These approaches use approximate functions with low computational cost to estimate the results of finite element analysis, thereby greatly improving the calculation efficiency. The metamodel is built based on a training sample set \(T = (\mathbf{X}, \mathbf{y}) = \{(\mathbf{x}^{(i)}, y^{(i)}) \mid i = 1, 2, \ldots, m\}\), where \(\mathbf{x}^{(i)}\) is an n-dimensional input vector, \(\mathbf{X} = [\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(m)}]\) is the input matrix, and the corresponding output vector is \(\mathbf{y} = [y^{(1)}, y^{(2)}, \ldots, y^{(m)}]\). Commonly used metamodels include the response surface methodology [8], radial basis functions [9], artificial neural networks [10], and support vector machines [11]. In recent years, many investigations have been conducted on the application of metamodels in fragility analysis. Towashiraporn [12] proposed a dual-metamodel-based seismic fragility analysis (D-M-SFA) method, in which two metamodels were employed to fit the mean and standard deviation of the random responses, respectively, and the damage probabilities for a given IM level were calculated by Monte Carlo sampling. De Grandis et al. [13] proposed an improved D-M-SFA method for nuclear power plants by integrating the first-order reliability approach into the metamodeling. With the general damage selected as the EDP, Park and Towashiraporn [14] analyzed the seismic vulnerability of track-on steel-plate-girder bridges using the D-M-SFA method. Saha et al. [15] applied D-M-SFA to base-isolated liquid storage tanks and studied the influence of uncertainties in the isolator parameters on the seismic response of the tanks, with the peak ground acceleration (PGA) taken as the IM. Ghosh et al. [16] improved the D-M-SFA approach with the moving least squares method and applied it to reinforced concrete bridge piers. Zhang and Wu [17] used the Kriging model for metamodeling in D-M-SFA and applied the improved fragility analysis method to reinforced concrete bridges. Xiao et al. [18] proposed an improved D-M-SFA approach for base-isolated structures that considers the correlation of the seismic demands of components. Ghosh et al. proposed a metamodeling method based on support vector regression [19] and a Kriging-based metamodeling method [20] for seismic fragility analysis, both of which fit the structural responses for different earthquake records.

In most existing metamodel-based seismic fragility methods, training samples are generated by one-shot sampling, which makes it difficult to control the accuracy of the metamodel. In fact, the number of training points required for metamodeling differs for structures of different complexities and is difficult to predetermine [21]. Excessive sampling leads to unnecessary calculation costs, whereas too small a sample size may result in poor model accuracy, which tends to cause problems for engineering designers. To achieve high global accuracy of the metamodel, these methods distribute the training points evenly in the sampling space; however, little attention is paid to the locations that have a key impact on the accuracy of the probability results.

Active learning (AL) sampling is an ideal tool for adaptively improving the accuracy of metamodels according to the complexity of the structure, and it has been widely used in the field of structural reliability analysis [22–25]. Generally, AL-based reliability analysis (AL-RA) methods first establish an initial metamodel with a small number of training samples and then select new training points from a candidate sample set, according to the learning function values of the candidate points, to update the metamodel. The result of this type of method is very close to that of MCS, while the calculation burden is much smaller, giving it a great advantage in reliability problems. In recent years, a variety of efficient AL sampling strategies have been proposed. However, they cannot be directly used in metamodel-based seismic fragility analysis because there are some differences between AL-RA problems and seismic fragility analysis problems. In the former, whether the structure fails is judged according to the approximate value of the performance function, and the uncertainties can be described by several random variables or interval variables. In the fragility analysis problem, by contrast, multiple damage levels rather than only the two states of safety and failure need to be considered, and the metamodel is used to estimate the structural responses rather than the performance function values. Moreover, the record-to-record variability caused by the uncertainties in the frequency content and other attributes of ground motions [3] is difficult to express using only a few variables. To the best of the authors’ knowledge, there are few studies on the application of AL in seismic vulnerability analysis considering multiple thresholds of damage levels.

To enhance the efficiency and ensure the accuracy of seismic fragility analysis, in this study, the D-M-SFA approach is improved with a metamodeling method using Gaussian process regression (GPR) and AL. In this method, a GPR model is employed to predict the seismic response of the structure. Based on the characteristics of GPR that can reveal the error of prediction, an AL strategy is presented to realize the adaptive sampling of metamodels for a given problem. With the real response values obtained by nonlinear time history analysis replaced by the predicted values in MCS, the approximate failure probability at different IM levels can be calculated. In this study, the procedure of the D-M-SFA method and the basic theory of GPR are first reviewed. Second, the details of the proposed method are presented. Finally, the proposed method is applied to three examples of nonlinear structures—a single degree of freedom (SDOF) system, a concrete frame, and a steel frame—and the fragility curves are compared with those of the MCS and D-M-SFA methods. The nonlinear time history analysis was performed using OpenSeesPy (the Python version of OpenSees).

2. Dual-Metamodel-Based Seismic Fragility Analysis Method

The D-M-SFA is an efficient seismic fragility analysis method that has been used on a variety of engineering structures. In this method, the response of a structure under seismic loading is assumed to follow a certain distribution [14]. In other words, when the structural parameters and IM level are fixed, the seismic response D is still a random variable, and its statistical characteristics can be completely determined by its mean and variation [13]. Based on this assumption, the record-to-record variability can be implicitly incorporated in a suite of seismic waves, thus avoiding the use of the high-dimensional earthquake time history as the input of the metamodel [16]. In many studies, the seismic response is considered to be lognormally distributed [26–30], and in this study, that assumption is also made. In D-M-SFA, the input variables of the metamodel include the structural parameters \(\mathbf{s}\) and the IM variable \(im\). First, a set of training points is selected in the input variable space using an experimental design method. At each training point, a time history analysis is performed for all the selected seismic records, and the mean \(\mu_{\ln D}\) and standard deviation \(\sigma_{\ln D}\) of the logarithms of the response results are obtained. Then, the metamodels of \(\mu_{\ln D}(\mathbf{s}, im)\) and \(\sigma_{\ln D}(\mathbf{s}, im)\) can be built, from which the approximations \(\hat{\mu}_{\ln D}\) and \(\hat{\sigma}_{\ln D}\) of the mean and standard deviation are obtained. The stochastic seismic response [18] can be estimated from

\[
\ln \hat{D} = \hat{\mu}_{\ln D}(\mathbf{s}, im) + \varepsilon, \tag{1}
\]

where \(\varepsilon\) represents a normally distributed variable with a mean of 0 and a standard deviation of \(\hat{\sigma}_{\ln D}(\mathbf{s}, im)\). Generally, multiple damage levels need to be considered in fragility analysis, and their corresponding limit states can be expressed as \(D = z_1, D = z_2, \ldots, D = z_Q\), where Q represents the number of damage levels, and \(z_l\) (l = 1, …, Q) represents the threshold corresponding to the l-th limit state \(LS_l\). With the damage state evaluated according to \(\hat{D}\), the probability of the seismic response exceeding \(z_l\) can be calculated using Monte Carlo sampling.
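To make equation (1) concrete, the following minimal sketch shows how stochastic response samples could be drawn once the two D-M-SFA metamodels are available. The helpers mu_model and sigma_model are hypothetical stand-ins for the fitted mean and standard-deviation metamodels, not part of the original method description.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_response(mu_model, sigma_model, s, im, n_samples):
    """Sample stochastic seismic responses per equation (1).

    mu_model, sigma_model: hypothetical fitted metamodels returning the
    mean and standard deviation of ln(D) for the inputs (s, im).
    """
    mu = mu_model(s, im)        # approximation of the mean of ln(D)
    sigma = sigma_model(s, im)  # approximation of the std. dev. of ln(D)
    eps = rng.normal(0.0, sigma, size=n_samples)  # record-to-record term
    return np.exp(mu + eps)     # lognormal response samples of D
```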

Compared with MCS, the computational burden of D-M-SFA is significantly reduced. However, the training set is usually obtained by one-shot sampling, and the required number of points is difficult to determine. In addition, to pursue the global accuracy of the metamodel, error indicators such as the coefficient of determination \(R^2\) and the root mean square error are employed for model testing. In fact, for the evaluation of the structural damage level, it is not necessary for the metamodel to be highly accurate at every position. Accuracy verification based on test points also requires additional time history analyses. To overcome these problems, in this study D-M-SFA is improved by introducing the AL technique into the metamodeling.

3. Gaussian Process Regression

GPR is a supervised machine learning method based on the Gaussian random process and Bayesian theory; it uses a Gaussian process model to fit the data. GPR can not only provide the approximation but also predict the error of the approximation, which is conducive to the realization of AL sampling [31]. Therefore, GPR was adopted to establish the metamodel in this study.

The GPR regards the target function \(F(\mathbf{x})\) as a random process whose statistical characteristics are completely determined by the mean function \(M(\mathbf{x})\) and the covariance function \(k(\mathbf{x}, \mathbf{x}')\) [32]. To simplify the calculation, \(M(\mathbf{x})\) is usually set to 0. For the n-dimensional input vector \(\mathbf{x}\), the output value y is assumed to be

\[
y = F(\mathbf{x}) + \varepsilon_N, \tag{2}
\]

where \(\varepsilon_N\) denotes the noise, which follows a normal distribution with a mean value of 0 and a standard deviation of \(\sigma_N\).

The model is built based on the training sample set \(T = (\mathbf{X}, \mathbf{y}) = \{(\mathbf{x}^{(i)}, y^{(i)}) \mid i = 1, 2, \ldots, m\}\). For a given point \(\mathbf{x}^*\), the joint prior distribution of \(\mathbf{y}\) and the function value \(F(\mathbf{x}^*)\) is

\[
\begin{bmatrix} \mathbf{y} \\ F(\mathbf{x}^*) \end{bmatrix}
\sim N\!\left( \mathbf{0},\;
\begin{bmatrix}
K(\mathbf{X}, \mathbf{X}) + \sigma_N^2 \mathbf{I}_m & \mathbf{k}^* \\
(\mathbf{k}^*)^{\mathrm{T}} & k(\mathbf{x}^*, \mathbf{x}^*)
\end{bmatrix}
\right), \tag{3}
\]

where \(K(\mathbf{X}, \mathbf{X}) = [k_{ij}]_{m \times m}\) is the covariance matrix, \(k_{ij} = k(\mathbf{x}^{(i)}, \mathbf{x}^{(j)})\) is the covariance between \(\mathbf{x}^{(i)}\) and \(\mathbf{x}^{(j)}\), \(\mathbf{k}^* = [k(\mathbf{x}^{(1)}, \mathbf{x}^*), \ldots, k(\mathbf{x}^{(m)}, \mathbf{x}^*)]^{\mathrm{T}}\) is the \(m \times 1\) covariance matrix between \(\mathbf{x}^*\) and \(\mathbf{X}\), and \(\mathbf{I}_m\) is the m-dimensional identity matrix. The prediction \(\hat{F}(\mathbf{x}^*)\) and predicted variance \(\hat{\sigma}^2(\mathbf{x}^*)\) of \(F(\mathbf{x}^*)\) are expressed as

\[
\begin{aligned}
\hat{F}(\mathbf{x}^*) &= (\mathbf{k}^*)^{\mathrm{T}} \left[ K(\mathbf{X}, \mathbf{X}) + \sigma_N^2 \mathbf{I}_m \right]^{-1} \mathbf{y}, \\
\hat{\sigma}^2(\mathbf{x}^*) &= k(\mathbf{x}^*, \mathbf{x}^*) - (\mathbf{k}^*)^{\mathrm{T}} \left[ K(\mathbf{X}, \mathbf{X}) + \sigma_N^2 \mathbf{I}_m \right]^{-1} \mathbf{k}^*.
\end{aligned} \tag{4}
\]

The distribution of \(F(\mathbf{x}^*)\) is as follows:

\[
F(\mathbf{x}^*) \sim N\!\left( \hat{F}(\mathbf{x}^*),\; \hat{\sigma}^2(\mathbf{x}^*) \right). \tag{5}
\]

The covariance function \(k(\mathbf{x}, \mathbf{x}')\) is usually of the square exponential type and is expressed as

\[
k(\mathbf{x}, \mathbf{x}') = \sigma_f^2 \exp\!\left[ -\frac{1}{2} (\mathbf{x} - \mathbf{x}')^{\mathrm{T}} \mathbf{L}^{-2} (\mathbf{x} - \mathbf{x}') \right], \tag{6}
\]

where \(\mathbf{L} = \operatorname{diag}(\mathbf{l})\) and \(\mathbf{l} = (l_1, l_2, \ldots, l_n)\). \(\boldsymbol{\theta} = (\mathbf{l}, \sigma_f, \sigma_N)\) is the hyperparameter vector, and its optimal value can be obtained by minimizing the negative log-likelihood function of the conditional probability \(p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta})\), expressed as

\[
-\ln p(\mathbf{y} \mid \mathbf{X}, \boldsymbol{\theta}) = \frac{1}{2} \mathbf{y}^{\mathrm{T}} \mathbf{C}^{-1} \mathbf{y} + \frac{1}{2} \ln |\mathbf{C}| + \frac{m}{2} \ln 2\pi, \qquad \mathbf{C} = K(\mathbf{X}, \mathbf{X}) + \sigma_N^2 \mathbf{I}_m. \tag{7}
\]

The “L-BFGS-B” algorithm provided by the SciPy library is used here to solve this optimization problem.
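For illustration, equations (2)–(7) can be implemented in a few lines of NumPy. The sketch below assumes a single (isotropic) length scale for brevity, whereas equation (6) uses one length scale per input dimension; all function names are illustrative, and the hyperparameters are optimized in log space with SciPy's L-BFGS-B, as mentioned above.

```python
import numpy as np
from scipy.optimize import minimize

def sq_exp_kernel(X1, X2, length, sigma_f):
    """Square exponential covariance, equation (6), isotropic length scale."""
    d = (X1[:, None, :] - X2[None, :, :]) / length
    return sigma_f**2 * np.exp(-0.5 * np.sum(d**2, axis=-1))

def gpr_predict(X, y, X_star, length, sigma_f, sigma_n):
    """Prediction and predicted variance, equation (4)."""
    C = sq_exp_kernel(X, X, length, sigma_f) + sigma_n**2 * np.eye(len(X))
    k_star = sq_exp_kernel(X, X_star, length, sigma_f)       # m x r matrix
    mean = k_star.T @ np.linalg.solve(C, y)
    var = sq_exp_kernel(X_star, X_star, length, sigma_f).diagonal() \
          - np.sum(k_star * np.linalg.solve(C, k_star), axis=0)
    return mean, var

def neg_log_likelihood(log_theta, X, y):
    """Negative log-likelihood, equation (7); theta optimized in log space."""
    length, sigma_f, sigma_n = np.exp(log_theta)
    C = sq_exp_kernel(X, X, length, sigma_f) + sigma_n**2 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (y @ np.linalg.solve(C, y) + logdet + len(y) * np.log(2 * np.pi))

# hyperparameter tuning with L-BFGS-B, as in the text:
# res = minimize(neg_log_likelihood, x0=np.zeros(3), args=(X, y), method="L-BFGS-B")
```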

4. Proposed Metamodeling Method Using GPR and AL

4.1. Metamodel of Seismic Response

In the proposed method, the assumption on the seismic response made in the D-M-SFA method to handle the record-to-record variability is still adopted. To facilitate the prediction of the approximation error, only one GPR metamodel is employed to estimate the seismic response, which avoids establishing separate metamodels of the mean and standard deviation as in the D-M-SFA method. Because the seismic response follows a lognormal distribution, with reference to (1), the logarithm of the stochastic response can be expressed as

\[
\ln D = \mu_{\ln D}(\mathbf{s}, im) + \varepsilon. \tag{8}
\]

The normally distributed variable \(\varepsilon\) can be transformed into \(\varepsilon = u\,\sigma_{\ln D}(\mathbf{s}, im)\), where u is a standard normally distributed variable. Then, (8) is converted into

\[
\ln D = \mu_{\ln D}(\mathbf{s}, im) + u\,\sigma_{\ln D}(\mathbf{s}, im). \tag{9}
\]

Given the structural parameters \(\mathbf{s}\) and the IM variable \(im\), the statistical characteristics of D obtained by (9) are the same as those obtained by equation (8).

The input variable vector \(\mathbf{x}\) of the metamodel is composed of \(\mathbf{s}\), u, and \(im\), that is, \(\mathbf{x} = [\mathbf{s}, u, im]\). By selecting sample points in the \(\mathbf{x}\) space and calculating their responses, the training set can be obtained, based on which the GPR model of the seismic response is built. This metamodel can provide both the approximation of the structural response and the corresponding estimation error. By randomly generating the value of u, the approximation of the seismic response with randomness can be obtained, as in the D-M-SFA method. Compared with the D-M-SFA method, the number of metamodels in the developed method is reduced to one, and a variable u related to the randomness of the seismic response is added to the input variables. The differences between the metamodels in the two methods are shown in Figure 1.

Based on the surrogate model of the seismic response, Monte Carlo sampling can be used to calculate the probability that the structure reaches a certain damage level. By combining the given IM value \(im\) with N points randomly generated according to the distribution parameters of the random variables \(\mathbf{s}\) and u, the MCS samples are obtained. Then, the probability of the EDP exceeding \(LS_l\) is calculated as

\[
P\!\left(D > z_l \mid im\right) = \frac{1}{N} \sum_{j=1}^{N} I\!\left(z_l - \hat{D}^{(j)}\right), \tag{10}
\]

where \(I(\cdot)\) is an indicator function for counting the number of sample points at which the structural response exceeds the threshold. The value of \(I(\cdot)\) is 0 if the value in brackets is greater than 0; otherwise, it is 1. The fragility curve can be plotted by calculating the damage probabilities for different IM levels.
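As a minimal sketch of equation (10): given a fitted metamodel of ln D and a sampler for the structural parameters (both hypothetical stand-ins here), the exceedance probability at a given IM level can be estimated as follows.

```python
import numpy as np

rng = np.random.default_rng(1)

def damage_probability(predict_lnD, sample_s, z_thresholds, im, N=100_000):
    """Estimate P(D > z_l | im) by Monte Carlo sampling, equation (10).

    predict_lnD: hypothetical metamodel returning the approximation of ln(D)
    sample_s: hypothetical helper drawing N realizations of s (shape N x n_s)
    """
    s = sample_s(N)                       # structural parameter samples
    u = rng.standard_normal((N, 1))       # standard normal variable u
    im_col = np.full((N, 1), im)
    X = np.hstack([s, u, im_col])         # MCS samples x = [s, u, im]
    D = np.exp(predict_lnD(X))            # approximate responses
    # the indicator I(z_l - D) is 1 exactly when the response exceeds z_l
    return np.array([(D > z_l).mean() for z_l in z_thresholds])
```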

4.2. Initial Training Set

In this study, Latin hypercube sampling (LHS) [6, 33] is employed to sample the initial training points within the range of the input variables. LHS has the advantage that the number of sample points can be arbitrarily specified and excessive aggregation of sample points can be avoided. If \(\mathbf{x}\) contains n variables and \(n_s\) sample points are to be selected in the sampling space, the basic steps of LHS are as follows: (1) divide the sampling range of each variable into \(n_s\) cells; (2) select a number randomly within each cell to generate n groups of numbers; and (3) pair these n groups of numbers randomly to obtain \(n_s\) sample points. Because the responses of the MCS samples are predicted by the metamodel, the sampling range of the training points should be a region into which most MCS samples fall. The upper and lower limits of the i-th random variable \(x_i\) in \(\mathbf{x}\) can be taken as \(F_i^{-1}(p_u)\) and \(F_i^{-1}(p_l)\), with \(p_u\) close to 1 and \(p_l\) close to 0, where \(F_i^{-1}\) is the inverse function of the probability distribution function of \(x_i\). The range of \(im\) is the variation range of the IM concerned in the fragility analysis. As the training set is updated by AL sampling, only a small number of initial points are required. For example, the number of initial samples can be equal to the dimension of the metamodel input vector [31].
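The three LHS steps above translate directly into NumPy; below is a small sketch with illustrative bounds (the function and variable names are not from the paper).

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(ns, lower, upper):
    """Basic LHS following steps (1)-(3): stratify each variable's range
    into ns cells, draw one value per cell, then randomly pair columns."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = len(lower)
    # steps (1)-(2): one uniform draw inside each of the ns equal cells
    cells = (np.arange(ns)[:, None] + rng.random((ns, n))) / ns
    # step (3): random pairing of the n groups of numbers
    for j in range(n):
        cells[:, j] = rng.permutation(cells[:, j])
    return lower + cells * (upper - lower)

# e.g., an initial set whose size equals the input dimension, as suggested above
X0 = latin_hypercube(ns=3, lower=[0.8, -3.0, 0.0], upper=[1.2, 3.0, 1.0])
```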

4.3. AL Sampling Strategy

Owing to the plastic deformation of structures under seismic loading, the relationship between the output response and the input variables is a complex function, and it is difficult to establish a metamodel with sufficient accuracy by only using the initial training set. Therefore, an AL sampling strategy is presented to adaptively enrich the training sample set according to the complexity of the structure.

First, a candidate sample set \(\mathbf{X}_c\) is generated without calculating the corresponding responses. Then, according to the learning function values of the candidate points, the point \(\mathbf{x}^{new}\) that is most favorable for improving the accuracy of the metamodel is selected, and its response is calculated. The GPR model is updated once by adding \(\mathbf{x}^{new}\) and its response to the training set. The accuracy of the metamodel is gradually improved by sequentially selecting new training points from the candidate set. It can be seen from (10) that the accuracy of the probability results depends on the accuracy of \(\hat{D}\) in the vicinities of the thresholds, while the accuracy at a position far from the limit states does not affect the judgment of the sign of \(z_l - \hat{D}\). Therefore, the accuracy of the metamodel near the limit states is more critical, and the learning function should be able to evaluate the contribution of candidate points to improving this accuracy.

Various learning functions have been proposed by researchers [34–36]. The expected feasibility (EF) learning function is commonly used, and it can provide a balance between the search in the vicinity of the limit state and exploration in the area where the uncertainty is high [24]. This function is used to select the new training points in this study. The EF value at a point is obtained from the prediction \(\hat{D}(\mathbf{x})\) and the predicted error \(\hat{\sigma}(\mathbf{x})\) provided by the metamodel. The EF function [37, 38] is defined as

\[
\begin{aligned}
EF_l(\mathbf{x}) ={}& \left(\hat{D} - z_l\right)\left[2\Phi\!\left(\frac{z_l - \hat{D}}{\hat{\sigma}}\right) - \Phi\!\left(\frac{z_l^- - \hat{D}}{\hat{\sigma}}\right) - \Phi\!\left(\frac{z_l^+ - \hat{D}}{\hat{\sigma}}\right)\right] \\
&- \hat{\sigma}\left[2\phi\!\left(\frac{z_l - \hat{D}}{\hat{\sigma}}\right) - \phi\!\left(\frac{z_l^- - \hat{D}}{\hat{\sigma}}\right) - \phi\!\left(\frac{z_l^+ - \hat{D}}{\hat{\sigma}}\right)\right] \\
&+ \xi\left[\Phi\!\left(\frac{z_l^+ - \hat{D}}{\hat{\sigma}}\right) - \Phi\!\left(\frac{z_l^- - \hat{D}}{\hat{\sigma}}\right)\right], \tag{11}
\end{aligned}
\]

where \(\Phi\) and \(\phi\) are the cumulative distribution function and probability density function of the standard normal distribution, \(z_l^{\pm} = z_l \pm \xi\), and ξ is taken as \(2\hat{\sigma}(\mathbf{x})\). The EF value at a point indicates how well the true function value at this point is expected to satisfy the equality constraint [39, 40]. A point with a prediction close to \(z_l\) and a large predicted error will have a large EF value [41]. In AL-RA problems, the safety state is evaluated according to whether the approximate value of the performance function is greater than the threshold z, and z is generally 0 [24]. However, for vulnerability analysis problems, it is necessary to consider the accuracy of \(\hat{D}\) in the vicinities of multiple thresholds \(z_1, z_2, \ldots, z_Q\). Therefore, the maximum EF (MEF) value at a point over all limit states is taken as the learning function value of this point, which is expressed as

\[
MEF(\mathbf{x}) = \max_{l = 1, \ldots, Q} EF_l(\mathbf{x}). \tag{12}
\]

Then, \(\mathbf{x}^{new}\) is selected as the point with the greatest MEF value in the candidate set, which is described as

\[
\mathbf{x}^{new} = \arg\max_{\mathbf{x} \in \mathbf{X}_c} MEF(\mathbf{x}). \tag{13}
\]

The new training point may affect the accuracy of \(\hat{D}\) in the vicinities of all thresholds, but it mainly improves the accuracy of the model for one particular limit state. After updating the metamodel with the new sample, the next new training point can be searched for in the same manner. During the sampling process, the new points switch among the areas close to the different limit states as the model precision changes, so that the accuracy of \(\hat{D}\) in the vicinities of all thresholds is improved together. When \(MEF(\mathbf{x}^{new})\) is less than a specified tolerance ζ (e.g., a value of about 1/20–1/50 of the standard deviation of the function values at the initial training samples), the metamodel is considered to have sufficient accuracy and the sampling can be stopped.
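The learning function and the selection rule can be sketched as follows, where mu and sigma are the metamodel prediction and predicted standard deviation at the candidate points; the choice ξ = 2σ̂(x) follows the common setting in the EF literature (the original symbol definition is garbled in the source), and the function names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def expected_feasibility(mu, sigma, z):
    """EF learning function, equation (11), evaluated at all candidates."""
    xi = 2.0 * sigma                      # assumed xi = 2*sigma(x)
    z_lo, z_hi = z - xi, z + xi
    t = (z - mu) / sigma
    t1 = (z_lo - mu) / sigma
    t2 = (z_hi - mu) / sigma
    return (mu - z) * (2 * norm.cdf(t) - norm.cdf(t1) - norm.cdf(t2)) \
           - sigma * (2 * norm.pdf(t) - norm.pdf(t1) - norm.pdf(t2)) \
           + xi * (norm.cdf(t2) - norm.cdf(t1))

def select_new_point(X_cand, mu, sigma, thresholds, zeta):
    """MEF over all limit states (eq. (12)) and arg-max selection (eq. (13));
    returns None when the stopping tolerance zeta is met."""
    mef = np.max([expected_feasibility(mu, sigma, z) for z in thresholds],
                 axis=0)
    best = np.argmax(mef)
    return None if mef[best] < zeta else X_cand[best]
```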

The candidate sample points in the proposed method are also generated using the LHS method, but they are not uniformly selected in the space of the variables. MCS random points are very sparse at locations where the joint probability density of the random variables is very small. At such locations, the precision of the metamodel is not important; consequently, there is no need to scatter many candidate points there. The candidate sample set is generated as follows: a group of points \(\mathbf{H} = [h_{ij}]_{R \times n}\) is uniformly selected in the space \((0, 1)^n\) by LHS; then, the candidate set \(\mathbf{X}_c = [x_{ij}^c]_{R \times n}\) is obtained by the transformation [6]

\[
x_{ij}^c = F_j^{-1}(h_{ij}), \quad i = 1, \ldots, R, \; j = 1, \ldots, n, \tag{14}
\]

where \(F_j^{-1}\) is the inverse function of the probability distribution function of the j-th input variable.

In this transformation, im can be regarded as a variable that follows a uniform distribution.
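Equation (14) is the standard inverse-transform mapping from the unit hypercube; below is a sketch using scipy.stats.qmc for the LHS points, with an illustrative set of marginal distributions for x = [s, u, im].

```python
import numpy as np
from scipy.stats import norm, uniform, qmc

# marginal distributions of x = [s, u, im]; the values are illustrative
marginals = [norm(loc=2.0e11, scale=1.0e10),  # e.g., a structural parameter s
             norm(loc=0.0, scale=1.0),        # the standard normal variable u
             uniform(loc=0.0, scale=1.0)]     # im, uniform over its range

R = 2000                                      # candidate set size
H = qmc.LatinHypercube(d=len(marginals), seed=3).random(R)  # points in (0,1)^n
X_cand = np.column_stack([dist.ppf(H[:, j])   # equation (14): x_ij = F_j^{-1}(h_ij)
                          for j, dist in enumerate(marginals)])
```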

4.4. Main Steps of Seismic Fragility Analysis

In summary, the flowchart of the developed fragility analysis procedure based on the adaptive metamodel is shown in Figure 2, and the basic steps are as follows (a condensed sketch of steps (2)–(5) is given after the list):

(1) Construct a finite element model of the structure and select seismic wave records.
(2) Choose \(n_s\) (e.g., the dimension of the input vector) initial training points from the space of \(\mathbf{x} = [\mathbf{s}, u, im]\) using the LHS method, and perform a time history analysis for all earthquake records at each training point to calculate the seismic responses according to equation (9), thereby obtaining the initial training sample set.
(3) According to equation (14), generate a candidate sample set containing R points (R = 2,000 in this study) using LHS.
(4) Build the GPR metamodel of the seismic response based on the training sample set.
(5) Find the point with the maximum MEF value in the candidate set according to equation (13). If \(MEF(\mathbf{x}^{new})\) is less than the specified tolerance ζ (e.g., a value of approximately 1/20–1/50 of the standard deviation of the function values at the initial training samples), proceed to step (6); otherwise, add the new point with its seismic response to the training set and return to step (4).
(6) According to equation (10), calculate the damage probabilities of the structure at different IM levels using MCS based on the metamodel, and plot the fragility curves by fitting the probability results.
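The driver loop below condenses steps (2)–(5). It uses scikit-learn's GPR as a stand-in for the GPR implementation, select_new_point from the earlier sketch, and a hypothetical run_time_history wrapper around the OpenSeesPy analyses; none of these names come from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def build_fragility_metamodel(X_init, run_time_history, X_cand, thresholds, zeta):
    """Fit the GPR on the initial set and enrich it by AL sampling.

    run_time_history(x): hypothetical wrapper returning ln(D) per eq. (9)
    from OpenSeesPy time history analyses over all selected records.
    """
    X = list(X_init)
    y = [run_time_history(x) for x in X]
    while True:
        gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                       normalize_y=True).fit(X, y)
        mu, sigma = gpr.predict(X_cand, return_std=True)
        x_new = select_new_point(X_cand, mu, sigma, thresholds, zeta)
        if x_new is None:              # MEF below tolerance: stop sampling
            return gpr
        X.append(x_new)                # step (5): enrich the training set
        y.append(run_time_history(x_new))
```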

5. Numerical Study

In this section, three examples of nonlinear structures—a nonlinear SDOF system, a five-storey concrete frame, and a four-storey steel frame—are analyzed, and the results are compared with those of the D-M-SFA and MCS [17] methods to verify the effectiveness and efficiency of the proposed method. In the examples, it is assumed that the seismic fortification intensity is 8 and the site classification is II. The structural damping ratio was set to 0.05. Referring to the requirements for seismic records in time history analysis and the design response spectrum provided in the current Chinese code for seismic design of buildings (GB50011-2010) [42], a suite of records was selected from the ground motion database of the Pacific Earthquake Engineering Research Center. According to Ref. [27], at least 10 seismic records are required to evaluate the seismic performance of structures. Therefore, 12 seismic waves (listed in Table 1) were selected for the time history analysis. The response spectra of the ground motions and the design spectrum are shown in Figure 3. In the fragility analysis, PGA is taken as the IM, and the IM level varies from 0 to 1.0 g. As the maximum interstorey angle is widely accepted as a practical response parameter to evaluate structural damage [43] and has been used in many design guidelines (e.g., GB50011-2010), it was selected as the EDP for the frames in this study.

The calculation code was programmed in Python, and the finite element models of the structures were constructed using OpenSeesPy. Because the computational time is mainly spent on the nonlinear time history analyses, the number of calls to the finite element analysis, Nc, is used to evaluate the computational efficiency; for the metamodel-based methods, Nc equals the number of training points. In each call to the finite element analysis, a time history analysis is performed for all selected earthquake records.

5.1. Example 1: A Nonlinear SDOF System

Figure 4(a) shows a nonlinear SDOF system [12, 17] composed of a lumped mass M and a spring with elastic stiffness k and yield force Fy. The force-deformation behavior of the spring was simulated using a bilinear model (Figure 4(b)) in which the ratio of the postyield to the initial stiffness α is 0.05.

The maximum displacement of the structure, dm, was selected as the EDP. Two damage levels were taken into account, and their dm thresholds were set as z1 = 0.02 m and z2 = 0.08 m, respectively. The structural parameters k, M, and Fy were considered as random variables. The distribution information of the variables is presented in Table 2.

The training set used to build the initial GPR model was obtained using LHS, as listed in Table 3. There were 2,000 points in the candidate set, and their distribution in the k–M–Fy space is shown in Figure 5. The ζ value in the termination condition was taken as 0.003, which is about 1/50 of the standard deviation of the function values at the initial training points. During AL sampling, 29 training points were added to the training set, and the changes in \(MEF(\mathbf{x}^{new})\) and in the responses of the new points during the sampling process are shown in Figure 6. It can be seen that, with an increase in the number of training points, the value of \(MEF(\mathbf{x}^{new})\) decreases gradually with fluctuation. When the value of \(MEF(\mathbf{x}^{new})\) is reduced to ζ, the sampling stops. The response values and predicted values of the newly added sample points were mostly near the thresholds z1 and z2.

Once the metamodel was established, the approximate damage probabilities at different IM levels were calculated by Monte Carlo sampling, and the fragility curves are plotted in Figure 7. In addition, the D-M-SFA and MCS methods were used to draw fragility curves for comparison. In the D-M-SFA method, the metamodels were built using GPR, and the training points were generated by one-shot sampling, with the number of samples set to be the same as that of the proposed method. In the MCS method, the simulation points were sampled using LHS [17], and a finite element analysis was performed for each simulation sample. According to Ref. [2], 200 samples selected using LHS are sufficient to ensure that the accuracy of the MCS results meets the needs of fragility analysis; to be conservative, 2,000 samples were generated for the MCS here. The damage probabilities for PGAs of 0.0184, 0.1, 0.2, …, 1.0 g were calculated, and the fragility curves were obtained by fitting these results. Because MCS is generally regarded as a very accurate reliability method, it was used to test the accuracy of the other methods.

The fragility curves obtained using the different methods are shown in Figure 7. In addition, the Nc values of the proposed method, D-M-SFA, and MCS were 38, 38, and 22,000, respectively. It can be seen that the fragility curves obtained using the proposed method are very close to those of MCS, while the calculation cost of the former is only 0.17% of that of the latter. For damage level 1, the deviation in the D-M-SFA results is obvious. Although the computational efficiency of D-M-SFA is also high, its fragility analysis result is not as accurate as that of the proposed method.

5.2. Example 2: A Reinforced Concrete Frame

A five-storey reinforced concrete frame with a storey height of 3.2 m is considered, and its configuration and section information are shown in Figure 8. The uniformly distributed load q was set to 33.75 kN·m⁻¹. The cross section of each member consists of steel reinforcement, cover concrete, and core concrete. The thickness of the cover concrete was 0.03 m. The cross section of the columns was a square with a width of 0.4 m, and the cross section of the beams was a rectangle measuring 0.3 m × 0.5 m. The material behavior of the reinforcement was simulated using a bilinear model, in which the initial elastic modulus and yield strength are Es and fs, respectively, and the ratio of the postyield to the initial stiffness is 0.05. The material behavior of the cover concrete and core concrete was simulated using the Kent–Scott–Park model [44, 45], as shown in Figure 9. Ec represents the compressive elastic modulus of the concrete, and fc is the concrete compressive strength. The crushing strength of the core concrete, fu, was 1.5 × 10⁷ Pa, and the corresponding strain, εu, was 0.015. The ratio of the unloading slope at εu to the initial slope was 0.1. The tensile strength of the core concrete, ft, was 2.2 × 10⁶ Pa, and the tension softening stiffness, Et, was 1.1 × 10¹⁰ Pa [46]. The crushing strength of the cover concrete was 5 × 10⁶ Pa, and the corresponding strain was 0.006 [47]. The beam and column members were modeled using displacement-based beam-column elements.

The maximum interstorey angle of the structure θm was taken as the EDP. Three damage levels, namely, immediate occupancy, life safety, and collapse prevention, were considered, and their thresholds of θm were taken as z1 = 1/550, z2 = 1/150, and z3 = 1/50 with reference to GB50011-2010. The material property parameters Ec, Es, fc, and fs were considered as random variables. The distribution information of the variables is presented in Table 4.

Nine initial training points were selected using LHS, as listed in Table 5. A total of 2,000 candidate points were generated. The value of ζ in the stop condition was taken as 4 × 10⁻⁴, which is approximately 1/50 of the standard deviation of the function values at the initial training points. During AL sampling, 42 training points were added to the training set, and the changes in \(MEF(\mathbf{x}^{new})\) and in the responses of the new points during the sampling process are shown in Figure 10. It can be seen that the response values and predicted values of the newly added sample points are mainly located near the limit state thresholds z1, z2, and z3. With the improvement in the metamodel accuracy, the value of \(MEF(\mathbf{x}^{new})\) decreases. The fragility curves obtained with the metamodel are shown in Figure 11, together with the curves obtained by the other two methods. In this example and in Example 3, the number of samples of the MCS method selected using LHS is 1,000. The Nc values of the proposed method, D-M-SFA, and MCS were 51, 51, and 11,000, respectively. For the three damage levels, the results of the proposed method are close to those of MCS, and the calculation cost is 0.46% of that of the latter. Moreover, the fragility curves obtained using this method are more accurate than those obtained using D-M-SFA, although their numbers of calls for finite element analysis are the same.

The global accuracy of the metamodel established using the proposed method was tested using the coefficient of determination \(R^2\) [9], which is defined as

\[
R^2 = 1 - \frac{\sum_{s=1}^{m_t} \left( y_s - \hat{y}_s \right)^2}{\sum_{s=1}^{m_t} \left( y_s - \bar{\hat{y}} \right)^2}, \tag{15}
\]

where \(m_t\) is the number of test points, \(y_s\) and \(\hat{y}_s\) represent the true and predicted responses of the s-th test point, respectively, and \(\bar{\hat{y}}\) denotes the mean of the predicted values of all test points. The value of \(R^2\) is between 0 and 1; the closer the value is to 1, the higher the model accuracy. Twelve test points were randomly selected, and the \(R^2\) value of the metamodel obtained using equation (15) was only 0.70. This indicates that the global accuracy of the metamodel is not high; however, this does not affect the precision of the fragility curves. It can be concluded that there is no need to build a globally accurate metamodel in the proposed method.
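For completeness, equation (15) as written (note that the denominator uses deviations from the mean of the predicted values, following the text) can be computed with a small NumPy sketch.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination per equation (15)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_pred.mean()) ** 2)  # mean of predictions, per the text
    return 1.0 - ss_res / ss_tot
```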

5.3. Example 3: A Steel Frame

A four-storey steel frame is considered, in which the cross sections of the beams and columns are the same, as shown in Figure 12(a). The section depth is 0.32 m, the web thickness is 0.02 m, and the flange width and thickness are 0.25 m and 0.02 m, respectively. The uniformly distributed load q on the structure was set to 30.0 kN·m⁻¹. The nonlinear characteristics of the steel were simulated using a bilinear model [45, 48], as shown in Figure 12(b). The initial elastic modulus, yield strength, and ratio of the postyield to the initial stiffness of the column steel are E1, f1, and B1, respectively; those of the beam steel are E2, f2, and B2. The displacement-based beam-column element was used to model the beam and column members.

The thresholds of θm corresponding to the three damage levels of immediate occupancy, life safety, and collapse prevention were taken as z1 = 1/250, z2 = 1/100, and z3 = 1/50 according to GB50011-2010. The material property parameters f1, f2, E1, E2, B1, and B2 were considered as random variables, whose distribution information is listed in Table 6.

This example was analyzed four times using the proposed method, with the number of initial training samples, NI, set to 3, 6, 12, and 15, respectively. The value of ζ in the stop condition was taken as 5 × 10⁻⁴. The candidate set contained 2,000 points. The iteration process of each fragility analysis is shown in Figure 13. The Nc values of the analyses were 44, 40, 41, and 43, respectively. The fragility curves are shown in Figure 14, which also shows the fragility analysis results obtained using MCS. The number of calls for finite element analysis in the MCS was 11,000. It can be seen that the results of the four analyses are all very close to the fragility curves plotted using MCS. For these different initial training sets, the iteration process of AL sampling can be carried out, and the difference in the results is small. Even when there are only three initial training points, the accuracy of the fragility curves can still be ensured. Therefore, the method is insensitive to the number of initial training samples, which reduces the difficulty of metamodeling.

6. Conclusions

To reduce the calculation burden and ensure the accuracy of fragility curves, a metamodeling method using GPR and AL is proposed for fragility analysis considering the uncertainties of the ground motions and structural parameters. In this procedure, the same assumption on the seismic response as in D-M-SFA is adopted to handle the record-to-record variability, but the number of metamodels is reduced to one. The presented AL sampling strategy can adaptively enrich the training sample set according to the complexity of the structure. In the developed approach, only a few nonlinear time history analyses are needed to establish the metamodel. The method was applied to a nonlinear SDOF system, a reinforced concrete frame, and a steel frame, and the results were compared with those of the MCS and D-M-SFA methods. The main conclusions are as follows:

(1) The fragility curves obtained using the proposed approach are very close to those of MCS, but the calculation cost is much less than that of the latter, which validates the efficiency and accuracy of the method.
(2) With the same number of calls to finite element analysis, the results of the developed method are more accurate than those of the D-M-SFA method. Moreover, the method avoids the extra computation cost caused by model testing.
(3) In the AL sampling process, the proposed method improves the accuracy of the metamodel in the vicinities of the limit states corresponding to the damage levels. As a result, even if the established metamodel is not globally accurate, the accuracy of the fragility curves can still be ensured. Moreover, the number of initial training samples has no obvious influence on the results, which demonstrates the robustness of the method.

The developed procedure can be integrated into reliability-based design optimization methods for solving optimization problems considering record-to-record variability. This will be studied in future work.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (grant no. 51078311).