Abstract

In the context of performance-based earthquake engineering, reliability methods have been of significant importance in the performance and risk assessment of structures and soil-structure interaction (SSI) systems. The finite element (FE) reliability method combines FE analysis with state-of-the-art methods in reliability analysis and has been employed increasingly to estimate the probability of occurrence of failure events corresponding to various hazard levels (e.g., earthquakes with various intensities). In this paper, crucial components of FE reliability analysis are reviewed and summarized. Furthermore, recent advances in both time invariant and time variant reliability analysis methods for realistic nonlinear SSI systems are presented and applied to a two-dimensional two-story building on layered soil. Various time invariant reliability analysis methods are applied, including the first-order reliability method (FORM), the importance sampling method, and the orthogonal plane sampling (OPS) method. For time variant reliability analysis, an upper bound of the failure probability is obtained from numerical integration of the mean outcrossing rate (MOCR). The MOCR is computed by using FORM analysis and OPS analysis. Results obtained by the different FE reliability methods are compared in terms of accuracy and computational cost. This paper provides valuable insights for reliability-based probabilistic performance and risk assessment of SSI systems.

1. Introduction

In the context of performance-based earthquake engineering (PBEE), a challenging task for structural engineers is to provide performance and risk assessments for structures or soil-structure interaction (SSI) systems over their design lifetime. In order to fulfill this task successfully, all pertinent sources of aleatory and epistemic uncertainty must be accounted for during the design process. Thus, proper methods are required for studying the propagation of uncertainties from the model parameters describing the geometry, the material behavior, and the applied loading to the structural responses used in defining performance limit states. These methods also need to be integrated with methodologies already well known to engineers for structural response simulation, such as the finite element (FE) method [1–4]. In this context, the FE reliability method combines FE modeling and seismic response analysis of structures or SSI systems with state-of-the-art methods in reliability analysis and has been employed increasingly to estimate the probability of occurrence of failure events corresponding to various hazard levels (e.g., the failure probabilities of the system under earthquakes with various intensities). Recently, the FE reliability analysis method has been implemented in a general-purpose software framework for nonlinear FE response analysis, namely OpenSees [5]. OpenSees is an open source, object-oriented software framework written in the C++ programming language for static and dynamic, linear and nonlinear finite element (FE) analysis of structural and/or geotechnical systems. This framework has been under development by the Pacific Earthquake Engineering Research Center (PEER) since 1997 and has become increasingly widely used in the academic community. OpenSees provides capabilities for response, response sensitivity, and reliability analyses of structural, geotechnical, and soil-foundation-structure interaction (SFSI) systems [5].

This paper presents reliability analysis methods for performance and risk assessment of SSI systems. The focus is on comparing and verifying various reliability methods, that is, FORM, IS, OPS, and so forth. As is well known, the SSI effect plays a crucial role in nonlinear FE analysis. It may be beneficial or detrimental to the response of the superstructure, depending on the characteristics of the earthquake motion, soil, foundation, and superstructure, and it is almost impossible to draw general conclusions about the SSI effect on the system during seismic motions [6–9]. Based on studies not shown in this paper, a similar observation can be made for the reliability of SSI systems: the SSI effect plays a crucial role in the reliability analysis of nonlinear soil-structure systems, and whether it is beneficial or detrimental to the superstructure depends on the probabilistic properties of the earthquake motion, soil, foundation, and superstructure. General conclusions about the SSI effect on the system can hardly be drawn unless a full reliability analysis is performed.

Against this background, the paper presents a brief summary of the crucial components of FE reliability analysis theory and recent advances in extending the FE reliability theory to realistic, complicated geotechnical and SSI systems. For time invariant reliability analysis, various methods are implemented/adopted/developed in OpenSees, that is, the first- and second-order reliability methods (FORM and SORM), the importance sampling (IS) method, the orthogonal plane sampling (OPS) method, and the crude Monte Carlo simulation (MCS) method [10–12]. For time variant reliability analysis, an upper bound of the failure probability is obtained from numerical integration of the mean outcrossing rate (MOCR) [13]. The MOCR is computed by FORM analysis combined with Koo's equation [14] and by OPS analysis [10], respectively. Results obtained from the different FE reliability methods are compared in terms of accuracy and computational cost. This paper provides valuable insights for probabilistic performance and risk assessment of SSI systems based on FE reliability analysis.

2. Crucial Components in Time Invariant FE Reliability Analysis

2.1. Random Variables, Performance Function, and Failure Probability

In general, the structural reliability problem consists of computing the probability of failure of a given structure, defined as the probability of exceeding a specified limit state (or damage state) when the uncertain model quantities are modeled as random variables (RVs). In finite element (FE) reliability methods, it is important to define appropriate RVs for measuring the uncertainties in the FE model. A vector of basic RVs $\boldsymbol{\theta} = [\theta_1, \theta_2, \ldots, \theta_n]^T$ is used to characterize the probabilistic properties of the computational model, for example, loads and environmental actions, material properties, geometric properties, boundary conditions, model uncertainty parameters, and so forth. Every RV has a probability distribution function (PDF) (or marginal PDF, e.g., normal distribution, lognormal distribution, beta distribution, etc.). The relationship between two random variables $\theta_i$ and $\theta_j$ is specified by the correlation coefficient $\rho_{ij}$, with a value between −1 and 1, defined as $\rho_{ij} = \mathrm{Cov}(\theta_i, \theta_j)/(\sigma_i \sigma_j)$, where the covariance of $\theta_i$ and $\theta_j$ is defined as $\mathrm{Cov}(\theta_i, \theta_j) = E[(\theta_i - \mu_i)(\theta_j - \mu_j)]$. Here $E[\cdot]$ denotes the expectation and $\mu_i$ is the mean of RV $\theta_i$. The variance of $\theta_i$ is defined as $\mathrm{Var}(\theta_i) = E[(\theta_i - \mu_i)^2] = \sigma_i^2$, where $\sigma_i$ is the standard deviation of $\theta_i$. It is easy to show that the covariance matrix $\boldsymbol{\Sigma}$, the correlation coefficient matrix $\mathbf{R}$, and the diagonal matrix of standard deviations $\mathbf{D} = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$ are related by $\boldsymbol{\Sigma} = \mathbf{D}\mathbf{R}\mathbf{D}$ [11, 15].
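As a minimal illustration (not part of the original study; the standard deviations and correlation values are assumed), the following Python sketch shows the relation $\boldsymbol{\Sigma} = \mathbf{D}\mathbf{R}\mathbf{D}$ between the covariance matrix, the correlation coefficient matrix, and the standard deviations of a set of RVs.

```python
# Minimal sketch (assumed values): covariance vs. correlation for three RVs.
import numpy as np

sigma = np.array([2.0, 0.5, 1.5])          # hypothetical standard deviations
R = np.array([[1.0, 0.3, 0.0],             # hypothetical correlation matrix
              [0.3, 1.0, -0.2],
              [0.0, -0.2, 1.0]])

D = np.diag(sigma)                         # diagonal matrix of standard deviations
Sigma = D @ R @ D                          # covariance: Sigma_ij = rho_ij * sigma_i * sigma_j

# recover the correlation matrix from the covariance matrix
R_back = np.diag(1.0 / sigma) @ Sigma @ np.diag(1.0 / sigma)
assert np.allclose(R, R_back)
```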

In reliability analysis, the performance criteria are very important in describing the failure of structures or geotechnical systems. Failure is generally defined as not meeting the performance criteria. In order to define failure, a performance function is defined in terms of engineering demand parameters (EDPs), damage measures, or decision variables. The performance function, also called the limit state function (LSF), is commonly denoted as $g = g(\mathbf{r}(\boldsymbol{\theta}), \boldsymbol{\theta})$, where $\mathbf{r}$ denotes a vector of response quantities of interest and $\boldsymbol{\theta}$ is the vector of all basic RVs. The LSF is chosen such that $g \le 0$ defines the failure domain/region. A general form taken by the LSF is
$$g(\mathbf{r}(\boldsymbol{\theta}), \boldsymbol{\theta}) = r_{\lim} - r(\boldsymbol{\theta}), \quad (1)$$
where $r_{\lim}$ is the threshold defining the failure event. The response quantity $r(\boldsymbol{\theta})$ is computed from FE analysis, may involve stresses, strains, displacements, accumulated damage, and so forth, and usually is an implicit function of the RVs $\boldsymbol{\theta}$. This paper focuses on component reliability problems, that is, problems described by a single LSF $g$. Thus, the time invariant component reliability problem takes the following mathematical form:
$$P_f = P[g(\mathbf{r}(\boldsymbol{\theta}), \boldsymbol{\theta}) \le 0] = \int_{g \le 0} f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})\, d\boldsymbol{\theta}, \quad (2)$$
where $f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ denotes the joint PDF of the random variables $\boldsymbol{\theta}$. In time variant reliability problems, the objective is computing the time variant failure probability $P_f(0, T)$, corresponding to the probability of at least one outcrossing of the LSF during the time interval $[0, T]$. An upper bound of $P_f(0, T)$ is given by (for small failure probability [16])
$$P_f(0, T) \le \int_0^T \nu(t)\, dt, \quad (3)$$
where $\nu(t)$ denotes the mean downcrossing rate of level zero of the LSF $g$ and $t$ represents the time.

2.2. Design Point, FORM, and SORM

The problem in Formulas (2) and (3) is extremely challenging for real-world structures and is usually solved numerically. A well-established method consists of introducing a one-to-one mapping $\mathbf{y} = \mathbf{y}(\boldsymbol{\theta})$ between the physical space of variables $\boldsymbol{\theta}$ and the standard normal space (SNS) of variables $\mathbf{y}$ [11] and then computing the probability of failure as
$$P_f = \int_{G(\mathbf{y}) \le 0} \varphi_n(\mathbf{y})\, d\mathbf{y}, \quad (4)$$
where $\varphi_n(\mathbf{y})$ denotes the standard normal joint PDF and $G(\mathbf{y}) = g(\boldsymbol{\theta}(\mathbf{y}))$ is the LSF in the SNS. Solving the integral in Formula (4) remains a formidable task, but this new form of $P_f$ is suitable for approximate solutions that take advantage of the rotational symmetry of $\varphi_n(\mathbf{y})$ and its exponential decay in both the radial and tangential directions. An optimum point on the limit state surface (LSS) is the so-called "design point" (DP), $\mathbf{y}^*$ (see Figure 1), which is defined as the most likely failure point on the LSS in the SNS, that is, the point on the LSS that is closest to the origin. If the LSS in the SNS is approximated by its linearized plane or by a second-order polynomial surface at the DP (see Figure 1), then the first-/second-order estimate of the failure probability can be computed, and the corresponding methods are called the first-/second-order reliability methods (FORM/SORM), respectively. As shown in Figure 1, since most of the contribution to the failure probability comes from the shaded region close to the DP within the failure domain, and considering that the PDF decays exponentially in both the radial and tangential directions, FORM and SORM may be accurate enough when the LSS is not highly nonlinear close to the DP. The first-order estimate of the failure probability is usually expressed in terms of an equivalent reliability index $\beta$ as
$$P_{f1} = \Phi(-\beta), \qquad \beta = \|\mathbf{y}^*\|, \quad (5)$$
where $\Phi(\cdot)$ denotes the standard normal cumulative distribution function (CDF).
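The following Python sketch (a toy quadratic LSF in the SNS, assumed purely for illustration and unrelated to the paper's SSI model) compares the first-order estimate $P_{f1} = \Phi(-\beta)$ with a crude Monte Carlo estimate; because the toy LSS is curved, the first-order estimate is only approximate, consistent with the discussion above.

```python
# Minimal sketch (toy LSF in the SNS, assumed): FORM estimate vs. crude Monte Carlo.
import numpy as np
from scipy.stats import norm

# toy LSF G(y) = 3 - y1 + 0.25*y2^2; its design point is y* = (3, 0), so beta = 3
beta = 3.0
pf_form = norm.cdf(-beta)                  # first-order (FORM) failure probability

rng = np.random.default_rng(0)
y = rng.standard_normal((2_000_000, 2))    # samples in the standard normal space
pf_mc = np.mean(3.0 - y[:, 0] + 0.25 * y[:, 1] ** 2 <= 0.0)

# FORM overestimates Pf here: the failure domain is smaller than the half-space
# bounded by the plane tangent to the LSS at the design point.
print(pf_form, pf_mc)
```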

Figure 1 shows the contours of the joint PDF of the RVs in the physical space and in the SNS, respectively. The DP is the point on the LSS in the SNS that is closest to the origin (i.e., $\mathbf{y}^* = \arg\min\{\|\mathbf{y}\| : G(\mathbf{y}) = 0\}$), and FORM/SORM correspond to first-/second-order approximations of the LSS at the DP.

2.2.1. Joint Distribution Model and Probability Transformation

In engineering problems, usually only the marginal distributions and the correlation structure of the random variables are available. In this case, several models can be used to construct the joint PDF from this incomplete information, including the Rosenblatt, Nataf, Hermite, and Morgenstern models [17, 18]. The Nataf model is used in this paper since it allows a relatively wide range of correlation values. In the Nataf model, the joint distribution is completely defined by the marginal distributions and the correlation structure of the RVs. The joint PDF obtained from the marginal PDFs and the correlation coefficient matrix $\mathbf{R}_0$ of the underlying normal variables is
$$f_{\boldsymbol{\Theta}}(\boldsymbol{\theta}) = \varphi_n(\mathbf{z}, \mathbf{R}_0)\, \frac{f_{\theta_1}(\theta_1)\, f_{\theta_2}(\theta_2) \cdots f_{\theta_n}(\theta_n)}{\varphi(z_1)\,\varphi(z_2)\cdots\varphi(z_n)}, \quad (6)$$
where $\varphi_n(\mathbf{z}, \mathbf{R}_0)$ is the $n$-dimensional joint normal PDF of variables $\mathbf{z}$ with zero means, unit variances, and correlation coefficient matrix $\mathbf{R}_0$; $\varphi(\cdot)$ is the standard normal PDF; and $z_i = \Phi^{-1}[F_{\theta_i}(\theta_i)]$ is the one-to-one mapping of the RV $\theta_i$ with arbitrary PDF to the RV $z_i$ with standard normal distribution. $F_{\theta_i}(\cdot)$ and $\Phi(\cdot)$ denote the cumulative distribution function (CDF) of a variable with arbitrary distribution and the standard normal CDF, respectively. The correlation coefficient $\rho_{ij}$ of the physical variables and the correlation coefficient $\rho_{0,ij}$ of the underlying normal variables are related by
$$\rho_{ij} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left(\frac{\theta_i - \mu_i}{\sigma_i}\right)\left(\frac{\theta_j - \mu_j}{\sigma_j}\right) \varphi_2(z_i, z_j, \rho_{0,ij})\, dz_i\, dz_j, \quad (7)$$
where $\varphi_2(z_i, z_j, \rho_{0,ij})$ is the bivariate normal PDF with zero means, unit variances, and correlation coefficient $\rho_{0,ij}$. By iteration, $\rho_{0,ij}$ can be solved for in terms of the given value of $\rho_{ij}$ [19].
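As a hedged illustration of the Nataf correlation adjustment (the lognormal marginals and the target correlation of 0.5 are assumed, not taken from the paper), the following sketch solves Formula (7) for the underlying Gaussian correlation by Gauss-Hermite quadrature and root finding.

```python
# Minimal sketch (assumed marginals and target correlation): Nataf correlation adjustment.
import numpy as np
from scipy.stats import norm, lognorm
from scipy.optimize import brentq

marg = lognorm(s=0.3)            # hypothetical lognormal marginal (same for both RVs)
rho_target = 0.5                 # target correlation in the physical space
mu, sig = marg.mean(), marg.std()

# Gauss-Hermite rule for expectations over two independent standard normal variables
x, w = np.polynomial.hermite.hermgauss(16)
Z1, Z2 = np.meshgrid(np.sqrt(2.0) * x, np.sqrt(2.0) * x)
W = np.outer(w, w) / np.pi

def implied_rho(rho0):
    # correlation of the physical variables implied by Gaussian correlation rho0
    u1 = Z1
    u2 = rho0 * Z1 + np.sqrt(1.0 - rho0**2) * Z2
    p1 = np.clip(norm.cdf(u1), 1e-12, 1.0 - 1e-12)   # guard the extreme tails
    p2 = np.clip(norm.cdf(u2), 1e-12, 1.0 - 1e-12)
    x1, x2 = marg.ppf(p1), marg.ppf(p2)
    return np.sum(W * (x1 - mu) * (x2 - mu)) / sig**2

rho0 = brentq(lambda r: implied_rho(r) - rho_target, -0.999, 0.999)
print(rho0)   # slightly larger than 0.5 because the marginals are non-normal
```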

2.2.2. Design Point (DP) Search Algorithm

In DP-based structural reliability analysis methods such as FORM and SORM, an efficient and robust DP search algorithm is crucial. The DP search is an optimization problem, because the DP is the point on the LSS that has the smallest distance from the origin in the standard normal space (SNS):
$$\mathbf{y}^* = \arg\min\{\|\mathbf{y}\| : G(\mathbf{y}) = 0\}. \quad (8)$$

Several methods exist to obtain the DP. The so-called HLRF algorithm, originally developed in [20] and later extended to nonnormal random variables in [21], is one of the most popular. Recently, the author and coworkers have incorporated into OpenSees a general-purpose sparse nonlinear optimization toolbox (SNOPT, a state-of-the-art numerical optimization software [22, 23]). The extended OpenSees-SNOPT framework is general and flexible and can be used to solve a wide range of FE-based optimization problems (including the DP search) in structural and geotechnical engineering. SNOPT has several distinguishing features that make it particularly suitable for DP search problems: applicability to large scale problems, especially those with a large number of constraints and variables and a moderate number of degrees of freedom; economical computation, since SNOPT uses techniques such as early termination of the quadratic programming (QP) subproblems and requires relatively few evaluations of the problem functions; robustness and reliability, such as the ability to deal with infeasible points (e.g., when the FE analysis does not converge) and tolerance of discontinuities in the function gradients; and flexible options that allow customizing SNOPT for design point search problems [23].
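For illustration only, the following sketch implements the classical HLRF iteration [20, 21] for an assumed analytic LSF in the SNS; it is not the paper's OpenSees-SNOPT implementation, in which the LSF and its gradient would come from FE response and response sensitivity analyses.

```python
# Minimal sketch (toy LSF, assumed): the classical HLRF design point search.
import numpy as np
from scipy.stats import norm

def G(y):                        # toy LSF in the SNS
    return 3.0 - y[0] + 0.25 * y[1] ** 2

def grad_G(y):
    return np.array([-1.0, 0.5 * y[1]])

def hlrf(G, grad_G, n, tol=1e-8, max_iter=100):
    y = np.zeros(n)              # start at the origin (mean point in the SNS)
    for _ in range(max_iter):
        g, dg = G(y), grad_G(y)
        alpha = -dg / np.linalg.norm(dg)              # negative normalized gradient
        beta = alpha @ y + g / np.linalg.norm(dg)     # current distance estimate
        y_new = beta * alpha                          # HLRF update
        if np.linalg.norm(y_new - y) < tol:
            return y_new, np.linalg.norm(y_new)
        y = y_new
    return y, np.linalg.norm(y)

y_star, beta = hlrf(G, grad_G, n=2)
print(y_star, beta, norm.cdf(-beta))    # design point, reliability index, FORM Pf
```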

2.2.3. FE Response Sensitivity Computation

FE response sensitivity analysis is an essential ingredient for providing function gradients in the DP search process. It is also a necessary step for the gradient-based optimization methods required in various subfields of structural and geotechnical engineering, such as structural optimization, system identification, and FE model updating [11, 24, 25]. In addition, FE response sensitivities are invaluable for gaining insight into the effects and relative importance of system and loading parameters on the system response. If $r$ denotes a generic scalar response quantity, the sensitivity of $r$ with respect to the geometric, material, or loading parameter $\theta$ is the partial derivative $\partial r/\partial\theta$ evaluated at $\theta = \theta_0$, where $\theta_0$ denotes the nominal value taken by the sensitivity parameter in the FE response analysis.

Several methodologies are available for response sensitivity computation, such as the Finite Difference Method (FDM), the Adjoint Method (AM), the Perturbation Method (PM), and the Direct Differentiation Method (DDM) [24–32]. The FDM is the simplest method for response sensitivity computation but is computationally expensive and can be negatively affected by numerical noise. The AM is extremely efficient for linear and nonlinear elastic structural models, but is not competitive with other methods for path-dependent (i.e., nonlinear inelastic) problems. The PM is computationally efficient but generally not very accurate. The DDM, on the other hand, is general, efficient, and accurate and is applicable to any material constitutive model. These advantages can be obtained at the one-time cost of differentiating analytically the space- and time-discrete equations describing the structural response and implementing these "exact" derivatives in a FE program.
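The following toy example (a single nonlinear spring, not an FE model) contrasts a forward finite difference sensitivity with the sensitivity obtained by exact differentiation of the discrete equilibrium equation, which is the idea underlying the DDM.

```python
# Minimal sketch (toy problem, assumed): FDM vs. exact differentiation of the
# discrete equilibrium equation k*u + a*u^3 = P with respect to the stiffness k.
import numpy as np
from scipy.optimize import brentq

a, P = 2.0, 10.0

def solve_u(k):
    # "response analysis": solve the nonlinear equilibrium equation for u
    return brentq(lambda u: k * u + a * u**3 - P, 0.0, 100.0)

k0 = 5.0
u0 = solve_u(k0)

# DDM-style sensitivity: differentiate k*u + a*u^3 - P = 0 with respect to k,
# giving (k + 3*a*u^2) du/dk + u = 0
dudk_ddm = -u0 / (k0 + 3.0 * a * u0**2)

# forward finite difference sensitivity (accuracy depends on the perturbation size)
h = 1e-6 * k0
dudk_fdm = (solve_u(k0 + h) - solve_u(k0)) / h

print(dudk_ddm, dudk_fdm)   # nearly identical for this smooth toy problem
```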

In the DDM, the consistent FE response sensitivities are computed at each time step, after convergence is achieved for the response computation. This requires the exact or analytical differentiation of the FE response algorithm with respect to each sensitivity parameter. Consequently, the response sensitivity calculation algorithm affects the various hierarchical layers of the FE response calculation, namely, the structure level, the element level, the integration point (section, for frame/truss elements) level, and the material level. Details on the derivation of the DDM sensitivity equations can be found in the literature [12, 25, 27]. Recently, the DDM-based response sensitivity algorithms have been greatly extended to complicated structures and SSI systems, closing the gap between FE response sensitivity computation using the DDM and state-of-the-art FE response-only analysis. These extensions, corresponding to the various hierarchical layers, include DDM for linear and nonlinear FE models with multipoint constraints (MPCs), namely, the transformation equation method, the Lagrange multiplier method, and the penalty function method [30, 31]; DDM for various elements, such as frame elements, four-node quadrilateral elements, eight-node brick elements, and so forth; DDM for sections such as the fiber section; and DDM for material models, including various one-dimensional (1D) concrete and steel constitutive models, the 3D multi-yield-surface soil model, the 3D bounding surface soil model, the 3D cap concrete plasticity model, and so forth [12, 32]. These extensions enable the DDM to be applied to complicated realistic SSI systems, providing an efficient and accurate response gradient computation method for DP search algorithms (e.g., SNOPT).

2.3. Sampling Techniques

As mentioned above, an important goal of structural reliability analysis is estimating the failure probability for a given performance function or LSF. Unlike DP-based analytical methods such as FORM and SORM, sampling techniques achieve this goal by sampling a PDF using computer-generated random numbers. The most commonly used method is Monte Carlo simulation (MCS), in which the failure probability is estimated by
$$P_f \approx \hat{P}_f = \frac{1}{N}\sum_{i=1}^{N} I(\boldsymbol{\theta}_i), \quad (9)$$
where $N$ is the number of samples and the indicator function $I(\boldsymbol{\theta})$ is defined as
$$I(\boldsymbol{\theta}) = \begin{cases} 1, & g(\boldsymbol{\theta}) \le 0, \\ 0, & g(\boldsymbol{\theta}) > 0. \end{cases} \quad (10)$$

In this paper, the sampling is performed in the standard normal space (SNS, or U space) and the sampling distribution is centered at the origin of the SNS for simplicity. In MCS, the variance of the estimator can be computed by $\mathrm{Var}(\hat{P}_f) = \hat{P}_f(1 - \hat{P}_f)/N$ and the coefficient of variation by $\sqrt{\mathrm{Var}(\hat{P}_f)}/\hat{P}_f$. MCS is usually treated as the "actual" or "true" solution of a failure probability. However, MCS is often unfeasible because the failure probability is usually small (i.e., 1e-2 to 1e-8), and a large number of samples may be required in order to obtain a good estimate of the failure probability. In practice this may cause extremely high computational cost, considering that the nonlinear FE analysis of realistic SSI systems is very time consuming. Thus, several improved simulation methods are adopted, including importance sampling (IS), orthogonal plane sampling (OPS), and so forth.
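A minimal sketch of the crude MCS estimator, its variance, and its coefficient of variation for the same assumed toy LSF used above (not the paper's SSI model):

```python
# Minimal sketch (toy LSF, assumed): crude Monte Carlo estimate of Pf and its c.o.v.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
u = rng.standard_normal((N, 2))              # sampling in the standard normal space

g = 3.0 - u[:, 0] + 0.25 * u[:, 1] ** 2      # toy LSF; g <= 0 defines failure
I = (g <= 0.0).astype(float)                 # indicator function, Formula (10)

pf = I.mean()                                # Formula (9)
var_pf = pf * (1.0 - pf) / N                 # variance of the estimator
cov_pf = np.sqrt(var_pf) / pf                # coefficient of variation
print(pf, cov_pf)
```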

2.3.1. Importance Sampling Method (IS)

IS improves the simulation efficiency by biasing the choice of the random vectors through a designed sampling distribution $h(\boldsymbol{\theta})$, called the importance sampling distribution [33], which places a greater portion of its probability mass than the actual PDF $f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})$ in the failure domain (i.e., in the region $g \le 0$). Then
$$P_f = \int I(\boldsymbol{\theta})\, \frac{f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})}{h(\boldsymbol{\theta})}\, h(\boldsymbol{\theta})\, d\boldsymbol{\theta} \approx \hat{P}_f = \frac{1}{N}\sum_{i=1}^{N} q(\boldsymbol{\theta}_i), \quad (11)$$
where $q(\boldsymbol{\theta}) = I(\boldsymbol{\theta})\, f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})/h(\boldsymbol{\theta})$. The variance of the estimator is $\mathrm{Var}(\hat{P}_f) = \mathrm{Var}(q)/N$, which can be estimated from the sample variance of the values $q(\boldsymbol{\theta}_i)$, while the coefficient of variation is $\sqrt{\mathrm{Var}(\hat{P}_f)}/\hat{P}_f$. In this paper, the sampling is performed in the SNS (U space) instead of the physical space, and $h(\cdot)$ is selected to be a Gaussian distribution centered at the design point (DP); see Figure 2.
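A minimal sketch of importance sampling in the SNS with the sampling density centered at the DP, again for the assumed toy LSF (the DP location is known analytically here; in the paper it comes from the FE-based DP search):

```python
# Minimal sketch (toy LSF, assumed): importance sampling centered at the design point.
import numpy as np

rng = np.random.default_rng(2)
N = 5_000

y_star = np.array([3.0, 0.0])                    # design point of the toy LSF below
u = y_star + rng.standard_normal((N, 2))         # samples from h(u), a shifted normal

g = 3.0 - u[:, 0] + 0.25 * u[:, 1] ** 2
I = (g <= 0.0)

# likelihood ratio f(u)/h(u) for standard normal f and unit-covariance h centered at y*
log_w = -0.5 * np.sum(u**2, axis=1) + 0.5 * np.sum((u - y_star) ** 2, axis=1)
q = I * np.exp(log_w)                            # per-sample contribution, Formula (11)

pf = q.mean()
cov_pf = q.std(ddof=1) / np.sqrt(N) / pf
print(pf, cov_pf)                                # far fewer samples than crude MCS
```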

It is worth mentioning that IS may not be suitable for FE reliability analysis because of possible nonconvergence of the nonlinear FE analysis. From Formula (11), it is clear that even one point in the failure domain can significantly change the entire failure probability estimate if its weight $f_{\boldsymbol{\Theta}}(\boldsymbol{\theta})/h(\boldsymbol{\theta})$ is large; conversely, the failure probability may be seriously underestimated if such a point is lost because the FE analysis fails to converge. Unfortunately, this does happen in some cases in practice, according to the author's experience not shown in this paper. The orthogonal plane sampling method may overcome this shortcoming.

2.3.2. Orthogonal Plane Sampling Method (OPS)

In the standard normal space (SNS), OPS requires the DP $\mathbf{y}^*$ and its corresponding unit vector $\boldsymbol{\alpha} = \mathbf{y}^*/\|\mathbf{y}^*\|$. The orthogonal plane is defined as the plane normal to $\boldsymbol{\alpha}$ and passing through the origin O (see Figure 3). Consider a rotation of the coordinate system in the SNS such that one axis coincides with $\boldsymbol{\alpha}$. Assume that the limit state surface (LSS) can be written as a single-valued function giving, after this rotation, the distance from the orthogonal plane to the LSS. This assumption sometimes does not hold (e.g., when the LSS is spherically shaped, two values satisfy the defining equation); however, it may hold over a reasonable neighborhood of the DP [14]. For points in the failure domain that are far from the DP (e.g., whose distance from the orthogonal plane is much larger than the reliability index $\beta$), the contribution to the failure probability is negligibly small. In the SNS, for any sampling point $\mathbf{u}$, the first step is to project the point onto the orthogonal plane to obtain the projection point $\mathbf{u}_\perp = \mathbf{u} - (\boldsymbol{\alpha}\cdot\mathbf{u})\boldsymbol{\alpha}$. The distance $x$ from the orthogonal plane to the LSS along the DP direction is computed by solving $G(\mathbf{u}_\perp + x\boldsymbol{\alpha}) = 0$, after which the failure probability can be computed as
$$P_f = E[\Phi(-x(\mathbf{U}_\perp))], \quad (12)$$
where $E[\cdot]$ denotes the expectation with respect to the distribution of the projected points. Defining the random quantity $q = \Phi(-x(\mathbf{U}_\perp))$, the failure probability is computed as $\hat{P}_f = (1/N)\sum_{i=1}^{N} q_i$, where $q_i$ corresponds to the $i$th sampling point. The variance of the estimator can be computed by $\mathrm{Var}(\hat{P}_f) = \mathrm{Var}(q)/N$, and the coefficient of variation is $\sqrt{\mathrm{Var}(\hat{P}_f)}/\hat{P}_f$. In practice, the orthogonal plane sampling method has five steps: (1) in U space, simulate an $n$-dimensional standard normal vector $\mathbf{u}$; (2) project this point onto the orthogonal plane to obtain $\mathbf{u}_\perp$; (3) along the DP direction, compute the projection point of $\mathbf{u}_\perp$ onto the LSS by solving $G(\mathbf{u}_\perp + x\boldsymbol{\alpha}) = 0$ for $x$ using a one-dimensional line search zero-finding algorithm; (4) compute $q = \Phi(-x)$ and update the failure probability estimate as shown in Formula (12); (5) repeat steps 1–4 until the maximum number of simulations is reached or the target coefficient of variation is achieved.
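A minimal sketch of the OPS steps for the same assumed toy LSF (the DP direction is known analytically here; in the paper it would come from the FE-based DP search):

```python
# Minimal sketch (toy LSF, assumed): orthogonal plane sampling.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def G(u):                                        # toy LSF in the SNS
    return 3.0 - u[0] + 0.25 * u[1] ** 2

alpha = np.array([1.0, 0.0])                     # unit DP direction of this toy LSF
rng = np.random.default_rng(3)
N = 500

q = np.empty(N)
for i in range(N):
    u = rng.standard_normal(2)
    u_perp = u - (alpha @ u) * alpha             # projection onto the orthogonal plane
    # distance from the orthogonal plane to the LSS along the DP direction
    x = brentq(lambda s: G(u_perp + s * alpha), -20.0, 20.0)
    q[i] = norm.cdf(-x)                          # contribution per Formula (12)

pf = q.mean()
cov_pf = q.std(ddof=1) / np.sqrt(N) / pf
print(pf, cov_pf)
```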

An advantage of the orthogonal plane sampling method is that the DP does not need to be very accurate, which saves effort in the DP search process. For each sampling point, however, the limit state function has to be evaluated repeatedly in order to locate the point on the LSS. Thus, it is necessary to adopt an efficient and practical (in the sense that it is able to deal with FE nonconvergence) zero-finding algorithm to reduce the number of evaluations of the LSF [12].

3. Time Variant FE Reliability Analysis

There are two types of failures caused by random vibrations: first excursion failures (or yield failures), defined by the stochastic structural response first reaching an upper level, and fatigue failures, defined as the accumulated damage due to response cycles of small or moderate amplitude reaching a total limit criterion. This paper emphasizes the first type of failure, for which the failure probability can be defined as
$$P_f(0, T) = P\{\exists\, t \in [0, T] : g(\mathbf{r}(\boldsymbol{\theta}, t), \boldsymbol{\theta}) \le 0\}. \quad (13)$$

Solving (13) analytically is extremely difficult, if not completely impossible. Usually the time variant failure probability is estimated by an upper bound using Formula (3). The mean outcrossing rate of a given level at time $t$ is defined as $\nu(t) = \lim_{\delta t \to 0} E[N(t, t + \delta t)]/\delta t$, where $N(t, t + \delta t)$ is the number of outcrossing events in $[t, t + \delta t]$. Computing the mean outcrossing rate is still a formidable task. Rice gives an analytical solution as $\nu(\xi, t) = \int_0^\infty \dot{x}\, f_{X\dot{X}}(\xi, \dot{x}, t)\, d\dot{x}$, where $f_{X\dot{X}}$ is the joint probability density function of the response process underlying the limit state function and its time derivative at time $t$ [34]. This joint density function is extremely difficult or nearly impossible to obtain for a general nonlinear inelastic system. Alternatively, the numerical evaluation of the mean outcrossing rate reduces to a time invariant two-component parallel system reliability analysis [13]:
$$\nu(t)\,\delta t \approx P\big[\{g(\boldsymbol{\theta}, t) > 0\} \cap \{g(\boldsymbol{\theta}, t + \delta t) \le 0\}\big]. \quad (14)$$
This equation is understood as one or more outcrossing events having happened during the interval $\delta t$. In practice, (14) is solved using a perturbation approach, that is, with a small finite $\delta t$. To get an accurate result, the perturbation must be small enough. However, the two events $\{g(\boldsymbol{\theta}, t) > 0\}$ and $\{g(\boldsymbol{\theta}, t + \delta t) \le 0\}$ occur nearly at the same time (only $\delta t$ apart) and with opposite inequality signs. If $\delta t$ is too small, the correlation coefficient between the two linearized events approaches −1, causing a singularity in the evaluation of the intersection probability and a loss of accuracy. In practice, a suitable choice of $\delta t$ can be related to $\Delta t$, the interval used to discretize the excitation [14].
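To illustrate Formulas (3) and (14) outside the original study, the following sketch estimates the mean outcrossing rate of a toy stationary Gaussian process from the two-event probability and compares it with Rice's closed-form solution; all process parameters are assumed.

```python
# Minimal sketch (toy stationary Gaussian process, assumed parameters):
# nu(t)*dt ≈ P[ g(t) > 0 and g(t + dt) <= 0 ], with g = xi - X(t), checked against Rice.
import numpy as np

rng = np.random.default_rng(4)
sigma, omega, xi = 1.0, 2.0 * np.pi, 2.0       # process std, frequency, threshold
dt, t, T = 0.02, 1.0, 5.0                      # perturbation, evaluation time, duration

# X(t) = A*cos(w t) + B*sin(w t): stationary Gaussian, std sigma, derivative std w*sigma
N = 1_000_000
A = sigma * rng.standard_normal(N)
B = sigma * rng.standard_normal(N)
x1 = A * np.cos(omega * t) + B * np.sin(omega * t)
x2 = A * np.cos(omega * (t + dt)) + B * np.sin(omega * (t + dt))

# an outcrossing occurs when g(t) = xi - X(t) > 0 and g(t + dt) <= 0
nu_mc = np.mean((x1 < xi) & (x2 >= xi)) / dt

# Rice's formula for this stationary Gaussian process
nu_rice = omega / (2.0 * np.pi) * np.exp(-xi**2 / (2.0 * sigma**2))

# upper bound on Pf(0, T) per Formula (3): integral of the (constant) outcrossing rate
pf_upper = nu_rice * T
print(nu_mc, nu_rice, pf_upper)                # nu_mc and nu_rice agree closely
```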

3.1. Estimation of Mean Outcrossing Rate Using FORM and Koo’s Formula

To solve (14), the first-order reliability method (FORM) is employed to compute the intersection probability. As shown in Figure 4, in the standard normal space (SNS), the FORM approximation and the exact region of $\{g(t) > 0\} \cap \{g(t + \delta t) \le 0\}$ are plotted, respectively. The two DPs at times $t$ and $t + \delta t$ are $\mathbf{y}_1^*$ and $\mathbf{y}_2^*$, with DP directions $\boldsymbol{\alpha}_1$ and $\boldsymbol{\alpha}_2$. In the FORM approximation, the two limit state surfaces (LSSs) linearized at the design points are $\beta_1 - \boldsymbol{\alpha}_1\cdot\mathbf{y} = 0$ and $\beta_2 - \boldsymbol{\alpha}_2\cdot\mathbf{y} = 0$. The real region is then approximated by the area between these two planes, as shown in Figure 4.

The probability content in the failure domain defined by the intersection of the two linearized limit state surfaces is given by [35]
$$\nu(t)\,\delta t \approx \Phi(\beta_1)\Phi(-\beta_2) + \int_0^{\rho_{12}} \varphi_2(\beta_1, -\beta_2; \rho)\, d\rho, \quad (15)$$
where $\rho_{12} = -\boldsymbol{\alpha}_1\cdot\boldsymbol{\alpha}_2$ is the correlation coefficient between the two linearized failure modes and $\varphi_2(x_1, x_2; \rho)$ is the bivariate normal joint PDF with zero means, unit variances, and correlation coefficient $\rho$, that is, $\varphi_2(x_1, x_2; \rho) = \exp[-(x_1^2 - 2\rho x_1 x_2 + x_2^2)/(2(1-\rho^2))]/(2\pi\sqrt{1-\rho^2})$. In practice, $\beta_2 \approx \beta_1$ and $\rho_{12} \approx -1$. The second LSF can be computed from its first-order approximation as $g(\boldsymbol{\theta}, t + \delta t) \approx g(\boldsymbol{\theta}, t) + \dot{g}(\boldsymbol{\theta}, t)\,\delta t$, and the gradient of the second LSF may be computed in the same way by using the chain rule. In this way, only the first LSF and its gradient need to be computed by FE analysis, greatly reducing the computational cost.

Despite these advantages, as mentioned before, the high correlation between the two linearized events (i.e., $\rho_{12} \approx -1$) and the fact that $\beta_1 \approx \beta_2$ cause the integration in Formula (15) to be nearly singular; thus it requires the two DPs to be extremely accurate. In real engineering problems, this requirement is very difficult to meet. Considering the close relation of the two LSFs $g(\boldsymbol{\theta}, t)$ and $g(\boldsymbol{\theta}, t + \delta t)$, the problem can be solved in another manner. The second LSF is almost identical to the first one except for a very small time shift $\delta t$. Thus the second DP can be approximated by shifting the first DP in time by the small increment $\delta t$. This holds for stationary excitation, since the DP excitations at different time points have the same shape but are shifted by the time interval between these points. For the nonstationary case, it also gives a good approximation when the time interval is much smaller than the time scale of changes of the probabilistic structure of the nonstationary process [14]. Under this assumption, the second reliability index is $\beta_2 \approx \beta_1$, and the second DP direction $\boldsymbol{\alpha}_2$ is obtained by shifting $\boldsymbol{\alpha}_1$ in time by $\delta t$. The analytical solution to Formula (15), Koo's equation (16), can then be obtained directly by computing the limit value as $\delta t$ tends to zero [14]. Compared with the method of separately obtaining the two DPs for the two LSFs, Koo's equation (16) has a tremendous advantage: the DP is computed only once, cutting the computational cost in half, and it does not need to be extremely accurate as required by the "two DPs method." This method is especially useful for real-world large scale SSI problems, where the convergence tolerance of the FE analysis is relatively loose (e.g., for the norm of the incremental displacement vector, the tolerance may be set to 1.0E-4 m to satisfy engineering requirements). In this case the DP is not highly accurate, and Koo's equation is an effective way to obtain the mean outcrossing rate for these problems.
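The following sketch evaluates Formula (15) for assumed values of the reliability index and of the rate of change of the DP direction (it does not reproduce the closed-form limit of Koo's equation (16)); it illustrates that the estimate $\nu \approx P/\delta t$ stabilizes as $\delta t$ decreases while the integrand becomes nearly singular, as discussed above.

```python
# Minimal sketch (assumed beta and DP-direction rate, not the paper's results):
# FORM evaluation of the two-component parallel-system probability, Formula (15).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def phi2(x1, x2, rho):
    # bivariate standard normal PDF with correlation rho
    c = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho**2))
    q = (x1**2 - 2.0 * rho * x1 * x2 + x2**2) / (2.0 * (1.0 - rho**2))
    return c * np.exp(-q)

beta1 = beta2 = 2.5          # reliability indices at t and t + dt (assumed equal)
alpha_dot = 4.0              # assumed norm of the rate of change of the DP direction

for dt in (0.02, 0.01, 0.005):
    # alpha1 . alpha2 for a unit direction rotating at rate alpha_dot (small-angle relation)
    a1_dot_a2 = 1.0 - 0.5 * (alpha_dot * dt) ** 2
    rho12 = -a1_dot_a2                               # approaches -1 as dt shrinks
    integral, _ = quad(lambda r: phi2(beta1, -beta2, r), 0.0, rho12)
    P = norm.cdf(beta1) * norm.cdf(-beta2) + integral        # Formula (15)
    print(dt, P / dt)        # mean outcrossing rate estimate, roughly constant in dt
```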

3.2. Estimation of Mean Outcrossing Rate Using OPS

Although Koo's equation makes the evaluation of Formula (15) feasible and is able to solve a large range of practical problems, it still has shortcomings; namely, it is not accurate enough, especially when applied to highly nonlinear systems. This is due to the fact that the analytical solution in Formula (16) is based on the linearization of the two LSSs at the two DPs. As shown in Figure 4, although the standard normal PDF decays exponentially in both the radial and tangential directions, the real failure region (Figure 4) has such a narrow wedge shape that the failure probability is not dominated by the contribution from a small region close to the DP (as in the time invariant case). Furthermore, as shown in Figure 4, the FORM approximation (i.e., the wedge shape in Figure 4) may be significantly different from the real region for highly nonlinear LSSs. These facts make the estimate in Formula (16) inaccurate. Against this background, the paper adopts a new algorithm based on the two DPs and the orthogonal plane sampling (OPS) method, denoted as the "DP+OPS" method. The steps of the DP+OPS method are similar to those of the OPS method, except that the projection step is replaced by the following two steps: (i) along the DP direction, compute the projection point of the sample on the second LSS (at time $t + \delta t$) by solving $g = 0$ using a one-dimensional zero-finding line search algorithm; (ii) compute the corresponding contribution and add it to the failure probability estimate. In this way, the exact shape of the two LSSs is considered. Compared with the FORM approximation of the mean outcrossing rate, DP+OPS gives a much more accurate solution.

3.3. Discrete Representation of Stochastic Excitation

To solve structural random vibration problems, the input is usually represented by a stochastic process. In this paper, a general nonstationary stochastic process is modeled using a finite number of random variables as
$$a(t) = \mu(t) + c\sum_{i=1}^{n} q_i(t)\, h(t - t_i)\, P_i, \quad (17)$$
where $\mu(t)$ is a deterministic time-varying mean function and the parameter $c$ is a deterministic constant. The sequence $\{P_i\}$ is a train of pulses in which each $P_i$ is defined as a standard normal random variable at time point $t_i$; the time points $t_1, t_2, \ldots, t_n$ are equally spaced along the time axis. The pulse train represents the intermittent ruptures at the fault and approaches a Gaussian white noise process as the number of pulses $n$ approaches infinity. $q_i(t)$ is the $i$th deterministic modulating function, used to control the variance of the process along the time axis. The deterministic filter function $h(t - t_i)$, generally selected as a unit impulse response function with damping, can represent the ground medium through which the wave travels [14, 36]. In this paper, Gaussian white noise is employed to represent the earthquake input; by selecting a zero mean function, constant modulating functions, and a suitable filter function in Formula (17), $a(t)$ is simplified to a Gaussian white noise with zero mean and standard deviation $\sigma$. In practice, the integration time step of the FE analysis is often shorter than the time interval $\Delta t$ between two neighboring impulses. In this case, linear interpolation of the discrete Gaussian white noise is employed. Assuming the random excitation process varies linearly between the pulse values, it can be shown that the standard deviation $\sigma$ of the white noise, the time interval $\Delta t$, and the intensity of the process are directly related [37].
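A minimal sketch (all parameter values assumed) of a zero-mean discretized Gaussian white noise represented by a train of standard normal pulses and linearly interpolated onto a finer FE time grid:

```python
# Minimal sketch (assumed parameters, not the paper's excitation): discrete white noise
# as a train of standard normal pulses, linearly interpolated onto the FE time grid.
import numpy as np

rng = np.random.default_rng(5)

dt_pulse = 0.02        # spacing of the random pulses (assumed)
dt_fe = 0.005          # FE integration time step, finer than the pulse spacing (assumed)
T = 5.0                # duration (assumed)
sigma = 1.0            # standard deviation of the discrete white noise (assumed)

t_pulse = np.arange(0.0, T + dt_pulse, dt_pulse)
pulses = sigma * rng.standard_normal(t_pulse.size)   # standard normal RVs scaled by sigma

# linear interpolation of the discrete white noise onto the FE time grid
t_fe = np.arange(0.0, T + dt_fe, dt_fe)
a_fe = np.interp(t_fe, t_pulse, pulses)

print(t_fe.size, a_fe.std())    # interpolation slightly reduces the sample variance
```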

4. Application Examples

4.1. Time Invariant Reliability Analysis of a 2D SSI System

The first application example is a 2D two-story two-bay RC frame with RC footing foundations on layered clay. The frame structure is modeled using displacement-based Euler-Bernoulli frame elements with distributed plasticity, each with four Gauss-Legendre integration points. Section stress resultants are computed using a 2D fiber section discretization. Foundation footings and soil layers are modeled with isoparametric four-node quadrilateral plane strain elements (with thickness 4.0 m) with bilinear displacement interpolation (Figure 5). The constitutive behavior of the reinforcing steel is modeled using a one-dimensional plasticity model with linear kinematic hardening [28]. The concrete is modeled using the smoothed Popovics-Saenz model [12]. The soil is modeled using a pressure-independent multi-yield-surface plasticity material model [31], specialized for the plane strain condition. The soil is assumed to be under a simple shear condition, with its bottom nodes fixed and the boundary nodes at the same depth tied together horizontally. The nodes of the beams (with two translational DOFs and a rotational DOF) and the corresponding nodes of the concrete foundation blocks (with only two translational DOFs) at the same locations are tied together in both the horizontal and vertical directions to connect the structure and the soil. The structure is subjected to a horizontal pushover with an upper-triangular loading pattern.

Different material parameters are used for the confined (core) and unconfined (cover) concrete in the columns and beams and for the four soil layers. Thirteen material parameters characterize the various structural materials employed in the frame model, of which ten are modeled as RVs: the strength of the cover concrete, the strain at the strength of the cover concrete, the strain at the residual strength of the cover concrete, the strength of the core concrete, the residual strength of the core concrete, the strain at the strength of the core concrete, the strain at the residual strength of the core concrete, the strength of the steel, the stiffness of the steel, and the kinematic hardening parameter of the steel. The means and coefficients of variation of these RVs are listed in Table 1. The residual strength of the cover concrete, the stiffness of the concrete, and the properties of the concrete foundation blocks are modeled as deterministic quantities instead of RVs. For the soil, two parameters in each of the four soil layers are modeled as RVs, that is, the low-strain shear modulus and the shear strength $\tau_{\max}$. These RVs are numbered from the top layer to the bottom layer (Table 2). Poisson's ratio of the soil is taken as 0.35. The maximum value of the distributed loading in Figure 5 is also modeled as an RV, as shown in Table 3. The correlations between the various RVs are listed in Table 4. The Nataf model [11] was used to generate samples of the random modeling parameters.

After application of the static gravity loads, the structure is subjected to a quasi-static pushover analysis. Two types of limit state functions (LSFs) are considered, that is,
$$g_1 = \Delta_{\lim} - \Delta_1, \qquad g_2 = u_{\lim} - (u_{\mathrm{top}} - u_{\mathrm{bottom}}), \quad (18)$$
where $\Delta_1$ is the first interstory drift and $\Delta_{\lim}$ is the displacement threshold, taken as 2.5, 5.0, and 7.5 cm, respectively; $u_{\mathrm{top}}$ and $u_{\mathrm{bottom}}$ are the displacements of the top and bottom of the building, respectively, so that the quantity $u_{\mathrm{top}} - u_{\mathrm{bottom}}$ denotes the "total" displacement of the structure. The threshold $u_{\lim}$ is taken as 5.0 cm, 7.5 cm, 10 cm, and 14.4 cm (2% of the height of the building), respectively. The SNOPT-based DP search algorithm is used to find the DPs. In this paper, a "sequential DP searching method" is used; that is, the DP obtained from an LSF with a lower threshold is used as a "warm" starting point for the DP search of the subsequent LSF with a higher threshold. The DPs of the two LSFs, normalized by their means, are shown in Figures 6 and 7. Note that the horizontal axis is the index of the RVs.

From Figures 6 and 7, it is observed that as the limit threshold increases, the DPs "evolve" along the same directions (i.e., some RVs always increase while others always decrease). This path is marked by arrows denoted as the "evolution direction" in Figures 6 and 7 and shows that the DP from the LSF with a lower threshold is a very suitable warm starting point for the DP search with a higher threshold. This search strategy is also called the "sequential DP searching method" by Gu [12]. Based on numerical experiments not shown in this paper, when the DP search algorithm fails to find the optimum, one may define another LSF (denoted as LSF2 here) with a higher threshold, whose DP (denoted as DP2) may be computed relatively more easily. The DP of the original LSF may then be found using the sequential DP searching method, taking advantage of DP2 as a warm starting point. In this way, a complicated DP search problem can be solved as a sequence of DP search problems. It is also observed from Figures 6 and 7 that the loading parameter moves towards higher values and that its change becomes dominant among the RVs.
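For illustration only (toy LSF and HLRF iteration instead of the paper's SNOPT-based search), the following sketch shows the idea of the sequential DP searching method: the DP obtained for a lower threshold is used as the warm starting point for the next, higher threshold.

```python
# Minimal sketch (toy LSF, assumed): sequential (warm-started) design point search.
import numpy as np

def G(y, thr):                      # toy LSF in the SNS with a varying threshold
    return thr - y[0] + 0.25 * y[1] ** 2

def grad_G(y, thr):
    return np.array([-1.0, 0.5 * y[1]])

def hlrf(thr, y0, tol=1e-8, max_iter=100):
    y = np.array(y0, dtype=float)   # warm starting point
    for _ in range(max_iter):
        g, dg = G(y, thr), grad_G(y, thr)
        alpha = -dg / np.linalg.norm(dg)
        y_new = (alpha @ y + g / np.linalg.norm(dg)) * alpha   # HLRF update
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y_new

y_dp = np.zeros(2)                  # start the first search from the origin
for thr in (1.0, 2.0, 3.0):         # increasing thresholds, as in the pushover LSFs
    y_dp = hlrf(thr, y_dp)          # warm start from the previous DP
    print(thr, y_dp, np.linalg.norm(y_dp))
```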

The responses of the first interstory drift at the DPs of the two LSFs for various thresholds are shown in Figures 8 and 9, respectively. It is obvious that the system yields significantly at the DPs. Furthermore, the system exhibits more nonlinear behavior for larger thresholds in the LSFs.

After obtaining the DPs, IS and OPS analyses are performed based on the DP information. The computed results are reported in Tables 5 and 6 for the LSFs defined by the first interstory drift and by the total structural displacement, respectively.

From these results, it is observed that the failure probabilities obtained by FORM, IS, OPS, and MCS are consistent, except that MCS cannot reach the required accuracy when the threshold is large enough that failure becomes an event with extremely small probability. It is worth mentioning that although MCS is usually treated as the true or actual solution, it is often unfeasible because of its unacceptable computational cost (e.g., 10,000 FE simulations). FORM gives fairly reasonable results and requires far fewer evaluations of the LSF than IS and OPS; however, it is not guaranteed to be sufficiently accurate, especially when the LSS is highly nonlinear [12]. IS requires fewer evaluations of the LSF than OPS and gives results nearly consistent with OPS. However, based on other numerical tests not shown in this paper, IS may not converge when the system reaches a highly nonlinear state and the FE analysis is not guaranteed to converge. The "sequential DP searching method" proves to be a practically robust method for the DP search process.

4.2. Time Variant Reliability Analysis of a 2D SSI System

The same 2D SSI example of Section 4.1 is used for time variant reliability analysis, except that the concrete model is replaced by a nonsmooth Kent-Scott-Park model with zero tension stiffening [38], since this model is very efficient. The properties of the SSI system are considered to be deterministic, while the only uncertainties are assumed to be in the earthquake input. This stochastic excitation takes the simple form of white noise, that is, an impulse train with constant time interval (refer to Formula (17)). The system is initially at rest. The integration time step is taken as 0.005 sec, and a small time increment perturbation $\delta t$ is used to evaluate the mean outcrossing rate (Formula (14)). The intensity of the white noise is prescribed as a constant. The system does not have Rayleigh damping; however, equivalent damping is provided by the strong cyclic nonlinear plastic behavior of the soil and structural systems. Assuming the damping ratio of the SSI system is between 10% and 20%, Miles' equation [39, 40] gives an estimate of the root mean square acceleration in g's.
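As a hedged side calculation (the equivalent natural frequency and the input PSD below are assumed, not the paper's values), the commonly quoted form of Miles' equation gives the RMS acceleration of an equivalent SDOF system under white-noise excitation:

```python
# Minimal sketch (assumed numbers): Miles' equation for the RMS acceleration response,
# a_rms ≈ sqrt( (pi/2) * f_n * Q * W ), with Q = 1 / (2*zeta).
import numpy as np

f_n = 2.0        # assumed equivalent natural frequency of the SSI system [Hz]
W = 0.01         # assumed one-sided acceleration PSD of the white noise [g^2/Hz]

for zeta in (0.10, 0.20):            # damping ratios considered in the text
    Q = 1.0 / (2.0 * zeta)
    a_rms = np.sqrt(np.pi / 2.0 * f_n * Q * W)
    print(zeta, a_rms)               # RMS acceleration in g's
```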

The LSF is the same as the first LSF of the time invariant case (i.e., $g_1$ in Formula (18)), with the displacement threshold taken as 1% of the first story height (i.e., 3.6 cm). The mean outcrossing rate analysis is performed at a fixed time interval between evaluation time points. A similar "sequential DP searching method" is used herein to guarantee the convergence of the DP search process; that is, the DP obtained at time $t_n$ is shifted along the time axis to the next evaluation time $t_{n+1}$ (the DP values between 0 and the time shift are set to zero) and used as a "warm" starting point for the DP search at $t_{n+1}$. The DPs corresponding to various time points are shown in Figure 10, and the first interstory drift responses at the DPs are shown in Figure 11. From Figure 10, it is obvious that the shapes of the DPs at two sequential times are very similar, except that the values are shifted along the time axis. This confirms that the "sequential DP searching method" is appropriate for the DP search process. It is also observed that the responses have the shape of "opposite damped free vibrations," in which "opposite" denotes that the time runs from the evaluation time back to 0 instead of from 0 forward. Furthermore, the responses at the end of the time window are equal to 3.6 cm for all DPs (Figure 11). For each of the six DPs, the moment-curvature responses at point K (Figure 5) and the shear stress-strain responses at point N (Figure 5) are shown in Figures 12 and 13, respectively. It is obvious that the system has yielded at the DPs.

The FORM, OPS, and MCS results are compared in Table 7 and Figure 14. From Table 7, it is clear that after 2.0 sec the mean outcrossing rates obtained by both methods tend to become stable (i.e., the mean outcrossing rate no longer changes); however, FORM and OPS give significantly different mean outcrossing rates. Because the LSS is highly nonlinear close to the DPs and the FORM results consider only a first-order approximation of the mean outcrossing rate, the FORM estimates can be inaccurate. It is also noticed that the mean outcrossing rate obtained by FORM at one time point is higher than at its neighbors because at this point the DP could not be obtained with the same accuracy (1.0e-5) as at the other points due to FE nonconvergence; instead, a relaxed criterion (1.0e-4) was used to obtain an approximate DP. The OPS does not require a highly accurate DP, and thus the OPS results at this point are still acceptable even though the DP itself is not accurate.

A crude MCS method is used to check the mean number of outcrossing events and the time variant failure probability. In this paper, in order to compare the different methods, the example is purposely designed such that the mean outcrossing rate is relatively high and MCS is able to obtain a sufficiently accurate failure probability with acceptable computational cost (i.e., 2800 FE simulations). The comparisons of the mean number of outcrossing events and the failure probability obtained by the different methods are shown in Figure 14. From these results, it is clear that the DP+OPS method gives a much better approximation of the mean outcrossing rate and failure probability than FORM, while requiring limited extra computation. For example, at one time point the error of the failure probability obtained by OPS is 19.64%, while that by FORM is 208%. It is also noticed that the "sequential DP searching method" is a practically robust method for the time variant DP search process.

5. Conclusions

This paper presents recent advances in performance and risk assessment methods based on finite element (FE) reliability analysis for complicated soil-structure interaction (SSI) systems. The FE reliability method combines FE analysis with state-of-the-art methods in reliability analysis and has been employed increasingly in performance assessment to estimate the failure probability of a nonlinear system under earthquakes. The paper briefly summarizes the crucial components of FE reliability analysis and presents recent advances in both time invariant and time variant reliability analysis methods. These methods are applied to a benchmark example consisting of a two-dimensional (2D) two-story building on layered soil. For time invariant reliability analysis, the first-order reliability method (FORM), the importance sampling (IS) method, and the orthogonal plane sampling (OPS) method are employed to compute the failure probabilities corresponding to various thresholds in the limit state function (LSF). These methods give consistent results, while FORM gives slightly less accurate results but requires far fewer evaluations of the LSF than IS and OPS. For time variant reliability analysis, an upper bound of the failure probability is obtained from numerical integration of the mean outcrossing rate, which is computed by FORM analysis and by OPS analysis, respectively. The OPS method greatly improves the accuracy of the mean outcrossing rate obtained by FORM with limited extra computation. For both time invariant and time variant reliability analyses, a novel "sequential DP searching method" is presented and demonstrated to be practically very robust in the DP search process. This paper provides valuable insights for probabilistic performance and risk assessment of structures or SSI systems based on FE reliability analysis.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The author gratefully acknowledges the invaluable advice of Dr. Joel P. Conte of UCSD and Dr. Michele Barbato of LSU and the support of this research by NSFC under Grants nos. 51261120376 and 51281220267, the Open Research Fund Program of the State Key Laboratory of Hydroscience and Engineering under Grant no. sklhse-2013-C-02, and the Program for New Century Excellent Talents in Fujian Province. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the writer and do not necessarily reflect the views of the sponsoring agencies.