Abstract

This study focuses on the seismic fragility assessment of a horizontally curved bridge, with the fragility derived from neural network prediction; the fragility parameters are obtained by metaheuristic optimization of the structural responses. A regression model for the responses of the horizontally curved bridge with variable coefficients is built in a neural network simulation environment based on existing NTHA data. To obtain accurate neural network results, 1677 seismic analyses were performed in OpenSees. To improve network performance and reduce the dimensionality of the input data, dimensionality reduction techniques such as factor analysis were applied. Several types of neural network training algorithms were evaluated and the best-performing algorithm was adopted. The developed ANN approach is then used to verify the fragility curves obtained from NTHA. The results indicate that, given adequate feature extraction of the ground motion records and the structural responses based on the statistical work, the neural network approach can be used to predict the seismic behavior of bridge elements and the associated fragility. Fragility curves extracted by the two approaches generally show good agreement.

1. Introduction

A fragility curve describes the relationship between a seismic intensity measure and the corresponding probability of exceeding a specified limit state. Analytical development of fragility curves, through a variety of methods, has many applications [13]. In the past two decades, many commercial software packages for simulating progressive collapse have become available; for example, three-dimensional models with four-node elements have been built in ABAQUS to develop simplified analysis procedures for evaluating progressive collapse in steel moment frames and bridge components [4, 5]. To evaluate seismic fragility curves for isolated bridges, Siqueira et al. used ADINA for the bearing FE models and compared them with experimental results [6]. The DIANA finite element (FE) program was used to model reinforced concrete frames and simulate progressive collapse; the validity of the macromodel for progressive collapse was evaluated against high-fidelity FE analyses [7]. For blast-resistant structural concrete and steel connections, Krauthammer used the FE code DYNA3D [8]. To investigate resistance to progressive collapse, Brunesi and Nascimbene presented a fiber-based modeling procedure for reinforced concrete buildings subjected to blast loading [9]. Moller et al. demonstrated an applicable method for improving the numerical efficiency of fuzzy stochastic structural collapse simulation under consideration of uncertainty [10]. Guo and Gao assessed the effectiveness and feasibility of viscoelastic dampers for enhancing the seismic performance of self-centering bridges using OpenSees [11]. Because analytical fragility curves based on nonlinear time history analysis have inherent limitations, soft-computing-based metaheuristic methods, including neural network prediction, have so far received little study. Recently, some soft computing methods such as neural networks and fuzzy logic have been proposed and compared against the results of reliable structural analysis methods.

Several research efforts have investigated the seismic behavior of horizontally curved bridges. Yang et al. investigated the seismic fragility of straight and skewed bridges; comparisons of the analytical fragility curves for six bridge types indicate that the larger the skew angle, the more vulnerable the bridge [12]. Brunesi et al. developed a progressive collapse fragility assessment of reinforced concrete frames using an incremental dynamic analysis approach with finite element models [13]. Del Gaudio et al. presented a simplified analytical method for the seismic fragility assessment of reinforced concrete buildings, with results that agree well with empirical curves [14]. In another study, Kumar and Gardoni investigated the seismic degradation of reinforced concrete highway bridges, with nonlinear time history analysis conducted on a finite element software platform; accounting for seismic degradation in the columns increases the estimated vulnerability of RC highway bridges [15]. Hancilar et al. presented probabilistic structural fragility curves for school buildings derived from nonlinear time history analysis (NTHA) in SAP2000 and OpenSees, estimating mean damage from the derived fragility functions [16]. Calabrese and Lai derived fragility functions for blockwork wharves using neural network prediction, with modeling performed in FLAC 2D [17]. An artificial neural network (ANN) is implemented in order to capture the unknown nonlinear relationships between the input data and the expected performance. Kim and Feng showed that column ductility demands accounting for spatial variation can be underestimated when bridges are analyzed using identical support ground motions [18]. Karim and Yamazaki developed a simplified procedure for the fragility curves of isolated bridges based on the Japanese seismic design code; the method uses numerical simulation with respect to the ground motion parameters through nonlinear static pushover and dynamic analyses [19]. Nielson demonstrated analytical fragility curves for nine classes of highway bridges using the NTHA method [20]. Padgett and DesRoches presented an analytical methodology for developing fragility curves of retrofitted bridges and showed the resulting improvement in fragility; bridge models were generated in OpenSees for the development of the probabilistic seismic demand models (PSDMs) and fragility estimates [21]. An analytical fragility approach using NTHA with detailed 3D models, calibrated with experimental data, was applied to isolated bridges in eastern Canada [6], with finite element models generated in OpenSees [22]. To avoid the computational effort of fragility analyses, Mitropoulou and Papadrakakis presented a neural network scheme that produces accurate predictions of the structural response at a fraction of the time required by a conventional incremental dynamic analysis; fragility curves were provided for regular and irregular structures [23]. Seismic fragility analysis using common analytical methodologies has been studied extensively, including for irregular structures [24], masonry buildings [25], wooden structures [26], and public buildings [27]. With the exception of [23, 28], only a limited number of studies have derived fragility curves using neural networks to reduce computational effort.
This study focuses on obtaining the seismic fragility of a horizontally curved bridge using neural network based approaches.

2. Methodology for Fragility Analysis

The fragility function, $P(LS \mid IM = x)$, represents the probability that the structural response (EDP) exceeds a selected limit state (LS) at a specific intensity measure of the earthquake action. These curves are expressed using two-parameter lognormal distribution functions defined by the median and standard deviation, which are used to evaluate fragility; the estimation of these parameters is achieved using maximum likelihood estimation [20]. The likelihood function takes the form given in (1), where $m$ is the number of PGA levels and $P_j$ is the fragility function at level $j$:

$$\text{Likelihood} = \prod_{j=1}^{m} \binom{n_j}{z_j}\, P_j^{\,z_j} \left(1 - P_j\right)^{n_j - z_j}, \qquad (1)$$

with the fragility function written as follows:

$$P_j = \Phi\!\left(\frac{\ln x_j - \mu}{\sigma}\right),$$

where $\Phi$ denotes the standard normal cumulative distribution function, the probability of observing $z_j$ exceedances out of $n_j$ ground motions at level $j$ is assigned by the binomial distribution, and $\mu$ and $\sigma$ are the mean and standard deviation of $\ln IM$. The fragility function is obtained by estimating the fragility parameters that maximize this likelihood function. The quantities on the right side of (1) are all readily available from the NTHA results, and the optimization to determine the maximum is easily performed with computational programs; it is typically easier to optimize the likelihood function in logarithmic space. Using an optimization algorithm, the two parameters $\mu$ and $\sigma$ that maximize (1) are obtained.
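The following is a minimal sketch of this maximum likelihood fit, assuming hypothetical arrays of PGA levels, trial counts, and exceedance counts; a general-purpose SciPy optimizer stands in for the optimization algorithm mentioned above, and the fit is parametrized by the median $\theta = e^{\mu}$ and dispersion $\beta = \sigma$.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, im, n, z):
    """Negative log-likelihood of lognormal fragility parameters.

    params : (theta, beta), the median and lognormal standard deviation
    im     : IM value at each level (e.g., PGA in g)
    n      : number of ground motions analyzed at each level
    z      : number of limit-state exceedances observed at each level
    """
    theta, beta = params
    if theta <= 0.0 or beta <= 0.0:
        return np.inf
    p = norm.cdf(np.log(im / theta) / beta)      # fragility at each level
    p = np.clip(p, 1e-12, 1.0 - 1e-12)           # guard against log(0)
    # Binomial log-likelihood; the combinatorial term is constant in params.
    return -np.sum(z * np.log(p) + (n - z) * np.log(1.0 - p))

# Hypothetical NTHA summary: exceedances of one limit state per PGA level.
pga    = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])   # g
trials = np.array([129] * 6)
exceed = np.array([2, 14, 41, 77, 104, 118])

res = minimize(neg_log_likelihood, x0=[0.3, 0.4],
               args=(pga, trials, exceed), method="Nelder-Mead")
theta_hat, beta_hat = res.x
print(f"median = {theta_hat:.3f} g, dispersion = {beta_hat:.3f}")
```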

2.1. Incremental Dynamic Analysis

Fragility analysis for bridges is typically carried out using incremental dynamic analysis (IDA) [29]. The process of constructing IDA curves is shown in the flowchart in Figure 1. The IDA procedure provides the probabilistic distribution of damage at different seismic intensity levels, giving a clear expression of the relationship between the demand and capacity of the structure (Figure 2). In other words, it involves performing NTHA of a structural model under an ensemble of ground motions of increasing intensity (IMs), with the objective of obtaining an accurate indication of the structural response under seismic excitation. Selecting an appropriate IM and engineering demand parameter (EDP) is one of the most important steps of the IDA methodology. In the current work, structural analyses were performed in OpenSees under a set of 129 strong ground motion records compiled from the PEER Center database [30]. The selected records belong to a bin of relatively large magnitudes (4.5–7.5), cover a range of fault types and soil conditions, and have site-to-source distances of 20–70 km. As in previous work [11, 20, 21], the EDPs in this study include the curvature ductility of the columns, the longitudinal and transverse deformations of the fixed and expansion bearings, and the longitudinal and transverse deformations of the abutments.
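Schematically, the IDA loop can be organized as below; run_ntha is a hypothetical stub standing in for a full OpenSees nonlinear time history analysis of one scaled record, and the 13 intensity levels are illustrative assumptions.

```python
import numpy as np

def run_ntha(accel, dt, scale):
    # Hypothetical stub: a real implementation would scale the record,
    # run the OpenSees bridge model, and return the peak EDP.
    return 0.002 * scale * float(np.max(np.abs(accel)))

def ida_curve(accel, dt, im_levels):
    """Peak EDP at each intensity level for a single ground motion record."""
    return [run_ntha(accel, dt, s) for s in im_levels]

rng = np.random.default_rng(0)
accel = rng.normal(size=2000)             # synthetic accelerogram placeholder
im_levels = np.linspace(0.1, 1.3, 13)     # illustrative intensity levels (g)
print(ida_curve(accel, 0.01, im_levels))
```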

2.2. Optimization Algorithm

Many problems in different scientific fields are formulated as optimization problems and solved with various optimization algorithms. Today's engineering problems seek maximum benefit at minimum cost, and to achieve this goal engineers rely on optimization techniques. Recently, metaheuristic methods based on soft computing have been used extensively for the optimization of structures. Drawing an analogy between a natural phenomenon and the optimization problem, the harmony search (HS) algorithm is employed here to solve the optimization problem [31]. The principles of HS are outlined in the following.

2.2.1. The HS Algorithm

Inspired by the musical process, the metaheuristic harmony search (HS) algorithm was first put forward by Geem et al. in 2001 [32]. In comparison with other numerical optimization methods, the HS algorithm has certain advantages, including the ability to work with discrete variables and a very low probability of being trapped in local optima (Lee and Geem, 2004). The algorithm also shares features with other metaheuristic algorithms, such as retaining previous vectors, keeping the harmony memory from beginning to end, and evaluating several vectors simultaneously. However, it requires fewer mathematical prerequisites than other algorithms and can solve various engineering optimization problems [33, 34]. Figure 3 shows a flowchart of the HS method with its principal steps.
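A minimal sketch of the HS loop follows, for a generic minimization problem; the harmony memory size (hms), HMCR, PAR, and bandwidth values are illustrative assumptions rather than the settings used in this study.

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    hm = rng.uniform(lo, hi, size=(hms, dim))       # initial harmony memory
    cost = np.array([f(x) for x in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                 # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:              # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1.0, 1.0)
            else:                                   # random selection
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = int(np.argmax(cost))
        if c < cost[worst]:                         # replace worst harmony
            hm[worst], cost[worst] = new, c
    best = int(np.argmin(cost))
    return hm[best], cost[best]

# Example with a simple quadratic objective.
best_x, best_f = harmony_search(lambda x: float(np.sum((x - 0.5) ** 2)),
                                bounds=[(0.0, 1.0), (0.0, 1.0)])
print(best_x, best_f)
```

In the fragility framework of Section 2, the objective passed to harmony_search would be the negative log-likelihood of the fragility parameters.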

3. Artificial Neural Networks

The ANN is an artificial intelligence technique that has attracted particular interest from engineers for design and analysis since the 1980s. In recent studies, ANNs have been successfully applied in various areas of earthquake and structural engineering, such as the analysis and design of structures, structural damage evaluation, and structural element optimization [35]. In this study, the prediction results are compared with those of structural analysis under earthquake loading. In addition, the effect of the seismic characteristics was examined using a set of 11 different neural network training algorithms, and a parametric study was conducted on this basis. Moreover, the suitability of the neural network algorithms for predicting structural performance is discussed.

3.1. NN Predictions Scheme

The objective of the NN prediction is to estimate the EDPs for various combinations of IMs that reliably represent the ground motions. Figure 4 shows the structure of a multilayer, feed-forward network comprising an input layer, hidden layers, and an output layer. In engineering problems, no general rule exists for selecting the number of neurons in a hidden layer; this value is generally set by the user's engineering judgment. The training phase is one of the most important steps of neural network prediction and is carried out using the error back-propagation algorithm. Training with this algorithm updates the network weights and biases and produces a rapid, decreasing trend in the performance function; its calculations rely on the chain rule of calculus. Table 1 lists the eleven back-propagation algorithms adopted in this study, in two categories. The first employs standard numerical optimization techniques (BF, CGB, CGF, CGP, LM, OSS, and SCG). The second comprises heuristic techniques, including variable-learning-rate back-propagation (GDA, GDM, and GDX) and resilient back-propagation (RP), derived from performance analysis of the standard steepest descent algorithm [36]. The NN therefore has 8 input nodes and two hidden layers, with hidden node counts chosen to balance prediction accuracy against computational efficiency. The output layer has 4 nodes corresponding to the EDPs for the IMs. For a highly efficient network, it is generally better to normalize the input and output data supplied to the neural network.
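For illustration, the sketch below builds a small feed-forward network with the 8-input / 4-output layout described above on placeholder data; scikit-learn has no Levenberg-Marquardt trainer, so its quasi-Newton "lbfgs" solver stands in, and the hidden layer sizes (10, 10) are an assumption, not the study's values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1677, 8))                   # 8 IM features (placeholder)
Y = X @ rng.normal(size=(8, 4)) \
    + 0.1 * rng.normal(size=(1677, 4))           # 4 EDP targets (placeholder)

net = MLPRegressor(hidden_layer_sizes=(10, 10),  # two hidden layers (assumed sizes)
                   activation="tanh",            # tan-sigmoid transfer function
                   solver="lbfgs",               # stands in for Levenberg-Marquardt
                   max_iter=2000, random_state=0)
net.fit(X, Y)
print(net.score(X, Y))                           # R^2 of the fit
```

MLPRegressor uses a linear output layer, matching the linear output transfer function adopted later in the paper.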

3.2. ANN Based Models for Estimating Structural Response

The MATLAB neural network toolbox was adopted to estimate the structural response. The multilayer, feed-forward network with back-propagation of error comprises eight input and four output nodes. In the neural network model, the input parameters are earthquake record features, including Arias intensity (IA), cumulative absolute velocity (CAV), characteristic intensity (IC), and specific energy density (SED), used to estimate the effect of earthquake features on the seismic response of the bridge. The parameters required to begin the simulation in this study are as follows:
(1) The number of input, output, and hidden nodes.
(2) The number of training data.
(3) The number of hidden layers.
(4) The number of iterations (epochs).
(5) The learning rate.
(6) The error tolerance.
(7) The momentum constant.
The necessary parameters and their selected values may be found in Table 2. However, no general rule exists for determining the amount of training data; from all the data, 70% was randomly allocated to neural network training (a minimal sketch of this step follows). In the simulation process, 11 ANN configurations were taken into consideration and a hidden layer was selected for each configuration. The training error, test error, training time, and correlation coefficient were extracted to evaluate the initial performance of the various back-propagation training algorithms.
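As a minimal sketch of this preparation step, assuming placeholder IM and EDP arrays, the data can be scaled and split as follows; MinMaxScaler and a 70/30 random split stand in for the normalization and allocation described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
X = rng.lognormal(size=(1677, 8))   # placeholder IM inputs
Y = rng.lognormal(size=(1677, 4))   # placeholder EDP targets

x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
Xn, Yn = x_scaler.fit_transform(X), y_scaler.fit_transform(Y)

X_train, X_test, Y_train, Y_test = train_test_split(
    Xn, Yn, train_size=0.70, random_state=42)   # 70% for training, as above
print(X_train.shape, X_test.shape)
```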

The input layer receives the raw information; the output of a hidden layer is determined by the inputs and by the weights of the connections between the inputs and the hidden layer. The output of the output layer depends on the hidden unit activities and the weights of the connections between the hidden units and the outputs. By building a network of these nodes and applying a training algorithm to it, the network can be trained. Unlike simpler neural networks, multilayer neural networks can learn nonlinear problems and problems involving various decisions. A back-propagation training set consists of input-target pairs $(x_p, t_p)$. If a set of weight parameters $W$ is allocated to the network connections, a mapping $o_p = f(x_p; W)$ is defined between the input vector $x_p$ and the output vector $o_p$. The quality of this mapping is measured using the following error function, which must be calculated in order to use the descending gradient method:

$$E = \frac{1}{2} \sum_{p \in T} \sum_{i=1}^{N} \left(t_{pi} - o_{pi}\right)^{2},$$

where $E$ is the total output error, $T$ is the set of training samples, $N$ is the total number of training outputs, $t_{pi}$ is the $i$th value of the objective function (corresponding to the $i$th output unit) for the $p$th training sample, and $o_{pi}$ is the $i$th output value (corresponding to the $i$th output unit) for the $p$th training sample. An iterative algorithm is therefore used to obtain the optimal values of the weight parameters. Most numerical minimization methods are based on the following form:

$$w_{k+1} = w_k + \Delta w_k.$$

The calculated error is distributed throughout the network on the backward path from the output layer through the network layers. In effect, the training algorithm changes the network weights in accordance with the following equation so that the sum of squared network errors is minimized; the value of $\Delta w_k$ is defined as

$$\Delta w_k = -\eta \frac{\partial E}{\partial w_k} + \mu\, \Delta w_{k-1}.$$

In the above equation, $\eta$ and $\mu$ are constants with values between zero and one; they control the learning rate and the partial changes in the network weights (the momentum), respectively. Furthermore, $E$ denotes the error function, $w$ is the weight vector, and $k$ is an index showing the number of iterations. These algorithms are applied to train the network [23, 37].
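To make the update rule concrete, the following sketch applies the gradient descent with momentum update to a small least-squares problem; the data and the values of $\eta$ and $\mu$ are illustrative assumptions.

```python
import numpy as np

def gd_momentum_step(w, dw_prev, grad, eta=0.1, mu=0.9):
    """Delta w_k = -eta * dE/dw + mu * Delta w_{k-1}."""
    dw = -eta * grad + mu * dw_prev
    return w + dw, dw

# Fit y = X w by minimizing E = 0.5 * mean((X w - y)^2) with momentum updates.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w, dw = np.zeros(3), np.zeros(3)
for _ in range(300):
    grad = X.T @ (X @ w - y) / len(y)   # dE/dw
    w, dw = gd_momentum_step(w, dw, grad)
print(w)                                # approaches w_true
```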

3.3. Feature Extraction

Statistical methods such as principal component analysis (PCA) and factor analysis (FA) are used to investigate a set of correlated variables. PCA is mainly used to reduce the number of variables and to find the relationship structure among them. Since no probability distribution functions are assumed for the samples, it is not a statistical method in the strict sense; it is used only to present the data in a simpler form with lower dimensionality. FA, by contrast, is intended to discover the nature of the independent variables that influence the data, even though these independent variables cannot be measured directly; these unmeasured independent variables are the factors. Causality in FA means that there is a set of common factors that is the cause of concurrent changes in the samples (observations); the variables themselves cannot be considered factors. This method depends strongly on the correlations between the variables (Figure 5). The reduction in the dimensionality of the inputs, from the $n \times m$ data matrix to the $k \times m$ factor matrix, where $n$ is the number of variables, $m$ is the number of observed samples, and $k$ is the number of extracted factors, can be expressed as

$$A_{n \times m} \;\longrightarrow\; B_{n \times n} \;\longrightarrow\; C_{k \times m},$$

in which feature extraction transforms the input data (A: data matrix; B: data correlation matrix; C: the matrix of factors).

Although the covariance matrix can also be used in this method, many studies use the correlation matrix. A key assumption of FA is that the covariance of the observed variables is caused by one or more latent variables (factors), and that these latent factors have a causal impact on the observed variables.

3.3.1. Mathematical Model of Factor Analysis

Let $X_1, X_2, \ldots, X_n$ be the observed variables and $F_1, F_2, \ldots, F_k$ the extracted effective factors, where $n$ is the number of variables and $k$ is the number of factors; the mathematical model of factor analysis can then be written as

$$X_i = \lambda_{i1} F_1 + \lambda_{i2} F_2 + \cdots + \lambda_{ik} F_k + e_i, \qquad i = 1, \ldots, n.$$

In the event that the factors express the observation of variable $X_i$ exactly, the error term $e_i$ is zero. The model assumptions are as follows:

$$E(F_j) = 0, \quad \operatorname{Var}(F_j) = 1, \qquad \operatorname{Cov}(F_j, F_l) = 0 \;\; (j \neq l), \qquad E(e_i) = 0, \quad \operatorname{Cov}(e_i, F_j) = 0.$$

It is obvious that the second assumption indicates that the factors are orthogonal (independent).

The most common rotation method is Varimax. Since FA is intended to combine several variables into a factor, the correlation coefficients of these variables should exceed 0.3 in the correlation matrix. If a variable has no correlation with the other variables in the correlation matrix, it should be removed from the set. The number of samples should usually be at least 10 times the number of variables.
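A minimal sketch of this step, assuming a placeholder matrix of 8 IM features, is shown below; scikit-learn's FactorAnalysis with Varimax rotation stands in for the FA procedure described above.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.lognormal(size=(1677, 8))                # placeholder IM feature matrix
Xs = StandardScaler().fit_transform(np.log(X))   # standardize in log space

fa = FactorAnalysis(n_components=2, rotation="varimax")
F = fa.fit_transform(Xs)                         # factor scores, one row per sample
print(fa.components_.shape, F.shape)             # loadings: (2, 8); scores: (1677, 2)
```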

(1) KMO Index. The KMO index represents the adequacy of the samples (observations). It checks whether the partial correlations between variables are small, and it indicates how much of the common variance of the latent principal factors is reflected in the variance of the research variables. The index lies between zero and one. Previous studies indicate that if this index is greater than 0.6, the data are appropriate for factor analysis; otherwise, the data are not suitable. The following equation is used to determine the KMO index:

$$\mathrm{KMO} = \frac{\sum_{i \neq j} r_{ij}^{2}}{\sum_{i \neq j} r_{ij}^{2} + \sum_{i \neq j} a_{ij}^{2}},$$

where $r_{ij}$ is the correlation coefficient between variables $i$ and $j$, while $a_{ij}$ is their partial correlation coefficient.
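The KMO index can be computed from the correlation matrix as sketched below, with the partial correlations obtained from the inverse of the correlation matrix (the anti-image approach); the example matrix is hypothetical.

```python
import numpy as np

def kmo(R):
    """KMO = sum r_ij^2 / (sum r_ij^2 + sum a_ij^2) over i != j."""
    Rinv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    A = -Rinv / d                          # partial correlation matrix
    off = ~np.eye(len(R), dtype=bool)      # off-diagonal mask
    r2, a2 = np.sum(R[off] ** 2), np.sum(A[off] ** 2)
    return r2 / (r2 + a2)

# Hypothetical correlation matrix of three IM features.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
print(kmo(R))   # values above 0.6 are conventionally adequate for FA
```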

(2) Bartlett's Test. If the correlation matrix between the variables is not an identity matrix, and there are correlations of at least 0.3 among the elements outside the main diagonal, a significant relationship between the variables is established. Based on the correlations between variables, it is then possible to identify and define new factors. If the significance value (sig) is smaller than 5% in Bartlett's test, FA is appropriate for identifying factors, because the hypothesis that the correlation matrix is an identity matrix is rejected [38].

3.3.2. Feature Extraction of Ground Motion Records

It is difficult to extract the quantitative and qualitative features of an incremental nonlinear dynamic analysis curve that reflect the characteristics of both the earthquake and the structure. Because of the incremental procedure of the IDA analysis, the extracted features do not follow any specific distribution. Moreover, each of the earthquake features (IMs) has a different range, which must be taken into consideration [23, 37]. One of the most common and basic assumptions in statistics is that the data are normal; it must therefore be verified whether data with a given distribution can be used, and more accurate results can be achieved by assigning an appropriate distribution.

Many features are required for a complete description of strong ground motions. The selection of a ground motion intensity measure (IM) is very important in establishing a probabilistic relationship between the ground motion hazard and the resulting seismic response of structures. Several studies have examined the effects of using different IMs for probabilistic seismic demand model (PSDM) analysis of structures, such as the peak ground acceleration (PGA), the peak ground velocity (PGV), and the damped spectral acceleration at the structure's fundamental-mode period, SA(T1, 5%), with a damping ratio of 5%. The IA, IC, CAV, and SED IMs are indicative of the amplitude, the duration, the frequency content, and the energy of a strong ground motion, respectively [8, 15]. Arias intensity is a parameter related to the amplitude of the ground motion that indicates the damage potential of an earthquake as the time-integral of the square of the ground acceleration:

$$I_A = \frac{\pi}{2g} \int_0^{t_d} a(t)^2 \, dt,$$

where $g$ is the acceleration due to gravity (9.81 m/s²), $a(t)$ is the acceleration time history, and $t_d$ is the duration; $I_A$ then has units of velocity. The CAV is part of a composite index of strong ground motion damage potential and is estimated as the area under the absolute accelerogram according to the following equation:

$$\mathrm{CAV} = \int_{t_1}^{t_2} |a(t)| \, dt,$$

where $|a(t)|$ is the absolute acceleration within the bracketed duration between the first and last exceedances of a threshold acceleration, usually taken as 0.05 g. The frequency content of a ground motion can be described by spectra such as Fourier spectra, power spectra, or response spectra, and spectral parameters can take the form of the dominant frequency, mean frequency, bandwidth, central frequency, and so forth. The characteristic intensity is defined as follows:

$$I_C = a_{\mathrm{rms}}^{1.5} \sqrt{t_d},$$

where the root mean square (RMS) acceleration ground motion parameter is defined as $a_{\mathrm{rms}} = \sqrt{\frac{1}{t_d}\int_0^{t_d} a(t)^2 \, dt}$. The SED is obtained by integrating the square of the velocity over the effective duration of the earthquake and has units of m²/s. This parameter captures the variation of the kinetic energy input during the motion and is defined as

$$\mathrm{SED} = \int_0^{t_d} v(t)^2 \, dt,$$

where $v(t)$ is the velocity time history. Hence, in order to establish an appropriate estimation of the seismic performance of the structure, a set of 129 corrected strong ground motion records was collected [30]. Figure 6 presents the acceleration response spectra of the ground motions.
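The four IM features can be evaluated directly from an accelerogram as sketched below, on a synthetic record with assumed units of m/s² and s; for simplicity, the CAV integral is taken over the full record rather than the 0.05 g bracketed duration.

```python
import numpy as np

g = 9.81                                   # m/s^2
dt = 0.01                                  # s, sampling interval
rng = np.random.default_rng(4)
a = 0.5 * g * rng.normal(size=4000) * np.exp(-np.linspace(0.0, 3.0, 4000))

t_d = dt * len(a)                          # total duration
v = np.cumsum(a) * dt                      # velocity time history

IA  = np.pi / (2.0 * g) * np.sum(a ** 2) * dt   # Arias intensity (m/s)
CAV = np.sum(np.abs(a)) * dt                    # full-record CAV (m/s)
a_rms = np.sqrt(np.sum(a ** 2) * dt / t_d)      # RMS acceleration
IC  = a_rms ** 1.5 * np.sqrt(t_d)               # characteristic intensity
SED = np.sum(v ** 2) * dt                       # specific energy density (m^2/s)

print(f"IA={IA:.3f}  CAV={CAV:.3f}  IC={IC:.3f}  SED={SED:.3f}")
```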

3.3.3. Feature Extraction of NTHA Outputs

Similarly, the responses (targets) from the NTHA are used as the target values of the NN. The responses comprise the maximum pier ductility and the maximum bearing and abutment deformations. Column ductility is selected because the superstructure is expected to remain linearly elastic under seismic loading; the stability of the bridge obviously depends on the columns.

4. Description of Example Bridge and Numerical Simulation

To demonstrate the development of analytical fragility curves, a three-span continuous curved steel girder bridge is used, three spans being the most common configuration for this bridge class. The typical bridge configuration used in this study has three spans of equal length, 30.3 m, giving an overall length of 90.9 m, with a column height of 5.2 m. The bridge structure is modeled in OpenSees [22]. The detailing of the typical reinforced concrete column, the discretization of the beam, and the cross-sections of the superstructure are shown in Figure 7. When the superstructure is made continuous, the demand appears to shift partially from the bearings to the columns and abutments. The deck is modeled using elastic beam-column elements; the deck width is 15 m, and the deck is carried by eight steel plate girders. The bearing model considers high-type bearings, which are typically used for longer spans and are therefore deemed appropriate for this model. The nominal cylinder strength of the concrete is assumed to be 20.7 MPa, while the reinforcing steel has a nominal yield strength of 414 MPa. The unconfined concrete behavior of the column and cap beam sections is modeled using the Concrete01 material provided in OpenSees; this material uses the Kent-Scott-Park model, which employs a degraded linear unloading-reloading stiffness and a residual stress. The columns and cap beams are modeled with displacement-based beam-column elements in OpenSees, with an associated fiber section representing the true column section. The bridge uses a 914.4 mm diameter circular column with 12 longitudinal reinforcing bars. It should be noted that the overall curvature of the bridge subtends 45 degrees; its analysis is similar to that of straight bridges.

4.1. Statistical Analysis of the Distribution of Inputs

Analysis of the input data shows that the input variables do not follow a specific distribution. A number of methods transform the data in order to normalize them: the log transform, the Box-Cox transform, the log probability plot, the Finney plot, and so forth. It was observed that the results of the Box-Cox transform do not fall into an acceptable range for kurtosis and skewness, and the normality hypothesis is therefore rejected. In the log transform method, for a variable $X$ the following transform is applied:

$$Y = \ln(X + a),$$

where $a$ is usually set to 1 and $X$ may take positive or negative values. As shown in Table 3, the kurtosis and skewness values indicate that the normality hypothesis is confirmed. The Kolmogorov-Smirnov and Shapiro-Wilk tests, reported in Table 4, both confirm the normality of the data after applying this transform. An appropriate normal distribution was fitted to each of the earthquake features after applying the log transform. The linear arrangement of the data on the Q-Q plot and the appropriate range of the data in the box plot after applying the log transform (the finalized input data of the neural network) are shown in Figures 8–10.
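A sketch of the transform and the normality checks named above, applied to one hypothetical lognormal-like feature, follows; with the study's actual IM data, the corresponding test results are those reported in Tables 3 and 4.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # placeholder raw feature

y = np.log(x + 1.0)                                # log transform with a = 1

print("skewness:", stats.skew(y))
print("kurtosis:", stats.kurtosis(y))
print("Shapiro-Wilk p-value:", stats.shapiro(y).pvalue)
ks = stats.kstest(y, "norm", args=(y.mean(), y.std(ddof=1)))
print("Kolmogorov-Smirnov p-value:", ks.pvalue)
```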

After factor analysis, the NN has 2 input nodes and two hidden layers, with hidden node counts chosen to balance prediction accuracy against computational efficiency. The output layer has 4 nodes corresponding to the EDPs for the IMs. All of the processing performed up to this point is illustrated in Figure 11.

5. ANN Utilization Results and Discussion

ANNs are useful in applications where the underlying process is complex, such as the nonlinear response estimation of structures. Therefore, in predicting the seismic demand of the structure (EDP) for a limit state using an NN, the network architecture and training are very important. In this study, curvature ductility, bearing deformation, and abutment deformation are employed. The predictive ability of the neural network obviously depends on input parameters, such as IA, IC, CAV, and SED, that are representative of the earthquake ground motions. In view of the results presented, the authors adopted the Levenberg-Marquardt algorithm (LMA) in this study. This paper discusses every aspect of ANN model development, including training data collection, data processing, and training algorithms, in the search for the best architecture and performance.

The ANN is adopted in this study to predict rational values for the curvature ductility of the columns and the abutment and bearing deformations in bridges with concrete frames and steel decks. Hence, an ANN structure of 2:HN:4 was utilized, with two input factors and output values of curvature ductility and abutment and bearing deformations.

In the first approach, the prediction of responses is performed within each of the 129 seismic groups extracted from the NTHA.

In the second approach, the prediction is performed on a single seismic group that contains all the input data from the previous 13 seismic groups. Despite having 6708 samples in this approach, the evaluation of a neural network with 4 outputs (such as the absolute maximum ductility of the columns in the two directions) showed low accuracy under the LMA. Because of the incremental procedure of adding inputs and outputs, the data do not follow a proper distribution, which creates additional complexity for the neural network. Therefore, for prediction in the second approach, the outputs are computed separately from the 1677 samples and 4 neural networks are used, one per output.

The average predicted results show more than 85 percent accuracy in regression with the LMA (Tables 5–7). A great number of trials were conducted, and the ANN model results, together with the optimal number of hidden layer nodes, are presented. The performance of the ANN back-propagation methods for output estimation may be found in Table 5. In light of these tables, the network configurations differ considerably in terms of regression coefficients. A comparison of the algorithms in Table 5 and Figure 12 indicates that the LMA exhibits the most favorable performance in terms of the fitted-equation parameters (slope and intercept), regression coefficient (R), and MSE. In other words, the LM algorithm performs better in estimating the responses of the bridge models than the other ANN approaches. In the first approach, algorithms other than the LMA (e.g., SCG) also estimate the response values well, but at the cost of increased training time. The OSS algorithm can predict appropriate values in the linear range, while the nonlinear process remains unpredictable. From Tables 5–7, the success of the LM training algorithm clearly depends on the neural network structure and the data set. The results in Table 6 represent the performance of the LMA in predicting the responses of the bridge in the first approach, and this trend is confirmed for approach 2 in Table 7. Hence, feed-forward networks with a tan-sigmoid transfer function in the hidden layer and a linear transfer function in the output layer were accepted; these types of neural networks are well suited to applications in which regression functions are estimated. The number of hidden neurons was determined by trial and error. Figure 12 presents the regression plot, error histogram, and MSE of the predicted response.

The number of nodes in the hidden layers was selected to be between 5 and 10 in this study. The optimal number of hidden nodes was determined by obtaining separate solutions for each candidate number. Fewer nodes in the hidden layers generally give better data suitability and performance.

In addition to the learning rate, number of iterations, momentum constant, and error tolerance presented in Table 2, the most important factor influencing the success of the program is the learning algorithm. In both categories of Table 1, with the exception of LM and RP, the algorithms exhibit poor performance in approach 1, although the problem can still be solved owing to their different convergence characteristics. Precise results are attained when a function appropriate to the problem behavior is adopted.

In approach two, when the number of iterations and the training period are considered, the LMA converges to the response values faster than the other 10 algorithms; if training time is taken into account, it is the most suitable option. Among the learning algorithms of the second category, GDA, GDM, and GDX give the worst training and test error tolerances of all the ANN algorithms in approaches 1 and 2. The data of this study are therefore not appropriate for the gradient descent algorithms in these categories.

Another noteworthy point is the highly rapid and highly accurate (over 90%) estimation of the extracted response by the LM method, which avoids the complicated, time-consuming conventional technique.

6. Fragility Curves Based on Neural Network

IDA provides a comprehensive assessment of the maximum response of the structure (e.g., column ductility), often called the engineering demand parameter (EDP), versus an appropriate intensity measure (IM) chosen to represent the seismic hazard (e.g., PGA, Sa). As mentioned, the ANN is a computational model inspired by the structure and functionality of biological neurons, used as a nonlinear statistical data modeling tool to model complex relationships between inputs and outputs. Figures 13–16 show the fragility curves for each of the different limit states based on the NN and NTHA methods; the curves from the two approaches show good agreement.

7. Conclusion

The primary objective of this study was to derive seismic fragility curves of the structure using soft-computing-based tools and to compare them with the analytical seismic fragility approach. The HS algorithm is presented for optimizing the mean and standard deviation of the PSDMs in the fragility framework.

The trained ANN is shown to be effective in predicting the responses related to the limit states using a set of influence variables comprising input and output factors. The performance of the developed LM models is considered satisfactory. The novelty of the approach presented in this work lies in the fact that the developed ANN model is based on features extracted by factor analysis, which reduces the computational complexity of the neural network.

In both approaches, the difference between these curves is reasonable from the slight to the collapse limit states, considering the nonlinearity and complexity of the problem that the neural network must solve. The only significant difference in the probability of failure between the curves is due to the range of the damage index and the coefficients of the linear equations fitted by the neural network. Obviously, if the number of training samples increases, network performance and efficiency will improve. The fragility-curve approach discussed here can be extended to include uncertainties; the reliability and accuracy of the neural network remain potential issues for investigation.

Excessive analysis time, the complexity of nonlinear analyses, the algorithm for solving the nonlinear equations, nonconvergence, error tolerance, the analysis time period, the damping ratio, the nonlinear material behavior curves, the nonlinear element type, bond-slip effects in concrete columns, concrete tensile behavior, the strain-hardening percentage of the steel, the optimal number of section fibers, proper mass distribution, and the selection of seismic records are among the problems involved in producing fragility curves from NTHA data. A total of 1677 NTHAs required 12 days on a computer with a Core i5 CPU, at an average analysis time of 10 minutes. The statistical work required to extract the fragility curves should also be taken into consideration.

Three-span bridges with the specifications considered in this study (small horizontal arc angle) are commonly used in a variety of research. Future studies should determine a framework for the "macro- and micro-specifications of structures" and the "features of seismic records," so that fragility curves based on a general ANN can be put forward using NTHA data.

In the technical literature, ANNs are generally adopted to obtain fast solutions that save time and cost. The present study likewise shows that the LMA, with a high accuracy rate, can be applied to present fragility curves rapidly.

In this study, fragility curves are estimated on the basis of NTHA data and soft computing. The agreement of these curves, obtained by the two methods (analytical and soft computing), is judged to be satisfactory for the components considered. It should be noted that the ANN training pattern is based on the outputs obtained from the NTHA method for each of the components, and that an adequate data set and well-developed extracted features are indispensable for achieving accurate results. For a structural benchmark in future research, the main advantage of applying the NN is the reduction of the time required to extract fragility curves.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.