Abstract

The rotation capacity of wide-flange beams is a mechanical parameter that characterizes a structural member’s ductility. It is a crucial factor in the plastic design of wide-flange beams and is especially relevant in extreme circumstances such as earthquakes. This study proposes an approach that facilitates the calculation of the rotation capacity (R) using a soft computing technique developed on an experimental database accumulated from prior studies. An ensemble decision tree (EDT) model was investigated to construct a soft computing model that accurately predicts R based on training and testing datasets. The model’s performance was assessed using well-known criteria, namely the coefficient of determination (CC), the root mean square error (RMSE), and the mean absolute error (MAE). With CC = 0.925, RMSE = 3.20, and MAE = 2.60, the study’s findings indicate that the EDT model accurately estimates the rotation capacity of wide-flange steel beams. Furthermore, sensitivity analysis and 2D partial dependence analyses were conducted to quantify the effect of the factors that influence R. This work could be a significant step toward determining the R of wide-flange steel beams and aiding in improving structural member design.

1. Introduction

Currently, with the ongoing development of science and technology, steel structures are increasingly being applied in practice and play a crucial role in structural works. Structural steel beams, in particular, are commonly employed in building construction. A wide-flange beam is a regularly utilized component because of its effectiveness in bending about an axis with a large moment of inertia. As illustrated in Figure 1, the behavior of wide-flange beams may be divided into three regimes: elastic, inelastic, and plastic. The inelastic segment illustrates a transition between elastic and plastic behavior as more and more fibers in the cross section yield. In all cases, local plate buckling of the compression flange or of the web in flexural compression, or lateral-torsional buckling, is the leading cause of the beam’s collapse. The plastic behavior is of particular interest in this work since it allows moment redistribution in indeterminate systems [1].

Rotation capacity is crucial in the plastic and seismic design of building structures. In plastic design, the member must form plastic hinges, which must rotate until the collapse mechanism is attained without sacrificing moment capacity. Thus, the appropriate redistribution of bending moments is ensured [2]. This rotation capability is also critical in earthquake-resistant design to ensure that part of the input seismic energy is dissipated through plastic behavior. As a result, determining the rotation capacity of steel structures becomes critical [3, 4].

In the literature, the rotation capacity is specified as a nondimensional parameter. For example, Salmon and Johnson [5] defined R as a measure of a cross section’s deformation capacity before that capacity is exhausted by instability. Lay and Galambos [6] defined R as the ratio of the plastic rotation (θh) after the moment decreases below the plastic moment (Mp) to the elastic rotation (θp) at first attainment of Mp. In another approach, Kemp [7] defined R as the ratio of the plastic rotation up to the maximum moment on the moment-rotation curve to θp. Finally, the American Society of Civil Engineers (ASCE) [8] proposes a frequently used definition of R as the ratio between the rotation at which the moment capacity on the unloading branch falls below Mp and the theoretical rotation at which the full plastic capacity is reached. Additionally, theoretical, empirical, and approximate techniques for determining the available rotation capacity of wide-flange steel beams have been given in the literature, as described by Gioncu and Petcu [9, 10].
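In nondimensional form, the ASCE definition above is commonly written as follows (a reconstruction from the verbal definition, where θu denotes the rotation at which the moment on the unloading branch falls back to Mp):

$$R = \frac{\theta_u - \theta_p}{\theta_p} = \frac{\theta_u}{\theta_p} - 1.$$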

Machine learning (ML) algorithms have improved rapidly over the past few years, and several researchers have presented new methods to estimate the R of wide-flange steel beams. The work of Guzelbey et al. [3] was among the first to be reported. In that research, a neural network (NN) model was developed to predict the R of wide-flange steel beams based on experimental data gathered from the literature. The suggested NN model was shown to be in close accordance with the experimental data, with high precision (a correlation coefficient of 0.997). Following that, Cevik [4] used genetic programming (GP) to create an estimation model for wide-flange steel beams’ rotation capacity. The proposed GP formula was shown to be considerably more accurate than the numerical results and the existing analytical equations. Other machine learning approaches are also addressed in the works of Cevik [11], Samui et al. [12], and Alavi et al. [13] for developing predictive models of wide-flange steel beams’ rotation capacity. The results and the machine learning algorithms employed in these studies are presented in Table 1.

Three observations are drawn from this literature review on the prediction of the rotation capacity of wide-flange steel beams. First, while various studies have suggested suitable machine learning models for predicting the R value, the selection procedure is questionable, since no independent validation of the models was carried out; the proposed models and their prediction accuracy were validated solely on the training and testing datasets selected in the corresponding studies. This raises a second issue regarding the use of the previously suggested models during the design phase of wide-flange steel beams: the reliability of their predictions is uncertain. Last but not least, it is not simple to interpret machine learning models and understand their applications in structural engineering without being able to guarantee the accuracy of the predictions. In fact, machine learning tools are often said to be black boxes. This has recently led to the development of explainable AI, which helps engineers comprehend the outcomes of an AI system. However, without confirmation of the reliability and generalizability of ML models, their interpretation can be difficult. This is also a drawback of existing studies on wide-flange steel beams, because the effect of the input factors on the rotation capacity has not yet been analyzed.

Overall, the main objective of this research is to predict the rotation capacity of wide-flange steel beams and to determine the influence of the input parameters on the rotation capacity. The development of a machine learning model based on an ensemble decision tree is presented, a model that has not been previously investigated for this problem. Moreover, repeated k-fold cross-validation is conducted to ensure the model’s reliability and generalizability. To this aim, an experimental dataset on wide-flange steel beams’ rotation capacity is used to illustrate the research’s methodological approach. Finally, a variety of diagrams generated with the developed model demonstrate the effect of the beams’ characteristics on the resulting rotation capacity. The remainder of this paper is organized as follows: Section 2 contains the fundamental details about the dataset, followed by basic information on the algorithm used (Section 3). The results, discussions, and practical implications are presented in Section 4, and conclusions are drawn in Section 5.

2. Database Construction

This paper proposes an ensemble model, based on 77 experimental results, to forecast the rotation capacity of wide-flange beams. The dataset was compiled from 7 major international journals [7, 14–19]. The study’s purpose is to determine the wide-flange beam’s rotation capacity (denoted R). The input variables considered are the half-length of the flange (denoted b, mm), the web’s height (denoted d, mm), the flange’s thickness (denoted tf, mm), the web’s thickness (denoted tw, mm), the beam’s length (L, mm), the flange’s yield strength (Fyf, MPa), and the web’s yield strength (Fyw, MPa). Figure 2(a) illustrates the geometric form of the cross-sectional variables of the beams under test. The procedure for conducting the experiment to establish the wide-flange steel beams’ rotation capacity is illustrated in Figure 3.

The ranges of the inputs and output, including the maximum and minimum values, are as follows: the flange half-length varies over a fairly wide range, from 36.95 to 150.4 mm; the height of the web ranges from 120.3 to 320 mm; the thickness of the flange ranges from 1.44 to 17.3 mm; and the thickness of the web ranges from 4 to 11.5 mm. The range of the beam length is quite large, from 940 to 4000 mm; the yield strength of the flange varies from 236 to 817 MPa. Finally, the yield strength of the web ranges from 217 to 990 MPa.

The distribution graph and correlation between the input and output parameters considered in this investigation are shown in Figure 4. Pearson’s correlation coefficient (r) was computed and reported for each pair of parameters. As can be seen, R has no significant direct correlation with any of the input parameters. Some of the input parameters are highly correlated: the flange’s thickness and the web’s thickness (correlation of 0.91), the flange’s half-length and thickness (0.83), and the flange’s half-length and the web’s thickness (0.84). Although these parameters are highly correlated, they describe the sizes of different components of the beam, which are related to the local stability of the beam. Therefore, all input parameters are retained in this study after the collection, analysis, and evaluation process. Finally, the dataset is normalized to the range [0, 1] to reduce numerical errors generated by the EDT model during simulation, a method commonly used in artificial intelligence problems.
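As an illustration, the min-max scaling step could be implemented as in the following sketch (the column names and placeholder data are ours; the paper does not report its exact implementation):

```python
# Minimal sketch of min-max normalization to [0, 1], assuming the data are
# held in a pandas DataFrame with hypothetical column names.
import numpy as np
import pandas as pd

def min_max_normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Scale every column linearly so its minimum maps to 0 and its maximum to 1."""
    return (df - df.min()) / (df.max() - df.min())

columns = ["b", "d", "tf", "tw", "L", "Fyf", "Fyw", "R"]
data = pd.DataFrame(np.random.rand(77, 8), columns=columns)  # placeholder values
data_scaled = min_max_normalize(data)
```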

3. Model Details

3.1. Ensemble Decision Trees (EDT)

Quinlan introduced decision trees (DT) [20], a popular machine learning method that can be applied to many real-world problems. In a recursive partitioning technique, the data points are divided at each node using a given split criterion. The DT’s fundamental premise is to use a set of criteria to find the regions with the most homogeneous output and input variables, after which each region is fitted with a constant. The DT technique offers the benefits of being nonparametric, simple to grasp, and rapid to fit, even for large problems, without requiring much statistical understanding. However, putting the decision tree into practice can be tricky. For instance, even small changes in the dataset may have a substantial impact on the tree structure. Similarly, on an unseen dataset, the results may no longer be accurate (overfitting). To address these drawbacks, ensemble approaches combine multiple decision trees to provide better predictive performance than a single decision tree (Figure 5). The principle behind the ensemble model is that a collection of weak learners comes together to produce a more powerful learner. Even when the settings are altered, the performance of decision tree ensembles remains rather good. A variety of ensemble approaches have been suggested; some of them are general approaches that may be applied to any model, such as bagging [21] and boosting [22].

Bagging (bootstrap aggregation) is used to decrease the variance of the decision tree. The aim is to produce diverse subsets of data from an initial training sample by sampling randomly with replacement. Each subset of data is then used to build a decision tree, so that a collection of diverse models is generated. The aggregated prediction from the various trees is more robust than that of a single decision tree. Boosting is a different method for creating a set of predictors. Here, the learners are trained sequentially: the early learners fit basic models to the data, and the data are then examined for errors. At each stage, new trees are fitted (to random samples) with the intention of correcting the net errors of the prior trees.

This work considers the EDT model to perform the prediction tasks, with a decision tree (DT) as the base estimator. It is important to note that EDT is relatively similar to the random forest (RF) algorithm: both fit multiple models on different subsets of a training dataset and then combine the prediction results from all models to reach the final decision. The fundamental difference is that RF extends EDT bagging by randomly selecting a subset of the features for each data sample. In RF, only a random subset of the features is considered, and the best split feature from that subset is used to split each tree’s node. By contrast, in EDT bagging, all features are considered when splitting a tree’s node.
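The distinction can be made concrete with scikit-learn, where the two estimators below differ essentially in how many features are eligible at each split (a sketch under our own settings, not the configuration used in the paper):

```python
# EDT-style bagging vs. random forest: both bootstrap the training data,
# but only the forest restricts each split to a random feature subset.
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

# EDT bagging: every tree may split on any of the features.
edt = BaggingRegressor(
    estimator=DecisionTreeRegressor(min_samples_leaf=1),
    n_estimators=100,
    bootstrap=True,
    random_state=0,
)

# Random forest: each split considers only a random subset of features.
rf = RandomForestRegressor(
    n_estimators=100,
    max_features="sqrt",
    random_state=0,
)
```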

3.2. Repeated K-fold Cross-Validation

In machine learning, cross-validation is a common technique for preventing overfitting during the training phase. A dataset is typically split into three subsets: a training set, a validation set, and a testing set. The first set is used for training, the validation set is used during the training phase to verify the trained model’s accuracy, and the testing set is used for the final evaluation of the model. When the dataset is split into only two parts, the training and testing datasets, cross-validation is likewise an option to avoid overfitting problems.

The test dataset is kept separate and saved for the final assessment stage, which evaluates the model’s “response” when encountering completely unknown data. After just one run, the K-fold cross-validation approach might produce a noisy estimate of the model’s performance, since different splits of the data can yield quite different results. Repeated K-fold cross-validation is therefore a technique to improve the estimate of a machine learning model’s predictive performance: the cross-validation procedure is simply repeated numerous times and the results are averaged over all folds and runs. This average, together with its standard error, represents the model’s true underlying mean performance on the dataset more accurately. The basic idea of the repeated K-fold algorithm is to shuffle and randomly sample the dataset many times, so that the resulting estimate is robust because most samples take part in both training and testing across runs. The precision with which this cross-validation approach assesses a machine learning model is determined by two parameters. The first, K, is an integer specifying the number of folds (subsets) into which the supplied training dataset is split. The model is then trained on K−1 of the K folds, with the remaining fold used to validate the model. These steps are repeated a specific number of times, decided by the algorithm’s second parameter, giving rise to the name repeated K-fold cross-validation. The model’s final accuracy is determined by the mean performance score across all the runs.
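A minimal sketch of this scheme with scikit-learn is given below (10 folds and 10 repeats, matching the setup used later in this paper; the placeholder data and the scoring choice are ours):

```python
# Repeated K-fold cross-validation: 10 folds, repeated 10 times, with the
# RMSE averaged over all 100 fold/repeat combinations.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(0)
X_train = rng.random((54, 7))  # placeholder for the 7 normalized inputs
y_train = rng.random(54)       # placeholder for the normalized target R

model = BaggingRegressor(n_estimators=100, random_state=0)
rkf = RepeatedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(model, X_train, y_train, cv=rkf,
                         scoring="neg_root_mean_squared_error")
print(f"mean RMSE over all folds and repeats: {-scores.mean():.3f}")
```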

3.3. Partial Dependence Plot

A partial dependence (PD) plot shows the functional relationship between a small number of input variables and the model’s predictions. It demonstrates how the predictions are influenced by the values of the input variables of interest. The most fundamental PD plots are 1-way plots, which show how a model’s predictions are affected by a single input. PD plots with two input features of interest demonstrate how the two features interact with one another.

PD plots examine the variables of interest over a specified range. At each value of the variable, the model is evaluated for all observations of the other model inputs, and the outcome is then averaged. As a result, the relationship they describe is only valid if the variable of interest does not interact substantially with the other model inputs.

After marginalizing out the effects of all other features, the partial dependence function of a model $f$ describes the expected influence of a feature set on the prediction. The partial dependence of a feature set $X_{\mathcal{S}}$, $\mathcal{S} \subseteq \{1, \ldots, p\}$ (usually $|\mathcal{S}| = 1$), is defined as follows:

$$\mathrm{PD}_{\mathcal{S}}(x_{\mathcal{S}}) = \mathbb{E}_{X_{\mathcal{C}}}\big[f(x_{\mathcal{S}}, X_{\mathcal{C}})\big],$$

where $X_{\mathcal{C}}$ are the remaining features, so that $\mathcal{S} \cup \mathcal{C} = \{1, \ldots, p\}$ and $\mathcal{S} \cap \mathcal{C} = \emptyset$. The PD is estimated using Monte Carlo integration:

$$\widehat{\mathrm{PD}}_{\mathcal{S}}(x_{\mathcal{S}}) = \frac{1}{n} \sum_{i=1}^{n} f\big(x_{\mathcal{S}}, x_{\mathcal{C}}^{(i)}\big).$$

For simplicity, we write $\mathrm{PD}$ instead of $\mathrm{PD}_{\mathcal{S}}$ and $x$ instead of $x_{\mathcal{S}}$ when we refer to an arbitrary $\mathcal{S}$. The PD plot consists of a line connecting the points $\{(x^{(r)}, \widehat{\mathrm{PD}}(x^{(r)}))\}_{r=1}^{G}$, with $G$ grid points (denoted here by $G$ to avoid confusion with the rotation capacity $R$) that are usually equidistant or quantiles of $x$.
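In practice, these estimates can be obtained directly from a fitted model, for instance with scikit-learn’s `partial_dependence` (a sketch with placeholder data and illustrative feature indices):

```python
# 1D and 2D partial dependence from a fitted ensemble model.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
X = rng.random((77, 7))  # placeholder inputs: b, d, tf, tw, L, Fyf, Fyw
y = rng.random(77)       # placeholder rotation capacity R
model = BaggingRegressor(n_estimators=100, random_state=0).fit(X, y)

# 1D PDP for a single feature (index 5 ~ Fyf) and 2D PDP for a pair (L, Fyf).
pd_1d = partial_dependence(model, X, features=[5], grid_resolution=20)
pd_2d = partial_dependence(model, X, features=[(4, 5)], grid_resolution=20)
```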

3.4. Performance Assessment

A variety of metrics were used in this work to measure the effectiveness of the proposed ML model: the coefficient of determination (CC), the mean absolute error (MAE), and the root mean square error (RMSE). MAE expresses the average magnitude of the model’s error but does not capture skewed trends between the model output and the actual data. The RMSE is likewise a fundamental criterion for evaluating predictive modeling performance and is especially sensitive to large error values; consequently, the model error is more stable when the RMSE is close to the MAE. RMSE, like MAE, lies in the range (0; +∞). CC shows how well the data fit the model, ranging from −∞ to 1: lower values of CC indicate poor model performance, while CC values near 1 indicate strong model accuracy. These criteria are computed as follows [23–25]:

$$\mathrm{CC} = 1 - \frac{\sum_{i=1}^{n} (E_i - P_i)^2}{\sum_{i=1}^{n} (E_i - \bar{E})^2},$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (E_i - P_i)^2},$$

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \lvert E_i - P_i \rvert,$$

where $E_i$ is the actual value, $P_i$ is the model’s output, $\bar{E}$ is the mean of the actual values, and $n$ is the number of data points.
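A direct implementation of the criteria above might look as follows (our sketch; variable names match the equations):

```python
# CC (coefficient of determination), RMSE, and MAE computed from arrays of
# actual values E and model predictions P.
import numpy as np

def cc(E, P):
    E, P = np.asarray(E, float), np.asarray(P, float)
    return 1.0 - np.sum((E - P) ** 2) / np.sum((E - E.mean()) ** 2)

def rmse(E, P):
    E, P = np.asarray(E, float), np.asarray(P, float)
    return float(np.sqrt(np.mean((E - P) ** 2)))

def mae(E, P):
    E, P = np.asarray(E, float), np.asarray(P, float)
    return float(np.mean(np.abs(E - P)))
```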

4. Results and Discussion

4.1. Prediction Results

This section describes the numerical simulation process used to develop the EDT model for accurately predicting the R of wide-flange beams. The first part of this section is dedicated to the selection of the hyperparameter values of the EDT model. In general, the performance of tree-based machine learning models depends on important hyperparameters such as the number of trees or the variables defining the branches and leaves. Therefore, a trial-and-error technique is used in this study to find the best hyperparameters for the proposed EDT model. As a result, the number of trees was chosen as 1000, the bagging method was selected, the number of learning cycles was 30, and the minimum leaf size was 1.
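Mapped onto scikit-learn, such a configuration could be sketched as below; the exact toolbox used is not specified, so the parameter correspondence (in particular between “learning cycles” and the number of bagged trees) is our assumption:

```python
# Hedged sketch of the reported EDT configuration: bagged decision trees
# with a minimum leaf size of 1.
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

edt = BaggingRegressor(
    estimator=DecisionTreeRegressor(min_samples_leaf=1),  # minimum leaf size = 1
    n_estimators=30,   # "learning cycles" read as the number of bagged trees
    bootstrap=True,    # bagging method
    random_state=0,
)
```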

These hyperparameters were chosen after an extensive model selection process in which repeated k-fold cross-validation was conducted. Repeated k-fold CV is a key highlight of this research, as it makes the EDT machine learning model more reliable. Specifically, the training dataset is first partitioned into ten folds for cross-validation. With 10 simulations, each corresponding to a 10-fold CV, the average performance on each training and testing dataset is obtained and displayed in Figure 6. Notably, the testing data (30% of the dataset) were not considered during hyperparameter selection and were only used for model performance assessment; in other words, the EDT hyperparameters are chosen based on the training data alone.

It can be observed that the EDT model’s prediction ability changes as the training data are randomly shuffled. The performance evaluation criteria for the training dataset vary within certain intervals, but the amplitude is relatively stable. For instance, the RMSE fluctuates around 2.6, with a minimum of 2.2 and a maximum of 3.1, obtained respectively with the fourth repeated CV (denoted CV4) and the first repeated CV (CV1). Besides, the MAE varies from 1.6 to 2.1, with CV4 and CV1 again the best and poorest simulations, respectively. Lastly, the CC ranges from 0.940 to 0.965, with CV3 the best and CV6 the poorest. It can therefore be concluded that the trained EDT model has a high prediction accuracy on the training dataset and may be used for further evaluation on the testing dataset. The testing dataset contains 23 samples that were entirely unknown throughout the training process, and the suggested EDT model with 10 times repeated 10-fold cross-validation shows excellent prediction performance on it. Furthermore, no severe overfitting is observed, since EDT’s performance on the training set is only moderately higher than on the testing set. When predicting entirely unknown data, the EDT model produces excellent performance, with the values of CC, RMSE, and MAE ranging over [0.83; 0.925], [3.2; 6], and [2.6; 4.3], respectively. CV1 has the best overall performance based on all three criteria. As can be observed, the difference between the training and testing datasets of CV1 is not significant, and the EDT model still performs well, with a CC of 0.925. Overall, engineers can use this model to estimate the R of wide-flange beams thanks to its high accuracy.

The following part presents typical prediction results derived from the trained and verified EDT model of the previous section. Figure 7 shows the actual and predicted rotation capacity of wide-flange beams using the proposed EDT model for the training and testing datasets. The solid lines represent the experimental values, and the dotted ones represent the predicted values. As shown, the predicted values of the 54 samples in the training dataset are remarkably close to the actual findings. Only minor errors were observed for the remaining samples in the test dataset (23 test results). This accuracy is quantified in the next part by the error levels and the correlation between the experimental findings and the prediction results of the EDT model.

Figure 8(a) shows the EDT model’s error distribution and cumulative error for the training set, while Figure 8(b) shows those for the testing set. According to the comparison, the predicted values are close to the experimental values. Moreover, the cumulative distribution line (red line) makes it simple to read off the percentage of errors within a given range. For example, for the training data, 85 percent of samples have an error between the experimental and simulated EDT values within the range [−3; 3]. Similarly, 70% of the testing set falls within the error range [−3; 3].

Furthermore, the regression graphs illustrating the correlation between the experimental and predicted values of R of wide-flange beams for the training part (Figure 8(c)) and the testing part (Figure 8(d)) are shown. The linear regression lines are relatively close to the diagonals, indicating that the outputs of EDT were highly correlated with the experimental ones. In summary, Table 2 shows the results of the EDT model’s three performance evaluation criteria. The training set has a better CC value of 0.94, while the testing set is at 0.925. The training dataset’s RMSE and MAE are 3.05 and 2.15, respectively, while those for the testing dataset are 3.2 and 2.6. The findings show that the suggested EDT model is a good predictor and reasonably forecasts the wide-flange beams’ rotation capacity.

Finally, a comparison of EDT with a deep neural network (DNN) model is conducted in this part to confirm the performance of the tree-based algorithm proposed herein. For the DNN, the BFGS quasi-Newton backpropagation training function is used with a two-hidden-layer structure containing 8 and 6 neurons in the first and second hidden layers, respectively. Trial-and-error tests are also conducted: the tansig activation function is selected for the two hidden layers, whereas a linear function is applied to the output layer. The results (see Figure 9) show that for the training set, the DNN achieves excellent prediction results (RMSE = 1.70, MAE = 0.33, and CC = 0.972). However, the DNN model appears to suffer from overfitting, as the prediction results are relatively poor on the testing set (RMSE = 5.67, MAE = 3.67, and CC = 0.81). Compared with those obtained with EDT (Table 2), it is clear that the EDT model is superior to the DNN in terms of prediction accuracy and stability.
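An approximate scikit-learn counterpart of this baseline is sketched below; mapping tansig to `tanh` and the BFGS training function to the L-BFGS solver is our assumption, since the original toolbox is not stated:

```python
# Approximate re-creation of the DNN baseline: two hidden layers (8 and 6
# neurons), tanh activations, linear output, quasi-Newton training.
from sklearn.neural_network import MLPRegressor

dnn = MLPRegressor(
    hidden_layer_sizes=(8, 6),  # first and second hidden layers
    activation="tanh",          # tansig equivalent; output layer is linear
    solver="lbfgs",             # L-BFGS in place of BFGS backpropagation
    max_iter=2000,
    random_state=0,
)
```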

4.2. Sensitivity Analysis and Discussion

Sensitivity analysis can be used to study the impact of an independent factor on the value of a dependent variable under certain conditions. It is a technique for predicting how an outcome changes when the input factors are varied within a given set of constraints. For instance, an analyst can fix a number of variables and assess how changes to a single factor affect the outcome.

In this investigation, sensitivity analysis was conducted to assess the significance of the input factors on the wide-flange beam’s rotation capacity. To quantify the effect of each feature, a feature importance index was constructed based on varying the variables across levels from quantile 0 to quantile 1. The importance indices of the seven input variables are shown in Figure 10. The yield strength of the flange is the input variable with the highest feature importance index (0.515), followed by the length of the beam (0.5), the thickness of the web (0.42), and the yield strength of the web (0.34); the three less significant input variables (the height of the web, the thickness of the flange, and the half-length of the flange) have indices of 0.275, 0.205, and 0.115, respectively. Overall, four input parameters, namely the yield strength of the flange, the length of the beam, the thickness of the web, and the yield strength of the web, are the most important features affecting R. Therefore, further in-depth analysis of the interaction effects of these variables is conducted and discussed below.
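The implementation of the quantile-based index is not fully specified here, so the sketch below substitutes permutation importance, a related model-agnostic sensitivity measure, to illustrate how such a ranking could be produced:

```python
# Permutation importance as a stand-in sensitivity measure: shuffle one
# input at a time and record how much the model's score degrades.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.random((77, 7))  # placeholder inputs: b, d, tf, tw, L, Fyf, Fyw
y = rng.random(77)       # placeholder rotation capacity R
model = BaggingRegressor(n_estimators=100, random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in zip(["b", "d", "tf", "tw", "L", "Fyf", "Fyw"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```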

Finally, this part is dedicated to the presentation and analysis of the two-dimensional PDPs (Figure 11). As discussed in Section 3.3, calculating the PDP enables visualizing the relationship between variables and analyzing the effect of a combination of variables on the problem’s output. A 1D-PDP is generally calculated by fixing six variables and varying the studied variable over its value domain in the database. However, using a 1D-PDP is challenging when the objective is to analyze the influence of input variables on the target, mainly due to the complicated relationships and coupling effects between variables. A 2D-PDP overcomes this shortcoming by including two variables and analyzing the complex correlation between the two variables affecting R.

The 2D-PDP analysis of the four critical input parameters discussed previously is presented next. Figure 11(a) illustrates the PDP relationship between the length of the beam (L) and the yield strength of the flange (Fyf). As previously mentioned, these two variables have the most significant influence on R. When the values of L and Fyf change within the database’s bounds, the value of R varies between 6 and 13. In general, a small L value results in a large R, and as L increases from 1000 to 4000 mm, R tends to decrease regardless of the value of Fyf. When Fyf varies between 236 and 800 MPa, however, the variation of R is not uniform; precisely, R achieves its highest values when Fyf is about 300 MPa. Additionally, there is a region in the PDP heatmap in which R approaches a minimum value (close to 6) when Fyf is between 350 and 600 MPa and L is between 3000 and 4000 mm, whereas R reaches its maximum value when L is close to 1000 mm and Fyf approaches 300 MPa. Using this PDP analysis, structural engineers could gain guidance during the design phase of wide-flange beams to achieve the desired values of R.

Next, the influence of Fyf and the thickness of the web (tw) on the value of R is investigated (Figure 11(b)). It can be seen that R increases proportionately as tw increases, regardless of the value of Fyf. Consistent with the previous analysis, the value of R increases significantly when Fyf approaches 300 MPa. R reaches its maximum value when tw is between 10 and 11.5 mm and Fyf is less than 300 MPa, and its minimum value when tw is between 4 mm and 7 mm and Fyf ranges from 350 MPa to 600 MPa.

Figure 11(c) illustrates the PDP relationship between the yield strength of the web (Fyw) and Fyf and their effect on R. Unlike the two previous cases, a relatively large region can be observed in which R is minimized: the heatmap shows that such a zone is obtained when Fyf varies between 380 and 600 MPa and Fyw varies between 400 and 990 MPa. When both Fyf and Fyw are at their smallest possible values within the corresponding ranges, R reaches its maximum value. However, when the values of Fyw and Fyf lie within the remaining range, R undergoes a nonuniform change. Combining the analyses with respect to Fyf, it can be stated that a high value of R is obtained when Fyf lies outside the range from 380 to 600 MPa. The maximum R value is obtained when Fyf is about 300 MPa and L and Fyw are minimized, whereas tw is maximized.

Figure 11(d) illustrates the 2D-PDP interaction between L, tw, and R. In general, R increases when tw increases and L decreases. The region of smallest R is found when tw is less than 6 mm and L is more than 3000 mm. R achieves its maximum value when tw has the greatest value and L has the lowest value within the bounds of these inputs.

The diagram showing the relationship between L and Fyw and their effect on R is presented in Figure 11(e). R reaches its minimum value when L is close to 4000 mm and Fyw fluctuates over a wide range, from 300 MPa to 990 MPa. In general, regardless of Fyw, R tends to increase as L decreases. R reaches its maximum region when both L and Fyw take small values within the dataset’s range.

Figure 11(f) illustrates the combined effect of tw and Fyw on R. As can be seen, R tends to increase as tw increases, regardless of the value of Fyw. However, at constant values of tw, the change in R is insignificant as Fyw increases from 300 to 990 MPa. R reaches its minimum values when tw varies between 4 and 6 mm. On the contrary, when Fyw is below 400 MPa and tw is greater than 8 mm, R has its largest value.

Overall, R has a rather complex relationship with the input parameters, especially the four that significantly impact R, as shown by the feature importance analysis and the 2D-PDP evaluation. Based on the maps developed in this study, the EDT model can assist engineers in selecting appropriate geometric dimensions and material properties during the structural design phase.

5. Conclusion

Determining the rotation capacity of wide-flange beams has always been a critical topic in the building industry, particularly in extreme situations such as earthquakes. This work constructed and developed an EDT model to address this challenge. A database containing 77 experimental results was compiled from reputable international journals. The correlation between the EDT model’s outputs and the experimental values was evaluated using CC, RMSE, and MAE. According to the findings, the developed EDT model predicted the rotation capacity of wide-flange beams with high accuracy (CC = 0.925). Furthermore, the relationship between the input parameters and the wide-flange beam rotation capacity was determined using sensitivity analysis and 2D-PDP analysis. Overall, the proposed numerical tool based on EDT could be used by structural engineers to quickly estimate the rotation capacity of wide-flange beams from the input variables of the current work.

Data Availability

Data will be made available on request.

Conflicts of Interest

All authors declare that they have no conflicts of interest.