
Research Article | Open Access


Thuy-Anh Nguyen, Hai-Bang Ly, Hai-Van Thi Mai, Van Quan Tran, "On the Training Algorithms for Artificial Neural Network in Predicting the Shear Strength of Deep Beams", Complexity, vol. 2021, Article ID 5548988, 18 pages, 2021. https://doi.org/10.1155/2021/5548988

On the Training Algorithms for Artificial Neural Network in Predicting the Shear Strength of Deep Beams

Academic Editor: Zhen Zhang
Received: 17 Feb 2021
Revised: 21 Apr 2021
Accepted: 13 May 2021
Published: 24 May 2021

Abstract

This study aims to predict the shear strength of reinforced concrete (RC) deep beams based on artificial neural networks (ANN) using four training algorithms, namely, Levenberg–Marquardt (ANN-LM), quasi-Newton method (ANN-QN), conjugate gradient (ANN-CG), and gradient descent (ANN-GD). A database containing 106 results of RC deep beam shear strength tests is collected and used to investigate the performance of the four proposed algorithms. The ANN training phase uses 70% of the data, randomly taken from the collected dataset, whereas the remaining 30% of the data are used for the evaluation of the algorithms. The ANN structure consists of an input layer with 9 neurons corresponding to 9 input parameters, a hidden layer of 10 neurons, and an output layer with 1 neuron representing the shear strength of RC deep beams. The performance of the models is evaluated using statistical criteria, including the correlation coefficient (R), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results show that the ANN-CG model has the best prediction performance with R = 0.992, RMSE = 14.02, MAE = 11.24, and MAPE = 6.84. The results of this study show that the ANN-CG model can accurately predict the shear strength of RC deep beams, representing a promising and useful alternative design solution for structural engineers.

1. Introduction

Deep beams are defined as load-bearing structural elements in the form of simple beams, in which a considerable amount of load is transferred to the supports by a compression thrust joining the load and the reaction. Deep beams are characterized by a larger depth compared to conventional beams and are classified by the ratio of shear span to beam depth (a/h) or by the ratio of calculated span length to beam height (l/h). Several design codes give conditions for defining deep beams. For instance, according to IS Code 456-2000, a deep beam is defined by a ratio of effective span to overall depth (l/h) that does not exceed 2.0 for a simple beam and 2.5 for a continuous beam [1]. Besides, ACI 318-14 [2] classifies a beam as a deep beam if (a) the clear span does not exceed four times the overall member depth or (b) the shear span does not exceed twice the overall member depth. According to Eurocode 2 (EC2) [3], when the ratio l/h is less than three, the beam is considered a deep beam.

Currently, RC deep beams are widely used in structural works such as transfer beams, wall foundations, pile caps, floor partition walls, and shear walls [4]. Deep beams play a crucial role in the design of both large and small structures. In several cases, for architectural purposes, buildings are designed without columns over a very large span. If normal beams were used in such cases, failures such as bending failures might occur, so using deep beams is an effective solution that can increase the durability of structures [5–7]. Due to the large depth of deep beams, the primary failure mode is shear failure [8–10]. In deep beams, cracks often appear quite early, in the direction of the principal compressive stress and perpendicular to the direction of the tensile stress. In many cases, the crack appears vertical or inclined when the beam fails in shear, which can lead to a sudden failure of the beam as the beam depth increases [11]. The shear capacity of deep beams can be 2 to 3 times greater than that determined by the calculation methods used for conventional beams; therefore, the shear stress in a deep beam cannot be ignored as it can be in a conventional bending beam. The stress distribution is not linear even in the elastic phase, and at the ultimate state, the stress field no longer has the parabolic shape of conventional beams, which is also a significant reason for slippage problems in deep beams [5].

In the past several decades, many methods have been proposed to analyze the shear strength of deep beams, including the strut-and-tie method (STM) [12, 13] and the upper-bound theorem of plasticity theory [14, 15]. Based on the STM, theoretical methods to calculate the shear strength have been proposed, such as compression field theory (CFT) and modified compression field theory (MCFT) [16, 17], the softened strut-and-tie model (SSTM) considering the compression softening of concrete [18, 19], and the strut-and-tie model based on crack band theory [20, 21]. Besides, current design codes, such as ACI 318-14 [2], EN 1992-1-1:2004 [3], and CSA A23.3-04 [22], recommend the STM approach as a deep beam design tool. In addition, in-depth studies have been carried out to analyze the shear behavior of deep beams and to determine the most critical parameters affecting the shear strength. According to studies [4, 23–25], several important parameters have been identified, including the compressive strength of concrete, the yield strength of longitudinal and transverse reinforcement, the ratio of effective depth to breadth, and the main reinforcement ratio. In fact, the relationship between these parameters and the shear capacity of deep beams is nonlinear [9, 12, 23]. Consequently, building a model that can accurately estimate the shear strength based on mathematical equations is challenging [26]. Meanwhile, determining the deep beam shear strength by experimental tests or numerical analysis is limited by the complexity of the material and of the beam structure [12, 23]. To overcome these difficulties and to improve the ability to estimate the shear strength of deep beams, artificial intelligence (AI) approaches have been used in several investigations [27, 28].

Indeed, the construction field has effectively applied AI models to solve many problems, such as geotechnics [29, 30], building materials [31, 32], and structural analysis and design [33–35]. The application of AI models to problems related to the shear strength of deep beams has been studied by many scientists. Goh's first study in 1995 [36] applied the artificial neural network (ANN) model to predict beam shear resistance with 6 input parameters. Later, Sanad and Saka [37] also checked the effectiveness of the ANN model in predicting the shear strength of deep beams using 10 input parameters related to the geometry and material properties. The results showed that ANN provides an effective alternative solution for predicting the shear strength of reinforced concrete (RC) deep beams. The ANN algorithm is a widely used machine learning (ML) prediction tool, but the selection of an appropriate training algorithm is still an open question. In fact, it is challenging to find the best ANN model that accurately predicts the target while optimizing many factors, such as processing speed, numerical precision, and memory requirements. Such an optimization problem lies in the learning process of the neural network and can be addressed by using an appropriate training algorithm. Four principal training algorithms are commonly used: Levenberg–Marquardt (ANN-LM), the quasi-Newton method (ANN-QN), conjugate gradient (ANN-CG), and gradient descent (ANN-GD). A given training algorithm might be suitable for one problem but fail in another [38]. Gradient descent is the slowest training algorithm but requires less memory than the other three; the fastest algorithm is Levenberg–Marquardt, which requires the most memory. Therefore, an in-depth investigation is crucial to determine the best training algorithm in general, and for predicting the shear strength of deep beams in particular.
Besides, the selection of the best ANN black-box model raises a number of fundamental questions, especially the criterion used to define the best one. In the field of ML, the performance of models is assessed by different metrics [39–41], namely, the correlation coefficient (R) or the coefficient of determination (R²), mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE). A concise evaluation and comparison of different criteria need to be conducted to confirm the effectiveness of ML models.

Therefore, in this study, the procedure to determine the best ANN algorithm is conducted through different ANN training algorithms and evaluation metrics, with the aim of accurately and reliably predicting the shear strength of deep beams. To achieve this goal, the deep beam database is first constructed by gathering experimental results published in the literature. The general theory of ANN models is then presented, including the four previously mentioned training algorithms. An architecture of ANN models is proposed, along with an extensive investigation of the ANN epoch numbers. The best ANN algorithm is deduced by comparing different performance metrics and the corresponding probability density functions, taking into account the random sampling effect while constructing the two datasets. Finally, representative results in predicting the shear strength of deep beams are presented and compared with several existing prediction results in the available literature.

2. Significance of the Research Study

Accurate prediction of the deep beam shear strength is crucial in construction design. Although several machine learning models have been proposed to predict the shear strength of deep beams in the available literature, namely, genetic-simulated annealing [4], backpropagation neural network [42], artificial neural network [43], gene expression programming [43], support vector machine [42], multivariate adaptive regression splines [42], smart artificial firefly colony algorithm combined with least squares support vector regression [24], and adaptive neural fuzzy inference system [44], the prediction accuracy and reliability could be further improved. The contributions of the present investigation can be summarized as follows:
(1) Four representative training algorithms for the ANN model are investigated to predict the shear strength of deep beams, in which the training epoch number of each model is fine-tuned.
(2) The reliability of the ANN models is carefully evaluated by Monte Carlo simulations with a random sampling strategy to construct the database.
(3) The model using the conjugate gradient algorithm (ANN-CG) with 10 neurons in the hidden layer is deduced as the best predictor.
(4) The performance of the best ANN-CG architecture is compared with 10 previously published works in the literature and achieves the highest value of the correlation coefficient (R) and the lowest value of mean absolute error (MAE). Thus, the simplicity and effectiveness of the proposed approach using ANN-CG are confirmed.

3. Database Construction

In this study, the database used to develop the ML models is collected from published research. The dataset includes 106 test results of the shear strength of deep beams: 19 test results of high-strength RC deep beams from the study by Tan et al. [45], 52 test results from the work of Smith et al. [46], and 35 test results from the work of Kong et al. [47]. The database includes various parameters affecting the shear strength of RC deep beams (denoted as V): the ratio of effective span to effective depth (L/d), ratio of effective depth to breadth (d/bw), ratio of shear span to effective depth (a/d), concrete cylinder strength (f′c), yield strength of horizontal reinforcement (fyh), yield strength of vertical web reinforcement (fyv), ratio of horizontal web reinforcement (ρh), ratio of longitudinal reinforcement to concrete area (ρs), and ratio of vertical web reinforcement (ρv). Representative information on these parameters is detailed in Table 1, and the histogram of each input and output parameter is shown in Figure 1. The beam test diagram and a schematic illustration of RC deep beams are shown in Figure 2. Prior to the training of the ANN models, all input and output values are normalized to the range [0, 1]; predictions are converted back to the initial range of values for clarity and postprocessing.


Parameter | Notation | Unit | Min | Median | Average | Max | Std^a | SK^b

Ratio of effective span to effective depth | L/d | — | 1.05 | 3.08 | 2.88 | 5.38 | 1.12 | 0.08
Ratio of effective depth to breadth | d/bw | — | 2.84 | 2.99 | 4.44 | 10.03 | 2.24 | 1.49
Ratio of shear span to effective depth | a/d | — | 0.27 | 1.00 | 1.01 | 2.70 | 0.049 | 0.40
Concrete cylinder strength | f′c | 10⁻¹ kN/mm² | 0.16 | 0.21 | 0.26 | 0.59 | 0.111 | 6.91
Yield strength of horizontal reinforcement | fyh | kN/mm² | 0.00 | 0.48 | 0.40 | 0.50 | 0.15 | −1.60
Yield strength of vertical web reinforcement | fyv | kN/mm² | 0.00 | 0.38 | 0.35 | 0.48 | 0.18 | −1.11
Ratio of horizontal web reinforcement | ρh | 10⁻² | 0.00 | 0.45 | 0.49 | 2.45 | 0.5619 | 3.12
Ratio of longitudinal reinforcement to concrete area | ρs | — | 0.00 | 0.01 | 0.01 | 0.02 | 0.00 | −0.48
Ratio of vertical web reinforcement | ρv | 10⁻² | 0.00 | 0.48 | 0.51 | 2.45 | 0.5421 | 3.04
Shear strength | V | kN | 74.00 | 161.00 | 203.58 | 675.00 | 121.93 | 2.20

^a Standard deviation. ^b Skewness.

The database is randomly divided into two parts, generating the random sampling effect. The first part (containing 70% of the total data, 74 samples) is used to train the ANN network and is called the training part. The second part (the remaining 30% of the data, 32 samples) is used to verify the ANN models and is referred to as the testing part. The random sampling generates variability in the input space of the training part, which considerably affects the accuracy of the ML models. Besides, the purpose of separating the training and testing parts is to fully assess the accuracy of the models, as the testing data are entirely unknown to the model during the training phase. In general, the prediction capacity on unseen data is the most important factor; therefore, the results in the following sections focus on the evaluation criteria of the testing part.
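The normalization and random 70/30 split described above can be sketched as follows. This is a minimal NumPy sketch; the function name, seed, and stand-in data are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def normalize_split(X, y, train_frac=0.7, seed=0):
    """Min-max normalize inputs/output to [0, 1], then randomly split 70/30."""
    rng = np.random.default_rng(seed)
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    yn = (y - y.min()) / (y.max() - y.min())
    idx = rng.permutation(len(X))            # random sampling effect
    n_train = int(round(train_frac * len(X)))
    tr, te = idx[:n_train], idx[n_train:]
    return Xn[tr], yn[tr], Xn[te], yn[te]

# Stand-in for the 106-sample, 9-input database collected in the study
rng = np.random.default_rng(1)
X = rng.random((106, 9))
y = rng.random(106)
X_train, y_train, X_test, y_test = normalize_split(X, y)
```

With 106 samples, this yields the 74/32 split used in the study.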

4. General Presentation of ANN Models

An artificial neural network (ANN) is a computational model inspired by the human brain with its many biological neurons. It consists of many artificial neurons interconnected in a network that maps input data to output data. To learn this mapping, a set of learning rules known as backpropagation (backward propagation of error) is used. The structure of a backpropagation network is a combination of different layers: the input layer, the output layer, and the hidden layers. The input layer is the first layer, the output layer is the last one, and between the two lie one or more hidden layers (Figure 3).

During the training phase, the ANN learns to recognize patterns in the input data and compares the produced result with the desired output. The difference between the two is reduced through a backward working process until it is lower than a predefined criterion. Therefore, the selection of an appropriate training algorithm is very important for training a neural network. Training algorithms are the underlying engines for building neural network models: they learn features or patterns from the input data so that a set of internal model parameters can be found that optimizes the model's accuracy. There are many types of training algorithms, but the frequently used ones are gradient descent, conjugate gradient, quasi-Newton, and Levenberg–Marquardt.

4.1. Training Algorithms of ANN Model
4.1.1. Gradient Descent Algorithm (ANN-GD)

Gradient descent is an iterative optimization algorithm used in ML and deep learning problems with the goal of finding a set of internal variables that optimizes the model. Here, the "gradient" is the rate of inclination or declination of a slope, and "descent" means moving downward along it. Gradient descent proceeds in three steps, namely, (1) initialization of the internal variables, (2) evaluation of the model based on the internal variables and the loss function, and (3) update of the internal variables in the direction of the optimal point. The gradient descent method performs the iteration step

$$w_{i+1} = w_i - \eta\, \nabla f(w_i), \quad i = 0, 1, \ldots,$$

where $w_i$ is the set of variables to be updated, $\nabla f(w_i)$ is the gradient of the loss function $f$ with respect to $w_i$, and $\eta$ is the training rate. The rate $\eta$ can be a fixed value or determined by one-dimensional optimization along the training direction at each step. The nature of the optimization process is to find the points that minimize (or maximize) the loss function; the goal of gradient descent is to reach the global minimum. The stopping criterion can be (i) the maximum number of epochs is reached, (ii) the value of the loss function is small enough (i.e., the accuracy of the model is high enough), or (iii) the value of the loss function remains stable over a finite number of epochs. The gradient descent algorithm is often used with large neural networks, as its advantage lies in storing only the gradient vector instead of the full Hessian matrix. The diagram of the training process with gradient descent is shown in Figure 4.
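The iteration step and stopping criteria described above can be sketched on a toy loss. The toy quadratic and all names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gradient_descent(grad_f, w0, eta=0.1, max_epochs=1000, tol=1e-8):
    """Iterate w_{i+1} = w_i - eta * grad_f(w_i) until the gradient vanishes
    or the maximum number of epochs is reached."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_epochs):
        g = grad_f(w)
        if np.linalg.norm(g) < tol:      # stopping criterion
            break
        w = w - eta * g
    return w

# Toy loss f(w) = ||w - 3||^2 with gradient 2*(w - 3); minimum at w = [3, 3]
w_min = gradient_descent(lambda w: 2.0 * (w - 3.0), np.zeros(2))
```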

4.1.2. Conjugate Gradient Algorithm (ANN-CG)

The conjugate gradient algorithm can be considered one of the methods to improve the convergence rate of an artificial neural network, being intermediate between gradient descent and Newton's method. Its advantage lies in the fact that there is no need to evaluate, store, or invert the Hessian matrix. In this algorithm, the search is performed along conjugate directions, which generally produce faster convergence than gradient descent directions. These training directions are conjugated with respect to the Hessian matrix. The sequence of training directions is built using the formula

$$y_{i+1} = -\nabla f(w_{i+1}) + c_i\, y_i, \quad i = 0, 1, \ldots,$$

with the initial training direction vector

$$y_0 = -\nabla f(w_0),$$

where $y$ is the training direction vector and $c$ is the conjugate parameter.

The training direction is periodically reset to the negative of the gradient [48]. The parameters' improvement process with the conjugate gradient algorithm is defined by

$$w_{i+1} = w_i + \eta_i\, y_i, \quad i = 0, 1, \ldots,$$

where $\eta_i$ is the training rate, usually found by line minimization. The diagram of the training process with the conjugate gradient is shown in Figure 4.
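The conjugate direction update and reset can be sketched as follows. This sketch assumes a Fletcher–Reeves conjugate parameter, a backtracking line search, and a steepest-descent reset safeguard; these implementation details are assumptions for illustration, not the paper's exact choices.

```python
import numpy as np

def conjugate_gradient(f, grad_f, w0, max_iter=200, tol=1e-12):
    """Nonlinear conjugate gradient: y_{i+1} = -grad + c_i * y_i,
    with a backtracking line search for the training rate."""
    w = np.asarray(w0, dtype=float)
    g = grad_f(w)
    y = -g                                # initial direction: negative gradient
    for _ in range(max_iter):
        if g @ g < tol:
            break
        eta = 1.0                         # backtracking line search along y
        while f(w + eta * y) > f(w) + 1e-4 * eta * (g @ y):
            eta *= 0.5
        w_new = w + eta * y
        g_new = grad_f(w_new)
        c = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves conjugate parameter
        y = -g_new + c * y
        if g_new @ y >= 0.0:              # safeguard: reset to the negative gradient
            y = -g_new
        w, g = w_new, g_new
    return w

# Convex toy loss with minimum at [1, 2]
f = lambda w: (w[0] - 1.0) ** 2 + 2.0 * (w[1] - 2.0) ** 2
grad = lambda w: np.array([2.0 * (w[0] - 1.0), 4.0 * (w[1] - 2.0)])
w_min = conjugate_gradient(f, grad, np.zeros(2))
```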

4.1.3. Quasi-Newton Algorithm (ANN-QN)

The advantage of the quasi-Newton method is that it is computationally inexpensive: it does not require the many operations needed to evaluate the Hessian matrix (composed of the second partial derivatives of the loss function) and compute its inverse. Instead, an approximation of the inverse Hessian is built at each iteration, computed using only information on the first derivatives of the loss function. The quasi-Newton update is

$$w_{i+1} = w_i - \eta_i\, G_i\, \nabla f(w_i), \quad i = 0, 1, \ldots,$$

where $G_i$ is the inverse Hessian approximation and $\eta_i$ is the training rate. The quasi-Newton method is commonly used because it is faster than gradient descent and conjugate gradient. The diagram of the quasi-Newton method is shown in Figure 4.
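The idea of building the inverse-Hessian approximation from first derivatives only can be sketched with the BFGS member of the quasi-Newton family. The paper does not state which quasi-Newton variant it uses; BFGS is assumed here purely for illustration, as are the toy loss and line-search details.

```python
import numpy as np

def quasi_newton(f, grad_f, w0, max_iter=100, tol=1e-10):
    """Quasi-Newton (BFGS) sketch: keep an inverse-Hessian approximation G,
    built only from gradient differences, and step along -G @ grad."""
    w = np.asarray(w0, dtype=float)
    n = w.size
    G = np.eye(n)                         # initial inverse-Hessian approximation
    g = grad_f(w)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -G @ g                        # quasi-Newton search direction
        eta = 1.0                         # backtracking line search
        while f(w + eta * d) > f(w) + 1e-4 * eta * (g @ d):
            eta *= 0.5
        s = eta * d
        g_new = grad_f(w + s)
        u = g_new - g                     # gradient difference
        if u @ s > 1e-12:                 # curvature condition: BFGS update of G
            rho = 1.0 / (u @ s)
            I = np.eye(n)
            G = (I - rho * np.outer(s, u)) @ G @ (I - rho * np.outer(u, s)) \
                + rho * np.outer(s, s)
        w, g = w + s, g_new
    return w

# Convex toy loss with minimum at [1, 2]
f = lambda w: (w[0] - 1.0) ** 2 + 2.0 * (w[1] - 2.0) ** 2
grad = lambda w: np.array([2.0 * (w[0] - 1.0), 4.0 * (w[1] - 2.0)])
w_min = quasi_newton(f, grad, np.zeros(2))
```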

4.1.4. Levenberg–Marquardt Algorithm (ANN-LM)

The Levenberg–Marquardt (LM) algorithm, also called the damped least squares method, is used to solve nonlinear least squares problems. Instead of computing the exact Hessian matrix, this algorithm works with the gradient vector and the Jacobian matrix. The loss function is expressed as a sum of squared errors:

$$f = \sum_{k=1}^{a} u_k^2,$$

where $a$ is the number of instances in the dataset and $u$ is the vector of all error terms. The Jacobian matrix of the loss function is defined as

$$A_{ij} = \frac{\partial u_i}{\partial w_j}, \quad i = 1, \ldots, a,\; j = 1, \ldots, b,$$

where $b$ is the number of parameters in the neural network and $A$ is the Jacobian matrix, of size $[a, b]$. The gradient vector of the loss function is calculated as

$$\nabla f = 2 A^{T} u.$$

The Hessian matrix is approximately computed by

$$B \approx 2 A^{T} A + \beta I,$$

where $B$ is the Hessian approximation, $\beta$ is a damping factor that ensures the positive definiteness of $B$, and $I$ is the identity matrix. A large value of $\beta$ is chosen in the first step. If an iteration increases the loss, $\beta$ is increased by some factor; on the contrary, if the loss decreases, $\beta$ is decreased, so that the Levenberg–Marquardt algorithm approaches the Newton method. Finally, the parameters' improvement process using the Levenberg–Marquardt algorithm is defined as

$$w_{i+1} = w_i - \left(2 A_i^{T} A_i + \beta_i I\right)^{-1} \left(2 A_i^{T} u_i\right), \quad i = 0, 1, \ldots$$

The diagram of the ANN-LM training algorithm is shown in Figure 4.
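The damped update and the adaptive damping of β can be sketched on a toy linear least squares problem. The halving/doubling schedule and all names below are common illustrative choices, not necessarily the paper's exact settings.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, w0, beta=1e-2, max_iter=100, tol=1e-10):
    """LM sketch for f(w) = sum(u(w)^2): damped Hessian 2*A^T A + beta*I,
    with beta decreased after a successful step and increased otherwise."""
    w = np.asarray(w0, dtype=float)
    loss = lambda v: float(np.sum(residual(v) ** 2))
    for _ in range(max_iter):
        u = residual(w)
        A = jacobian(w)
        g = 2.0 * A.T @ u                          # gradient vector
        if np.linalg.norm(g) < tol:
            break
        B = 2.0 * A.T @ A + beta * np.eye(w.size)  # damped Hessian approximation
        step = np.linalg.solve(B, g)
        if loss(w - step) < loss(w):
            w = w - step
            beta *= 0.5    # success: move toward the (Gauss-)Newton regime
        else:
            beta *= 2.0    # failure: more damping, gradient-descent-like step
    return w

# Toy linear least squares u(w) = X @ w - t with exact solution w = [2, -1]
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
t = X @ np.array([2.0, -1.0])
w_fit = levenberg_marquardt(lambda w: X @ w - t, lambda w: X, np.zeros(2))
```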

4.2. Validation of Models

To evaluate the performance of the machine learning models, four indexes are used in this investigation, namely, root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and correlation coefficient (R). The RMSE evaluates the difference between the actual and predicted values; the MAE gives the average absolute error between the actual and predicted values; the MAPE is the average of the absolute differences between the actual and predicted values divided by the actual values. The lower the RMSE, MAE, and MAPE values, the higher the accuracy and the better the performance of the model. On the contrary, higher R values mean higher model performance: R varies in the range from −1 to 1, values close to 0 indicate poor performance, and values close to 1 indicate good accuracy. The four criteria are defined by the following formulas:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(Q_{\mathrm{AV},k} - Q_{\mathrm{PV},k}\right)^2},$$

$$\mathrm{MAE} = \frac{1}{N}\sum_{k=1}^{N}\left|Q_{\mathrm{AV},k} - Q_{\mathrm{PV},k}\right|,$$

$$\mathrm{MAPE} = \frac{100}{N}\sum_{k=1}^{N}\left|\frac{Q_{\mathrm{AV},k} - Q_{\mathrm{PV},k}}{Q_{\mathrm{AV},k}}\right|,$$

$$R = \frac{\sum_{k=1}^{N}\left(Q_{\mathrm{AV},k} - \bar{Q}_{\mathrm{AV}}\right)\left(Q_{\mathrm{PV},k} - \bar{Q}_{\mathrm{PV}}\right)}{\sqrt{\sum_{k=1}^{N}\left(Q_{\mathrm{AV},k} - \bar{Q}_{\mathrm{AV}}\right)^2 \sum_{k=1}^{N}\left(Q_{\mathrm{PV},k} - \bar{Q}_{\mathrm{PV}}\right)^2}},$$

where $Q_{\mathrm{AV}}$ and $\bar{Q}_{\mathrm{AV}}$ are the actual and average actual values, $Q_{\mathrm{PV}}$ and $\bar{Q}_{\mathrm{PV}}$ are the predicted and average predicted values, and $N$ is the number of samples.
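The four criteria can be computed directly; the short NumPy sketch below uses illustrative values, not data from the paper's database.

```python
import numpy as np

def evaluate(actual, predicted):
    """Compute the four performance criteria used in the study."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    rmse = float(np.sqrt(np.mean((a - p) ** 2)))
    mae = float(np.mean(np.abs(a - p)))
    mape = float(100.0 * np.mean(np.abs((a - p) / a)))  # in percent
    r = float(np.corrcoef(a, p)[0, 1])                  # Pearson correlation
    return rmse, mae, mape, r

# Illustrative shear-strength values (kN)
rmse, mae, mape, r = evaluate([100.0, 200.0, 400.0], [110.0, 190.0, 400.0])
```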

5. Methodology Flowchart

In this study, the flowchart of the proposed methodology includes the following steps:
(a) Data collection: this is the first step, in which the dataset is built by gathering data from the available literature. All data are randomly divided into 2 parts, training data and testing data, in which the training part accounts for 70% of the dataset and the testing part accounts for the remaining 30%.
(b) Building models: in this step, the data of the training part are used to train the models based on the training algorithms, namely, gradient descent, conjugate gradient, quasi-Newton, and Levenberg–Marquardt.
(c) Model validation: in this final step, the data of the testing part are applied to validate the proposed models. Statistical indicators including RMSE, MAE, MAPE, and R are utilized to evaluate the models.

A schematic diagram of the methodology is illustrated in Figure 5.

6. Results and Discussion

The definition of the ANN structure is critical in solving problems [49, 50]. When the numbers of inputs and outputs are fixed, the performance of the ANN model depends on the number of hidden layers and the number of neurons in each hidden layer. Cybenko [51] and Bound [52] succeeded in using a single-hidden-layer model in classifying the input variables for model processing. Besides, some studies [53–55] have shown that an ANN model with only one hidden layer can be enough to successfully capture a complex nonlinear relationship between inputs and output. Therefore, one hidden layer is proposed for the structure of the ANN model in this investigation. Moreover, semiempirical relationships proposed by Nagendra [56] and Tamura [57] and some investigations [58–60] have recommended that the number of neurons in the hidden layer be equal to the total number of inputs and outputs. In the current database, the numbers of inputs and outputs representing the deep beams' shear strength are 9 and 1, respectively; therefore, 10 neurons are proposed for the hidden layer. The sigmoid activation function is selected for the hidden layer, while the activation function of the output layer is a linear function. The mean square error is chosen as the cost function. To account for the random sampling effect, 300 simulations are performed to obtain reliable results.
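The fixed 9-10-1 structure with a sigmoid hidden layer, linear output, and MSE cost can be sketched as a plain forward pass. The random weights and stand-in data below are for illustration only; the trained weights are not published in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 9-10-1 structure: 9 inputs, 10 sigmoid hidden neurons, 1 linear output
W1, b1 = rng.normal(size=(9, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    """Forward pass: sigmoid hidden layer, then linear output layer."""
    return sigmoid(X @ W1 + b1) @ W2 + b2

def mse_cost(X, y):
    """Mean square error cost, as selected in the study."""
    return float(np.mean((forward(X).ravel() - y) ** 2))

# Normalized stand-in data: 74 training samples with 9 inputs each
X = rng.random((74, 9))
y = rng.random(74)
cost = mse_cost(X, y)
```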

The main purpose of this work is to investigate the performance of four ANN models in predicting the shear strength of deep beams, trained by the four algorithms, namely, Levenberg–Marquardt (ANN-LM), quasi-Newton method (ANN-QN), conjugate gradient (ANN-CG), and gradient descent (ANN-GD). The training process is repeated until the network output error reaches an acceptable value (less than the initially specified error threshold). In this study, the network training is performed with various epoch numbers, ranging from 100 to 1000 with a step of 100. Finally, Table 2 summarizes the characteristics of the ANN models proposed in this study.


Category | Parameter | Description

Fixed | Neurons in input layer | 9
Fixed | Neurons in output layer | 1
Fixed | Hidden layer activation function | Sigmoid
Fixed | Output layer activation function | Linear
Fixed | Cost function | Mean square error (MSE)
Fixed | Number of hidden layers | 1
Fixed | Neurons in hidden layer | 10
Fixed | Number of simulations | 300

Investigated | Training algorithms | Levenberg–Marquardt (ANN-LM), Conjugate gradient (ANN-CG), Quasi-Newton method (ANN-QN), Gradient descent (ANN-GD)
Investigated | Number of epochs | Varying from 100 to 1000 with a step of 100

6.1. Comparison of ANN Models’ Prediction Capability

The results of the network training by the different algorithms are evaluated by the values of the criteria R, RMSE, MAE, and MAPE. Figures 6(a)–6(d) show the mean and std values of R, RMSE, MAE, and MAPE as functions of the number of epochs for the testing part of the ANN-LM algorithm. Similarly, Figures 7(a)–7(d) show the mean and std values of R, RMSE, MAE, and MAPE as functions of the number of epochs for the testing parts obtained using the ANN-QN, ANN-CG, and ANN-GD algorithms. For the ANN-LM model, the mean value of R decreases with a higher number of epochs, while the mean and std values of RMSE, MAE, and MAPE increase. This behavior shows that the accuracy of the ANN-LM model is highly affected by the number of epochs: with more epochs, its accuracy decreases. In other words, the ANN-LM model reaches its best results with a small number of epochs, i.e., at high training speed, and the same conclusion can be drawn for the ANN-CG model. However, the opposite is found for the ANN-QN and ANN-GD models, for which the values of R increase and those of RMSE, MAE, and MAPE decrease with a higher number of epochs. Thus, the accuracy of the ANN-QN and ANN-GD models increases with a higher number of epochs.

Moreover, Table 3 details the mean and std values of R, RMSE, MAE, and MAPE of the four models for epoch numbers varying from 100 to 1000 with a step of 100. The accuracy of the ANN-LM model is very low: its maximum mean value of R is only 0.747, at 100 epochs. For the ANN-CG model, the highest R value, 0.971, is also obtained at 100 epochs; therefore, the optimal ANN-LM and ANN-CG models are obtained at 100 epochs. In contrast, for the ANN-QN and ANN-GD algorithms, the highest R values are R = 0.961 and R = 0.969, respectively, reached at high epoch numbers. Besides, the std values of the three criteria RMSE, MAE, and MAPE of the ANN-LM model are the highest among the models, showing that the ANN-LM model has the lowest accuracy of the 4 models.


Epochs | 100 | 200 | 300 | 400 | 500 | 600 | 700 | 800 | 900 | 1000

Mean (R)
LM | 0.747 | 0.697 | 0.644 | 0.603 | 0.565 | 0.535 | 0.488 | 0.479 | 0.476 | 0.441
QN | 0.918 | 0.940 | 0.949 | 0.955 | 0.959 | 0.955 | 0.959 | 0.960 | 0.961 | 0.958
CG | 0.971 | 0.965 | 0.961 | 0.953 | 0.952 | 0.935 | 0.934 | 0.930 | 0.917 | 0.911
GD | 0.924 | 0.955 | 0.961 | 0.963 | 0.964 | 0.965 | 0.965 | 0.965 | 0.969 | 0.966

Std (R)
LM | 0.207 | 0.197 | 0.216 | 0.239 | 0.259 | 0.276 | 0.266 | 0.292 | 0.278 | 0.283
QN | 0.055 | 0.049 | 0.030 | 0.030 | 0.027 | 0.041 | 0.041 | 0.030 | 0.032 | 0.035
CG | 0.020 | 0.031 | 0.043 | 0.043 | 0.043 | 0.064 | 0.062 | 0.074 | 0.079 | 0.088
GD | 0.052 | 0.040 | 0.025 | 0.026 | 0.028 | 0.025 | 0.033 | 0.031 | 0.033 | 0.032

Mean (RMSE)
LM | 110.43 | 128.02 | 161.42 | 185.82 | 207.44 | 221.53 | 246.05 | 292.03 | 305.51 | 338.32
QN | 48.24 | 41.48 | 38.20 | 35.99 | 34.29 | 34.71 | 33.08 | 33.13 | 32.80 | 33.12
CG | 28.07 | 29.76 | 32.38 | 35.54 | 37.34 | 42.46 | 42.85 | 44.37 | 48.90 | 52.36
GD | 45.81 | 35.08 | 33.24 | 32.63 | 30.54 | 30.41 | 30.22 | 30.86 | 29.60 | 29.44

Std (RMSE)
LM | 79.10 | 69.85 | 109.87 | 129.88 | 141.54 | 150.88 | 162.70 | 250.70 | 302.82 | 296.92
QN | 13.72 | 14.69 | 10.64 | 9.81 | 10.26 | 14.38 | 9.19 | 9.70 | 8.62 | 10.98
CG | 8.44 | 9.98 | 11.03 | 11.93 | 14.19 | 19.98 | 16.45 | 22.47 | 22.53 | 23.66
GD | 13.94 | 12.16 | 8.93 | 8.97 | 8.15 | 8.61 | 10.08 | 11.02 | 11.65 | 9.01

Mean (MAE)
LM | 61.02 | 71.41 | 88.19 | 100.41 | 113.37 | 122.23 | 135.83 | 158.97 | 167.94 | 184.97
QN | 33.94 | 28.72 | 26.51 | 25.16 | 23.92 | 23.95 | 23.15 | 23.14 | 22.56 | 22.79
CG | 19.26 | 20.02 | 21.06 | 22.86 | 24.07 | 26.13 | 26.49 | 27.22 | 29.32 | 31.44
GD | 32.02 | 24.37 | 23.14 | 22.78 | 21.14 | 20.94 | 20.72 | 21.02 | 20.30 | 20.28

Std (MAE)
LM | 32.37 | 33.30 | 50.19 | 64.17 | 71.34 | 78.82 | 87.98 | 127.07 | 144.57 | 156.18
QN | 8.70 | 8.39 | 6.45 | 5.97 | 6.13 | 7.58 | 5.64 | 5.90 | 4.89 | 6.35
CG | 4.75 | 5.56 | 5.74 | 6.23 | 7.59 | 9.59 | 8.21 | 10.36 | 10.62 | 11.80
GD | 8.69 | 6.47 | 5.57 | 5.28 | 4.66 | 4.82 | 5.56 | 5.94 | 6.04 | 5.18

Mean (MAPE)
LM | 36.62 | 43.71 | 54.01 | 64.17 | 70.05 | 75.47 | 84.91 | 102.03 | 105.79 | 116.68
QN | 19.17 | 15.87 | 14.83 | 13.94 | 13.30 | 13.49 | 13.06 | 12.91 | 12.78 | 13.02
CG | 11.05 | 11.57 | 12.15 | 13.35 | 13.81 | 15.04 | 15.46 | 15.84 | 16.67 | 18.02
GD | 17.48 | 13.58 | 13.01 | 12.80 | 12.02 | 11.79 | 11.58 | 11.85 | 11.35 | 11.35

Std (MAPE)
LM | 21.34 | 24.06 | 34.86 | 45.16 | 48.65 | 55.14 | 57.45 | 88.24 | 94.75 | 92.62
QN | 6.07 | 5.14 | 4.22 | 3.91 | 3.65 | 5.99 | 3.87 | 4.16 | 3.81 | 4.49
CG | 3.45 | 4.23 | 4.43 | 5.04 | 5.60 | 6.05 | 6.03 | 7.44 | 7.17 | 7.92
GD | 4.89 | 4.44 | 3.85 | 3.64 | 3.07 | 3.57 | 3.49 | 3.98 | 3.95 | 3.50

Next, the values of the criteria RMSE, MAE, and MAPE of the three remaining models are compared. For the ANN-CG model, these criteria are lowest at 100 epochs, compared with the lowest values of the ANN-QN model at 900 epochs and of the ANN-GD model at 1000 epochs. Through this evaluation, the ANN-CG model is found to achieve the best accuracy with the fewest epochs. For large numbers of epochs, the ANN-GD model is superior to the ANN-QN model. Therefore, a reliability evaluation of these three models is performed in the following sections, and the ANN-LM model, having the lowest accuracy for shear strength prediction, is not considered in the subsequent investigation.

6.2. Reliability Evaluation of the Best ANN Training Algorithms

The main purpose of this section is to evaluate the reliability of the three models, namely, the optimal ANN-CG at 100 epochs, the optimal ANN-QN at 900 epochs, and the optimal ANN-GD at 1000 epochs. Figure 8 shows the distribution of the probability density function (PDF) of the four statistical criteria for the training part, namely, R (Figure 8(a)), RMSE (Figure 8(b)), MAE (Figure 8(c)), and MAPE (Figure 8(d)), over the 300 simulations performed using the mentioned ANN structures. Meanwhile, Figures 9(a)–9(d) show the corresponding PDF distributions for the testing part. It can be observed that the PDF curves of the four statistical criteria for the training and testing parts of the ANN-CG model are the narrowest, and the best values of the four criteria are better than those of the two other algorithms.
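The Monte Carlo reliability procedure (repeat the random resampling and training, then summarize the distribution of each criterion) can be sketched as follows. The stand-in scorer below is purely illustrative; a real run would resplit the database 70/30, train the ANN, and return a criterion such as the testing-set R.

```python
import numpy as np

def monte_carlo_metric(train_and_score, n_runs=300, seed=0):
    """Repeat the random-resampling + training experiment n_runs times and
    summarize one criterion by its mean and standard deviation."""
    rng = np.random.default_rng(seed)
    scores = np.array([train_and_score(rng) for _ in range(n_runs)])
    return float(scores.mean()), float(scores.std())

# Hypothetical scorer standing in for "split, train ANN, return testing R"
toy_scorer = lambda rng: 0.97 + 0.02 * rng.standard_normal()
mean_r, std_r = monte_carlo_metric(toy_scorer)
```

The collected scores are also what the PDF curves in Figures 8 and 9 are estimated from.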

Simultaneously, Table 4 presents in detail the values of the four statistical criteria (maximum, minimum, average, and standard deviation) according to the three proposed ANN models. Considering the testing part, the values of average and standard deviation for the case of R are 0.961 and 0.032 for ANN-QN, 0.971 and 0.02 for ANN-CG, and 0.966 and 0.032 for ANN-GD, respectively. For RMSE, these are 32.8 and 8.62 for ANN-QN, 28.07 and 8.44 for ANN-CG, and 29.44 and 9.01 for ANN-GD. In the case of MAE, the average and standard deviation are 22.56 and 4.89 for ANN-QN, 19.26 and 4.75 for ANN-CG, and 20.28 and 5.18 for ANN-GD. Finally, these values of MAPE are 12.78 and 3.81, respectively, for ANN-QN, 11.05 and 3.45 for ANN-CG, and 11.35 and 3.5 for ANN-GD. Thus, in terms of the average value, the ANN-CG model outperforms the other two models, with the average value of R being the highest. Meanwhile, the mean values of RMSE, MAE, and MAPE are the lowest. More importantly, the standard deviation values obtained from the ANN-CG model are also the lowest, which shows that ANN-CG is the most stable and reliable method. The results show that the ANN-CG is the most reliable training algorithm for predicting the shear strength of the deep beam. Therefore, the ANN-CG model is chosen to predict the shear strength of the deep beam in the next section.


Table 4: Minimum, average, maximum, and standard deviation of the four statistical criteria over 300 simulations for the three ANN models.

Criteria     ANN-QN              ANN-CG              ANN-GD
             Train     Test      Train     Test      Train     Test

R
  Min        0.976     0.705     0.984     0.854     0.954     0.764
  Average    0.989     0.961     0.994     0.971     0.993     0.966
  Max        0.995     0.993     0.997     0.993     0.997     0.994
  Std        0.003     0.032     0.002     0.020     0.003     0.032

RMSE
  Min        12.28     14.51     9.37      14.02     9.20      14.15
  Average    17.49     32.80     12.86     28.07     13.76     29.44
  Max        24.73     71.99     18.06     76.57     28.84     66.07
  Std        2.23      8.62      1.51      8.44      2.04      9.01

MAE
  Min        9.18      11.07     7.28      10.60     7.21      11.25
  Average    13.01     22.56     9.68      19.26     10.23     20.28
  Max        17.52     41.24     13.84     38.72     18.80     41.54
  Std        1.61      4.89      1.17      4.75      1.40      5.18

MAPE
  Min        5.03      6.00      3.72      5.79      4.02      5.97
  Average    7.02      12.78     5.29      11.05     5.52      11.35
  Max        9.61      28.26     7.70      29.70     10.33     26.58
  Std        0.92      3.81      0.69      3.45      0.79      3.50

6.3. Prediction of Beam Shear Strength Using the Best ANN Model

In this section, the capability of the ANN-CG model to predict the shear strength of deep beams is investigated. The best model is selected based on the values of the four statistical criteria used in this study. Accordingly, four cases of performance evaluation are considered: (1) maximum value of R, (2) minimum value of RMSE, (3) minimum value of MAE, and (4) minimum value of MAPE. For each case, the values of the criteria, as well as the mean and standard deviation of the prediction error, are detailed in Table 5.


Table 5: Error criteria of the ANN-CG model for the four performance evaluation cases.

                RMSE     MAE      Err. Mean   Err. Std   R       MAPE

Case 1: Max (R)
  Training set  12.53    9.82     0.16        12.61      0.993   5.79
  Testing set   17.92    13.30    0.77        18.19      0.993   6.47

Case 2: Min (RMSE)
  Training set  14.73    10.90    −0.45       14.82      0.993   5.89
  Testing set   14.02    11.24    0.12        14.24      0.992   6.84

Case 3: Min (MAE)
  Training set  13.02    9.88     0.17        13.11      0.995   5.20
  Testing set   15.49    10.60    −2.47       15.54      0.981   6.92

Case 4: Min (MAPE)
  Training set  13.48    10.20    0.10        13.57      0.992   6.00
  Testing set   26.49    15.41    −3.66       26.65      0.990   5.79

In the first case, the maximum value of R is 0.993 for both the training and testing parts. For the second case, the minimum RMSE value is 14.73 for the training part and 14.02 for the testing part. The minimum value of MAE is considered in Case 3, with MAE = 9.88 for the training part and MAE = 10.60 for the testing part. The last case yields minimum MAPE values of 6.00 and 5.79 for the training and testing parts, respectively.

Analyzing the results presented in Table 5, the prediction performance is evaluated through the four criteria for the testing part. The R values of Cases 1, 3, and 4 differ only slightly. Likewise, the difference between the minimum MAPE of Case 4 and the MAPE values of Cases 1, 2, and 3 is relatively small, especially when compared with the spread in RMSE and MAE values. Cases 2 and 3 therefore exhibit higher performance than the other two cases. However, Case 3 possesses a higher RMSE than Case 2, so Case 2 provides the best shear strength prediction performance.
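The four selection rules above can be expressed directly. In this hypothetical sketch, the `runs` array simply echoes the testing-part values of Table 5 for illustration; each rule is applied to a pool of trained networks and may single out a different one:

```python
import numpy as np

# Hypothetical pool of trained networks; each row holds the testing-part
# criteria (R, RMSE, MAE, MAPE) of one network (values echo Table 5).
runs = np.array([
    [0.993, 17.92, 13.30, 6.47],
    [0.992, 14.02, 11.24, 6.84],
    [0.981, 15.49, 10.60, 6.92],
    [0.990, 26.49, 15.41, 5.79],
])

best = {
    "Case 1: max R":    int(np.argmax(runs[:, 0])),
    "Case 2: min RMSE": int(np.argmin(runs[:, 1])),
    "Case 3: min MAE":  int(np.argmin(runs[:, 2])),
    "Case 4: min MAPE": int(np.argmin(runs[:, 3])),
}
# Each rule picks a different network here, which is why the four cases
# must be compared before retaining one final model.
```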

The error diagrams of the training and testing parts of the ANN-CG model are presented in Figures 10(a) and 10(b). According to the results, only 2 samples of the training part have errors outside the range from −30 to 40 kN. The error of the testing part is lower than that of the training part. In addition, the cumulative curves (red lines) show that 80% of the errors lie within −20 to 20 kN for the training part, whereas about 90% of the errors lie within −15 to 20 kN for the testing part.
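The cumulative percentages quoted above can be checked with a small helper; this is a sketch with an illustrative error sample, not the paper's data:

```python
import numpy as np

def share_within(errors, lo, hi):
    """Fraction of prediction errors (kN) falling inside [lo, hi]."""
    e = np.asarray(errors, float)
    return float(np.mean((e >= lo) & (e <= hi)))
```

Applied to the training-part errors with `lo=-20, hi=20`, a value near 0.80 would match the 80% reported above.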

Figures 11(a) and 11(b) show regression plots of the correlation between the actual and predicted shear strength values for the training and testing parts, respectively, together with a linear fit in each case. The value of R is 0.993 for the training part and 0.992 for the testing part. The corresponding values of RMSE, MAE, and MAPE are given in Table 5.

Finally, the results of this investigation are compared with previously published results obtained with other predictive methods, as summarized in Table 6. The artificial neural network-conjugate gradient (ANN-CG) model of this study achieves the best shear strength prediction performance, with the highest value of R, the lowest value of MAE, and almost the lowest values of RMSE and MAPE. More importantly, among the four algorithms proposed in this study, ANN-CG is the best predictor both in accuracy and in computation time (best performance at only 100 iterations). Furthermore, it demands less computation memory and cost than the other algorithms, implying that predicting deep beam shear strength with the ANN-CG algorithm does not require a high-performance computer. Hybrid ML algorithms usually require longer computation times than standalone ones; given the prediction accuracy achieved in this study, developing a hybrid approach appears unnecessary. Overall, these results confirm the effectiveness of the proposed ANN-CG model, suggesting a promising and useful alternative design solution for structural engineers. For practical applications, the final weight and bias values of the best ANN-CG model are given in Table 7 and could be used to develop a supporting numerical tool for estimating the shear strength of deep beams.


Table 6: Comparison of the present ANN-CG model with previously published models (— = not reported).

Model                                                                R        RMSE     MAE     MAPE
Genetic-simulated annealing (GSA) [4]                                0.929    —        —       12.3
Backpropagation neural network (BPNN) [42]                           0.916    34.032   —       11.273
Radial basis function neural network (RBFNN) [42]                    0.9767   20.29    —       7.63
Artificial neural network (ANN) [43]                                 0.9711   42.27    30.28   —
Gene expression programming (GEP) [43]                               0.9654   51.57    40.99   —
Support vector machine (SVM) [42]                                    0.9465   30.134   —       14.435
Evolutionary multivariate adaptive regression splines (EMARS) [42]   0.986    13.011   —       5.887
Smart artificial firefly colony algorithm and least squares support vector regression (SFA LS-SVR) [24]   0.941   —   —   8.87
Adaptive neural fuzzy inference system (ANFIS) [44]                  0.984    34.76    25.24   —
Artificial neural network-conjugate gradient (ANN-CG) of this study  0.992    14.02    11.24   6.84


Table 7: Final weight and bias values of the best ANN-CG model.

Neuron   IW (10 × 9)                                                             LW (1 × 10)   BL (1 × 10)
1       −0.576  −0.075  −0.679   0.078  −0.113  −0.950   1.181  −0.257   0.343    0.410         1.589
2        0.367   0.052   0.518  −0.424  −0.739   0.350   0.784   0.819  −0.877   −0.285        −1.485
3        1.281  −0.106  −0.286   0.648  −0.339   0.771  −0.897   0.585  −0.540    0.645        −0.663
4        0.278   0.060  −1.788   0.298  −0.343  −0.402   0.014   0.061   0.887    0.976        −0.924
5       −0.469  −0.099   0.137  −0.538   1.265  −0.285   0.417  −0.103  −1.010    0.307         0.384
6       −0.036  −0.891   0.371  −0.054   0.743   0.517  −0.866  −0.274   0.671   −0.083        −0.067
7       −0.440   0.312  −1.232  −0.598  −0.621  −1.295   0.010  −0.475  −0.229    0.019        −0.601
8       −0.290  −0.348  −0.482   0.505   1.006  −0.248  −0.507  −0.133   0.930    0.234        −1.274
9        0.395   0.257  −0.233  −0.362   0.718  −0.166   0.734  −0.552  −1.248    0.471         1.490
10      −0.311   0.658   0.839  −0.248   0.276   0.947   0.875   0.519  −0.070    0.398        −1.798

BO (1 × 1): −0.390

Note. IW = the input to hidden layer weights (one row per hidden neuron); LW = the hidden layer to output layer weights; BL = the hidden layer biases; BO = the output layer bias.
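For practical use of Table 7, the 9-10-1 network reduces to a single matrix expression. The sketch below assumes a tanh hidden activation and a linear output neuron (a common choice, but not stated in this excerpt), and that the inputs and output are scaled exactly as during training; the function name is illustrative.

```python
import numpy as np

def ann_cg_forward(x, IW, BL, LW, BO):
    """One forward pass of the 9-10-1 ANN-CG network:
    y = LW . tanh(IW @ x + BL) + BO, using the values of Table 7.
    x must be a scaled 9-component input vector."""
    hidden = np.tanh(IW @ x + BL)    # outputs of the 10 hidden neurons
    return float(LW @ hidden + BO)   # scaled shear strength estimate
```

Here IW is the 10 × 9 input-weight matrix (one row per hidden neuron), BL holds the 10 hidden-layer biases, LW holds the 10 output weights, and BO = −0.390.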

7. Conclusion

In this study, an artificial neural network (ANN) model is proposed to predict the shear strength of deep beams. For this purpose, a database of 106 results from shear tests of RC deep beams is built from the available literature. The ANN model uses 9 input parameters divided into two groups, namely, the geometric parameters and the parameters representing the material properties. Four ANN training algorithms are explored, namely, Levenberg–Marquardt (ANN-LM), quasi-Newton (ANN-QN), conjugate gradient (ANN-CG), and gradient descent (ANN-GD), and their prediction performance is compared. Four statistical criteria, namely, the correlation coefficient (R), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), are introduced to validate and evaluate the performance of the ANN models. The conjugate gradient (CG) algorithm proves to be the best ANN training algorithm for predicting the shear strength of deep beams. With the ANN-CG model chosen as best, four cases corresponding to four model-selection criteria are studied. The results show that the crucial information for choosing an accurate machine learning model might lie in the criterion of the smallest RMSE. Besides, the analysis of the error between predicted and actual shear strength shows that the ANN model can be a promising numerical tool that could considerably reduce time-consuming and costly experimental procedures. Despite an extensive investigation of different training algorithms and epochs, this study considers only one ANN architecture. Therefore, notwithstanding the high prediction accuracy achieved, a further investigation of the number of neurons and hidden layers could possibly enhance the performance of the ANN-CG model, or further decrease the computation time by reducing the number of neurons in the hidden layer.

Data Availability

The data used to support the findings of the study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Bureau of Indian Standards, Concrete, Plain and Reinforced, Bureau of Indian Standards, New Delhi, India, 2000.
  2. ACI Committee 318, Building Code Requirements for Structural Concrete (ACI 318-14), American Concrete Institute, 2014.
  3. British Standards Institution, Eurocode 2: Design of Concrete Structures: Part 1-1: General Rules and Rules for Buildings, British Standards Institution, London, UK, 2004.
  4. A. H. Gandomi, A. H. Alavi, D. M. Shadmehri, and M. G. Sahab, “An empirical model for shear capacity of RC deep beams using genetic-simulated annealing,” Archives of Civil and Mechanical Engineering, vol. 13, no. 3, pp. 354–369, 2013.
  5. G. Harsha and P. Raju, “Shear strength of deep beams: a state of the art,” April 2019.
  6. K. Ismail, Shear Behaviour of Reinforced Concrete Deep Beams, 2016.
  7. D. B. Birrcher, Design of Reinforced Concrete Deep Beams for Strength and Serviceability, Ph.D. thesis, The University of Texas at Austin, Austin, TX, USA, 2009.
  8. A. Arabzadeh, R. Aghayari, and A. R. Rahai, “Investigation of experimental and analytical shear strength of reinforced concrete deep beams,” International Journal of Civil Engineering, vol. 9, no. 3, pp. 207–214, 2011.
  9. M. Shahnewaz, A. Rteil, and M. S. Alam, “Shear strength of reinforced concrete deep beams - a review with improved model by genetic algorithm and reliability analysis,” Structures, vol. 23, pp. 494–508, 2020.
  10. M. Tasleema, M. A. Kumar, and J. L. Raj, “Evaluation of shear strength of deep beams using artificial neural networks,” International Journal of Recent Technology and Engineering, vol. 7, no. 6C2, pp. 341–345, 2019.
  11. K.-H. Yang, H.-S. Chung, E.-T. Lee, and H.-C. Eun, “Shear characteristics of high-strength concrete deep beams without shear reinforcements,” Engineering Structures, vol. 25, no. 10, pp. 1343–1352, 2003.
  12. T. S. Tan and L. W. Weng, “A strut-and-tie model for deep beams subjected to combined top-and-bottom loading,” Structural Engineer, vol. 75, no. 13, 1997.
  13. J. Park and D. Kuchma, “Strut-and-tie model analysis for strength prediction of deep beams,” ACI Structural Journal, vol. 104, pp. 657–666, 2007.
  14. C. Y. Tang and K. H. Tan, “Interactive mechanical model for shear strength of deep beams,” Journal of Structural Engineering, vol. 130, no. 10, pp. 1534–1544, 2004.
  15. M. Lezgy-Nazargah, “A four-variable global-local shear deformation theory for the analysis of deep curved laminated composite beams,” Acta Mechanica, vol. 231, no. 4, pp. 1403–1434, 2020.
  16. F. J. Vecchio and M. P. Collins, “The modified compression-field theory for reinforced concrete elements subjected to shear,” ACI Journal Proceedings, vol. 83, no. 2, 1986.
  17. F. J. Vecchio and M. P. Collins, “Compression response of cracked reinforced concrete,” Journal of Structural Engineering, vol. 119, pp. 3590–3610, 1993.
  18. S.-J. Hwang and H.-J. Lee, “Strength prediction for discontinuity regions by softened strut-and-tie model,” Journal of Structural Engineering, vol. 128, 2002.
  19. S.-J. Hwang, W.-Y. Lu, and H.-J. Lee, “Shear strength prediction for deep beams,” ACI Structural Journal, vol. 97, pp. 367–376, 2000.
  20. K.-H. Yang and A. F. Ashour, “Strut-and-tie model based on crack band theory for deep beams,” Journal of Structural Engineering, vol. 137, no. 10, pp. 1030–1038, 2011.
  21. Z. Bazant and J. Planas, Fracture and Size Effect in Concrete and Other Quasibrittle Materials, CRC Press, Washington, DC, USA, 2019.
  22. Canadian Standards Association, Design of Concrete Structures, CSA A23.3-04, Canadian Standards Association, Mississauga, ON, Canada, 2004.
  23. M. Pal and S. Deswal, “Support vector regression based shear strength modelling of deep beams,” Computers & Structures, vol. 89, no. 13-14, pp. 1430–1439, 2011.
  24. J.-S. Chou, N.-T. Ngo, and A.-D. Pham, “Shear strength prediction in reinforced concrete deep beams using nature-inspired metaheuristic support vector regression,” Journal of Computing in Civil Engineering, vol. 30, no. 1, Article ID 04015002, 2016.
  25. M. Y. Mansour, M. Dicleli, J. Y. Lee, and J. Zhang, “Predicting the shear strength of reinforced concrete beams using artificial neural networks,” Engineering Structures, vol. 26, no. 6, pp. 781–799, 2004.
  26. J. Amani and R. Moeini, “Prediction of shear strength of reinforced concrete beams using adaptive neuro-fuzzy inference system and artificial neural network,” Scientia Iranica, vol. 19, no. 2, pp. 242–248, 2012.
  27. X.-H. Tan, W.-H. Bi, X.-L. Hou, and W. Wang, “Reliability analysis using radial basis function networks and support vector machines,” Computers and Geotechnics, vol. 38, no. 2, pp. 178–186, 2011.
  28. J.-S. Chou, M.-Y. Cheng, and Y.-W. Wu, “Improving classification accuracy of project dispute resolution using hybrid artificial intelligence and support vector machine models,” Expert Systems with Applications, vol. 40, no. 6, pp. 2263–2274, 2013.
  29. M. D. Nguyen, B. T. Pham, T. T. Tuyen et al., “Development of an artificial intelligence approach for prediction of consolidation coefficient of soft soil: a sensitivity analysis,” The Open Construction and Building Technology Journal, vol. 13, no. 1, pp. 178–188, 2019.
  30. H.-B. Ly and B. T. Pham, “Prediction of shear strength of soil using direct shear test and support vector machine model,” The Open Construction and Building Technology Journal, vol. 14, no. 1, pp. 41–50, 2020.
  31. D. Dao, H.-B. Ly, S. Trinh, T.-T. Le, and B. Pham, “Artificial intelligence approaches for prediction of compressive strength of geopolymer concrete,” Materials, vol. 12, no. 6, p. 983, 2019.
  32. H.-B. Ly, B. Thai Pham, D. Van Dao, V. Minh Le, L. Minh Le, and T.-T. Le, “Improvement of ANFIS model for prediction of compressive strength of manufactured sand concrete,” Applied Sciences, vol. 9, no. 18, p. 3841, 2019.
  33. H.-B. Ly, L. M. Le, H. T. Duong et al., “Hybrid artificial intelligence approaches for predicting critical buckling load of structural members under compression considering the influence of initial geometric imperfections,” Applied Sciences, vol. 9, no. 11, p. 2258, 2019.
  34. Q. H. Nguyen, H.-B. Ly, V. Q. Tran et al., “A novel hybrid model based on a feedforward neural network and one step secant algorithm for prediction of load-bearing capacity of rectangular concrete-filled steel tube columns,” Molecules, vol. 25, no. 15, p. 3486, 2020.
  35. H. Q. Nguyen, H.-B. Ly, V. Q. Tran, T.-A. Nguyen, T.-T. Le, and B. T. Pham, “Optimization of artificial intelligence system by evolutionary algorithm for prediction of axial capacity of rectangular concrete filled steel tubes under compression,” Materials, vol. 13, no. 5, p. 1205, 2020.
  36. A. T. C. Goh, “Prediction of ultimate shear strength of deep beams using neural networks,” ACI Structural Journal, vol. 92, no. 1, 1995.
  37. A. Sanad and M. Saka, “Prediction of ultimate shear strength of reinforced-concrete deep beams using neural networks,” Journal of Structural Engineering, vol. 127, 2001.
  38. M. T. K. Niazi, Arshad, J. Ahmad, F. Alqahtani, F. A. Baotham, and F. Abu-Amara, “Prediction of critical flashover voltage of high voltage insulators leveraging bootstrap neural network,” Electronics, vol. 9, no. 10, p. 1620, 2020.
  39. A. J. Jakeman, R. A. Letcher, and J. P. Norton, “Ten iterative steps in development and evaluation of environmental models,” Environmental Modelling & Software, vol. 21, no. 5, pp. 602–614, 2006.
  40. G. Vazquez Amabile, B. A. Engel, and D. Flanagan, “Modeling and risk analysis of nonpoint-source pollution caused by atrazine using SWAT,” Transactions of the ASABE (American Society of Agricultural and Biological Engineers), vol. 49, pp. 667–678, 2006.
  41. J. C. Bathurst, J. Ewen, G. Parkin, P. E. O'Connell, and J. D. Cooper, “Validation of catchment models for predicting land-use and climate change impacts. 3. Blind validation for internal and outlet responses,” Journal of Hydrology, vol. 287, no. 1–4, pp. 74–94, 2004.
  42. M.-Y. Cheng and M.-T. Cao, “Evolutionary multivariate adaptive regression splines for estimating shear strength in reinforced-concrete deep beams,” Engineering Applications of Artificial Intelligence, vol. 28, pp. 86–96, 2014.
  43. A. H. Gandomi, G. J. Yun, and A. H. Alavi, “An evolutionary approach for modeling of shear strength of RC deep beams,” Materials and Structures, vol. 46, no. 12, pp. 2109–2119, 2013.
  44. A. Khajeh, S. R. Mousavi, and M. Rakhshani Mehr, “Adaptive neural fuzzy inference system models for predicting the shear strength of reinforced concrete deep beams,” Journal of Rehabilitation in Civil Engineering, vol. 3, no. 1, pp. 14–23, 2015.
  45. K. H. Tan, F. K. Kong, S. Teng, and L. Guan, “High-strength concrete deep beams with effective span and shear span variations,” ACI Structural Journal, vol. 92, no. 4, 1995.
  46. K. N. Smith and A. S. Vantsiotis, “Shear strength of deep beams,” ACI Journal Proceedings, vol. 79, no. 3, 1982.
  47. F. K. Kong, P. J. Robins, and D. F. Cole, “Web reinforcement effects on deep beams,” ACI Journal Proceedings, vol. 67, no. 12, 1970.
  48. P. S. Sandhu and S. Chhabra, “A comparative analysis of conjugate gradient algorithms & PSO based neural network approaches for reusability evaluation of procedure based software systems,” Chiang Mai Journal of Science, vol. 38, no. 2, pp. 123–135, 2011.
  49. P. Sarir, J. Chen, P. G. Asteris, D. J. Armaghani, and M. M. Tahir, “Developing GEP tree-based, neuro-swarm, and whale optimization models for evaluation of bearing capacity of concrete-filled steel tube columns,” Engineering with Computers, vol. 37, no. 1, pp. 1–19, 2021.
  50. D. V. Dao, H. Adeli, H.-B. Ly et al., “A sensitivity and robustness analysis of GPR and ANN for high-performance concrete compressive strength prediction using a Monte Carlo simulation,” Sustainability, vol. 12, no. 3, p. 830, 2020.
  51. G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Mathematics of Control, Signals, and Systems, vol. 2, no. 4, pp. 303–314, 1989.
  52. D. G. Bounds, P. J. Lloyd, B. Mathew, and G. Waddell, “A multilayer perceptron network for the diagnosis of low back pain,” in Proceedings of the IEEE 1988 International Conference on Neural Networks, vol. 2, pp. 481–489, San Diego, CA, USA, 1988.
  53. E. T. Mohamad, R. S. Faradonbeh, D. J. Armaghani, M. Monjezi, and M. Z. A. Majid, “An optimized ANN model based on genetic algorithm for predicting ripping production,” Neural Computing and Applications, vol. 28, no. S1, pp. 393–406, 2016.
  54. T. N. Singh, V. K. Singh, and S. Sinha, “Prediction of cadmium removal using an artificial neural network and a neuro-fuzzy technique,” Mine Water and the Environment, vol. 25, no. 4, pp. 214–219, 2006.
  55. B. Gordan, D. Jahed Armaghani, M. Hajihassani, and M. Monjezi, “Prediction of seismic slope stability through combination of particle swarm optimization and neural network,” Engineering with Computers, vol. 32, no. 1, pp. 85–97, 2015.
  56. S. Nagendra, Practical Aspects of Using Neural Networks: Necessary Preliminary Specifications, GE Research and Development Center, New York, NY, USA, 1998, http://citeseerx.ist.psu.edu/viewdoc/citations;jsessionid=0DE4DB7A72E3FD87F294E49E95FEAA2F?doi=10.1.1.
  57. S. Tamura, “Method of determining an optimal number of neurons contained in hidden layers of a neural network,” U.S. Patent, Jan. 21, 1997.
  58. K. G. Sheela and S. N. Deepa, “Review on methods to fix number of hidden neurons in neural networks,” Mathematical Problems in Engineering, vol. 2013, Article ID 425740, 11 pages, 2013.
  59. K. Hornik, “Approximation capabilities of multilayer feedforward networks,” Neural Networks, vol. 4, no. 2, pp. 251–257, 1991.
  60. A. Kuri, “Closed determination of the number of neurons in the hidden layer of a multi-layered perceptron network,” Soft Computing, vol. 21, pp. 1–13, 2016.

Copyright © 2021 Thuy-Anh Nguyen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
