Advances in Materials Science and Engineering

Research Article | Open Access

Hai-Van Thi Mai, Thuy-Anh Nguyen, Hai-Bang Ly, Van Quan Tran, "Investigation of ANN Model Containing One Hidden Layer for Predicting Compressive Strength of Concrete with Blast-Furnace Slag and Fly Ash", Advances in Materials Science and Engineering, vol. 2021, Article ID 5540853, 17 pages, 2021. https://doi.org/10.1155/2021/5540853

Investigation of ANN Model Containing One Hidden Layer for Predicting Compressive Strength of Concrete with Blast-Furnace Slag and Fly Ash

Academic Editor: Luigi Di Sarno
Received: 17 Feb 2021
Revised: 25 May 2021
Accepted: 10 Jun 2021
Published: 18 Jun 2021

Abstract

Accurate prediction of concrete compressive strength is important and challenging, as it aims at reducing costly and time-consuming experiments. Prediction is even more difficult for concrete containing blast-furnace slag (BFS) and fly ash (FA), owing to the complex mix design of such compositions. In this investigation, the artificial neural network (ANN), one of the most powerful machine learning algorithms, is applied to predict the compressive strength of concrete containing BFS and FA. ANN models with one hidden layer and 13 candidate numbers of neurons are proposed to determine the best ANN structure. To account for the effects of random sampling and of the selected network structures, Monte Carlo simulations (MCS) are introduced to statistically investigate the convergence of the results, and the models are evaluated over 100 simulations. The results show that ANN is a highly efficient predictor of the compressive strength of concrete using BFS and FA, with best values of the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE) of 0.9437, 3.9474, and 2.9074, respectively, on the training part and 0.9285, 4.4266, and 3.2971, respectively, on the testing part. The best structure of the ANN is [8-24-1], with 24 neurons in the hidden layer. Partial Dependence Plots (PDP) are also computed to investigate the dependence of the prediction results on the input variables used in the ANN model. The age of samples and the cement content are found to be the two most influential factors affecting the compressive strength of concrete using BFS and FA. The ANN algorithm is thus a practical tool for engineers to reduce costly experiments.

1. Introduction

In view of global sustainable development, supplementary cementitious materials (SCM) need to be used for cement replacement in the concrete industry. The most widely available SCM worldwide are fly ash (FA), a fine powder and a by-product of burning pulverized coal in electric power generation plants, and blast-furnace slag (BFS), a by-product of iron ore processing. In the current context, the developed industries generate a large amount of industrial waste and seriously affect the environment. Amongst the various by-products generated by these industries, FA and BFS are of great interest to concrete researchers. Taking advantage of these materials will contribute to reducing environmental pollution and is also a cost-effective solution for producing concrete. Besides, the use of BFS and fly ash in concrete as a partial cement replacement can significantly improve concrete properties, such as compressive strength and permeability, durability [1–4], and workability [5]. For these reasons, the determination of BFS and FA contents for concrete mix design is essential and meaningful, especially in improving the compressive strength of concrete.

Numerous experimental studies have been conducted to determine the BFS content in concrete mix design. Oner and Akyuz [6] have shown that the BFS content that maximizes strength is about 55–59% of the total binder content. Shariq et al. [7] have studied the effect of BFS content on concrete compressive strength using 20%, 40%, and 60% of BFS and three different water-to-cement (W/C) ratios: the compressive strength of concrete containing 40% BFS is higher than that of concrete containing 20% or 60% BFS for all W/C ratios. Siddique and Kaur [8] have concluded that replacing 20% of cement with BFS is appropriate for structures resistant to high temperature. Tüfekçi and Çakır [9] have shown that the compressive strength of concrete using 60% BFS reached the highest value at 28 days. Besides, Majhi et al. [10] have obtained the highest concrete compressive strength with 40% BFS replacement.

Moreover, many experimental investigations for determining the BFS and FA replacement contents in concrete mix design have been performed. Gehlot [11] has evaluated the compressive strength of concrete containing BFS and FA with different BFS/FA weight ratios, such as 0/0, 10/20, 20/10, 30/0, and 0/30, and found that the higher the BFS/FA weight ratio, the higher the compressive strength of concrete. Li and Zhang [12] also experimented with the compressive strength of concrete using FA and BFS. The accuracy of compressive strength prediction is strongly dependent on the number of experimental tests and on the range of mixture composition contents. Therefore, a new approach needs to be developed to reduce the time and cost associated with a high number of experimental tests; a universal prediction approach with high prediction accuracy is also needed.

In recent years, Artificial Intelligence (AI) has been widely used for modeling many problems in science and engineering [13–17]. AI approaches have been developed to predict different properties of concrete, such as the shear strength of reinforced concrete beams [18, 19], corrosion of concrete sewers [20], crack width of concrete [21], the ultimate strength of reinforced concrete beams [22], strength of recycled aggregate concrete [23], compressive strength of silica fume concrete [24], compressive strength of geopolymer concrete [25], and compressive strength of concrete using BFS [26–31] or FA [32–35]. Among AI algorithms, ANN is currently the most powerful one for simulating complex technical problems [36, 37]. The ANN model is capable of solving complex, nonlinear problems, especially problems in which the relationship between the inputs and outputs cannot easily be established explicitly. As an example, Bilim et al. [30] have used 225 data samples with six input parameters (cement content, ground granulated blast-furnace slag content, water content, superplasticizer, aggregate content, and age of samples) to develop an ANN model predicting the compressive strength of concrete; the best coefficient of determination for this model is R2 = 0.96. Using 204 data samples, Chopra et al. [38] have proposed an ANN model containing one hidden layer with 50 neurons to predict the compressive strength of concrete using BFS and FA; its performance is characterized by a coefficient of determination of 0.92. Yeh [39] has used the highest number of data, with 990 samples, to develop an ANN model for predicting the compressive strength of concrete containing BFS and FA.
In the investigation of Yeh [39], an ANN structure containing one hidden layer with eight neurons is proposed, and the accuracy of the model is relatively high, with a best coefficient of determination of R2 = 0.922. Overall, the performance of an ANN model depends significantly on the database (the number of data samples and the range of the variables) and on the ANN structure, reflected by the number of hidden layers and the number of neurons in each hidden layer [40]. Therefore, determining the number of neurons and hidden layers is crucial for increasing ANN performance.

The primary purpose of this investigation is therefore to propose an efficient ANN model to improve the compressive strength prediction performance for concrete containing BFS and FA, based on a significant amount of data gathered from the literature. The efficiency of the proposed ANN model is established by (i) determining the number of neurons for one hidden layer using empirical formulations proposed in the literature, (ii) investigating the convergence of results for each ANN structure, (iii) evaluating the prediction performance of each model to determine the best ANN structure, and (iv) using the best ANN structure to predict the compressive strength of concrete. The best ANN architecture is evaluated through three statistical measurements, namely, the coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE). Finally, a sensitivity analysis using Partial Dependence Plots (PDP) is performed to evaluate the influence of each input variable on the prediction of compressive strength of concrete containing BFS and FA.

2. Research Significance

Predicting the compressive strength of concrete using supplementary cementitious materials, such as BFS and fly ash, with high accuracy and reliability plays a crucial role in many civil engineering applications. Although this research topic has been the subject of intense research over the past two decades (i.e., Yeh [39], Han et al. [41], Boukhatem et al. [27], Kandiri et al. [28], Boğa et al. [29], Behnood et al. [42], Dao et al. [43], and Bui et al. [44]), there are still problems that need to be dealt with. First of all, the limited number of data points and the narrow ranges of input parameters used to construct ML models put strong constraints on the applicability of these models from an engineering point of view. Second, the reliability of ML models in predicting compressive strength requires a rigorous assessment approach. Third, from a practical point of view, the efficiency of an algorithm, in other words its total computation time, should be considered a most important factor. Therefore, the present work attempts to address the above-mentioned gaps with the following ideas:
(1) To the best of the authors’ knowledge, the second-largest dataset, containing 1274 data points, is used; the collection process is carefully conducted, and duplicate samples are removed from the database.
(2) The prediction performance of different one-hidden-layer ANN architectures is evaluated, using semiempirical formulas suggested in the relevant literature to determine the appropriate number of neurons.
(3) Only single-hidden-layer ANN models are considered and developed, with the aim of promoting simplicity and boosting efficiency.
(4) The reliability of the ANN models is rigorously assessed by Monte Carlo simulations.
(5) The predictability of the best architecture compares favorably with 11 investigations published in the literature, which clearly confirms the simplicity and effectiveness of the proposed ANN model.

3. Database Construction

In this study, 1274 data samples of experiments on the compressive strength of concrete containing BFS and FA are rigorously gathered from four investigations (cf. Table 1), including 10 samples from Pitroda [33], 204 samples from Chopra et al. [38], 990 samples from Yeh [39], and 70 samples from Lee et al. [45]. Different from most of the works published in the literature, 40 duplicate data points from the work of Yeh [39] are filtered out of the original 1030 instances, as they might affect the accuracy and reliability of the prediction results. The database includes eight input variables, namely, the cement content (I1), water content (I2), coarse aggregate or gravel content (I3), fine aggregate or sand content (I4), blast-furnace slag content (I5), fly ash content (I6), superplasticizer content (I7), and age of samples (I8), along with one output variable, the compressive strength (CS) of concrete. Table 1 summarizes the database, including the number of data samples collected from each reference and their proportions.
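The duplicate-filtering step can be sketched as follows; the helper and the example rows (written in the I1–I8 order above) are illustrative stand-ins, not entries from the actual database:

```python
def deduplicate(samples):
    """Drop exact duplicate rows, keeping the first occurrence of each."""
    seen, unique = set(), []
    for row in samples:
        key = tuple(row)
        if key not in seen:
            seen.add(key)
            unique.append(row)
    return unique

# Illustrative mix designs in the (I1, ..., I8) order:
# (cement, water, coarse agg., fine agg., BFS, FA, superplasticizer, age).
rows = [
    (540, 162, 1040, 676, 0, 0, 2.5, 28),
    (540, 162, 1040, 676, 0, 0, 2.5, 28),  # exact duplicate, filtered out
    (332, 192, 978, 825, 142, 0, 0, 90),
]
unique = deduplicate(rows)  # keeps 2 of the 3 rows
```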


Number | Reference | Number of data samples | Percentage (%)

1 | Pitroda [33] | 10 | 0.79
2 | Chopra et al. [38] | 204 | 16.01
3 | Yeh [39] | 990 | 77.71
4 | Lee et al. [45] | 70 | 5.49
5 | Total | 1274 | 100

Figure 1 describes the database distributions. Most of the input variables in the database possess a wide range of values. The cement content ranges from 100 to 610 kg/m3. The water content is mainly in the range of 150–200 kg/m3. The coarse aggregate or gravel content is in the range of 800–1200 kg/m3, with a few values higher than 1200 kg/m3. In contrast, the fine aggregate or sand content is mainly in the range of 400–950 kg/m3. The blast-furnace slag content is distributed in the 0–360 kg/m3 range. The fly ash content varies from 0 to 200 kg/m3, with values mainly in the 60–175 kg/m3 range. The superplasticizer content spans the 0–32 kg/m3 range, but most values lie between 5 and 15 kg/m3. The age of samples takes fifteen discrete values, the minimum being one day and the maximum 365 days. Finally, the concrete compressive strength (fc) lies in the range of 0.5–90 MPa.

The corresponding correlation analysis with fc is displayed in Figure 2. As clearly shown, none of the variables are strongly correlated with one another; the highest linear correlation coefficient, equal to 0.54, is found between input I1 and fc. Therefore, the eight inputs are relatively independent of each other and can be used as input variables for the prediction problem.
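The pairwise linear correlations behind Figure 2 are Pearson coefficients; a minimal sketch of the computation, on toy vectors rather than the actual database:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson linear correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly proportional toy vectors give r = 1.0
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```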

4. Simulation Using Neural Networks

ANN is a powerful machine learning algorithm modeled on biological neural networks; it attempts to simulate the process of knowledge acquisition and inference occurring in the human brain [46]. ANN has been widely used to address nonlinear regression problems. The backpropagation neural network (BPNN), a standard training method for ANN, is often used for regression analysis and practical applications [47]. A backpropagation network combines different layers: the first layer is the input layer, the last layer is the output layer, and the middle layers are hidden layers connected to both the input and output layers. Backpropagation networks typically use a gradient descent algorithm such as the Widrow–Hoff learning rule, in which the weights are moved along the negative of the gradient of the cost function. The term backpropagation refers to the manner in which the gradient is computed for nonlinear multilayer networks. In practice, to train a backpropagation neural network for a particular problem, the following basic steps are usually performed: (a) assembling the training data; (b) building the neural network; (c) training the network; and (d) applying the trained network to new data. The block diagram of the backpropagation network is shown in Figure 3.
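As a rough illustration of steps (a)–(d), the sketch below trains a one-hidden-layer network with a sigmoid hidden layer, a linear output, and an MSE cost on synthetic data. It uses plain gradient descent, not the conjugate gradient training employed in this study, and all shapes and values are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) Toy stand-in data: 8 inputs, 1 output (shapes match the paper's setup;
# the values are synthetic, not the concrete database).
X = rng.normal(size=(200, 8))
y = X @ rng.normal(size=(8, 1)) + 0.1 * rng.normal(size=(200, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# (b) Build the network: one hidden layer of 24 neurons ([8-24-1]).
n_hidden = 24
W1 = rng.normal(scale=0.1, size=(8, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1)); b2 = np.zeros(1)

def forward(X):
    H = sigmoid(X @ W1 + b1)        # sigmoid hidden layer
    return H, H @ W2 + b2           # linear output layer

# (c) Train by backpropagating the MSE gradient.
lr = 0.05
losses = []
for _ in range(500):
    H, P = forward(X)
    err = P - y
    losses.append(float(np.mean(err ** 2)))   # MSE cost
    dP = 2 * err / len(X)                     # gradient of MSE w.r.t. output
    gW2 = H.T @ dP;  gb2 = dP.sum(axis=0)
    dH = dP @ W2.T * H * (1 - H)              # through the sigmoid derivative
    gW1 = X.T @ dH;  gb1 = dH.sum(axis=0)
    W2 -= lr * gW2;  b2 -= lr * gb2
    W1 -= lr * gW1;  b1 -= lr * gb1

# (d) Apply the trained network to (here, the same) data.
_, predictions = forward(X)
```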

4.1. Number of Hidden Neurons

An important issue in designing a network is how many neurons are needed in each hidden layer. Using too few neurons can lead to incomplete signal recognition in a complex dataset, that is, underfitting. Using too many neurons increases the training time, perhaps so much that the network cannot be trained in a reasonable amount of time. A large number of neurons can also lead to overfitting, in which case the network has too many parameters relative to the information contained in the training set [40]. The best number of hidden units depends on many factors: the numbers of inputs and outputs of the network, the number of cases in the sample set, the noise of the target data, the complexity of the error function, the network architecture, and the network training algorithm.

In the majority of cases, there is no easy way to determine the optimal number of neurons in the hidden layer without training the network [48]. The most practical way is the trial-and-error method; in fact, forward selection or backward selection can be used to determine the number of units in the hidden layer. Forward selection begins with choosing a reasonable rule for evaluating network performance, then training and testing a network with a small number of hidden units. The number of hidden units is then slightly increased and the process repeated until the error is acceptable or no further significant improvement is observed. Backward selection, in contrast, starts with a large number of units in the hidden layer and then decreases it. This process is time-consuming but helpful in finding the right number of units for the hidden layer.
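The forward-selection procedure can be sketched as a simple loop; here `score_fn` stands in for a full train-and-validate cycle, and the demo error curve is purely hypothetical, chosen only so the loop terminates:

```python
def forward_select_neurons(score_fn, start=1, max_neurons=64, tol=1e-3):
    """Forward selection: grow the hidden layer until the validation error
    stops improving by more than `tol`. `score_fn(n)` must return the
    validation error of a network with n hidden neurons."""
    best_n, best_err = start, score_fn(start)
    n = start
    while n < max_neurons:
        n += 1
        err = score_fn(n)
        if best_err - err > tol:          # still a significant improvement
            best_n, best_err = n, err
        else:                             # improvement stalled: stop growing
            break
    return best_n, best_err

# Hypothetical error curve standing in for real train/test cycles:
demo_curve = lambda n: 1.0 / n if n < 24 else 1.0 / 24
best_n, best_err = forward_select_neurons(demo_curve)
```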

4.2. Neural Network Evaluation Procedure

The disadvantage of the backpropagation algorithm is its relatively slow convergence speed [49]. Therefore, many more powerful optimization algorithms have been developed, most of them based on simple gradient descent. One algorithm that improves the convergence rate, or learning rate, of the neural network is backpropagation training with the conjugate gradient algorithm.

In conjugate gradient algorithms, the search direction is periodically reset to the negative of the gradient [50]. The standard reset point occurs when the number of iterations equals the number of network parameters, namely, the weights and biases. To improve the efficiency of training, other reset approaches have been proposed. Among them, Powell [51], based on an earlier version proposed by Beale [52], suggested restarting if very little orthogonality remains between the current gradient and the previous one. This is tested with the following inequality:

|g_{k-1}^T g_k| >= 0.2 ||g_k||^2,

where g_k is the gradient at the kth iteration. If this condition is satisfied, the search direction is reset to the negative of the gradient.
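Powell's restart criterion, which triggers a restart when little orthogonality remains between successive gradients, can be sketched as:

```python
def should_restart(g_prev, g_curr, threshold=0.2):
    """Restart the conjugate direction when |g_{k-1}^T g_k| >= 0.2 * ||g_k||^2,
    i.e. when little orthogonality remains between successive gradients."""
    dot = sum(a * b for a, b in zip(g_prev, g_curr))
    norm_sq = sum(b * b for b in g_curr)
    return abs(dot) >= threshold * norm_sq

# Orthogonal successive gradients: no restart needed
keep_going = should_restart([1.0, 0.0], [0.0, 1.0])   # False
# Nearly parallel gradients: the search direction is reset
reset = should_restart([1.0, 0.0], [0.9, 0.1])        # True
```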

4.3. Validation of Models

Evaluating a model’s accuracy is an essential part of the machine learning modeling process, as it describes the model’s performance level in its predictions. In this study, three statistical performance measures, namely, root mean square error (RMSE), mean absolute error (MAE), and coefficient of determination (R2), are used to evaluate the difference between the results of each network and its ability to make accurate predictions. These criteria are commonly used to quantify the performance of machine learning algorithms. More specifically, the RMSE is determined by the mean squared difference between the actual and estimated values, while the MAE measures the average magnitude of the errors; R2 evaluates the correlation between the actual and estimated values. Quantitatively, lower RMSE and MAE values indicate better model performance, whereas higher R2 values indicate better model performance. RMSE, MAE, and R2 are estimated as follows:

RMSE = sqrt[(1/N) Σ (a_j − p_j)²],
MAE = (1/N) Σ |a_j − p_j|,
R² = 1 − [Σ (a_j − p_j)²] / [Σ (a_j − ā)²],

where a_j is the actual value, p_j is the predicted value, ā is the average of the actual values, and N is the total number of samples; all sums run over j = 1, …, N.
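The three criteria can be implemented directly from their definitions; a minimal sketch:

```python
from math import sqrt

def rmse(actual, predicted):
    """Root mean square error."""
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def r2(actual, predicted):
    """Coefficient of determination."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# A perfect prediction gives RMSE = MAE = 0 and R2 = 1
actual, predicted = [30.0, 45.0, 60.0], [30.0, 45.0, 60.0]
```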

5. Methodology Flowchart

The methodology of this study includes three main steps:
(i) Preparation of the data: the collected dataset is randomly divided into two parts; the first part accounts for 70% of the data and is used to train the network, while the remaining 30% is used to test the network’s performance.
(ii) Model building and training: the training dataset is used to construct the ANN model, from which the appropriate structure of the ANN model is selected.
(iii) Model validation: the trained models are tested and validated using the testing dataset. The predictability of the proposed model is assessed through statistical criteria such as R2, RMSE, and MAE.

A schematic diagram of the methodology is illustrated in Figure 4.
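Step (i), the random 70/30 split, can be sketched as follows (the seed is an arbitrary choice, fixed only to make the example repeatable):

```python
import random

def split_70_30(data, seed=42):
    """Randomly partition a dataset into 70% training and 30% testing parts."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    cut = round(0.7 * len(data))
    train = [data[i] for i in idx[:cut]]
    test = [data[i] for i in idx[cut:]]
    return train, test

# With the paper's 1274 samples this yields 892 training and 382 testing rows
train, test = split_70_30(list(range(1274)))
```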

6. Results and Discussion

6.1. Number of Neurons in a Single Hidden Layer of ANN

In this section, the number of neurons in the hidden layer is determined through several formulas proposed in the literature. Eighteen formulas are collected and summarized in Table 2. Of these, twelve are based only on the number of input variables, while the remaining six are based on both the numbers of input and output variables. Based on eight input variables and one output variable, 18 values of the neuron number are determined, as shown in Table 2. It is worth noting that both adjacent integer values are taken when a calculated value is not an integer. By doing so, 25 values are obtained from the 18 formulas. These values are distributed in the range from 1 to 255 and displayed in Figure 5.
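To illustrate how such formulas generate the candidate list, the sketch below evaluates three commonly cited heuristics (chosen here as examples; they are not necessarily the exact formulas of Table 2) and takes both adjacent integers for non-integer results:

```python
from math import ceil, floor, sqrt

n_in, n_out = 8, 1

# Illustrative heuristics of the same flavour as those in Table 2
# (these specific formulas are examples, not the paper's exact list):
candidates = [
    2 * n_in + 1,         # "2n + 1" rule        -> 17
    (n_in + n_out) / 2,   # mean of in/out sizes -> 4.5
    sqrt(n_in * n_out),   # geometric mean       -> about 2.83
]

def adjacent_integers(v):
    """Non-integer estimates are replaced by both adjacent integers."""
    return [int(v)] if float(v).is_integer() else [floor(v), ceil(v)]

# Deduplicated, sorted list of candidate neuron counts
taken = sorted({n for v in candidates for n in adjacent_integers(v)})
```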


Number | Authors | Formula | Values computed | Values taken | Reference

1 | Neville (1986) | – | 6 | 6 | [53]
2 | Neville (1986) | – | 17 | 17 | [53]
3 | Hush (1989) | – | 24 | 24 | [54]
4 | Popovics (1990) | – | 4.5 | 4, 5 | [55]
5 | Gallant (1993) | – | 16 | 16 | [56]
6 | Wang (1994) | – | 5.33 | 5, 6 | [57]
7 | Masters (1994) | – | 3 | 3 | [58]
8 | Li et al. (1995) | – | 3.53 | 3, 4 | [59]
9 | Tamura and Tateishi (1997) | – | 9 | 9 | [60]
10 | Lai and Serra (1997) | – | 8 | 8 | [61]
11 | Nagendra (1998) | – | 9 | 9 | [62]
12 | Zhang et al. (2003) | – | 33 | 33 | [63]
13 | Shibata and Yusuke (2009) | – | 2.83 | 2, 3 | [64]
14 | Sheela and Deepa (2013) | – | 4.63 | 4, 5 | [40]
15 | Hunter et al. (2012) | – | 255 | 255 | [65]
16 | Ripley (1993) | – | 4.5 | 4, 5 | [66]
17 | Kannellopoulas and Wilkinson (1997) | – | 16 | 16 | [67]
18 | Paola (1994) | – | 1.28 | 1, 2 | [68]

Figure 5 depicts the neuron numbers for 24 cases; the case with 255 neurons is not shown in Figure 5 for better illustration. After removing repeated values, a total of 13 values are identified for use in the ANN model. The basic parameters of the ANN used in this study are presented in Table 3, including the fixed parameters and the neuron numbers in the hidden layer. The number of inputs is equal to 8, and the compressive strength of concrete is the only output. The sigmoid function is used as the activation function of the hidden layer, and the linear function as the activation function of the output layer. The training algorithm is conjugate gradient backpropagation with Powell-Beale restarts.


Type | Parameter | Description

Fixed | Neurons in input layer | 8
Fixed | Neurons in output layer | 1
Fixed | Hidden layer activation function | Sigmoid
Fixed | Output layer activation function | Linear
Fixed | Training algorithm | Conjugate gradient backpropagation with Powell-Beale restarts
Fixed | Cost function | Mean square error (MSE)
Fixed | Number of hidden layers | 1
Investigated | Neurons in hidden layer | Varying over 13 different values

6.2. Prediction Performance and Statistical Analysis

In this section, performance criteria of the ANN model with 13 different cases of neuron number are shown in Figure 6, including (a) coefficient of determination (R2), (b) root mean square error (RMSE), and (c) mean absolute error (MAE). A total of 1300 simulations are conducted. The statistical measurements of the simulations are shown in Figure 6, highlighting the mean values and standard deviation (Std) over 100 simulations.
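The aggregation of mean and Std over repeated random runs can be sketched as follows; `fake_run` is a hypothetical stand-in for one full train/test cycle of an ANN under random sampling:

```python
import random
from math import sqrt

def mc_statistics(evaluate, n_runs=100, seed=0):
    """Repeat a stochastic evaluation and report mean and std of the score."""
    rng = random.Random(seed)
    scores = [evaluate(rng) for _ in range(n_runs)]
    mean = sum(scores) / n_runs
    std = sqrt(sum((s - mean) ** 2 for s in scores) / n_runs)
    return mean, std

# Stand-in for one train/test cycle; the score distribution is made up:
def fake_run(rng):
    return 0.89 + rng.gauss(0, 0.03)   # hypothetical R2-like score

mean, std = mc_statistics(fake_run, n_runs=100)
```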

It can be seen that the maximum value of R2 obtained for the training dataset is R2 = 0.967, and the minimum value is R2 = 0.667. The maximum value of R2 obtained for the testing datasets is R2 = 0.883, and the minimum value is R2 = 0.665. The RMSE values range from 3 to 9.5 for the training parts and from 5.75 to 11.7 for the testing parts. The results also indicate that the MAE values are in the range of 2 to 7 for the training datasets and 3.9 to 7.1 for the testing datasets.

The Std values of each case are also displayed in Figure 6. The Std values are high for certain neuron numbers, such as 1, 2, 3, 8, and 17 neurons. In particular, the ANN structures containing 1 and 2 neurons in the hidden layer have the highest Std values. The ANN structure containing 17 neurons in the hidden layer has high accuracy, with R2 = 0.94, RMSE = 4.2, and MAE = 3.2 for the training part, and R2 = 0.85, RMSE = 5.9, and MAE = 4.0 for the testing part. However, its Std values of R2, RMSE, and MAE are high for both the training and testing datasets, so the reliability of the ANN model containing 17 neurons is limited. Overall, the ANN models containing 16, 24, and 33 neurons in one hidden layer have the highest reliability, with the highest mean values of R2, the lowest mean values of RMSE and MAE, and the lowest Std values.

Table 4 shows four values of the quality assessment criteria (maximum, minimum, average, and Std) over 100 simulations for the three models ANN-16N, ANN-24N, and ANN-33N, corresponding to 16, 24, and 33 neurons, respectively. According to the testing part results, the mean and Std of R2 are equal to 0.8886 and 0.028 for ANN-16N, 0.8895 and 0.0295 for ANN-24N, and 0.885 and 0.0221 for ANN-33N, respectively. For the RMSE criterion, the mean and Std values are, respectively, equal to 5.5521 and 0.7268 for ANN-16N, 5.5317 and 0.7873 for ANN-24N, and 5.6654 and 0.5431 for ANN-33N. With respect to the MAE criterion, the mean and Std values are, respectively, equal to 3.8967 and 0.2493 for ANN-16N, 3.8506 and 0.318 for ANN-24N, and 3.8979 and 0.3243 for ANN-33N. Overall, thanks to the highest mean value of R2 and the lowest mean values of RMSE and MAE, the ANN-24N model is considered better than the other ANN models, and its Std values remain low; ANN-24N is thus regarded as the most reliable model with the highest performance. However, before any further conclusion, the convergence of these three ANN models needs to be evaluated.


Criteria | ANN-16N Train | ANN-16N Test | ANN-24N Train | ANN-24N Test | ANN-33N Train | ANN-33N Test

R2
 Min | 0.9077 | 0.6947 | 0.8462 | 0.6882 | 0.8542 | 0.7766
 Average | 0.9365 | 0.8886 | 0.9418 | 0.8895 | 0.9486 | 0.885
 Max | 0.9482 | 0.9228 | 0.9574 | 0.9285 | 0.9611 | 0.9201
 Std | 0.0062 | 0.028 | 0.0173 | 0.0295 | 0.0145 | 0.0221

RMSE
 Min | 3.8059 | 4.5031 | 3.4086 | 4.4266 | 3.3328 | 4.8466
 Average | 4.1591 | 5.5521 | 3.954 | 5.5317 | 3.7171 | 5.6654
 Max | 4.9525 | 10.6746 | 6.4868 | 10.9525 | 6.3695 | 7.8546
 Std | 0.1925 | 0.7268 | 0.4915 | 0.7873 | 0.4261 | 0.5431

MAE
 Min | 2.7629 | 3.3248 | 2.5026 | 3.2971 | 2.4531 | 3.3841
 Average | 3.0742 | 3.8967 | 2.9071 | 3.8506 | 2.7212 | 3.8979
 Max | 3.7582 | 4.7447 | 4.9218 | 5.1684 | 4.6705 | 5.8684
 Std | 0.1541 | 0.2493 | 0.3774 | 0.318 | 0.323 | 0.3243

6.3. Investigation on the Convergence of Prediction Results

The use of Monte Carlo simulation for convergence analysis of the results is important, aiming at determining (i) the suitable number of Monte Carlo simulations and (ii) the reliability of the prediction results. Figure 7 depicts the convergence of results for the three ANN architectures proposed in this investigation, namely, the ANN models with 16, 24, and 33 neurons. It is worth noting that the convergence analysis is performed for both the training and testing datasets in all cases over 100 simulations. Figures 7(a)–7(c) show the convergence curves of R2, RMSE, and MAE, respectively. These values are relatively stable after about 50 simulations for both the training and testing parts. It can thus be stated that the results obtained by the proposed ANN models with different numbers of neurons in the hidden layer converge, even under the random sampling effect.
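Convergence can be judged by watching the running mean of a criterion stabilize across simulations; a minimal sketch (the tolerance value is an assumption for the example):

```python
def running_mean(values):
    """Cumulative mean after each simulation, used to judge convergence."""
    out, total = [], 0.0
    for i, v in enumerate(values, start=1):
        total += v
        out.append(total / i)
    return out

def converged_after(values, tol=1e-3):
    """First index after which the running mean changes by less than tol."""
    rm = running_mean(values)
    for i in range(1, len(rm)):
        if all(abs(rm[j] - rm[j - 1]) < tol for j in range(i, len(rm))):
            return i
    return len(rm)
```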

Setting aside the training phase, the performance analysis focuses on the testing parts, as these directly reflect the prediction performance of machine learning models. The 24-neuron ANN model (24N) exhibits the best converged prediction accuracy (highest converged values of R2 and lowest converged values of RMSE and MAE). Moreover, these three ANN architectures exhibit a low level of fluctuation and require only about 20 simulations to achieve converged results. Therefore, it can be concluded that the ANN model with 24 neurons is the best architecture for predicting the fc of concrete.

6.4. Prediction Performance of Typical ANN Architecture

This section is dedicated to the presentation of typical prediction results of the best ANN architecture containing 24 neurons in a single hidden layer. The correlations between the predicted and the experimental values are shown in Figures 8(a) and 8(b) for the training and testing part, respectively, through a regression model. The plot of a linear fit is performed in each case, represented by a continuous blue line. Figure 8 demonstrates a high correlation between the experimental and predicted compressive strength of concrete using BFS and FA.

Figures 9(a) and 9(b) show the probability distribution of errors for the training and testing datasets of the best ANN model, respectively. Figure 9(a) shows that ANN-24N can successfully predict the compressive strength of concrete for the training set, where the prediction error is relatively low. Most prediction errors are close to 0, with about 370 such samples in the training part and 160 in the testing part. In the testing part, only two predictions exhibit high errors, with absolute values of about 20 MPa.

Table 5 shows the values of the performance criteria for the best ANN architecture, containing 24 neurons. The best values of R2 are 0.9437 for the training part and 0.9285 for the testing part. The values of RMSE, MAE, Err. mean, and Err. Std are 3.9474, 2.9074, −0.0124, and 3.9496, respectively, for the training dataset, and 4.4266, 3.2971, 0.1285, and 4.4306, respectively, for the testing dataset.


Dataset | RMSE | MAE | Err. mean | Err. Std | R2

Training set | 3.9474 | 2.9074 | −0.0124 | 3.9496 | 0.9437
Testing set | 4.4266 | 3.2971 | 0.1285 | 4.4306 | 0.9285

Table 6 compares different machine learning models proposed in the literature with the ANN model of this investigation, in terms of machine learning algorithm, number of inputs, number of data samples, and performance measure. The results show that the ANN model of this investigation, using only a single hidden layer, can predict the compressive strength of concrete with high reliability and with higher accuracy than almost all of the listed investigations.


Reference | Machine learning algorithm | Inputs | Number of data samples | Performance measure

Han et al. [41] | ANN model | 7 inputs: curing temperature, water-to-binder ratio, BFS-to-total-binder ratio, water, fine aggregate, coarse aggregate, superplasticizer | 269 | R2 = 0.9610

Boukhatem et al. [27] | ANN model | 5 inputs: cement, water-to-cement ratio, slag content, temperature, age of samples | 726 | R2 = 0.9216

Kandiri et al. [28] | Hybridized multiobjective ANN and multiobjective salp swarm algorithm (MOSSA); M5P model tree algorithm | 7 inputs: cement, BFS, BFS grade, water, fine aggregate, coarse aggregate, age of samples | 624 | ANN-16: R2 = 0.941; ANN-7: R2 = 0.865; M5P: R = 0.884

Boğa et al. [29] | ANN model and adaptive neuro-fuzzy inference system (ANFIS) | 4 inputs: cure type, curing period, BFS ratio, CNI ratio | 162 | ANN: R2 = 0.9710; ANFIS: R2 = 0.665

Bilim et al. [30] | ANN model | 6 inputs: cement, ground granulated blast-furnace slag, water, superplasticizer, aggregate, age of samples | 225 | R2 = 0.9600

Sarıdemir et al. [31] | ANN and fuzzy logic (FL) models | 5 inputs: age of samples, cement, BFS, water, aggregate | 284 | ANN: R2 = 0.981; FL: R2 = 0.968

Bui et al. [44] | Modified firefly algorithm-artificial neural network (MFA-ANN) | 8 inputs: cement, BFS, fly ash, water, superplasticizer, coarse aggregate, fine aggregate, age of samples | 1133 | R2 = 0.9025

Feng et al. [70] | AdaBoost algorithm | 8 inputs: cement, BFS, fly ash, water, superplasticizer, coarse aggregate, fine aggregate, age of samples | 1030 | AdaBoost: R2 = 0.982; ANN: R2 = 0.903; SVM: R2 = 0.855

Behnood et al. [42] | M5P model tree algorithm | 8 inputs: cement, blast-furnace slag, fly ash, water, superplasticizer, coarse aggregate, fine aggregate, age of sample | 1912 | R2 = 0.900

Golafshani and Behnood [69] | Biogeography-based programming (BBP) | 8 inputs: cement, silica fume, water, coarse aggregate, fine aggregate, superplasticizer, maximum aggregate size, age of sample | 1030 | BBP: R2 = 0.8806; RMSE = 8.5389; MAE = 6.3882

Dao et al. [43] | Gaussian process regression and ANN model | 8 inputs: cement, BFS, fly ash, water, superplasticizer, coarse aggregate, fine aggregate, age of samples | 1030 | R2 = 0.8930; RMSE = 5.46; MAE = 3.86

This paper | ANN model | 8 inputs: cement, water, coarse aggregate or gravel, fine aggregate or sand, blast-furnace slag, fly ash, superplasticizer, age of samples | 1274 | R2 = 0.9285; RMSE = 4.4266; MAE = 3.2971

First, it is important to note that high prediction accuracies are reported for databases with a low number of samples. For instance, Sarıdemir et al. [31], Boğa et al. [29], Han et al. [41], Bilim et al. [30], and Kandiri et al. [28] reported R2 values of 0.981, 0.971, 0.961, 0.960, and 0.9409, respectively, for databases of 162 to 624 data points. The testing age of the concrete samples, which has been found to be the most influential parameter for concrete compressive strength [43], was not considered as an input variable in [29, 31, 41]. Moreover, fly ash was not considered in the works of Kandiri et al. [28] and Bilim et al. [30].

Second, the works of Boukhatem et al. [27], Bui et al. [44], Golafshani and Behnood [69], and Dao et al. [43] consider larger databases. Even so, the prediction accuracy of this work (R2 = 0.9285) is higher than the reported values of R2, which range from 0.8806 to 0.9216, although the database in this study contains more samples than those of the studies mentioned earlier.

Third, Feng et al. [70] reached a higher R2 value (R2 = 0.9820). However, it should be noted that this contribution used 8 inputs and the 1030 samples of Yeh's database, which include 40 duplicate data points. Moreover, 90% of the data were used to train the model and only the remaining 10% to test it, so the reported accuracy corresponds to that small testing set. In this study, the classical 70/30 train-to-test ratio is used, which places a stricter constraint on the model's predictive ability, as the testing phase covers a broader range of input values and more concrete samples.

Similarly, Behnood et al. [42] used the largest number of concrete samples in the literature (1912 samples). The present study shows that the proposed ANN model predicts the concrete compressive strength with R2 = 0.9285, which is higher than the value of Behnood et al. [42] (R2 = 0.900). It is worth noting that those authors used a train-to-test ratio of 85/15, compared with the 70/30 ratio of this study.

Overall, these comparisons confirm the high accuracy and prediction reliability of the proposed ANN model. Moreover, the single-hidden-layer ANN model is simpler and requires less total computation time than hybrid machine learning approaches. These results indicate that, if the architecture of an ANN model is carefully selected, it can be effectively used as an alternative prediction tool for material engineers.
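As an illustration of the workflow compared above, a single-hidden-layer [8-24-1] network trained on a 70/30 split and evaluated with R2, RMSE, and MAE can be sketched as follows. This is a minimal sketch in Python with scikit-learn, not the MATLAB implementation of the supplementary materials; the data are synthetic placeholders, so the metric values do not reproduce those of the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
# Synthetic stand-in for the 1274-sample database: 8 mix-design inputs
X = rng.random((1274, 8))
y = X @ rng.random(8) + rng.normal(0.0, 0.05, 1274)  # placeholder strength values

# 70/30 random train/test split, as in the study
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# One hidden layer with 24 neurons: the [8-24-1] structure
model = MLPRegressor(hidden_layer_sizes=(24,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Performance measures on the testing part
y_pred = model.predict(X_test)
r2 = r2_score(y_test, y_pred)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
mae = mean_absolute_error(y_test, y_pred)
```

With real data, this loop would be repeated over many random splits (the Monte Carlo strategy of the paper) before reporting the metrics.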

6.5. Sensitivity Analysis

In this section, PDP analysis is performed for the eight input variables used in the ANN model, namely, cement (kg/m3), water (kg/m3), coarse aggregate or gravel (kg/m3), fine aggregate or sand (kg/m3), blast-furnace slag (kg/m3), fly ash (kg/m3), superplasticizer (kg/m3), and age of samples (day). Figure 10 shows the PDP curves of the compressive strength of concrete as a function of each input variable.

For the cement content, the PDP values of fc vary from 10 to 60 MPa, a difference of 50 MPa. With respect to the water content, fc decreases from 45 to 25 MPa, a difference of 20 MPa. The change in coarse aggregate content generates a difference of 21 MPa (from 37 to 58 MPa), whereas, for the fine aggregate, it is only 6 MPa (from 35 to 41 MPa). Regarding the blast-furnace slag content, fc varies from 34 to 43 MPa, a difference of only 9 MPa. Finally, the PDP values vary from 36 to 65 MPa, from 37 down to 17 MPa, and from 18 to 75 MPa for the fly ash content, the superplasticizer content, and the age of samples, respectively.

On the basis of these PDP ranges, the effect of the input variables on the compressive strength of concrete is most pronounced for the age of samples, followed by the cement, fly ash, water, coarse aggregate, superplasticizer, blast-furnace slag, and fine aggregate contents. Most of the effects are positive, except for the superplasticizer content, whose effect is negative; this negative effect of superplasticizer is also supported by the experimental investigation of Benaicha et al. [71]. The PDP investigation further shows that the optimum water content is equal to 150 kg/m3 and that the compressive strength increases with higher cement and BFS contents. With a large number of data samples distributed from 75 to 200 kg/m3, the result of the PDP investigation can be considered reliable in this range, where the compressive strength lies between 35 and 65 MPa. Finally, the compressive strength increases strongly from 1 to 28 days; after this period, it continues to develop, but at a slower rate.

7. Conclusion

In this investigation, the well-known ANN machine learning algorithm has been applied to predict the compressive strength of concrete containing blast-furnace slag and fly ash. A total of 1274 experimental results were gathered to construct a database and develop the ANN model. In this database, 70% of the data were randomly chosen for the training phase and the remaining 30% for the testing phase of the ANN model. Monte Carlo simulation was performed to determine the number of simulations needed to obtain converged prediction results; 100 simulations proved a sufficient number of runs. The analysis shows that ANN-24N (24 neurons in a single hidden layer) is the most stable model and produces the best prediction performance. The values of R2, RMSE, and MAE of the best model are, respectively, 0.9285, 4.4266, and 3.2971 for the testing part. Partial Dependence Plots (PDP) analysis was used to investigate the dependence of the prediction results on the eight input variables of this study. The age of samples and the cement content were determined to be the two most important parameters affecting the compressive strength of concrete. The results of this investigation could help in constructing a reliable soft computing tool to promptly predict the compressive strength of concrete containing blast-furnace slag and fly ash (see supplementary materials). Once such a tool is carefully built, the prediction process can reduce the time consumption and cost of experimental tests.
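The Monte Carlo procedure summarized above amounts to repeating the random split and training many times, then checking that the running mean of the performance measure stabilizes. A hedged sketch on toy data (the run count is reduced from the paper's 100 for brevity, and the data are synthetic):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
# Toy stand-in for the concrete database: 8 inputs, roughly linear target
X = rng.random((250, 8))
y = X @ np.arange(1.0, 9.0) + rng.normal(0.0, 0.1, 250)

scores = []
for run in range(10):  # the paper uses 100 runs; fewer here for brevity
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=run
    )
    model = MLPRegressor(hidden_layer_sizes=(24,), max_iter=800, random_state=run)
    model.fit(X_tr, y_tr)
    scores.append(r2_score(y_te, model.predict(X_te)))

# Convergence check: the running mean of R2 should flatten as runs accumulate
running_mean = np.cumsum(scores) / np.arange(1, len(scores) + 1)
```

When the running mean no longer changes appreciably from one run to the next, the number of simulations can be considered sufficient, which is the criterion behind the 100-run choice in this study.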

A limitation of the present work is the ranges of the inputs and output of the database, which restrict the applicability of the ANN model and of the numerical tool in the supplementary materials. To improve the prediction accuracy and reliability of the ANN model, a new, larger database should be developed, which is the short-term research direction of the present work.

Data Availability

The processed data are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Supplementary Materials

MATLAB Code of compressive strength prediction using the ANN model. (Supplementary Materials)

References

  1. A. Behnood and E. M. Golafshani, “Predicting the compressive strength of silica fume concrete using hybrid artificial neural network with multi-objective grey wolves,” Journal of Cleaner Production, vol. 202, pp. 54–64, 2018.
  2. A. Cheng, R. Huang, J.-K. Wu, and C.-H. Chen, “Influence of GGBS on durability and corrosion behavior of reinforced concrete,” Materials Chemistry and Physics, vol. 93, no. 2-3, pp. 404–411, 2005.
  3. E. Özbay, M. Erdemir, and H. İ. Durmuş, “Utilization and efficiency of ground granulated blast furnace slag on concrete properties-a review,” Construction and Building Materials, vol. 105, pp. 423–434, 2016.
  4. H. Song and V. Saraswathy, “Studies on the corrosion resistance of reinforced steel in concrete with ground granulated blast-furnace slag-an overview,” Journal of Hazardous Materials, vol. 138, no. 2, pp. 226–233, 2006.
  5. Y. Zhao, J. Gong, and S. Zhao, “Experimental study on shrinkage of HPC containing fly ash and ground granulated blast-furnace slag,” Construction and Building Materials, vol. 155, pp. 145–153, 2017.
  6. A. Oner and S. Akyuz, “An experimental study on optimum usage of GGBS for the compressive strength of concrete,” Cement and Concrete Composites, vol. 29, no. 6, pp. 505–514, 2007.
  7. M. Shariq, J. Prasad, and A. Masood, “Effect of GGBFS on time dependent compressive strength of concrete,” Construction and Building Materials, vol. 24, no. 8, pp. 1469–1478, 2010.
  8. R. Siddique and D. Kaur, “Properties of concrete containing ground granulated blast furnace slag (GGBFS) at elevated temperatures,” Journal of Advanced Research, vol. 3, no. 1, pp. 45–51, 2012.
  9. M. M. Tüfekçi and Ö. Çakır, “An investigation on mechanical and physical properties of recycled coarse aggregate (RCA) concrete with GGBFS,” International Journal of Civil Engineering, vol. 15, no. 4, pp. 549–563, 2017.
  10. R. K. Majhi, A. N. Nayak, and B. B. Mukharjee, “Development of sustainable concrete using recycled coarse aggregate and ground granulated blast furnace slag,” Construction and Building Materials, vol. 159, pp. 417–430, 2018.
  11. T. Gehlot, “Influence of Ggbs and fly ash on compressive strength of concrete,” in Proceedings of the Recent Development in Engineering Sciences, Humanities and Management Engineering College, Bharatpur, Rajasthan, February 2020.
  12. Q. Li and Q. Zhang, “Experimental study on the compressive strength and shrinkage of concrete containing fly ash and ground granulated blast‐furnace slag,” Structural Concrete, vol. 20, no. 5, pp. 1551–1560, 2019.
  13. D. V. Dao, H.-B. Ly, H.-L. T. Vu, T.-T. Le, and B. T. Pham, “Investigation and optimization of the C-ANN structure in predicting the compressive strength of foamed concrete,” Materials, vol. 13, no. 5, p. 1072, 2020.
  14. B. T. Pham, M. D. Nguyen, H.-B. Ly et al., “Development of artificial neural networks for prediction of compression coefficient of soft soil,” in CIGOS 2019, Innovation for Sustainable Infrastructure, C. Ha-Minh, D. V. Dao, F. Benboudjema et al., Eds., pp. 1167–1172, Springer, Berlin, Germany, 2020.
  15. B. T. Pham, T. Nguyen-Thoi, H.-B. Ly et al., “Extreme learning machine based prediction of soil shear strength: a sensitivity analysis using monte carlo simulations and feature backward elimination,” Sustainability, vol. 12, no. 6, p. 2339, 2020.
  16. H.-B. Ly, B. T. Pham, L. M. Le, T.-T. Le, V. M. Le, and P. G. Asteris, “Estimation of axial load-carrying capacity of concrete-filled steel tubes using surrogate models,” Neural Computing and Applications, vol. 33, pp. 1–22, 2020.
  17. H.-B. Ly, T.-T. Le, H.-L. T. Vu, V. Q. Tran, L. M. Le, and B. T. Pham, “Computational hybrid machine learning based prediction of shear capacity for steel fiber reinforced concrete beams,” Sustainability, vol. 12, no. 7, p. 2709, 2020.
  18. M. Y. Mansour, M. Dicleli, J. Y. Lee, and J. Zhang, “Predicting the shear strength of reinforced concrete beams using artificial neural networks,” Engineering Structures, vol. 26, no. 6, pp. 781–799, 2004.
  19. H. Naderpour, O. Poursaeidi, and M. Ahmadi, “Shear resistance prediction of concrete beams reinforced by FRP bars using artificial neural networks,” Measurement, vol. 126, pp. 299–308, 2018.
  20. G. Jiang, J. Keller, P. L. Bond, and Z. Yuan, “Predicting concrete corrosion of sewers using artificial neural network,” Water Research, vol. 92, pp. 52–60, 2016.
  21. C. Avila, Y. Shiraishi, and Y. Tsuji, “Crack width prediction of reinforced concrete structures by artificial neural networks,” in Proceedings of the 7th Seminar on Neural Network Applications in Electrical Engineering NEUREL 2004, pp. 39–44, Belgrade, Serbia, September 2004.
  22. R. Perera, M. Barchín, A. Arteaga, and A. D. Diego, “Prediction of the ultimate strength of reinforced concrete beams FRP-strengthened in shear using neural networks,” Composites Part B: Engineering, vol. 41, no. 4, pp. 287–298, 2010.
  23. F. Khademi, S. M. Jamal, N. Deshpande, and S. Londhe, “Predicting strength of recycled aggregate concrete using artificial neural network, adaptive neuro-fuzzy inference system and multiple linear regression,” International Journal of Sustainable Built Environment, vol. 5, no. 2, pp. 355–369, 2016.
  24. F. Özcan, C. D. Atiş, O. Karahan, E. Uncuoğlu, and H. Tanyildizi, “Comparison of artificial neural network and fuzzy logic models for prediction of long-term compressive strength of silica fume concrete,” Advances in Engineering Software, vol. 40, no. 9, pp. 856–863, 2009.
  25. D. V. Dao, H. B. Ly, S. H. Trinh, T. T. Le, and B. T. Pham, “Artificial intelligence approaches for prediction of compressive strength of geopolymer concrete,” Materials, vol. 12, no. 6, 2019.
  26. I. J. Han, T. F. Yuan, J. Y. Lee, Y. S. Yoon, and J. H. Kim, “Learned prediction of compressive strength of GGBFS concrete using hybrid artificial neural network models,” Materials, vol. 12, no. 22, 2019.
  27. B. Boukhatem, M. Ghrici, S. Kenai, and A. Tagnit-Hamou, “Prediction of efficiency factor of ground-granulated blast-furnace slag of concrete using artificial neural network,” Materials Journal, vol. 108, no. 1, pp. 55–63, 2011.
  28. A. Kandiri, E. Mohammadi Golafshani, and A. Behnood, “Estimation of the compressive strength of concretes containing ground granulated blast furnace slag using hybridized multi-objective ANN and salp swarm algorithm,” Construction and Building Materials, vol. 248, Article ID 118676, 2020.
  29. A. R. Boğa, M. Öztürk, and İ.B. Topçu, “Using ANN and ANFIS to predict the mechanical and chloride permeability properties of concrete containing GGBFS and CNI,” Composites Part B: Engineering, vol. 45, pp. 688–696, 2013.
  30. C. Bilim, C. D. Atiş, H. Tanyildizi, and O. Karahan, “Predicting the compressive strength of ground granulated blast furnace slag concrete using artificial neural network,” Advances in Engineering Software, vol. 40, no. 5, pp. 334–340, 2009.
  31. M. Sarıdemir, İ.B. Topçu, F. Özcan, and M. H. Severcan, “Prediction of long-term effects of GGBFS on compressive strength of concrete by artificial neural networks and fuzzy logic,” Construction and Building Materials, vol. 23, pp. 1279–1286, 2009.
  32. P. Chopra, R. K. Sharma, M. Kumar, and T. Chopra, “Comparison of machine learning techniques for the prediction of compressive strength of concrete,” Advances in Civil Engineering, vol. 2018, Article ID e5481705, 9 pages, 2018.
  33. D. J. Pitroda, “Prediction of strength for fly ash cement concrete through soft computing approaches,” International Journal of Advanced Research in Engineering, Science and Management (IJARESM), vol. 25, 2020.
  34. B. K. R. Prasad, H. Eskandari, and B. V. V. Reddy, “Prediction of compressive strength of SCC and HPC with high volume fly ash using ANN,” Construction and Building Materials, vol. 23, no. 1, pp. 117–128, 2009.
  35. İ. B. Topçu and M. Sarıdemir, “Prediction of compressive strength of concrete containing fly ash using artificial neural networks and fuzzy logic,” Computational Materials Science, vol. 41, no. 3, pp. 305–311, 2008.
  36. M. Ahmadi, H. Naderpour, and A. Kheyroddin, “Utilization of artificial neural networks to prediction of the capacity of CCFT short columns subject to short term axial load,” Archives of Civil and Mechanical Engineering, vol. 14, no. 3, pp. 510–517, 2014.
  37. P. B. Cachim, “Using artificial neural networks for calculation of temperatures in timber under fire loading,” Construction and Building Materials, vol. 25, no. 11, pp. 4175–4180, 2011.
  38. P. Chopra, R. K. Sharma, and M. Kumar, “Prediction of compressive strength of concrete using artificial neural network and genetic programming,” Advances in Materials Science and Engineering, vol. 2016, p. e7648467, 2016.
  39. I.-C. Yeh, “Modeling of strength of high-performance concrete using artificial neural networks,” Cement and Concrete Research, vol. 28, no. 12, pp. 1797–1808, 1998.
  40. K. G. Sheela and S. N. Deepa, “Review on methods to fix number of hidden neurons in neural networks,” Mathematical Problems in Engineering, vol. 2013, Article ID 425740, 11 pages, 2013.
  41. I.-J. Han, T.-F. Yuan, J.-Y. Lee, Y.-S. Yoon, and J.-H. Kim, “Learned prediction of compressive strength of GGBFS concrete using hybrid artificial neural network models,” Materials, vol. 12, no. 22, Article ID 3708, 2019.
  42. A. Behnood, V. Behnood, M. Modiri Gharehveran, and K. E. Alyamac, “Prediction of the compressive strength of normal and high-performance concretes using M5P model tree algorithm,” Construction and Building Materials, vol. 142, pp. 199–207, 2017.
  43. D. V. Dao, H. Adeli, H.-B. Ly et al., “A sensitivity and robustness analysis of GPR and ANN for high-performance concrete compressive strength prediction using a monte carlo simulation,” Sustainability, vol. 12, no. 3, p. 830, 2020.
  44. D.-K. Bui, T. Nguyen, J.-S. Chou, H. Nguyen-Xuan, and T. D. Ngo, “A modified firefly algorithm-artificial neural network expert system for predicting compressive and tensile strength of high-performance concrete,” Construction and Building Materials, vol. 180, pp. 320–333, 2018.
  45. K. M. Lee, H. K. Lee, S. H. Lee, and G. Y. Kim, “Autogenous shrinkage of concrete containing granulated blast-furnace slag,” Cement and Concrete Research, vol. 36, no. 7, pp. 1279–1285, 2006.
  46. G. Montavon, “Introduction to neural networks,” Machine Learning Meets Quantum Physics, vol. 968, pp. 37–62, 2020.
  47. R. Lippmann, “An introduction to computing with neural nets,” IEEE ASSP Magazine, vol. 4, no. 2, pp. 4–22, 1987.
  48. T. Kalman Šipoš, I. Miličević, and R. Siddique, “Model for mix design of brick aggregate concrete based on neural network modelling,” Construction and Building Materials, vol. 148, pp. 757–769, 2017.
  49. Y. H. Zweiri, J. F. Whidborne, and L. D. Seneviratne, “A three-term backpropagation algorithm,” Neurocomputing, vol. 50, pp. 305–318, 2003.
  50. P. Sandhu and S. Chhabra, “A comparative analysis of conjugate gradient algorithms & PSO based neural network approaches for reusability evaluation of procedure based software systems,” Chiang Mai Journal of Science, vol. 38, pp. 123–135, 2011.
  51. M. J. D. Powell, “Restart procedures for the conjugate gradient method,” Mathematical Programming, vol. 12, no. 1, pp. 241–254, 1977.
  52. E. M. L. Beale, “A derivative of conjugate gradients,” in Numerical Methods for Nonlinear Optimization, pp. 39–43, Academic Press, London, UK, 1972.
  53. A. M. Neville, Properties of Concrete, Pearson Education, London, UK, 2013.
  54. D. R. Hush, “Classification with neural networks: a performance analysis,” in Proceedings of the IEEE 1989 International Conference on Systems Engineering, pp. 277–280, Fairborn, OH, USA, August 1989.
  55. S. Popovics, “Analysis of concrete strength versus water-cement ratio relationship,” ACI Materials Journal, vol. 87, pp. 517–529, 1990.
  56. S. I. Gallant, Neural Network Learning and Expert Systems, MIT Press, Cambridge, MA, USA, 1993.
  57. C. Wang, A Theory of Generalization in Learning Machines with Neural Application, University of Pennsylvania, Philadelphia, PA, USA, 1994.
  58. T. Masters, Practical Neural Network Recipes in C++, Academic Press, Boston, MA, USA, 1994.
  59. J. Y. Li, T. W. S. Chow, and Y. L. Yu, “Estimation theory and optimization algorithm for the number of hidden units in the higher-order feedforward neural network,” in Proceedings of the IEEE International Conference on Neural Networks - Conference Proceedings, pp. 1229–1233, IEEE, Perth, Australia, December 1995.
  60. S. Tamura and M. Tateishi, “Capabilities of a four-layered feedforward neural network: four layers versus three,” IEEE Transactions on Neural Networks, vol. 8, no. 2, pp. 251–255, 1997.
  61. S. Lai and M. Serra, “Concrete strength prediction by means of neural network,” Construction and Building Materials, vol. 11, no. 2, pp. 93–98, 1997.
  62. S. Nagendra, Practical Aspects of Using Neural Networks: Necessary Preliminary Specifications, Gobal Research and Development Center, Niskayuna, NY, USA, 1998, http://citeseerx.ist.psu.edu/viewdoc/citations;jsessionid=0DE4DB7A72E3FD87F294E49E95FEAA2F?doi=10.1.1.
  63. Z. Zhang, X. Ma, and Y. Yang, “Bounds on the number of hidden neurons in three-layer binary neural networks,” Neural Networks, vol. 16, no. 7, pp. 995–1002, 2003.
  64. K. Shibata and I. Yusuke, “Effect of number of hidden neurons on learning in large-scale layered neural networks,” in Proceedings of the 2009 ICCAS-SICE, pp. 5008–5013, Fukuoka, Japan, August 2009.
  65. D. Hunter, H. Yu, M. S. Pukish III, J. Kolbusz, and B. M. Wilamowski, “Selection of proper neural network sizes and architectures-A comparative study,” IEEE Transactions on Industrial Informatics, vol. 8, no. 2, pp. 228–240, 2012.
  66. B. D. Ripley, “Statistical aspects of neural networks,” in Networks and Chaos-Statistical and Probabilistic Aspects, pp. 40–123, Chapman & Hall, London, UK, 1993.
  67. I. Kanellopoulos and G. G. Wilkinson, “Strategies and best practice for neural network image classification,” International Journal of Remote Sensing, vol. 18, no. 4, pp. 711–725, 1997.
  68. J. Paola, Neural Network Classification of Multispectral Imagery, The University of Arizona, Tucson, AZ, USA, 1994.
  69. E. M. Golafshani and A. Behnood, “Estimating the optimal mix design of silica fume concrete using biogeography-based programming,” Cement and Concrete Composites, vol. 96, pp. 95–105, 2019.
  70. D.-C. Feng, Z.-T. Liu, X.-D. Wang et al., “Machine learning-based compressive strength prediction for concrete: an adaptive boosting approach,” Construction and Building Materials, vol. 230, Article ID 117000, 2020.
  71. M. Benaicha, A. Hafidi Alaoui, O. Jalbaud, and Y. Burtschell, “Dosage effect of superplasticizer on self-compacting concrete: correlation between rheology and strength,” Journal of Materials Research and Technology, vol. 8, no. 2, pp. 2063–2069, 2019.

Copyright © 2021 Hai-Van Thi Mai et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
