Advances in Civil Engineering
Volume 2019, Article ID 3069046, 11 pages
https://doi.org/10.1155/2019/3069046
Research Article

Prediction of Concrete Compressive Strength and Slump by Machine Learning Methods

Civil Engineering, Tekirdağ Namık Kemal University, Çorlu Faculty of Engineering, Tekirdağ 59860, Turkey

Correspondence should be addressed to M. Timur Cihan; mehmetcihan@nku.edu.tr

Received 26 July 2019; Revised 10 October 2019; Accepted 8 November 2019; Published 29 November 2019

Guest Editor: Murat Kankal

Copyright © 2019 M. Timur Cihan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Machine learning methods have been successfully applied to many engineering disciplines. Prediction of the concrete compressive strength (fc) and slump (S) is important in terms of the desirability of concrete and its sustainability. The goals of this study were (i) to determine the most successful normalization technique for the datasets, (ii) to select the prime regression method to predict the fc and S outputs, (iii) to obtain the best subset with the ReliefF feature selection method, and (iv) to compare the regression results for the original and selected subsets. Experimental results demonstrate that the decimal scaling and min-max normalization techniques are the most successful methods for predicting the compressive strength and slump outputs, respectively. According to the evaluation metrics, such as the correlation coefficient, root mean squared error, and mean absolute error, the fuzzy logic method makes better predictions than any other regression method. Moreover, when the number of input variables was reduced from seven to four by the ReliefF feature selection method, the prediction accuracy remained within the acceptable error rate.

1. Introduction

Concrete is a complex composite material, and the predictability of its properties is extremely low. Therefore, it is challenging to model the concrete properties according to the effect variables. The biggest challenge of experimental designs is the high number of effect variables affecting the response variables. Multiple effect variables increase the number of trials, and a large number of uncontrollable variables makes it difficult to obtain the real response function.

Generally, the one-factor-at-a-time method is used in experimental designs to determine the concrete properties. The major disadvantage of this approach is that it does not consider the interactions between the factors (interaction terms). The higher the number of controlled and uncontrolled effect variables that influence the concrete properties, the lower the prediction accuracy. Nevertheless, a few experimental designs have been suggested that consider the controllable effect variables and the interaction terms between them [1].

Machine learning (ML) is a highly multidisciplinary field and consists of various methods for obtaining new information [2]. ML is most often used for prediction. Predicting the categorical variable values is called classification, whereas predicting the numerical variable values is called regression. Regression is the process of analyzing the relationship between one or more independent variables and a dependent variable [3].

In recent years, the ML methods have become popular as they allow researchers to improve the prediction accuracy of concrete properties [4] and are used for various engineering applications [5, 6]. The ML methods have been used to increase the prediction accuracy of concrete properties [7–15], mostly with data derived from literature sources. However, Chopra et al. [16, 17] applied data generated under controlled laboratory conditions.

Regression models tend to be used for the prediction of the compressive strength of high-strength concrete [18, 19]. These models also demonstrate how the concrete compressive strength depends on the mixing ratios [20]. Topçu and Sarıdemir [21] and Başyiğit et al. [22] developed models using the neural network (NN) and fuzzy logic (FL) methods to improve the prediction accuracy of the compressive strength of mineral-additive (fly ash) concrete and heavy-weight concrete. Both studies concluded that the compressive strength could be predicted using the models developed with the NN and FL methods without any further experiments. NN has also been reported to be more successful than data mining methods at enhancing the prediction accuracy of the concrete compressive strength [15, 17, 23–26]. Khademi et al. [27] compared the multiple linear regression, neural network, and adaptive neuro-fuzzy inference system (ANFIS) methods to estimate the 28-day concrete compressive strength and reported that the NN and ANFIS models provide reliable results.

Previous studies evaluated the amount of the concrete component materials and compared their results to the published data. In this study, the ML regression methods were compared to predict the compressive strength and slump values of the cube samples. The samples were prepared by accounting for seven simultaneously controllable effect variables in the laboratory. The study aimed to determine the most successful regression method by comparing the decision tree (DT), random forest (RF), support vector machine (SVM), partial least squares (PLS), artificial neural networks (ANN), bootstrap aggregation (bagging), and FL models for the prediction of the concrete compressive strength and slump values. The R, RMSE, and MAE metrics were used to compare the prediction accuracy of the developed models. Finally, feature reduction was accomplished by the feature selection method. Then, the model’s success rates were compared to predict the compressive strength and slump value using fewer variables.

2. Materials and Methods

2.1. Experimental Datasets

Datasets used for this study comprised seven input variables (i.e., W/C, C, fcc, FA, kk, CA, and TA) and two output (response) variables (i.e., fc and S) for two different maximum aggregate sizes, Dmax = 22.4 mm (D224) and Dmax = 11.2 mm (D112). The input variables were selected considering the simultaneously controllable effect variables [28–30]. A D-optimal design obtained by the augmentation of the fractional factorial design (2^(7−3)) was used as the experimental design. In the D-optimal design, 58 and 56 test results were employed for D112 and D224, respectively. Each experimental result was calculated as the average of three sample results produced under laboratory conditions [28–30]. Properties of the constituents are given in Table 1 [28–30]. Abbreviations of the effect and response variables and the basic statistics of the datasets are presented in Table 2.

Table 1: Properties of the constituents.
Table 2: Basic statistics of the datasets used.

3. Methods

In this study, the concrete compressive strength and slump values were predicted using ML regression models, namely, the regression tree, RF, support vector machines, artificial neural network, partial least squares, bagging, and FL. Datasets were randomly split into 70% for the training set and 30% for the independent test set. The training data were used to train the ML models, and the independent test data were used to evaluate each model’s performance. A 10-fold cross-validation procedure was used to estimate the skill of the ML models.
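As an illustration, the split and cross-validation could be set up in R as follows; the data frame name (d112), the response name (fc), and the use of the caret package are assumptions, since the paper does not detail its implementation:

```r
set.seed(42)                                   # reproducible random split
idx       <- sample(nrow(d112), round(0.7 * nrow(d112)))
train_set <- d112[idx, ]                       # 70% training set
test_set  <- d112[-idx, ]                      # 30% independent test set

library(caret)
ctrl <- trainControl(method = "cv", number = 10)  # 10-fold cross-validation
fit  <- train(fc ~ ., data = train_set, method = "rf", trControl = ctrl)
pred <- predict(fit, newdata = test_set)       # evaluate on the held-out set
```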

The ML preprocessing steps were applied to the raw datasets before they could be utilized for the regression method training. The datasets were not normally distributed according to the Shapiro–Wilk normality test [31] results. Many normalization methods have been previously developed to normalize the dataset [32]. In this study, four different normalization methods (i.e., min-max, decimal, sigmoid, and z-score) were applied to derive the most successful normalization method for the raw dataset. Then, the K-nearest neighbor (KNN) regression method was applied to the normalized datasets. The prediction results were compared to determine the most suitable normalization method. Later, the raw datasets were normalized with the determined normalization technique.

The ML regression models were trained to predict the fc and S values. The correlation coefficient (R), root mean squared error (RMSE), and mean absolute error (MAE) metrics were employed to compare the models’ prediction performance. According to these statistical results, the most successful regression method was determined to predict the fc and S values. Afterward, the feature selection method was used to obtain the subset with fewer features, and the prediction accuracy was examined. All regression methods and computations were performed using the R programming language [33]. The prediction process is illustrated in Figure 1 in the form of a flow diagram.

Figure 1: Flow diagram of the prediction process.
3.1. Normalization Methods

Normalization is a preprocessing step in ML. The normalization methods are used when the variation intervals of the variables in a dataset differ. When the means and variances of the variables differ significantly, the variables with large means and variances dominate the impact of the other variables. This may result in the loss of important variables with low variation intervals and can also affect the success of the ML models [34, 35]. Therefore, the data for regression models are normalized with numerical data normalization methods to standardize the effect of each variable on the results. In this study, the dataset was normalized by the min-max, decimal, sigmoid, and z-score normalization techniques, and then their performances were compared.
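The four techniques can be sketched as simple R functions for a numeric vector x (a sketch; definitions of the sigmoid variant differ across the literature, and the z-score-based logistic form is assumed here):

```r
min_max <- function(x) (x - min(x)) / (max(x) - min(x))     # rescales to [0, 1]
decimal <- function(x) x / 10^ceiling(log10(max(abs(x))))   # decimal scaling
z_score <- function(x) (x - mean(x)) / sd(x)                # zero mean, unit sd
sigmoid <- function(x) 1 / (1 + exp(-z_score(x)))           # maps into (0, 1)
```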

3.2. Machine Learning Methods

The ML regression method estimates the output value using the input samples of the dataset; the samples used for this purpose are termed the training set. The purpose of the regression method is to minimize the error between the predicted and actual outputs [36]. Herein, seven different regression methods (i.e., DT, RF, support vector machine, partial least squares, artificial neural networks, bootstrap aggregation (bagging), and FL) were used to predict the concrete compressive strength and slump values. Additionally, the K-nearest neighbor method was applied to determine the suitable normalization method for the dataset. These methods are briefly described below.

Decision tree (DT) [37] is a supervised ML algorithm. It can be used for both regression and classification. The aim of the DT algorithm is to divide the dataset into smaller, meaningful pieces, where each input has its own class label (tag) or value. Different measurements are used for the DT splitting, such as Gini and information gain. Regression tree is a type of a DT and a hierarchical model for the supervised learning. Classification and regression trees (CART), ID3, and C4.5 methods are the most important learning algorithms mentioned in the literature. In this study, the CART [38] model is used for the regression.
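Since the paper states that CART is used for regression, a minimal tree fit might look as follows; the rpart package choice and the variable names are assumptions:

```r
library(rpart)  # a common CART implementation in R
tree <- rpart(fc ~ ., data = train_set, method = "anova")  # "anova" = regression tree
pred <- predict(tree, newdata = test_set)
```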

Random forest (RF) [39] is an ensemble method that combines many DTs. It can be used for both regression and classification. Each DT in the forest is created by the selection of different samples from the original dataset by the bootstrap technique. These samples are then trained using a set of attributes selected by the bagging mechanism. Subsequently, the decisions made by a large number of individual trees are subjected to voting. As such, the most voted class is presented as the class estimate of the community.
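A corresponding RF regression fit with the randomForest package of Liaw and Wiener [39] could look as follows (variable names as in the earlier sketch); note that for a continuous response the forest averages the trees' predictions rather than voting:

```r
library(randomForest)  # package of Liaw and Wiener [39]
rf   <- randomForest(fc ~ ., data = train_set, ntree = 500)  # 500 bootstrap trees
pred <- predict(rf, newdata = test_set)  # regression: average over the trees
```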

Support vector machine (SVM) was developed by Vapnik [40]. It is applied both for regression and classification. The SVM method is based on finding an optimal hyperplane that maximizes the margin between the classes; for regression, it fits a function within a tolerance margin around the data.

Partial least squares (PLS) [41] regression generalizes and combines the attributes of principal component analysis and multiple regression. The most important characteristic of the PLS method is its ability to obtain a simple model with a few components, even when the variables are highly correlated or linearly dependent.

Artificial neural networks (ANN) [42] comprise a system of many interconnected neurons joined by weighted links. The ANN architecture consists of the input, hidden, and output layers. The multilayer perceptron (MLP) is a fully connected, feedforward type of network and the most commonly used network architecture. The outputs of the neurons in the input layer are scaled by the associated connection weights and fed forward through the hidden layers to the output layer, where an activation function is applied to the weighted sum of the incoming signals.

Bootstrap aggregation (bagging) was introduced by Breiman [43] and can be utilized for both regression and classification. Bagging is performed by aggregating the resulting prediction rules using the bootstrap samples from the training sample.
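For the remaining benchmark models, plausible one-line fits in R are sketched below; the package choices (e1071, pls, nnet, and ipred) are assumptions, as the paper does not name its implementations:

```r
library(e1071); library(pls); library(nnet); library(ipred)
svm_lin  <- svm(fc ~ ., data = train_set, kernel = "linear")          # SVMLin
svm_poly <- svm(fc ~ ., data = train_set, kernel = "polynomial")      # SVMPoly
pls_fit  <- plsr(fc ~ ., data = train_set, validation = "CV")         # PLS regression
mlp_fit  <- nnet(fc ~ ., data = train_set, size = 5, linout = TRUE)   # MLP, one hidden layer
bag_fit  <- bagging(fc ~ ., data = train_set)                         # bootstrap aggregation
```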

Fuzzy logic (FL) is an ML method introduced by Zadeh [44]. FL is a mathematically based method used to analyze systems in a manner similar to how people do; it was developed because many problems cannot be expressed by exact mathematical definitions. In the classical approach, an element is either a member or a nonmember of a set, making the result equal to zero or one. In FL, however, the situation is expressed by membership degrees, which indicate the element's degree of belonging to the set. The membership function maps each element into the continuous interval from zero to one; in other words, the membership degree of an element can take any value between zero and one. A typical fuzzy system consists of a rule base, membership functions, and an inference procedure. In this study, Wang and Mendel's technique (WM) was employed to generate the fuzzy rules.
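A hedged sketch of WM rule generation with the frbs R package follows; the package choice is an assumption (the paper does not name its FL implementation), and train_mat/test_mat are assumed matrices whose last column holds the response:

```r
library(frbs)  # assumed package; supports Wang-Mendel rule generation ("WM")
rng <- apply(train_mat, 2, range)  # min/max of each column, required by frbs
fl  <- frbs.learn(train_mat, range.data = rng, method.type = "WM",
                  control = list(num.labels = 5, type.mf = "TRIANGLE"))
pred <- predict(fl, test_mat[, -ncol(test_mat)])  # predict from inputs only
```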

K-nearest neighbor (KNN) [45] is an instance-based algorithm and can be applied for both regression and classification. The KNN method searches for the k data points closest to the test object and uses the values of these neighbors to predict the new object. For this, the distance between each instance in the training dataset and the test instance is measured. Herein, k = 3, 5, and 7 were chosen, and the Euclidean distance was deployed as the distance measure. The “knn.reg” function was used from the “FNN” package [46]. The detailed information regarding the ML regression methods applied in this study is presented in Table 3.
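Since the paper names the “knn.reg” function of the “FNN” package [46], the comparison over k = 3, 5, and 7 can be sketched as follows (the x_train, x_test, y_train, and y_test objects are assumed to hold the normalized inputs and the responses):

```r
library(FNN)  # provides knn.reg [46]
for (k in c(3, 5, 7)) {
  fit  <- knn.reg(train = x_train, test = x_test, y = y_train, k = k)
  rmse <- sqrt(mean((y_test - fit$pred)^2))  # Euclidean distance is the default
  cat("k =", k, "RMSE =", rmse, "\n")
}
```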

Table 3: Hyperparameters of machine learning regression models.
3.3. Evaluation Metrics

To evaluate the predicted values of the regression methods, the actual and predicted values were compared. In this study, the R, RMSE, and MAE metrics were used to evaluate the prediction accuracy [47]. The model parameters were optimized for the highest R, lowest RMSE, and lowest MAE. All of them were calculated according to the following equations:

$$R = \frac{\sum_{i=1}^{N}\left(a_i - \bar{a}\right)\left(p_i - \bar{p}\right)}{\sqrt{\sum_{i=1}^{N}\left(a_i - \bar{a}\right)^2 \sum_{i=1}^{N}\left(p_i - \bar{p}\right)^2}},$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(a_i - p_i\right)^2},$$

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|a_i - p_i\right|.$$

Here, N is the number of data points, $a_i$ and $p_i$ are the actual and predicted values, and $\bar{a}$ and $\bar{p}$ are their means.
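For reference, the three metrics can be written as one-line R functions (a sketch; a and p denote the vectors of actual and predicted values):

```r
r_coef <- function(a, p) cor(a, p)              # correlation coefficient R
rmse   <- function(a, p) sqrt(mean((a - p)^2))  # root mean squared error
mae    <- function(a, p) mean(abs(a - p))       # mean absolute error
```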

3.4. Feature Selection

Feature selection (the reduction of irrelevant variables) is a preprocessing step in ML that selects the best subset from the original dataset by evaluating the features according to the algorithm used [48]. The Relief algorithm was developed by Kira and Rendell [49]; it weights the features according to the relationship between the effect variables. Although this method was successfully applied to two-class datasets, it did not work for datasets with multiple classes. To solve this problem, in 1994, Kononenko developed the ReliefF algorithm, which works for multiclass datasets [50]. The algorithm determines the weights of the continuous and discrete attributes based on the distance between instances.
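A sketch of ReliefF-style attribute weighting in R follows; the CORElearn package and the regressional RReliefF estimator are assumptions, as the paper does not name its implementation:

```r
library(CORElearn)  # assumed package; offers RReliefF estimators for regression
w <- attrEval(fc ~ ., data = train_set, estimator = "RReliefFexpRank")
sort(w, decreasing = TRUE)  # rank the attributes; the top four are retained
```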

4. Results and Discussion

The cross-correlations within the D112 and D224 datasets are depicted in Figure 2. The correlation coefficient provides information on the strength and direction of the linear relationship between two variables. The Pearson correlation is used when the dataset has a normal distribution, whereas the Spearman correlation is applied when the data are not normally distributed.

Figure 2: Correlation matrix of D112 (a) and D224 (b) datasets.
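For illustration, such a correlation matrix can be computed in R; the data frame name d112 and the restriction to numeric columns are assumptions:

```r
# Spearman correlation, appropriate here since the data are not normal
cmat <- cor(Filter(is.numeric, d112), method = "spearman")
round(cmat, 2)  # e.g., inspect the fc row for the strongest predictors
```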

According to the correlation results of the D112 dataset, the response variable fc is highly correlated with the effect variable fcc (0.88). Moreover, the highest correlation is observed between the response variable S and the effect variable TA (−0.57 for basalt and 0.57 for limestone). According to the correlation results of the D224 dataset, the response variable fc is highly correlated with the effect variable fcc (0.91). Besides, the highest correlation is obtained between the response variable S and the effect variable TA (−0.55 for basalt and 0.55 for limestone).

Before the data analysis begins, the data must be checked for normality. In this study, the normality test was performed using the Shapiro–Wilk normality test under the assumption of Gaussian errors [51]. In the Shapiro–Wilk normality test, when the probability is >0.05, the data are normally distributed, whereas when the probability is <0.05, the data deviate from a normal distribution. A small W value in the Shapiro–Wilk normality test indicates that the sample is not normally distributed. The Shapiro–Wilk normality test results for the D112 and D224 datasets are presented in Table 4.

Table 4: Shapiro–Wilk normality test results for datasets.
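A per-variable test of this kind can be run with base R's shapiro.test(); a minimal sketch (the data frame name is assumed) follows:

```r
# Shapiro-Wilk test for each numeric variable; p < 0.05 rejects normality
res <- sapply(Filter(is.numeric, d112), function(v) {
  s <- shapiro.test(v)
  c(W = unname(s$statistic), p = s$p.value)
})
t(res)  # one (W, p) row per variable, as summarized in Table 4
```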

According to the Shapiro–Wilk normality test results (Table 4), the D112 and D224 datasets are not normally distributed, as the probabilities of the variables are <0.05 for both datasets. Furthermore, the box-plot graphs (Figure 3) confirm that the datasets are not normally distributed. Box-plot graphs allow both the value ranges and the distributions of the numeric variables to be examined.

Figure 3: Box-plot graphs for the raw and decimal normalized dataset.

In this study, four different normalization techniques, namely, min-max, decimal, sigmoid, and z-score, were applied to the four datasets to determine the most successful method. After the datasets were normalized by these methods, their success rates were compared using the KNN regression method, which was chosen because it is distance-based and rapid to apply. The k-values were set to 3, 5, and 7. The results are provided in Table 5. According to the KNN regression results, the fc (D112, D224) and S (D112, D224) values were best normalized by the decimal scaling and min-max normalization methods, respectively.

Table 5: The results of the normalization methods.

The results of the RF, SVM linear (SVMLin), SVM polynomial (SVMPoly), PLS, bagging, DT, MLP, and FL models for the prediction of the compressive strength and the slump value are presented in Table 6.

Table 6: Metrics results of the different regression methods.

The reason for the selection of these regression methods was their successful employment as prediction algorithms in the published literature. As mentioned earlier, the datasets were randomly divided into training (70%) and independent test sets (30%). Herein, the training and independent test sets consisted of 40 and 18 instances for the D112 dataset and 39 and 17 instances for the D224 dataset, respectively. Prediction results for the models were obtained from the 10-fold cross-validation process. The performance of these regression methods was evaluated according to the R, RMSE, and MAE statistical criteria. R was employed to evaluate the goodness of fit between the predicted and actual values. Together, the R, RMSE, and MAE results were sufficient to reveal any significant differences between the predicted and actual values.

According to the statistical results of the regression methods (Table 6), the FL regression model delivered the highest prediction accuracy for the response variables fc and S for both maximum aggregate sizes (D112 and D224). The FL model achieved the best results on all performance criteria compared with the seven benchmark models.

The predictions obtained from the FL regression model are compared with the actual results in Figures 4 and 5. The predicted compressive strength and slump values are similar to the actual values.

Figure 4: Comparison between actual and predicted values of fc and S values using the FL model for the D112 dataset.
Figure 5: Comparison between actual and predicted values of fc and S values using the FL model for the D224 dataset.

To reduce the number of the effect variables, the ReliefF feature selection method was used to determine the high-level effect variables. As a result, the fcc, kk, C, and W/C effect variables were selected, as they had a high-level effect on fc and S for both maximum aggregate sizes. The R, RMSE, and MAE results obtained after applying the FL model to all the effect variables and to the reduced effect variables are presented in Table 7. The table indicates that there is no significant change in the R, RMSE, and MAE results when the number of features is reduced from seven to four. Therefore, the FL model with fewer features can still make successful predictions.

Table 7: The results of the FL regression for the original dataset and selected subset.

The effect levels of the simultaneously controllable effect variables on the response variables exhibit some variations [28–30]. Considering the selected variation intervals, the fcc, kk, C, and W/C variables had a significant effect level on the response variables for the maximum aggregate sizes. Cement strength (fcc), cement dosage (C), and water/cement (W/C) ratio tend to have a significant effect on the compressive strength. Furthermore, the fineness modulus (kk), which expresses the fineness and distribution of the mixture aggregate, is one of the essential variables that affects the concrete compactness. Moreover, the concrete compactness directly affects the compressive strength.

The workability of concrete is directly influenced by the cement properties (e.g., cement fineness), aggregate properties (e.g., roughness of the aggregate surface), and amount of mixing water. Notably, the chemical additive variable would ordinarily be expected to affect workability; however, its variation interval in this study was negligible, so it did not show a significant effect on the workability of concrete, whereas the variation intervals of the other effect variables were considerable. Therefore, the prediction accuracy does not decrease when omitting the FA, CA, and TA variables, which do not have a significant effect on the response variables within the selected variation intervals.

5. Conclusions

The goals of this study were (i) to determine the most successful normalization technique for the datasets, (ii) to obtain the prime regression method to predict the fc and S values, (iii) to choose the best subset using the ReliefF feature selection method, and (iv) to compare the regression results for the original and selected subsets.

To determine with precision the effect levels of the effect variables on the response variables (i.e., fc and S), the data were first checked for normality. If the data are not normally distributed, the most appropriate normalization method must be determined. In this study, the Shapiro–Wilk normality test results demonstrated that the datasets were not normally distributed. The most successful techniques for predicting the fc (D112, D224) and S (D112, D224) values were the decimal scaling and min-max normalization methods, respectively. Because the variation ranges of the effect variables influencing the concrete properties differ substantially, it was necessary to preprocess the raw data before estimating the concrete properties.

Herein, seven different ML methods, namely, DT, RF, SVM, PLS, ANN, bagging, and FL, were compared to predict the fc and S values. According to the R, RMSE, and MAE statistical results, FL is the best regression method for both maximum aggregate sizes. Generally, the similarity between the actual and predicted values is high for the compressive strength (Figures 4 and 5). The differences between the actual and predicted slump values indicate that the slump values are more sensitive to the experimental error, the simultaneously uncontrollable effect variables, and the variation intervals of the effect variables. The flexible computational structure of the FL produces approximate rather than exact results. In particular, the uncertainties in problem-solving and decision-making processes can be handled by the application of the FL. Thus, complicated problems can be solved, making the FL more functional than the other ML methods.

In experimental designs where the number of simultaneously controllable effect variables is high, it is crucial to reduce the number of experiments to save costs and time. Therefore, predicted values close to the actual values need to be obtained with the minimum number of experiments. In this study, the seven simultaneously controllable effect variables were reduced to four (i.e., fcc, kk, C, and W/C) using the ReliefF feature selection method. The metric results obtained by the FL regression were similar for four and seven effect variables (Table 7). Therefore, experimental designs with fewer effect variables are sufficient for estimating the concrete properties.

Data Availability

Previously reported “compressive strength and slump of concrete” data were used to support this study and are available in the author’s PhD thesis, report, and article. These prior studies (and datasets) are cited at relevant places within the text as references [28–30].

Conflicts of Interest

The author declares that there are no conflicts of interest.

References

  1. D. C. Montgomery, Design and Analysis of Experiments, John Wiley & Sons, Hoboken, NJ, USA, 2017.
  2. M. Awad and R. Khanna, Efficient Learning Machines: Theories, Concepts, and Applications for Engineers and System Designers, Apress, New York, NY, USA, 2015.
  3. M. Hofmann and R. Klinkenberg, RapidMiner: Data Mining Use Cases and Business Analytics Applications, CRC Press, Boca Raton, FL, USA, 2013.
  4. B. Boukhatem, S. Kenai, A. Tagnit-Hamou, and M. Ghrici, “Application of new information technology on concrete: an overview,” Journal of Civil Engineering and Management, vol. 17, no. 2, pp. 248–258, 2011.
  5. P. Cihan, E. Gökçe, and O. Kalıpsız, “A review of machine learning applications in veterinary field,” Kafkas Univ Vet Fak Derg, vol. 23, no. 4, pp. 673–680, 2017.
  6. E. E. Ozbas, D. Aksu, A. Ongen, M. A. Aydin, and H. K. Ozcan, “Hydrogen production via biomass gasification, and modeling by supervised machine learning algorithms,” International Journal of Hydrogen Energy, vol. 44, no. 32, pp. 17260–17268, 2019.
  7. H.-G. Ni and J.-Z. Wang, “Prediction of compressive strength of concrete by neural networks,” Cement and Concrete Research, vol. 30, no. 8, pp. 1245–1250, 2000.
  8. S. Akkurt, G. Tayfur, and S. Can, “Fuzzy logic model for the prediction of cement compressive strength,” Cement and Concrete Research, vol. 34, no. 8, pp. 1429–1433, 2004.
  9. A. Öztaş, M. Pala, E. A. Özbay, E. Kanca, N. Caglar, and M. A. Bhatti, “Predicting the compressive strength and slump of high strength concrete using neural network,” Construction and Building Materials, vol. 20, no. 9, pp. 769–775, 2006.
  10. M. Pala, E. Özbay, A. Öztaş, and M. I. Yuce, “Appraisal of long-term effects of fly ash and silica fume on compressive strength of concrete by neural networks,” Construction and Building Materials, vol. 21, no. 2, pp. 384–394, 2007.
  11. M. Ozturan, B. Kutlu, and T. Ozturan, “Comparison of concrete strength prediction techniques with artificial neural network approach,” Building Research Journal, vol. 56, no. 1, pp. 23–36, 2008.
  12. M. M. Alshihri, A. M. Azmy, and M. S. El-Bisy, “Neural networks for predicting compressive strength of structural light weight concrete,” Construction and Building Materials, vol. 23, no. 6, pp. 2214–2219, 2009.
  13. M. Sarıdemir, “Prediction of compressive strength of concretes containing metakaolin and silica fume by artificial neural networks,” Advances in Engineering Software, vol. 40, no. 5, pp. 350–355, 2009.
  14. A. M. Diab, H. E. Elyamany, A. E. M. Abd Elmoaty, and A. H. Shalan, “Prediction of concrete compressive strength due to long term sulfate attack using neural network,” Alexandria Engineering Journal, vol. 53, no. 3, pp. 627–642, 2014.
  15. P. Chopra, R. K. Sharma, and M. Kumar, “Artificial neural networks for the prediction of compressive strength of concrete,” International Journal of Applied Science & Engineering, vol. 13, pp. 187–204, 2015.
  16. P. Chopra, R. K. Sharma, and M. Kumar, “Prediction of compressive strength of concrete using artificial neural network and genetic programming,” Advances in Materials Science and Engineering, vol. 2016, Article ID 7648467, 10 pages, 2016.
  17. P. Chopra, R. K. Sharma, M. Kumar, and T. Chopra, “Comparison of machine learning techniques for the prediction of compressive strength of concrete,” Advances in Civil Engineering, vol. 2018, Article ID 5481705, 9 pages, 2018.
  18. S. Wu, B. Li, J. Yang, and S. Shukla, “Predictive modeling of high-performance concrete with regression analysis,” in Proceedings of the 2010 IEEE International Conference on Industrial Engineering and Engineering Management, pp. 1009–1013, Macao, China, December 2010.
  19. M. F. M. Zain and S. M. Abd, “Multiple regression model for compressive strength prediction of high performance concrete,” Journal of Applied Sciences, vol. 9, no. 1, pp. 155–160, 2009.
  20. J. Namyong, Y. Sangchun, and C. Hongbum, “Prediction of compressive strength of in-situ concrete based on mixture proportions,” Journal of Asian Architecture and Building Engineering, vol. 3, no. 1, pp. 9–16, 2004.
  21. İ. B. Topçu and M. Sarıdemir, “Prediction of compressive strength of concrete containing fly ash using artificial neural networks and fuzzy logic,” Computational Materials Science, vol. 41, no. 3, pp. 305–311, 2008.
  22. C. Başyigit, I. Akkurt, S. Kilincarslan, and A. Beycioglu, “Prediction of compressive strength of heavyweight concrete by ANN and FL models,” Neural Computing and Applications, vol. 19, no. 4, pp. 507–513, 2010.
  23. M. Wankhade and A. Kambekar, “Prediction of compressive strength of concrete using artificial neural network,” International Journal of Scientific Research and Reviews, vol. 2, pp. 11–26, 2013.
  24. P. Chopra, R. K. Sharma, and M. Kumar, “Predicting compressive strength of concrete for varying workability using regression models,” International Journal of Engineering & Applied Sciences, vol. 6, no. 4, p. 10, 2014.
  25. M. Nikoo, F. T. Moghadam, and Ł. Sadowski, “Prediction of concrete compressive strength by evolutionary artificial neural networks,” Advances in Materials Science and Engineering, vol. 2015, Article ID 849126, 8 pages, 2015.
  26. A. Khashman and P. Akpinar, “Non-destructive prediction of concrete compressive strength using neural networks,” Procedia Computer Science, vol. 108, pp. 2358–2362, 2017.
  27. F. Khademi, M. Akbari, S. M. Jamal, and M. Nikoo, “Multiple linear regression, artificial neural network, and fuzzy logic prediction of 28 days compressive strength of concrete,” Frontiers of Structural and Civil Engineering, vol. 11, no. 1, pp. 90–99, 2017.
  28. A. Güner, Normal Betonun Alışılagelmiş Uygulama Özelliklerinin Kontrol Edilebilir Değişkenlere Göre Tepki Yüzeylerinin Belirlenmesi, TUBITAK Project No. 109M748, Türkiye Bilimsel ve Teknolojik Araştırma Kurumu, Ankara, Turkey, 2011.
  29. M. T. Cihan, “Tepki yüzeyi yöntem bilgisinin beton uygulamasında kullanılabilirliğinin geliştirilmesi,” Ph.D. thesis, Fen Bilimleri Enstitüsü, Yıldız Teknik Üniversitesi, Istanbul, Turkey, 2012.
  30. M. T. Cihan, A. Güner, and N. Yüzer, “Response surfaces for compressive strength of concrete,” Construction and Building Materials, vol. 40, pp. 763–774, 2013.
  31. P. Royston, “Approximating the Shapiro-Wilk W-test for non-normality,” Statistics and Computing, vol. 2, no. 3, pp. 117–119, 1992.
  32. A. F. Dutka and H. H. Hansen, Fundamentals of Data Normalization, Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1991.
  33. R. Ihaka and R. Gentleman, “R: a language for data analysis and graphics,” Journal of Computational and Graphical Statistics, vol. 5, no. 3, pp. 299–314, 1996.
  34. L. A. Shalabi, Z. Shaaban, and B. Kasasbeh, “Data mining: a preprocessing engine,” Journal of Computer Science, vol. 2, no. 9, pp. 735–739, 2006.
  35. P. Cihan, O. Kalipsiz, and E. Gökçe, “Hayvan hastaliği teşhisinde normalizasyon tekniklerinin yapay sinir aği ve özellik seçim performansina etkisi,” Electronic Turkish Studies, vol. 12, 2017.
  36. E. Alpaydin, Introduction to Machine Learning, MIT Press, Cambridge, MA, USA, 2009.
  37. J. R. Quinlan, “Induction of decision trees,” Machine Learning, vol. 1, no. 1, pp. 81–106, 1986.
  38. L. Breiman, J. Friedman, R. Olshen, and C. Stone, Classification and Regression Trees, vol. 37, Wadsworth International Group, Belmont, CA, USA, 1984.
  39. A. Liaw and M. Wiener, “Classification and regression by randomForest,” R News, vol. 2, pp. 18–22, 2002.
  40. V. Vapnik, The Nature of Statistical Learning Theory, Springer Science & Business Media, Berlin, Germany, 2013.
  41. S. Wold, A. Ruhe, H. Wold, and W. J. Dunn III, “The collinearity problem in linear regression. The partial least squares (PLS) approach to generalized inverses,” SIAM Journal on Scientific and Statistical Computing, vol. 5, no. 3, pp. 735–743, 1984.
  42. J. M. Zurada, Introduction to Artificial Neural Systems, West Publishing Company, St. Paul, MN, USA, 1992.
  43. L. Breiman, “Bagging predictors,” Machine Learning, vol. 24, no. 2, pp. 123–140, 1996.
  44. L. A. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, no. 3, pp. 338–353, 1965.
  45. O. Kramer, Dimensionality Reduction with Unsupervised Nearest Neighbors, Springer, Berlin, Germany, 2013.
  46. A. Beygelzimer, S. Kakadet, J. Langford, S. Arya, D. Mount, and S. Li, Package “FNN”, 2015.
  47. T. Chai and R. R. Draxler, “Root mean square error (RMSE) or mean absolute error (MAE)?—arguments against avoiding RMSE in the literature,” Geoscientific Model Development, vol. 7, no. 3, pp. 1247–1250, 2014.
  48. E. Alpaydin, Machine Learning: The New AI, MIT Press, Cambridge, MA, USA, 2016.
  49. K. Kira and L. A. Rendell, “A practical approach to feature selection,” in Machine Learning Proceedings 1992, pp. 249–256, Elsevier, Amsterdam, Netherlands, 1992.
  50. I. Kononenko, “Estimating attributes: analysis and extensions of Relief,” in European Conference on Machine Learning, pp. 171–182, Springer, Berlin, Germany, 1994.
  51. J. P. Royston, “An extension of Shapiro and Wilk’s W test for normality to large samples,” Applied Statistics, vol. 31, no. 2, pp. 115–124, 1982.