Abstract

Machine learning plays an important role in computational intelligence and has been widely used in many engineering fields. Surface voids or bugholes frequently appearing on the concrete surface after the casting process make the corresponding manual inspection time consuming, costly, labor intensive, and inconsistent. In order to achieve a better inspection of the concrete surface, automatic classification of concrete bugholes is needed. In this paper, a variable selection strategy is proposed to pursue feature interpretability, together with an ensemble classifier whose base classifiers are automatically assigned in order to improve the accuracy of bughole classification. A texture feature derived from the Gabor filter and gray-level run lengths is extracted from concrete surface images. Interpretable variables, which are the components of the feature, are selected according to a presented cumulative voting strategy. An ensemble classifier with its base classifiers automatically assigned is provided to detect whether a surface void exists in an image or not. Experimental results on 1000 image samples indicate the effectiveness of our method, with comparable prediction accuracy and model explicability.

1. Introduction

Machine learning plays an important role in computational intelligence. Many learning classifiers (e.g., support vector machines, fuzzy k-nearest neighbors, and neural networks with fuzzy systems) and intelligent algorithms have been used in computer-aided systems, including adaptive force control of machining processes [1], fuzzy control of energy equipment [2], ultraprecision cascade control of mechanical devices [3], and even object detection (e.g., obstacle classification [4]). Object classification in civil engineering is also a case in point.

Surface voids or bugholes, which derive from the migration of entrapped air bubbles to the interface between fresh concrete and formwork [5], are considered to be one of the most serious defects encountered on the concrete surface [6]. Visually, they correspond to scattered small pits and craters on the concrete surface after the process of formwork removal [7]. The existence of surface voids leads to many flaws, which are listed as follows. A certain number of bugholes on the concrete surface may leave an unaesthetic impression [8]. Reinforcements inside may be exposed and corroded due to the surface voids [5]. Excessive voids may reduce the adhesion properties of fiber-reinforced plastic materials applied to the concrete surface [9]. Premature degradation of reinforced concrete structures may occur due to salt accumulation in surface voids [10].

Traditional methods for classification of surface voids are based on manual inspection of the concrete surface [11–13] or on manual comparison between the concrete surface and a set of standard surface photographs of reference samples [6, 8]. Such methods are not only time consuming and costly but also labor intensive and inconsistent. Hence, automatic methods for recognition of surface voids, which are comparatively less time consuming, less expensive, technology intensive, and objective, have become the mainstream approach for assessing the inspection result of the concrete surface.

In order to achieve a better evaluation of the concrete surface, many image processing technologies and machine learning methods have been utilized. For classification of concrete surface voids, there are mainly three kinds of automatic classification approaches. The first kind refers to image thresholding or filtering for segmentation of concrete surface voids. Liu and Yang [14] used a pixel-level gray threshold segmentation method for classification of surface voids. Zhu and Brilakis [15] created a spatial spot filter convolved with the concrete surface image to detect surface voids. da Silva and Štemberk [16] employed a Wiener filter to reduce noise and a morphological filter to enhance image contrast, which provided optimized segmentation results of concrete surface voids. Although these methods can obtain the areas or edges of concrete surface voids rapidly, the accuracy of the classification results is still constrained by the complexity of the heterogeneous concrete surface background and varying lighting conditions.

Therefore, prevailing research studies turn to the extraction of texture features from images of the concrete surface, considering the insensitivity of texture to illumination. In addition, different classification models (namely, classifiers) for discriminating whether there are bugholes on the concrete surface are usually presented. Typically, Hoang and Nguyen [17] employed the Gabor filter and gray-level run lengths to generate a 108-dimensional texture feature derived from a concrete surface. In addition, a support vector machine (SVM) with its parameters optimized by an adaptive differential evolution with linear population size reduction was utilized to classify images of concrete surfaces into those with surface voids and those without. However, complex hybrids of different texture features and classifiers still need to be tried in order to obtain better classification accuracies for particular sets of concrete surface images.

If the only goal is to raise the accuracy of bughole classification, a deep learning-based convolutional neural network (CNN) [18] can be considered. Yao et al. [19] extended the CNN by designing inception modules to detect bugholes on concrete surfaces. In addition, they [20] presented an instance-level method for classification of concrete surface bugholes based on Mask R-CNN. These research studies demonstrate a great capability of CNNs in identifying concrete surface voids accurately. However, training a CNN often requires a large number of training images and a large computational overhead. Besides, CNN models commonly lack interpretability of their classification results. To strike a compromise between interpretability and high accuracy of bughole classification, traditional methods that perform feature selection before classification are worth re-examining.

In this paper, we propose a variable selection method for automatic classification of surface voids from concrete images. Variables, i.e., the components of the 108-dimensional texture feature [17], are selected using a proposed cumulative voting strategy. After that, ensemble classification is performed on the selected variables to distinguish images of concrete surfaces with bugholes from those without surface voids. The selected variables support better interpretability, while the ensemble classification provides high classification accuracies. The remainder of this paper is organized as follows: in Section 2, we describe the related texture feature and present the variable selection strategy; in Section 3, the data used are described, and the corresponding experimental results are shown and explained; in Section 4, a discussion is provided; and the conclusion and prospects are given in Section 5.

2. Methods

2.1. Related Texture Feature

In order to support the surface void classification process, texture features, which are insensitive to illumination, are commonly considered. Following Hoang and Nguyen's research work [17], the Gabor filter and gray-level run-length methods are employed for feature extraction.

Commonly, surface voids are regarded as abnormal regions of a concrete surface with otherwise regular texture. A Gabor filter is considered to be an effective approach for texture discrimination [21]. The response of the symmetric Gabor filter is expressed as

$$g(x, y) = \exp\left\{-\frac{1}{2}\left[\frac{x^{2}}{\sigma_x^{2}} + \frac{y^{2}}{\sigma_y^{2}}\right]\right\}\cos\left(2\pi u_0 x\right), \qquad (1)$$

where $u_0$ represents the frequency at which the Gabor filter responds most strongly along the x-axis, and $\sigma_x$ and $\sigma_y$ denote the spatial scaling coefficients along the x- and y-axis, respectively. The corresponding frequency transformation is

$$G(u, v) = A\left(\exp\left\{-\frac{1}{2}\left[\frac{(u-u_0)^{2}}{\sigma_u^{2}} + \frac{v^{2}}{\sigma_v^{2}}\right]\right\} + \exp\left\{-\frac{1}{2}\left[\frac{(u+u_0)^{2}}{\sigma_u^{2}} + \frac{v^{2}}{\sigma_v^{2}}\right]\right\}\right), \qquad (2)$$

where $\sigma_u$, $\sigma_v$, and $A$ are expressed as $1/(2\pi\sigma_x)$, $1/(2\pi\sigma_y)$, and $2\pi\sigma_x\sigma_y$, respectively. It has been stated that the tuning parameters of the Gabor filter, including the orientation angles and the radial frequencies, must be specified in order to better recognize texture [22]. As suggested in the previous work of Hoang and Nguyen [17], the orientation angles 0°, 45°, 90°, and 135° can be used. As to the radial frequency $u_0$, it can be set to $\sqrt{2}, 2\sqrt{2}, 4\sqrt{2}, \ldots, (N_c/4)\sqrt{2}$ cycles per image width, where $N_c$ represents the number of pixels along the image width and is a power of 2.
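For illustration, a minimal Python sketch of an even-symmetric Gabor kernel following equation (1) is given below. This is not the authors' implementation; the kernel size, frequency, and scaling values are illustrative assumptions, and convolution is done with scipy for brevity.

```python
# A minimal sketch of the even-symmetric Gabor kernel of equation (1);
# parameter values below are assumptions, not the paper's settings.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, u0, sigma_x, sigma_y, theta_deg):
    """Real-valued (even-symmetric) Gabor kernel rotated by theta_deg."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    theta = np.deg2rad(theta_deg)
    # Rotate coordinates so the cosine is modulated along the chosen orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
    return envelope * np.cos(2.0 * np.pi * u0 * xr)

# Example: filter a patch with the four orientations at one (assumed) frequency.
image = np.random.rand(64, 64)            # stand-in for a concrete surface patch
responses = [fftconvolve(image, gabor_kernel(31, 0.125, 4.0, 4.0, a), mode="same")
             for a in (0, 45, 90, 135)]
```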

Gray-level run length refers to a pattern of gray-intensity pixels in a particular direction from a reference point [23]. Given a certain direction in an image (e.g., 0°, 45°, 90°, and 135°), a run-length matrix $p(i, j)$ is composed of the number of runs of gray level $i$ with run length $j$. Based on the run-length matrix, Hoang and Nguyen [17] employed a variety of statistics to describe texture in the image. In turn, these statistics include short-run emphasis (SRE), long-run emphasis (LRE), gray-level nonuniformity (GLN), run-length nonuniformity (RLN), run percentage (RP), low gray-level run emphasis (LGRE), high gray-level run emphasis (HGRE), short-run low gray-level emphasis (SRLGE), short-run high gray-level emphasis (SRHGE), long-run low gray-level emphasis (LRLGE), and long-run high gray-level emphasis (LRHGE), which are listed as follows:

$$\begin{aligned}
\mathrm{SRE} &= \frac{1}{n_r}\sum_{i=1}^{M}\sum_{j=1}^{K}\frac{p(i,j)}{j^{2}}, &
\mathrm{LRE} &= \frac{1}{n_r}\sum_{i=1}^{M}\sum_{j=1}^{K} p(i,j)\,j^{2}, \\
\mathrm{GLN} &= \frac{1}{n_r}\sum_{i=1}^{M}\Big(\sum_{j=1}^{K} p(i,j)\Big)^{2}, &
\mathrm{RLN} &= \frac{1}{n_r}\sum_{j=1}^{K}\Big(\sum_{i=1}^{M} p(i,j)\Big)^{2}, \\
\mathrm{RP} &= \frac{n_r}{N_p}, &
\mathrm{LGRE} &= \frac{1}{n_r}\sum_{i=1}^{M}\sum_{j=1}^{K}\frac{p(i,j)}{i^{2}}, \\
\mathrm{HGRE} &= \frac{1}{n_r}\sum_{i=1}^{M}\sum_{j=1}^{K} p(i,j)\,i^{2}, &
\mathrm{SRLGE} &= \frac{1}{n_r}\sum_{i=1}^{M}\sum_{j=1}^{K}\frac{p(i,j)}{i^{2} j^{2}}, \\
\mathrm{SRHGE} &= \frac{1}{n_r}\sum_{i=1}^{M}\sum_{j=1}^{K}\frac{p(i,j)\,i^{2}}{j^{2}}, &
\mathrm{LRLGE} &= \frac{1}{n_r}\sum_{i=1}^{M}\sum_{j=1}^{K}\frac{p(i,j)\,j^{2}}{i^{2}}, \\
\mathrm{LRHGE} &= \frac{1}{n_r}\sum_{i=1}^{M}\sum_{j=1}^{K} p(i,j)\,i^{2} j^{2}, &&
\end{aligned} \qquad (3)$$

where $M$ and $N_p$ denote the number of gray levels and pixels, respectively, and $n_r$ and $K$ represent the total number of runs and the maximum run length, respectively.
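The following is a minimal sketch, for illustration only, of how a horizontal (0°) run-length matrix and two of the statistics in equation (3) could be computed; the quantization to 8 gray levels and the helper name are assumptions.

```python
# A minimal sketch of a 0-degree gray-level run-length matrix and two statistics
# from equation (3); gray-level quantization below is an assumption.
import numpy as np

def run_length_matrix_0deg(img, levels):
    """P[i, j-1] counts runs of gray level i with run length j along image rows."""
    h, w = img.shape
    P = np.zeros((levels, w), dtype=np.int64)
    for row in img:
        start = 0
        for col in range(1, w + 1):
            if col == w or row[col] != row[start]:
                P[row[start], col - start - 1] += 1   # record the finished run
                start = col
    return P

img = (np.random.rand(32, 32) * 8).astype(int)        # quantized to 8 gray levels
P = run_length_matrix_0deg(img, levels=8)
n_r = P.sum()                                          # total number of runs
j = np.arange(1, P.shape[1] + 1)                       # possible run lengths
sre = (P / j**2).sum() / n_r                           # short-run emphasis (SRE)
lre = (P * j**2).sum() / n_r                           # long-run emphasis (LRE)
```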

As stated in the previous work of Hoang and Nguyen [17], four orientation angles (i.e., 0°, 45°, 90°, and 135°) and four radial frequencies are employed to obtain 16 filtered images. For each filtered image, four statistics representing the mean, standard deviation, skewness, and entropy of the Gabor filter response are calculated. That is,

$$\begin{aligned}
\mu &= \frac{1}{HW}\sum_{x=1}^{H}\sum_{y=1}^{W} g(x, y), &
\sigma &= \sqrt{\frac{1}{HW}\sum_{x=1}^{H}\sum_{y=1}^{W}\big(g(x, y)-\mu\big)^{2}}, \\
s &= \frac{1}{HW}\sum_{x=1}^{H}\sum_{y=1}^{W}\left(\frac{g(x, y)-\mu}{\sigma}\right)^{3}, &
e &= -\sum_{k} h(k)\log_{2} h(k),
\end{aligned} \qquad (4)$$

where $g(x, y)$ represents the Gabor filter response at pixel $(x, y)$, $H$ and $W$ are the height and width of the image, respectively, and $h(k)$ is the first-order histogram of the Gabor filter response. As a result, 64 components of the texture feature are derived from Gabor filtering. As to gray-level run lengths, four orientations (i.e., 0°, 45°, 90°, and 135°) of the 11 statistics shown in equation (3) form 44 components of the texture feature. Together, they constitute the 108-dimensional texture feature for classification of surface voids. The extracted feature is used in its entirety to classify images of concrete surfaces into those with surface voids and those without. In fact, only part of the feature may be effective for further classification.
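A minimal sketch of the four per-response statistics of equation (4) is given below; the histogram bin count is an assumption, since the original work does not specify it here.

```python
# A minimal sketch of the statistics in equation (4) for one Gabor response map;
# the number of histogram bins is an assumed value.
import numpy as np

def gabor_statistics(g, bins=32):
    g = np.asarray(g, dtype=float).ravel()
    mean = g.mean()
    std = g.std()
    skewness = np.mean(((g - mean) / std) ** 3)
    hist, _ = np.histogram(g, bins=bins)
    p = hist / hist.sum()                      # first-order histogram h(k)
    p = p[p > 0]                               # skip empty bins in the logarithm
    entropy = -np.sum(p * np.log2(p))
    return mean, std, skewness, entropy
```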

2.2. The Variable Selection Strategy

Here, we introduce a cumulative voting strategy for the selection of variables [24] corresponding to the 108 components of the extracted feature. The proposed variable selection strategy is divided into seven steps, each of which is framed and labeled in a dashed box, as shown in Figure 1.

At the first step, samples representing concrete surface images with or without bugholes are randomly divided. That is, 90% of the samples are randomly chosen to constitute a training group, while the remaining samples compose a testing group.

At the second step, a base classifier is automatically assigned from seven candidate classifiers: support vector machine (SVM), decision tree classifier (DTC), k-nearest neighbor (kNN), linear discriminant analysis (LDA), logistic regression (LR), multilayer perceptron (MLP), and naive Bayes (NB). In each round $t$, 70% of the training samples are randomly selected for training each base classifier in the 108-dimensional feature space. The remaining 30% of the training samples are used to calculate the classification error rate $e_t$, which is expressed as

$$e_t = \frac{FP + FN}{TP + TN + FP + FN}, \qquad (5)$$

where $TP$, $TN$, $FP$, and $FN$ denote the number of true positives, true negatives, false positives, and false negatives, respectively. Positive samples refer to images with surface voids, while negative samples correspond to images without surface voids. The base classifier which keeps the lowest classification error rate is automatically assigned in this round.
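For illustration, a minimal sketch of the per-round base-classifier assignment is given below, using default scikit-learn estimators as described in Section 3.2; the function name, the stratified inner split, and the seed handling are assumptions.

```python
# A minimal sketch of the per-round assignment of the base classifier
# (step 2); scikit-learn defaults are used, details below are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

BASE_CLASSIFIERS = {
    "SVM": SVC, "DTC": DecisionTreeClassifier, "kNN": KNeighborsClassifier,
    "LDA": LinearDiscriminantAnalysis, "LR": LogisticRegression,
    "MLP": MLPClassifier, "NB": GaussianNB,
}

def assign_base_classifier(X_train, y_train, seed):
    """Fit each candidate on a random 70% split and keep the lowest-error one."""
    X_fit, X_val, y_fit, y_val = train_test_split(
        X_train, y_train, train_size=0.7, stratify=y_train, random_state=seed)
    best_name, best_clf, best_err = None, None, np.inf
    for name, cls in BASE_CLASSIFIERS.items():
        clf = cls().fit(X_fit, y_fit)
        err = 1.0 - clf.score(X_val, y_val)   # error rate of equation (5)
        if err < best_err:
            best_name, best_clf, best_err = name, clf, err
    return best_name, best_clf, X_val, y_val, best_err
```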

Score accumulation is made at the third step. Permutations are performed after the automatic assignment of a base classifier in round $t$. For each variable $j$, a one-time permutation of its component values in the 30% held-out samples is made. The corresponding classification error rate is expressed as $e_{t,j}$. Accordingly, a score representing the importance of variable $j$ in round $t$ is denoted as $s_{t,j} = e_{t,j} - e_t$. After $T$ rounds of resampling, training, and scoring, the accumulated score of variable $j$ is expressed as $S_j = \sum_{t=1}^{T} s_{t,j}$.
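Continuing the sketch above, the score accumulation of this step could look as follows; `scores` is a length-108 accumulator, `rng` a numpy random generator, and `base_err` the round's error rate $e_t$, all hypothetical names.

```python
# A minimal sketch of step 3: permute one variable at a time on the held-out
# 30% and accumulate the error increase as that variable's importance score.
import numpy as np

def accumulate_scores(clf, X_val, y_val, base_err, scores, rng):
    """Add the per-round permutation score s_{t,j} of every variable to `scores`."""
    for j in range(X_val.shape[1]):
        X_perm = X_val.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])   # one-time permutation
        err_perm = 1.0 - clf.score(X_perm, y_val)      # e_{t,j}
        scores[j] += err_perm - base_err               # s_{t,j} = e_{t,j} - e_t
    return scores
```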

Variables are reordered at the fourth step. A 2D scatter plot is made with its x- and y-axis corresponding to the variable indices and the accumulated scores, respectively. If the accumulated scores of the variables are all relatively low, all the variables, rather than only the significant ones selected using a previously proposed clustering approach [25], are used in the following steps.

Ensemble classification is performed at the fifth step. In each candidate dimension, rounds of resampling and training are made to establish an ensemble classifier on the variables incrementally added in descending order of their accumulated scores. At each round of resampling, the base classifier with the lowest classification error rate is trained and added to the ensemble. This procedure is repeated from one dimension up to all 108 dimensions, with the variable having the next-lower accumulated score added each time.
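The following is a minimal sketch of this incremental ensemble step under assumptions: it reuses the `assign_base_classifier` sketch shown earlier, assumes binary labels 0/1, and combines the trained members by majority vote, which is one plausible aggregation rule rather than a confirmed detail of our implementation.

```python
# A minimal sketch of step 5: for the top-d variables, train one auto-assigned
# base classifier per resampling round and combine them by majority vote.
import numpy as np

def fit_ensemble(X_train, y_train, order, d, rounds):
    """order: variable indices sorted by descending accumulated score."""
    cols = order[:d]
    members = []
    for t in range(rounds):
        _, clf, *_ = assign_base_classifier(X_train[:, cols], y_train, seed=t)
        members.append(clf)
    return cols, members

def ensemble_predict(cols, members, X):
    votes = np.stack([clf.predict(X[:, cols]) for clf in members])
    # Majority vote over the ensemble members (binary labels 0/1 assumed).
    return (votes.mean(axis=0) >= 0.5).astype(int)
```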

Variable selection is made at the sixth step. In each dimension, the established ensemble classifier is applied to the testing samples. The accuracy ($ACC$) is calculated and expressed as

$$ACC = \frac{TP + TN}{TP + TN + FP + FN}. \qquad (6)$$

Accordingly, a line chart is obtained with its x-axis corresponding to the variable indices in descending order and its y-axis to the corresponding $ACC$ in different dimensions. A dimension threshold can therefore be determined at the point where $ACC$ stays almost the same as the dimension keeps increasing. In this way, the variables that are helpful for recognizing surface voids in concrete surface images are selected out of the 108-dimensional feature.

At the seventh step, evaluation metrics are computed to estimate the effectiveness of the selected variables. In addition to $ACC$, we also choose three widely used quantitative measurements. That is,

$$Precision = \frac{TP}{TP + FP}, \qquad Recall = \frac{TP}{TP + FN}, \qquad F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}. \qquad (7)$$
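For completeness, a short sketch of computing the metrics in equations (6) and (7) with scikit-learn's standard implementations is given below; the function name is illustrative.

```python
# A minimal sketch of the evaluation metrics of equations (6) and (7),
# using scikit-learn's built-in implementations.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(y_true, y_pred):
    return {
        "accuracy": accuracy_score(y_true, y_pred),     # (TP + TN) / all
        "precision": precision_score(y_true, y_pred),   # TP / (TP + FP)
        "recall": recall_score(y_true, y_pred),         # TP / (TP + FN)
        "f1": f1_score(y_true, y_pred),                 # harmonic mean of the two
    }
```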

3. Results

3.1. Image Samples of Concrete Surface

We use a set of 1000 image samples capturing the texture of concrete structures provided by Hoang and Nguyen [17] for comparison, which is downloaded from the GitHub repository (https://github.com/NhatDucHoang/L-SHADE-SVM-SVD). In total, these 1000 images include 500 positives having surface voids and 500 negatives without surface voids, all of the same image size (see Figure 2). A 108-dimensional feature is extracted from each image. Z-score normalization is applied to each feature derived from an image. Then, the 1000 samples are randomly divided: 90% of the samples are randomly chosen as a training group for model construction, and the remaining 10% are regarded as a testing group for evaluating the model performance. Sample division is repeated 20 times in order to diminish the effect of random sample selection on the evaluation of predictive capability. Our method is developed in Python 3.8 with sklearn v0.24 and numpy v1.19. The corresponding program runs on a PC (Core i7-6700, 8 GB RAM, 256 GB solid-state drive). A previously proposed variable selection tool [26] can alternatively be used.
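A minimal sketch of this sample preparation is shown below; the feature-file names are hypothetical, and standardizing each of the 108 variables with statistics fitted on the training split is one common reading of the z-score normalization described above.

```python
# A minimal sketch of the repeated 90/10 sample division with z-score
# normalization; file names and the exact normalization scheme are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.load("features_108d.npy")        # hypothetical 1000 x 108 feature matrix
y = np.load("labels.npy")               # 1 = with surface voids, 0 = without

for split in range(20):                 # 20 repetitions of random sample division
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.1, stratify=y, random_state=split)
    scaler = StandardScaler().fit(X_tr)               # z-score normalization
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
    # ... run the seven-step procedure of Figure 1 on (X_tr, y_tr); test on (X_te, y_te)
```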

3.2. Results of Base Classifier Selection

For each sample division, we make 10000 rounds of resampling and training. In each round, the base classifier with the lowest classification error rate is automatically assigned. Default parameters of the seven base classifiers are used. After 10000 rounds, the proportion of selected base classifiers is illustrated in Figure 3. It can be seen that LR is automatically selected 4722 times. kNN and SVM come in second and third places, being selected in 27.06% and 25.58% of the 10000 rounds, respectively. DTC, MLP, LDA, and NB follow in descending order, with their selection counts staying in single digits. The experimental results indicate that LR, kNN, and SVM are the appropriate classifiers for discriminating between positive and negative samples in the 108-dimensional feature space.

3.3. Results of Score Accumulation and Variable Ordering

For the selected base classifier in each round, a one-time permutation of each variable of the 108-dimensional feature is made, and a score representing the importance of the variable is calculated. In each round of resampling, training, and scoring, the score of each variable is accumulated. The accumulation ends after 10000 rounds. Then, the 108 variables are reordered according to their accumulated scores in descending order. As shown in Figure 4, the accumulated scores are all relatively low, with the highest score being 0.008. In that case, all the variables have to be considered.

3.4. Results of Ensemble Classification and Variable Selection

We apply an incremental strategy to all the variables. That is, variables are added one by one to form features from one dimension up to 108 dimensions, following their importance in descending order. In each dimension, an ensemble classifier is built after 1000 rounds of resampling and training. Then, the established ensemble classifier is applied to the 10% of independent samples for testing. The accuracies in each dimension are thus obtained, as shown in Figure 5: a line chart with its x-axis representing the variables in descending order and its y-axis the corresponding accuracies calculated using equation (6) in different dimensions. It can be seen in Figure 5 that the first 20 variables with the highest accumulated scores form a 20-dimensional feature comparable to the full 108-component feature for effective classification of surface voids in concrete images.

3.5. Classification Results of the Selected Variables

In order to illustrate the effectiveness of the selected 20 variables, experiments are made as shown in Figure 6. Figures 6(a)–6(d) illustrate the classification accuracies of the selected 20 variables, 20 variables drawn from the 21st to the 108th components, 20 variables drawn from all 108 components, and all 108 variables, respectively. It can be seen that the selected 20 variables keep a higher mean accuracy on the 30% held-out samples in each round of resampling, i.e., 0.9114, and a comparable accuracy on the 10% of independent samples, i.e., 0.94.

We also make the corresponding box plots, as shown in Figure 7. Here, I, II, III, and IV correspond to the experimental results shown in Figures 6(a)–6(d), respectively. It can be seen that the selected 20 variables obtain a more stable accuracy at a high level.

3.6. Classification Results after 20 Rounds of Sample Division

In order to diminish the effect of random sample selection on the evaluation of predictive capability, we repeat the whole procedure shown in Figure 1 twenty times. The accuracy, precision, recall, and F1-measure are calculated as expressed in equations (6) and (7) in order to make a quantitative comparison with the experimental results of L-SHADE-SVM-SVD [17] on each 10% independent testing set. The experimental results are illustrated in Table 1 and Figure 8, respectively. It can be seen that the 20 variables selected using our variable selection strategy obtain better results on automatic classification of concrete surface voids. As seen in Table 1, our method on all 108 variables keeps the highest mean accuracy, precision, and F1-measure. Besides, our method on the selected 20 variables keeps the highest mean recall and comparable values of the other metrics. Yet, L-SHADE-SVM-SVD using all 108 variables keeps most of the lowest standard deviations. This phenomenon can also be seen in Figure 8.

Moreover, we record the selected 20 variables by adding one count to each selected component. After 20 rounds of sample division, a histogram representing the counts of every variable is obtained and shown in Figure 9. The variables with higher counts provide better interpretation of the classification results. It can be seen that the important variables are concentrated in the first 64 of the 108 components.

4. Discussion

Experimental results have indicated the effectiveness of variable selection from image texture feature. In this section, we will further discuss three important facts derived from the experimental results.

Firstly, we wonder whether only some components, rather than the whole feature, may work. Actually, this has been confirmed by the experimental results shown in Figure 5 and Table 1. In Figure 5, it can be seen that the selected 20 variables keep a high accuracy comparable to that of the 108-dimensional feature. This observation is further emphasized by the comparison between the first two columns in Table 1.

Secondly, it needs to be discussed whether various classifiers and their parameters have to be tuned in order to get better classification accuracies. Unlike L-SHADE-SVM-SVD, our approach only utilizes the default parameters of the seven classifiers without any optimization. Nevertheless, better classification results are obtained using our method (see Table 1 and Figure 8). Besides, the pie chart shown in Figure 3 indicates that LR and kNN may be the more appropriate classifiers, achieving classification results at least as good as, or even better than, those of L-SHADE-SVM-SVD.

Last but most importantly, feature interpretability needs to be discussed. In Figure 9, the counts of each variable, derived from an accumulation of the first 20 selections in each sample division, are shown as a histogram. Interestingly, the variables with positive counts only appear in the first 64 components of the 108-dimensional texture feature, which indicates that the run-length part of the texture feature expressed in equation (3) is redundant. To confirm this phenomenon, L-SHADE-SVM-SVD is performed using only the 64 components derived from the Gabor filter response. The corresponding experimental results are shown in Figure 8 (group IV), which indicate that there is no need to use gray-level run lengths. In addition, variables with high counts in Figure 9 are considered to be interpretable. However, which of the statistics in equation (4), with which orientation angle and radial frequency, each variable represents is still unclear due to the lack of such information accompanying the provided image samples of the concrete surface.

5. Conclusions

In this paper, we propose a variable selection method for automatic classification of concrete surface voids. The image texture feature derived from the Gabor filter and gray-level run lengths is employed. A variable selection strategy with seven steps is presented and applied to the 108 components of the feature. Using the 1000 provided image samples, important variables are automatically selected to build ensemble classifiers for accurate classification of concrete surface voids. Feature interpretability is also discussed. The Gabor filter is viewed as a stable source of texture features. In the next stage, the major task is to further investigate the interpretability of the statistics with various orientation angles and radial frequencies derived from the Gabor filter response.

Data Availability

The data provided by Hoang and Nguyen [17] for comparison can be downloaded from the GitHub repository (https://github.com/NhatDucHoang/L-SHADE-SVM-SVD).

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This research was financially supported by the Natural Science Foundation of Heilongjiang Province (no. LH2020F002).