Abstract

Cracks in paved surfaces are a major concern for scientists and engineers in road maintenance and damage evaluation. Digital image processing has been applied to the inspection, classification, and decomposition of paved road surfaces. This paper tests and proposes a process for evaluating road cracks and their possible solutions. The key issues for the analysis are image enhancement and segmentation together with edge detection; the promising results obtained are discussed under the heading of simulations for the experimental and numerical crack detection. Using MATLAB, we examine various gray-level images with techniques chosen for their computational capability. The method is based on a histogram modification technique coupled with a segmentation method and crack edge detection. Finally, three feature-detection methods, namely, Harris, MSERF, and SURF, are used to conclude the research.

1. Introduction

The maintenance of roads is an important task, and its first step is to identify road distress and document it. Distresses are defects visible on the road surface; appropriate assessment of roads enables better allocation of resources and better service conditions. Road surfaces can be affected by various types of distress such as cracking, disintegration, and surface deformation [1]. Image segmentation and feature extraction cannot be ignored in digital image processing (DIP) [2]. Currently, there are different constraints in the investigation process, such as the recording, analysis, and surveying of data [3–5]. Engineers have now realized the importance of this information for measuring road quality [6]. Traditionally, human inspectors collected road surface data by walking or driving along the road to evaluate its distress and then generating reports; the entire process was therefore time-consuming and costly. The job has to be done in fast-moving traffic, which makes it more difficult and jeopardizes the safety of the personnel concerned [7]. Various test methodologies are used to check the quality of the road, such as ultrasonic testing, infrared detection, and image processing. There is also a system called WiseCrax, which uses infrared imaging to detect cracks [8]. Over time, smarter and better methods are being developed to detect microcracks; these developments are still in progress, and remarkable research has been done on asphalt pavement [9]. The literature also proposes a crack-width detection method for extracting crack regions from road surface images and calculating crack width, together with an image-based analysis method intended to improve the reliability of crack localization [10].

The automatic real-time detection systems currently in use suffer from low detection rates and processing difficulties, and there is still no algorithm that identifies and classifies cracks perfectly [11]. Although some automatic inspection systems are in use, systems with surface-scan cameras have distortion problems, and the resolution of dynamically acquired images is not satisfactory [12]. Filtering is commonly applied to reduce noise, and median filtering is the most commonly used crack pretreatment technique these days because the median filter is less sensitive to extreme values. Moore et al. [13] used the median filtering technique to improve road imaging. Median filtering is typically demonstrated on salt-and-pepper noise, which randomly adds black and white pixels to a grayscale image [14]. The segmentation method is selected on the basis of the histogram modification technique. In this technique, clipping is repeated iteratively; during each iteration, pixels are assigned to the background until only the distress features remain. At this point, a threshold can be determined automatically to separate the distress feature from the background [15]. The rest of the paper is organized into four parts following the abstract: the first covers histogram equalization, the second the numerical model and its implementation, the third the results that verify the experiments performed, and the last the conclusion and references. Figure 1 below shows the standard algorithm flowchart for a random case.

2. Histogram Equalization Framework Design

2.1. Image Preprocessing

During preprocessing, the recorded image data are corrected for errors associated with the geometry and brightness values of the pixels. Error correction techniques are used together with mathematical models [16]. Sometimes, because of constraints of the imaging system and the lighting conditions at capture time, the contrast and brightness of images obtained from conventional and digital cameras are low [17]. Therefore, image enhancement is widely used for feature extraction, analysis, and display of an image [18]. Figure 2 shows the image processing flow along with the communication between the different segments, indicating which segments are linked directly with the knowledge base and which communicate with it indirectly.

In the preprocessing stage, mathematical morphology and the Gabor filter were proposed to obtain better contrast between road and non-road pixels. Some of the images contain no road pixels because that dataset was built from remote sensing data [19]. Both techniques have been used simultaneously in the algorithms of various studies [20–22].

This section is primarily focused on the initial processing shown in Figure 3, performed by applying the Gabor filter. We collected the dataset to perform automatic pavement crack detection from a vehicle. The pavement image data collection system consists of three parts, and Figure 1 shows the system composition. The crack collection camera is a high-definition linear-array CCD camera mounted vertically above the pavement surface, and a corresponding line laser provides auxiliary lighting; the captured images are stored on the hard disk. From the acquired samples, both cracked and uncracked images are obtained; the survey vehicle traveled at between 50 and 60 km/h. Many images were analyzed, and only a subset was selected for processing; the statistics of the image set used are shown in Table 1.
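A minimal sketch of Gabor-based preprocessing for emphasizing elongated crack structures follows; the kernel parameters and file name are illustrative assumptions, not the settings used for the dataset described above.

```matlab
% Hypothetical preprocessing sketch: enhance elongated crack structures
% with a hand-built Gabor kernel (parameter values are illustrative only).
I = im2double(rgb2gray(imread('pavement.jpg')));   % assumed input file

lambda = 8;  theta = 0;  sigma = 4;  gammaA = 0.5; % wavelength, orientation, spread, aspect
half = 15;  [x, y] = meshgrid(-half:half, -half:half);
xp =  x*cos(theta) + y*sin(theta);
yp = -x*sin(theta) + y*cos(theta);
g  = exp(-(xp.^2 + gammaA^2*yp.^2)/(2*sigma^2)) .* cos(2*pi*xp/lambda);

response = imfilter(I, g, 'symmetric');            % Gabor response map
imshowpair(I, mat2gray(response), 'montage');
```

In practice, responses at several orientations would be combined, since cracks may run in any direction.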

As indicated in Table 1, default parameters were used during capture and the subsequent steps, such as transmission from the camera to the base station. Many researchers have proposed different kinds of approaches (e.g., NSST, the guided image filter, and structural-proper kernel histograms) [23]. Singh and Kaur, for example, have surveyed several methodologies for automatic road extraction [24], presenting image sources, advantages and disadvantages, basic extraction algorithms, and statistical results. Image preprocessing is independent of transmission, although it interacts with image acquisition, capture, storage, disposal, and compression.

2.2. Object Primary Attributes

The convolution in the algorithm operates on the incident image and is explained in a later section. Its physical meaning is to estimate the change of luminance in the image by calculating a weighted average of each pixel and its surrounding area, removing the illumination change L(x, y) and retaining only the surface attributes S(x, y). The objective of attribute extraction is to identify corresponding regions between the grayscale and processed (binarized) images, such as shapes, edges, and coincident similarities. The spatial method is based on higher-order probability distributions and/or the correlation between pixels [25]. In local thresholding, the threshold adapts at each pixel to the local image characteristics; in this method, a different threshold is selected for each pixel in the image [26].

2.3. Image Enhancement through Noise Filtering

Image enhancement is an important aspect of image processing [27]. To obtain the required information, specific characteristics of an image are emphasized; this is the actual purpose of image enhancement, because the information is not extracted by the process itself but merely emphasized. In image recognition, enhancement appears as contrast and acuity improvement, edge enhancement, pseudocoloring, filtering, and noise reduction; it can also magnify the image for closer observation [28]. Among the many current image enhancement procedures, filtering techniques have become very prevalent over the years and are considered appropriate for addressing the problems of noise removal and edge enhancement [29]. The next stage, the converging method for bihistogram and edge-enhanced equalization, is illustrated in Figure 4.

Figure 4 shows the converging method for bihistogram and edge-enhanced equalization. In the flowchart, once histogram segmentation is completed, the plateau limits (PLs) of each subhistogram are calculated to retain brightness, and the histogram is adjusted (clipped) with these PLs to avoid overenhancement. The CDF is then computed for each subhistogram, followed by the guided-filter (GF) linear coefficients for each pixel of the input image. Finally, the edge-enhanced images are generated, while noise amplification is suppressed, using the linear coefficients and the CDFs. The proposed algorithm is distinctive because histogram equalization normally suffers from brightness inconsistency, overenhancement, and undesired noise amplification; to alleviate these shortcomings, we combine the guided filter (GF) with bihistogram equalization with a plateau limit (BHEPL).
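A simplified sketch of the plateau-limited bihistogram step follows; the split point, clipping rule, and file name are assumptions for illustration, and the guided-filter edge-enhancement stage of the full pipeline is omitted here.

```matlab
% Simplified BHEPL-style equalization sketch (not the paper's exact method).
I  = imread('pavement_gray.png');          % assumed 8-bit grayscale input
m  = round(mean(I(:)));                    % split point: global mean intensity

hist256 = imhist(I);                       % 256-bin histogram
lo = hist256(1:m+1);   hi = hist256(m+2:end);

clipTo = @(h) min(h, mean(h(h > 0)));      % plateau limit: mean of nonzero bins
lo = clipTo(lo);       hi = clipTo(hi);

cdfLo = cumsum(lo) / sum(lo);              % normalized CDFs of the clipped halves
cdfHi = cumsum(hi) / sum(hi);

mapLo = uint8(round(cdfLo * m));                 % lower half maps into [0, m]
mapHi = uint8(round(m + 1 + cdfHi * (254 - m))); % upper half maps into [m+1, 255]
lut   = [mapLo; mapHi];                          % 256-entry lookup table

J = lut(double(I) + 1);                    % apply the mapping via indexing
imshowpair(I, J, 'montage');
```

Splitting at the mean and equalizing each half separately is what keeps the overall brightness close to that of the input, while the plateau clipping limits overenhancement.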

The image processed in this experiment is homogeneous rather than heterogeneous, so its gray level remains roughly constant. Such image pixels produce a histogram characterized by very narrow peaks. This uniformity is the result of improper illumination of the scene, and the resulting images are mixed and difficult to interpret because one narrow band of gray levels occupies what should be a wider range. Jitprasithsiri suggested applying median filtering as a pavement image enhancement technique [30]. For common situations, the contrast stretching method is the most suitable, and a variety of stretching techniques have been developed to expand a narrow gray-level range over the available dynamic range [31, 32]. Noise filtering treats all unwanted information as noise and removes it from the image; it can also be used to derive various interactive features from images [33], as shown in Figure 5 below for the original and filtered images after applying a noise filter.

In Figure 5, 5(a) shows the original image and 5(b) the adjusted image after applying the noise filter. Histograms are widely used for image enhancement because they reflect the image content, and image features can be modified by changing the histogram (e.g., by equalizing it). Histogram equalization is a nonlinear stretch that redistributes pixel values so that each value in the range has approximately the same number of pixels; the result approaches a uniform histogram. Therefore, the contrast at the histogram peaks increases, and the contrast at the tails decreases. For enhancement, the proposed filter is the median filter because it reduces noise very effectively; this noise reduction is a normal preprocessing step that improves the results of subsequent processing. Median filtering is very common in digital image processing because, in many cases, it preserves edges [34]. The median filter works by replacing every entry of the input signal with the median value of the adjacent entries.
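A minimal sketch of median filtering on simulated salt-and-pepper noise follows; the file name, noise density, and window size are assumptions for illustration, not the paper's settings.

```matlab
% Median filtering sketch: remove impulse (salt-and-pepper) noise.
I        = rgb2gray(imread('pavement.jpg'));   % assumed input file
noisy    = imnoise(I, 'salt & pepper', 0.05);  % simulate impulse noise
denoised = medfilt2(noisy, [3 3]);             % median of each 3x3 window
imshowpair(noisy, denoised, 'montage');
```

The 3x3 window is the usual starting point; larger windows remove more noise but blur fine cracks.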

2.4. Numerical Modelling and Implementation Algorithm

Not every image is uniformly illuminated: some images are brighter at the corners, some at the edges, and some are less bright overall [35]. Enhancement is therefore the appropriate tool, and through this procedure the contrast at the histogram peaks increases while that at the tails decreases. In equalization, the pixel values are redistributed in such a way that every value in the given range receives approximately the same number of pixels. Let I(x, y) be the true input image at pixel coordinates (x, y) and n(x, y) the noise image added to the input; the acquired image can then be represented by
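Stated explicitly (with g denoting the acquired image, a symbol assumed here), the relation described is:

```latex
g(x, y) = I(x, y) + n(x, y)
```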

The equation above has two parts: I(x, y), the initial image, and n(x, y), an unwanted signal representing noise. Histogram modelling treats the gray levels as a continuous rather than a discrete function, which means that every pixel value is normalized to lie within the interval [0, 1]. Let r denote the gray level of the input image at the coordinates (x, y) and s the gray level of the processed image. Mathematically, the transformation can be written as follows.

In the above equation, T represents the transfer function for the input image. Writing the transformation as s = T(r), the equation takes the form below, where r is the input gray level and T is the transfer function of the input image; the input is mapped to the output as shown mathematically in equation (3). For histogram equalization, T(r) is single-valued and monotonically increasing over the gray-level range, so the overall equation is as follows.

In equation (4), the state of the image is changed from A to B; in other words, the input is mapped to the processed image by the transformation function, so the gray-level densities of the input region correspond to those of the output region. In terms of histogram equalization, this is shown as follows.

In terms of frequency, the relationship takes the following form, in which the left-hand side is the frequency (probability density) of the input image and the right-hand side is the integral of that density from 0 up to the gray level r, as given in the following equation.

Since T(r) can simply be taken as the cumulative probability (the cumulative distribution function of the input gray levels), equation (6) yields the following equation.
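For reference, the standard continuous-variable form of the histogram-equalization transform that equations (3)–(7) describe, in common textbook notation (a reconstruction, since the displayed equations are not reproduced here), is:

```latex
% Cumulative-distribution transform (gray levels normalized to [0, 1]):
s = T(r) = \int_{0}^{r} p_r(w)\, dw , \qquad 0 \le r \le 1 .
% Since ds/dr = p_r(r), the equalized output density is uniform:
p_s(s) = p_r(r)\left|\frac{dr}{ds}\right| = p_r(r)\cdot\frac{1}{p_r(r)} = 1 , \qquad 0 \le s \le 1 .
```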

To find the inverse transfer function, let s be the processed intensity obtained from the transfer function of r. If the transformation satisfies the condition of being single-valued and monotonically increasing over the intensity range of r, then it can be inverted, where r is recovered as the value of the inverse transform of its input. Thus, a contrast change g0 between u and the reference image could be calculated by combining a pseudoinverse of the reference cumulative histogram with that of u. This direct method is not adequate because it does not guarantee that the final result will be the same as, or similar to, the reference, specifically if the initial assumption v = g(u) is not exactly right, which has to be expected in real situations. Therefore, the histograms of u and v are represented in the discrete case by the following governing equations.

In equation (9), u and v are the pixel values at the point x, which lies at coordinates (i, j), and the overall sum is normalized by the size of the domain. The summation sign sums over all pixels of the image domain Ω, using the composition g0∘u. There is also a limitation on the index: k runs from a minimum value of 1 to the maximum value N, and during the summation x belongs to Ω. The numerical relations from (9) to (15) together reflect the generalized solution method for the histogram.
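One common discrete form consistent with the description above (the exact normalization of the paper's equations (9)–(15) is not reproduced; the domain Ω, bin count N, and indicator notation are assumptions) is:

```latex
h_u(k) = \frac{1}{|\Omega|} \sum_{x \in \Omega} \mathbf{1}\{\, u(x) = k \,\} ,
\qquad k = 1, \dots, N ,
```

and similarly for the histogram of the transformed image, h_{g0∘u}(k).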

2.5. Image Segmentation through Iteration Point

In this part of image processing, every image is divided into two main parts for segmentation: the background and the foreground. Segmentation stops when the goal of isolating the foreground from the image is achieved; isolating the foreground automatically isolates the background as well. No researcher aiming at automatic crack sealing through image processing can ignore the importance of image segmentation [36]. In this stage, an iteration point is used, and the different segments of the image are explained under the following subheadings.

2.6. Automatic Thresholding

An effective and reliable method is autothresholding, in which the background noise is significantly reduced. This is accomplished by utilizing a feedback loop to optimize the threshold value before the original grayscale image is converted to binary. Thanks to abundant research [37], many segmentation methods have been presented [38]. Kaur and Singh have contributed a review of different segmentation algorithms, describing structural, stochastic, spectral, and hybrid techniques [39]. Thresholding is a technique that designates a fixed value: if the image intensity at a point is below this value, that part of the image is considered black, but if the intensity is greater than the fixed value, that part of the image is not black and is therefore white. If X represents the fixed threshold and f stands for the intensity, then white and black are described as in equations (16) and (17).

Simply put, the whole image is divided into two gray levels: one representing the foreground and the other the background pixels.

The general formula in equation (18) gives the processed image g(x, y) in terms of the grayscale of the pixel f(x, y).
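A hedged reconstruction of the global-thresholding rule that equations (16)–(18) describe, with X the fixed threshold, f the input intensity, and g the binarized output (symbol names follow the surrounding text; the paper's displayed forms are not reproduced here):

```latex
g(x, y) =
\begin{cases}
1\ (\text{white}), & f(x, y) > X, \\
0\ (\text{black}), & f(x, y) \le X,
\end{cases}
\qquad \text{with} \qquad g(x, y) = T\big(f(x, y)\big).
```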

Equation (18) reflects the transformation of the original image by T, with output g(x, y). Since thresholding is a mapping onto a binary set, the threshold is defined at the image coordinates rather than for a particular pixel and is constant for every pixel of the image. There are two cases: if T is defined over a neighborhood of (x, y), the operation is known as a template (local) operation, but if T depends only on the point (x, y) itself, it is known as a point operation [40]. For segmentation of multiresolution images, a graph-theoretic approach is used. Optimal thresholding can be improved through correlation, since an image may be a combination of different objects, that is, a heterogeneous image; we can find different objects and shapes such as a kite, trees, wind, a boy, and water in the same image. Considering the case of precise, accurate, and improved genetic algorithms, the problem statement is divided into two main classes under the umbrella of image segmentation:
(1) Parameter selection for better results from the segmentation process.
(2) Region labeling for pixel-level segmentation.

Of the two classes mentioned above, the first is widely used because it requires optimized parameters. Besides these, there are many other methods for performing segmentation. In the segmentation process, the image is divided into different segments by thresholding, using techniques such as the watershed transform and iterative thresholding [41]. In iterative thresholding, the minimum and maximum gray values are used to find the initial threshold value T(i), as given in the following equation.

Equation (19) above gives the iterative thresholding formula, in which T(i) is the threshold obtained at each iteration, Zmax is the maximum gray value, and Zmin is the minimum gray value. Table 2 summarizes the feature extraction methods considered.
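A minimal sketch of the iterative threshold selection described here, starting from T = (Zmax + Zmin)/2 and refining by the midpoint of the two class means (a standard iterative scheme; the tolerance and file name are assumptions, not taken from the paper):

```matlab
% Iterative threshold selection sketch for crack/background separation.
I = im2double(rgb2gray(imread('pavement.jpg')));   % assumed input file
T = (max(I(:)) + min(I(:))) / 2;                   % initial threshold: (Zmax + Zmin)/2
for it = 1:100
    fg = I(I >  T);                                % tentative foreground pixels
    bg = I(I <= T);                                % tentative background pixels
    Tnew = (mean(fg) + mean(bg)) / 2;              % midpoint of the class means
    if abs(Tnew - T) < 1e-4, break; end            % stop when the threshold settles
    T = Tnew;
end
bw = I > T;                                        % final binary (crack candidate) mask
imshow(bw);
```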

2.7. Crack Detector Methods and Their Classification

Different algorithms are listed in Table 2; methods such as MSER [42], the Laplacian of Gaussian [43], and unitary transforms [44], among others, have been used by researchers for image feature extraction [44], and one of the most convenient and reliable is the Harris and Stephens detector [45], summarized in Table 3. In the Harris detector, the change in intensity is evaluated for a shift [u, v] [46].

In equation (20), the shifted intensity is denoted by I(x + u, y + v); the summation runs over x and y, and the window function weights the shifted intensity. I(x, y) stands for the intensity of the specified pixel, which is shifted by (u, v) as shown in equation (21). In that equation, the intensity of the specified pixel is subtracted from the shifted intensity value.
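For reference, the standard Harris–Stephens energy that equations (20) and (21) describe, with w(x, y) the window function (a reconstruction in the usual notation rather than the paper's displayed equation):

```latex
E(u, v) = \sum_{x, y} w(x, y)\,\big[\, I(x + u,\, y + v) - I(x, y) \,\big]^{2}
```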

For nearly constant patches, E(u, v) will be close to zero, whereas it will be much larger than zero for distinctive patches; we therefore look for patches where E(u, v) is large. The automatic clustering technique has been used by Porter and Canagarajah [47, 48] for the extraction of image features.
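A minimal sketch comparing the three detectors named above on a grayscale pavement image, using the MATLAB Computer Vision Toolbox detector functions (the file name and the number of plotted points are assumptions, not the paper's parameters):

```matlab
% Feature detector comparison sketch: Harris, MSER, and SURF.
I = rgb2gray(imread('pavement.jpg'));          % assumed input file

harrisPts = detectHarrisFeatures(I);           % Harris-Stephens corner response
mserRegs  = detectMSERFeatures(I);             % maximally stable extremal regions
surfPts   = detectSURFFeatures(I);             % speeded-up robust features

figure;
subplot(1,3,1); imshow(I); hold on; plot(harrisPts.selectStrongest(50)); title('Harris');
subplot(1,3,2); imshow(I); hold on; plot(mserRegs);                      title('MSER');
subplot(1,3,3); imshow(I); hold on; plot(surfPts.selectStrongest(50));   title('SURF');
```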

3. Simulations for the Experimental and Numerical Crack Detection

Every technique has its benefits and drawbacks, depending on the requirements of the research, the resources, and the desired output. For noise filtering, for example, we can use different filters such as the low-pass filter, the high-pass filter, or the mean and median filters [49].

3.1. Implemented Software

All the techniques and processes were implemented in the well-known software MATLAB because it provides an excellent environment for image processing simulation. Taking images as input, processing them, and obtaining the output is easy and inexpensive with the Image Processing Toolbox [44]. One of the key advantages of this software is its control features: the input can be controlled, and the program can be stopped at any stage if needed [45]. The environment is approachable and user-friendly because, while editing, we do not need to rewrite and rerun the whole program from the beginning as in other programming languages such as C or C++; the researcher thus saves time and effort while achieving a better outcome. For this experiment, an NVIDIA GPU was used, which speeds up processing, and a GUI (graphical user interface) was used to control the command parameters. The GPU is also helpful for processing all of the above techniques [46]. The specifications of the device on which these experiments were performed are given in Table 4 below.

4. Description of the Experiment

For the experiments, a Dell laptop with an Intel Core i7-8565U CPU (1.80 GHz base, 1.99 GHz boost) was used, with 7.82 GB of usable RAM and a 64-bit operating system. MATLAB 2015a was used for simulation and analysis of the acquired image data. Different images were captured from the camera with different specifications, but the image processed in MATLAB 2015a had a size of 318 × 307 × 3, occupying a total of 292,878 bytes as uint8.

5. Experimental Results and Analysis

A digital image is composed of a finite number of elements, each with a particular location and value; these elements are known as image elements, picture elements, or pixels [7]. In the image enhancement part, the brightness of the pixels is increased for better interpretation by machines or for human visualization. In the classical approach, histogram equalization of an image is analyzed independently of its contrast [429]. It consists of applying a particular contrast change that flattens the histogram, which can be done either exactly or by forcing the repartition function to be as linear as reasonably possible. Histogram adjustment is not an appropriate answer for contrast-invariant image analysis. The neighborhood mode is called the "window"; it slides, entry by entry, over the entire signal. For 1D signals, the most obvious window is just a few entries before and after the current one, while for 2D or higher-dimensional signals, more complex window patterns such as a "box" or "cross" can be used [50].

Figure 6 shows the original experimental image taken from the camera; Figure 6(b) shows its histogram, while 6(c) is the graphical (surface) representation of the original image. Later in this research, these will be compared with the current results and with the previous literature.

When the image is visualized, it becomes clear that the background intensity varies from point to point. In the preprocessing stage, therefore, an attempt is made to make the background uniform within a range. Figures 7(a) to 7(c) depict the stages of estimating the approximate background and subtracting it from the original image (7(c)), so that the variation of the pixel values can be visualized. To create the approximate background according to the composed algorithm, a morphological process with a suitable structuring-element radius is used. For a better visualization effect, Figure 7 also shows colored parametric surfaces that plot the image as a mathematical function over a rectangular region.
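A minimal sketch of this background-estimation step follows; the structuring-element radius of 15 and the file name are illustrative assumptions, not the values used in the paper.

```matlab
% Approximate background by morphological opening, then subtract it.
I  = rgb2gray(imread('pavement.jpg'));      % assumed input file
bg = imopen(I, strel('disk', 15));          % slowly varying background estimate
I2 = imsubtract(I, bg);                     % background-subtracted image
figure;
subplot(1,3,1); imshow(I);       title('Original');
subplot(1,3,2); imshow(bg);      title('Approximate background');
subplot(1,3,3); imshow(I2, []);  title('Background subtracted');
```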

6. Adaptive Histogram Adjustment and Modification

In this part of the research, the subhistograms are adjusted and modified using histogram equalization [51], with segmentation applied to each subhistogram. This method is effective for noise removal and better visual contrast. The detail-amplification step can be summarized as follows (a brief code sketch is given after the list):
(1) A ∗ G is the filtering (smoothing) of the image A.
(2) A ∗ (dG) contains the high-frequency content.
(3) A + l·A ∗ (dG) = A ∗ ((1 + l)δ − lG) = A ∗ S(l) amplifies fine details in the image, where dG = δ − G denotes the detail (high-pass) kernel.
(4) The parameter l controls the amount of amplification.
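A minimal sketch of this detail-amplification (unsharp-masking) step, assuming a Gaussian smoothing kernel G and an illustrative amplification factor l (the values and file name are not taken from the paper):

```matlab
% Unsharp-masking sketch: amplify fine details by adding back high frequencies.
A = im2double(rgb2gray(imread('pavement.jpg')));   % assumed input file
G = fspecial('gaussian', [9 9], 2);                % smoothing kernel G

smoothA = imfilter(A, G, 'replicate');             % A * G  (low-pass part)
detailA = A - smoothA;                             % A * (dG): high-frequency content
l = 1.5;                                           % amplification factor
sharpA  = A + l * detailA;                         % A + l*A*(dG): amplified fine details

imshowpair(A, sharpA, 'montage');
```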

After estimating the approximate background and subtracting it from the original image, the output image is adjusted and shown in Figure 8, in which 8(a) is the adjusted image, 8(b) its histogram, and 8(c) its plotted (surface) representation. The next processing step is the proposed adaptation of the image.

Figures 9(a) to 9(c) present the adapted image, the adaptive-equalization result, and the adapted image histogram, respectively. Analysis of Figure 9 makes it clear that the pixel contrast has improved: after subtracting the approximate background and adjusting the image, the adapted image is obtained, and a large difference is noticeable between the pre- and post-adaptation images. Applying the method yields the processed image with an equalized histogram, as shown in Figure 10(d).
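A minimal sketch of the adjust–adapt–equalize chain that the figures describe; the functions used here (imadjust, adapthisteq, and histeq) are illustrative stand-ins for the steps named in the text, not the paper's exact code, and the file name and radius are assumed.

```matlab
% Adjustment, adaptive equalization, and global equalization sketch.
I  = rgb2gray(imread('pavement.jpg'));      % assumed input file
bg = imopen(I, strel('disk', 15));          % approximate background (as above)
I1 = imsubtract(I, bg);                     % background-subtracted image

I2 = imadjust(I1);                          % adjusted image      (Figure 8 stage)
I3 = adapthisteq(I2);                       % adaptive equalization (Figure 9 stage)
I4 = histeq(I3);                            % global equalization   (Figure 10 stage)

figure;
subplot(2,2,1); imhist(I1); title('Subtracted');
subplot(2,2,2); imhist(I2); title('Adjusted');
subplot(2,2,3); imhist(I3); title('Adapted');
subplot(2,2,4); imhist(I4); title('Equalized');
```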

The output of the whole histogram pipeline is shown and explained in Figures 10(a) to 10(d), which present the final histogram distributed according to pixel value. In the resulting image, the pixel values at every location, whether in the center, upper, or lower part, are almost the same, because the pixel values have been equalized and uniformly distributed, as shown in the figures above.

Figure 11 shows the comparison between the different feature extractors, Harris, SURF, and MSERF, already discussed in Table 3. Histogram equalization applies a particular contrast change that flattens the histogram, either exactly or by forcing the repartition function to be as linear as reasonably possible. In some cases, however, histogram adjustment may not be an appropriate answer for contrast-invariant image analysis [48]: a small region of the image, either dark or bright, can dominate the result of histogram equalization. We observe image scenes from different angles and concentrations, which makes the method insensitive to local contrast changes in the scene [52]. On the other hand, histogram equalization is sometimes criticized as unpredictable because it can greatly amplify the noise in poorly contrasted regions [53] and thus produce unwanted noise textures. Histogram equalization does not permit complete control of this noise amplification, so it is risky to use its output as the input of an image analysis device.

7. Conclusions

This article provides an understandable explanation of the different stages of digital image processing along with their software implementation in several variations. Initially, image preprocessing is performed, and the processed images are improved using image enhancement. The work validates the numerical and experimental model for paved-crack analysis through software simulation at every stage, assessing the degree of precision and improvement. The results are in significant agreement with existing models and are verified by the simulation results. In the implemented algorithms, the features are extracted in our own distinctive way. The results strongly support the conclusion that noise is effectively removed with median filtering. Different feature extractors, namely, Harris, MSERF, and SURF, are used for feature extraction. Regardless of the effectiveness and robustness of the techniques executed in MATLAB, there is some margin for improvement when studying crack detection for dynamic design. In the future, other algorithms can also be implemented and compared with the ones implemented here.

Data Availability

The comparative and experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

There are no conflicts of interest regarding the publication of this paper.