Abstract

Fabric defect detection is a crucial quality control step in the textile manufacturing industry. In this article, a machine vision system based on the Sylvester Matrix-Based Similarity Method (SMBSM) is proposed to automate the defect detection process. The algorithm involves six phases, namely, resolution matching, image enhancement using Histogram Specification and Median–Mean-Based Sub-Image-Clipped Histogram Equalization, image registration through alignment and hysteresis process, image subtraction, edge detection, and fault detection by means of the rank of the Sylvester matrix. The experimental results demonstrate that the proposed method is robust and yields an accuracy of 93.4%, a precision of 95.8%, and a processing time of 2275 ms.

1. Introduction

Quality is an important aspect in the production line of the textile industry. Thus, fault detection in fabric quality control is an essential requirement of the textile industry. To minimize the manual labor in this endeavor, image analysis and processing techniques are widely used in the industry to automate the defect detection and classification process.

Defects that occur frequently in the fabric pattern are costly: manufacturers recover only 45-65% of their profit on off-quality goods [1, 2]. Hence, the defect detection process in the textile industry needs to satisfy high expectations of nearly 100% detection accuracy. Therefore, any method adopted should be able to perform real-time defect detection with agility and accuracy. The main challenges encountered include the plethora of types and zones of defects to be detected, as well as the very fine variations that exist between the defects.

In many textile companies, the workers perform the fabric quality control process through human visual examination. As such, quality control is totally observer dependent, and it lacks uniformity. Further, the fabric quality control process is highly demanding for a human observer because the type of defects present will vary from fabric to fabric, according to the dynamic nature of the production process.

In this paper, we have generalized and automated fabric quality control using a Sylvester Matrix-based defect detection algorithm, which can easily detect even very fine defects on fabrics by comparing the input image with a reference image. A substantial body of literature addresses algorithmic developments for defect detection in the textile industry [3-8]. As specified in [9, 10], an automated defect detection and classification system will certainly enhance product quality and result in heightened productivity. The autocorrelation method is among the robust algorithms for detecting defects in both patterned and unpatterned fabrics [11]. The Gabor Wavelet Network (GWN) has proven an effective technique for extracting texture features from textile fabrics; depending on the features extracted, an optimal Gabor filter was designed for defect detection [12-16]. Reference [17] presents the wavelet subwindow and the gray-level co-occurrence matrix for defect detection, and uses the Mahalanobis distance to categorize each wavelet subwindow as either defective or nondefective. Local homogeneity and neural network-based defect detection algorithms are presented in [18]. A design that includes both hardware and software, and which uses the Otsu and Golden image subtraction methods, was proposed in [19] to reveal the defects; its performance on a variety of defects validated the accuracy of the method developed.

In [20], a fusion analysis for surface defect detection was proposed, combining global and local features by extracting and classifying the energy characteristics of the images. Based on a genetic elliptical Gabor filter, a novel method of defect detection was proposed in [21]: after being tuned by a genetic algorithm, the Gabor filter was applied to a variety of samples differing in type, shape, size, and background.

The Elo rating method inspects fabric by staging fair matches between partitions of the images [22]; it was estimated to reach 97% accuracy on 336 patterned images. The particle analyzer method in [23] shows higher performance than other traditional methods, as it drives the analysis towards a predefined region of interest (ROI) and defines a particle as consisting of a minimum number of pixels. Moreover, the large number of defect classes with high intra-class diversity remains a major issue for Feed-Forward Neural Network (FFNN) and Support Vector Machine (SVM) based inspection methods, since these classifiers must be trained on known classes of fabric defects [24, 25].

The principal deficits in the available literature are the overall lower accuracy and the substantial decision time. The methods described in [26, 27, 28] achieve accuracy levels of only 90%, 90.6%, and 90.8%, respectively. The processing times of the algorithms in [29, 30] remain high, at 5.2 s and 5.9 s, respectively. Furthermore, the methods proposed in [31, 32] fail to perform acceptably when detecting the finer defects in the fabrics.

In this paper, we describe a novel defect detection method with fast processing and high accuracy, capable of detecting even very fine defects in fabrics by comparing the reference and test images. In this method, all the images used are in RGB scale with identical resolution. First, image enhancement is applied to every test image to obtain better contrast and thus facilitate defect detection. Next, image registration ensures that all the test images are in proper alignment. After this step, image subtraction is performed to cross-check the input against the reference image and to detect any type of defect. If defects are indicated, edge detection is applied to both the reference and test images to enable tracing of even the finer details. Finally, the Sylvester Matrix-Based Similarity Method (SMBSM) is used to identify the defects in the fabrics. The proposed method runs with a processing time of 2275 ms and an average accuracy of 93.4%.

2. Proposed Methodology

In this research, an automated fault detection technique is proposed to lessen the degree of human interaction required for fault inspection in fabrics. Three types of fault inspection algorithms exist, namely, the referential, nonreferential, and hybrid approaches [33]. The algorithm presented here is based on the referential approach, in which a reference image is employed to find defects in the test image. The proposed system is depicted in Figure 1.

It is well known that the performance of any image comparison algorithm is highly dependent on the capturing conditions of the input image. However, our system can analyze images acquired under different capture conditions, in terms of contrast, distortion, and alignment, owing to the image preprocessing techniques adopted. In the proposed system, the test image and the reference image are in RGB format with identical resolution.

2.1. Image Enhancement

Image enhancement aims at improving the quality of test images captured under different lighting conditions. In the proposed algorithm, Histogram Specification (HS) first adjusts the contrast level of the test image with respect to the reference image. Second, the Median–Mean-Based Sub-Image-Clipped Histogram Equalization (MMSICHE) algorithm is adopted to preserve brightness and image information content (entropy) while controlling the enhancement rate. This combination avoids excessive enhancement and yields naturally enhanced images, ensuring that test images taken under different lighting conditions are accurately preprocessed for defect detection.

2.1.1. Histogram Specification (HS)

Histogram specification is used to rectify the contrast level of the input test image against the reference image; i.e., if the contrast of the input image is low in comparison with the reference, a correction is applied to raise it, and vice versa for high-contrast inputs [34].

The histograms of the intensity levels of the reference and test images are defined over the same gray-level range, with one count per intensity level giving the number of pixels at that intensity in the reference image and in the input test image, respectively. The inverse transformation defined in (1) maps the intensity levels of the test image onto those of the reference image and yields the transformed test image; the row and column dimensions of the images are the remaining parameters of (1).
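As a concrete illustration of this mapping, the following sketch performs histogram specification on a single-channel (grayscale) image by matching cumulative distribution functions. The function name and the use of NumPy are our own choices rather than the paper's; for RGB inputs the mapping would be applied per channel.

```python
import numpy as np

def histogram_specification(test: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map the gray levels of `test` so its histogram matches `reference`.

    Both images are expected as uint8 arrays; the mapping is derived from the
    cumulative distribution functions (CDFs) of the two histograms."""
    # Histograms and normalized CDFs over the 256 gray levels.
    t_hist = np.bincount(test.ravel(), minlength=256).astype(np.float64)
    r_hist = np.bincount(reference.ravel(), minlength=256).astype(np.float64)
    t_cdf = np.cumsum(t_hist) / t_hist.sum()
    r_cdf = np.cumsum(r_hist) / r_hist.sum()

    # For every test gray level, find the reference level with the closest CDF
    # value; this realizes the (approximate) inverse transformation above.
    mapping = np.searchsorted(r_cdf, t_cdf).clip(0, 255).astype(np.uint8)
    return mapping[test]
```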

2.1.2. Median–Mean-Based Sub-Image-Clipped Histogram Equalization (MMSICHE) for Contrast Enhancement

The MMSICHE algorithm consists of three steps: median and mean calculation, histogram clipping, and histogram subdivision and equalization. It further enhances the quality of the transformed test image.

The median intensity of the image is the gray level at which the cumulative density function reaches approximately 0.5 [35]. Based on this median value, two mean intensity values, the mean of the lower sub-histogram and the mean of the upper sub-histogram, are calculated for the two individual sub-histograms. These three values, indicated in Figure 2, are calculated as in [35, 36] before the histogram clipping process.

Histogram clipping is performed to control the degree of enhancement, so that the resultant image appears as natural and as close to the input image as possible. The clipping threshold is calculated as in [28, 35].

The image histogram is then divided into four bins, as shown in Figure 2. The subdivision process produces four sub-images whose gray-level ranges run from 0 to the lower mean, from the lower mean to the median, from the median to the upper mean, and from the upper mean to the maximum gray level, respectively. In the next step of MMSICHE, based on the pixel distribution, the four sub-histograms of the clipped histogram are equalized individually and independently, using (2), (3), (4), or (5), whichever applies, for independent fine tuning; in (2)-(5), the clipped histogram and the total numbers of pixels in the four sub-images appear as the parameters.

The final step is to integrate the four sub-images into one complete image for further analysis.
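The following sketch outlines the three MMSICHE steps described above for a grayscale image. It is only an approximation: the clipping threshold is taken here as the mean histogram bin count, a simple placeholder for the exact rule of [28, 35], and the function name `mmsiche` is ours.

```python
import numpy as np

def mmsiche(img: np.ndarray) -> np.ndarray:
    """MMSICHE sketch: (1) median and sub-histogram means, (2) histogram
    clipping, (3) subdivision into four gray-level ranges and independent
    equalization of each range."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    levels = np.arange(256)

    # (1) Median intensity and the means of the lower/upper sub-histograms.
    cdf = np.cumsum(hist) / hist.sum()
    median = int(np.searchsorted(cdf, 0.5))
    low, up = levels <= median, levels > median
    mean_low = int(round(np.average(levels[low], weights=hist[low] + 1e-12)))
    mean_up = int(round(np.average(levels[up], weights=hist[up] + 1e-12)))

    # (2) Clip the histogram to limit the enhancement rate; the mean bin
    # count stands in for the clipping threshold of [28, 35].
    clipped = np.minimum(hist, hist.mean())

    # (3) Equalize the four sub-histograms independently over their ranges.
    bounds = [0, mean_low, median, mean_up, 255]
    mapping = np.zeros(256)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        seg = clipped[lo:hi + 1]
        if seg.sum() > 0:
            seg_cdf = np.cumsum(seg) / seg.sum()
            mapping[lo:hi + 1] = lo + seg_cdf * (hi - lo)
    return mapping.astype(np.uint8)[img]
```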

2.2. Image Registration

Image registration aims at finding the best transformation, which will align both the reference and input images. More precisely, it is used to identify a correspondence function, or mapping, that takes each spatial coordinate from the reference image and returns the coordinate for the test image. The transformation adopted involves two stages, namely, the geometric transformation and image resampling.

The geometric transformation [37] adopted is the affine mapping

x' = a_{11} x + a_{12} y + t_x,   y' = a_{21} x + a_{22} y + t_y,   (6)

where (x, y) is a point coordinate of the test image and (x', y') is the corresponding point coordinate of the reference image. The transformation in (6) has six degrees of freedom (DOF): t_x and t_y describe the translation, while a_{11}, a_{12}, a_{21}, and a_{22} account for the scaling and shearing between the two images.

With this transformation, a correspondence map is established between the pixels of the preprocessed test image and those of the gray-scale reference image, and the registered test image is generated.
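A minimal sketch of the mapping-and-resampling stage is given below, assuming the six parameters of (6) have already been estimated (for example from matched feature points, which is not shown). The function name `register` is illustrative, and SciPy's `affine_transform` supplies the bilinear resampling.

```python
import numpy as np
from scipy.ndimage import affine_transform

def register(test: np.ndarray, A: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Warp the test image onto the reference grid using the six-DOF
    transform of (6): a 2x2 matrix A = [[a11, a12], [a21, a22]] plus a
    translation t = (t_x, t_y), with bilinear resampling (order=1)."""
    # scipy samples the *input* image at (matrix @ x_out + offset) for each
    # output coordinate, so the inverse of the forward transform is supplied.
    A_inv = np.linalg.inv(A)
    return affine_transform(test, A_inv, offset=-A_inv @ t, order=1)
```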

2.3. Image Subtraction

Image subtraction is performed to obtain the differential mapping between the reference image and the preprocessed test image. As image subtraction aims at identifying the presence of defects in the input, it produces a binary decision, which is "1" if defects are present and "0" otherwise. If defects are detected, the fabric is transferred to the edge detection and fault detection stages, where the exact location and details of these defects in the fabric are identified. If nothing is detected, the fabric is labeled "defect-free".

In the proposed algorithm, the absolute difference between corresponding pixels of the reference image and the preprocessed test image is calculated in a pixel-wise subtraction process.

The output of the subtraction may include a few erroneous pixels due to uncorrected noise or misalignment between the two images. The double-threshold approach therefore marks pixels whose difference values fall between the lower and upper thresholds as weak, i.e., non-relevant unless supported, as shown in Figure 3. The hysteresis process is then performed, in which a weak pixel is promoted to a strong one if and only if at least one strong pixel is present within its neighborhood, as depicted in Figure 3; weak pixels that are not promoted are eliminated.
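The sketch below illustrates the subtraction, double-thresholding, and one-pass hysteresis steps just described. The threshold values are assumed to be supplied externally, as in the manually tuned setup of Section 3, and the 8-connected neighbourhood is our reading of "neighborhood".

```python
import numpy as np
from scipy import ndimage

def detect_defect(reference: np.ndarray, test: np.ndarray,
                  t_low: float, t_high: float) -> bool:
    """Pixel-wise absolute difference, double thresholding, and a one-pass
    hysteresis step; returns True if any defect pixel survives.

    t_low and t_high are the manually tuned lower and upper thresholds."""
    diff = np.abs(reference.astype(np.int16) - test.astype(np.int16))

    strong = diff >= t_high                    # definite defect evidence
    weak = (diff >= t_low) & (diff < t_high)   # uncertain evidence

    # Hysteresis: a weak pixel is kept only if a strong pixel lies in its
    # 8-connected neighbourhood.
    kept = weak & ndimage.binary_dilation(strong, structure=np.ones((3, 3)))
    return bool((strong | kept).any())
```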

2.4. Sparse Banded Filter Matrices (SBFM) for Edge Detection

In the proposed algorithm, Sparse Banded Filter Matrices (SBFM) [38] enable the detection of edge information in both the test and reference images. SBFM comprises two major stages, namely, implementation of the zero-phase high-pass Butterworth filter using the banded matrices, and edge extraction. This edge detection method brings out the finer details of both the gray-scale reference image and the registered test image, thus ensuring that even fairly insignificant defects on the test image are detected during the fault detection stage.

The implementation of zero-phase noncausal recursive high-pass filters based on banded matrices was introduced in [38]; here it is used to extract the edge information from the images.

In matrix form, the first-order Butterworth high-pass filter is expressed through two banded matrices whose sizes are determined by the length of the input signal; the entries of these banded matrices are built from the filter coefficients.

Furthermore, the transfer function of the zero-phase noncausal higher-order high-pass Butterworth filter, given in (10), is parameterized by the filter order and the cut-off frequency. According to (10), the frequency response is maximally flat at ω = 0 and of unity gain at ω = π; therefore, this is a zero-phase digital filter. The zero-phase high-pass Butterworth filter of (10) can be implemented using the banded-matrix form of (8), with the two banded sparse matrices sized according to the filter order and the signal length.

The proposed sparse banded high-pass filter is then applied row-wise and column-wise to extract the vertical and horizontal edges, respectively, as in [39], detecting all the edges of the processed test image and the gray-scale reference image and producing their corresponding edge maps.
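The following sketch reproduces the row-wise and column-wise zero-phase high-pass filtering using SciPy's `butter`/`filtfilt` as a stand-in for the banded-matrix (SBFM) implementation of [38]. The default order and cut-off mirror the values reported in Section 3, and combining the two directional responses by magnitude is our simplification.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def edge_map(img: np.ndarray, order: int = 3, cutoff: float = 0.9) -> np.ndarray:
    """Row-wise and column-wise zero-phase high-pass filtering.

    `filtfilt` provides the zero-phase behaviour; the paper obtains the same
    response with sparse banded matrices instead."""
    b, a = butter(order, cutoff, btype="highpass")
    img = img.astype(np.float64)
    vertical = filtfilt(b, a, img, axis=1)    # along each row -> vertical edges
    horizontal = filtfilt(b, a, img, axis=0)  # along each column -> horizontal edges
    return np.hypot(vertical, horizontal)     # combined edge magnitude
```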

2.5. Sylvester Matrix-Based Similarity Method (SMBSM) for Fault Detection

The Sylvester matrix is associated with two univariate polynomials with coefficients in a commutative ring [40]. This matrix helps to determine the common roots of the characteristic polynomials of the two images being compared. Hence, the similarity measure between the two images is given by the rank, or nullity, of the Sylvester matrix.

Two square 2D sub-images of identical dimensions are taken from the edge maps of the test and reference images. Their characteristic polynomials are obtained by evaluating the determinant of each sub-window minus λ times the identity matrix; these can be written as polynomials in λ whose coefficients are determined by the entries of the corresponding sub-window.

The Sylvester matrix of the two characteristic polynomials is then constructed by stacking two Toeplitz blocks whose entries are the coefficients of the respective polynomials [41].

The nullity and rank of the Sylvester matrix indicate the degree of closeness between the characteristics of the two sub-windows. For similar images, the nullity is equal to the number of columns of the matrix, and it is zero for totally dissimilar images. Conversely, the rank is zero for similar images and equal to the number of columns for images that are totally dissimilar. For small defects the rank is correspondingly small, and it rises as the defect intensity increases. The rank can therefore be used in a defect-intensity function to visualize the defective region, as it is directly proportional to the defect intensity. Hence, to reach the final labeling in our method, we adopt the rank of the Sylvester matrix.
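The sketch below shows how such a Sylvester matrix can be assembled from the characteristic polynomials of two square sub-windows (obtained here with `np.poly`) and how its rank and nullity are computed. The function names are ours, and the exact rank-to-defect-intensity mapping used in the paper is not reproduced.

```python
import numpy as np

def sylvester_matrix(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Sylvester matrix of two polynomials given as coefficient vectors
    (highest-degree coefficient first)."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n), dtype=np.result_type(p, q))
    for i in range(n):                  # n shifted rows of p's coefficients
        S[i, i:i + m + 1] = p
    for i in range(m):                  # m shifted rows of q's coefficients
        S[n + i, i:i + n + 1] = q
    return S

def window_similarity(win_ref: np.ndarray, win_test: np.ndarray):
    """Rank and nullity of the Sylvester matrix built from the characteristic
    polynomials of two square sub-windows (np.poly returns the coefficients
    of the characteristic polynomial of a square matrix)."""
    S = sylvester_matrix(np.poly(win_ref), np.poly(win_test))
    rank = int(np.linalg.matrix_rank(S))
    return rank, S.shape[1] - rank      # (rank, nullity)
```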

3. Results and Discussion

In this section, the findings of the simulation are presented. The algorithm proposed exhibits a significant improvement over the existing methods for defect detection in fabrics, as it is successful in identifying directional defects, under varying conditions of illumination.

The model was assessed in terms of robustness and stability using two datasets, KTH-TIPS-I and KTH-TIPS-II [42]. The fabric dataset includes around 500 samples captured under different illumination conditions and contrast settings, with skews, which creates a challenge for defect detection. Table 1 lists three samples, including the reference and test images from the dataset. Further, right at the beginning, all input images are resized to an identical resolution.

First, the image enhancement method described in Section 2.1 is applied to enhance the histogram of the test image, as shown in Table 2. The intermediate outputs after applying the image enhancement technique are depicted in Table 3. Next, the image registration process presented in Section 2.2 is applied to align the test image coordinates with those of the corresponding reference image, as revealed in Table 3. This preprocessing circumvents inaccuracies at the image subtraction stage, in which the lower and upper double-threshold values, obtained by manual tuning of the algorithm, are utilized.

In the textile industry, all defects need to be detected with as close to 100% accuracy as possible. With this objective in focus, the first priority is to keep the false-negative rate (missed defects) negligible. If false positives occur, the fabric is re-inspected or discarded even though it is defect-free, which costs effort but does not compromise quality. In this experiment, the false-positive and false-negative rates across the two datasets, KTH-TIPS-I and KTH-TIPS-II, are 4.2% and 0%, respectively, owing to the carefully fine-tuned double-threshold values used during the subtraction stage. After image subtraction, all of the test images (I1T, I2T, and I3T) are identified as defective fabrics. Hence, these images are passed on to the next stage, namely, edge detection.

In the next stage of the algorithm, edge detection is performed using SBFM, as shown in Table 4, for the reference and test images identified as defective during the image subtraction process. The advantage of this stage is that it enhances all the minor details of the images, which helps in detecting even the very fine defects present in the fabric.

The input parameters for the sparse banded high-pass filter design, namely the degree, the cut-off frequency, and the length of the input sequence, are selected by manual tuning and set to 3, 0.9, and 1024, respectively. Each row/column of the input image is zero-padded to match the sequence length given as input to the filter. From the analysis, it is clear that edge extraction using SBFM recovers even the finer details of the test and reference images, and the discontinuity in the extracted edges is low because the input parameters of the filter are well tuned and matched. This is evident in the images shown in Table 4.

A comparison of the similarity between the two images is performed using SMBSM. SMBSM is evaluated for several window sizes, and the window size yielding the minimum simulation time for the fault detection process is adopted. The reference and test images are first divided into small sub-windows, and each sub-window of the test image is compared against the sub-window at the coinciding location of the reference image. For each sub-window pair, the Sylvester matrix is computed, and its rank is used to determine the defects in the selected sub-window of the test image relative to the reference image. The process is repeated over the entire image to detect the faults, as shown in Table 4.
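A sketch of this windowed comparison is shown below; it tiles both edge maps into non-overlapping sub-windows and records the Sylvester-matrix rank of each pair as a defect-intensity score. The default window size of 8 pixels is only a placeholder, since the paper selects the window size that minimizes the fault-detection time.

```python
import numpy as np

def _sylvester(p, q):
    """Sylvester matrix of two coefficient vectors (highest degree first)."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):
        S[i, i:i + m + 1] = p
    for i in range(m):
        S[n + i, i:i + n + 1] = q
    return S

def defect_map(edge_ref: np.ndarray, edge_test: np.ndarray, w: int = 8) -> np.ndarray:
    """Tile both edge maps into w x w sub-windows at coinciding locations and
    record the rank of each pair's Sylvester matrix as a defect score."""
    rows, cols = edge_ref.shape
    scores = np.zeros((rows // w, cols // w))
    for i in range(0, (rows // w) * w, w):
        for j in range(0, (cols // w) * w, w):
            p = np.poly(edge_ref[i:i + w, j:j + w])   # characteristic polynomials
            q = np.poly(edge_test[i:i + w, j:j + w])
            scores[i // w, j // w] = np.linalg.matrix_rank(_sylvester(p, q))
    return scores
```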

To quantify the accuracy of the proposed algorithm, the Binary Similarity Measure given in (16) is used; it detects the dissimilarities between two binary images through a modified Hamming distance, computed with the logical exclusive-OR of corresponding pixels and normalized by the row and column dimensions of the images [43]. Its values range from 0, complete dissimilarity, to 1, perfect similarity. The actual and detected faults on the test image are represented as binary images, and the resulting similarity values for the test images are listed in Table 5.
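Since (16) is not reproduced here, the sketch below computes one plausible form of the measure: the fraction of pixels on which the actual and detected binary fault maps agree, i.e., one minus the normalized XOR (Hamming) disagreement.

```python
import numpy as np

def binary_similarity(actual: np.ndarray, detected: np.ndarray) -> float:
    """Fraction of pixels on which the two binary fault maps agree:
    1 minus the XOR (Hamming) disagreement normalized by the image size.
    1.0 means identical maps, 0.0 means complete disagreement."""
    disagreement = np.logical_xor(actual.astype(bool), detected.astype(bool))
    return 1.0 - float(disagreement.mean())
```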

The processing time of the proposed algorithm is about 2275 ms, which makes it faster than the existing methods discussed above. Furthermore, our algorithm achieved 93.4% accuracy, 95.8% precision, and 100% recall on average. The proposed method performs well even when the test images are captured under different illumination conditions and contain skews. The experiments presented here demonstrate the superiority of the proposed method.

4. Conclusions

In this paper, a method to identify defects in fabrics has been proposed, based on the Sylvester Matrix-Based Similarity Method (SMBSM). The method can handle misalignment and varying illumination in test images captured under different conditions, since image enhancement improves the quality of the test image and image registration ensures proper alignment between the reference and test images. Edge detection enables even very fine defects on the fabric to be identified during fault detection. Visual and quantitative results on the two datasets have demonstrated that the proposed method is robust and superior. In the future, more experiments will be conducted to further improve the accuracy of this method and to ensure that it is fast enough for real-time defect detection.

Data Availability

The dataset used in this research is freely available: M. Fritz, E. Hayman, B. Caputo, and J.-O. Eklundh, The KTH-TIPS Database. Available at: https://www.csc.kth.se/cvap/databases/kth-tips/download.html.

Conflicts of Interest

The authors declare that there is no conflict of interest.