Abstract

Automated textile inspection systems have drawn significant attention from researchers as a replacement for manual inspection, which is time consuming and insufficiently accurate. Such systems mainly involve two challenging problems, one of which is defect classification, and the amount of research addressing the classification problem remains inadequate. Scene analysis and feature selection play a very important role in the classification process: inadequate scene analysis results in an inappropriate set of features, and an inappropriate feature set increases the complexity of the subsequent steps and makes the classification task harder. Taking this observation into account, we present a possibly appropriate set of geometric features for neural network-based textile defect classification. We justify the features in terms of their discriminatory quality and the difficulty of extracting them, and we conduct experiments to show their utility. Our proposed feature set has achieved a classification accuracy of more than 98%, which appears to be better than results reported to date.

1. Introduction

The importance of quality control in industrial production is increasing day by day, and the textile industry is no exception. The accuracy of manual inspection is limited by fatigue and tediousness; it is also time consuming, so high quality cannot be maintained with manual inspection alone. The solution to the problem of manual inspection is an automated, that is, machine-vision-based, textile inspection system. Automated textile inspection systems have been drawing considerable attention from researchers in many countries for more than a decade. They mainly involve two challenging problems, namely, defect detection and defect classification. A lot of research has addressed the problem of defect detection, but the amount of research done to solve the classification problem is inadequate.

Defect classification involves multiple problem areas, since the classification process is composed of several steps. Scene analysis and feature selection is one of the important steps of this process. Inadequate scene analysis results in an inappropriate feature set, and an inappropriate set of features increases the complexity of the subsequent steps and makes the classification task harder. Selecting an appropriate set of features for a classification problem is very difficult. In an appropriate feature set, the discriminatory quality of the features is high and the number of features is small. Moreover, an appropriate set of features takes into account the difficulties of the feature extraction process and still results in acceptable performance [1].

Bangladesh, as a developing country, places special focus on export, through which a lot of foreign exchange is earned. The Bangladesh textile industry has been the major export sector and a very good source of foreign exchange; over 75% of the total export of Bangladesh during 2009-10 was from the textile sector [2]. Bangladesh mainly exports knit products, namely, shirts, T-shirts, pullovers, and so forth, and woven products, namely, trousers, shirts, blouses, and so forth, and the USA and EU countries are the main importers of these products [3]. The major strengths of the Bangladesh textile industry are cheap labor, low energy prices, and good-quality products. To sustain or increase its current level of performance in the highly competitive global market, the textile industry of Bangladesh should increase productivity as well as quality; that is, it should improve quality in the production process at as low a cost as possible. The quality of textile products is severely affected by defects. Failure to detect defects early is costly in terms of time, money, and consumer satisfaction, so early and accurate detection of defects in fabrics is an important aspect of quality improvement. Human visual inspection and automated inspection are compared in Table 1, taken from [4]. Moreover, it has been estimated in [5] that the price of textile fabric is reduced by 45-65% due to defects.

In this paper, we present a possibly appropriate set of geometric features to address the problem of defect classification. We justify the features in terms of their discriminatory qualities, considering the difficulties which lie in the feature extraction process, and we use a statistical approach to extract them. To demonstrate the utility of the geometric features, we conduct experiments with a counterpropagation neural network (CPN), which is operationally similar to a learning vector quantization network but quite different from a backpropagation network. We have found very promising results.

The rest of the paper is organized as follows. Section 2 describes the current state of solutions to the problem of textile defect inspection, and Section 3 describes the researchable issue addressed in this paper and its scope. In Section 4, the defects are analyzed and the features are presented and justified, along with our approach to extracting them. Section 5 describes how we apply our feature extraction process and what we find. The utility of the features is demonstrated in Section 6. In Section 7, we review machine-vision-based textile defect detection and classification results to develop an understanding of the merits of our proposed feature set. Finally, Section 8 concludes with the limitations of our work and the scope for future work.

2. Literature Review

The reduction of wastage, the higher price of fabrics containing fewer defects, lower labor requirements, and other benefits make the investment in an automated textile defect inspection system economically very attractive. The development of a fully automated web inspection system requires robust and efficient defect detection and classification algorithms. The inspection of real textile defects is particularly challenging due to the large number of textile defect classes, which are characterized by their vagueness and ambiguity.

A number of attempts have been made at automated, that is, machine-vision-based, textile defect inspection [5-23]. Most of them have concentrated on defect detection, while few have concentrated on classification. Mainly three defect-detection techniques have been deployed [6, 24], namely, statistical, spectral, and model-based. A number of techniques have been deployed for classification; among them, neural networks, support vector machines (SVMs), clustering, and statistical inference are notable. Scene analysis, that is, defect analysis, and feature selection are mainly relevant to the works [6-9, 12-14, 21], which have dealt with the multiclass problem, that is, categorizing defects distinctly.

Statistical inference is used for classification in [16, 17]. Cohen et al. [16] have used a statistical test, namely, the likelihood-ratio test, for classification. They have implemented binary classification, that is, categorization into defective and defect-free only. Campbell et al. [17] have used hypothesis testing for classification and have likewise implemented only defective versus defect-free classification. Binary classification of this kind does not serve the purpose of textile defect classification. Murino et al. [8] have used SVMs for classification, with features of three types extracted from the gray-scale histogram, the shape of the defect, and the co-occurrence matrix. Some of the features are such that the feature extraction process has become complex. The basic SVM scheme is designed for binary classification, so they implemented SVMs with a 1-vs-1 binary decision tree scheme in order to deal with the multiclass problem, that is, distinct categorization of defects. Campbell et al. [15] have used model-based clustering, which is not suitable enough for real-time systems such as automated textile inspection systems. Neural networks have been deployed as classifiers in a number of papers, trained with different learning algorithms. The backpropagation learning algorithm has been used in [6, 9, 12, 13]. Saeidi et al. [6] have trained their neural network by backpropagation so as to deal with the multiclass problem, but they have worked in the frequency domain for defect detection, using a spectral technique, namely, the Gabor transform. Karayiannis et al. [9] have used a backpropagation-trained neural network to solve the multiclass problem; they have used statistical texture features, but analysis of the defects and justification of the features have not been properly done. Kuo and Lee [12] have also used a backpropagation-trained neural network for the multiclass problem, with the maximum length, maximum width, and gray level of defects as features, but defect analysis and feature justification have been done only a little. Moreover, the number of features they used was too small; they found good classification accuracy because the sample size was also small. Mitropulos et al. [13] have trained their neural network by backpropagation for the multiclass problem, using first- and second-order statistical features, but defect analysis and feature justification have not been properly done. Again, the number of features was small, and the approach worked because the sample size was also small. The resilient backpropagation algorithm has been used in [7, 21] to train a neural network capable of dealing with the multiclass problem. The area, number of parts, and sharp factor of the defect have been used as features, but defect analysis has not been done and the features have been justified only a little. Moreover, the number of features was too small; it worked because the sample size was also small.
There is a strong possibility that the approaches described in [7, 12, 13, 21] will not achieve the desired result when the sample size is very large. Shady et al. [14] have used the learning vector quantization (LVQ) algorithm to train their neural networks, which have been implemented to handle the multiclass problem, that is, categorizing defects distinctly. They have worked separately on the spatial and frequency domains for defect detection; that is, they have separately used a statistical technique and a spectral technique, namely, the Fourier transform. In the case of the statistical technique, the row and column vectors of the images have been computed using a grid-measuring scheme, and statistical features, for example, the mean and median, are extracted from these vectors. They have done a little defect analysis, but justification of the features has not been done. Kumar [10] has used two neural networks separately. The first has been trained by the backpropagation algorithm and designed for binary classification, that is, categorization into defective and defect-free only; he has shown that an inspection system with this network is not cost effective. He has therefore further used a linear neural network trained by the least mean square error (LMS) algorithm. The inspection system with this neural network is cost effective, but it also cannot deal with the multiclass problem. Binary classification alone, that is, the inability to deal with the multiclass problem, does not serve the purpose of textile defect classification. Karras et al. [11] have also used two neural networks separately: one trained by the backpropagation algorithm and the other being Kohonen's self-organizing feature map (SOFM). They have used first- and second-order statistical texture features for both networks. Both networks handle only the binary classification problem, that is, categorization into defective and defect-free, which again does not serve the purpose of textile defect classification.

3. Researchable Issues and Scope Identification

The development of a machine vision system involves several steps, as shown in Figure 1. Each step affects the performance of the subsequent steps; a weakly designed and implemented step complicates the subsequent ones and makes development of the system harder. So each step is of great importance in the development of a machine vision system. The development of an automated, that is, machine-vision-based, textile inspection system, being a machine vision system, also involves the steps shown in Figure 1, and there are many researchable issues in each step. In this paper, we mainly focus on the first step, that is, scene analysis and feature selection. This task is very challenging and requires a lot of effort: selection of an inappropriate feature set increases the complexity of the subsequent steps, which makes system development, and especially the classification task, harder. At the beginning of the development process, a large number of scenes of various defective and defect-free fabrics of different colors should be analyzed. Each defect occurring in the fabrics should be properly analyzed in terms of its appearance and nature, which is challenging enough; this analysis facilitates the selection of features for classification. Each feature should then be properly justified in terms of its discriminatory quality and the complexity of extracting it, which is also very challenging. The result is an appropriate feature set, which makes the system perform well.

4. Approach and Methodology

We address the automated textile defect inspection problem. We investigated many possible approaches to accomplish this task and finally found the approach shown in Figure 2 to be optimal. Our approach starts with an inspection image of knitted fabric, which is converted into a gray-scale image. The image is then filtered in order to smooth it and remove noise.

The gray-scale histogram of the image is formed, and two threshold values are calculated from it. Using these threshold values, the image is converted into a binary image. This binary image contains the object (defect), if any exists, the background (defect-free fabric), and some noise, which is removed using thresholding. A feature vector is then formed by calculating a number of features of the defect. This feature vector is fed to an artificial neural network, trained earlier with a number of feature vectors, in order to classify the defect. Finally, the system outputs whether the image is defect-free or defective and, in the latter case, the name of the defect.

4.1. Defect Analysis

Defect analysis is a very important part of our approach to the automated textile defect inspection problem and has been done before all other parts. Defect analysis helps us understand the defects properly and gives clues to appropriate features. In this paper, we deal with four types of defects that frequently occur in knitted fabrics in Bangladesh, namely, color yarn, hole, missing yarn, and spot. All of these defects are shown in Figure 3 and discussed here.

(i) Color yarn: Figure 3(a) shows the defect of color yarn. Color yarn is one of the smallest and sneakiest defects occurring in knitted fabrics in Bangladesh. It appears in a shape close to a small rectangle of one color on a fabric of another color, and it becomes slightly blurred in its captured image.
(ii) Hole: Figure 3(b) shows the defect of hole. Hole is one of the most severe defects occurring in knitted fabrics in Bangladesh. It appears in a shape close to a circle of the color of the background on a fabric of another color. Its size varies from small to medium. The shape of a hole can become a little distorted, for example, oval, if an inappropriate viewpoint is chosen by positioning the camera improperly. The color of the background is another issue; in some cases, the background color can become close to the color of the fabric.
(iii) Missing yarn: Figure 3(c) shows the defect of missing yarn. Missing yarn is also one of the most severe defects occurring in knitted fabrics in Bangladesh. It appears as a thin striped shade of the color of the fabric and is usually long. It is of two types, namely, vertical and horizontal. Proper lighting is required in order to clearly capture an image of missing yarn.
(iv) Spot: Figure 3(d) shows the defect of spot. Spot is one of the most eccentric defects occurring in knitted fabrics in Bangladesh because of its appearance and nature. It does not appear in any specific shape; it usually appears in a scattered form of one color on a fabric of another color. Moreover, its size varies widely, that is, from medium to large. It becomes slightly blurred in its captured image in some cases and not in others. A high-resolution camera and proper lighting are required in order to clearly capture the image.

4.2. Terminology

We have adopted some special terms for ease of explanation and interpretation of our automated textile defect inspection problem and use them in the rest of the paper. Figure 4 illustrates these terms.
(i) Inspection image: the inspection image, or image, is the image to be inspected.
(ii) Defective region: a defective region is a maximal connected area of defect in an inspection image.
(iii) Defect-free region: a defect-free region is a maximal connected area in an inspection image that does not contain any defect.
(iv) Defect window: the defect window is the rectangle of minimum area that encloses all defective regions in an inspection image.

4.3. An Appropriate Set of Geometric Features

An appropriate set of geometric features is selected for classifying the defects. Geometric features describe different discriminatory geometric characteristics of the defect in the inspection image. The geometric features selected here are computationally simple to extract, and their discriminatory qualities are high. Based on the discussion of Section 4.1, each of these geometric features is discussed and justified below and is illustrated in Figure 5.

(i) Height of defect window, $H_{\mathrm{DW}}$: this is one of the noticeable discriminatory characteristics of the defects. According to the discussion of Section 4.1, the height of the defect window should be large for vertical missing yarn and small for horizontal missing yarn. The height of the defect window of color yarn should also be small. The height of the defect window of hole should vary from small to medium, whereas that of spot should vary from medium to large. Figure 6 shows typical values of the height of the defect window for all defect types; for the sake of space, only the important part of each 512 × 512-pixel image is shown rather than the entire image.

(ii) Width of defect window, $W_{\mathrm{DW}}$: this is also one of the noticeable discriminatory characteristics of the defects. According to the discussion of Section 4.1, the width of the defect window should be large for horizontal missing yarn and small for vertical missing yarn. The width of the defect window of color yarn should also be small. The width of the defect window of hole should vary from small to medium, whereas that of spot should vary from medium to large. Figure 7 shows typical values of the width of the defect window for all defect types; only the important part of each 512 × 512-pixel image is shown for the sake of space.

(iii) Height-to-width ratio of defect window, $R_{H/W}$: the size of the defect window gives a clue to another discriminatory characteristic of the defects, namely, the height-to-width ratio of the defect window, that is,
$$R_{H/W} = \frac{H_{\mathrm{DW}}}{W_{\mathrm{DW}}}. \tag{1}$$
According to the discussion of Section 4.1, $R_{H/W}$ should be much greater than 1 for vertical missing yarn and much less than 1 for horizontal missing yarn. For color yarn, $R_{H/W}$ should be less than 1, whereas it should be close to 1 for hole. $R_{H/W}$ can be anything, that is, less than, greater than, or equal to 1, for spot. Figure 8 shows typical values of $R_{H/W}$ for all defect types; only the important part of each 512 × 512-pixel image is shown for the sake of space.

(iv) Number of defective regions, $N_{\mathrm{DR}}$: this represents a characteristic distinguishing spot from the other defects. According to the discussion of Section 4.1, the number of defective regions for spot is more than 1 in most cases, whereas the number of defective regions for all other defects is 1. Figure 9 shows typical values of the number of defective regions for all defect types; only the important part of each 512 × 512-pixel image is shown for the sake of space.

(v) Total area of defective regions, $TA_{\mathrm{DR}}$: size is a noticeable discriminatory characteristic of the defects and is measured as the total area of the defective regions. That is, if the area of a defective region is denoted by $A_{\mathrm{DR}}$, then
$$TA_{\mathrm{DR}} = \sum_{i=1}^{N_{\mathrm{DR}}} A_{\mathrm{DR}_i}. \tag{2}$$
The total area of defective regions is fully independent of defect shape. According to the discussion of Section 4.1, the total area of defective regions of color yarn should be small. The total area of defective regions of hole should vary from small to medium, whereas that of missing yarn should vary from medium to large. The total area of defective regions of spot should also vary from medium to large. Figure 10 shows typical values of the total area of defective regions for all defect types; only the important part of each 512 × 512-pixel image is shown for the sake of space.

(vi) Relative total area of defective regions, $RTA_{\mathrm{DR}}$: the shape and size of the defect within the defect window give clues to some noticeable discriminatory characteristics of the defects. One of these is the relative total area of defective regions, that is, the total area of defective regions relative to the area of the defect window:
$$RTA_{\mathrm{DR}} = \frac{TA_{\mathrm{DR}}}{A_{\mathrm{DW}}} = \frac{\sum_{i=1}^{N_{\mathrm{DR}}} A_{\mathrm{DR}_i}}{H_{\mathrm{DW}} \times W_{\mathrm{DW}}}. \tag{3}$$
Depending on the variation in the shape and size of the defect and in the height and width of the defect window, there should also be some variation in the relative total area of defective regions for all defect types. Since hole is of almost circular shape, the values of the relative total area of defective regions for this defect type should converge around some point. Again, color yarn and missing yarn have rectangular shapes, so for them the values should converge around some other points. The difficult case is spot, because of its eccentric appearance and nature: since spot does not appear in any specific shape but rather in a scattered form, and its size varies widely, its values of the relative total area of defective regions should fluctuate. Figure 11 shows typical values of the relative total area of defective regions for all defect types; only the important part of each 512 × 512-pixel image is shown for the sake of space.

(vii) Relative centroid of defective regions, $RC_{\mathrm{DR}}$: the shape and size of the defect within the defect window also give a clue to the relative centroid of defective regions. In fact, it is composed of two characteristics, namely, the x-coordinate and y-coordinate of the centroid of the defective regions relative to the top-most, left-most point of the defect window. We consider the image pixels as points in the (x, y)-plane, where the top-left-corner pixel of the image is the origin. By translating the origin to the top-left-corner pixel of the defect window, the centroid of the defective regions is computed; this is the relative centroid of defective regions. That is, if there are in total n pixels in the defective regions, whose old coordinates are $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$ and new coordinates are $(x'_1, y'_1), (x'_2, y'_2), \ldots, (x'_n, y'_n)$, respectively, and the old coordinates of the top-left-corner pixel of the defect window are $(\alpha, \beta)$, as shown in Figure 5, then
$$RC_{\mathrm{DR}} = \left(\frac{\sum_{i=1}^{n} x'_i}{n}, \frac{\sum_{i=1}^{n} y'_i}{n}\right) = \left(\frac{\sum_{i=1}^{n} (x_i - \alpha)}{n}, \frac{\sum_{i=1}^{n} (y_i - \beta)}{n}\right). \tag{4}$$
Depending on the variation in the shape and size of the defect, there should also be some variation in the relative centroid of defective regions for all defect types. Since hole is of almost circular shape and its size varies from small to medium, the values of the relative centroid of defective regions for this defect type should converge around some point. Again, color yarn has a rectangular shape and small size, so its values should converge around another point. Although missing yarn also has a rectangular shape, its height is large for vertical missing yarn and its width is large for horizontal missing yarn, so the values of the relative centroid of defective regions should converge around some other point for this defect type. The difficult case is again spot, because of its eccentric appearance and nature: since spot does not appear in any specific shape but rather in a scattered form, and its size varies from medium to large, its values of the relative centroid of defective regions are not expected to converge around any point. Figure 12 shows typical values of the relative centroid of defective regions for all defect types; only the important part of each 512 × 512-pixel image is shown for the sake of space.
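To make the feature definitions above concrete, the following is a minimal sketch of how the seven geometric features could be computed from a binary defect image with NumPy and SciPy. The function name, the 8-connectivity choice, and the use of scipy.ndimage.label are our illustrative assumptions, not part of the original implementation.

```python
import numpy as np
from scipy import ndimage

def extract_geometric_features(binary):
    # Label 8-connected defective regions; n_dr is the number of regions (N_DR).
    labeled, n_dr = ndimage.label(binary, structure=np.ones((3, 3)))
    if n_dr == 0:
        return None  # no defective region: defect-free image
    ys, xs = np.nonzero(binary)
    top, left = ys.min(), xs.min()               # (beta, alpha): defect-window corner
    h_dw = int(ys.max() - top + 1)               # height of defect window, H_DW
    w_dw = int(xs.max() - left + 1)              # width of defect window, W_DW
    r_hw = h_dw / w_dw                           # height-to-width ratio, eq. (1)
    ta_dr = int(binary.sum())                    # total area of defective regions, eq. (2)
    rta_dr = ta_dr / (h_dw * w_dw)               # relative total area, eq. (3)
    rc_dr = (xs.mean() - left, ys.mean() - top)  # relative centroid (x, y), eq. (4)
    return h_dw, w_dw, r_hw, n_dr, ta_dr, rta_dr, rc_dr
```

For a defect-free image the sketch returns None; otherwise the returned values can be assembled into the feature vector described in Section 5.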

5. Research Findings

We start with an inspection image of knitted fabric of size 512 × 512 pixels, which is converted into a gray-scale image. In order to smooth the image and remove noise, it is filtered by a 7 × 7 low-pass convolution mask. The gray-scale histogram of the image is then formed, and two threshold values, $\theta_L$ and $\theta_H$, are calculated from this histogram using the histogram peak technique [25]. Using the two threshold values $\theta_L$ and $\theta_H$, the image with pixels $p(x, y)$ is converted into a binary image with pixels $b(x, y)$, where
$$b(x, y) = \begin{cases} 1, & \text{if } \theta_L \le p(x, y) \le \theta_H, \\ 0, & \text{otherwise.} \end{cases} \tag{5}$$
This binary image contains the object (defect), if any exists, the background (defect-free fabric), and some noise. This noise is smaller than the minimum defect we intend to detect. In our approach, we intend to detect a defect of minimum size 3 mm × 1 mm, so any object smaller than the minimum defect size in pixels is eliminated from the binary image. If the minimum defect size in pixels is $\theta_{\mathrm{MD}}$ and an object with pixels $o(x, y)$ is of size $S_o$ pixels, then
$$o(x, y) = \begin{cases} 1, & \text{if } S_o \ge \theta_{\mathrm{MD}}, \\ 0, & \text{otherwise.} \end{cases} \tag{6}$$
A number of features of the defect are then calculated, and they form the feature vector corresponding to the defect in the image. Figure 13 shows the image after each step. We have applied our approach to one hundred 512 × 512-pixel color images of knitted fabrics, and it has worked well for every image. The feature values we obtained behave as argued in Section 4.3.
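As an illustration of this preprocessing chain, the sketch below converts a color image to gray scale, smooths it with a 7 × 7 low-pass mask, applies the two thresholds of (5), and removes objects smaller than the minimum defect size as in (6). The thresholds theta_l and theta_h are assumed to come from the histogram-peak technique of [25] (not reimplemented here), and min_defect_pixels stands in for the 3 mm × 1 mm minimum defect size expressed in pixels; both are illustrative assumptions, as is the function name.

```python
import numpy as np
from scipy import ndimage

def binarize_inspection_image(rgb, theta_l, theta_h, min_defect_pixels):
    gray = rgb.mean(axis=2)                        # gray-scale conversion
    smooth = ndimage.uniform_filter(gray, size=7)  # 7 x 7 low-pass convolution mask
    # Eq. (5): pixels between the two histogram-derived thresholds are object (defect).
    binary = (smooth >= theta_l) & (smooth <= theta_h)
    # Eq. (6): eliminate any object smaller than the minimum defect size in pixels.
    labeled, n = ndimage.label(binary)
    if n == 0:
        return binary.astype(np.uint8)
    sizes = ndimage.sum(binary, labeled, index=np.arange(1, n + 1))
    keep_labels = np.flatnonzero(sizes >= min_defect_pixels) + 1
    return np.isin(labeled, keep_labels).astype(np.uint8)
```

The resulting binary image can be passed directly to the feature extraction sketch of Section 4.3.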

6. Demonstration of Utility of Research Findings

We have deployed a CPN in order to classify the defects and have found very promising results. The features discussed in Section 4.3 contain so much distinguishing information that we have been able to successfully classify the defects with only the first four features, namely, $H_{\mathrm{DW}}$, $W_{\mathrm{DW}}$, $R_{H/W}$, and $N_{\mathrm{DR}}$. This is because the values of the same feature converge to a particular point for each class, and these points are distant enough from each other for the features discussed. We will obviously need a larger subset or all of the features when the sample size becomes very large. We worked with 6 types of samples with a total population of 100, among which 33 are defect-free samples. The distribution of these samples in the different categories is shown in Table 2. We also considered variations of color among the samples of each type. For example, among the 16 vertical missing yarn samples, there are samples of 7 different colors, namely, Demitasse, Navy, Green, Pink, Red, White, and Stone. Such color variations among the samples, included to reflect the real-life scenario more closely, increased the complexity of detecting and classifying the defects.

The CPN deployed contains four computing units in the input layer, twelve computing units in the hidden layer, and six computing units in the output layer. Each computing unit in the output layer corresponds to one class: the defect types, with vertical and horizontal missing yarn considered separately, plus the defect-free class. The extracted features have values of different ranges; for example, the maximum value can be 512 for $H_{\mathrm{DW}}$ or $W_{\mathrm{DW}}$, whereas that of $N_{\mathrm{DR}}$ can be much less than 512. This causes an imbalance among the differences of feature values across defect types and makes the classification task difficult. In our context, the features are therefore scaled as shown in (7) in order to have a proper balance among the differences of feature values across defect types. If $H'_{\mathrm{DW}}$, $W'_{\mathrm{DW}}$, $R'_{H/W}$, and $N'_{\mathrm{DR}}$ represent the scaled values of the features $H_{\mathrm{DW}}$, $W_{\mathrm{DW}}$, $R_{H/W}$, and $N_{\mathrm{DR}}$, respectively, then
$$H'_{\mathrm{DW}} = \frac{H_{\mathrm{DW}}}{512} \times 100, \quad W'_{\mathrm{DW}} = \frac{W_{\mathrm{DW}}}{512} \times 100, \quad R'_{H/W} = 100 \times R_{H/W}, \quad N'_{\mathrm{DR}} = \frac{500 \left(N_{\mathrm{DR}} - 1\right) \times 10}{999}. \tag{7}$$
The feature vectors are split into two parts: one part, consisting of 53 feature vectors, is used for both training and testing the CPN, and the other part, consisting of the remaining feature vectors, is used for testing only. The target value is set to 1 for the corresponding class and 0 for the other classes. The CPN is trained with the conditions that the maximum number of training cycles is 1,000,000, large enough to find a solution, and the maximum tolerable error is less than $10^{-3}$. Learning constants of 0.3 and 0.01 are used for phase I and phase II, respectively. Training is completed in 196 cycles with an error of $9.72712 \times 10^{-4}$. The CPN is then tested with all the feature vectors of both parts, and a good accuracy of 98.99% is achieved. All feature vectors are then split again into two parts: the first fifty percent of the new training part comes from the previous training part and the remaining fifty percent from the previous testing-only part, while all other feature vectors form the new testing-only part. The CPN is trained and tested with this newly split set of feature vectors. In this way, the CPN is trained and tested 5 times in total, and good accuracy is found every time. The detection and classification performance for the different types of defects observed in experiment number V is shown in Table 3, and Table 4 summarizes the results obtained in all five experiments. In each experiment, the network size and the network parameters, that is, the learning constants, have been determined empirically. It should be noted that the 33 defect-free samples have subtle variations in color, texture, and other aspects, but those variations should be tolerated as defect-free. Spot-type defects have wide variation, as shown in Figure 3(d), and for this reason our approach failed in some cases to classify them correctly. The underlying cause of such variation is that a spot may arise from a variety of sources such as sticky dirt and oil marks. A large number of samples of spot-type defects, originating in different environments, should therefore be used.
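For readers unfamiliar with counterpropagation networks, the following is a minimal NumPy sketch of a forward-only CPN with the 4-12-6 topology used here, trained in the two phases mentioned above (a competitive Kohonen phase with learning constant 0.3 and an outstar Grossberg phase with learning constant 0.01). The class name, the weight initialization, and the fixed cycle counts are illustrative assumptions; this is not the exact training procedure or stopping criterion of our experiments.

```python
import numpy as np

class CounterPropagationNet:
    """Forward-only CPN: a Kohonen (competitive) layer feeding a Grossberg (outstar) layer."""

    def __init__(self, n_in=4, n_hidden=12, n_out=6, seed=0):
        rng = np.random.default_rng(seed)
        self.kohonen = rng.random((n_hidden, n_in))   # phase-I weights
        self.grossberg = np.zeros((n_out, n_hidden))  # phase-II weights

    def _winner(self, x):
        # Winner-take-all: the hidden unit whose weight vector is closest to x.
        return int(np.argmin(np.linalg.norm(self.kohonen - x, axis=1)))

    def train(self, X, T, alpha=0.3, beta=0.01, cycles=1000):
        # Phase I: unsupervised clustering of the scaled feature vectors.
        for _ in range(cycles):
            for x in X:
                j = self._winner(x)
                self.kohonen[j] += alpha * (x - self.kohonen[j])
        # Phase II: supervised outstar learning of the 1-of-6 class targets.
        for _ in range(cycles):
            for x, t in zip(X, T):
                j = self._winner(x)
                self.grossberg[:, j] += beta * (t - self.grossberg[:, j])

    def predict(self, x):
        # The predicted class is the largest output of the winning unit's outstar weights.
        return int(np.argmax(self.grossberg[:, self._winner(x)]))
```

In this sketch, X would hold the feature vectors scaled as in (7) and T the 1-of-6 target vectors described above; in our experiments training instead stops once the error falls below $10^{-3}$ or after 1,000,000 cycles, whichever comes first.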

7. Comparative Performance Analysis

In order to assess the merits of our proposed feature set in classifying textile defects, let us compare our results with some recently reported relevant research results. It should be noted that the assumptions made by researchers in collecting samples and in reporting the results of processing those samples have serious implications for any attempt at comparative performance evaluation. The review of the literature reveals that most research reports are limited to demonstrating the concept of a machine-vision approach to the detection and classification of textile defects, without adequate numerical results or comparison with similar works. Moreover, the absence of a common database of textile defect samples makes it difficult to have a fair comparison of the merits of different algorithms. A similar observation has been shared by Kumar in his recently published survey on computer-vision-based fabric defect detection [24]. Kumar has also mentioned in his survey conclusion that, although the last few years have shown some encouraging trends in textile defect detection research, systematic and comparative performance evaluation based on realistic assumptions is not sufficient. Despite such limitations, we have made an attempt to review numerical results related to textile defect detection and classification in order to assess the comparative merits of our work.

Abouelela and his fellow researchers have reported that their proposed algorithm successfully detects 91% of textile defects [22]. It has been reported by Murino and his team that their algorithm achieves on average 92% accuracy in classifying textile defects [8], although for certain types of defects the classification accuracy is much lower than this average. Examples of the classification accuracy for different types of defects as reported in [8] are shown in Table 5.

The research findings reported in [7] mention the achievement of 80% textile defect detection accuracy. The performance of the Gabor filter in detecting textile defects as reported in [6] is shown in Table 6; the detection accuracy obtained with the Gabor filter in [6] does not appear to be satisfactory. Work on defect detection and classification of web textile fabric using multiresolution decomposition and neural networks has reported 85% accuracy.

The detection and classification of defects in knitted fabric structures as reported in [14] appears to be very similar to our work. That work has reported approximately 90% accuracy in defect detection.

In [10], Kumar has reported the development of a feed-forward neural network (FFN)-based approach for textile defect segmentation. It has been mentioned that several attempts to reduce the computational requirements yielded successful results. It was also reported that tests conducted on different types of defects and different styles of fabric showed that the FFN-based technique was efficient and robust for a variety of textile defects. However, due to the absence of reported numerical results, a closer performance comparison could not be made.

Kumar, in a comprehensive survey [24] on computer-vision-based fabric defect detection, has found that more than 95% accuracy appears to be the industry benchmark. In this survey of about 150 papers, he reports that a quantitative comparison between the various defect detection schemes is difficult, as the performance of each of these schemes has been assessed and reported on fabric test images with varying resolution, background texture, and defects.

With respect to such observations, our obtained accuracy of more than 98% appears to be quite good. As we have mentioned before, due to the lack of uniformity in the image data sets, in performance evaluation, and in the nature of the intended application, it is not prudent to compare the merits of our approach with other works explicitly. Therefore, it may not be unfair to claim that our proposed features carry enough distinguishing information to detect and classify textile defects with very encouraging accuracy.

8. Conclusion and Future Work

In this paper, we have presented a possibly appropriate feature set for solving the textile defect classification problem. We have justified the features in terms of their distinguishing qualities and used a statistical feature extraction technique to extract them. Their values have turned out as we anticipated. The utility of the features has been demonstrated with a CPN model, which classifies the defects with almost 99% accuracy, a result that appears to be far better than results reported to date.

We have found that the first four features are sufficient to successfully classify the defects for this sample size, which is not very large. Moreover, while the images were being acquired, the lighting was not good enough and the quality of the captured images was not high. Work is in progress to use a subset or all of the presented features to successfully classify the defects for a very large sample of high-quality images.