Abstract

A novel inspection sensor using an edge feature description (EFD) algorithm based on a support vector machine (SVM) is proposed for the industrial inspection of images. The method detects blurred images and adaptively segments them with the proposed algorithm, which uses the EFD to classify blurred samples effectively and improves on conventional methods for inspecting blurred objects; the algorithm selects suitable features and tunes them optimally. The proposed sensor applies a suitable feature-extraction strategy on the basis of the sensing results. Experimental results demonstrate that the proposed method outperforms existing methods.

1. Introduction

Vision-based techniques are being increasingly used in industrial inspection. Effective inspection of blurred objects has long been a challenge, and object blurring is one of the prime causes of poor performance in vision-based inspection. Object blurring occurs mainly because of movement of the object and defocusing and shaking of the camera. Several deblurring methods have been proposed to address object motion [1] and camera shake [2]. Xu et al. [3] proposed an optical flow-based model to simultaneously correct both types of blur. However, the optical flow algorithm requires a basic assumption: the pixel intensity or color does not change when the pixel flows from one image to another. Therefore, a technique that effectively recognizes blurred objects without any assumptions is necessary. This paper proposes a feature description-based sensor and an adaptive image segmentation method for inspecting blurred objects. Seeded region growing (SRG) [4] is one of the several image segmentation techniques currently available. The proposed method comprises an edge feature description (EFD) and a support vector machine (SVM) that use the regions produced by the adaptive SRG-based algorithm for effectively categorizing the objects.

Edge detection is an essential preprocessing technique in vision-based applications. Evaluations and comparisons of edge detection techniques are found in the literature [5]. Some frequently used methods detect edges on the basis of abrupt changes in the gray level [6, 7]; however, orienting the edges by using these methods is difficult. Recently, Liu and Fang [8] proposed an ant colony optimization- (ACO-) based approach for edge detection. They adopted a user-defined threshold in the pheromone update process to suppress noise in the detected image. Silva Jr. et al. [9] modified the gravitational edge detection technique [10] with a nonstandard neighborhood configuration [11] to reduce the speckle noise in synthetic aperture radar images. For real-time edge detection, Khan et al. [12] integrated a range sensor on a field-programmable gate array (FPGA) and successfully executed image normalization along with edge detection for real-time video processing. In manufacturing, blurred edge detection is also a critical issue. Thus, the EFD-based method is proposed to address blurred edge detection for the industrial inspection of blurred images.

Several relevant studies have explored vision-based inspection for industrial applications [13, 14]. Gracia et al. [13] developed an inspection system and process for herb flowers. Weyrich et al. [14] developed an industrial vision-based automatic inspection system for welded nuts on support hinges. In addition, image segmentation is a fundamental problem that must be addressed in vision-based inspection. Aiteanu et al. [15] proposed content-based threshold adaptation for segmenting images to discover the optimal threshold value. However, inaccuracies in this procedure distort the results. Thus, this paper proposes an adaptive method based on SVM assessments and an EFD algorithm in the region growing sequence. This strategy segments images without computing the assessment function for every pixel added in the region.

Building on these studies, the proposed method uses adaptive region growing (ARG) segmentation and an SVM combined with an EFD algorithm for inspection in manufacturing. The proposed method makes the following contributions. During industrial inspection, the proposed sensor senses blurred objects and, on the basis of the sensing results, suitably tunes the selection of the feature-extraction strategy. In object classification, the EFD-based inspection sensor effectively recognizes blurred objects without any assumptions. Finally, the EFD-based algorithm improves on the conventional methods of inspecting blurred objects.

The remainder of this paper is organized as follows. Section 2 describes the proposed EFD-based method, including the ARG segmentation and the EFD algorithm, and then introduces the EFD-based algorithm for industrial inspection. Section 3 presents the experimental results obtained using various samples and comparisons with existing methods. The final section presents the conclusion.

2. Proposed Method

2.1. ARG Segmentation and EFD Algorithm

An ARG-based algorithm uses a set of initial seeds and groups neighboring pixels with each initial seed into the growth regions when the pixels satisfy the selection criterion given in (1), where g_i is the normalized gray level of the ith pixel in one of the regions and T is the adjustable threshold of the region, which depends on the result of the proposed recognition. For inclusion in one of the regions, a pixel must be eight-connected to at least one pixel in that region. The regions are merged if a pixel is connected to more than one region.
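
For illustration, a minimal sketch of the ARG growth step is given below in Python. The image is assumed to be normalized to [0, 1], the seeds are assumed to take the value 1, and the intensity test used here (closeness of a neighboring pixel to the seed value within T) is an illustrative stand-in for the selection criterion in (1); function and variable names are hypothetical.

    import numpy as np
    from collections import deque

    def grow_region(img, seeds, T):
        """Grow an 8-connected region from seed pixels of a normalized gray image.

        img   : 2-D float array with gray levels in [0, 1]
        seeds : list of (row, col) coordinates whose value is taken as 1
        T     : adjustable threshold of the region (illustrative criterion)
        """
        h, w = img.shape
        region = np.zeros((h, w), dtype=bool)
        queue = deque(seeds)
        for r, c in seeds:
            region[r, c] = True
        neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                     (0, 1), (1, -1), (1, 0), (1, 1)]   # 8-connectivity
        while queue:
            r, c = queue.popleft()
            for dr, dc in neighbors:
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not region[rr, cc]:
                    # illustrative selection test: pixel close enough to the seed value 1
                    if abs(img[rr, cc] - 1.0) <= T:
                        region[rr, cc] = True
                        queue.append((rr, cc))
        return region

The queue-based formulation avoids re-evaluating the criterion for pixels already accepted into the region; in the full algorithm, regions grown from different seeds are merged whenever a newly added pixel is connected to more than one region.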

This paper proposes a new method for the EFD of a segmented image. According to the 3 × 3 mask depicted in (2), edge pixels in the segmented images typically belong to one of the eight possible edge patterns given in (3)–(10).

Pattern 1

Pattern 2

Pattern 3

Pattern 4

Pattern 5

Pattern 6

Pattern 7

Pattern 8

In each edge pattern, the nine pixels can be divided into two groups, one containing the pixels valued 1 and the other the pixels valued 0. For Edge Patterns 1–4, a three-component feature vector V was used for edge description; for Edge Patterns 5–8, two three-component feature vectors, V1 and V2, were used. For example, a value of 1 is set for the initial seeds in the ARG-based algorithm, so the pixels of the two groups take the values 1 and 0, respectively. Thus, for Edge Patterns 1, 2, 3, and 4, V is (3, 3, 0), (2, 2, 2), (0, 3, 3), and (2, 2, 2), respectively, and for Edge Patterns 5, 6, 7, and 8, the pairs (V1, V2) are ((1, 2, 3), (3, 2, 1)), ((3, 2, 1), (3, 2, 1)), ((3, 2, 1), (1, 2, 3)), and ((1, 2, 3), (1, 2, 3)), respectively. This approach is summarized as follows. (1) Calculate V at each pixel of the binary image. (2) Record the pattern if V is (3, 3, 0), (2, 2, 2), or (0, 3, 3). (3) Record V as V1 and calculate V2 if V is (1, 2, 3) or (3, 2, 1). (4) Record the pattern if V2 is (1, 2, 3) or (3, 2, 1).

After all pixels in an image were processed using the aforementioned procedure, the edges were classified using the recorded feature vectors. An SVM was used for the classification [16]. The edge descriptor obtained from the feature description consists of seven coefficients, namely, the normalized numbers of edges belonging to Edge Patterns 1, 2 (merged with 4), 3, 5, 6, 7, and 8; each coefficient ranges from 0 to 1. Table 1 lists the edge counts of Patterns 1–8 and the corresponding edge descriptors of the samples, in which the edge counts were normalized to form the seven coefficients.
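
A compact sketch of the EFD extraction is given below in Python. It counts the eight edge patterns over every 3 × 3 neighborhood of the segmented binary image and normalizes the counts into the seven-coefficient descriptor (Patterns 2 and 4 share the same vector and are merged). The mapping of a neighborhood to the feature vectors (row sums for V, column sums for V2) and the normalization by the total edge count are assumptions of this sketch rather than the paper's exact definitions.

    import numpy as np

    # reference tuples of the eight edge patterns (Patterns 2 and 4 share (2, 2, 2))
    ROW_PATTERNS = {(3, 3, 0): 1, (2, 2, 2): 2, (0, 3, 3): 3}               # Patterns 1-4
    DIAG_PATTERNS = {((1, 2, 3), (3, 2, 1)): 5, ((3, 2, 1), (3, 2, 1)): 6,
                     ((3, 2, 1), (1, 2, 3)): 7, ((1, 2, 3), (1, 2, 3)): 8}  # Patterns 5-8

    def edge_descriptor(binary):
        """Count edge patterns in a segmented binary (0/1) image and normalize
        them into a seven-coefficient descriptor (Patterns 1, 2/4, 3, 5, 6, 7, 8).
        Row/column sums as feature vectors are an assumption of this sketch."""
        counts = {k: 0 for k in (1, 2, 3, 5, 6, 7, 8)}
        h, w = binary.shape
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                win = binary[r - 1:r + 2, c - 1:c + 2]
                V = tuple(int(s) for s in win.sum(axis=1))        # step (1): row sums
                if V in ROW_PATTERNS:                             # step (2): Patterns 1-4
                    counts[ROW_PATTERNS[V]] += 1
                elif V in ((1, 2, 3), (3, 2, 1)):                 # step (3): candidate diagonal
                    V2 = tuple(int(s) for s in win.sum(axis=0))   # step (4): column sums
                    if (V, V2) in DIAG_PATTERNS:
                        counts[DIAG_PATTERNS[(V, V2)]] += 1
        total = sum(counts.values()) or 1
        return np.array([counts[k] / total for k in (1, 2, 3, 5, 6, 7, 8)])

The resulting seven-dimensional vector can then be fed directly to the SVM classifier.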

For the SVM classifier, two parameters, the penalty parameter C and the RBF kernel parameter γ, must be optimized. Parameter C is a user-specified positive value that controls the tradeoff between model complexity and training error. This study adopted the hold-out procedure for determining the two parameters, in which the samples were divided into training samples, on which the classifier was trained, and the remaining samples, on which the classifier accuracy was tested. Table 2 presents the class labels of the samples. The edge descriptor extracted from each image was used as the input data set. In this study, 140 data sets of each sample were used, and the SVM was trained and tested using these data sets: 40 data sets were randomly selected as training samples, and the others were used for evaluating the accuracy of the SVM classifier. Table 3 records the testing accuracy at various combinations of the two parameters; the best testing accuracy was realized at the selected combination of C and γ. Table 4 lists the classification results for different sample sizes; sample sizes of 140, 280, and 400 per class had no apparent effect on the classification results.
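
For reference, the hold-out tuning of C and γ can be sketched with scikit-learn as follows; X is the array of edge descriptors, y holds the class labels, and the candidate grids are illustrative rather than the values reported in Table 3.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    def tune_svm(X, y, n_train=40, seed=0):
        """Hold-out search over the penalty parameter C and RBF parameter gamma."""
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=n_train, random_state=seed, stratify=y)
        best = (None, None, 0.0)
        for C in (0.1, 1, 10, 100):            # illustrative candidate grid
            for gamma in (0.01, 0.1, 1, 10):
                clf = SVC(C=C, gamma=gamma, kernel="rbf").fit(X_tr, y_tr)
                acc = clf.score(X_te, y_te)    # testing accuracy on the held-out samples
                if acc > best[2]:
                    best = (C, gamma, acc)
        return best

A grid search with cross-validation (GridSearchCV) would serve equally well; the explicit loop is kept here only to mirror the hold-out procedure described above.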

2.2. EFD-Based Algorithm for Industrial Inspection

This section describes the inspection sensor and the EFD-based approach. As an example of industrial inspection, the EFD method was applied to inspect eyeglass lenses; the inspection task is to determine the degree of the lenses. This study used the EFD-based inspection sensor for effectively recognizing blurred objects. Inspecting blurred objects during manufacturing is difficult because the inspector may fail to focus the blurred image on the target panel. Therefore, developing an inspection sensor that bridges the gap between the blurred image and the inspector is essential.

Figure 1 is the block diagram of the inspection sensor. The proposed sensor senses blurred objects and applies a suitable feature-extraction strategy on the basis of the sensing results. The sensor's procedure is summarized as follows. (1) Input the sample images from the image queue. (2) Determine whether the image satisfies the condition for blurred-image processing. (3) Perform EFD-based extraction. (4) Perform SVM classification. (5) Determine whether any image remains in the image queue. As shown in Table 2, for inspecting 200 Class B sample images, the threshold T and the edge descriptors were set to 0.78 and to the optimal values for Class B samples, respectively. The input images were converted to 1024 × 768 pixel images with an 8-bit gray level. The 256 gray levels were normalized in the range 0–1, and the images were segmented using the ARG. Step (2) determines the processing path according to the condition d > d_th, where d is the displacement of the sensing device and d_th is the displacement threshold (0.5 mm in this study). If the sample image satisfies this condition, Step (3) commences; otherwise, discrete wavelet transform- (DWT-) based processing is performed. The DWT-based processing comprises DWT feature extraction and SVM classification, as described in a previous vision-based study [17]. The threshold and the edge descriptors were then reset, and the other samples were inspected similarly; the inspection is complete when no image remains in the image queue.
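
The sensing procedure of Figure 1 can be sketched as the following loop, in which all callables (the displacement reading, the two feature extractors, and the trained classifier) are caller-supplied stand-ins for the stages described above.

    def inspect_queue(image_queue, displacement_of, efd_extract, dwt_extract,
                      classifier, d_th=0.5):
        """Route each queued image to EFD- or DWT-based feature extraction
        according to the sensed displacement (threshold d_th in mm), then classify.

        displacement_of(img) returns the sensing-device displacement for that frame;
        efd_extract / dwt_extract return feature vectors; classifier exposes a
        scikit-learn style predict method.
        """
        results = []
        while image_queue:                                    # step (5): until the queue is empty
            img = image_queue.pop(0)                          # step (1): next sample image
            if displacement_of(img) > d_th:                   # step (2): blurred-image condition
                feats = efd_extract(img)                      # step (3): EFD-based extraction
            else:
                feats = dwt_extract(img)                      # DWT-based extraction [17]
            results.append(classifier.predict([feats])[0])    # step (4): SVM classification
        return results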

Figure 2 depicts the experimental setup. The target panel, installed on a platform, indicated the degrees of the lens. The lens was mounted on the support frame of a sensing telescope 10.67 m from the target panel. Data on the test lens are listed in Table 2; the lens was selected from the 100 validation samples. During inspection, an eyeglass lens of unknown degree was mounted on the telescope, and the surface light from the platform illuminated the target panel. To produce camera shake, a spring-dashpot system was positioned below the telescope, and vibrations were manually induced. An accelerometer placed behind the telescope measured the strength of the vibrations and forwarded the signals to an industrial computer, which converted them into displacements and processed them together with the image signal from a charge-coupled device (CCD) camera. Conventionally, the telescope lens is manually focused on the target panel. The proposed method processes the telescopic images captured by the CCD camera according to the object-sensing results, an approach that quickly determines the degree of the lens and thereby solves the object-blur problem encountered when focusing the sensing telescope.

As shown in Figure 3, the proposed algorithm applies the following steps to automatically obtain a suitable threshold T and suitable edge descriptors.

Step 1. The input images are tested using a given number of candidate edge descriptors.

Step 2. A value of 1 is set for the initial seeds in the gray image, and the threshold T is set sequentially to values in the range 0.01–0.99.

Step 3. ARG segmentation is implemented.

Step 4. EFD-based extraction is implemented.

Step 5. Images are classified using SVM.

Step 6. The recognition rate of each adjustable threshold is determined for the given images; it is defined as R = Nc/Nt, where Nc is the number of correctly classified images during the test run and Nt is the total number of test data sets (in this study, Nt was 100). If the recognition rate exceeds a given value R0, Step 7 commences; otherwise, Steps 2–6 are repeated.

Step 7. The process terminates once the sample images have been tested with all cases of the given number of descriptors; otherwise, Steps 1–6 are repeated. In addition, the algorithm stops if no threshold satisfies the condition in Step 6.

For example, Step 1 inputs the sample images and three candidate descriptors. Step 2 sequentially sets T in the range 0.01–0.99, and the ARG segmentation is implemented (Step 3). Step 4 extracts the edge features, which are classified in Step 5. When R0 is 0.9 (i.e., a 90% accuracy rate), Steps 2–6 are repeated until T exceeds 0.77. Step 7 determines whether to stop the algorithm for each of the three descriptor cases. The final value of the threshold in this test was 0.78. Thus, the algorithm automatically obtained the suitable threshold T of 0.78 together with the corresponding optimal edge descriptors.
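
Steps 1–7 can be summarized by the following sketch, in which segment, extract, and classify are caller-supplied stand-ins for the ARG, EFD, and SVM stages, and r_min corresponds to the recognition-rate target R0.

    import numpy as np

    def select_threshold(images, labels, descriptors, segment, extract, classify,
                         r_min=0.9):
        """Scan T over 0.01-0.99 for each candidate descriptor set and return the
        first (descriptor, T) pair whose recognition rate exceeds r_min."""
        for descriptor in descriptors:                      # Step 1: candidate descriptors
            for T in np.arange(0.01, 1.00, 0.01):           # Step 2: sweep the threshold
                correct = 0
                for img, label in zip(images, labels):
                    seg = segment(img, T)                   # Step 3: ARG segmentation
                    feats = extract(seg, descriptor)        # Step 4: EFD-based extraction
                    correct += (classify(feats) == label)   # Step 5: SVM classification
                rate = correct / len(images)                # Step 6: recognition rate Nc / Nt
                if rate > r_min:
                    return descriptor, round(float(T), 2)   # Step 7: suitable pair found
        return None                                         # no combination met the condition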

3. Experimental Results and Discussion

3.1. Results Obtained Using the EFD-Based Sensor

First, the study employed a general classification task to test the proposed method. Table 5 presents the class labels of the wrench samples used for the general classification, and Table 6 lists the edge descriptors of the wrench samples. For the test case, 40 blurry wrench images were randomly selected as training samples, and 100 blurry wrench images were used for the classification. Figure 4 shows an example of blurred-image segmentation using the EFD-based method; on the basis of its edge counts for Patterns 1–8 and the corresponding edge descriptor (Table 6), the blurry image in Figure 4 is assigned to its sample class. Table 7 lists the classification results obtained using the EFD-based method and demonstrates an average accuracy rate of 93%.

Then, an experiment was conducted to test the accuracy and performance of the proposed method in manufacturing. As depicted in Figure 2, the experimental setup mainly comprises the target panel, a telescope, a CCD camera, and an industrial computer. Figures 5 and 6 show the blurred-image segmentation yielded by the EFD-based and DWT-based inspection methods, respectively. Figure 5 shows that the images processed using the proposed method with the optimal edge descriptors are visibly different and can be classified on the basis of these differences, whereas the segmented images obtained using the DWT are nearly identical (Figure 6). Figure 7 displays the segmentation results for samples A, B, C, and D with the given candidate descriptors; the most suitable descriptors are those for which the corresponding images are visibly different. Tables 8 and 9 list the classification results obtained using the proposed sensor with and without camera shake, respectively. The EFD yielded more accurate results (an average accuracy rate of 93%) than the DWT did for samples A, B, C, and D in the inspection with camera shake, whereas the DWT was more appropriate in the other cases (Table 9). The results demonstrate that the proposed sensor can select a suitable feature-extraction strategy depending on camera shake.

Table 10 lists the selection thresholds of the sample images for a given number of descriptors; the thresholds for the descriptors are selected automatically on the basis of the SVM results, and the most suitable descriptors are obtained from among the several candidates. This study employed leave-one-out cross-validation [18] with various thresholds to verify the selection threshold of the proposed algorithm. Table 11 shows that samples A, B, C, and D had the smallest MSEs of 0.1017, 0.1269, 0.1429, and 0.1351, respectively. The thresholds T = 0.70, 0.80, 0.80, and 0.50 were the optimal selections for samples A, B, C, and D in the inspection because these values yielded the highest accuracy rates (Table 10).
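
The leave-one-out check can be sketched with scikit-learn as follows; reading the MSE of Table 11 as the mean squared 0/1 classification error over the left-out samples is an assumption of this sketch, and the SVM parameters shown are placeholders.

    import numpy as np
    from sklearn.model_selection import LeaveOneOut
    from sklearn.svm import SVC

    def loo_mse(X, y, C=1.0, gamma=0.1):
        """Leave-one-out error of an RBF-SVM on descriptors X with labels y,
        reported as the mean squared 0/1 classification error (illustrative)."""
        errors = []
        for train_idx, test_idx in LeaveOneOut().split(X):
            clf = SVC(C=C, gamma=gamma).fit(X[train_idx], y[train_idx])
            wrong = clf.predict(X[test_idx])[0] != y[test_idx][0]
            errors.append(float(wrong) ** 2)
        return float(np.mean(errors))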

This study used computational complexity [19] to quantify the running time of the proposed algorithm. The time cost function quantifies the amount of time required by an algorithm, such as binary search tree operations, and is given in big-O notation as O(log n) for inputs of size n, which excludes coefficients and lower-order terms. Table 12 reports the running-time growth rates of the proposed algorithm, including the EFD-based extraction and the SVM. Under the given parameters (the optimal threshold and edge descriptors in the blurred-image scenario, and T = 0.82 with a 5-level DWT in the other scenarios) and 200 sample images in the image queue (40 images of Class B, including 25 blurred images), the sensor demonstrated an accuracy of 100% in inspecting the Class B samples, with response times within 13 μs.

Classifiers based on artificial neural networks (ANNs) and a Bayes classifier were also investigated for comparison. Similar to the SVM-based experiment, 40 data sets were randomly selected as training samples, and 100 data sets were used for validation. The tested ANN was a three-layer neural network with three neurons (corresponding to the descriptors) in the input layer, five neurons in the hidden layer, and four neurons (the four sample types) in the output layer. The Bayes classifier assigns a sample, represented by the coefficients of its edge descriptor, to a class on the basis of a discriminant function; the mean vectors and covariance matrices of the coefficients for each class were derived, and a sample is identified as belonging to the class that minimizes the calculated discriminant value. Table 13 lists the time cost function and accuracy rates of the experiments. The accuracy rates for the SVM, ANN, and Bayes classifiers are 93%, 83%, and 79%, respectively, demonstrating that the SVM outperforms the other classifiers.
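
The Bayes classifier used for comparison can be sketched as a Gaussian discriminant over the descriptor coefficients, assuming equal priors and well-conditioned per-class covariance matrices; this generic formulation is an assumption, not necessarily the paper's exact discriminant.

    import numpy as np

    def fit_gaussian_bayes(X_train, y_train):
        """Estimate a mean vector and covariance matrix for each class."""
        params = {}
        for c in np.unique(y_train):
            Xc = X_train[y_train == c]
            params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
        return params

    def predict_gaussian_bayes(params, x):
        """Assign x to the class minimizing the Gaussian discriminant
        (x - mu)^T S^{-1} (x - mu) + ln|S| (equal priors assumed)."""
        best_c, best_g = None, np.inf
        for c, (mu, S) in params.items():
            d = x - mu
            g = d @ np.linalg.inv(S) @ d + np.log(np.linalg.det(S))
            if g < best_g:
                best_c, best_g = c, g
        return best_c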

3.2. Comparison of Existing Methods

To compare the performance of the proposed method with that of conventional methods, the nonuniform image deblurring schemes of Xu et al. [3] and Whyte et al. [20] were evaluated alongside the proposed method. Xu et al. [3] proposed a nonuniform point spread function based on an optical flow estimation model for removing blurring caused by camera shake. Whyte et al. [20] proposed a parametrized geometric model of camera shake and applied it for deblurring within the framework of existing camera-shake removal algorithms. As in the earlier experiments, 40 blurry images (with real camera-shake blur) were randomly selected as training samples, and 100 blurry images were used for validation. The same block diagram (Figure 1) used in the proposed method was employed. (1) Input the 100 blurry images from the image queue. (2) Perform preprocessing, image deblurring (using the models of [3, 20]), and ARG segmentation. (3) Perform DWT-based extraction. (4) Perform SVM classification. (5) Determine whether any image remains in the image queue.

The 100 blurry wrench images from the image queue were tested for the general classification. Figure 8 presents the experimental results. The images segmented after deblurring with the models of [3, 20] recover some details (Figures 8(b) and 8(c)); however, with the EFD-based method, the edge of the wrench becomes clear (Figure 8(d)). Table 14 lists the time cost function and the accuracy rates of the classification. The accuracy rates were 89% for Xu et al. [3], 90% for Whyte et al. [20], and 93% for the proposed method. The time cost of this study is lower than that of the other two methods because the EFD-based method classifies blurry images without image deblurring. Thus, the proposed method outperforms the other methods.

4. Conclusion

This paper proposes a novel, adaptive inspection sensor for the industrial inspection of images. The proposed EFD-based sensor can sense blurred objects and tune the selection of the feature-extraction strategy according to the sensing results. In object classification, the EFD-based algorithm selects and optimally tunes suitable features. Unlike other recognition methods that require image deblurring, the EFD-based method directly uses the edge descriptors from the ARG segmentation to recognize blurred objects. The experimental results demonstrated that the proposed method recognizes diverse blurry samples efficiently at a recognition rate of 93%.

Conflict of Interests

The author has no conflict of interests to declare regarding the publication of this paper.