Journal of Sensors


Research Article | Open Access

Volume 2016 | Article ID 6506249

Tsun-Kuo Lin, "A Novel Edge Feature Description Method for Blur Detection in Manufacturing Processes", Journal of Sensors, vol. 2016, Article ID 6506249, 10 pages, 2016.

A Novel Edge Feature Description Method for Blur Detection in Manufacturing Processes

Academic Editor: Jesus Corres
Received: 21 Jul 2015
Revised: 25 Sep 2015
Accepted: 28 Sep 2015
Published: 15 Dec 2015


A novel inspection sensor using an edge feature description (EFD) algorithm based on a support vector machine (SVM) is proposed for the industrial inspection of images. The method detects and adaptively segments blurred images; the EFD effectively classifies blurred samples and improves on conventional methods of inspecting blurred objects by selecting and optimally tuning suitable features. The proposed sensor applies a suitable feature-extraction strategy on the basis of the sensing results. Experimental results demonstrate that the proposed method outperforms existing methods.

1. Introduction

Vision-based techniques are increasingly used in industrial inspection. Effective inspection of blurred objects has long been a challenge, and object blurring is one of the prime causes of poor performance in vision-based inspection. Object blurring occurs mainly because of object movement and camera defocusing or shaking. Several deblurring methods have been proposed to address object motion [1] and camera shake [2]. Xu et al. [3] proposed an optical flow-based model to simultaneously correct both types of blur. However, the optical flow algorithm requires a basic assumption: the pixel intensity or color does not change as the pixel flows from one image to another. Therefore, a technique that effectively recognizes blurred objects without any assumptions is necessary. This paper proposes a feature description-based sensor and an adaptive image segmentation method for inspecting blurred objects. Seeded region growing (SRG) [4] is one of the several image segmentation techniques currently available. The proposed method comprises an edge feature description (EFD) and a support vector machine (SVM) that use the regions produced by the adaptive SRG-based algorithm to effectively categorize the objects.

Edge detection is an essential preprocessing technique in vision-based applications. Evaluations and comparisons of edge detection techniques are found in the literature [5]. Some frequently used methods detect edges on the basis of abrupt changes in the gray level [6, 7]; however, orienting the edges by using these methods is difficult. Recently, Liu and Fang [8] proposed an ant colony optimization- (ACO-) based approach for edge detection. They adopted a user-defined threshold in the pheromone update process to suppress noise in the detected image. Silva Jr. et al. [9] modified the gravitational edge detection technique [10] with a nonstandard neighborhood configuration [11] to reduce the speckle noise in synthetic aperture radar images. For real-time edge detection, Khan et al. [12] integrated a range sensor on field programmable gate arrays (FPGA) and successfully executed image normalization along with edge detection for real-time video processing. In manufacturing, blurred-edge detection is also a critical issue. Thus, the EFD-based method is proposed to handle blurred-edge detection in the industrial inspection of blurred images.

Several relevant studies have explored vision-based inspection for industrial applications [13, 14]. Gracia et al. [13] developed an inspection system and process for herb flowers. Weyrich et al. [14] developed an industrial vision-based automatic inspection system for welded nuts on support hinges. In addition, image segmentation is a fundamental problem that must be addressed in vision-based inspection. Aiteanu et al. [15] proposed content-based threshold adaptation for segmenting images to discover the optimal threshold value. However, inaccuracies in this procedure distort the results. Thus, this paper proposes an adaptive method based on SVM assessments and an EFD algorithm in the region growing sequence. This strategy segments images without computing the assessment function for every pixel added in the region.

Considering previous studies, the proposed method uses adaptive region growing (ARG) segmentation and an SVM combined with an EFD algorithm for inspection in manufacturing. The proposed method contributes the following to the literature. During industrial inspection, the proposed sensor senses blurred objects and, on the basis of the sensing results, suitably tunes the selection of a feature-extraction strategy. In object classification, the EFD-based inspection sensor effectively recognizes blurred objects without any assumptions. Finally, the EFD-based algorithm improves on conventional methods of inspecting blurred objects.

The remainder of this paper is organized as follows. Section 2 describes the proposed EFD-based method, including the ARG segmentation and EFD algorithms, and then introduces the EFD-based algorithm for industrial inspection. Section 3 presents the experimental results obtained using various samples and comparisons with existing methods. The final section presents the conclusion.

2. Proposed Method

2.1. ARG Segmentation and EFD Algorithm

An ARG-based algorithm uses a set of initial seeds and groups neighboring pixels with each initial seed within the growth regions when the pixels satisfy the selection criterion in (1), where g_i is the normalized gray level of the i-th pixel in one of the regions and T is the adjustable threshold in the region, which depends on the result of the proposed recognition. For inclusion in one of the regions, a pixel must be eight-connected to at least one pixel in that region. Regions are merged if a pixel is connected to more than one region.
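As a rough illustration (not the paper's implementation), the ARG growth loop can be sketched as follows. The function name `arg_segment`, the seed list, and the use of the seed's gray level in the selection criterion are assumptions made for this sketch, and region merging is omitted.

```python
from collections import deque

import numpy as np

def arg_segment(image, seeds, threshold):
    """Grow a region from each seed: a pixel joins a region when it is
    8-connected to a pixel already in the region and its normalized gray
    level lies within `threshold` of the seed's gray level.  Illustrative
    reading of the criterion; region merging is omitted."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)          # 0 = unassigned
    queue = deque()
    for k, (r, c) in enumerate(seeds, start=1):
        labels[r, c] = k
        queue.append((r, c, image[r, c]))
    while queue:
        r, c, seed_val = queue.popleft()
        for dr in (-1, 0, 1):                     # scan the 8-neighborhood
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0
                        and abs(image[rr, cc] - seed_val) <= threshold):
                    labels[rr, cc] = labels[r, c]
                    queue.append((rr, cc, seed_val))
    return labels
```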

This paper proposes a new method for the EFD of a segmented image. According to the 3 × 3 mask depicted in (2), edge pixels in the segmented images typically belong to one of the eight possible edge patterns shown in (3)–(10).

Pattern 1

Pattern 2

Pattern 3

Pattern 4

Pattern 5

Pattern 6

Pattern 7

Pattern 8

In each edge pattern, the nine pixels can be divided into two groups, G1 and G0. For Edge Patterns 1–4, a feature vector f = (f1, f2, f3) was used for edge description. For Edge Patterns 5–8, two feature vectors p = (p1, p2, p3) and q = (q1, q2, q3) were used for edge description. For example, a value of 1 is set for initial seeds in the ARG-based algorithm, so the values of the pixels in G1 and G0 are 1 and 0, respectively. Thus, for Edge Patterns 1, 2, 3, and 4, f is (3, 3, 0), (2, 2, 2), (0, 3, 3), and (2, 2, 2), respectively, and for Edge Patterns 5, 6, 7, and 8, the (p, q) values are (1, 2, 3) and (3, 2, 1), (3, 2, 1) and (3, 2, 1), (3, 2, 1) and (1, 2, 3), and (1, 2, 3) and (1, 2, 3), respectively. This approach is summarized as follows. (1) Calculate f in a binary image. (2) Record f if it is (3, 3, 0), (2, 2, 2), or (0, 3, 3). (3) Record p and calculate q if p is (1, 2, 3) or (3, 2, 1). (4) Record q if it is (1, 2, 3) or (3, 2, 1).
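The exact feature-vector definitions are given by (3)–(10) in the original article and are not reproduced in this version. As a hedged illustration only, the sketch below assumes the vectors are the column and row sums of the 3 × 3 binary mask, which reproduces the example signatures listed above; the names and pattern tables are assumptions of this sketch.

```python
import numpy as np

# Assumed reading of the eight edge patterns: the first vector is taken
# as the column sums of the 3x3 binary patch, the second as the row sums.
STRAIGHT_PATTERNS = {              # Patterns 1-4, matched by one vector
    (3, 3, 0): "pattern 1",
    (0, 3, 3): "pattern 3",
    (2, 2, 2): "pattern 2 or 4",   # (2, 2, 2) is shared by Patterns 2 and 4
}
DIAGONAL_PATTERNS = {              # Patterns 5-8, matched by the pair
    ((1, 2, 3), (3, 2, 1)): "pattern 5",
    ((3, 2, 1), (3, 2, 1)): "pattern 6",
    ((3, 2, 1), (1, 2, 3)): "pattern 7",
    ((1, 2, 3), (1, 2, 3)): "pattern 8",
}

def describe_patch(patch):
    """Classify a 3x3 binary patch into one of the eight edge patterns."""
    f = tuple(int(s) for s in patch.sum(axis=0))   # column sums
    if f in STRAIGHT_PATTERNS:
        return STRAIGHT_PATTERNS[f]
    p = tuple(int(s) for s in patch.sum(axis=1))   # row sums
    if (f, p) in DIAGONAL_PATTERNS:
        return DIAGONAL_PATTERNS[(f, p)]
    return "not an edge pattern"
```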

After all pixels in an image were processed using the aforementioned procedure, the edge was classified using the feature vectors f, p, and q. An SVM was used for the classification [16]. The edge descriptor from the feature description is expressed in (11), where each of the seven coefficients is the normalized edge number from Edge Patterns 1, 2 (4), 3, 5, 6, 7, and 8 and ranges from 0 to 1. Table 1 lists the edge counts (1–8) for three representative descriptors, normalized to the seven coefficients.

Table 1: Edge counts for edges 1, 2 (4), 3, 5, 6, 7, and 8 of the three descriptors.

For the SVM classifiers, two parameters, the penalty parameter C and the RBF kernel parameter γ, must be optimized. Parameter C is a user-specified positive parameter that controls the tradeoff between SVM complexity and training error. This study adopted the hold-out procedure for determining the two parameters, in which the samples were divided into training samples, on which the classifiers were trained, and other samples, on which the classifier accuracy was tested. Table 2 presents the class labels of the samples. The edge descriptor extracted from each image was used as the data set input. In this study, 140 data sets of each sample were used. The SVM was trained and tested using these data sets: 40 data sets were randomly selected as the training samples, and the others were used for evaluating the SVM classifier accuracy. Table 3 records the testing accuracy at various combinations of the two parameters, and the best-performing combination of C and γ was adopted for the SVM. Table 4 lists the classification results for different sample sizes; sample sizes of 140, 280, and 400 samples per class did not appreciably affect the classification results.
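A minimal sketch of the hold-out tuning of C and γ described above, using scikit-learn's `SVC` (an assumption; the paper does not name its SVM implementation) and synthetic stand-in data in place of the seven-coefficient edge descriptors:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in data: 140 seven-coefficient descriptors, 35 per class.
rng = np.random.default_rng(0)
X = rng.normal(size=(140, 7)) + np.repeat(np.arange(4), 35)[:, None]
y = np.repeat([0, 1, 2, 3], 35)

# Hold-out split: 40 randomly chosen training samples, the rest for testing.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=40, stratify=y, random_state=0)

# Grid over candidate (C, gamma) pairs; keep the best hold-out accuracy.
best_acc, best_C, best_gamma = max(
    (SVC(C=C, gamma=g, kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te), C, g)
    for C in (1, 10, 100) for g in (0.01, 0.1, 1.0))
```

The grid values here are illustrative; the paper sweeps its own parameter ranges (Table 3).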

Table 2: Class labels of the samples.

Class labels | Samples
A | Degrees of ±1°
B | Degrees of ±2.5°
C | Degrees of ±4°
D | Degrees of ±5.5°


Table 3: Testing accuracy (%) at various combinations of the two SVM parameters.

3  | 87 87 88 89 89 89 89 98
5  | 88 89 89 90 89 90 90 89
7  | 88 89 90 91 90 91 91 90
9  | 89 90 90 89 90 92 92 90
11 | 89 89 90 89 92 93 92 92
13 | 88 89 89 90 93 93 93 92
15 | 88 90 90 91 92 93 92 90
17 | 88 89 90 91 91 92 91 90

Table 4: Classification results for different sample sizes.

Sample size | Training | Validation | Accuracy rates (%)


2.2. EFD-Based Algorithm for Industrial Inspection

This section describes the inspection sensor and the EFD-based approach. As an example of industrial inspection, the EFD method was applied to inspect eyeglass lenses; the tested technique determines the degrees of the lenses. This study used the EFD-based inspection sensor to effectively recognize blurred objects. Inspecting object blurring in the manufacturing process is difficult because the inspector may fail to focus the blurred image on the target panel. Therefore, developing an inspection sensor that bridges the gap between the blurred image and the inspector is essential.

Figure 1 is the block diagram of the inspection sensor. The proposed sensor senses blurred objects and applies a suitable feature-extraction strategy on the basis of the sensing results. The sensor's procedure is summarized as follows. (1) Input the sample images from the image queue. (2) Determine whether the image satisfies the condition for blurred-image processing. (3) Perform EFD-based extraction. (4) Perform SVM classification. (5) Determine whether any image remains in the image queue. As shown in Table 2, for inspecting 200 Class B sample images, the optimal threshold T was set to 0.78 together with the optimal edge descriptors for Class B samples. The input images were converted to 1024 × 768 pixel images with an 8-bit gray level. The 256 gray levels were normalized in the range 0–1, and the images were segmented using ARG. Step (2) determines the image processing according to the following condition: d > d_T, where d is the displacement of the sensing device and d_T is the threshold of the displacement (0.5 mm in this study). If the sample image satisfies this condition, Step (3) commences; otherwise, discrete wavelet transform- (DWT-) based processing is performed. The DWT-based processing includes DWT feature extraction and SVM classification, as described in a previous vision-based study [17]. T and the edge descriptors were then reset, and the other samples were inspected similarly. The inspection is complete when the image queue is empty.
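The per-image decision in Steps (1)–(5) can be sketched as follows; `process_queue` and the callable stand-ins for EFD extraction, DWT extraction, and SVM classification are hypothetical names introduced for this sketch.

```python
# If the measured displacement of the sensing device exceeds the threshold
# (0.5 mm in the paper), the image is treated as blurred and routed to
# EFD-based extraction; otherwise the DWT-based path is used.
D_T = 0.5  # displacement threshold in mm

def process_queue(image_queue, displacement, extract_efd, extract_dwt, classify):
    """Run every queued image through the blur check, the matching
    feature extractor, and the classifier."""
    results = []
    for image in image_queue:
        if displacement(image) > D_T:      # condition for blurred-image processing
            features = extract_efd(image)  # Step (3): EFD-based extraction
        else:
            features = extract_dwt(image)  # DWT-based processing [17]
        results.append(classify(features)) # Step (4): SVM classification
    return results
```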

Figure 2 depicts the experimental setup. The target panel, installed on a platform, indicated the degrees of the lens. The lens was mounted on the support frame of a sensing telescope 10.67 m from the target panel. Data on the test lens were listed in Table 2. The lens was selected from the 100 validation samples. During inspection, an eyeglass lens of unknown degree was mounted on the telescope, and the surface light from the platform illuminated the target panel. To produce camera shakes, a spring-dashpot system was positioned below the telescope, and vibrations were manually induced. An accelerometer, placed behind the telescope, measured the strength of the vibrations and forwarded them to an industrial computer, which converted the signals to displacements and processed the image signal generated using a charge-coupled device (CCD) camera and the displacement information. Conventionally, the telescope lens is manually focused on the target panel. The proposed method processes the telescopic images captured using the CCD camera according to object-sensing results—an approach that quickly determines the degrees of the lens to solve the object-blur problem when focusing the sensing telescope.

As shown in Figure 3, the proposed algorithm applies the following steps to automatically obtain a suitable threshold T and edge descriptors.

Step 1. Input images are tested using a given number of descriptors.

Step 2. A value of 1 is set for the initial seeds in the gray image, and the threshold values T are set sequentially in the range 0.01–0.99.

Step 3. ARG segmentation is implemented.

Step 4. EFD-based extraction is implemented.

Step 5. Images are classified using SVM.

Step 6. The recognition rate R of each adjustable threshold is determined for the given image; it is defined as R = N_c / N_t, where N_c is the number of correctly classified images during the test run and N_t is the total number of test data sets (in this study, N_t was 100). If the recognition rate exceeds a given value R_g, Step 7 commences; otherwise, Steps 2–6 are repeated.

Step 7. The process terminates if the sample images have already been tested with all cases of the given number of descriptors; otherwise, Steps 1–6 are repeated. In addition, the algorithm stops if no threshold T satisfies the condition in Step 6.

For example, Step 1 inputs the sample images and descriptors. Step 2 sequentially sets T in the range 0.01–0.99, and the ARG segmentation is implemented (Step 3). Step 4 extracts the edge features, which are classified in Step 5. When R_g is 0.9 (i.e., a 90% accuracy rate), Steps 2–6 are repeated until T exceeds 0.77. Step 7 determines whether to stop the algorithm for each descriptor case. The final value of the threshold in this test was 0.78. Thus, the algorithm automatically obtained a suitable threshold of 0.78 and the corresponding edge descriptors.
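The threshold sweep of Steps 2–6 and the stopping test can be sketched as follows, with `evaluate` a hypothetical stand-in for the segment-extract-classify pipeline that returns the number of correctly classified test images at a given threshold:

```python
# Sweep the adjustable threshold over 0.01-0.99 and stop at the first value
# whose recognition rate (correct / total) reaches the target (0.9 in the
# worked example above).
def find_threshold(evaluate, n_total=100, r_target=0.9):
    for step in range(1, 100):
        t = step / 100.0                      # T = 0.01, 0.02, ..., 0.99
        n_correct = evaluate(t)               # Steps 3-5 at this threshold
        if n_correct / n_total >= r_target:   # Step 6: recognition-rate check
            return t
    return None                               # Step 7: no threshold qualified
```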

3. Experimental Results and Discussion

3.1. Results Obtained Using the EFD-Based Sensor

First, the study employed a general classification to test the proposed method. Table 5 presents the class labels of the wrench samples used for the general classification, and the edge descriptors of the wrench samples are listed in Table 6. For the test case, 40 blurry wrench images were randomly selected as training samples, and 100 blurry wrench images were used for the classification. Figure 4 shows an example of blurred-image segmentation using the EFD-based method; based on its edge descriptors (Table 6), the blurry image in Figure 4 was assigned to its wrench class. Table 7 lists the classification results obtained using the EFD-based method and demonstrates an average accuracy rate of 93%.

Table 5: Class labels of the wrench samples.

Class labels | Samples
E | 203 mm (8′′)
F | 254 mm (10′′)
G | 305 mm (12′′)

Table 6: Edge counts for edges 1, 2 (4), 3, 5, 6, 7, and 8 of the wrench-sample descriptors.


Table 7: Classification results (%) for the wrench samples obtained using the EFD-based method.

  | E | F | G
E | 93 | 3 | 4
F | 4 | 92 | 4
G | 4 | 3 | 93

Then, an experiment was conducted to test the accuracy and performance of the proposed method in manufacturing. As depicted in Figure 2, the experimental setup mainly comprises a target panel, a telescope, a CCD camera, and an industrial computer. Figures 5 and 6 show the blurred-image segmentation yielded by the EFD-based and DWT-based inspection methods, respectively. Figure 5 shows that the images with optimal edge descriptors processed using the proposed method are visibly different and can be classified on the basis of their differences. However, the segmented images obtained using the DWT are nearly identical (Figure 6). Figure 7 displays the segmentation results for samples A, B, C, and D with the given descriptors; the most suitable descriptors are those for which the corresponding images are visibly different. Tables 8 and 9 list the classification results obtained using the proposed sensor with and without camera shakes, respectively. The EFD yielded more accurate results (an average accuracy rate of 93%) than the DWT did for samples A, B, C, and D in the inspection with camera shakes. However, the DWT was more appropriate in the other cases (Table 9). The results demonstrate that the proposed sensor can select a suitable feature-extraction strategy depending on camera shakes.


Table 8: Classification results (EFD accuracy/DWT accuracy, %) with camera shakes.

  | A | B | C | D
B | 1/4 | 92/81 | 3/8 | 4/7
C | 1/5 | 5/9 | 93/80 | 1/6
D | 2/7 | 3/6 | 1/5 | 94/82


Table 9: Classification results (EFD accuracy/DWT accuracy, %) without camera shakes.

  | A | B | C | D
B | 0/0 | 91/94 | 4/3 | 5/3
C | 0/0 | 6/5 | 92/95 | 1/0
D | 3/1 | 6/5 | 0/0 | 91/94

Table 10 illustrates the selection thresholds of the sample images with a given number of descriptors. The thresholds for the descriptors are selected automatically by the SVM, and the most suitable descriptors are obtained from among several candidates. This study employed leave-one-out cross-validation [18] with various thresholds to verify the selection threshold of the proposed algorithm. Table 11 shows that samples A, B, C, and D had the smallest MSEs: 0.1017, 0.1269, 0.1429, and 0.1351, respectively. The thresholds T = 0.70, 0.80, 0.80, and 0.50 were the optimal selections for samples A, B, C, and D in the inspection because these values yielded the highest accuracy rates (Table 10).

Table 10: Selection thresholds and accuracy rates for samples A, B, C, and D.

A | B | C | D | Accuracy rates (%)

Table 11: MSEs of samples A, B, C, and D at various thresholds T.

T | A | B | C | D
0.85 | 0.4153 | 0.2535 | 0.2364 | 0.5768
0.80 | 0.3844 | 0.1269 | 0.1429 | 0.5124
0.75 | 0.3124 | 0.1743 | 0.2913 | 0.4779
0.70 | 0.1017 | 0.2495 | 0.3395 | 0.4063
0.65 | 0.1836 | 0.3078 | 0.3957 | 0.3392
0.60 | 0.2953 | 0.3569 | 0.4566 | 0.2974
0.55 | 0.3228 | 0.4055 | 0.4893 | 0.2051
0.50 | 0.3958 | 0.4957 | 0.5238 | 0.1351

This study used computational complexity [19] to quantify the running time of the proposed algorithm. The time cost function, which quantifies the time required by an algorithm used in binary search tree operations, is T(n) = O(log n), where O(log n) denotes the logarithmic time required for inputs of size n in big-O notation, which excludes coefficients and lower-order terms. Table 12 reports the running-time growth rates of the proposed algorithm, including the EFD-based extraction and the SVM. Under the given parameters (the optimal threshold in the blurred-image scenario and 0.82, with a 5-level DWT, in the other scenarios) and 200 sample images (40 images of Class B, including 25 blurred images) in the image queue, the sensor demonstrated an accuracy of 100% in inspecting Class B samples with response times within 13 μs.


Classifiers based on artificial neural networks (ANNs) and a Bayes classifier were also investigated for comparison. Similar to the SVM-based experiment, 40 data sets were randomly selected as training samples, and 100 data sets were used for validation. The tested ANN was a three-layer neural network with three neurons (the descriptors) in the input layer, five neurons in the hidden layer, and four neurons (the four sample types) in the output layer. The Bayes classifier computes a discriminant for each class from the set X of descriptors extracted from a sample; the mean vectors and covariance matrix of the coefficients for each sample class were derived, and a sample is assigned to the class that minimizes the calculated discriminant. Table 13 lists the time cost function and accuracy rates of the experiments. The accuracy rates for the SVM, ANN, and Bayes classifiers were 93%, 83%, and 79%, respectively, demonstrating that the SVM outperforms the other classifiers.
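A hedged sketch of such a Gaussian Bayes classifier (per-class mean and covariance, classification by minimizing the quadratic discriminant); the function names and the small regularization term are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

def fit_bayes(samples):
    """Fit a Gaussian model per class.  `samples` maps a class label to an
    (n, d) array of descriptor vectors."""
    model = {}
    for label, data in samples.items():
        mu = data.mean(axis=0)
        # Small diagonal term keeps the covariance invertible (assumption).
        cov = np.cov(data, rowvar=False) + 1e-6 * np.eye(data.shape[1])
        model[label] = (mu, np.linalg.inv(cov), np.log(np.linalg.det(cov)))
    return model

def predict_bayes(model, x):
    """Assign x to the class minimizing the quadratic discriminant."""
    def discriminant(label):
        mu, inv_cov, log_det = model[label]
        diff = x - mu
        return diff @ inv_cov @ diff + log_det   # smaller = more likely
    return min(model, key=discriminant)
```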

Table 13: Time cost function and accuracy rates of the classifiers.

Classifiers | Time cost (μs) | Accuracy rates (%)
SVM | 10 | 93
ANN | 21 | 83
Bayes classifier | 12 | 79

3.2. Comparison of Existing Methods

To compare the performances of distinct conventional methods, nonuniform image deblurring schemes by Xu et al. [3] and Whyte et al. [20] were compared with the proposed method. Xu et al. [3] proposed the nonuniform point spread function on the basis of the optical flow estimation model for removing blurring caused by camera shakes. Whyte et al. [20] proposed a parametrized geometric model for camera shakes and applied it for deblurring within the framework of existing camera shake removal algorithms. Similar to the earlier experiments, 40 blurry images (real camera shake blur) were randomly selected as training samples, and 100 blurry images were used for validation. The same block diagram (Figure 1) used in the proposed method was employed. (1) Input the 100 blurry images from the image queue. (2) Perform preprocessing, image deblurring (using the model [3, 20]), and ARG segmentation. (3) Perform DWT-based extraction. (4) Perform SVM classification. (5) Determine whether any image remains in the image queue.

The 100 blurry images of wrenches from the image queue were tested for a general classification. Figure 8 presents the experimental results. The images segmented after deblurring with the models of [3, 20] retain fine details (Figures 8(b) and 8(c)). However, after applying the EFD-based method, the edge of the wrench becomes clear (Figure 8(d)). Table 14 lists the time cost function and accuracy rates of the classification. The accuracy rates were 89% for Xu et al. [3], 90% for Whyte et al. [20], and 93% for the proposed method. The time cost of this study is lower than that of the other two methods because the EFD-based method classifies blurry images without image deblurring. Thus, the proposed method outperforms the other methods.

Table 14: Time cost function and accuracy rates of the compared methods.

Methods | Time cost (μs) | Accuracy rates (%)
Xu et al. [3] | 25 | 89
Whyte et al. [20] | 33 | 90
This study | 11 | 93

4. Conclusion

This paper proposes a novel, adaptive inspection sensor for the industrial inspection of images. The proposed EFD-based sensor can sense blurred objects and tune the selection of the feature-extraction strategy according to the sensing results. In object classification, the EFD-based algorithm selects and optimally tunes suitable features. Unlike other recognition methods, the EFD-based method directly uses the edge descriptors in the ARG regions to recognize blurred objects without image deblurring. The experimental results demonstrated that the proposed method recognizes diverse blurry samples efficiently at a recognition rate of 93%.

Conflict of Interests

The author has no conflict of interests to declare regarding the publication of this paper.


  1. A. Levin, “Blind motion deblurring using image statistics,” Advances in Neural Information Processing Systems, vol. 19, pp. 841–848, 2006. View at: Google Scholar
  2. Y.-W. Tai, P. Tan, and M. S. Brown, “Richardson-Lucy deblurring for scenes under a projective motion path,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1603–1618, 2011. View at: Publisher Site | Google Scholar
  3. Y. Xu, X. Hu, and S. Peng, “Blind motion deblurring using optical flow,” Optik, vol. 126, no. 1, pp. 87–94, 2015. View at: Publisher Site | Google Scholar
  4. R. Adams and L. Bischof, “Seeded region growing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 6, pp. 641–647, 1994. View at: Publisher Site | Google Scholar
  5. C.-C. Kang and W.-J. Wang, “A novel edge detection method based on the maximizing objective function,” Pattern Recognition, vol. 40, no. 2, pp. 609–618, 2007. View at: Publisher Site | Google Scholar
  6. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, New York, NY, USA, 2nd edition, 1992.
  7. D. Marr and E. Hildreth, “Theory of edge detection,” Proceedings of the Royal Society B: Biological Sciences, vol. 207, no. 1167, pp. 197–217, 1980. View at: Publisher Site | Google Scholar
  8. X. Liu and S. Fang, “A convenient and robust edge detection method based on ant colony optimization,” Optics Communications, vol. 353, pp. 147–157, 2015. View at: Publisher Site | Google Scholar
  9. G. P. Silva Jr., A. C. Frery, S. Sandri, H. Bustince, E. Barrenechea, and C. Marco-Detchart, “Optical images-based edge detection in synthetic aperture radar images,” Knowledge-Based Systems, vol. 87, pp. 38–46, 2015. View at: Publisher Site | Google Scholar
  10. C. Lopez-Molina, H. Bustince, J. Fernandez, P. Couto, and B. D. Baets, “A gravitational approach to edge detection based on triangular norms,” Pattern Recognition, vol. 43, no. 11, pp. 3730–3741, 2010. View at: Publisher Site | Google Scholar
  11. X. Fu, H. You, and K. Fu, “A statistical approach to detect edges in SAR images based on square successive difference of averages,” IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 6, pp. 1094–1098, 2012. View at: Publisher Site | Google Scholar
  12. T. M. Khan, D. G. Bailey, M. A. U. Khan, and Y. Kong, “Real-time edge detection and range finding using FPGAs,” Optik, vol. 126, no. 17, pp. 1545–1550, 2015. View at: Publisher Site | Google Scholar
  13. L. Gracia, C. Perez-Vidal, and C. Gracia, “Computer vision applied to flower, fruit and vegetable processing,” International Scholarly and Scientific Research & Innovation, vol. 5, no. 6, pp. 345–351, 2011. View at: Google Scholar
  14. M. Weyrich, Y. Wang, J. Winkel, and M. Laurowski, “High speed vision based automatic inspection and path planning for processing conveyed objects,” in Proceedings of the 45th CIRP Conference on Manufacturing Systems, pp. 442–447, Athens, Greece, May 2012. View at: Publisher Site | Google Scholar
  15. D. Aiteanu, D. Ristic, and A. Gräser, “Content based threshold adaptation for image processing in industrial application,” in Proceedings of the 5th International Conference on Control and Automation (ICCA ’05), pp. 1022–1027, Budapest, Hungary, June 2005. View at: Google Scholar
  16. D. Qu, W. Li, Y. Zhang et al., “Support vector machines combined with wavelet-based feature extraction for identification of drugs hidden in anthropomorphic phantom,” Measurement, vol. 46, no. 1, pp. 284–293, 2013. View at: Publisher Site | Google Scholar
  17. T. K. Lin, “An adaptive vision-based method for automated inspection in manufacturing,” Advances in Mechanical Engineering, vol. 2014, Article ID 616341, 7 pages, 2014. View at: Publisher Site | Google Scholar
  18. S. Chen, X. Hong, C. J. Harris, and X. Wang, “Identification of nonlinear systems using generalized kernel models,” IEEE Transactions on Control Systems Technology, vol. 13, no. 3, pp. 401–411, 2005. View at: Publisher Site | Google Scholar
  19. A. Schonhage, “Equation solving in terms of computational complexity,” in Proceedings of the International Congress of Mathematicians, pp. 131–153, Berkeley, Calif, USA, August 1986. View at: Google Scholar | MathSciNet
  20. O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, “Non-uniform deblurring for shaken images,” International Journal of Computer Vision, vol. 98, no. 2, pp. 168–186, 2012. View at: Publisher Site | Google Scholar | Zentralblatt MATH

Copyright © 2016 Tsun-Kuo Lin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
