Mathematical Problems in Engineering
Volume 2016, Article ID 6848360, 13 pages
http://dx.doi.org/10.1155/2016/6848360
Research Article

A Dynamic Feature-Based Method for Hybrid Blurred/Multiple Object Detection in Manufacturing Processes

Department of Information Technology and Communication, Shih Chien University, No. 200, University Road, Neimen, Kaohsiung 84550, Taiwan

Received 22 March 2016; Accepted 9 May 2016

Academic Editor: Zhike Peng

Copyright © 2016 Tsun-Kuo Lin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Vision-based inspection has been applied for quality control and product sorting in manufacturing processes. Blurred or multiple objects are common causes of poor performance in conventional vision-based inspection systems. Detecting hybrid blurred/multiple objects has long been a challenge in manufacturing. For example, single-feature-based algorithms might fail to exactly extract features when concurrently detecting hybrid blurred/multiple objects. Therefore, to resolve this problem, this study proposes a novel vision-based inspection algorithm that entails selecting a dynamic feature-based method on the basis of a multiclassifier of support vector machines (SVMs) for inspecting hybrid blurred/multiple object images. The proposed algorithm dynamically selects suitable inspection schemes for classifying the hybrid images. The inspection schemes include discrete wavelet transform, spherical wavelet transform, moment invariants, and edge-feature-descriptor-based classification methods. The classification methods for single and multiple objects are adaptive region growing- (ARG-) based and local adaptive region growing- (LARG-) based learning approaches, respectively. The experimental results demonstrate that the proposed algorithm can dynamically select suitable inspection schemes by applying a selection algorithm, which uses SVMs for classifying hybrid blurred/multiple object samples. Moreover, the method applies suitable feature-based schemes on the basis of the classification results and employs the ARG/LARG-based methods to inspect the hybrid objects. The method improves on conventional methods for inspecting hybrid blurred/multiple objects and achieves high recognition rates for such objects in manufacturing processes.

1. Introduction

Vision-based inspection has been studied and applied in manufacturing processes. The aim of vision-based inspection is to classify objects or products on the basis of vision features instead of manual inspection in industrial quality control. Several methods have been proposed to address industrial inspection. Weyrich et al. [1] proposed a vision-based methodology for quality grading and sorting of oranges. Chen et al. [2] applied a frequency-based filtering method to extract defects for touch panel inspection. Lin and Tsai [3] designed a filtering mask in the frequency domain to successfully remove background patterns and detect defects. Rebhi et al. [4] used local homogeneity and discrete cosine transform for texture defect detection. Wong et al. [5] proposed stitching defect detection and classification by using wavelet transform and back-propagation neural network. The preceding inspections include three techniques used, respectively, in spatial domain filtering [4], frequency domain filtering [2, 3], and spatial/frequency domain analysis [5]. However, these inspections require distinct objects. Effective vision-based inspection aims not only to classify distinct objects but also to dynamically apply suitable feature-based schemes to classify different object types (e.g., blurred or multiple objects). Blurred or multiple objects are common causes of poor performance in conventional vision-based inspection systems. Vision-based inspection for classifying hybrid blurred/multiple object images has long been a challenge. For inspecting hybrid objects, the selection of suitable vision features is critical; however, selecting such features is difficult. Therefore, the current study proposes a technique that dynamically selects suitable feature-based schemes as a solution to the inspection of hybrid objects.
The proposed algorithm employs a dynamic feature-based strategy and SVMs to dynamically select suitable feature-based schemes for effectively inspecting hybrid objects.

The contributions of this study are summarized as follows. During industrial inspection, the proposed technique employs a dynamic feature-based strategy on the basis of SVM results and suitably tunes the selection of feature-based schemes for inspecting hybrid blurred/multiple objects. In object classification, the dynamic strategy effectively recognizes objects in manufacturing regardless of blurred or multiple samples. Finally, the dynamic selection algorithm applies suitable feature-based schemes to improve the conventional methods for inspecting hybrid objects.

The remainder of this paper is organized as follows. Section 2 reviews related work, and Section 3 presents the proposed algorithm for dynamically inspecting hybrid objects. Section 4 presents the experimental results derived from applying the proposed algorithm to various samples in addition to providing a comparison of various existing methods. The final section offers the conclusions of this study.

2. Related Work

This section describes the existing approaches, which include feature-based schemes and image segmentations, related to the proposed method and finally addresses the differences between the proposed method and the existing approaches.

Recent studies have investigated feature-based methods in image processing and computer vision. Discrete wavelet transform (DWT) is a frequently used technique because of its satisfactory feature extraction engendered by its space-frequency localization and multiresolution characteristics. Huang et al. [6] proposed a face recognition method by employing a 2D-DWT and new patch strategy. They showed that the method outperformed the traditional 2D-DWT method and a state-of-the-art patch-based method. Kumar et al. [7] used a DWT-based method to detect and extract text from a document image. Zhang et al. [8] proposed a 3D-DWT-based approach to detect Alzheimer’s disease (AD) and mild cognitive impairment (MCI) on the basis of structural volumetric magnetic resonance images. They validated the effectiveness of 3D DWT and reported that their approach had the potential to facilitate early diagnosis of AD and MCI. For object classification, moment invariants (INVs) are often used as feature vector sets [9] for designing and applying machine learning techniques. Diao et al. [10] presented a method of deriving INVs under similarity transformation. They tested the derived INVs for bird, face, school desk, and bush models. Recently, spherical wavelet transform (SWT) has been used to solve many geometry processing problems. Laga et al. [11] presented a new 3D content-based retrieval method that is based on SWT. Görgel et al. [12] employed a local seed region growing- (LSRG-) SWT hybrid scheme for mammographic mass detection and classification. Moreover, Zimbres et al. [13] proposed an SWT-based method to search for a magnetically induced alignment in the arrival directions of ultra-high-energy cosmic rays. According to the previous studies, single-feature-based methods can perform effectively in detecting objects exhibiting simple features. For example, SWT is superior to DWT in detecting breast masses because SWT fits the geometric structure of spherical masses.
Regarding objects exhibiting different features, the single-feature-based methods might fail to exactly extract the features. Therefore, to resolve this problem, the current study proposes a feature-based strategy to dynamically select DWT, SWT, INV, and edge-feature-descriptor (EFD) schemes when concurrently detecting hybrid blurred/multiple objects. The EFD scheme was employed for effectively detecting blurred objects [14].

Region growing-based image segmentation techniques have been studied in recent years. Zhang et al. [15] proposed a bidirectional region growing segmentation algorithm for medical image segmentation, and the algorithm could obtain satisfactory segmentation results even when the original medical images contain noise. Lázár and Hajdu [16] presented a retinal vessel segmentation method that is based on directional response vector similarity and region growing. They tested the method on three publicly available image sets, proving its accuracy to be comparable to that of a human observer. Rouhi et al. [17] employed an automated region growing technique and cellular neural network method for benign and malignant breast tumor classification. They used different classifiers to evaluate the performance of these methods, revealing that they exhibited promising performance in image segmentation. Classification is also a fundamental problem that must be addressed in vision-based inspection. SVM-based classification methods have been widely used in various applications, such as face detection, handwriting recognition, chemical pattern classification, and fault diagnosis. SVM performs satisfactorily in situations involving a small sampling size and high dimension, and it exhibits high accuracy and favorable generalization capabilities [18]. Therefore, this paper proposes a dynamic feature-based strategy that is based on SVM assessments and ARG/LARG segmentations for inspecting hybrid blurred/multiple objects. This method can effectively recognize objects in manufacturing, regardless of blurred or multiple samples.

Considering previous studies, the existing methods, including the feature-based methods (DWT, INV, and SWT), region growing segmentation, and SVM classification, are similar to those used in this study. However, the proposed method entails solving the detection problem by using suitable multifeature-based schemes rather than a single-feature-based scheme, which distinguishes it from the existing methods. This study employed a dynamic feature-based strategy combined with ARG/LARG-based classification for effectively performing hybrid blurred/multiple object discrimination in manufacturing processes. This dynamic strategy selects the DWT, SWT, INV, and EFD schemes to solve the extraction problem inherent in single-feature-based methods when concurrently detecting hybrid blurred/multiple objects. This study also applied the proposed blurred/multiple object detection system that employs suitable inspection schemes combined with ARG/LARG-based classification methods for inspecting single/multiple objects to minimize inspection times. Finally, this study quantitatively compared inspection methods in the manufacturing field.

3. Proposed Algorithm

This section describes the dynamic feature-based method including the SVM algorithm and feature-selection algorithm and then introduces the hybrid blurred/multiple object detection system for inspecting hybrid objects.

3.1. SVM Algorithm

This paper proposes a selection algorithm to dynamically select suitable feature-based schemes for effectively inspecting hybrid blurred/multiple objects. As illustrated in Figure 1, an inspection image comprises four local subregions R0,0, R0,1, R1,0, and R1,1, each of which has a corresponding threshold as well as one of four object features; (i, j) = (0,0), (0,1), (1,0), (1,1) represents the index of subregion Ri,j. The proposed feature-based method involves first placing a set of initial seeds in the local growth regions. Within these local subregions, each initial seed is considered to be surrounded by a group of pixels if the following selection criterion is satisfied: |g_k - g_seed| <= T_(i,j), where g_k is the normalized gray level of the kth pixel in one of the local subregions and T_(i,j) is the subregion threshold. For each local growth region, inclusion in a local subregion requires each pixel to be eight-connected to at least one pixel in the same subregion. Local subregions can be merged when pixels are connected to multiple subregions. The threshold in each local subregion can be determined using an iteration method [19]. The iteration method is executed from initial input values until the stopping condition is satisfied (e.g., an iteration count of 101 satisfies the stopping condition). The thresholds are then determined for the local subregions R0,0, R0,1, R1,0, and R1,1, respectively. The procedures involved in this approach can be summarized as follows: input the initial threshold values; execute the iteration method; determine whether the iteration satisfies the stopping condition; and record the thresholds for the local subregions.
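The seeded growth and threshold iteration described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it assumes a normalized grayscale subregion, an isodata-style update for the iteration method [19], and hypothetical function names.

```python
import numpy as np
from collections import deque

def iterative_threshold(gray, tol=1e-4, max_iter=101):
    """Isodata-style iteration: refine the threshold until it stabilizes."""
    t = gray.mean()                      # initial input value
    for _ in range(max_iter):            # e.g., 101 iterations as a stopping cap
        lo, hi = gray[gray <= t], gray[gray > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:         # stopping condition satisfied
            break
        t = t_new
    return t

def grow_region(gray, seed, thresh):
    """Grow an 8-connected region around `seed` within `thresh` of its gray level."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                        and abs(gray[ny, nx] - gray[seed]) <= thresh:
                    mask[ny, nx] = True   # eight-connected inclusion criterion
                    queue.append((ny, nx))
    return mask
```

In this reading, `iterative_threshold` supplies the per-subregion threshold and `grow_region` applies the selection criterion around each seed.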

Figure 1: Local subregions R0,0, R0,1, R1,0, and R1,1, the corresponding thresholds, and one object feature per subregion in an inspection image.

Figure 2 illustrates six test types for classifying the hybrid objects in this study. The object features for the six test types are coded as follows: for the object in each of the subregions R0,0, R0,1, R1,0, and R1,1, a corresponding feature code is assigned for each of types I, II, III, IV, V, and VI.

Figure 2: Different types for hybrid blurred/multiple object detection.

Figure 3 presents the classification of the four object features through the SVM algorithm. The conventional SVM is a tool for solving two-class problems. The common approaches for applying it to a multiclass problem involve converting the multiclass problem into several binary-class problems. The SVM algorithm requires only N-1 SVMs for an N-class problem, reducing the computation time during inspection. Figure 3 shows the structure of the SVMs, namely, SVM 1, SVM 2, and SVM 3; each SVM was trained to function differently. SVM 1 divides all samples into two classes; SVM 2 and SVM 3 then divide these two classes, respectively, into the four object-feature classes. The SVMs continue until all the samples have been identified, after which the classification is stopped.
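The tree of N-1 binary SVMs can be sketched with scikit-learn as follows. The grouping of the four object-feature classes into two super-classes is an assumption for illustration, and `TreeSVM` is a hypothetical name, not the paper's code.

```python
import numpy as np
from sklearn.svm import SVC

class TreeSVM:
    """Three binary SVMs resolve four classes (N-1 SVMs for N classes).

    Labels 0/1 are assumed to form one super-class and 2/3 the other;
    this grouping is illustrative, not the paper's exact coding.
    """
    def fit(self, X, y):
        top = np.isin(y, (2, 3)).astype(int)                 # SVM 1: two super-classes
        self.svm1 = SVC(kernel="rbf").fit(X, top)
        m01, m23 = np.isin(y, (0, 1)), np.isin(y, (2, 3))
        self.svm2 = SVC(kernel="rbf").fit(X[m01], y[m01])    # SVM 2: class 0 vs 1
        self.svm3 = SVC(kernel="rbf").fit(X[m23], y[m23])    # SVM 3: class 2 vs 3
        return self

    def predict(self, X):
        top = self.svm1.predict(X)
        out = np.empty(len(X), dtype=int)
        lo, hi = top == 0, top == 1
        if lo.any():
            out[lo] = self.svm2.predict(X[lo])
        if hi.any():
            out[hi] = self.svm3.predict(X[hi])
        return out
```

Classification stops as soon as a leaf SVM labels a sample, which is what keeps the tree at N-1 classifiers instead of the N(N-1)/2 of one-versus-one schemes.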

Figure 3: SVM algorithm.

For every SVM classifier, two parameters, namely, the parameter C and the RBF kernel parameter γ, must be optimized. The parameter C is a user-specified positive parameter used for controlling the trade-off between SVM model complexity and training error. This study adopted the hold-out procedure for determining the two parameters; in this procedure, the samples were classified into training samples, on which the classifiers were trained, and other samples, on which the classifier accuracy was tested.
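A hold-out search over the two parameters might look like the following sketch; the powers-of-two grids and the `holdout_tune` helper are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def holdout_tune(X, y, seed=0):
    """Hold-out search over powers-of-two grids for C and the RBF gamma."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, random_state=seed, stratify=y)
    best = (None, None, -1.0)
    for C in 2.0 ** np.arange(-3, 13, 2):          # candidate C values (up to 2^11)
        for gamma in 2.0 ** np.arange(-9, 1, 2):   # candidate gamma values (incl. 2^-5)
            acc = SVC(C=C, gamma=gamma).fit(X_tr, y_tr).score(X_te, y_te)
            if acc > best[2]:                      # keep the best hold-out accuracy
                best = (C, gamma, acc)
    return best
```

The returned `(C, gamma, accuracy)` triple corresponds to the best cell of a grid like Table 1.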

The EFD of an inspection image was used for the SVM classification. According to the 3 × 3 mask depicted in Figure 4(a), edge pixels in the inspection images typically belong to one of eight possible edge patterns (Figures 4(b) and 4(c)). In an edge pattern, the nine pixels can be divided into two separate groups, namely, G0 and G1. For Edge Patterns 1–4, a single feature vector x = (x1, x2, x3), whose components are the sums of the pixel values in the three columns of the mask, is used for edge description. For Edge Patterns 5–8, two feature vectors, the row-sum vector and the column-sum vector, are used for edge description. A value of 1 is set for initial seeds in the local growth regions. The values of the pixels in G0 and G1 are 1 and 0, respectively. Therefore, for Edge Patterns 1, 2, 3, and 4, x is (3, 3, 0), (2, 2, 2), (0, 3, 3), and (2, 2, 2), respectively; moreover, for Edge Patterns 5, 6, 7, and 8, the two vectors are (1, 2, 3) and (3, 2, 1), (3, 2, 1) and (3, 2, 1), (3, 2, 1) and (1, 2, 3), and (1, 2, 3) and (1, 2, 3), respectively. The procedures of this approach are summarized as follows: calculate x for each edge pixel in an inspection image; record the pattern if x is (3, 3, 0), (2, 2, 2), or (0, 3, 3); otherwise, calculate the second vector and record the pattern matching the (1, 2, 3) and (3, 2, 1) combinations.
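The pattern signatures above can be checked with a short sketch. It assumes the single vector for Patterns 1–4 is the vector of column sums of the binary 3 × 3 window and that the two vectors for Patterns 5–8 are the row and column sums; this is one consistent reading of the listed values, not a confirmed detail of the paper.

```python
import numpy as np

# Column-sum signatures for Edge Patterns 1-4; Pattern 4 shares (2, 2, 2) with
# Pattern 2, which is consistent with the descriptor later using only 7 patterns.
SINGLE_PATTERNS = {(3, 3, 0): 1, (2, 2, 2): 2, (0, 3, 3): 3}
# (row-sum, column-sum) signatures for the diagonal Edge Patterns 5-8.
DOUBLE_PATTERNS = {((1, 2, 3), (3, 2, 1)): 5, ((3, 2, 1), (3, 2, 1)): 6,
                   ((3, 2, 1), (1, 2, 3)): 7, ((1, 2, 3), (1, 2, 3)): 8}

def edge_pattern(window):
    """Classify a binary 3x3 window (one pixel group = 1, the other = 0)."""
    w = np.asarray(window)
    cols = tuple(int(c) for c in w.sum(axis=0))   # x = (x1, x2, x3): column sums
    if cols in SINGLE_PATTERNS:
        return SINGLE_PATTERNS[cols]
    rows = tuple(int(r) for r in w.sum(axis=1))   # second vector for Patterns 5-8
    return DOUBLE_PATTERNS.get((rows, cols))      # None if no pattern matches
```

For example, a vertical edge window maps to Pattern 1 and a lower-right diagonal block to Pattern 8.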

Figure 4: (a) 3 × 3 mask, (b) patterns 1–4, and (c) patterns 5–8.

After all image pixels were processed using the aforementioned procedure, the edge was classified using the feature vectors. The edge descriptor obtained from the feature description comprises the seven coefficients of the normalized edge numbers from Edge Patterns 1, 2, 3, 5, 6, 7, and 8, each ranging from 0 to 1. In the SVM classification, each test type comprised 280 sample images. The edge descriptor extracted from each image was used as the data set input. The SVM was trained and tested using these images. Eighty images were randomly selected as the training samples, and the remaining images were used for evaluating the SVM classifier accuracy. Table 1 presents the testing accuracy at various combinations of the two parameters. High testing accuracy was realized when C = 2^11 and γ = 2^-5 were used in the SVM. Table 2 lists the classification results for different sample sizes, indicating that increasing the sample size from 280 to 400 or 800 in each test type had no apparent effect on the classification results.

Table 1: Accuracy test for different combinations of the parameter C and the RBF kernel parameter γ; rows: values of C, columns: values of γ.
Table 2: Classification results using different sample sizes for each class.
3.2. Dynamic Feature-Based Method and Feature-Selection Algorithm

This section describes the dynamic feature-based method and then introduces the feature-selection algorithm used to dynamically select suitable feature-based schemes for effectively inspecting the hybrid objects. Figure 5 displays a flow diagram of the proposed algorithm and the relation between the dynamic feature-based method and feature-selection algorithm. As shown in Figure 5, the feature-selection algorithm operates in the method for dynamically selecting suitable feature-based schemes to extract features. The proposed algorithm applies the following steps to obtain a suitable scheme.

Figure 5: Dynamic feature-based method.

Step 1. Input the inspection images with the classified object features from the SVM algorithm.

Step 2. Determine whether the object features correspond to multiple objects.

Step 3. Implement LARG segmentation if the object features correspond to multiple objects; otherwise, implement ARG segmentation. The LARG and ARG segmentations are described in previous vision-based studies [19, 22].

Step 4. Implement the feature-selection algorithm to extract features.

Step 5. Classify images using SVM/SVMs (Figure 3) on the basis of the LARG/ARG segmentation.

Step 6. Determine the recognition rate of each adjustable threshold for the given image; it is defined as R = Nc / N, where Nc is the number of correctly classified images during the test run and N is the total number of test data sets (in this study, N was 200). If the recognition rate exceeds a given value, execute Step 7; otherwise, repeat Steps 3–6.

Step 7. Terminate the process and obtain a suitable feature-based scheme. In addition, if no feature-based scheme satisfies the condition in Step 6, stop the process.

For example, Step 1 inputs a type VI sample image (Figure 2) with the local subregions and the corresponding object features. Step 2 determines the object features, and the LARG segmentation is implemented (Step 3). Step 4 employs different feature-based schemes to extract features, which are classified using SVM in Step 5. When the given value is 0.9 (i.e., a 90% accuracy rate), Steps 3–6 are repeated until the recognition rate exceeds 0.9. Step 7 determines whether to stop the process. Thus, the method automatically obtains a suitable feature-based scheme for effectively inspecting the hybrid objects.
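Steps 3–7 can be summarized as a selection loop; all callables below are hypothetical stand-ins for the segmentation, extraction, classification, and evaluation stages, not the paper's components.

```python
def select_scheme(schemes, segment, classify, evaluate, r0=0.9):
    """Try feature-based schemes in order until the recognition rate exceeds r0.

    `schemes` maps a scheme name to a feature extractor; `segment` is the
    ARG/LARG segmentation step, `classify` the SVM stage, and `evaluate`
    returns the recognition rate Nc / N on the test set.
    """
    for name, extract in schemes.items():          # e.g., DWT, SWT, INV in sequence
        regions = segment()                        # Step 3: ARG or LARG segmentation
        features = [extract(r) for r in regions]   # Step 4: feature extraction
        predictions = classify(features)           # Step 5: SVM/SVMs classification
        rate = evaluate(predictions)               # Step 6: recognition rate Nc / N
        if rate > r0:
            return name, rate                      # Step 7: suitable scheme found
    return None, 0.0                               # no scheme satisfied the condition
```

The loop terminates either with a scheme whose recognition rate exceeds the given value or, after exhausting all schemes, with no selection.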

Figure 6 illustrates the operation of the feature-selection algorithm in dynamically selecting suitable feature-based schemes to extract features. The operating procedures of this algorithm are summarized as follows. Determine whether the object features correspond to blurred objects. Implement the EFD scheme if the object features correspond to blurred objects; otherwise, implement the DWT, SWT, and INV schemes sequentially (Table 3). Implement SVM/SVMs on the basis of the LARG/ARG segmentation.

Table 3: Different feature-based schemes used sequentially in the feature-selection algorithm based on the four object features.
Figure 6: Feature-selection algorithm.

For DWT feature extraction, an image signal is decomposed into various scales at different levels of resolution. The relationships among the DWT coefficients can be expressed as a_(j+1)[n] = Σ_k h[k - 2n] a_j[k] and d_(j+1)[n] = Σ_k g[k - 2n] a_j[k], where a_j and d_j are the approximation and detail coefficients at j-level decomposition, respectively, and h and g are the low-pass and high-pass analysis filters. For the decomposition process, a_0 is initialized to the original image signal. For SWT feature extraction, the relationship of the SWT coefficients can be expressed analogously, where a_j(θ, φ) and d_j(θ, φ) are the approximation and detail coefficients at j-level decomposition, respectively, and θ and φ are spherical polar angles. For INV feature extraction, a segmented image with gray pixel values f(x, y) at pixel (x, y) is considered, and the central moment can be expressed as μ_pq = Σ_x Σ_y (x - x̄)^p (y - ȳ)^q f(x, y), where (x̄, ȳ) are the coordinates of the segmented image centroid. The seven Hu-type INVs derived from the central moments of the second or third order (p + q = 2 or 3) were used as features.
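The central moment and the first Hu invariant can be computed directly with NumPy; this sketch is illustrative and implements only phi1 = eta20 + eta02 of the seven invariants.

```python
import numpy as np

def central_moment(f, p, q):
    """mu_pq = sum_x sum_y (x - xbar)^p (y - ybar)^q f(x, y)."""
    f = np.asarray(f, dtype=float)
    y, x = np.mgrid[:f.shape[0], :f.shape[1]]              # pixel coordinates
    m00 = f.sum()
    xbar, ybar = (x * f).sum() / m00, (y * f).sum() / m00  # image centroid
    return ((x - xbar) ** p * (y - ybar) ** q * f).sum()

def hu_phi1(f):
    """First Hu invariant phi1 = eta20 + eta02 (normalized central moments)."""
    m00 = np.asarray(f, dtype=float).sum()
    eta = lambda p, q: central_moment(f, p, q) / m00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)
```

Because the moments are taken about the centroid and normalized by m00, phi1 is unchanged when the segmented object is translated within the image, which is what makes such moments usable as object features.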

3.3. Hybrid Blurred/Multiple Object Detection System

This section describes the experimental setup and the proposed hybrid blurred/multiple object detection system. The technology used in manufacturing must classify hybrid blurred/multiple objects; this study therefore modified the conventional inspection process in manufacturing to effectively classify the hybrid objects in a single image. Detecting hybrid objects has long been a difficult task in manufacturing because single-feature-based methods might fail to precisely extract features when concurrently detecting the hybrid objects. Therefore, this study proposes a hybrid blurred/multiple object detection system as a solution to inspecting single hybrid images.

As an example of industrial inspection, the dynamic feature-based method was applied to inspect eyeglass lenses. Figure 7 presents the experimental setup. A signal processing unit was proposed to obtain and transmit signals (Figure 7(a)). To obtain image signals, an industrial computer triggered four charge-coupled device (CCD) cameras through Wi-Fi to acquire synchronous images of multiple objects. The synchronous image information was transmitted from the CCD cameras to the industrial computer equipped with a frame grabber for capturing a single image composed of the synchronous images. To obtain vibration signals, an accelerometer measured the strength of the vibrations from four vibration sensors through Wi-Fi and forwarded them to the industrial computer, which converted the signals to displacements. To produce blurred images, signals were transmitted from the industrial computer to the four vibration triggers through Bluetooth, and the triggers then induced vibrations to shake the camera.

Figure 7: Experimental setup: (a) signal processing unit and (b) inspection device.

Figure 7(b) shows one of the four inspection devices as an example. The lens was mounted on the support frame of a sensing telescope 10.67 m from the target panel. The target panel, installed on a platform, indicated the degrees of the lens. A spring-dashpot system was positioned below the telescope to shake the camera, and the vibration trigger induced vibrations. The vibration sensor, placed on the telescope, sensed the vibrations and forwarded them to an accelerometer through Wi-Fi. The image signal generated using a CCD camera was transmitted through Wi-Fi to the industrial computer, which processed the signal and the displacement information. During inspection, an eyeglass lens of unknown degree was mounted on the telescope, and the surface light from the platform illuminated the target panel. Conventionally, four telescope lenses are manually focused on the target panel individually. The proposed method processes the four inspection devices to quickly determine the degrees of the lenses to solve the hybrid blurred/multiple problem when concurrently focusing the sensing telescopes.

Figure 8 illustrates a block diagram of the proposed hybrid blurred/multiple object detection system. This system senses hybrid blurred/multiple objects and applies suitable feature-based schemes on the basis of the detection results. The operating procedures of this system are summarized as follows. Acquire synchronous images of multiple objects and input the sample images from the image queue, convert the input images and perform EFD-based extraction, classify object features using SVMs, implement ARG/LARG segmentation, apply feature-based schemes, perform SVM/SVMs classification, and determine whether any image remains in the image queue. As shown in Figure 2, for inspecting 200 type VI synchronous images of eyeglass lenses with the local subregions R0,0, R0,1, R1,0, and R1,1, Step 2 converts the input images to 1366 × 768 pixel images with an 8-bit gray level. The 256 gray levels are normalized in the range 0–1, and the images are processed using the EFD-based extraction method. Step 3 uses SVMs to classify the corresponding object features of the subregions. Step 4 implements LARG segmentation on the basis of the multiple-object features. Step 5 applies the suitable schemes for the corresponding subregions. Step 6 performs SVM classification on the basis of the LARG segmentation results. The procedure is complete when no image remains in the image queue, thus completing the inspection; other sample types are inspected similarly.
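The queue-driven procedure can be sketched as a simple pipeline loop; every callable here is a hypothetical stand-in for the corresponding stage in Figure 8, not the system's actual code.

```python
from collections import deque

def inspect_queue(image_queue, extract_efd, svm_features, segment, schemes, classify):
    """Run the detection-system steps over every image in the queue.

    `extract_efd` converts an image and extracts its edge descriptor,
    `svm_features` labels the subregion object features, `segment` performs
    ARG/LARG segmentation, `schemes` applies the selected feature-based
    schemes, and `classify` is the final SVM/SVMs stage.
    """
    results = []
    queue = deque(image_queue)
    while queue:                               # Step 7: stop when queue is empty
        image = queue.popleft()                # Step 1: input from the image queue
        desc = extract_efd(image)              # Step 2: convert, EFD-based extraction
        feats = svm_features(desc)             # Step 3: classify object features
        regions = segment(image, feats)        # Step 4: ARG/LARG segmentation
        extracted = schemes(regions, feats)    # Step 5: apply feature-based schemes
        results.append(classify(extracted))    # Step 6: SVM/SVMs classification
    return results
```

Each image flows through the same six stages, and the loop exits once no image remains in the queue.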

Figure 8: Proposed hybrid blurred/multiple object detection system.

4. Experimental Results and Discussion

This section describes the general classification results obtained using the proposed algorithm and applied to detecting hybrid blurred/multiple objects in manufacturing processes. Experiments were conducted to test the accuracy and performance of the proposed algorithm. The major results from each experiment included hybrid object detections, effectiveness of the proposed system, and accuracy and performance of the proposed algorithm. The results revealed that the proposed algorithm can be used as a hybrid blurred/multiple object inspection tool for dynamically selecting suitable feature-based schemes for inspections. The proposed system, which senses hybrid blurred/multiple objects and applies suitable feature-based schemes, could effectively classify hybrid objects in the local subregions of inspection images and solve the problem associated with concurrently inspecting hybrid objects in a single inspection image. Moreover, this study determined that the proposed algorithm outperformed existing methods.

4.1. General Classification Results Obtained Using the Dynamic Feature-Based Method

In this study, the proposed algorithm was tested in a general classification. Table 4 presents the classes of ratchet samples used for the general classification. For the test case, 80 hybrid ratchet images were randomly selected for types I–VI (Figure 2) separately as training samples, and 200 hybrid images for each type were used for the classification. Figures 9(b)–9(e) depict the type VI ratchet image segmentation results yielded by the dynamic feature-based, DWT-based, SWT-based, and INV-based methods, respectively. The segmented image obtained using the dynamic feature-based method produced clear and continuous contours both for distinct objects (R0,0, R1,0) and for blurred objects (R0,1, R1,1). The method applied suitable schemes for the corresponding subregions. However, the segmented images obtained using the single-feature-based schemes produced discontinuous contours for distinct objects (using SWT and INV) and noise for blurred objects (using DWT, SWT, and INV). The results demonstrate that the proposed algorithm attained the clearest and most continuous contours among the compared schemes. The single-feature-based methods could not detect the hybrid blurred/multiple objects because they might fail to precisely extract features from the subregions. The dynamic feature-based method can concurrently inspect the hybrid objects in a single inspection image.

Table 4: Classes of the samples used in the experiments.
Figure 9: (a) Example of type VI ratchet image and the image segmentations for this image obtained using the (b) dynamic feature-based, (c) DWT-based, (d) SWT-based, and (e) INV-based methods.

Table 5 presents the selection thresholds for type VI ratchet images for the dynamic feature-based method with the suitable descriptors obtained from the study [14]. The thresholds were selected automatically by this method. This study employed leave-one-out cross-validation (LOO-CV) with various thresholds to verify the selection threshold of the method. Table 6 shows that the smallest mean squared error (MSE) was 0.1042. The thresholds were the optimal selections for type VI ratchet images in the inspection because these values yielded the highest accuracy rate (Table 5).
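The leave-one-out validation of the selection thresholds might be sketched as follows. The MSE here is computed from a generic held-out prediction, and both helper functions are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def loo_cv_mse(values, predict):
    """Leave-one-out MSE: predict each held-out value from the remaining ones."""
    values = np.asarray(values, dtype=float)
    errs = []
    for i in range(len(values)):
        rest = np.delete(values, i)                 # hold out the ith sample
        errs.append((predict(rest) - values[i]) ** 2)
    return float(np.mean(errs))

def select_threshold(candidates, score):
    """Pick the candidate threshold with the smallest cross-validation score."""
    scores = {t: score(t) for t in candidates}
    best = min(scores, key=scores.get)
    return best, scores[best]
```

With the candidate thresholds scored by their LOO-CV MSE, the threshold yielding the smallest error (0.1042 in Table 6) is selected automatically.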

Table 5: Selected thresholds for the subregions of type VI ratchet images with j-level DWT and the corresponding descriptors.
Table 6: LOO-CV MSE of the approximation coefficients with various thresholds for type VI ratchet images.

The segmented images obtained using DWT were nearly identical for distinct objects (Figure 9(c)). The classification accuracy rates of the DWT-based method and the proposed algorithm were further compared, and Figure 10 displays the comparison results. The proposed algorithm yielded more accurate results than DWT did for samples II–VI in the hybrid blurred/multiple object inspection with camera shakes. The proposed algorithm was more appropriate for inspecting single/multiple blurred objects in the cases of samples II and IV. The results demonstrate that the proposed algorithm can apply suitable feature-based schemes to inspect hybrid blurred/multiple objects in a single inspection image.

Figure 10: Comparison of the accuracy rates (%) associated with the proposed algorithm and DWT-based method.

This study compared the proposed algorithm with hybrid DWT-based methods combined with distinct deblurring schemes. The image deblurring schemes presented by Whyte et al. [20] and Xu et al. [21] were used for the comparisons. Figure 11 presents the deblurring results of these two schemes. The image segments illustrated in Figures 11(a) and 11(b) were obtained using the scheme of [20], with a nonuniform kernel, and the scheme of [21], with five iterations for updating the latent image and point spread function, respectively. Deblurring the image by using the scheme of [20] produced clear object contours. Therefore, the hybrid scheme of DWT combined with the deblurring of [20] was coded as DWT-DEB and was employed for inspecting samples I–VI. Figure 12 displays the classification results obtained using the proposed algorithm and the hybrid DWT-DEB. The results demonstrate that the proposed algorithm yielded more accurate results than the DWT-DEB method did, especially for multiple blurred object inspection in the case of sample IV.

Figure 11: Image segmentations for the local subregions of type VI ratchet image by using the deblurring schemes by (a) Whyte et al. [20] and (b) Xu et al. [21].
Figure 12: Comparison of the accuracy rates (%) associated with the proposed algorithm and the hybrid DWT-DEB method.
4.2. Hybrid Blurred/Multiple Object Detection in Manufacturing Processes

This section describes the test of the availability of the hybrid blurred/multiple object detection system (Figure 8). The experimental setup is detailed as follows. As displayed in Figure 7, the experimental setup for the test included the four inspection devices and the signal processing unit. The distance between the target panel and the sensing telescope was 10.67 m. Data on the test lenses are listed in Table 4. The lenses were selected from the 200 validation samples for each type. During the inspection, lenses with unknown degrees of curvature were mounted on the telescope, and the surface light from the platform illuminated the target panel.

To evaluate the DWT-based and DWT-DEB-based methods, the same block diagram (Figure 8) used in the proposed method was employed, comprising the following steps: (1) input the 200 validation samples of each type from the image queue; (2) convert the input images and perform EFD-based extraction; (3) perform object-feature classification; (4) perform ARG/LARG segmentation; (5) perform DWT/DWT-DEB extraction; (6) perform SVM/SVMs classification; and (7) determine whether any image remains in the image queue. Figure 13 depicts the image segmentations in manufacturing processes obtained using the proposed algorithm, the DWT-based method, and the DWT-DEB-based method. The segmented image obtained using the proposed algorithm exhibited detailed and clear contours (Figure 13(b)), whereas the segmented images obtained using the DWT-based and DWT-DEB-based methods were nearly identical for the blurred objects (Figures 13(c) and 13(d)). Furthermore, this study quantified the execution time required by the proposed algorithm, the DWT-based method, and the DWT-DEB-based method in terms of computational complexity. The time-cost function quantifies the time required by an algorithm for binary search tree operations and is given by T(n) = O(log n), where O(log n) is the logarithmic time required by an algorithm for all n-sized inputs in big-O notation, which excludes coefficients and lower-order terms. Figure 14 presents the time-cost function and classification accuracy rates. The accuracy rates were 94% for the proposed algorithm, 89% for the DWT-based method, and 91% for the DWT-DEB-based method. The time-cost function for the proposed algorithm was considerably lower than that for the DWT-DEB scheme because the dynamic feature-based method applies suitable feature-based schemes to classify the hybrid images without image deblurring. Therefore, the proposed algorithm outperformed both the single-feature-based method (DWT) and the hybrid deblurring method (DWT-DEB).
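The per-image processing loop of the block diagram can be sketched as follows. All component functions here are hypothetical stand-ins for the paper's EFD, ARG/LARG, DWT, and SVM stages; only the control flow (process each queued image, branch to ARG or LARG, then classify, until the queue is empty) reflects the steps described above.

```python
from collections import deque

# Hypothetical stubs standing in for the paper's real modules.
def efd_extract(image):
    """Edge-feature-descriptor extraction (stub)."""
    return {"n_objects": len(image["objects"])}

def arg_segment(image):
    """Adaptive region growing for single-object images (stub)."""
    return [image["objects"]]

def larg_segment(image):
    """Local ARG: one region per object for multiple-object images (stub)."""
    return [[obj] for obj in image["objects"]]

def dwt_extract(region):
    """DWT feature extraction (toy scalar feature)."""
    return sum(region)

def svm_classify(feature):
    """SVM classification (toy decision rule)."""
    return "pass" if feature > 0 else "fail"

def run_inspection(image_queue):
    """Follow the block diagram: EFD extraction, object-feature
    classification, ARG/LARG segmentation, DWT extraction, and SVM
    classification, looping while images remain in the queue."""
    results = []
    while image_queue:                       # step (7): any image left?
        image = image_queue.popleft()        # step (1): take next sample
        feats = efd_extract(image)           # step (2): EFD extraction
        multiple = feats["n_objects"] > 1    # step (3): object classification
        segments = larg_segment(image) if multiple else arg_segment(image)
        results.append([svm_classify(dwt_extract(s)) for s in segments])
    return results
```

A queue of two toy images, one single-object and one multiple-object, exercises both the ARG and LARG branches.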

Figure 13: Image segmentations for (a) the local subregions of the type VI image in manufacturing processes obtained using the (b) proposed, (c) DWT, and (d) DWT-DEB methods.
Figure 14: Time-cost function and accuracy rates from the availability test of the hybrid blurred/multiple object detection system.

5. Conclusion

This paper proposes a dynamic feature-based algorithm for detecting hybrid blurred/multiple objects in manufacturing as a solution to problems encountered in inspecting hybrid images. The proposed algorithm dynamically selects suitable inspection schemes for classifying the hybrid images and then applies the selected schemes to employ an ARG/LARG-based method to inspect the hybrid objects. The proposed algorithm can effectively classify hybrid objects in the local subregions of inspection images and solve the problem associated with concurrently inspecting hybrid objects in a single inspection image. The results demonstrate that the proposed algorithm can be used as a hybrid blurred/multiple object inspection tool for dynamically selecting suitable feature-based schemes for inspections. Moreover, the hybrid blurred/multiple object detection system can sense hybrid blurred/multiple objects and apply suitable schemes to attain an average recognition rate of 94% (from 92% to 95%). The proposed algorithm outperformed single-feature-based methods (DWT, SWT, and INV) and the hybrid deblurring method (DWT-DEB).

Competing Interests

The author declares no competing interests regarding the publication of this paper.

References

  1. M. Weyrich, M. Laurowski, P. Klein, and Y. H. Wang, “A real-time and vision-based methodology for processing 3D objects on a conveyor belt,” International Journal of Systems Applications, Engineering & Development, vol. 5, no. 4, pp. 561–569, 2011.
  2. Y.-C. Chen, J.-H. Yu, M.-C. Xie, and F.-J. Shiou, “Automated optical inspection system for analogical resistance type touch panel,” International Journal of Physical Sciences, vol. 6, no. 22, pp. 5141–5152, 2011.
  3. H.-D. Lin and H.-H. Tsai, “Automated quality inspection of surface defects on touch panels,” Journal of the Chinese Institute of Industrial Engineers, vol. 29, no. 5, pp. 291–302, 2012.
  4. A. Rebhi, S. Abid, and F. Fnaeich, “Texture defect detection using local homogeneity and discrete cosine transform,” World Applied Sciences Journal, vol. 31, no. 9, pp. 1677–1683, 2014.
  5. W. K. Wong, C. W. M. Yuen, D. D. Fan, L. K. Chan, and E. H. K. Fung, “Stitching defect detection and classification using wavelet transform and BP neural network,” Expert Systems with Applications, vol. 36, no. 2, pp. 3845–3856, 2009.
  6. Z.-H. Huang, W.-J. Li, J. Shang, J. Wang, and T. Zhang, “Non-uniform patch based face recognition via 2D-DWT,” Image and Vision Computing, vol. 37, pp. 12–19, 2015.
  7. A. Kumar, P. Rastogi, and P. Srivastava, “Design and FPGA implementation of DWT, image text extraction,” Procedia Computer Science, vol. 57, pp. 1015–1025, 2015.
  8. Y. Zhang, S. Wang, P. Phillips, Z. Dong, G. Ji, and J. Yang, “Detection of Alzheimer's disease and mild cognitive impairment based on structural volumetric MR images using 3D-DWT and WTA-KSVM trained by PSOTVAC,” Biomedical Signal Processing and Control, vol. 21, pp. 58–73, 2015.
  9. B. Xiao, J.-T. Cui, H.-X. Qin, W.-S. Li, and G.-Y. Wang, “Moments and moment invariants in the Radon space,” Pattern Recognition, vol. 48, no. 9, pp. 2772–2784, 2015.
  10. L. Diao, J. Peng, J. Dong, and F. Kong, “Moment invariants under similarity transformation,” Pattern Recognition, vol. 48, no. 11, pp. 3641–3651, 2015.
  11. H. Laga, H. Takahashi, and M. Nakajima, “Spherical wavelet descriptors for content-based 3D model retrieval,” in Proceedings of the IEEE International Conference on Shape Modeling and Applications 2006 (SMI '06), p. 15, IEEE, Matsushima, Japan, June 2006.
  12. P. Görgel, A. Sertbas, and O. N. Ucan, “Mammographical mass detection and classification using Local Seed Region Growing-Spherical Wavelet Transform (LSRG-SWT) hybrid scheme,” Computers in Biology and Medicine, vol. 43, no. 6, pp. 765–774, 2013.
  13. M. Zimbres, R. Alves Batista, and E. Kemp, “Using spherical wavelets to search for magnetically-induced alignment in the arrival directions of ultra-high energy cosmic rays,” Astroparticle Physics, vol. 54, pp. 54–60, 2014.
  14. T. K. Lin, “A novel edge feature description method for blur detection in manufacturing processes,” Journal of Sensors, vol. 2016, Article ID 6506249, 10 pages, 2016.
  15. X. Zhang, X. Li, and Y. Feng, “A medical image segmentation algorithm based on bi-directional region growing,” Optik, vol. 126, no. 20, pp. 2398–2404, 2015.
  16. I. Lázár and A. Hajdu, “Segmentation of retinal vessels by means of directional response vector similarity and region growing,” Computers in Biology and Medicine, vol. 66, pp. 209–221, 2015.
  17. R. Rouhi, M. Jafari, S. Kasaei, and P. Keshavarzian, “Benign and malignant breast tumors classification based on region growing and CNN segmentation,” Expert Systems with Applications, vol. 42, no. 3, pp. 990–1002, 2015.
  18. D. Qu, W. Li, Y. Zhang et al., “Support vector machines combined with wavelet-based feature extraction for identification of drugs hidden in anthropomorphic phantom,” Measurement, vol. 46, no. 1, pp. 284–293, 2013.
  19. T. K. Lin, “Adaptive learning method for multiple-object detection in manufacturing,” Advances in Mechanical Engineering, vol. 7, no. 12, pp. 1–12, 2015.
  20. O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, “Non-uniform deblurring for shaken images,” International Journal of Computer Vision, vol. 98, no. 2, pp. 168–186, 2012.
  21. Y. Xu, X. Hu, and S. Peng, “Blind motion deblurring using optical flow,” Optik, vol. 126, no. 1, pp. 87–94, 2015.
  22. T.-K. Lin, “A novel automated inspection approach based on adaptive region-growing image segmentation,” Journal of the Chinese Society of Mechanical Engineers, vol. 35, no. 1, pp. 57–65, 2014.