Abstract

This paper presents a principal component analysis (PCA)-integrated algorithm for feature identification in manufacturing; the algorithm is based on an adaptive PCA scheme for identifying image features in vision-based inspection. PCA is a commonly used statistical method for pattern recognition tasks, but an effective PCA-based approach for identifying suitable image features in manufacturing has yet to be developed. Unsuitable image features tend to yield poor results when used in conventional visual inspections, and research has revealed that unsuitable or redundant features can degrade object detection performance. To address these problems, the adaptive PCA-based algorithm developed in this study identifies suitable image features using a support vector machine (SVM) model for inspecting various object images; this approach can be used to solve the detection problems that arise when feature extraction yields challenging image features in manufacturing processes. The algorithm combines image feature extraction with PCA/SVM classification to detect patterns in manufacturing. Experimental results indicate that the proposed algorithm adaptively selects appropriate image features, achieves high detection performance, and outperforms existing methods.

1. Introduction

Feature extraction has been widely used in automated inspection systems. Image feature extraction [1] is a dimensionality reduction approach that is frequently applied in vision-based inspection systems. Feature identification plays a major role in vision-based inspection because an image sequence contains redundant inputs, which can cause poor performance in manufacturing inspection processes. Incorporating image feature identification into inspection systems has long presented a challenge to researchers. The current study proposes an adaptive learning technique for use in manufacturing. The proposed technique integrates principal component analysis (PCA) [2] with various image feature extraction techniques to identify image features on the basis of a support vector machine (SVM) [3]. The technique can effectively and adaptively analyze an image sequence and derive suitable features for detecting different objects. The adaptive PCA and feature extraction-based algorithm can handle redundant inputs in an image sequence from a sensing device.

Although PCA is a common variable reduction technique used in vision-based inspection, how to effectively use an adaptive PCA and feature extraction-based algorithm for image feature identification remains unexplored [4–6]. Unsuitable image features are a common cause of poor performance in manufacturing, and the use of a single-feature-based algorithm to detect different objects may fail to precisely extract features, leading to unsuitable image feature identification. Thus, this study developed an image feature identification technique for adaptively selecting suitable image features on the basis of PCA-based algorithms. The technique uses a PCA-integrated scheme to effectively and adaptively select suitable image features in manufacturing and can identify the optimal number of image features in a processing framework when detecting different objects. This adaptive dimensionality reduction technique is appropriate for establishing suitable feature extraction approaches and corresponding image features for detecting various patterns in manufacturing.

This article is organized as follows. Section 2 provides a review and discussion of related work. The PCA-integrated algorithm for feature identification and the detection system using the identified features are presented in Section 3. Section 4 presents the results of experiments in which the detection system was applied and compares the proposed method with several machine learning methods. The final section states the conclusions.

2. Related Work

Studies have probed the application of the PCA method, or the integration of various methods with PCA, for detecting faults and for identifying, classifying, and reconstructing sensed data. Studies have proposed effective variable reduction techniques that involve diverse PCA-based approaches for processing sensed data with multiple variables. Using double-track railway lines as the detection objects, Espinosa et al. [7] used PCA-based classification to identify broken rails; their experimental results demonstrated a 100% success rate. Li et al. [8] applied PCA in a nuclear power plant to detect faults and reconstruct sensor signals. PCA has additionally been used to reduce the dimensionality of feature variables for fault diagnosis and object identification [9, 10]. Liu et al. [11] used a deep PCA network to achieve more distinctive representations of face images; their proposed PCA model was determined to be insensitive to lighting and robust to occlusions. Various methods associated with PCA have been demonstrated to be highly effective for classifying and reconstructing sensed data. Lazzari et al. [12] applied Fourier transform infrared spectroscopy combined with PCA to classify biomass on the basis of its oil composition; the biomass samples in their study were divided into three groups that exhibited distinct compositions. Ren et al. [13] proposed a bidimensional empirical mode decomposition and PCA system to suppress a nonlinear fringe in an interference imaging spectrometer. They verified the feasibility of this approach by applying it to fringe and interferogram reconstruction processes using a threshold based on principal component synthesis.

The PCA-based multivariate technique developed in this study is similar to those developed in previous studies. Nevertheless, the proposed technique integrates PCA with suitable feature extraction approaches when detecting objects; it establishes suitable feature extraction approaches and corresponding image features for detecting various patterns in manufacturing. The adaptive PCA and feature extraction-based algorithm determines the ideal number of image features and derives the suitable features for detecting different objects; thus, it can be used to solve the detection problems that occur when unsuitable image sequence features with redundant or large inputs from sensing devices are employed.

Studies on object detection are discussed as follows. A learning method based on local adaptation and region growing was proposed for subjecting multiple-camera images to adaptive segmentation for multiple-object detection [14]; this method overcomes the problems that arise from the concurrent use of multiple sensing devices. To detect the blurred or multiple objects that constitute hybrid images, a dynamic algorithm that employs feature scheme selection was proposed for classifying such objects [15]; the algorithm can dynamically select suitable feature extraction schemes for hybrid object classification and detection. A previous study also proposed a PCA/SVM-based approach for identifying multivariate patterns from tactile and optical measurements in a system with multiple sensors [16]; the approach uses a PCA-based method and an algorithm based on edge feature description (EFD) to detect patterns from tactile and optical measurements, respectively. In summary, other studies have detected objects using an image segmentation technique [14], a feature scheme selection algorithm [15], and a tactile and optical measurement scheme [16]. In contrast to these studies, the current study proposes a PCA-integrated algorithm for identifying suitable image features for effectively detecting objects. The algorithm eliminates the redundant features produced by the extraction methods and thereby improves upon previous object detection methods; the adaptive PCA and feature extraction-based algorithm can thus solve the problems encountered in prior research when unsuitable features are employed.

3. Proposed Method

This study proposes an adaptive PCA and feature extraction-based processing framework. The proposed algorithm for identifying suitable image features is introduced, and the system’s use in manufacturing is explained.

3.1. Adaptive PCA and Feature Extraction-Based Processing Framework

The proposed processing framework used for identifying suitable image features can be an alternative to approaches that entail selecting feature-based schemes [15] for effective pattern detection.

Figure 1 depicts a schematic illustrating the processing framework; this framework enables the utilization of the SVM classification results for identifying suitable image features. The processing steps of this framework are as follows (a brief code sketch of the adaptive loop in Steps 3–12 follows the step list):

Step 1. Provide input image signals.

Step 2. Convert the signals into an image with a spatial resolution of 1024 × 768 pixels and an 8-bit gray level normalized to the range between 0 and 1 (image preprocessing).

Step 3. Code the image pixel (PIS), EFD, discrete wavelet transform (DWT), spherical wavelet transform (SWT), and moment invariants (INV) feature extraction methods as j = 0, 1, 2, 3, and 4, respectively, and apply them sequentially. The PIS method can be used to modify the other extraction methods (EFD, DWT, SWT, and INV) [15] for precise feature extraction.

Step 4. Initialize the image seed value as 1 and apply these seeds in an adaptive region growing (ARG)-based algorithm [14] to group neighboring pixels; subsequently, execute ARG image segmentation.

Step 5. Execute the PIS, EFD, DWT, SWT, and INV feature extraction methods sequentially.

Step 6. Set the number of features n and the feature index i_j sequentially, where i_0 is set from 1 to 1024 for the number of column vectors of the image pixels; i_1 is set from 1 to 7 for the number of edge patterns; i_2 is set from 1 to 6 for the number of DWT decomposition levels; i_3 is set from 1 to 6 for the number of SWT decomposition levels; and i_4 is set from 1 to 7 for the number of INV features.
The principal components are expressed as

\[ pc_j = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n, \]

where pc_j represents the principal component of PIS, EFD, DWT, SWT, and INV (j = 0, 1, 2, 3, and 4, respectively), x_n represents the corresponding features, and c_n represents the numerical coefficient of x_n.

Step 7. For the data translation process of the algorithm, normalize the data to zero mean and unit variance.

Step 8. Train the PCA model and execute PCA-based selection (the optimal principal components can be determined using the scree test).

Step 9. Use optimal principal components to establish indicators.

Step 10. Use SVM to classify the indicators.

Step 11. Determine the recognition rate for the given image. Proceed to Step 12 if the recognition rate is higher than a given threshold t; otherwise, repeat Steps 6–11.

Step 12. Once all sample images have been tested, stop the process; otherwise, repeat Steps 3–12. In addition, stop the algorithm if a segmented image fails to meet the condition in Step 11.

Step 13. Identify features using the optimal principal components.

Step 14. Obtain suitable image features for detecting various patterns in manufacturing.
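The adaptive loop in Steps 3–12 can be summarized by the minimal Python sketch below. It is illustrative only: the function structure, the half-and-half training/validation split, and the use of the eigenvalue-greater-than-one rule alone in place of the full scree test (which, as described in Section 3.2, also checks communalities) are assumptions; the SVM settings C = 2^7 and γ = 2^−9 are taken from Section 3.3, and the recognition-rate check against the threshold t follows Step 11.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


def identify_suitable_features(feature_sets, labels, t=0.95):
    """Sketch of Steps 3-12: loop over extraction methods and feature counts,
    keep principal components with eigenvalue > 1 (a simplified scree test),
    and accept the first configuration whose SVM recognition rate exceeds t.

    feature_sets: dict mapping a method name ("PIS", "EFD", "DWT", "SWT", "INV")
                  to an (n_samples, n_features) array of extracted features.
    labels:       class label of each sample.
    """
    for method, X_full in feature_sets.items():                 # Step 3: methods in sequence
        for n in range(1, X_full.shape[1] + 1):                 # Step 6: number of features
            X = X_full[:, :n]
            X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # Step 7: zero mean, unit variance
            pca = PCA().fit(X)                                  # Step 8: PCA model training
            keep = pca.explained_variance_ > 1.0                # simplified scree criterion
            if not keep.any():
                continue
            indicators = pca.transform(X)[:, keep]              # Step 9: indicators
            X_tr, X_te, y_tr, y_te = train_test_split(
                indicators, labels, test_size=0.5, random_state=0)
            rate = SVC(C=2**7, gamma=2**-9).fit(X_tr, y_tr).score(X_te, y_te)  # Steps 10-11
            if rate > t:                                        # Step 11: threshold check
                return method, n, np.flatnonzero(keep), rate    # Steps 13-14
    return None                                                 # Step 12: no configuration met t
```

In the full framework, the feature matrices would come from the ARG-segmented images of Step 4 and the five extraction methods of Step 5.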

3.2. PCA and Feature Extraction-Based Algorithm for Identifying Image Features

The PCA and feature extraction-based algorithm modifies a previously derived PCA-based method [16] to decorrelate image extraction data for identifying suitable features. The previously developed PCA-based method analyzes only vibration-sensed data for object detection. By contrast, the proposed algorithm combines PCA with image feature extraction for identifying features.

PCA is a statistical signal processing technique used to decorrelate original sensed data. It can usefully be applied to reduce data dimensionality because it converts a possibly correlated set of data into an uncorrelated data set [17]; the number of uncorrelated components is less than or equal to the number of original variables. The proposed PCA and feature extraction-based algorithm uses eigenvalue decomposition to generate eigenvectors and eigenvalues that represent the amount of variation in the image features. Consequently, high-dimensional correlated feature sets are transformed into low-dimensional uncorrelated feature sets. As presented in (3), the data matrix P consists of original data with n process variables (columns) and m raw samples (rows):

\[ P = [p_1, p_2, \ldots, p_m]^T, \]

where p_i represents the column vector of the ith normalized sample. The covariance matrix Ω captures the correlation among the variables and can be expressed as

\[ \Omega = \frac{1}{m-1} P^T P. \]

Ω can be subjected to eigenvalue decomposition; after this process, a sample p can be decomposed as

\[ p = p_e + p_r, \]

where p_e denotes the projection vector of p onto the principal component subspace, whereas p_r denotes the projection vector of p onto the residual subspace and can be applied for feature identification. The definition of p_r is

\[ p_r = p - V_k V_k^T p, \]

where V_k represents the first k columns of the eigenvector matrix; the column vectors of this matrix correspond to nonnegative real eigenvalues. Sequencing the associated eigenvalues in descending order of magnitude yields

\[ \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n \geq 0. \]

The indicator I is used to determine whether the features belong to the pattern and is defined as

\[ I = \| p_r \|^2. \]

Figure 2 depicts a flowchart illustrating the procedure of the PCA and feature extraction-based algorithm for identifying suitable image features. In this identification procedure, each data set comprises 1024 sample images. The input images are converted into images with a spatial resolution of 1024 × 768 pixels and an 8-bit gray level (normalized to the range between 0 and 1). The PCA model is trained using 512 randomly selected samples for each data set, and the remaining samples are used to evaluate the accuracy of the PCA-based selection. The PIS, EFD, DWT, SWT, and INV feature extraction methods are executed sequentially. The matrix P_j (j = 0, 1, 2, 3, 4), presented in the following equation, consists of feature data with 512 samples (rows) and n process features (columns):

\[ P_j = [p_1, p_2, \ldots, p_{512}]^T, \]

where p_m is the mth normalized sample vector formed from the intensity column vectors of the image pixels, and n is the number of extracted features for P_1–P_4 and the 1024 × 768 image pixels for P_0.
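The decomposition described above can be sketched with NumPy as follows. The residual-based indicator I = ||p_r||^2 is an interpretation of the text (analogous to the squared prediction error statistic commonly paired with PCA), and the function and variable names are illustrative.

```python
import numpy as np


def pca_residual_indicator(P, k):
    """Fit PCA on the normalized data P (m samples x n features) and return a
    function that maps a new sample to the indicator I = ||p_r||^2, the squared
    norm of its projection onto the residual subspace."""
    mean = P.mean(axis=0)
    std = P.std(axis=0) + 1e-12
    Pn = (P - mean) / std                           # zero mean, unit variance
    cov = np.cov(Pn, rowvar=False)                  # covariance matrix Omega
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalue decomposition
    order = np.argsort(eigvals)[::-1]               # eigenvalues in descending order
    Vk = eigvecs[:, order[:k]]                      # first k columns of the eigenvector matrix

    def indicator(p):
        p = (p - mean) / std                        # normalize with the training statistics
        p_e = Vk @ (Vk.T @ p)                       # projection onto the principal component subspace
        p_r = p - p_e                               # projection onto the residual subspace
        return float(p_r @ p_r)                     # indicator I = ||p_r||^2
    return indicator
```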

Selection of suitable image features for effectively detecting patterns is conducted according to the following steps in the proposed algorithm:

Step 1. Input the feature data Pj (setting j = 0, 1, 2, 3, 4 sequentially) with n process features.

Step 2. Normalize Pj to zero mean and unit variance.

Step 3. Subject the normalized data to eigenvalue decomposition. Subsequently, identify the optimal principal components by applying the scree test [18], which can be represented by the following expression:

\[ S = \begin{cases} 1, & \lambda_i > 1 \text{ and } C_i > 0.5 \\ 0, & \text{otherwise,} \end{cases} \]

where the parameter S is used to cut down the number of principal components and identify the optimal components, λ_i is the eigenvalue of the ith principal component, and C_i represents the communality observed for each of the n process features. When λ_i and C_i are greater than 1 and greater than 0.5, respectively, S equals 1 and the component of interest is retained; otherwise, S = 0 and the component of interest is eliminated.

Step 4. On the basis of the identified optimal principal components, establish the I indicator.

Step 5. Employ an SVM model to classify I on the basis of Ii.

Each Ii is attained through PCA model training; the indices i = 0, 1, 2, 3 index the sample classes. Each indicator consists of 512 samples for each training data set. The training process entails the following tasks: selecting operational data, normalizing the selected operational data into training data, using the training data to identify the optimal principal components, and establishing Ii according to the identified optimal principal components.

Step 6. Determine the accuracy of the executed recognition process (also referred to as the recognition rate):

\[ \text{Recognition rate} = \frac{N_C}{N}, \]

where N_C represents the number of images correctly classified in the executed test run and N represents the total number of test samples (N = 512 in this case). Step 7 begins if the derived recognition rate is higher than a particular accuracy threshold t; if not, reset the feature number n (Step 6 in Section 3.1) and repeat Steps 1–6. The accuracy threshold t is defined as

\[ t = T_i, \qquad T_i = 0.90 + 0.01\,i, \qquad i = 0, 1, 2, \ldots, 9, \]

where the threshold values T_i are set sequentially in the range 0.90–0.99.

Step 7. Terminate the process and obtain the optimal principal components PCj with the corresponding suitable features fj (j = 0, 1, 2, 3, 4). The algorithm ceases operation if any segmented image cannot satisfy the condition outlined in the preceding step (i.e., Step 6).

Suppose that, per Step 1, new samples of class C with feature data Pj and n process features are tested with a set recognition-rate threshold t of 0.95. Step 2 normalizes Pj to zero mean and unit variance. In Step 3, the optimal principal components and the corresponding suitable image features are determined according to the scree test. The indicator I is established in Step 4. In Step 5, I is classified using the SVM according to I2. In Step 6, the recognition rate is determined to be 0.96, which exceeds the t value of 0.95, and Step 7 then commences. In Step 7, the process stops, and suitable features for detecting class C are obtained. Thus, in this example, the algorithm obtains optimal principal components with suitable image features, and sample class C is effectively detected.
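Steps 1–7, and in particular the scree-test selection of Step 3, can be illustrated with the sketch below. The retention rule (eigenvalue greater than 1 and communality greater than 0.5) follows the description in Step 3; computing the communality of each feature as the sum of its squared loadings on the retained components is a standard definition assumed here.

```python
import numpy as np


def scree_select(X):
    """Retain principal components with eigenvalue > 1 and flag as suitable the
    features whose communality over the retained components exceeds 0.5."""
    Xn = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)      # Step 2: normalization
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xn, rowvar=False))
    order = np.argsort(eigvals)[::-1]                        # descending eigenvalues
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    retained = eigvals > 1.0                                 # S = 1 when the eigenvalue exceeds 1
    loadings = eigvecs * np.sqrt(np.maximum(eigvals, 0.0))   # component loadings
    communality = (loadings[:, retained] ** 2).sum(axis=1)   # C_i for each process feature
    suitable = np.flatnonzero(communality > 0.5)             # features judged suitable
    return np.flatnonzero(retained), suitable
```

In this sketch, the retained component indices would feed the indicator of Step 4, and the suitable-feature indices correspond to the fj obtained in Step 7.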

3.3. Detection System Using Suitable Image Features in Manufacturing

This study developed a detection system for eyeglass manufacturing (Figure 3). The detection system was modified for the experiments, and a decision system for identifying suitable features was applied in the experimental setup [15]; specifically, the experimental setup comprised the detection system and the decision system. Table 1 presents the classes of pattern samples used in the experiments. A target panel, indicating the degree of orientation of an eyeglass, was installed on a platform. For detection, an eyeglass with an unknown degree of curvature was affixed to a telescope’s support frame, and the distance between the target panel and the telescope was 10.67 m. Illumination for the target panel was provided by the platform’s surface light. Through adjustment of the telescope’s focus, images from the telescope were projected onto the target panel; manual adjustment of the focus changed the degree of orientation of the eyeglass. The detection system processed telescopic images captured by a digital camera and rapidly determined the degree of orientation of the eyeglass without the telescope having to be focused on the target panel.

The decision system for determining suitable features includes an SVM model. Figure 4 displays a block diagram of the mentioned system’s operating procedure.

The SVM model has been used for classifying segmented image data [14], feature selection data [15], and vibration/image sensed data [16]. The present study employed the SVM model to classify the suitable features. The model is described as follows. Consider a set of samples belonging to different classes, with inputs in an N-dimensional input space and corresponding class labels (target outputs). The model [16] requires optimizing C (a positive parameter specified by the user) and γ (a radial basis function kernel parameter). C is typically applied to control the trade-off between the complexity of the SVM and the training error. In the present study, the two parameters were derived by executing a hold-out procedure, which entailed dividing the samples into two categories: training samples and test samples. Training samples were used for classifier training, and the remaining samples were used for testing classifier accuracy. In the model, each run had 1024 sample images. Training the model entailed randomly selecting 512 images as training samples; the rest of the images served as samples for evaluating SVM classifier accuracy. Different combinations of the two parameters were tested in the SVM model, and the resulting accuracy data are provided in Table 2. Highly accurate results were yielded when the combination C = 2^7 and γ = 2^−9 was used. Classification results were also derived for various sample sizes (Table 3); as the tabulated results demonstrate, increasing the sample size improved the classification results, although the results for 1024 and 2048 samples were nearly identical, indicating that sample size had little further effect beyond 1024 samples.
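The hold-out parameter search described above can be sketched with scikit-learn's RBF-kernel SVC as follows. The grid ranges are illustrative assumptions; the text reports only that the combination C = 2^7 and γ = 2^−9 yielded highly accurate results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


def hold_out_svm(X, y,
                 C_grid=2.0 ** np.arange(-1, 10),
                 gamma_grid=2.0 ** np.arange(-12, -2)):
    """Hold-out search over (C, gamma): half the samples train the classifier,
    the other half estimate its accuracy; return the best combination found."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
    best = (None, None, 0.0)
    for C in C_grid:
        for gamma in gamma_grid:
            acc = SVC(C=C, gamma=gamma).fit(X_tr, y_tr).score(X_te, y_te)
            if acc > best[2]:
                best = (C, gamma, acc)
    return best
```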

The decision system (Figure 4) enables sample detection by employing suitable features obtained from the PCA/feature extraction-based algorithm (Figure 2). The notations used in Figure 4 are defined as follows:

PCj: optimal principal components for the various samples;
fj: corresponding suitable features for PCj;
image queue: the queue of test sample images;
Qi: the operation executed to pop an image from the image queue.

The detection procedure entails the following steps: (1) set PCj and fj for the sample class to be detected; (2) check the test sample image queue; if the queue is not empty, Step 3 commences; otherwise, terminate the process; (3) initiate the iterative procedure from Q0 by popping an image from the queue; (4) convert the image into an 8-bit grayscale image; (5) execute PCA on the basis of the suitable features fj; and (6) establish the indicator I and then classify it on the basis of Ii (obtained from PCA model training) using the SVM model. For instance, to detect class A samples in the queued images, PCj and fj can be set to the optimal principal components and suitable features obtained for class A. The iterative procedure can be initiated from Q0 by popping one of the images from the queue. In each iteration, PCA, indicator establishment, and indicator classification through the SVM can be executed. Once the queue contains no images, the overall procedure is completed. Subsequently, for the detection of another sample class, PCj and fj can be reset; the detection procedure can then be executed again and is considered complete when no images remain in the queue.
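A compact sketch of the detection loop of Figure 4 is given below, using a deque as the test sample image queue. The to_gray, indicators, and svm arguments stand in for the preprocessing step, the per-class indicator functions obtained from PCA model training, and the trained SVM classifier; their exact interfaces are assumptions made for illustration.

```python
from collections import deque

import numpy as np


def detect_samples(image_queue, to_gray, indicators, svm):
    """Pop images from the queue, convert each to a normalized grayscale array,
    build its indicator values, and classify them with the SVM model."""
    queue = deque(image_queue)
    results = []
    while queue:                                             # stop when the queue is empty
        img = queue.popleft()                                # Q_i: pop one image
        gray = to_gray(img)                                  # 1024 x 768, 8-bit, scaled to [0, 1]
        I = np.array([f(gray.ravel()) for f in indicators])  # one indicator value per class
        results.append(svm.predict(I.reshape(1, -1))[0])     # SVM classification
    return results
```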

4. Experimental Results and Discussion

Experiments were executed in this study to evaluate the performance of the PCA and feature extraction-based algorithm in identifying features; the detection accuracy of the system was also assessed. The major results of each experiment concerned the image feature identification of the proposed method, the algorithm’s performance, and the detection accuracy. The experimental results demonstrated that the algorithm is effective in identifying image features in the detection system. The system employs suitable image features obtained through the method to effectively classify pattern samples and efficiently detect patterns in manufacturing.

4.1. Image Feature Identification in the General Classification

A general classification test concerning the feasibility of the PCA and feature extraction-based algorithm was executed in this study. Test sample data are presented in Table 4. In the executed test, samples were selected from the 512 validation samples for each class. Moreover, one of the samples was tested for classification performance. The detection system (Figure 3) was applied to process an image captured by a digital camera.

Table 5 lists the optimal principal components along with the suitable image features obtained using the PCA and feature extraction-based algorithm; the table also indicates the values of the optimal principal components PCj (PCj > 1). In the test, the selections yielding the highest average accuracy rate of 96% (Table 6) were determined to be optimal, and the indicators Ii (i = 0, 1, 2) were derived for the corresponding classes in the classification. The overall classification results derived from the proposed method are provided in Table 7, revealing the average accuracy rate derived using the selected indicators to be 96%. Figure 5 illustrates the image segmentation results attained using the method with the candidate principal components; the selected components were the superior choices because they produced continuous contours for the class samples and achieved a higher accuracy rate (Table 6).

One study [14] quantitatively compared detection methods. The current study applied a modified version of that quantitative comparison to precisely evaluate the proposed detection method’s performance; the self-organizing map (SOM) [19] replaced the Bayes classifier used in that study. Three learning algorithms, namely the SOM, backpropagation neural network (BPNN) [20], and K-nearest neighbor (KNN) [21] classifiers, were also used for comparison with the proposed method. To evaluate the performance of these detection methods, 512 randomly selected images served as training samples, and another 512 images served as validation samples.

The following summarizes the procedures for selecting the parameters of the various learning methods: input the indicators; convert the input data (the converted data serve as the input vector for SOM, the data set for KNN, and the three input neurons for BPNN); execute the learning algorithms (the Kohonen algorithm [22] trains the SOM network, the KNN classifies the input data set through a majority vote of its nearest neighbors, and the BPNN trains a neural network with a three-four-three layer structure); and obtain the classification results from the output layers (the three neurons of the Kohonen map for SOM, the three class memberships for KNN, and the three corresponding output neurons for BPNN). For example, SOM comprises an input layer and an output layer (the Kohonen map). The input vector is fed into the neural network by the input layer, and each input is connected to the corresponding neurons in the output layer. The Kohonen learning algorithm is used to train the SOM network, and the output derived from SOM is a two-dimensional map comprising three neurons in the output layer. In KNN classification, the input data constitute a set, and the class membership constitutes the output; for classifying the input data, a majority vote is conducted over their k = 5 nearest neighbors. The BPNN comprises three layers, with the input layer comprising three neurons, the hidden layer comprising four neurons, and the output layer comprising three neurons.

Figure 6 displays the accuracy rates achieved in the experiments. The SVM model achieved a higher accuracy rate than the methods employing the other learning classifiers did because the PCA/SVM classification can handle redundant inputs in the detection process. The accuracy rate was 96% for the SVM model, 91% for the SOM classifier, 93% for the KNN classifier, and 90% for the BPNN classifier, demonstrating the superior accuracy of the SVM model. A possible reason for this superiority is that the SVM model can efficiently perform nonlinear image feature classification and implicitly map its inputs into a high-dimensional feature space, which is appropriate for inputs with a reduced number of image features. Moreover, the SVM model maximizes the separation margin between two classes in the feature space.
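The comparison can be reproduced in outline with scikit-learn, as sketched below. The KNN uses k = 5 and the BPNN a single hidden layer of four neurons (the three-four-three structure mentioned in the text); the SOM is omitted because scikit-learn provides no SOM implementation, and all remaining settings are illustrative assumptions.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC


def compare_classifiers(I, y):
    """Compare the SVM, KNN (k = 5), and a 3-4-3 BPNN on the three-dimensional
    indicator inputs, using a half/half hold-out split for accuracy estimation."""
    X_tr, X_te, y_tr, y_te = train_test_split(I, y, test_size=0.5, random_state=0)
    models = {
        "SVM": SVC(C=2**7, gamma=2**-9),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "BPNN": MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0),
    }
    return {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```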

4.2. Pattern Detection Using Suitable Image Features in Manufacturing

The detection system was applied in a pattern detection process in manufacturing to demonstrate its feasibility. Table 1 presents data concerning the test patterns, which were chosen from the 512 aforementioned validation samples for each class. During detection, an eyeglass exhibiting an unknown degree of orientation was affixed to the telescope’s support frame. Subsequently, telescopic images captured by the digital camera were processed using the detection system. The camera was triggered to capture images and forward them wirelessly (via Wi-Fi) to an industrial computer; the forwarded images were then converted into images with a spatial resolution of 1024 × 768 pixels and an 8-bit gray level (normalized to the range between 0 and 1).

The PCA and feature extraction-based algorithm was applied to adaptively obtain indicators along with suitable image features in order to effectively detect patterns. Table 8 lists the suitable image features, optimal principal components, and indicators for each class in the detection; the indicators I0, I1, I2, and I3 correspond to the four classes. The indicators, along with the suitable image features fj and the corresponding optimal principal components, were derived using the adaptive method for each class. The selected optimal principal components and the corresponding image features in the indicators were determined to be suitable for the detection because the SWT feature extraction method can perform satisfactorily for spherical geometric patterns (±6.0° and ±4.5°). Furthermore, additional features were selected to alleviate the poor performance of the method in detecting similar patterns (±4.5° and ±3.0°). Figure 7 illustrates the image segmentation results achieved through the execution of the PCA and feature extraction-based algorithm with the candidate principal components; the selected components were determined to be the optimal selections for the detection because they produced detailed information for the class samples. Table 9 presents the detection results derived through the PCA and feature extraction-based algorithm; an average accuracy rate of 96% was achieved.

A previous study proposed a system for hybrid blurred/multiple-object detection [15]; in the current study, this system was used for comparison in the evaluation of the proposed method. The procedures of the proposed method (Figure 4) were executed as follows: derive the 512 validation samples for each class as the input from the image queue; subject the derived input images to a conversion process and execute ARG segmentation; execute the EFD, DWT, SWT, and INV feature extraction methods sequentially; perform PCA-based selection; establish the indicators; execute the SVM model for classification; and review the image queue to check whether any image remains. A time-cost function O(log n) was also employed to evaluate the method. O(log n) bounds the logarithmic time needed by a system for all n-sized inputs in big-O notation, excluding lower-order terms and coefficients; it approximates the amount of time an algorithm needs to perform binary search tree operations. In the detection process executed in this study, classification accuracy rates (%) and the time-cost function (μs) were derived, as displayed in Figure 8. The results revealed the accuracy rate to be 91% for PCA/EFD, 89% for PCA/DWT, 90% for PCA/SWT, 85% for PCA/INV, and 96% for the method proposed in this study. The time cost for the proposed method was higher than those for the other four extraction methods because the proposed method adaptively uses multiple suitable image features, whereas the other four methods directly use only a single feature in the segmented image. However, at a time cost lower than 40 μs, the PCA/feature extraction-based method employing suitable image features outperforms the methods employing a single feature with respect to sample classification.
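The per-sample time costs reported in Figure 8 could be approximated with a simple wall-clock measurement such as the sketch below; this measurement approach is an assumption made for illustration rather than the procedure used in the study.

```python
import time


def mean_prediction_time_us(model, samples, repeats=10):
    """Average per-sample prediction time (in microseconds) of a fitted model;
    `samples` is a sequence of 1-D NumPy feature vectors."""
    start = time.perf_counter()
    for _ in range(repeats):
        for x in samples:
            model.predict(x.reshape(1, -1))
    elapsed = time.perf_counter() - start
    return 1e6 * elapsed / (repeats * len(samples))
```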

5. Conclusions

A PCA and feature extraction-based algorithm for identifying suitable image features for detecting class samples in manufacturing is proposed herein. The algorithm can effectively and adaptively identify suitable image features when detecting different objects. It is effective for solving detection problems and outperforms single-feature-based methods, which might not be able to accurately extract features for identifying various patterns. The results demonstrate that the proposed algorithm is suitable as a tool for identifying image features in a detection system, and the four image features identified in this study can be effective for classification. The system adaptively obtains indicators with suitable image features to effectively detect patterns; the number of features could be reduced from 7 to 4 for EFD, from 6 to 4 for DWT, and from 7 to 3 for INV for detection. Using the suitable image features, the system achieved an average recognition rate of 96% in this study. The accuracy rates derived for existing single-feature extraction methods were 91%, 89%, 90%, and 85% for PCA/EFD, PCA/DWT, PCA/SWT, and PCA/INV, respectively. Accordingly, the proposed algorithm outperforms the existing methods with respect to detecting class samples in manufacturing.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The author has no conflicts of interest to declare regarding the publication of this paper.

Acknowledgments

This work was supported in part by a Grant (106-05-05002-03) from Shih Chien University.