Mathematical Problems in Engineering
Volume 2015, Article ID 638926, 12 pages
http://dx.doi.org/10.1155/2015/638926
Research Article

An Enhanced Artificial Bee Colony-Based Support Vector Machine for Image-Based Fault Detection

1College of Information Engineering, Taiyuan University of Technology, Taiyuan, Shanxi 030024, China
2Department of Mathematics and Computer Science, Virginia Wesleyan College, Norfolk, VA 23502, USA

Received 18 May 2015; Revised 7 August 2015; Accepted 23 August 2015

Academic Editor: Marco Mussetta

Copyright © 2015 Guijun Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Fault detection has become extremely important in industrial production, where it can prevent substantial losses caused by equipment failures. As a noncontact method, machine vision can satisfy the needs of real-time fault monitoring. However, image-based fault features often suffer from high dimensionality and redundant correlation. To optimize the feature subset and the SVM parameters simultaneously, this paper presents an enhanced artificial bee colony-based support vector machine (EABC-SVM) approach, applied to image-based fault detection for the conveyor belt. To improve the optimization capability of the original ABC, the EABC algorithm introduces two enhanced strategies: the Cat chaotic mapping initialization and current optimum based search equations. Several UCI datasets were used to evaluate the performance of EABC-SVM, and the experimental results show that this approach achieves better classification accuracy and convergence performance than ABC-SVM and other ABC-variant-based SVMs. Furthermore, EABC-SVM achieves a detection accuracy of 95% in conveyor belt fault detection while reducing the number of features by about 65%.

1. Introduction

In industrial production, fault detection has become extremely important for improving reliability and reducing the losses caused by equipment failures. For example, in modern coal mining enterprises, tear faults of the conveyor belt often occur due to the impact of falling sharp materials during operation [1]. If a fault can be detected early, damage to the transmission devices caused by belt breakdown can be minimized or even avoided. As a noncontact method, machine vision can capture rich image-based information about the monitored regions in real time through CCD cameras deployed on the equipment. Therefore, image-based fault detection can satisfy the needs of modern industrial production, including manufacturing processes [2], electrified railways [3], and defect detection [4].

Generally, the procedure of image-based fault detection consists of image acquisition, image preprocessing, feature extraction and analysis, and alarm control. After the images are acquired, the key step is feature extraction and analysis. The extracted features usually include colors, textures, shapes, points, and edges, which are combined into a multidimensional feature vector. However, existing methods cannot fully exploit this feature vector because of its complex characteristics, including high dimensionality and redundant correlation [5]. Feature selection is an effective approach that eliminates irrelevant or redundant features by picking a feature subset from the original features [6]. Feature-subset selection methods include forward selection, backward elimination, and bidirectional search [7]. These methods usually start from an initial subset (the empty set or the complete feature set) and generate a new subset by adding or discarding one feature at a time; the generated feature subset is then evaluated by classification accuracy. Thus, an efficient global search technique is needed for large feature spaces.

At present, some intelligent optimization algorithms have been proposed for feature selection, such as the genetic algorithm (GA) [8], particle swarm optimization (PSO) [9], ant colony optimization (ACO) [10], and the artificial bee colony (ABC) [6]. These methods can select informative or actually useful feature variables and improve the efficiency and accuracy of data analysis. Moreover, owing to its few control parameters and superior optimization performance, the ABC algorithm has attracted much attention [11, 12]. In [13], a hybrid approach based on the ABC algorithm and artificial neural networks (ANN) was presented to select feature subsets effectively. Schiezaro and Pedrini [6] proposed a feature selection method using the ABC algorithm to classify different UCI datasets, and the selected feature sets provided better classification accuracy. In [14], a feature selection technique based on the ABC and the k-Nearest Neighbor (k-NN) classifier was employed for image steganalysis. Compared with the ANN and k-NN classifiers, the support vector machine (SVM) is a more powerful classification method due to its excellent classification accuracy and generalization performance [15, 16].

On the other hand, some variants of the ABC algorithm [17] have been proposed to improve the global search performance and convergence speed, mainly focusing on the population initialization and the solution search strategy [18–24]. Zhu and Kwong [21] proposed the gbest-guided ABC (GABC) algorithm, which incorporates the information of the global best solution into the solution search equation to improve exploitation. Gao and Liu [22] presented a modified ABC (MABC) algorithm by introducing chaotic, opposition-based initialization and the best solution of the previous iteration. To improve exploitation while keeping the exploration of ABC, Zhang and Liu [19] proposed a novel ABC (NABC) algorithm that incorporates the global best solution and a random solution into the search equations of the onlookers and the employed bees, respectively. He et al. [20] introduced a modified ABC (SDABC) algorithm with improvements in three aspects: search-space-division initialization, a disruptive selection strategy, and an improved scout bee phase.

In this paper, an enhanced artificial bee colony-based support vector machine (EABC-SVM) classifier is proposed for image-based fault detection. To improve the convergence of ABC, two enhanced strategies, the Cat chaotic mapping initialization and the current optimum based search equations, are presented in the enhanced ABC (EABC) algorithm. The EABC algorithm is employed to optimize the feature subset and the parameters of the SVM. To assess the capability of EABC-SVM, six benchmark datasets from UCI are used first, and then EABC-SVM is applied to detect image-based tear faults of the conveyor belt.

This paper is organized as follows. Fundamentals about the ABC algorithm and SVM are introduced in Section 2. In Section 3, the proposed EABC-SVM classification method is given in detail. In Section 4, experimental results are presented to demonstrate the capability of the EABC-SVM using six benchmark datasets from UCI and show the detection performance applied to the image-based fault detection for conveyor belt. Finally, the conclusion is drawn in Section 5.

2. Fundamentals

This section briefly introduces the fundamentals of the artificial bee colony algorithm and the support vector machine.

2.1. Artificial Bee Colony (ABC) Algorithm

The artificial bee colony (ABC) algorithm [25] is an intelligent system inspired by the foraging behavior of a bee colony and widely used to solve continuous numerical optimization problems. In the ABC algorithm, the bee colony consists of three kinds of bees: employed bees, onlooker bees, and scout bees. During the optimization process, the position of a food source represents a possible solution to the optimization problem. The operation procedure of ABC is an iterative process, as shown in Algorithm 1, which repeats the searches for solutions with employed bees, onlooker bees, and scout bees until the maximum cycle number MAXcycles is reached or the allowable minimum error is attained [26, 27]. The details of the four phases are described as follows.

Algorithm 1: The pseudocode of ABC algorithm.

(1) The Initialization Phase. The population of the ABC algorithm is initialized randomly as
$$x_{ij} = x_j^{\min} + \operatorname{rand}(0,1)\,(x_j^{\max} - x_j^{\min}), \quad i = 1, \ldots, SN,\ j = 1, \ldots, D,$$ (1)
where $SN$ is the size of the population, $D$ is the dimension of each solution, $\operatorname{rand}(0,1)$ is a random number in $[0,1]$, and $x_j^{\min}$ and $x_j^{\max}$ are the lower and upper bounds of the $j$th optimization parameter, respectively.

After the population initialization, the food sources are evaluated by the fitness function; the greater the fitness value, the better the quality of the food source. For a maximization problem, the fitness function is the objective function $f$ itself. For a minimization problem, the fitness function is defined as
$$fit_i = \begin{cases} 1/(1+f_i), & f_i \ge 0, \\ 1 + |f_i|, & f_i < 0. \end{cases}$$ (2)
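The fitness transformation above can be transcribed directly in Python. This is an illustrative sketch, not the authors' MATLAB code:

```python
def abc_fitness(objective_value):
    """Map an objective value f_i to ABC fitness: smaller objective
    values (better minimization results) receive larger fitness."""
    if objective_value >= 0:
        return 1.0 / (1.0 + objective_value)
    return 1.0 + abs(objective_value)
```

Note that the mapping is monotone decreasing in the objective value, so greedy selection on fitness is equivalent to greedy selection on the objective for minimization.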

(2) The Employed Bee Phase. In this phase, each employed bee generates a new food source $v_i$ with the following search equation:
$$v_{ij} = x_{ij} + \phi_{ij}\,(x_{ij} - x_{kj}),$$ (3)
where $k \in \{1, 2, \ldots, SN\}$ with $k \ne i$, $j$ is a randomly chosen dimension in $\{1, 2, \ldots, D\}$, and $\phi_{ij}$ is a random number in $[-1, 1]$.

When all employed bees have completed the search for new food sources, the fitness values of the new food sources are calculated and compared with the old ones according to the greedy selection mechanism
$$x_i = \begin{cases} v_i, & fit(v_i) > fit(x_i), \\ x_i, & \text{otherwise.} \end{cases}$$ (4)

(3) The Onlooker Bee Phase. After all employed bees have completed the update process, they share information about the amounts and positions of the food sources with the onlooker bees. An onlooker bee evaluates all food sources from the employed bees and selects a good one to update based on the probability calculated with the roulette wheel selection scheme
$$p_i = \frac{fit_i}{\sum_{n=1}^{SN} fit_n},$$ (5)
where $fit_i$ is the fitness value of the solution $x_i$.
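The roulette wheel scheme above can be sketched with a cumulative-probability scan. The helper below is illustrative only and not part of the original implementation:

```python
import random

def selection_probabilities(fitness_values):
    """p_i = fit_i / sum of all fitness values (larger fitness, larger chance)."""
    total = sum(fitness_values)
    return [f / total for f in fitness_values]

def roulette_pick(fitness_values, rng=random):
    """Pick a food-source index with probability proportional to its fitness."""
    r, cum = rng.random(), 0.0
    for i, p in enumerate(selection_probabilities(fitness_values)):
        cum += p
        if r <= cum:
            return i
    return len(fitness_values) - 1  # guard against floating-point round-off
```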

(4) The Scout Bee Phase. If a food source cannot be further improved within a predetermined number of trials LIMIT, its employed bee is converted to a scout bee, which abandons the old food source and searches for a new one using (1).
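The four phases above can be assembled into a compact loop. The following is an illustrative Python sketch of a minimal ABC minimizer, not the authors' MATLAB implementation; it minimizes a toy sphere function, and the names `sn`, `max_cycles`, and `limit` correspond to SN, MAXcycles, and LIMIT:

```python
import random

def abc_minimize(f, bounds, sn=10, max_cycles=50, limit=10, seed=0):
    """Minimal ABC sketch: minimize f over the box bounds = [(lo, hi), ...]."""
    rng = random.Random(seed)
    d = len(bounds)

    def new_source():
        return [lo + rng.random() * (hi - lo) for lo, hi in bounds]

    def fitness(x):  # minimization: smaller objective -> larger fitness
        fx = f(x)
        return 1.0 / (1.0 + fx) if fx >= 0 else 1.0 + abs(fx)

    def neighbor(i):  # perturb one random dimension toward/away from a peer
        k = rng.choice([m for m in range(sn) if m != i])
        j = rng.randrange(d)
        v = list(pop[i])
        v[j] += rng.uniform(-1.0, 1.0) * (pop[i][j] - pop[k][j])
        lo, hi = bounds[j]
        v[j] = min(max(v[j], lo), hi)
        return v

    pop = [new_source() for _ in range(sn)]
    trials = [0] * sn
    best = min(pop, key=f)
    for _ in range(max_cycles):
        for i in range(sn):                      # employed bee phase
            v = neighbor(i)
            if fitness(v) > fitness(pop[i]):
                pop[i], trials[i] = v, 0
            else:
                trials[i] += 1
        fits = [fitness(x) for x in pop]         # onlooker bee phase
        for _ in range(sn):                      # fitness-proportional choice
            i = rng.choices(range(sn), weights=fits)[0]
            v = neighbor(i)
            if fitness(v) > fitness(pop[i]):
                pop[i], trials[i] = v, 0
            else:
                trials[i] += 1
        for i in range(sn):                      # scout bee phase
            if trials[i] > limit:
                pop[i], trials[i] = new_source(), 0
        best = min(best, min(pop, key=f), key=f)
    return best

sphere = lambda x: sum(t * t for t in x)
best = abc_minimize(sphere, [(-5.0, 5.0)] * 2)
```

As a design note, the onlooker weights are computed once per cycle for simplicity; recomputing them after each greedy replacement is an equally common variant.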

Despite this procedure, the ABC algorithm has some drawbacks. Firstly, due to the randomness of the initialization equation (1), the diversity of the initial population may be limited, which affects the convergence efficiency. Secondly, in the employed bee and onlooker bee phases, a new solution is generated around the current solution according to (3) by moving the current solution toward or away from another randomly chosen solution of the population. However, this generation process does not use the current optimal solution to guide the search, so the search strategy is considered to have good exploration but poor exploitation capacity [21]. To overcome these drawbacks, an enhanced ABC algorithm is proposed in detail in Section 3.1.

2.2. Support Vector Machine (SVM)

The SVM classifier is briefly described as follows. Given the training set $\{(x_i, y_i)\}_{i=1}^{l}$, where $x_i \in \mathbb{R}^n$ and $y_i \in \{-1, +1\}$, the goal is to find a separating hyperplane that classifies the training data into two categories accurately based on the principle of margin maximization [28]. The following optimization problem is constructed:
$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l}\xi_i \quad \text{s.t.}\ \ y_i\big(w^{T}\varphi(x_i) + b\big) \ge 1 - \xi_i,\ \ \xi_i \ge 0,$$ (6)
where $w$ and $b$ are the normal vector and the offset of the separating hyperplane, respectively; $C$ is the penalty parameter of the error term $\xi_i$; and $\varphi$ is the mapping function that maps a sample into a high-dimensional space. Equation (6) can be translated into the following Lagrange dual problem:
$$\max_{\alpha}\ \sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l}\alpha_i\alpha_j\,y_i y_j\,\varphi(x_i)^{T}\varphi(x_j) \quad \text{s.t.}\ \ \sum_{i=1}^{l}\alpha_i y_i = 0,\ \ 0 \le \alpha_i \le C.$$ (7)
The Lagrange multipliers $\alpha_i$ are obtained from (7), and then the classification decision function can be constructed as
$$f(x) = \operatorname{sign}\left(\sum_{i=1}^{l}\alpha_i\,y_i\,K(x_i, x) + b\right).$$ (8)

Generally, $K(x_i, x_j) = \varphi(x_i)^{T}\varphi(x_j)$ is defined as the kernel function. This paper mainly discusses the Gaussian kernel function because the Gaussian kernel can approximate most kernel functions if the kernel parameter is chosen appropriately [29]. Its form is
$$K(x_i, x_j) = \exp\!\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right),$$ (9)
where $\sigma$ is the kernel parameter.

The parameters of the SVM with the Gaussian kernel function are the penalty parameter $C$ and the kernel parameter $\sigma$, which must be chosen by the user. The optimal SVM parameters have an important influence on classification performance.
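For concreteness, the Gaussian kernel defined above can be evaluated in a few lines; this illustrative sketch uses the common form $\exp(-\|x-z\|^2 / (2\sigma^2))$:

```python
import math

def gaussian_kernel(x, z, sigma):
    """Gaussian (RBF) kernel: K(x, z) = exp(-||x - z||^2 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))
```

A larger $\sigma$ makes the kernel flatter (distant points remain similar), while a smaller $\sigma$ localizes it; this is exactly why tuning $\sigma$ jointly with $C$ matters for classification performance.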

3. Methodology

In this section, the enhanced artificial bee colony-based support vector machine (EABC-SVM) classifier is proposed, which can effectively improve classification accuracy by performing feature selection and parameter optimization for the SVM simultaneously.

3.1. Enhanced Artificial Bee Colony

To solve the above drawbacks of the ABC algorithm, two enhanced strategies are introduced as follows.

(1) The Cat Chaotic Mapping Based Initialization. Similar to other intelligent algorithms, the initial population plays an important role in the convergence of the ABC algorithm to the global optimal solution. As one of the useful initialization techniques, the chaotic mapping method can generate random sequences with ergodicity and nonperiodicity. Commonly used chaotic maps include the Logistic mapping function, the Tent mapping function, and the Cat mapping function. According to the analysis of the chaotic characteristics of these mapping functions [30], the Cat mapping function has better ergodic uniformity and does not easily fall into short periodic cycles compared with the other mapping functions, so this paper employs the Cat mapping function to initialize the population. The Cat mapping function is
$$x_{n+1} = (x_n + y_n) \bmod 1, \qquad y_{n+1} = (x_n + 2y_n) \bmod 1,$$ (10)
where $n$ is the iteration number of the Cat chaotic sequence, $x_n, y_n \in [0, 1)$, and mod is the modulus operator. The chaotic values generated by (10) replace the uniform random numbers in (1), so the new initialization equation of the population is
$$x_{ij} = x_j^{\min} + c_{ij}\,(x_j^{\max} - x_j^{\min}),$$ (11)
where $c_{ij}$ is the corresponding value of the Cat chaotic sequence.
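The Cat map iteration above can be generated in a few lines. The sketch below is illustrative; coupling one chaotic value per dimension into the initialization follows the common chaotic-initialization pattern and is an assumption where the printed equation was unclear:

```python
def cat_map_sequence(x0, y0, n):
    """Arnold Cat map on the unit square:
    x_{k+1} = (x_k + y_k) mod 1,  y_{k+1} = (x_k + 2*y_k) mod 1."""
    seq, x, y = [], x0, y0
    for _ in range(n):
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
        seq.append(x)
    return seq

def chaotic_init(bounds, chaos_values):
    """Map chaotic values in [0, 1) into the search box, one per dimension."""
    return [lo + c * (hi - lo) for (lo, hi), c in zip(bounds, chaos_values)]

seq = cat_map_sequence(0.1, 0.2, 50)
source = chaotic_init([(0.0, 10.0), (-5.0, 5.0)], seq[:2])
```

Unlike a pseudorandom generator, the sequence is fully deterministic for a given seed point yet spreads uniformly over the unit interval, which is the ergodicity property the paper relies on.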

(2) The Current Optimum Based Search Equations. To improve search efficiency, many studies have modified the search equation, most of them introducing the current optimal solution into the search process [21–23]. However, under the guidance of the current optimal solution, the search process may exhibit an "oscillation" phenomenon [24]. Therefore, to overcome this defect, we introduce two search equations for the employed bees and the onlooker bees, respectively, in (12) and (13). The equations take full advantage of the current optimal solution and the random solutions of the population, based on the characteristics of the different search phases:
$$v_{ij} = x_{best,j} + \phi_{ij}\,(x_{best,j} - x_{kj}),$$ (12)
$$v_{ij} = x_{r1,j} + \phi_{ij}\,(x_{r1,j} - x_{r2,j}),$$ (13)
where $x_{best}$ is the current optimal solution, the indices $r1$ and $r2$ (and likewise $k$) are random integers chosen from $\{1, 2, \ldots, SN\}$ that are different from the index $i$, and $\phi_{ij}$ is a random number in $[-1, 1]$.

The principle of the current optimum based search is shown in Figure 1. In the employed bee phase, each employed bee can find a better solution around the current optimal solution with (12). That is, all employed bees will move toward the space centered on $x_{best}$, shown as the dotted circle in Figure 1. In the onlooker bee phase, two different current solutions, $x_{r1}$ and $x_{r2}$, are randomly selected; they must be located in the dotted circle because they come from the employed bees. In this way, the onlooker bees can complete the search for the global optimal solution in a fairly small solution space with (13). Therefore, the different foraging processes of employed bees and onlooker bees can be simulated well with the two search equations. As a result, employed bees implement a directional search guided by the current optimal solution, and onlooker bees further complete a wide-range search in the confined solution space provided by the employed bees. A detailed convergence analysis of the EABC algorithm based on the Markov chain theory is presented in the Appendix.
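The two search moves can be sketched as follows. Because the printed equations were lost in this copy, the exact update forms below are reconstructions from the prose description (search around the current best for employed bees; recombination of two random solutions for onlookers) and should be treated as assumptions rather than the paper's definitive formulas:

```python
import random

def eabc_employed(pop, best, i, bounds, rng=random):
    """Employed bee move (Eq. (12) as reconstructed here): search around
    the current best, v_j = best_j + phi * (best_j - x_{k,j})."""
    k = rng.choice([m for m in range(len(pop)) if m != i])
    j = rng.randrange(len(best))
    v = list(pop[i])
    v[j] = best[j] + rng.uniform(-1.0, 1.0) * (best[j] - pop[k][j])
    lo, hi = bounds[j]
    v[j] = min(max(v[j], lo), hi)
    return v

def eabc_onlooker(pop, i, bounds, rng=random):
    """Onlooker bee move (Eq. (13) as reconstructed here): recombine two
    random solutions x_{r1}, x_{r2} already drawn toward the best region."""
    r1, r2 = rng.sample([m for m in range(len(pop)) if m != i], 2)
    j = rng.randrange(len(pop[i]))
    v = list(pop[i])
    v[j] = pop[r1][j] + rng.uniform(-1.0, 1.0) * (pop[r1][j] - pop[r2][j])
    lo, hi = bounds[j]
    v[j] = min(max(v[j], lo), hi)
    return v
```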

Figure 1: The principle of the current optimum based search.
3.2. EABC-SVM Classifier
3.2.1. Classifier Initialization

The classifier initialization involves two parts: the optimization parameters and the fitness function. The optimization parameters are expressed as the combination of SVM parameters with the Gaussian kernel function and the selection probability on each feature, as shown in Figure 2.

Figure 2: Optimization parameters.

The fitness function is used to evaluate the selected feature subsets and SVM parameters. Two factors are usually considered: the classification accuracy and the number of selected features [8]. The fitness function is defined in (14); when the classification accuracy is high and the number of selected features is small, the fitness value is large:
$$fitness = w_a \cdot Acc + (1 - w_a)\left(1 - \frac{\sum_{j=1}^{n} s_j}{n}\right),$$ (14)
where $w_a$ is a predefined weight, $Acc$ is the classification accuracy, $n$ is the total number of features, and $s_j = 1$ indicates that feature $j$ is selected while $s_j = 0$ indicates that it is not. The weight $w_a$ can be adjusted to balance classification accuracy against feature selection; in general, $w_a$ can be chosen from 0.7 to 0.9 depending on the dataset.
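A direct transcription of this weighted fitness follows; the symbol name `w_a` and the 0/1 selection mask mirror the description above, and the sketch is illustrative only:

```python
def eabc_svm_fitness(accuracy, selected_mask, w_a=0.8):
    """Weighted fitness: reward accuracy, penalize feature-subset size.
    fitness = w_a * Acc + (1 - w_a) * (1 - n_selected / n_total)."""
    n = len(selected_mask)
    n_selected = sum(selected_mask)
    return w_a * accuracy + (1.0 - w_a) * (1.0 - n_selected / n)
```

With `w_a = 0.8`, an extra percentage point of accuracy outweighs discarding one feature in most subset sizes, which matches the paper's emphasis on accuracy first, compactness second.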

3.2.2. Classifier Architecture

The architecture of the proposed EABC-SVM classifier is shown in Figure 3. The main implemented steps can be described as follows.

Figure 3: The architecture of proposed EABC-SVM system.

Step 1 (preparing the dataset). The feature dataset discards the feature elements whose selection probability is 0 and becomes the actually used feature subset. The feature subset is then divided into a training set and a testing set. Meanwhile, the training set is used to train the SVM model with k-fold cross validation (CV).
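The k-fold split used in Step 1 can be sketched with plain index bookkeeping (illustrative; the paper itself relied on MATLAB/LIBSVM tooling):

```python
def k_fold_indices(n_samples, k=10):
    """Split sample indices into k disjoint folds; yield (train, test)
    index lists, one pair per fold held out as the test set."""
    folds = [[] for _ in range(k)]
    for idx in range(n_samples):
        folds[idx % k].append(idx)
    splits = []
    for t in range(k):
        test = folds[t]
        train = [i for f in range(k) if f != t for i in folds[f]]
        splits.append((train, test))
    return splits
```

In practice the indices would be shuffled first so that class order in the file does not bias the folds; round-robin assignment is kept here for determinism.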

Step 2 (training the classifier). The SVM classifier is trained with the training set and the initialized parameters. The candidate classifier parameters and feature subset are evaluated with the fitness function.

Step 3 (testing the classifier). The testing set is used to verify the classification performance of the trained SVM classifier and the selected feature subset. Finally, the output SVM classifier and the feature selection rule are obtained.

4. Experimental Results and Analysis

The numerical experiments were performed on a PC with an Intel Pentium(R) G630 2.7 GHz CPU, 2 GB of RAM, and the 32-bit Windows 7 operating system. The development environment was MATLAB R2010a, and the SVM toolbox was LIBSVM [31]. In the experiments, the SVM classifier with the Gaussian kernel function was used, with predefined search ranges for the penalty parameter $C$ and the kernel parameter $\sigma$.

4.1. UCI Datasets

To examine the capability of the proposed EABC-SVM, we used UCI benchmark datasets [32] for classification and compared the results with the original ABC and four other ABC variants: GABC [21], MABC [22], NABC [19], and SDABC [20]. For the ABC, GABC, MABC, NABC, SDABC, and EABC algorithms, the common parameters were set as follows: equal numbers of employed and onlooker bees, the same maximum number of iterations, and the control parameter LIMIT = 10; the additional control parameters of the GABC and MABC algorithms were set as recommended in [21] and [22], respectively. The experiment adopted six real-world datasets: Breast Cancer Wisconsin (WDBC), Ionosphere, Musk1, Sonar, Vehicle, and Wine. Their numbers of classes, instances, and features (dimensions) are shown in Table 1.

Table 1: UCI datasets.

For the above six datasets, each feature value was linearly scaled to a common interval. In the experiment, the 10-fold cross validation method was used to divide each dataset into a training set and a testing set. Each experiment was run 20 times, and the average accuracy was reported.
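The per-feature scaling step can be sketched as follows. The exact target interval in the original text was lost, so $[0, 1]$ is assumed here as the common choice:

```python
def min_max_scale(column):
    """Linearly scale one feature column to [0, 1]; a constant column
    (zero range) maps to all zeros to avoid division by zero."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]
```

Scaling every feature to the same range prevents large-magnitude features from dominating the Gaussian kernel's distance computation.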

Table 2 shows the optimal accuracy (Opt Acc), average accuracy (Ave Acc), standard deviation of accuracy (Acc SD), $p$ value of the $t$-test, and selected feature dimensions (average ± standard deviation) for each dataset, comparing ABC-SVM and EABC-SVM. The EABC-SVM method yields higher classification accuracy on all six datasets, especially on Musk1 and Vehicle. Meanwhile, the standard deviation of accuracy of EABC-SVM is smaller than that of ABC-SVM on each dataset, which means that EABC-SVM is more stable. Moreover, an independent-sample $t$-test was performed to verify whether the differences are significant. According to the $p$ values, the proposed EABC-SVM method outperforms ABC-SVM on all six datasets at the 95% confidence level. In addition, EABC-SVM selects fewer features with smaller standard deviation than ABC-SVM. With fewer features and higher accuracy, EABC-SVM is capable of selecting more appropriate features through its global search.

Table 2: Experimental results for EABC-SVM and ABC-SVM method.

To illustrate the convergence of EABC-SVM, the convergence curves are shown in Figure 4. From the figure, we see that EABC-SVM reaches the optimal solution with fewer iterations than ABC-SVM on all six datasets. Meanwhile, for each dataset, the initial solution of EABC-SVM is superior to that of ABC-SVM, which further verifies the ergodicity of the Cat chaotic mapping function.

Figure 4: Convergence curves for ABC-SVM and EABC-SVM method.

Tables 3 and 4 show the experimental comparisons, including classification accuracy and selected feature dimensions (average ± standard deviation), of the proposed EABC-SVM with the other ABC-variant SVMs. From Table 3, we find that EABC-SVM yields higher classification accuracy on all six datasets than the four other ABC-variant SVMs. For feature selection, as shown in Table 4, EABC-SVM produces a moderate-sized feature subset, while NABC-SVM retains fewer features on five datasets. Thus, the proposed method performs better feature selection with high classification accuracy, which makes it suitable for redundant feature reduction and classification applications such as image-based datasets. In particular, when the number of redundant feature dimensions is very large, the reduction effect is more apparent and the classification accuracy improves greatly.

Table 3: Classification accuracy of the proposed EABC-SVM and other ABC variants-SVM.
Table 4: Selected feature dimensions of the proposed EABC-SVM and other ABC variants-SVM.
4.2. Image-Based Conveyor Belt Fault Dataset

In this section, we use EABC-SVM to implement feature selection and parameter optimization for the SVM, and the fault detection of the conveyor belt is then used for empirical analysis. Firstly, a visual monitoring system for the conveyor belt was established to acquire the image-based belt fault signal and create a fault feature dataset. The schematic diagram of the monitoring system is shown in Figure 5. A series of CCD cameras and light sources were installed at the bottom of the conveyor belt for image acquisition, and the images were then transmitted to an industrial computer for online fault detection. According to the failure analysis of conveyor belts in [1], the fault dataset is classified into four types: normal, rope fracture, scratches, and tear, as shown in Figure 6.

Figure 5: Schematic diagram of monitoring system.
Figure 6: Fault types of conveyor belt.

The dataset comprises 180 gray-level belt images of size 640 × 480, with 45 images per class. Due to the irregular shape of scratches and tears, we extracted gray histogram features and texture features as the basis of fault detection. Figure 7 illustrates the flowchart of image preprocessing, feature extraction, and analysis. First, the belt image was enhanced with a Gaussian filter, and then the belt fault features were extracted as follows.
(i) Gray Histogram. The tearing part of the belt has a much lower gray value than the background, so we chose the first 30 grayscale histogram values as features. Figure 8 shows the grayscale histogram for each image of Figure 6; the first 30 grayscale values differ noticeably across the different fault images.
(ii) Texture Features. As a good texture descriptor, the gray level cooccurrence matrix (GLCM) can effectively reflect grayscale differences over direction and adjacent interval. As described in [33], the GLCM in this paper was calculated in four directions (0°, 45°, 90°, and 135°) with a small interpixel distance to improve computational efficiency. The corresponding energy, entropy, contrast, and correlation were then selected to describe the belt texture.
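The GLCM computation and three of the four texture descriptors can be sketched in a few lines of Python. This is illustrative only: correlation is omitted for brevity, and the offset and gray-level parameters are assumptions rather than the paper's exact settings:

```python
import math

def glcm(image, dx, dy, levels):
    """Normalized gray level cooccurrence matrix for pixel offset (dx, dy)."""
    p = [[0.0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    count = 0
    for r in range(h):
        for c in range(w):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < h and 0 <= c2 < w:
                p[image[r][c]][image[r2][c2]] += 1
                count += 1
    return [[v / count for v in row] for row in p]

def glcm_features(p):
    """Energy, entropy, and contrast of a normalized GLCM."""
    energy = sum(v * v for row in p for v in row)
    entropy = -sum(v * math.log2(v) for row in p for v in row if v > 0)
    contrast = sum((i - j) ** 2 * p[i][j]
                   for i in range(len(p)) for j in range(len(p)))
    return energy, entropy, contrast
```

Offsets of (1, 0), (1, -1), (0, 1), and (1, 1) would correspond to the 0°, 45°, 90°, and 135° directions mentioned above.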

Figure 7: Flowchart of feature extraction and analysis.
Figure 8: Grayscale histogram for each image of Figure 6.

Through the above feature extraction process, the size of the feature vector is 46 (30 gray histogram values plus 16 GLCM texture features). Next, the EABC-SVM method was employed to reduce the number of features and detect the fault types. The 180 samples were divided randomly into a training set and a testing set, and the 5-fold cross validation method was used in the training phase. Table 5 gives the final results of EABC-SVM and the other ABC-related methods for feature selection and classification accuracy. The results in Table 5 show that EABC-SVM achieves a high classification accuracy of 95.0%, with MABC-SVM yielding the same accuracy. Moreover, the proposed EABC-SVM selected merely 16 features, so the retained features cost only 34.78% (16/46) of the memory needed for the original 46 features. The selected feature variables are 4, 8, 13, 15, 21, 22, 25, 32, 34, 37, 38, 41, 43, 44, 45, and 46, of which 7 belong to the gray histogram and 9 to the GLCM texture features.

Table 5: Fault detection results of EABC-SVM and other ABC-related methods.

The confusion matrix of fault detection for EABC-SVM is shown in Figure 9. Each row represents the predicted class (output), and each column represents the actual class (target); the number in the $i$th row and $j$th column is the rate of samples whose target is the $j$th class but which are classified as the $i$th class. From Figure 9, we find that the rope fracture and tear faults are detected with 100% accuracy, while the normal and scratch classes are easily confused with each other. Because of nonuniform illumination and dust in the real environment, the detection results of these two classes are affected to different degrees, so how to recognize them effectively remains one of our future tasks.
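The column-normalized confusion matrix described above can be computed as follows (an illustrative sketch; the class labels are hypothetical):

```python
def confusion_rates(targets, outputs, classes):
    """Row = predicted class, column = actual class; each column is
    normalized by the number of samples in that actual class."""
    idx = {c: i for i, c in enumerate(classes)}
    n = len(classes)
    counts = [[0] * n for _ in range(n)]
    for t, o in zip(targets, outputs):
        counts[idx[o]][idx[t]] += 1
    col_totals = [sum(counts[r][c] for r in range(n)) for c in range(n)]
    return [[counts[r][c] / col_totals[c] if col_totals[c] else 0.0
             for c in range(n)] for r in range(n)]
```

Because each column is normalized by its class count, the diagonal entry of a column is exactly the per-class recall, and each column sums to 1.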

Figure 9: Confusion matrix of EABC-SVM for fault detection.

5. Conclusions

In this paper, we proposed an enhanced ABC (EABC) algorithm to search for the optimal feature subset and parameters of the SVM simultaneously. The experimental results demonstrate that the proposed EABC-SVM approach has better classification accuracy and convergence performance than ABC-SVM and four other ABC-variant SVMs on six UCI datasets. Furthermore, the EABC-SVM approach was applied to image-based fault detection for the conveyor belt. Through the combination of gray histogram and GLCM texture features, EABC-SVM achieves a classification accuracy of 95% while reducing the required feature storage by about 65%.

The proposed EABC algorithm has the following characteristics: (1) the algorithm possesses two enhanced strategies including the Cat chaotic mapping initialization and current optimum based search equations to improve the convergence speed and global optimization performance; (2) the algorithm is employed to optimize the feature subset and parameters for SVM simultaneously, which can improve overall classification accuracy and reduce the computation complexity.

Further research plans are to (1) reduce belt image noise and optimize the fault features to increase detection accuracy and robustness and (2) apply the proposed EABC-SVM approach to other classification problems with large numbers of feature variables to test and extend it.

Appendix

Convergence Analysis of the EABC Algorithm

According to the enhanced strategies described in the paper, the convergence of the EABC algorithm will be analyzed based on the Markov chain theory. Herein, we will theoretically prove that the EABC algorithm can converge in probability to the global optimum.

Definition A.1 (convergence in probability [34]). Let $\{X(t)\}_{t \ge 0}$ be a population sequence generated by a population-based stochastic algorithm; the stochastic sequence weakly converges in probability to the global optimum if and only if
$$\lim_{t \to \infty} P\{X(t) \cap G^* \neq \emptyset\} = 1,$$ (A.1)
where $G^*$ is the set of the global optima of the optimization problem. Convergence in probability of a population sequence means that the probability that the population $X(t)$ contains a global optimum approaches 1 as the iteration number $t$ increases. The convergence of the EABC algorithm is proved in Lemma A.2 as follows.

Lemma A.2. Suppose that $\{X(t)\}$ is the population sequence generated by the EABC algorithm; then (i) the sequence $\{X(t)\}$ is a finite homogeneous Markov chain on its state space; (ii) the sequence $\{X(t)\}$ converges in probability to the global optimum.

Proof. (i) Let $X(t)$ denote the population (i.e., the food source positions) generated by the EABC algorithm in the $t$th iteration, and let $SN$ be the population size. The dimension of each solution is finite in the optimization problem, with value $D$, so the state space of $X(t)$ is finite in each iteration. The search equations for new food sources in the EABC algorithm are independent of the iteration time and depend only on the current state ($x_{best}$, $x_{r1}$, or $x_{r2}$) through (12) and (13). Thus, the stochastic sequence $\{X(t)\}$ is a finite homogeneous Markov chain.
(ii) Since the best food source is retained in each iteration of the EABC algorithm, the optimal fitness sequence $fit(x_{best}(t))$, $t = 0, 1, 2, \ldots$, is nondecreasing:
$$fit(x_{best}(t+1)) \ge fit(x_{best}(t)).$$ (A.2)
Let $G^*$ denote the set of the global optima; the sequence of optimal populations is then $\{X(t) \mid X(t) \cap G^* \neq \emptyset\}$.
Let $p_{ij}$ denote the transition probability from state $i$ to state $j$ in one iteration step and $p_i(t)$ the probability of $X(t)$ being in state $i$; then, by the Markov chain theory, the probability of $X(t+1)$ being in state $j$ is $p_j(t+1) = \sum_i p_i(t)\,p_{ij}$.
Define $P_G(t) = \sum_{i \in G^*} p_i(t)$; then $P_G(t+1) = \sum_{j \in G^*} \sum_{i \in G^*} p_i(t)\,p_{ij} + \sum_{j \in G^*} \sum_{i \notin G^*} p_i(t)\,p_{ij}$.
Suppose $\sum_{j \notin G^*} p_{ij} > 0$ for some state $i \in G^*$; then the population could leave the set of global optima with positive probability, which contradicts the retention property in (A.2).
In this case, $\sum_{j \in G^*} p_{ij} = 1$ for every $i \in G^*$; that is, $\sum_{j \in G^*} \sum_{i \in G^*} p_i(t)\,p_{ij} = P_G(t)$.
For $i \notin G^*$, the random searches of the scout bee phase give $\sum_{j \in G^*} p_{ij} > 0$, so the second term is strictly positive whenever $P_G(t) < 1$; that is, $P_G(t+1) \ge P_G(t)$, with strict increase until $P_G(t) = 1$.
We therefore get $\lim_{t \to \infty} P_G(t) = \lim_{t \to \infty} P\{X(t) \cap G^* \neq \emptyset\} = 1$.
Together with (A.2), the EABC algorithm converges in probability to the global optimum.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants nos. 61072087 and 61371193), the Characteristic Discipline Fund of Shanxi Province, China (Grant no. 80010302010053), the Major Science and Technology Program of Shanxi Province, China (Grant no. 20121101004), and the Key Technologies R & D Program of Shanxi Province, China (Grant no. 20130321004-01).

References

  1. G. Fedorko, V. Molnar, D. Marasova et al., “Failure analysis of belt conveyor damage caused by the falling material. Part II: application of computer metrotomography,” Engineering Failure Analysis, vol. 34, pp. 431–442, 2013.
  2. F. M. Megahed and J. A. Camelio, “Real-time fault detection in manufacturing environments using face recognition techniques,” Journal of Intelligent Manufacturing, vol. 23, no. 3, pp. 393–408, 2012.
  3. I. Aydin, M. Karaköse, and E. Akin, “A new contactless fault diagnosis approach for pantograph-catenary system using pattern recognition and image processing methods,” Advances in Electrical and Computer Engineering, vol. 14, no. 3, pp. 79–88, 2014.
  4. Y. Yang, C. Miao, X. Li, and X. Mei, “On-line conveyor belts inspection based on machine vision,” Optik, vol. 125, no. 19, pp. 5803–5807, 2014.
  5. H. Yan, K. Paynabar, and J. Shi, “Image-based process monitoring using low-rank tensor decomposition,” IEEE Transactions on Automation Science and Engineering, vol. 12, no. 1, pp. 216–227, 2015.
  6. M. Schiezaro and H. Pedrini, “Data feature selection based on artificial bee colony algorithm,” EURASIP Journal on Image and Video Processing, vol. 2013, no. 1, article 47, 8 pages, 2013.
  7. M. Reif and F. Shafait, “Efficient feature size reduction via predictive forward selection,” Pattern Recognition, vol. 47, no. 4, pp. 1664–1673, 2014.
  8. M. Zhao, C. Fu, L. Ji, K. Tang, and M. Zhou, “Feature selection and parameter optimization for support vector machines: a new approach based on genetic algorithm with feature chromosomes,” Expert Systems with Applications, vol. 38, no. 5, pp. 5197–5204, 2011.
  9. B. Xue, M. Zhang, and W. N. Browne, “Particle swarm optimisation for feature selection in classification: novel initialisation and updating mechanisms,” Applied Soft Computing, vol. 18, pp. 261–276, 2014.
  10. B. Chen, L. Chen, and Y. Chen, “Efficient ant colony optimization for image feature selection,” Signal Processing, vol. 93, no. 6, pp. 1566–1576, 2013.
  11. B. Akay and D. Karaboga, “A survey on the applications of artificial bee colony in signal, image, and video processing,” Signal, Image and Video Processing, vol. 9, no. 4, pp. 967–990, 2015.
  12. D. L. Jia, X. T. Duan, and M. K. Khan, “Modified artificial bee colony optimization with block perturbation strategy,” Engineering Optimization, vol. 47, no. 5, pp. 642–655, 2014.
  13. M. Shokouhifar and S. Sabet, “A hybrid approach for effective feature selection using neural networks and artificial bee colony optimization,” in Proceedings of the 3rd International Conference on Machine Vision (ICMV '10), pp. 502–506, December 2010.
  14. F. G. Mohammadi and M. S. Abadeh, “A new metaheuristic feature subset selection approach for image steganalysis,” Journal of Intelligent and Fuzzy Systems, vol. 27, no. 3, pp. 1445–1455, 2014.
  15. X. Zhang, X. Liu, and Z. J. Wang, “Evaluation of a set of new ORF kernel functions of SVM for speech recognition,” Engineering Applications of Artificial Intelligence, vol. 26, no. 10, pp. 2574–2580, 2013.
  16. A. Moosavian, H. Ahmadi, and A. Tabatabaeefar, “Fault diagnosis of main engine journal bearing based on vibration analysis using Fisher linear discriminant, K-nearest neighbor and support vector machine,” Journal of Vibroengineering, vol. 14, no. 2, pp. 894–906, 2012.
  17. D. Karaboga, B. Gorkemli, C. Ozturk, and N. Karaboga, “A comprehensive survey: artificial bee colony (ABC) algorithm and applications,” Artificial Intelligence Review, vol. 42, no. 1, pp. 21–57, 2014. View at Publisher · View at Google Scholar · View at Scopus
  18. H. Duan and Q. Luo, “New progresses in swarm intelligence-based computation,” International Journal of Bio-Inspired Computation, vol. 7, no. 1, pp. 26–35, 2015. View at Publisher · View at Google Scholar
  19. S. Zhang and S. Liu, “A novel artificial bee colony algorithm for function optimization,” Mathematical Problems in Engineering, vol. 2015, Article ID 129271, 10 pages, 2015. View at Publisher · View at Google Scholar
  20. Z.-A. He, C. Ma, X. Wang et al., “A modified artificial bee colony algorithm based on search space division and disruptive selection strategy,” Mathematical Problems in Engineering, vol. 2014, Article ID 432654, 14 pages, 2014. View at Publisher · View at Google Scholar · View at Scopus
  21. G. Zhu and S. Kwong, “Gbest-guided artificial bee colony algorithm for numerical function optimization,” Applied Mathematics and Computation, vol. 217, no. 7, pp. 3166–3173, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet · View at Scopus
  22. W.-F. Gao and S.-Y. Liu, “A modified artificial bee colony algorithm,” Computers & Operations Research, vol. 39, no. 3, pp. 687–697, 2012. View at Publisher · View at Google Scholar · View at Scopus
  23. A. Banharnsakun, T. Achalakul, and B. Sirinaovakul, “The best-so-far selection in artificial bee colony algorithm,” Applied Soft Computing Journal, vol. 11, no. 2, pp. 2888–2901, 2011. View at Publisher · View at Google Scholar · View at Scopus
  24. W.-F. Gao, S.-Y. Liu, and L.-L. Huang, “A novel artificial bee colony algorithm based on modified search equation and orthogonal learning,” IEEE Transactions on Cybernetics, vol. 43, no. 3, pp. 1011–1024, 2013. View at Publisher · View at Google Scholar · View at Scopus
  25. D. Karaboga, “An idea based on honey bee swarm for numerical optimization,” Tech. Rep. tr06, Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey, 2005. View at Google Scholar
  26. D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing Journal, vol. 8, no. 1, pp. 687–697, 2008. View at Publisher · View at Google Scholar · View at Scopus
  27. B. Akay and D. Karaboga, “A modified artificial bee colony algorithm for real-parameter optimization,” Information Sciences, vol. 192, pp. 120–142, 2012. View at Publisher · View at Google Scholar · View at Scopus
  28. C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, 1995. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus
  29. S. S. Keerthi and C.-J. Lin, “Asymptotic behaviors of support vector machines with gaussian kernel,” Neural Computation, vol. 15, no. 7, pp. 1667–1689, 2003. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at Scopus
  30. M.-W. Li, W.-C. Hong, and H.-G. Kang, “Urban traffic flow forecasting using Gauss-SVR with cat mapping, cloud model and PSO hybrid algorithm,” NeuroComputing, vol. 99, pp. 230–240, 2013. View at Publisher · View at Google Scholar · View at Scopus
  31. C. C. Chang and C. J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, article 27, 2011. View at Publisher · View at Google Scholar
  32. C. L. Blake and C. J. Merz, UCI Repository of Machine Learning Databases, vol. 55, Department of Information and Computer Science, University of California, Irvine, Calif, USA, 1998, http://www.ics.uci.edu/~mlearn/MLRepository.html.
  33. J. L. Raheja, S. Kumar, and A. Chaudhary, “Fabric defect detection based on GLCM and Gabor filter: a comparison,” Optik International Journal for Light & Electron Optics, vol. 124, no. 23, pp. 6469–6474, 2013. View at Publisher · View at Google Scholar · View at Scopus
  34. Z. Hu, S. Xiong, Q. Su, and Z. Fang, “Finite Markov chain analysis of classical differential evolution algorithm,” Journal of Computational & Applied Mathematics, vol. 268, pp. 121–134, 2014. View at Publisher · View at Google Scholar · View at Scopus